Compare commits


35 Commits

Author SHA1 Message Date
Michael Roth
785adb09b9 update VERSION for v1.1.1
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-07-12 13:36:53 -05:00
Michael Roth
f52d0d639e Merge remote-tracking branch 'agraf/s390-for-upstream-1.1' into HEAD 2012-07-10 14:08:37 -05:00
Alexander Graf
4082e889ee s390x: fix s390 virtio aliases
Some of the virtio devices have the same frontend name, but actually
implement different devices behind the scenes through aliases.

The indicator for which device type to use is the architecture. On s390, we
want s390 virtio devices. On everything else, we want PCI devices.

Reflect this in the alias selection code. This way, a command like
-device virtio-blk on s390x selects the correct virtio-blk-s390 device
rather than virtio-blk-pci.
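
A minimal sketch of what architecture-qualified alias selection looks like
(the table layout is illustrative, not the actual qdev code; the QEMU_ARCH_*
masks are the existing arch flags):

static const struct {
    const char *typename;   /* real device behind the alias */
    const char *alias;      /* name the user passes to -device */
    unsigned arch_mask;     /* architectures this alias applies to */
} alias_table_sketch[] = {
    { "virtio-blk-pci",  "virtio-blk", QEMU_ARCH_ALL & ~QEMU_ARCH_S390X },
    { "virtio-blk-s390", "virtio-blk", QEMU_ARCH_S390X },
};

Lookup then skips entries whose arch_mask does not include the running target,
so -device virtio-blk resolves to virtio-blk-s390 on s390x and to
virtio-blk-pci elsewhere.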

Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2012-07-10 18:29:29 +02:00
Jason Wang
b7093f294c rtl8139: validate rx ring before receiving packets
Commit ff71f2e8ca prevented a possible
crash during initialization of the Linux driver by checking the operating
mode. This seems too strict because:

- the real card can still work in a mode other than normal
- some buggy drivers do not set the correct opmode after eeprom
  access

So, considering that the rx ring address is reset to zero (which can be
safely treated as an address not intended for DMA), we can forbid packet
receiving while the rx ring address is zero. This lets old guests keep
working and prevents unexpected DMA into the guest.
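
A minimal sketch of the guard described above (RxBuf matches the device state
field; the hook itself is illustrative):

static int rtl8139_can_receive_rxring_sketch(RTL8139State *s)
{
    if (s->RxBuf == 0) {
        /* the guest has not programmed an RX ring yet; refuse packets
         * rather than DMA into address zero */
        return 0;
    }
    return 1;
}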

Tested-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit fcce6fd25f)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-29 10:22:30 -05:00
Daniel Verkamp
cd63a77e99 ahci: SATA FIS is 20 bytes, not 0x20
Per the SATA and AHCI specifications, a FIS is 5 Dwords of 4 bytes
each, which comes to 20 bytes (decimal), not 0x20.
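
The arithmetic in miniature (the constant name is hypothetical):

/* 5 Dwords x 4 bytes = 20 bytes (0x14), not 0x20 (32 decimal). */
#define SATA_FIS_BYTES (5 * 4)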

Signed-off-by: Daniel Verkamp <daniel@drv.nu>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 4bb9c939a5)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 10:58:24 -05:00
Stefan Hajnoczi
8456852657 qemu-img: document qed format on qemu-img man page
The qemu-img.1 man page is missing the qed format from its list of
supported formats.  Document the image creation options for qed.

Suggested-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit f085800e24)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 09:04:33 -05:00
Stefan Weil
7d440f20bd virtio: Fix compiler warning for non Linux hosts
The local variables ret and i are only used when __linux__ is defined.
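
The usual shape of such a fix is to scope the declarations together with the
code that uses them, roughly:

#ifdef __linux__
    int ret, i;   /* only declared where they are actually used */
    /* ... Linux-only ioctl handling that assigns ret and iterates with i ... */
#endif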

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 47ce9ef7f8)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 09:04:04 -05:00
MORITA Kazutaka
feba8ae20b sheepdog: fix return value of do_load_save_vm_state
bdrv_save_vmstate and bdrv_load_vmstate should return the vmstate size
on success, and -errno on error.

Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 6f3c714eb7)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 09:03:33 -05:00
Jan Beulich
c9c2479289 qemu/xendisk: set maximum number of grants to be used
Legacy (non-pvops) gntdev drivers may require this to be done when the
number of grants intended to be used simultaneously exceeds a certain
driver-specific default limit.
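
A sketch of the idea, assuming the libxc call shown here and illustrative
field names:

uint32_t max_grants = BLKIF_MAX_SEGMENTS_PER_REQUEST * max_requests;
if (xc_gnttab_set_max_grants(blkdev->xendev.gnttabdev, max_grants) < 0) {
    /* older gntdev drivers reject the call; continue with their default limit */
}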

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
(cherry picked from commit 64c27e5b1f)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 09:00:42 -05:00
Bruce Rogers
4c45bf61d3 build: install qmp-commands.txt
File is targeted for install, but is never installed.

Signed-off-by: Bruce Rogers <brogers@suse.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
(cherry picked from commit 0cd23fcc0a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:55:50 -05:00
Pavel Hrdina
70d582074f fdc: fix implied seek while there is no media in drive
Windows issues a 'READ' command at the start of an installation
without checking the 'dir' register. We have to abort the transfer
with an abnormal termination if there is no media in the drive.
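
A minimal sketch of the intended behaviour (the helper names and the status
constant are illustrative):

static void fdc_start_read_sketch(FDCtrl *fdctrl, FDrive *drv)
{
    if (!drive_has_media(drv)) {
        /* abnormal termination instead of an implied seek on an empty drive */
        stop_transfer(fdctrl, SR0_ABNTERM);
        return;
    }
    start_transfer(fdctrl, drv);
}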

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit c52acf60b6)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:55:44 -05:00
Stefan Hajnoczi
0da4c07322 qcow2: fix autoclear image header update
The autoclear feature bits can be used for qcow2 file format features
that are safe to "drop" by old programs that do not understand the
feature.  Upon opening the image file, unknown autoclear feature bits are
cleared and the image file header is rewritten, but this was happening
too early in the code when critical header fields were not yet loaded.

Process autoclear feature bits after all necessary header information
has been loaded.
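
A sketch of the corrected open-time ordering (the helper names are
illustrative; autoclear_features is the real header field):

read_header(bs, &header);
load_cluster_size_and_tables(bs, &header);      /* critical fields first */

/* only now is it safe to drop unknown autoclear bits and rewrite the header */
if (header.autoclear_features & ~SUPPORTED_AUTOCLEAR_MASK) {
    header.autoclear_features &= SUPPORTED_AUTOCLEAR_MASK;
    rewrite_header(bs, &header);
}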

Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit af7b708db2)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:55:38 -05:00
Pavel Dovgaluk
ee7735fa63 Prevent disk data loss when closing qemu
Prevent disk data loss when closing qemu console window
under Windows 7.

v3: Comment for the Sleep() parameter was updated.

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@gmail.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit b75a02829d)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:55:33 -05:00
Zhi Yong Wu
02fe741375 qcow2: fix endianness conversion
Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Reviewed-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 87267753a3)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:55:25 -05:00
Jason Baron
dbe4ac16bb pci_bridge_dev: fix error path in pci_bridge_dev_initfn()
Currently, we do not clean up properly if pci_bridge_dev_initfn()
fails. Make sure to call pci_bridge_exitfn()
in the error path.

Signed-off-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 80aa796bf3)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:55:19 -05:00
Jason Baron
f63e60327b qdev: release parent properties on dc->init failure
While looking into hot-plugging bridges, I can create a qemu segfault via:

$ device_add pci-bridge

Bridge chassis not specified. Each bridge is required to be assigned a unique chassis id > 0.
**
ERROR:qom/object.c:389:object_delete: assertion failed: (obj->ref == 0)

I'm proposing to fix this by adding a call to 'object_unparent()' before the
call to qdev_free(). I see there is already a precedent for this usage pattern as
seen in qdev_simple_unplug_cb():

/* can be used as ->unplug() callback for the simple cases */
int qdev_simple_unplug_cb(DeviceState *dev)
{
    /* just zap it */
    object_unparent(OBJECT(dev));
    qdev_free(dev);
    return 0;
}
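
Applied to the failing init path, the proposed fix looks roughly like this:

if (dc->init(dev) < 0) {
    object_unparent(OBJECT(dev));   /* drop the parent's reference first... */
    qdev_free(dev);                 /* ...so the refcount assertion holds */
    return -1;
}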

Signed-off-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 266ca11a04)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:55:12 -05:00
Jan Kiszka
0ec3907571 intel-hda: Fix reset of MSI function
Call msi_reset on device reset as still required by the core.
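
The shape of the change, as a sketch:

static void intel_hda_reset_sketch(PCIDevice *pci_dev)
{
    /* ... reset the device-specific register state ... */
    msi_reset(pci_dev);   /* the MSI core expects this on every device reset */
}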

CC: Gerd Hoffmann <kraxel@redhat.com>
CC: qemu-stable@nongnu.org
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 8e729e3b52)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:55:06 -05:00
Jan Kiszka
1658e3cd89 ahci: Fix reset of MSI function
Call msi_reset on device reset as still required by the core.

CC: Alexander Graf <agraf@suse.de>
CC: qemu-stable@nongnu.org
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 868a1a5226)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:55:00 -05:00
Fernando Luis Vazquez Cao
065436479b rtl8139: honor RxOverflow flag in can_receive method
Some drivers (Linux' 8139too among them) rely on the NIC
injecting an interrupt in the event of a receive buffer overflow
and, accordingly, set the RxOverflow bit in the interrupt
mask. Unfortunately rtl8139's can_receive method ignores the
RxOverflow flag, which may lead to a situation where rtl8139
stops receiving packets (can_receive returns 0) when the receive
buffer becomes full.

If the driver eventually read from the receive buffer or reset
the card, the emulator could recover from this situation. However,
some implementations only do this upon receiving an interrupt
with either RxOK or RxOverflow set in the ISR; an interrupt that
will never come, because QEMU's flow control mechanisms would
prevent rtl8139 from receiving any packet.

Letting packets go through when the overflow interrupt is enabled
makes the QEMU emulator compliant with the spec and solves the
problem.
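
A sketch of the relaxed check (IntrMask and RxOverflow are the device's own
names; the buffer-space helper is illustrative):

static int rtl8139_can_receive_overflow_sketch(RTL8139State *s)
{
    if (rx_ring_space(s) == 0) {
        /* buffer full: accept the packet anyway if the guest asked to be
         * interrupted on overflow, so its ISR can drain the ring */
        return s->IntrMask & RxOverflow;
    }
    return 1;
}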

This patch should fix a relatively common (in our experience)
network stall observed when running enterprise distros with
rtl8139 as the NIC; in some cases the 8139too device driver gets
loaded, and under heavy load the network eventually stops
working.

Reported-by: Hayato Kakuta <kakuta.hayato@oss.ntt.co.jp>
Tested-by: Hayato Kakuta <kakuta.hayato@oss.ntt.co.jp>
Acked-by: Igor Kovalenko <igor.v.kovalenko@gmail.com>
Signed-off-by: Fernando Luis Vazquez Cao <fernando@oss.ntt.co.jp>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit fee9d348ff)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:54:55 -05:00
Stefan Weil
f6db26e4f8 configure: Fix build for some versions of glibc (9pfs)
Some versions declare open_by_handle_at, but don't define AT_EMPTY_PATH.
Extend the check in configure to test both preconditions.
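
The configure probe is roughly a compile test of this shape:

#include <fcntl.h>
#if !defined(AT_EMPTY_PATH)
# error AT_EMPTY_PATH not defined
#else
int main(void)
{
    struct file_handle fh;
    return open_by_handle_at(0, &fh, 0);
}
#endif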

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Serge Hallyn <serge.hallyn@ubuntu.com>
(cherry picked from commit acc55ba8b1)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:54:44 -05:00
Stefan Weil
c49dd1bf64 monitor: Fix memory leak with readline completion
Each string which is shown during readline completion in the QEMU monitor
is allocated dynamically but currently never deallocated.

Add the missing loop which calls g_free for the allocated strings.
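
The missing cleanup in sketch form (the completion array fields match
ReadLineState; the helper name is illustrative):

static void readline_free_completions_sketch(ReadLineState *rs)
{
    int i;

    for (i = 0; i < rs->nb_completions; i++) {
        g_free(rs->completions[i]);
    }
    rs->nb_completions = 0;
}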

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
(cherry picked from commit fc9fa4bd0a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:53:25 -05:00
Kevin Wolf
b4fcb4b499 qcow2: Silence false warning
Some gcc versions seem not to be able to figure out that the switch
statement covers all possible values and that c is therefore always
initialised. Add a default branch for them.
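
The pattern in miniature (the case labels and values are illustrative):

int c;
switch (type) {
case FIRST_CASE:
    c = 0;
    break;
case SECOND_CASE:
    c = 1;
    break;
default:
    abort();   /* unreachable, but convinces gcc that c is always set */
}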

Reported-by: malc <av1474@comtv.ru>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: malc <av1474@comtv.ru>
(cherry picked from commit 1417d7e40e)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:53:20 -05:00
Jan Kiszka
7672b714b2 kvm: i8254: Fix conversion of in-kernel to userspace state
Due to an offset between the clock used to generate the in-kernel
count_load_time (CLOCK_MONOTONIC) and the clock used for processing this
in userspace (vm_clock), reading back the output of PIT channel 2 via
port 0x61 was broken. One use case that suffered from it was the CPU
frequency calibration of SeaBIOS, which also affected IDE/AHCI timeouts.

This fixes it by calibrating the offset between both clocks on
kvm_pit_get and adjusting the kernel value before saving it in the
userspace state. As the calibration only works while the vm_clock is
running, we cache the in-kernel state across stopped phases.
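
A sketch of the calibration, with illustrative names where they differ from
the real code:

static int64_t kpit_to_vm_clock_sketch(int64_t kernel_count_load_time)
{
    struct timespec ts;
    int64_t offset;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    offset = qemu_get_clock_ns(vm_clock)
           - (ts.tv_sec * 1000000000LL + ts.tv_nsec);
    return kernel_count_load_time + offset;   /* now in vm_clock terms */
}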

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
(cherry picked from commit 0cdd3d1444)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:53:13 -05:00
Jim Meyering
ca09717e8e kvm/apic: correct short memset
kvm_put_apic_state's attempt to clear *kapic before setting its
bits cleared sizeof(void*) bytes (no more than 8) rather than the
intended 1024 (KVM_APIC_REG_SIZE) bytes. Spotted by Coverity.
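
The bug pattern in miniature (kapic points at the 1024-byte kvm_lapic_state
structure):

memset(kapic, 0, sizeof(kapic));    /* wrong: clears sizeof(pointer) bytes, at most 8 */
memset(kapic, 0, sizeof(*kapic));   /* intended: clears the whole structure */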

Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
(cherry picked from commit 0614cb82ca)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:53:05 -05:00
Harsh Prateek Bora
0cc21de484 configure: report missing libraries for virtfs
Signed-off-by: Harsh Prateek Bora <harsh@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
(cherry picked from commit 263ddcc81b)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:52:57 -05:00
Harsh Prateek Bora
08375616a0 trace/simple.c: fix deprecated glib2 interface
Signed-off-by: Harsh Prateek Bora <harsh@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
(cherry picked from commit 0d665005c7)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:52:44 -05:00
Max Filippov
b993b863e7 target-xtensa: fix CCOUNT for conditional branches
Taken conditional branches fail to update the CCOUNT register because
the accumulated ccount_delta is reset during translation of the non-taken
branch. To fix it, only update CCOUNT once per conditional branch
instruction translation.

This fixes guest linux freeze on LTP waitpid06 test.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit d865f30739)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:48:56 -05:00
Max Filippov
07ff37597b exec: fix TB invalidation after breakpoint insertion/deletion
tb_invalidate_phys_addr has to be called with the exact physical address of
the breakpoint we add/remove, not just the page's base address.
Otherwise we easily fail to flush the right TB.

This breakage was introduced by the commit f3705d5329 "memory: make
phys_page_find() return an unadjusted".

This appeared to work for some guest architectures because their
cpu_get_phys_page_debug implementation returns the full translated physical
address, not just the base of the TARGET_PAGE_SIZE-sized page.
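
A sketch of the corrected call: combine the page's physical base with the
in-page offset so the exact breakpoint address is flushed.

tb_invalidate_phys_addr(cpu_get_phys_page_debug(env, pc) |
                        (pc & ~TARGET_PAGE_MASK));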

Reported-by: TeLeMan <geleman@gmail.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 9d70c4b7b8)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:48:46 -05:00
Max Filippov
e77326d99c target-xtensa: add MMU pagewalking tests
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit c305e32f43)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:48:38 -05:00
Max Filippov
8b3ac66120 target-xtensa: control page table lookup explicitly
Hardware page table walking may not be nested. Stop guessing and pass
an explicit flag to the get_physical_addr_mmu function that controls page
table lookup.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 57705a676c)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:48:31 -05:00
Max Filippov
2eb4d314ce target-xtensa: update autorefill TLB entries conditionally
This is to avoid interference of internal QEMU helpers
(cpu_get_phys_page_debug, tb_invalidate_virtual_addr) with guest-visible
TLB state.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit ae4e7982e6)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:48:23 -05:00
Max Filippov
adda59173c target-xtensa: extract TLB entry setting method
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 16bde77a29)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:48:13 -05:00
Max Filippov
b696aeab6a target-xtensa: update EXCVADDR in case of page table lookup
According to the ISA, section 4.4.2.6, EXCVADDR may be changed by any TLB miss, even
if the miss is handled entirely by processor hardware.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 39e7d37f0f)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:48:06 -05:00
Max Filippov
6514fe5047 target-xtensa: flush TLB page for new MMU mapping
Both old and new mappings need flushing because their VPNs may
differ in the MMU case.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit e323bdeff2)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:47:58 -05:00
Christian Borntraeger
c63c453889 virtio-blk: Fix geometry sector calculation
Currently the sector value for the geometry is masked, even if the
user uses a command line parameter that explicitly gives a number.
This breaks dasd devices on s390. A dasd device can have
a physical block size of 4096 (the same as its logical block size)
and a typical geometry of 15 heads and 12 sectors per cylinder.
The IBM partition detection relies on a correct geometry
reported by the device. Unfortunately the current code changes
12 to 8. This would be necessary if the total size were
not a multiple of the logical sector size, but for dasd this
is not the case.

This patch checks the device size and only applies the sector
mask if necessary.
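
A rough sketch of the idea (only one plausible reading of the check; the names
are illustrative):

if (total_sectors % secs != 0) {
    /* only then fall back to the masked value derived from the block size */
    secs &= ~sector_mask;
}
geometry->sectors = secs;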

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
CC: Christoph Hellwig <hch@lst.de>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 136be99e6e)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:47:47 -05:00
917 changed files with 39027 additions and 81987 deletions

.gitignore

@@ -41,14 +41,12 @@ qemu-io
qemu-ga
qemu-bridge-helper
qemu-monitor.texi
vscclient
QMP/qmp-commands.txt
test-coroutine
test-qmp-input-visitor
test-qmp-output-visitor
test-string-input-visitor
test-string-output-visitor
test-visitor-serialization
fsdev/virtfs-proxy-helper.1
fsdev/virtfs-proxy-helper.pod
.gdbinit
@@ -71,10 +69,6 @@ fsdev/virtfs-proxy-helper.pod
*.vr
*.d
*.o
*.lo
*.la
*.pc
.libs
*.swp
*.orig
.pc

MAINTAINERS

@@ -207,12 +207,6 @@ M: qemu-devel@nongnu.org
S: Orphan
F: hw/gumstix.c
i.MX31
M: Peter Chubb <peter.chubb@nicta.com.au>
S: Odd fixes
F: hw/imx*
F: hw/kzm.c
Integrator CP
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
@@ -317,11 +311,6 @@ M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
S: Maintained
F: hw/petalogix_s3adsp1800.c
petalogix_ml605
M: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
S: Maintained
F: hw/petalogix_ml605_mmu.c
MIPS Machines
-------------
Jazz
@@ -398,12 +387,6 @@ M: Blue Swirl <blauwirbel@gmail.com>
S: Maintained
F: hw/sun4u.c
Leon3
M: Fabien Chouteau <chouteau@adacore.com>
S: Maintained
F: hw/leon3.c
F: hw/grlib*
S390 Machines
-------------
S390 Virtio
@@ -411,14 +394,6 @@ M: Alexander Graf <agraf@suse.de>
S: Maintained
F: hw/s390-*.c
UniCore32 Machines
-------------
PKUnity-3 SoC initramfs-with-busybox
M: Guan Xuetao <gxt@mprc.pku.edu.cn>
S: Maintained
F: hw/puv3*
F: hw/unicore32/
X86 Machines
------------
PC
@@ -502,17 +477,6 @@ S: Supported
F: hw/virtio-serial*
F: hw/virtio-console*
Xilinx EDK
M: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
S: Maintained
F: hw/xilinx_axi*
F: hw/xilinx_uartlite.c
F: hw/xilinx_intc.c
F: hw/xilinx_ethlite.c
F: hw/xilinx_timer.c
F: hw/xilinx.h
Subsystems
----------
Audio
@@ -531,18 +495,6 @@ M: Anthony Liguori <aliguori@us.ibm.com>
S: Maintained
F: qemu-char.c
CPU
M: Andreas Färber <afaerber@suse.de>
S: Supported
F: qom/cpu.c
F: include/qemu/cpu.h
Device Tree
M: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
M: Alexander Graf <agraf@suse.de>
S: Maintained
F: device-tree.[ch]
GDB stub
M: qemu-devel@nongnu.org
S: Odd Fixes
@@ -580,10 +532,9 @@ F: monitor.c
Network device layer
M: Anthony Liguori <aliguori@us.ibm.com>
M: Stefan Hajnoczi <stefanha@gmail.com>
M: Mark McLoughlin <markmc@redhat.com>
S: Maintained
F: net/
T: git git://github.com/stefanha/qemu.git net
Network Block Device (NBD)
M: Paolo Bonzini <pbonzini@redhat.com>
@@ -600,7 +551,7 @@ F: slirp/
T: git git://git.kiszka.org/qemu.git queues/slirp
Tracing
M: Stefan Hajnoczi <stefanha@gmail.com>
M: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
S: Maintained
F: trace/
F: scripts/tracetool.py

Makefile

@@ -6,7 +6,7 @@ BUILD_DIR=$(CURDIR)
# All following code might depend on configuration variables
ifneq ($(wildcard config-host.mak),)
# Put the all: rule here so that config-host.mak can contain dependencies.
all:
all: build-all
include config-host.mak
include $(SRC_PATH)/rules.mak
config-host.mak: $(SRC_PATH)/configure
@@ -31,9 +31,9 @@ Makefile: ;
configure: ;
.PHONY: all clean cscope distclean dvi html info install install-doc \
pdf recurse-all speed test dist
pdf recurse-all speed tar tarbin test build-all
$(call set-vpath, $(SRC_PATH))
$(call set-vpath, $(SRC_PATH):$(SRC_PATH)/hw)
LIBS+=-lz $(LIBS_TOOLS)
@@ -52,13 +52,8 @@ SUBDIR_MAKEFLAGS=$(if $(V),,--no-print-directory) BUILD_DIR=$(BUILD_DIR)
SUBDIR_DEVICES_MAK=$(patsubst %, %/config-devices.mak, $(TARGET_DIRS))
SUBDIR_DEVICES_MAK_DEP=$(patsubst %, %/config-devices.mak.d, $(TARGET_DIRS))
ifeq ($(SUBDIR_DEVICES_MAK),)
config-all-devices.mak:
$(call quiet-command,echo '# no devices' > $@," GEN $@")
else
config-all-devices.mak: $(SUBDIR_DEVICES_MAK)
$(call quiet-command,cat $(SUBDIR_DEVICES_MAK) | grep =y | sort -u > $@," GEN $@")
endif
-include $(SUBDIR_DEVICES_MAK_DEP)
@@ -87,7 +82,7 @@ defconfig:
-include config-all-devices.mak
all: $(DOCS) $(TOOLS) $(HELPERS-y) recurse-all
build-all: $(DOCS) $(TOOLS) $(HELPERS-y) recurse-all
config-host.h: config-host.h-timestamp
config-host.h-timestamp: config-host.mak
@@ -96,18 +91,19 @@ qemu-options.def: $(SRC_PATH)/qemu-options.hx
SUBDIR_RULES=$(patsubst %,subdir-%, $(TARGET_DIRS))
subdir-%:
subdir-%: $(GENERATED_HEADERS)
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C $* V="$(V)" TARGET_DIR="$*/" all,)
ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/Makefile.objs
endif
$(universal-obj-y) $(common-obj-y): $(GENERATED_HEADERS)
subdir-libcacard: $(oslib-obj-y) $(trace-obj-y) qemu-timer-common.o
$(filter %-softmmu,$(SUBDIR_RULES)): $(universal-obj-y) $(trace-obj-y) $(common-obj-y) $(extra-obj-y) subdir-libdis
$(filter %-softmmu,$(SUBDIR_RULES)): $(universal-obj-y) $(trace-obj-y) $(common-obj-y) subdir-libdis
$(filter %-user,$(SUBDIR_RULES)): $(universal-obj-y) $(trace-obj-y) subdir-libdis-user subdir-libuser
$(filter %-user,$(SUBDIR_RULES)): $(GENERATED_HEADERS) $(universal-obj-y) $(trace-obj-y) subdir-libdis-user subdir-libuser
ROMSUBDIR_RULES=$(patsubst %,romsubdir-%, $(ROMS))
romsubdir-%:
@@ -125,7 +121,7 @@ QEMU_CFLAGS += -I$(SRC_PATH)/include
ui/cocoa.o: ui/cocoa.m
ui/sdl.o audio/sdlaudio.o ui/sdl_zoom.o hw/baum.o: QEMU_CFLAGS += $(SDL_CFLAGS)
ui/sdl.o audio/sdlaudio.o ui/sdl_zoom.o baum.o: QEMU_CFLAGS += $(SDL_CFLAGS)
ui/vnc.o: QEMU_CFLAGS += $(VNC_TLS_CFLAGS)
@@ -146,20 +142,19 @@ libcacard.la:
install-libcacard:
@echo "libtool is missing, please install and rerun configure"; exit 1
else
libcacard.la: $(oslib-obj-y) qemu-timer-common.o $(addsuffix .lo, $(basename $(trace-obj-y)))
libcacard.la: $(GENERATED_HEADERS) $(oslib-obj-y) qemu-timer-common.o $(addsuffix .lo, $(basename $(trace-obj-y)))
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C libcacard V="$(V)" TARGET_DIR="$*/" libcacard.la,)
install-libcacard: libcacard.la
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C libcacard V="$(V)" TARGET_DIR="$*/" install-libcacard,)
endif
######################################################################
qemu-img.o: qemu-img-cmds.h
qemu-img.o qemu-tool.o qemu-nbd.o qemu-io.o cmd.o qemu-ga.o: $(GENERATED_HEADERS)
tools-obj-y = $(oslib-obj-y) $(trace-obj-y) qemu-tool.o qemu-timer.o \
qemu-timer-common.o main-loop.o notify.o \
iohandler.o cutils.o iov.o async.o
qemu-timer-common.o main-loop.o notify.o iohandler.o cutils.o async.o
tools-obj-$(CONFIG_POSIX) += compatfd.o
qemu-img$(EXESUF): qemu-img.o $(tools-obj-y) $(block-obj-y)
@@ -167,9 +162,7 @@ qemu-nbd$(EXESUF): qemu-nbd.o $(tools-obj-y) $(block-obj-y)
qemu-io$(EXESUF): qemu-io.o cmd.o $(tools-obj-y) $(block-obj-y)
qemu-bridge-helper$(EXESUF): qemu-bridge-helper.o
vscclient$(EXESUF): $(libcacard-y) $(oslib-obj-y) $(trace-obj-y) $(tools-obj-y) qemu-timer-common.o libcacard/vscclient.o
$(call quiet-command,$(CC) $(LDFLAGS) -o $@ $^ $(libcacard_libs) $(LIBS)," LINK $@")
qemu-bridge-helper.o: $(GENERATED_HEADERS)
fsdev/virtfs-proxy-helper$(EXESUF): fsdev/virtfs-proxy-helper.o fsdev/virtio-9p-marshal.o oslib-posix.o $(trace-obj-y)
fsdev/virtfs-proxy-helper$(EXESUF): LIBS += -lcap
@@ -177,8 +170,10 @@ fsdev/virtfs-proxy-helper$(EXESUF): LIBS += -lcap
qemu-img-cmds.h: $(SRC_PATH)/qemu-img-cmds.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $@")
$(qapi-obj-y): $(GENERATED_HEADERS)
qapi-dir := $(BUILD_DIR)/qapi-generated
qemu-ga$(EXESUF): LIBS = $(LIBS_QGA)
qemu-ga$(EXESUF): QEMU_CFLAGS += -I qga/qapi-generated
qemu-ga$(EXESUF): QEMU_CFLAGS += -I $(qapi-dir)
gen-out-type = $(subst .,-,$(suffix $@))
@@ -186,32 +181,32 @@ ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/tests/Makefile
endif
qapi-py = $(SRC_PATH)/scripts/qapi.py $(SRC_PATH)/scripts/ordereddict.py
qga/qapi-generated/qga-qapi-types.c qga/qapi-generated/qga-qapi-types.h :\
$(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-types.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py $(gen-out-type) -o qga/qapi-generated -p "qga-" < $<, " GEN $@")
qga/qapi-generated/qga-qapi-visit.c qga/qapi-generated/qga-qapi-visit.h :\
$(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-visit.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py $(gen-out-type) -o qga/qapi-generated -p "qga-" < $<, " GEN $@")
qga/qapi-generated/qga-qmp-commands.h qga/qapi-generated/qga-qmp-marshal.c :\
$(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py $(gen-out-type) -o qga/qapi-generated -p "qga-" < $<, " GEN $@")
$(qapi-dir)/qga-qapi-types.c $(qapi-dir)/qga-qapi-types.h :\
$(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-types.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py $(gen-out-type) -o "$(qapi-dir)" -p "qga-" < $<, " GEN $@")
$(qapi-dir)/qga-qapi-visit.c $(qapi-dir)/qga-qapi-visit.h :\
$(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-visit.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py $(gen-out-type) -o "$(qapi-dir)" -p "qga-" < $<, " GEN $@")
$(qapi-dir)/qga-qmp-commands.h $(qapi-dir)/qga-qmp-marshal.c :\
$(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-commands.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py $(gen-out-type) -o "$(qapi-dir)" -p "qga-" < $<, " GEN $@")
qapi-types.c qapi-types.h :\
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-types.py $(qapi-py)
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-types.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py $(gen-out-type) -o "." < $<, " GEN $@")
qapi-visit.c qapi-visit.h :\
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-visit.py $(qapi-py)
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-visit.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py $(gen-out-type) -o "." < $<, " GEN $@")
qmp-commands.h qmp-marshal.c :\
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-commands.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py $(gen-out-type) -m -o "." < $<, " GEN $@")
QGALIB_GEN=$(addprefix qga/qapi-generated/, qga-qapi-types.h qga-qapi-visit.h qga-qmp-commands.h)
$(qga-obj-y) qemu-ga.o: $(QGALIB_GEN)
QGALIB_OBJ=$(addprefix $(qapi-dir)/, qga-qapi-types.o qga-qapi-visit.o qga-qmp-marshal.o)
QGALIB_GEN=$(addprefix $(qapi-dir)/, qga-qapi-types.h qga-qapi-visit.h qga-qmp-commands.h)
$(QGALIB_OBJ): $(QGALIB_GEN) $(GENERATED_HEADERS)
$(qga-obj-y) qemu-ga.o: $(QGALIB_GEN) $(GENERATED_HEADERS)
qemu-ga$(EXESUF): qemu-ga.o $(qga-obj-y) $(tools-obj-y) $(qapi-obj-y) $(qobject-obj-y) $(version-obj-y)
qemu-ga$(EXESUF): qemu-ga.o $(qga-obj-y) $(tools-obj-y) $(qapi-obj-y) $(qobject-obj-y) $(version-obj-y) $(QGALIB_OBJ)
QEMULIBS=libhw32 libhw64 libuser libdis libdis-user
@@ -219,30 +214,24 @@ clean:
# avoid old build problems by removing potentially incorrect old files
rm -f config.mak op-i386.h opc-i386.h gen-op-i386.h op-arm.h opc-arm.h gen-op-arm.h
rm -f qemu-options.def
find . -name '*.[od]' -exec rm -f {} +
rm -f *.a *.lo $(TOOLS) $(HELPERS-y) qemu-ga TAGS cscope.* *.pod *~ */*~
rm -f *.o *.d *.a *.lo $(TOOLS) $(HELPERS-y) qemu-ga TAGS cscope.* *.pod *~ */*~
rm -Rf .libs
rm -f slirp/*.o slirp/*.d audio/*.o audio/*.d block/*.o block/*.d net/*.o net/*.d fsdev/*.o fsdev/*.d ui/*.o ui/*.d qapi/*.o qapi/*.d qga/*.o qga/*.d
rm -f qom/*.o qom/*.d
rm -f qemu-img-cmds.h
rm -f trace/*.o trace/*.d
rm -f trace-dtrace.dtrace trace-dtrace.dtrace-timestamp
@# May not be present in GENERATED_HEADERS
rm -f trace-dtrace.h trace-dtrace.h-timestamp
rm -f $(foreach f,$(GENERATED_HEADERS),$(f) $(f)-timestamp)
rm -f $(foreach f,$(GENERATED_SOURCES),$(f) $(f)-timestamp)
rm -rf qapi-generated
rm -rf qga/qapi-generated
rm -rf $(qapi-dir)
$(MAKE) -C tests/tcg clean
for d in $(ALL_SUBDIRS) $(QEMULIBS) libcacard; do \
if test -d $$d; then $(MAKE) -C $$d $@ || exit 1; fi; \
rm -f $$d/qemu-options.def; \
done
VERSION ?= $(shell cat VERSION)
dist: qemu-$(VERSION).tar.bz2
qemu-%.tar.bz2:
$(SRC_PATH)/scripts/make-release "$(SRC_PATH)" "$(patsubst qemu-%.tar.bz2,%,$@)"
distclean: clean
rm -f config-host.mak config-host.h* config-host.ld $(DOCS) qemu-options.texi qemu-img-cmds.texi qemu-monitor.texi
rm -f config-all-devices.mak
@@ -260,8 +249,7 @@ distclean: clean
KEYMAPS=da en-gb et fr fr-ch is lt modifiers no pt-br sv \
ar de en-us fi fr-be hr it lv nl pl ru th \
common de-ch es fo fr-ca hu ja mk nl-be pt sl tr \
bepo
common de-ch es fo fr-ca hu ja mk nl-be pt sl tr
ifdef INSTALL_BLOBS
BLOBS=bios.bin sgabios.bin vgabios.bin vgabios-cirrus.bin \
@@ -271,6 +259,7 @@ pxe-e1000.rom pxe-eepro100.rom pxe-ne2k_pci.rom \
pxe-pcnet.rom pxe-rtl8139.rom pxe-virtio.rom \
qemu-icon.bmp \
bamboo.dtb petalogix-s3adsp1800.dtb petalogix-ml605.dtb \
mpc8544ds.dtb \
multiboot.bin linuxboot.bin kvmvapic.bin \
s390-zipl.rom \
spapr-rtas.bin slof.bin \
@@ -401,10 +390,15 @@ qemu-doc.dvi qemu-doc.html qemu-doc.info qemu-doc.pdf: \
qemu-img.texi qemu-nbd.texi qemu-options.texi \
qemu-monitor.texi qemu-img-cmds.texi
# Add a dependency on the generated files, so that they are always
# rebuilt before other object files
Makefile: $(GENERATED_HEADERS)
VERSION ?= $(shell cat VERSION)
FILE = qemu-$(VERSION)
# tar release (use 'make -k tar' on a checkouted tree)
tar:
rm -rf /tmp/$(FILE)
cp -r . /tmp/$(FILE)
cd /tmp && tar zcvf ~/$(FILE).tar.gz $(FILE) --exclude CVS --exclude .git --exclude .svn
rm -rf /tmp/$(FILE)
# Include automatically generated dependency files
# Dependencies in Makefile.objs files come from our recursive subdir rules
-include $(wildcard *.d tests/*.d)
-include $(wildcard *.d audio/*.d slirp/*.d block/*.d net/*.d ui/*.d qapi/*.d qga/*.d)

Makefile.dis

@@ -18,3 +18,6 @@ all: $(libdis-y)
clean:
rm -f *.o *.d *.a *~
# Include automatically generated dependency files
-include $(wildcard *.d */*.d)

Makefile.hw

@@ -7,7 +7,7 @@ include $(SRC_PATH)/rules.mak
.PHONY: all
$(call set-vpath, $(SRC_PATH))
$(call set-vpath, $(SRC_PATH):$(SRC_PATH)/hw)
QEMU_CFLAGS+=-I..
QEMU_CFLAGS += -I$(SRC_PATH)/include
@@ -19,5 +19,7 @@ all: $(hw-obj-y)
@true
clean:
rm -f $(addsuffix *.o, $(sort $(dir $(hw-obj-y))))
rm -f $(addsuffix *.d, $(sort $(dir $(hw-obj-y))))
rm -f *.o */*.o *.d */*.d *.a */*.a *~ */*~
# Include automatically generated dependency files
-include $(wildcard *.d */*.d)

Makefile.objs

@@ -1,7 +1,6 @@
#######################################################################
# Target-independent parts used in system and user emulation
universal-obj-y =
universal-obj-y += qemu-log.o
#######################################################################
# QObject
@@ -13,7 +12,9 @@ universal-obj-y += $(qobject-obj-y)
#######################################################################
# QOM
qom-obj-y = qom/
include $(SRC_PATH)/qom/Makefile
qom-obj-y = $(addprefix qom/, $(qom-y))
qom-obj-twice-y = $(addprefix qom/, $(qom-twice-y))
universal-obj-y += $(qom-obj-y)
@@ -41,18 +42,50 @@ coroutine-obj-$(CONFIG_WIN32) += coroutine-win32.o
#######################################################################
# block-obj-y is code used by both qemu system emulation and qemu-img
block-obj-y = cutils.o iov.o cache-utils.o qemu-option.o module.o async.o
block-obj-y = cutils.o cache-utils.o qemu-option.o module.o async.o
block-obj-y += nbd.o block.o aio.o aes.o qemu-config.o qemu-progress.o qemu-sockets.o
block-obj-y += $(coroutine-obj-y) $(qobject-obj-y) $(version-obj-y)
block-obj-$(CONFIG_POSIX) += posix-aio-compat.o
block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
block-obj-y += block/
block-nested-y += raw.o cow.o qcow.o vdi.o vmdk.o cloop.o dmg.o bochs.o vpc.o vvfat.o
block-nested-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-cache.o
block-nested-y += qed.o qed-gencb.o qed-l2-cache.o qed-table.o qed-cluster.o
block-nested-y += qed-check.o
block-nested-y += parallels.o nbd.o blkdebug.o sheepdog.o blkverify.o
block-nested-y += stream.o
block-nested-$(CONFIG_WIN32) += raw-win32.o
block-nested-$(CONFIG_POSIX) += raw-posix.o
block-nested-$(CONFIG_LIBISCSI) += iscsi.o
block-nested-$(CONFIG_CURL) += curl.o
block-nested-$(CONFIG_RBD) += rbd.o
block-obj-y += $(addprefix block/, $(block-nested-y))
net-obj-y = net.o
net-nested-y = queue.o checksum.o util.o
net-nested-y += socket.o
net-nested-y += dump.o
net-nested-$(CONFIG_POSIX) += tap.o
net-nested-$(CONFIG_LINUX) += tap-linux.o
net-nested-$(CONFIG_WIN32) += tap-win32.o
net-nested-$(CONFIG_BSD) += tap-bsd.o
net-nested-$(CONFIG_SOLARIS) += tap-solaris.o
net-nested-$(CONFIG_AIX) += tap-aix.o
net-nested-$(CONFIG_HAIKU) += tap-haiku.o
net-nested-$(CONFIG_SLIRP) += slirp.o
net-nested-$(CONFIG_VDE) += vde.o
net-obj-y += $(addprefix net/, $(net-nested-y))
ifeq ($(CONFIG_VIRTIO)$(CONFIG_VIRTFS)$(CONFIG_PCI),yyy)
# Lots of the fsdev/9pcode is pulled in by vl.c via qemu_fsdev_add.
# only pull in the actual virtio-9p device if we also enabled virtio.
CONFIG_REALLY_VIRTFS=y
fsdev-nested-y = qemu-fsdev.o virtio-9p-marshal.o
else
fsdev-nested-y = qemu-fsdev-dummy.o
endif
fsdev-obj-$(CONFIG_VIRTFS) += $(addprefix fsdev/, $(fsdev-nested-y))
######################################################################
# Target independent part of system emulation. The long term path is to
@@ -60,47 +93,104 @@ endif
# single QEMU executable should support all CPUs and machines.
common-obj-y = $(block-obj-y) blockdev.o
common-obj-y += net.o net/
common-obj-y += qom/
common-obj-y += $(net-obj-y)
common-obj-y += $(qom-obj-twice-y)
common-obj-$(CONFIG_LINUX) += $(fsdev-obj-$(CONFIG_LINUX))
common-obj-y += readline.o console.o cursor.o
common-obj-y += $(oslib-obj-y)
common-obj-$(CONFIG_WIN32) += os-win32.o
common-obj-$(CONFIG_POSIX) += os-posix.o
common-obj-$(CONFIG_LINUX) += fsdev/
extra-obj-$(CONFIG_LINUX) += fsdev/
common-obj-y += tcg-runtime.o host-utils.o main-loop.o
common-obj-y += input.o
common-obj-y += irq.o input.o
common-obj-$(CONFIG_PTIMER) += ptimer.o
common-obj-$(CONFIG_MAX7310) += max7310.o
common-obj-$(CONFIG_WM8750) += wm8750.o
common-obj-$(CONFIG_TWL92230) += twl92230.o
common-obj-$(CONFIG_TSC2005) += tsc2005.o
common-obj-$(CONFIG_LM832X) += lm832x.o
common-obj-$(CONFIG_TMP105) += tmp105.o
common-obj-$(CONFIG_STELLARIS_INPUT) += stellaris_input.o
common-obj-$(CONFIG_SSD0303) += ssd0303.o
common-obj-$(CONFIG_SSD0323) += ssd0323.o
common-obj-$(CONFIG_ADS7846) += ads7846.o
common-obj-$(CONFIG_MAX111X) += max111x.o
common-obj-$(CONFIG_DS1338) += ds1338.o
common-obj-y += i2c.o smbus.o smbus_eeprom.o
common-obj-y += eeprom93xx.o
common-obj-y += scsi-disk.o cdrom.o
common-obj-y += scsi-generic.o scsi-bus.o
common-obj-y += hid.o
common-obj-y += usb/core.o usb/bus.o usb/desc.o usb/dev-hub.o
common-obj-y += usb/host-$(HOST_USB).o
common-obj-y += usb/dev-hid.o usb/dev-storage.o usb/dev-wacom.o
common-obj-y += usb/dev-serial.o usb/dev-network.o usb/dev-audio.o
common-obj-$(CONFIG_SSI) += ssi.o
common-obj-$(CONFIG_SSI_SD) += ssi-sd.o
common-obj-$(CONFIG_SD) += sd.o
common-obj-y += bt.o bt-host.o bt-vhci.o bt-l2cap.o bt-sdp.o bt-hci.o bt-hid.o
common-obj-y += bt-hci-csr.o usb/dev-bluetooth.o
common-obj-y += buffered_file.o migration.o migration-tcp.o
common-obj-y += qemu-char.o #aio.o
common-obj-y += msmouse.o ps2.o
common-obj-y += qdev.o qdev-properties.o qdev-monitor.o
common-obj-y += block-migration.o iohandler.o
common-obj-y += pflib.o
common-obj-y += bitmap.o bitops.o
common-obj-y += page_cache.o
common-obj-$(CONFIG_BRLAPI) += baum.o
common-obj-$(CONFIG_POSIX) += migration-exec.o migration-unix.o migration-fd.o
common-obj-$(CONFIG_WIN32) += version.o
common-obj-$(CONFIG_SPICE) += spice-qemu-char.o
common-obj-$(CONFIG_SPICE) += ui/spice-core.o ui/spice-input.o ui/spice-display.o spice-qemu-char.o
common-obj-y += audio/
common-obj-y += hw/
common-obj-y += ui/
common-obj-y += bt-host.o bt-vhci.o
audio-obj-y = audio.o noaudio.o wavaudio.o mixeng.o
audio-obj-$(CONFIG_SDL) += sdlaudio.o
audio-obj-$(CONFIG_OSS) += ossaudio.o
audio-obj-$(CONFIG_SPICE) += spiceaudio.o
audio-obj-$(CONFIG_COREAUDIO) += coreaudio.o
audio-obj-$(CONFIG_ALSA) += alsaaudio.o
audio-obj-$(CONFIG_DSOUND) += dsoundaudio.o
audio-obj-$(CONFIG_FMOD) += fmodaudio.o
audio-obj-$(CONFIG_ESD) += esdaudio.o
audio-obj-$(CONFIG_PA) += paaudio.o
audio-obj-$(CONFIG_WINWAVE) += winwaveaudio.o
audio-obj-$(CONFIG_AUDIO_PT_INT) += audio_pt_int.o
audio-obj-$(CONFIG_AUDIO_WIN_INT) += audio_win_int.o
audio-obj-y += wavcapture.o
common-obj-y += $(addprefix audio/, $(audio-obj-y))
ui-obj-y += keymaps.o
ui-obj-$(CONFIG_SDL) += sdl.o sdl_zoom.o x_keymap.o
ui-obj-$(CONFIG_COCOA) += cocoa.o
ui-obj-$(CONFIG_CURSES) += curses.o
vnc-obj-y += vnc.o d3des.o
vnc-obj-y += vnc-enc-zlib.o vnc-enc-hextile.o
vnc-obj-y += vnc-enc-tight.o vnc-palette.o
vnc-obj-y += vnc-enc-zrle.o
vnc-obj-$(CONFIG_VNC_TLS) += vnc-tls.o vnc-auth-vencrypt.o
vnc-obj-$(CONFIG_VNC_SASL) += vnc-auth-sasl.o
ifdef CONFIG_VNC_THREAD
vnc-obj-y += vnc-jobs-async.o
else
vnc-obj-y += vnc-jobs-sync.o
endif
common-obj-y += $(addprefix ui/, $(ui-obj-y))
common-obj-$(CONFIG_VNC) += $(addprefix ui/, $(vnc-obj-y))
common-obj-y += iov.o acl.o
common-obj-$(CONFIG_POSIX) += compatfd.o
common-obj-y += notify.o event_notifier.o
common-obj-y += qemu-timer.o qemu-timer-common.o
common-obj-$(CONFIG_SLIRP) += slirp/
slirp-obj-y = cksum.o if.o ip_icmp.o ip_input.o ip_output.o
slirp-obj-y += slirp.o mbuf.o misc.o sbuf.o socket.o tcp_input.o tcp_output.o
slirp-obj-y += tcp_subr.o tcp_timer.o udp.o bootp.o tftp.o arp_table.o
common-obj-$(CONFIG_SLIRP) += $(addprefix slirp/, $(slirp-obj-y))
######################################################################
# libseccomp
ifeq ($(CONFIG_SECCOMP),y)
common-obj-y += qemu-seccomp.o
endif
# xen backend driver support
common-obj-$(CONFIG_XEN_BACKEND) += xen_backend.o xen_devconfig.o
common-obj-$(CONFIG_XEN_BACKEND) += xen_console.o xenfb.o xen_disk.o xen_nic.o
######################################################################
# libuser
@@ -108,16 +198,156 @@ endif
user-obj-y =
user-obj-y += envlist.o path.o
user-obj-y += tcg-runtime.o host-utils.o
user-obj-y += cutils.o iov.o cache-utils.o
user-obj-y += cutils.o cache-utils.o
user-obj-y += module.o
user-obj-y += qemu-user.o
user-obj-y += $(trace-obj-y)
user-obj-y += qom/
user-obj-y += $(qom-obj-twice-y)
######################################################################
# libhw
hw-obj-y = vl.o dma-helpers.o qtest.o hw/
hw-obj-y =
hw-obj-y += vl.o loader.o
hw-obj-$(CONFIG_VIRTIO) += virtio-console.o
hw-obj-y += usb/libhw.o
hw-obj-$(CONFIG_VIRTIO_PCI) += virtio-pci.o
hw-obj-y += fw_cfg.o
hw-obj-$(CONFIG_PCI) += pci.o pci_bridge.o pci_bridge_dev.o
hw-obj-$(CONFIG_PCI) += msix.o msi.o
hw-obj-$(CONFIG_PCI) += shpc.o
hw-obj-$(CONFIG_PCI) += slotid_cap.o
hw-obj-$(CONFIG_PCI) += pci_host.o pcie_host.o
hw-obj-$(CONFIG_PCI) += ioh3420.o xio3130_upstream.o xio3130_downstream.o
hw-obj-y += watchdog.o
hw-obj-$(CONFIG_ISA_MMIO) += isa_mmio.o
hw-obj-$(CONFIG_ECC) += ecc.o
hw-obj-$(CONFIG_NAND) += nand.o
hw-obj-$(CONFIG_PFLASH_CFI01) += pflash_cfi01.o
hw-obj-$(CONFIG_PFLASH_CFI02) += pflash_cfi02.o
hw-obj-$(CONFIG_M48T59) += m48t59.o
hw-obj-$(CONFIG_ESCC) += escc.o
hw-obj-$(CONFIG_EMPTY_SLOT) += empty_slot.o
hw-obj-$(CONFIG_SERIAL) += serial.o
hw-obj-$(CONFIG_PARALLEL) += parallel.o
hw-obj-$(CONFIG_I8254) += i8254_common.o i8254.o
hw-obj-$(CONFIG_PCSPK) += pcspk.o
hw-obj-$(CONFIG_PCKBD) += pckbd.o
hw-obj-$(CONFIG_USB_UHCI) += usb/hcd-uhci.o
hw-obj-$(CONFIG_USB_OHCI) += usb/hcd-ohci.o
hw-obj-$(CONFIG_USB_EHCI) += usb/hcd-ehci.o
hw-obj-$(CONFIG_USB_XHCI) += usb/hcd-xhci.o
hw-obj-$(CONFIG_FDC) += fdc.o
hw-obj-$(CONFIG_ACPI) += acpi.o acpi_piix4.o
hw-obj-$(CONFIG_APM) += pm_smbus.o apm.o
hw-obj-$(CONFIG_DMA) += dma.o
hw-obj-$(CONFIG_I82374) += i82374.o
hw-obj-$(CONFIG_HPET) += hpet.o
hw-obj-$(CONFIG_APPLESMC) += applesmc.o
hw-obj-$(CONFIG_SMARTCARD) += usb/dev-smartcard-reader.o ccid-card-passthru.o
hw-obj-$(CONFIG_SMARTCARD_NSS) += ccid-card-emulated.o
hw-obj-$(CONFIG_USB_REDIR) += usb/redirect.o
hw-obj-$(CONFIG_I8259) += i8259_common.o i8259.o
# PPC devices
hw-obj-$(CONFIG_PREP_PCI) += prep_pci.o
hw-obj-$(CONFIG_I82378) += i82378.o
# Mac shared devices
hw-obj-$(CONFIG_MACIO) += macio.o
hw-obj-$(CONFIG_CUDA) += cuda.o
hw-obj-$(CONFIG_ADB) += adb.o
hw-obj-$(CONFIG_MAC_NVRAM) += mac_nvram.o
hw-obj-$(CONFIG_MAC_DBDMA) += mac_dbdma.o
# OldWorld PowerMac
hw-obj-$(CONFIG_HEATHROW_PIC) += heathrow_pic.o
hw-obj-$(CONFIG_GRACKLE_PCI) += grackle_pci.o
# NewWorld PowerMac
hw-obj-$(CONFIG_UNIN_PCI) += unin_pci.o
hw-obj-$(CONFIG_DEC_PCI) += dec_pci.o
# PowerPC E500 boards
hw-obj-$(CONFIG_PPCE500_PCI) += ppce500_pci.o
# MIPS devices
hw-obj-$(CONFIG_PIIX4) += piix4.o
hw-obj-$(CONFIG_G364FB) += g364fb.o
hw-obj-$(CONFIG_JAZZ_LED) += jazz_led.o
# PCI watchdog devices
hw-obj-$(CONFIG_PCI) += wdt_i6300esb.o
hw-obj-$(CONFIG_PCI) += pcie.o pcie_aer.o pcie_port.o
# PCI network cards
hw-obj-$(CONFIG_NE2000_PCI) += ne2000.o
hw-obj-$(CONFIG_EEPRO100_PCI) += eepro100.o
hw-obj-$(CONFIG_PCNET_PCI) += pcnet-pci.o
hw-obj-$(CONFIG_PCNET_COMMON) += pcnet.o
hw-obj-$(CONFIG_E1000_PCI) += e1000.o
hw-obj-$(CONFIG_RTL8139_PCI) += rtl8139.o
hw-obj-$(CONFIG_SMC91C111) += smc91c111.o
hw-obj-$(CONFIG_LAN9118) += lan9118.o
hw-obj-$(CONFIG_NE2000_ISA) += ne2000-isa.o
hw-obj-$(CONFIG_OPENCORES_ETH) += opencores_eth.o
# IDE
hw-obj-$(CONFIG_IDE_CORE) += ide/core.o ide/atapi.o
hw-obj-$(CONFIG_IDE_QDEV) += ide/qdev.o
hw-obj-$(CONFIG_IDE_PCI) += ide/pci.o
hw-obj-$(CONFIG_IDE_ISA) += ide/isa.o
hw-obj-$(CONFIG_IDE_PIIX) += ide/piix.o
hw-obj-$(CONFIG_IDE_CMD646) += ide/cmd646.o
hw-obj-$(CONFIG_IDE_MACIO) += ide/macio.o
hw-obj-$(CONFIG_IDE_VIA) += ide/via.o
hw-obj-$(CONFIG_AHCI) += ide/ahci.o
hw-obj-$(CONFIG_AHCI) += ide/ich.o
# SCSI layer
hw-obj-$(CONFIG_LSI_SCSI_PCI) += lsi53c895a.o
hw-obj-$(CONFIG_ESP) += esp.o
hw-obj-y += dma-helpers.o sysbus.o isa-bus.o
hw-obj-y += qdev-addr.o
# VGA
hw-obj-$(CONFIG_VGA_PCI) += vga-pci.o
hw-obj-$(CONFIG_VGA_ISA) += vga-isa.o
hw-obj-$(CONFIG_VGA_ISA_MM) += vga-isa-mm.o
hw-obj-$(CONFIG_VMWARE_VGA) += vmware_vga.o
hw-obj-$(CONFIG_VMMOUSE) += vmmouse.o
hw-obj-$(CONFIG_VGA_CIRRUS) += cirrus_vga.o
hw-obj-$(CONFIG_RC4030) += rc4030.o
hw-obj-$(CONFIG_DP8393X) += dp8393x.o
hw-obj-$(CONFIG_DS1225Y) += ds1225y.o
hw-obj-$(CONFIG_MIPSNET) += mipsnet.o
hw-obj-y += qtest.o
# Sound
sound-obj-y =
sound-obj-$(CONFIG_SB16) += sb16.o
sound-obj-$(CONFIG_ES1370) += es1370.o
sound-obj-$(CONFIG_AC97) += ac97.o
sound-obj-$(CONFIG_ADLIB) += fmopl.o adlib.o
sound-obj-$(CONFIG_GUS) += gus.o gusemu_hal.o gusemu_mixer.o
sound-obj-$(CONFIG_CS4231A) += cs4231a.o
sound-obj-$(CONFIG_HDA) += intel-hda.o hda-audio.o
adlib.o fmopl.o: QEMU_CFLAGS += -DBUILD_Y8950=0
hw-obj-$(CONFIG_SOUND) += $(sound-obj-y)
9pfs-nested-$(CONFIG_VIRTFS) = virtio-9p.o
9pfs-nested-$(CONFIG_VIRTFS) += virtio-9p-local.o virtio-9p-xattr.o
9pfs-nested-$(CONFIG_VIRTFS) += virtio-9p-xattr-user.o virtio-9p-posix-acl.o
9pfs-nested-$(CONFIG_VIRTFS) += virtio-9p-coth.o cofs.o codir.o cofile.o
9pfs-nested-$(CONFIG_VIRTFS) += coxattr.o virtio-9p-synth.o
9pfs-nested-$(CONFIG_OPEN_BY_HANDLE) += virtio-9p-handle.o
9pfs-nested-$(CONFIG_VIRTFS) += virtio-9p-proxy.o
hw-obj-$(CONFIG_REALLY_VIRTFS) += $(addprefix 9pfs/, $(9pfs-nested-y))
######################################################################
# libdis
@@ -195,30 +425,31 @@ ifneq ($(TRACE_BACKEND),dtrace)
trace-obj-y = trace.o
endif
trace-obj-$(CONFIG_TRACE_DEFAULT) += trace/default.o
trace-obj-$(CONFIG_TRACE_SIMPLE) += trace/simple.o
trace-nested-$(CONFIG_TRACE_DEFAULT) += default.o
trace-nested-$(CONFIG_TRACE_SIMPLE) += simple.o
trace-obj-$(CONFIG_TRACE_SIMPLE) += qemu-timer-common.o
trace-obj-$(CONFIG_TRACE_STDERR) += trace/stderr.o
trace-obj-y += trace/control.o
trace-nested-$(CONFIG_TRACE_STDERR) += stderr.o
trace-nested-y += control.o
trace-obj-y += $(addprefix trace/, $(trace-nested-y))
$(trace-obj-y): $(GENERATED_HEADERS)
######################################################################
# smartcard
libcacard-y += libcacard/cac.o libcacard/event.o
libcacard-y += libcacard/vcard.o libcacard/vreader.o
libcacard-y += libcacard/vcard_emul_nss.o
libcacard-y += libcacard/vcard_emul_type.o
libcacard-y += libcacard/card_7816.o
common-obj-$(CONFIG_SMARTCARD_NSS) += $(libcacard-y)
libcacard-y = cac.o event.o vcard.o vreader.o vcard_emul_nss.o vcard_emul_type.o card_7816.o
######################################################################
# qapi
qapi-obj-y = qapi/
qapi-obj-y += qapi-types.o qapi-visit.o
qapi-nested-y = qapi-visit-core.o qapi-dealloc-visitor.o qmp-input-visitor.o
qapi-nested-y += qmp-output-visitor.o qmp-registry.o qmp-dispatch.o
qapi-nested-y += string-input-visitor.o string-output-visitor.o
qapi-obj-y = $(addprefix qapi/, $(qapi-nested-y))
common-obj-y += qmp-marshal.o qapi-visit.o qapi-types.o
common-obj-y += qmp.o hmp.o
@@ -228,7 +459,11 @@ universal-obj-y += $(qapi-obj-y)
######################################################################
# guest agent
qga-obj-y = qga/ qemu-ga.o module.o
qga-nested-y = commands.o guest-agent-command-state.o
qga-nested-$(CONFIG_POSIX) += commands-posix.o channel-posix.o
qga-nested-$(CONFIG_WIN32) += commands-win32.o channel-win32.o service-win32.o
qga-obj-y = $(addprefix qga/, $(qga-nested-y))
qga-obj-y += qemu-ga.o module.o
qga-obj-$(CONFIG_WIN32) += oslib-win32.o
qga-obj-$(CONFIG_POSIX) += oslib-posix.o qemu-sockets.o qemu-option.o
@@ -238,13 +473,3 @@ vl.o: QEMU_CFLAGS+=$(SDL_CFLAGS)
QEMU_CFLAGS+=$(GLIB_CFLAGS)
nested-vars += \
hw-obj-y \
qga-obj-y \
block-obj-y \
qom-obj-y \
qapi-obj-y \
user-obj-y \
common-obj-y \
extra-obj-y
dummy := $(call unnest-vars)

Makefile.target

@@ -1,5 +1,10 @@
# -*- Mode: makefile -*-
GENERATED_HEADERS = config-target.h
CONFIG_NO_PCI = $(if $(subst n,,$(CONFIG_PCI)),n,y)
CONFIG_NO_KVM = $(if $(subst n,,$(CONFIG_KVM)),n,y)
CONFIG_NO_XEN = $(if $(subst n,,$(CONFIG_XEN)),n,y)
include ../config-host.mak
include config-devices.mak
include config-target.mak
@@ -8,11 +13,14 @@ ifneq ($(HWDIR),)
include $(HWDIR)/config.mak
endif
$(call set-vpath, $(SRC_PATH))
TARGET_PATH=$(SRC_PATH)/target-$(TARGET_BASE_ARCH)
$(call set-vpath, $(SRC_PATH):$(TARGET_PATH):$(SRC_PATH)/hw)
ifdef CONFIG_LINUX
QEMU_CFLAGS += -I../linux-headers
endif
QEMU_CFLAGS += -I.. -I$(SRC_PATH)/target-$(TARGET_BASE_ARCH) -DNEED_CPU_H
QEMU_CFLAGS += -I.. -I$(TARGET_PATH) -DNEED_CPU_H
include $(SRC_PATH)/Makefile.objs
QEMU_CFLAGS+=-I$(SRC_PATH)/include
@@ -69,26 +77,78 @@ all: $(PROGS) stap
#########################################################
# cpu emulator library
obj-y = exec.o translate-all.o cpu-exec.o
obj-y += tcg/tcg.o tcg/optimize.o
obj-$(CONFIG_TCG_INTERPRETER) += tci.o
obj-y += fpu/softfloat.o
obj-y += disas.o
obj-$(CONFIG_TCI_DIS) += tci-dis.o
obj-y += target-$(TARGET_BASE_ARCH)/
obj-$(CONFIG_GDBSTUB_XML) += gdbstub-xml.o
libobj-y = exec.o translate-all.o cpu-exec.o translate.o
libobj-y += tcg/tcg.o tcg/optimize.o
libobj-$(CONFIG_TCG_INTERPRETER) += tci.o
libobj-y += fpu/softfloat.o
ifneq ($(TARGET_BASE_ARCH), sparc)
ifneq ($(TARGET_BASE_ARCH), alpha)
libobj-y += op_helper.o
endif
endif
libobj-y += helper.o
ifneq ($(TARGET_BASE_ARCH), ppc)
libobj-y += cpu.o
endif
libobj-$(TARGET_SPARC64) += vis_helper.o
libobj-$(CONFIG_NEED_MMU) += mmu.o
libobj-$(TARGET_ARM) += neon_helper.o iwmmxt_helper.o
ifeq ($(TARGET_BASE_ARCH), sparc)
libobj-y += fop_helper.o cc_helper.o win_helper.o mmu_helper.o ldst_helper.o
endif
libobj-$(TARGET_SPARC) += int32_helper.o
libobj-$(TARGET_SPARC64) += int64_helper.o
libobj-$(TARGET_ALPHA) += int_helper.o fpu_helper.o sys_helper.o mem_helper.o
libobj-y += disas.o
libobj-$(CONFIG_TCI_DIS) += tci-dis.o
tci-dis.o: QEMU_CFLAGS += -I$(SRC_PATH)/tcg -I$(SRC_PATH)/tcg/tci
$(libobj-y): $(GENERATED_HEADERS)
# HELPER_CFLAGS is used for all the legacy code compiled with static register
# variables
ifneq ($(TARGET_BASE_ARCH), sparc)
op_helper.o: QEMU_CFLAGS += $(HELPER_CFLAGS)
endif
user-exec.o: QEMU_CFLAGS += $(HELPER_CFLAGS)
# Note: this is a workaround. The real fix is to avoid compiling
# cpu_signal_handler() in user-exec.c.
signal.o: QEMU_CFLAGS += $(HELPER_CFLAGS)
#########################################################
# Linux user emulator target
ifdef CONFIG_LINUX_USER
QEMU_CFLAGS+=-I$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR) -I$(SRC_PATH)/linux-user
$(call set-vpath, $(SRC_PATH)/linux-user:$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR))
obj-y += linux-user/
obj-y += gdbstub.o thunk.o user-exec.o $(oslib-obj-y)
QEMU_CFLAGS+=-I$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR) -I$(SRC_PATH)/linux-user
obj-y = main.o syscall.o strace.o mmap.o signal.o thunk.o \
elfload.o linuxload.o uaccess.o gdbstub.o cpu-uname.o \
user-exec.o $(oslib-obj-y)
obj-$(TARGET_HAS_BFLT) += flatload.o
obj-$(TARGET_I386) += vm86.o
obj-i386-y += ioport-user.o
nwfpe-obj-y = fpa11.o fpa11_cpdo.o fpa11_cpdt.o fpa11_cprt.o fpopcode.o
nwfpe-obj-y += single_cpdo.o double_cpdo.o extended_cpdo.o
obj-arm-y += $(addprefix nwfpe/, $(nwfpe-obj-y))
obj-arm-y += arm-semi.o
obj-m68k-y += m68k-sim.o m68k-semi.o
$(obj-y) $(obj-$(TARGET_BASE_ARCH)-y): $(GENERATED_HEADERS)
obj-y += $(addprefix ../, $(universal-obj-y))
obj-y += $(addprefix ../libuser/, $(user-obj-y))
obj-y += $(addprefix ../libdis-user/, $(libdis-y))
obj-y += $(libobj-y)
endif #CONFIG_LINUX_USER
@@ -97,81 +157,269 @@ endif #CONFIG_LINUX_USER
ifdef CONFIG_BSD_USER
$(call set-vpath, $(SRC_PATH)/bsd-user)
QEMU_CFLAGS+=-I$(SRC_PATH)/bsd-user -I$(SRC_PATH)/bsd-user/$(TARGET_ARCH)
obj-y += bsd-user/
obj-y += gdbstub.o user-exec.o $(oslib-obj-y)
obj-y = main.o bsdload.o elfload.o mmap.o signal.o strace.o syscall.o \
gdbstub.o uaccess.o user-exec.o
obj-i386-y += ioport-user.o
$(obj-y) $(obj-$(TARGET_BASE_ARCH)-y): $(GENERATED_HEADERS)
obj-y += $(addprefix ../, $(universal-obj-y))
obj-y += $(addprefix ../libuser/, $(user-obj-y))
obj-y += $(addprefix ../libdis-user/, $(libdis-y))
obj-y += $(libobj-y)
endif #CONFIG_BSD_USER
#########################################################
# System emulator target
ifdef CONFIG_SOFTMMU
CONFIG_NO_PCI = $(if $(subst n,,$(CONFIG_PCI)),n,y)
CONFIG_NO_KVM = $(if $(subst n,,$(CONFIG_KVM)),n,y)
CONFIG_NO_XEN = $(if $(subst n,,$(CONFIG_XEN)),n,y)
CONFIG_NO_GET_MEMORY_MAPPING = $(if $(subst n,,$(CONFIG_HAVE_GET_MEMORY_MAPPING)),n,y)
CONFIG_NO_CORE_DUMP = $(if $(subst n,,$(CONFIG_HAVE_CORE_DUMP)),n,y)
obj-y += arch_init.o cpus.o monitor.o gdbstub.o balloon.o ioport.o
obj-y += hw/
obj-$(CONFIG_KVM) += kvm-all.o
obj-y = arch_init.o cpus.o monitor.o machine.o gdbstub.o balloon.o ioport.o
# virtio has to be here due to weird dependency between PCI and virtio-net.
# need to fix this properly
obj-$(CONFIG_NO_PCI) += pci-stub.o
obj-$(CONFIG_VIRTIO) += virtio.o virtio-blk.o virtio-balloon.o virtio-net.o virtio-serial-bus.o
obj-$(CONFIG_VIRTIO) += virtio-scsi.o
obj-y += vhost_net.o
obj-$(CONFIG_VHOST_NET) += vhost.o
obj-$(CONFIG_REALLY_VIRTFS) += 9pfs/virtio-9p-device.o
obj-$(CONFIG_KVM) += kvm.o kvm-all.o
obj-$(CONFIG_NO_KVM) += kvm-stub.o
obj-$(CONFIG_VGA) += vga.o
obj-y += memory.o savevm.o cputlb.o
obj-$(CONFIG_HAVE_GET_MEMORY_MAPPING) += memory_mapping.o
obj-$(CONFIG_HAVE_CORE_DUMP) += dump.o
obj-$(CONFIG_NO_GET_MEMORY_MAPPING) += memory_mapping-stub.o
obj-$(CONFIG_NO_CORE_DUMP) += dump-stub.o
LIBS+=-lz
obj-i386-$(CONFIG_KVM) += hyperv.o
QEMU_CFLAGS += $(VNC_TLS_CFLAGS)
QEMU_CFLAGS += $(VNC_SASL_CFLAGS)
QEMU_CFLAGS += $(VNC_JPEG_CFLAGS)
QEMU_CFLAGS += $(VNC_PNG_CFLAGS)
# xen support
obj-$(CONFIG_XEN) += xen-all.o xen-mapcache.o
obj-$(CONFIG_XEN) += xen-all.o xen_machine_pv.o xen_domainbuild.o xen-mapcache.o
obj-$(CONFIG_NO_XEN) += xen-stub.o
# Hardware support
ifeq ($(TARGET_ARCH), sparc64)
obj-y += hw/sparc64/
else
obj-y += hw/$(TARGET_BASE_ARCH)/
obj-i386-$(CONFIG_XEN) += xen_platform.o xen_apic.o
# Inter-VM PCI shared memory
CONFIG_IVSHMEM =
ifeq ($(CONFIG_KVM), y)
ifeq ($(CONFIG_PCI), y)
CONFIG_IVSHMEM = y
endif
endif
obj-$(CONFIG_IVSHMEM) += ivshmem.o
# Generic hotplugging
obj-y += device-hotplug.o
# Hardware support
obj-i386-y += mc146818rtc.o pc.o
obj-i386-y += apic_common.o apic.o kvmvapic.o
obj-i386-y += sga.o ioapic_common.o ioapic.o piix_pci.o
obj-i386-y += vmport.o
obj-i386-y += pci-hotplug.o smbios.o wdt_ib700.o
obj-i386-y += debugcon.o multiboot.o
obj-i386-y += pc_piix.o
obj-i386-y += pc_sysfw.o
obj-i386-$(CONFIG_KVM) += kvm/clock.o kvm/apic.o kvm/i8259.o kvm/ioapic.o kvm/i8254.o
obj-i386-$(CONFIG_SPICE) += qxl.o qxl-logger.o qxl-render.o
# shared objects
obj-ppc-y = ppc.o ppc_booke.o
# PREP target
obj-ppc-y += mc146818rtc.o
obj-ppc-y += ppc_prep.o
# OldWorld PowerMac
obj-ppc-y += ppc_oldworld.o
# NewWorld PowerMac
obj-ppc-y += ppc_newworld.o
# IBM pSeries (sPAPR)
obj-ppc-$(CONFIG_PSERIES) += spapr.o spapr_hcall.o spapr_rtas.o spapr_vio.o
obj-ppc-$(CONFIG_PSERIES) += xics.o spapr_vty.o spapr_llan.o spapr_vscsi.o
obj-ppc-$(CONFIG_PSERIES) += spapr_pci.o device-hotplug.o pci-hotplug.o
# PowerPC 4xx boards
obj-ppc-y += ppc4xx_devs.o ppc4xx_pci.o ppc405_uc.o ppc405_boards.o
obj-ppc-y += ppc440_bamboo.o
# PowerPC E500 boards
obj-ppc-y += ppce500_mpc8544ds.o mpc8544_guts.o ppce500_spin.o
# PowerPC 440 Xilinx ML507 reference board.
obj-ppc-y += virtex_ml507.o
obj-ppc-$(CONFIG_KVM) += kvm_ppc.o
obj-ppc-$(CONFIG_FDT) += device_tree.o
# PowerPC OpenPIC
obj-ppc-y += openpic.o
# Xilinx PPC peripherals
obj-ppc-y += xilinx_intc.o
obj-ppc-y += xilinx_timer.o
obj-ppc-y += xilinx_uartlite.o
obj-ppc-y += xilinx_ethlite.o
# LM32 boards
obj-lm32-y += lm32_boards.o
obj-lm32-y += milkymist.o
# LM32 peripherals
obj-lm32-y += lm32_pic.o
obj-lm32-y += lm32_juart.o
obj-lm32-y += lm32_timer.o
obj-lm32-y += lm32_uart.o
obj-lm32-y += lm32_sys.o
obj-lm32-y += milkymist-ac97.o
obj-lm32-y += milkymist-hpdmc.o
obj-lm32-y += milkymist-memcard.o
obj-lm32-y += milkymist-minimac2.o
obj-lm32-y += milkymist-pfpu.o
obj-lm32-y += milkymist-softusb.o
obj-lm32-y += milkymist-sysctl.o
obj-lm32-$(CONFIG_OPENGL) += milkymist-tmu2.o
obj-lm32-y += milkymist-uart.o
obj-lm32-y += milkymist-vgafb.o
obj-lm32-y += framebuffer.o
obj-mips-y = mips_r4k.o mips_jazz.o mips_malta.o mips_mipssim.o
obj-mips-y += mips_addr.o mips_timer.o mips_int.o
obj-mips-y += gt64xxx.o mc146818rtc.o
obj-mips-$(CONFIG_FULONG) += bonito.o vt82c686.o mips_fulong2e.o
obj-microblaze-y = petalogix_s3adsp1800_mmu.o
obj-microblaze-y += petalogix_ml605_mmu.o
obj-microblaze-y += microblaze_boot.o
obj-microblaze-y += microblaze_pic_cpu.o
obj-microblaze-y += xilinx_intc.o
obj-microblaze-y += xilinx_timer.o
obj-microblaze-y += xilinx_uartlite.o
obj-microblaze-y += xilinx_ethlite.o
obj-microblaze-y += xilinx_axidma.o
obj-microblaze-y += xilinx_axienet.o
obj-microblaze-$(CONFIG_FDT) += device_tree.o
# Boards
obj-cris-y = cris_pic_cpu.o
obj-cris-y += cris-boot.o
obj-cris-y += axis_dev88.o
# IO blocks
obj-cris-y += etraxfs_dma.o
obj-cris-y += etraxfs_pic.o
obj-cris-y += etraxfs_eth.o
obj-cris-y += etraxfs_timer.o
obj-cris-y += etraxfs_ser.o
ifeq ($(TARGET_ARCH), sparc64)
obj-sparc-y = sun4u.o apb_pci.o
obj-sparc-y += mc146818rtc.o
else
obj-sparc-y = sun4m.o lance.o tcx.o sun4m_iommu.o slavio_intctl.o
obj-sparc-y += slavio_timer.o slavio_misc.o sparc32_dma.o
obj-sparc-y += cs4231.o eccmemctl.o sbi.o sun4c_intctl.o leon3.o
# GRLIB
obj-sparc-y += grlib_gptimer.o grlib_irqmp.o grlib_apbuart.o
endif
obj-arm-y = integratorcp.o versatilepb.o arm_pic.o arm_timer.o
obj-arm-y += arm_boot.o pl011.o pl031.o pl050.o pl080.o pl110.o pl181.o pl190.o
obj-arm-y += versatile_pci.o
obj-arm-y += versatile_i2c.o
obj-arm-y += cadence_uart.o
obj-arm-y += cadence_ttc.o
obj-arm-y += cadence_gem.o
obj-arm-y += xilinx_zynq.o zynq_slcr.o
obj-arm-y += arm_gic.o
obj-arm-y += realview_gic.o realview.o arm_sysctl.o arm11mpcore.o a9mpcore.o
obj-arm-y += exynos4210_gic.o exynos4210_combiner.o exynos4210.o
obj-arm-y += exynos4_boards.o exynos4210_uart.o exynos4210_pwm.o
obj-arm-y += exynos4210_pmu.o exynos4210_mct.o exynos4210_fimd.o
obj-arm-y += arm_l2x0.o
obj-arm-y += arm_mptimer.o a15mpcore.o
obj-arm-y += armv7m.o armv7m_nvic.o stellaris.o pl022.o stellaris_enet.o
obj-arm-y += highbank.o
obj-arm-y += pl061.o
obj-arm-y += xgmac.o
obj-arm-y += arm-semi.o
obj-arm-y += pxa2xx.o pxa2xx_pic.o pxa2xx_gpio.o pxa2xx_timer.o pxa2xx_dma.o
obj-arm-y += pxa2xx_lcd.o pxa2xx_mmci.o pxa2xx_pcmcia.o pxa2xx_keypad.o
obj-arm-y += gumstix.o
obj-arm-y += zaurus.o ide/microdrive.o spitz.o tosa.o tc6393xb.o
obj-arm-y += omap1.o omap_lcdc.o omap_dma.o omap_clk.o omap_mmc.o omap_i2c.o \
omap_gpio.o omap_intc.o omap_uart.o
obj-arm-y += omap2.o omap_dss.o soc_dma.o omap_gptimer.o omap_synctimer.o \
omap_gpmc.o omap_sdrc.o omap_spi.o omap_tap.o omap_l4.o
obj-arm-y += omap_sx1.o palm.o tsc210x.o
obj-arm-y += nseries.o blizzard.o onenand.o cbus.o tusb6010.o usb/hcd-musb.o
obj-arm-y += mst_fpga.o mainstone.o
obj-arm-y += z2.o
obj-arm-y += musicpal.o bitbang_i2c.o marvell_88w8618_audio.o
obj-arm-y += framebuffer.o
obj-arm-y += vexpress.o
obj-arm-y += strongarm.o
obj-arm-y += collie.o
obj-arm-y += pl041.o lm4549.o
obj-arm-$(CONFIG_FDT) += device_tree.o
obj-sh4-y = shix.o r2d.o sh7750.o sh7750_regnames.o tc58128.o
obj-sh4-y += sh_timer.o sh_serial.o sh_intc.o sh_pci.o sm501.o
obj-sh4-y += ide/mmio.o
obj-m68k-y = an5206.o mcf5206.o mcf_uart.o mcf_intc.o mcf5208.o mcf_fec.o
obj-m68k-y += m68k-semi.o dummy_m68k.o
obj-s390x-y = s390-virtio-bus.o s390-virtio.o
obj-alpha-y = mc146818rtc.o
obj-alpha-y += alpha_pci.o alpha_dp264.o alpha_typhoon.o
obj-xtensa-y += xtensa_pic.o
obj-xtensa-y += xtensa_sim.o
obj-xtensa-y += xtensa_lx60.o
obj-xtensa-y += xtensa-semi.o
obj-xtensa-y += core-dc232b.o
obj-xtensa-y += core-dc233c.o
obj-xtensa-y += core-fsf.o
main.o: QEMU_CFLAGS+=$(GPROF_CFLAGS)
GENERATED_HEADERS += hmp-commands.h qmp-commands-old.h
monitor.o: hmp-commands.h qmp-commands-old.h
$(obj-y) $(obj-$(TARGET_BASE_ARCH)-y): $(GENERATED_HEADERS)
obj-y += $(addprefix ../, $(universal-obj-y))
obj-y += $(addprefix ../, $(common-obj-y))
obj-y += $(addprefix ../libdis/, $(libdis-y))
obj-y += $(libobj-y)
obj-y += $(addprefix $(HWDIR)/, $(hw-obj-y))
obj-y += $(addprefix ../, $(trace-obj-y))
endif # CONFIG_SOFTMMU
nested-vars += obj-y
ifndef CONFIG_LINUX_USER
ifndef CONFIG_BSD_USER
# libcacard needs qemu-thread support, and besides is only needed by devices
# so not requires with linux-user / bsd-user targets
obj-$(CONFIG_SMARTCARD_NSS) += $(addprefix ../libcacard/, $(libcacard-y))
endif # CONFIG_BSD_USER
endif # CONFIG_LINUX_USER
# This resolves all nested paths, so it must come last
include $(SRC_PATH)/Makefile.objs
all-obj-y = $(obj-y)
all-obj-y += $(addprefix ../, $(universal-obj-y))
ifdef CONFIG_SOFTMMU
all-obj-y += $(addprefix ../, $(common-obj-y))
all-obj-y += $(addprefix ../libdis/, $(libdis-y))
all-obj-y += $(addprefix $(HWDIR)/, $(hw-obj-y))
all-obj-y += $(addprefix ../, $(trace-obj-y))
else
all-obj-y += $(addprefix ../libuser/, $(user-obj-y))
all-obj-y += $(addprefix ../libdis-user/, $(libdis-y))
endif #CONFIG_LINUX_USER
obj-$(CONFIG_GDBSTUB_XML) += gdbstub-xml.o
ifdef QEMU_PROGW
# The linker builds a windows executable. Make also a console executable.
$(QEMU_PROGW): $(all-obj-y)
$(QEMU_PROGW): $(obj-y) $(obj-$(TARGET_BASE_ARCH)-y)
$(call LINK,$^)
$(QEMU_PROG): $(QEMU_PROGW)
$(call quiet-command,$(OBJCOPY) --subsystem console $(QEMU_PROGW) $(QEMU_PROG)," GEN $(TARGET_DIR)$(QEMU_PROG)")
else
$(QEMU_PROG): $(all-obj-y)
$(QEMU_PROG): $(obj-y) $(obj-$(TARGET_BASE_ARCH)-y)
$(call LINK,$^)
endif
@@ -185,8 +433,8 @@ qmp-commands-old.h: $(SRC_PATH)/qmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")
clean:
rm -f *.a *~ $(PROGS)
rm -f $(shell find . -name '*.[od]')
rm -f *.o *.a *~ $(PROGS) nwfpe/*.o fpu/*.o
rm -f *.d */*.d tcg/*.o ide/*.o 9pfs/*.o kvm/*.o
rm -f hmp-commands.h qmp-commands-old.h gdbstub-xml.c
ifdef CONFIG_TRACE_SYSTEMTAP
rm -f *.stp
@@ -204,5 +452,5 @@ ifdef CONFIG_TRACE_SYSTEMTAP
$(INSTALL_DATA) $(QEMU_PROG).stp "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset"
endif
GENERATED_HEADERS += config-target.h
Makefile: $(GENERATED_HEADERS)
# Include automatically generated dependency files
-include $(wildcard *.d */*.d)


@@ -10,7 +10,6 @@ $(call set-vpath, $(SRC_PATH))
QEMU_CFLAGS+=-I..
QEMU_CFLAGS += -I$(SRC_PATH)/include
QEMU_CFLAGS += -DCONFIG_USER_ONLY
include $(SRC_PATH)/Makefile.objs
@@ -22,3 +21,6 @@ clean:
for d in . trace; do \
rm -f $$d/*.o $$d/*.d $$d/*.a $$d/*~; \
done
# Include automatically generated dependency files
-include $(wildcard *.d */*.d)


@@ -1,23 +1,6 @@
QEMU Monitor Protocol Events
============================
BALLOON_CHANGE
--------------
Emitted when the guest changes the actual BALLOON level. This
value is equivalent to the 'actual' field return by the
'query-balloon' command
Data:
- "actual": actual level of the guest memory balloon in bytes (json-number)
Example:
{ "event": "BALLOON_CHANGE",
"data": { "actual": 944766976 },
"timestamp": { "seconds": 1267020223, "microseconds": 435656 } }
BLOCK_IO_ERROR
--------------
@@ -43,57 +26,6 @@ Example:
Note: If action is "stop", a STOP event will eventually follow the
BLOCK_IO_ERROR event.
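All of the events in this file arrive asynchronously on the QMP monitor socket,
one JSON object per line. A minimal watcher sketch, assuming QEMU was started
with something like -qmp unix:/tmp/qmp.sock,server,nowait (the socket path and
the printed message are only illustrative, not taken from these patches):

import json, socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/tmp/qmp.sock")
f = s.makefile("rw", buffering=1)          # line-buffered text stream on the socket
json.loads(f.readline())                   # consume the QMP greeting
f.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
json.loads(f.readline())                   # empty "return" for the capabilities command

for line in f:                             # every event is a single JSON line
    msg = json.loads(line)
    if msg.get("event") == "BALLOON_CHANGE":
        print("guest balloon is now %d bytes" % msg["data"]["actual"])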
BLOCK_JOB_CANCELLED
-------------------
Emitted when a block job has been cancelled.
Data:
- "type": Job type ("stream" for image streaming, json-string)
- "device": Device name (json-string)
- "len": Maximum progress value (json-int)
- "offset": Current progress value (json-int)
On success this is equal to len.
On failure this is less than len.
- "speed": Rate limit, bytes per second (json-int)
Example:
{ "event": "BLOCK_JOB_CANCELLED",
"data": { "type": "stream", "device": "virtio-disk0",
"len": 10737418240, "offset": 134217728,
"speed": 0 },
"timestamp": { "seconds": 1267061043, "microseconds": 959568 } }
BLOCK_JOB_COMPLETED
-------------------
Emitted when a block job has completed.
Data:
- "type": Job type ("stream" for image streaming, json-string)
- "device": Device name (json-string)
- "len": Maximum progress value (json-int)
- "offset": Current progress value (json-int)
On success this is equal to len.
On failure this is less than len.
- "speed": Rate limit, bytes per second (json-int)
- "error": Error message (json-string, optional)
Only present on failure. This field contains a human-readable
error message. There are no semantics other than that streaming
has failed and clients should not try to interpret the error
string.
Example:
{ "event": "BLOCK_JOB_COMPLETED",
"data": { "type": "stream", "device": "virtio-disk0",
"len": 10737418240, "offset": 10737418240,
"speed": 0 },
"timestamp": { "seconds": 1267061043, "microseconds": 959568 } }
DEVICE_TRAY_MOVED
-----------------
@@ -166,68 +98,6 @@ Example:
Note: If the command-line option "-no-shutdown" has been specified, a STOP
event will eventually follow the SHUTDOWN event.
SPICE_CONNECTED, SPICE_DISCONNECTED
-----------------------------------
Emitted when a SPICE client connects or disconnects.
Data:
- "server": Server information (json-object)
- "host": IP address (json-string)
- "port": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
- "client": Client information (json-object)
- "host": IP address (json-string)
- "port": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
Example:
{ "timestamp": {"seconds": 1290688046, "microseconds": 388707},
"event": "SPICE_CONNECTED",
"data": {
"server": { "port": "5920", "family": "ipv4", "host": "127.0.0.1"},
"client": {"port": "52873", "family": "ipv4", "host": "127.0.0.1"}
}}
SPICE_INITIALIZED
-----------------
Emitted after initial handshake and authentication takes place (if any)
and the SPICE channel is up'n'running
Data:
- "server": Server information (json-object)
- "host": IP address (json-string)
- "port": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
- "auth": authentication method (json-string, optional)
- "client": Client information (json-object)
- "host": IP address (json-string)
- "port": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
- "connection-id": spice connection id. All channels with the same id
belong to the same spice session (json-int)
- "channel-type": channel type. "1" is the main control channel, filter for
this one if you want track spice sessions only (json-int)
- "channel-id": channel id. Usually "0", might be different needed when
multiple channels of the same type exist, such as multiple
display channels in a multihead setup (json-int)
- "tls": whevener the channel is encrypted (json-bool)
Example:
{ "timestamp": {"seconds": 1290688046, "microseconds": 417172},
"event": "SPICE_INITIALIZED",
"data": {"server": {"auth": "spice", "port": "5921",
"family": "ipv4", "host": "127.0.0.1"},
"client": {"port": "49004", "family": "ipv4", "channel-type": 3,
"connection-id": 1804289383, "host": "127.0.0.1",
"channel-id": 0, "tls": true}
}}
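Because every channel of a session reports the same "connection-id", and the
description above says only channel-type 1 needs to be watched to track
sessions, a client can keep a set of live session ids. A rough sketch (the
handler and set names are illustrative):

spice_sessions = set()

def on_spice_initialized(msg):
    client = msg["data"]["client"]
    if client.get("channel-type") != 1:    # ignore display/cursor/etc. channels
        return
    spice_sessions.add(client["connection-id"])
    print("SPICE session %d from %s:%s (tls=%s)" %
          (client["connection-id"], client["host"], client["port"], client["tls"]))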
STOP
----
@@ -240,32 +110,6 @@ Example:
{ "event": "STOP",
"timestamp": { "seconds": 1267041730, "microseconds": 281295 } }
SUSPEND
-------
Emitted when guest enters S3 state.
Data: None.
Example:
{ "event": "SUSPEND",
"timestamp": { "seconds": 1344456160, "microseconds": 309119 } }
SUSPEND_DISK
------------
Emitted when the guest makes a request to enter S4 state.
Data: None.
Example:
{ "event": "SUSPEND_DISK",
"timestamp": { "seconds": 1344456160, "microseconds": 309119 } }
Note: QEMU shuts down when entering S4 state.
VNC_CONNECTED
-------------
@@ -356,17 +200,69 @@ Example:
"host": "127.0.0.1", "sasl_username": "luiz" } },
"timestamp": { "seconds": 1263475302, "microseconds": 150772 } }
WAKEUP
------
SPICE_CONNECTED, SPICE_DISCONNECTED
-----------------------------------
Emitted when the guest has woken up from S3 and is running.
Emitted when a SPICE client connects or disconnects.
Data: None.
Data:
- "server": Server information (json-object)
- "host": IP address (json-string)
- "port": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
- "client": Client information (json-object)
- "host": IP address (json-string)
- "port": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
Example:
{ "event": "WATCHDOG",
"timestamp": { "seconds": 1344522075, "microseconds": 745528 } }
{ "timestamp": {"seconds": 1290688046, "microseconds": 388707},
"event": "SPICE_CONNECTED",
"data": {
"server": { "port": "5920", "family": "ipv4", "host": "127.0.0.1"},
"client": {"port": "52873", "family": "ipv4", "host": "127.0.0.1"}
}}
SPICE_INITIALIZED
-----------------
Emitted after initial handshake and authentication takes place (if any)
and the SPICE channel is up'n'running
Data:
- "server": Server information (json-object)
- "host": IP address (json-string)
- "port": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
- "auth": authentication method (json-string, optional)
- "client": Client information (json-object)
- "host": IP address (json-string)
- "port": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
- "connection-id": spice connection id. All channels with the same id
belong to the same spice session (json-int)
- "channel-type": channel type. "1" is the main control channel, filter for
this one if you want track spice sessions only (json-int)
- "channel-id": channel id. Usually "0", might be different needed when
multiple channels of the same type exist, such as multiple
display channels in a multihead setup (json-int)
- "tls": whevener the channel is encrypted (json-bool)
Example:
{ "timestamp": {"seconds": 1290688046, "microseconds": 417172},
"event": "SPICE_INITIALIZED",
"data": {"server": {"auth": "spice", "port": "5921",
"family": "ipv4", "host": "127.0.0.1"},
"client": {"port": "49004", "family": "ipv4", "channel-type": 3,
"connection-id": 1804289383, "host": "127.0.0.1",
"channel-id": 0, "tls": true}
}}
WATCHDOG
--------
@@ -386,3 +282,56 @@ Example:
Note: If action is "reset", "shutdown", or "pause" the WATCHDOG event is
followed respectively by the RESET, SHUTDOWN, or STOP events.
BLOCK_JOB_COMPLETED
-------------------
Emitted when a block job has completed.
Data:
- "type": Job type ("stream" for image streaming, json-string)
- "device": Device name (json-string)
- "len": Maximum progress value (json-int)
- "offset": Current progress value (json-int)
On success this is equal to len.
On failure this is less than len.
- "speed": Rate limit, bytes per second (json-int)
- "error": Error message (json-string, optional)
Only present on failure. This field contains a human-readable
error message. There are no semantics other than that streaming
has failed and clients should not try to interpret the error
string.
Example:
{ "event": "BLOCK_JOB_COMPLETED",
"data": { "type": "stream", "device": "virtio-disk0",
"len": 10737418240, "offset": 10737418240,
"speed": 0 },
"timestamp": { "seconds": 1267061043, "microseconds": 959568 } }
BLOCK_JOB_CANCELLED
-------------------
Emitted when a block job has been cancelled.
Data:
- "type": Job type ("stream" for image streaming, json-string)
- "device": Device name (json-string)
- "len": Maximum progress value (json-int)
- "offset": Current progress value (json-int)
On success this is equal to len.
On failure this is less than len.
- "speed": Rate limit, bytes per second (json-int)
Example:
{ "event": "BLOCK_JOB_CANCELLED",
"data": { "type": "stream", "device": "virtio-disk0",
"len": 10737418240, "offset": 134217728,
"speed": 0 },
"timestamp": { "seconds": 1267061043, "microseconds": 959568 } }

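Since BLOCK_JOB_COMPLETED and BLOCK_JOB_CANCELLED both carry "len" and
"offset", a client can derive a progress percentage and tell success from
failure without extra queries. A small sketch of such a handler, where msg is
one decoded event as produced by a watcher loop like the one earlier in this
file (the function name is illustrative):

def describe_block_job(msg):
    data = msg["data"]
    pct = 100.0 * data["offset"] / data["len"] if data["len"] else 0.0
    if msg["event"] == "BLOCK_JOB_COMPLETED" and "error" not in data:
        return "%s finished (%.0f%%)" % (data["device"], pct)
    # cancelled jobs and failed completions stop short of len
    reason = data.get("error", "cancelled")
    return "%s stopped at %.0f%%: %s" % (data["device"], pct, reason)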

@@ -106,11 +106,14 @@ completed because of an error condition.
The format is:
{ "error": { "class": json-string, "desc": json-string }, "id": json-value }
{ "error": { "class": json-string, "data": json-object, "desc": json-string },
"id": json-value }
Where,
- The "class" member contains the error class name (eg. "GenericError")
- The "class" member contains the error class name (eg. "ServiceUnavailable")
- The "data" member contains specific error data and is defined in a
per-command basis, it will be an empty json-object if the error has no data
- The "desc" member is a human-readable error message. Clients should
not attempt to parse this message.
- The "id" member contains the transaction identification associated with
@@ -170,7 +173,8 @@ S: {"return": {"enabled": true, "present": true}, "id": "example"}
------------------
C: { "execute": }
S: {"error": {"class": "GenericError", "desc": "Invalid JSON syntax" } }
S: {"error": {"class": "JSONParsing", "desc": "Invalid JSON syntax", "data":
{}}}
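Either shape of the error response can be handled by treating "data" as
optional and keying only on "class" and "desc"; a small sketch (the helper
name is illustrative):

def format_qmp_error(resp):
    err = resp.get("error")
    if err is None:
        return None                        # success response, nothing to report
    text = "%s: %s" % (err["class"], err["desc"])
    data = err.get("data")                 # present in the older wire format only
    return "%s %r" % (text, data) if data else text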
3.5 Powerdown event
-------------------


@@ -1 +1 @@
1.2.2
1.1.1


@@ -43,16 +43,6 @@
#include "hw/smbios.h"
#include "exec-memory.h"
#include "hw/pcspk.h"
#include "qemu/page_cache.h"
#include "qmp-commands.h"
#ifdef DEBUG_ARCH_INIT
#define DPRINTF(fmt, ...) \
do { fprintf(stdout, "arch_init: " fmt, ## __VA_ARGS__); } while (0)
#else
#define DPRINTF(fmt, ...) \
do { } while (0)
#endif
#ifdef TARGET_SPARC
int graphic_width = 1024;
@@ -81,8 +71,6 @@ int graphic_depth = 15;
#define QEMU_ARCH QEMU_ARCH_MICROBLAZE
#elif defined(TARGET_MIPS)
#define QEMU_ARCH QEMU_ARCH_MIPS
#elif defined(TARGET_OPENRISC)
#define QEMU_ARCH QEMU_ARCH_OPENRISC
#elif defined(TARGET_PPC)
#define QEMU_ARCH QEMU_ARCH_PPC
#elif defined(TARGET_S390X)
@@ -93,8 +81,6 @@ int graphic_depth = 15;
#define QEMU_ARCH QEMU_ARCH_SPARC
#elif defined(TARGET_XTENSA)
#define QEMU_ARCH QEMU_ARCH_XTENSA
#elif defined(TARGET_UNICORE32)
#define QEMU_ARCH QEMU_ARCH_UNICORE32
#endif
const uint32_t arch_type = QEMU_ARCH;
@@ -108,7 +94,6 @@ const uint32_t arch_type = QEMU_ARCH;
#define RAM_SAVE_FLAG_PAGE 0x08
#define RAM_SAVE_FLAG_EOS 0x10
#define RAM_SAVE_FLAG_CONTINUE 0x20
#define RAM_SAVE_FLAG_XBZRLE 0x40
#ifdef __ALTIVEC__
#include <altivec.h>
@@ -176,177 +161,15 @@ static int is_dup_page(uint8_t *page)
return 1;
}
/* struct contains XBZRLE cache and a static page
used by the compression */
static struct {
/* buffer used for XBZRLE encoding */
uint8_t *encoded_buf;
/* buffer for storing page content */
uint8_t *current_buf;
/* buffer used for XBZRLE decoding */
uint8_t *decoded_buf;
/* Cache for XBZRLE */
PageCache *cache;
} XBZRLE = {
.encoded_buf = NULL,
.current_buf = NULL,
.decoded_buf = NULL,
.cache = NULL,
};
int64_t xbzrle_cache_resize(int64_t new_size)
{
if (XBZRLE.cache != NULL) {
return cache_resize(XBZRLE.cache, new_size / TARGET_PAGE_SIZE) *
TARGET_PAGE_SIZE;
}
return pow2floor(new_size);
}
/* accounting for migration statistics */
typedef struct AccountingInfo {
uint64_t dup_pages;
uint64_t norm_pages;
uint64_t iterations;
uint64_t xbzrle_bytes;
uint64_t xbzrle_pages;
uint64_t xbzrle_cache_miss;
uint64_t xbzrle_overflows;
} AccountingInfo;
static AccountingInfo acct_info;
static void acct_clear(void)
{
memset(&acct_info, 0, sizeof(acct_info));
}
uint64_t dup_mig_bytes_transferred(void)
{
return acct_info.dup_pages * TARGET_PAGE_SIZE;
}
uint64_t dup_mig_pages_transferred(void)
{
return acct_info.dup_pages;
}
uint64_t norm_mig_bytes_transferred(void)
{
return acct_info.norm_pages * TARGET_PAGE_SIZE;
}
uint64_t norm_mig_pages_transferred(void)
{
return acct_info.norm_pages;
}
uint64_t xbzrle_mig_bytes_transferred(void)
{
return acct_info.xbzrle_bytes;
}
uint64_t xbzrle_mig_pages_transferred(void)
{
return acct_info.xbzrle_pages;
}
uint64_t xbzrle_mig_pages_cache_miss(void)
{
return acct_info.xbzrle_cache_miss;
}
uint64_t xbzrle_mig_pages_overflow(void)
{
return acct_info.xbzrle_overflows;
}
static void save_block_hdr(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
int cont, int flag)
{
qemu_put_be64(f, offset | cont | flag);
if (!cont) {
qemu_put_byte(f, strlen(block->idstr));
qemu_put_buffer(f, (uint8_t *)block->idstr,
strlen(block->idstr));
}
}
#define ENCODING_FLAG_XBZRLE 0x1
static int save_xbzrle_page(QEMUFile *f, uint8_t *current_data,
ram_addr_t current_addr, RAMBlock *block,
ram_addr_t offset, int cont, bool last_stage)
{
int encoded_len = 0, bytes_sent = -1;
uint8_t *prev_cached_page;
if (!cache_is_cached(XBZRLE.cache, current_addr)) {
if (!last_stage) {
cache_insert(XBZRLE.cache, current_addr,
g_memdup(current_data, TARGET_PAGE_SIZE));
}
acct_info.xbzrle_cache_miss++;
return -1;
}
prev_cached_page = get_cached_data(XBZRLE.cache, current_addr);
/* save current buffer into memory */
memcpy(XBZRLE.current_buf, current_data, TARGET_PAGE_SIZE);
/* XBZRLE encoding (if there is no overflow) */
encoded_len = xbzrle_encode_buffer(prev_cached_page, XBZRLE.current_buf,
TARGET_PAGE_SIZE, XBZRLE.encoded_buf,
TARGET_PAGE_SIZE);
if (encoded_len == 0) {
DPRINTF("Skipping unmodified page\n");
return 0;
} else if (encoded_len == -1) {
DPRINTF("Overflow\n");
acct_info.xbzrle_overflows++;
/* update data in the cache */
memcpy(prev_cached_page, current_data, TARGET_PAGE_SIZE);
return -1;
}
/* we need to update the data in the cache, in order to get the same data */
if (!last_stage) {
memcpy(prev_cached_page, XBZRLE.current_buf, TARGET_PAGE_SIZE);
}
/* Send XBZRLE based compressed page */
save_block_hdr(f, block, offset, cont, RAM_SAVE_FLAG_XBZRLE);
qemu_put_byte(f, ENCODING_FLAG_XBZRLE);
qemu_put_be16(f, encoded_len);
qemu_put_buffer(f, XBZRLE.encoded_buf, encoded_len);
bytes_sent = encoded_len + 1 + 2;
acct_info.xbzrle_pages++;
acct_info.xbzrle_bytes += bytes_sent;
return bytes_sent;
}
static RAMBlock *last_block;
static ram_addr_t last_offset;
/*
* ram_save_block: Writes a page of memory to the stream f
*
* Returns: 0: if the page hasn't changed
* -1: if there are no more dirty pages
* n: the amount of bytes written in other case
*/
static int ram_save_block(QEMUFile *f, bool last_stage)
static int ram_save_block(QEMUFile *f)
{
RAMBlock *block = last_block;
ram_addr_t offset = last_offset;
int bytes_sent = -1;
int bytes_sent = 0;
MemoryRegion *mr;
ram_addr_t current_addr;
if (!block)
block = QLIST_FIRST(&ram_list.blocks);
@@ -364,31 +187,26 @@ static int ram_save_block(QEMUFile *f, bool last_stage)
p = memory_region_get_ram_ptr(mr) + offset;
if (is_dup_page(p)) {
acct_info.dup_pages++;
save_block_hdr(f, block, offset, cont, RAM_SAVE_FLAG_COMPRESS);
qemu_put_be64(f, offset | cont | RAM_SAVE_FLAG_COMPRESS);
if (!cont) {
qemu_put_byte(f, strlen(block->idstr));
qemu_put_buffer(f, (uint8_t *)block->idstr,
strlen(block->idstr));
}
qemu_put_byte(f, *p);
bytes_sent = 1;
} else if (migrate_use_xbzrle()) {
current_addr = block->offset + offset;
bytes_sent = save_xbzrle_page(f, p, current_addr, block,
offset, cont, last_stage);
if (!last_stage) {
p = get_cached_data(XBZRLE.cache, current_addr);
} else {
qemu_put_be64(f, offset | cont | RAM_SAVE_FLAG_PAGE);
if (!cont) {
qemu_put_byte(f, strlen(block->idstr));
qemu_put_buffer(f, (uint8_t *)block->idstr,
strlen(block->idstr));
}
}
/* either we didn't send yet (we may have had XBZRLE overflow) */
if (bytes_sent == -1) {
save_block_hdr(f, block, offset, cont, RAM_SAVE_FLAG_PAGE);
qemu_put_buffer(f, p, TARGET_PAGE_SIZE);
bytes_sent = TARGET_PAGE_SIZE;
acct_info.norm_pages++;
}
/* if page is unmodified, continue to the next */
if (bytes_sent != 0) {
break;
}
break;
}
offset += TARGET_PAGE_SIZE;
@@ -410,7 +228,20 @@ static uint64_t bytes_transferred;
static ram_addr_t ram_save_remaining(void)
{
return ram_list.dirty_pages;
RAMBlock *block;
ram_addr_t count = 0;
QLIST_FOREACH(block, &ram_list.blocks, next) {
ram_addr_t addr;
for (addr = 0; addr < block->length; addr += TARGET_PAGE_SIZE) {
if (memory_region_get_dirty(block->mr, addr, TARGET_PAGE_SIZE,
DIRTY_MEMORY_MIGRATION)) {
count++;
}
}
}
return count;
}
uint64_t ram_bytes_remaining(void)
@@ -463,111 +294,60 @@ static void sort_ram_list(void)
g_free(blocks);
}
static void migration_end(void)
{
memory_global_dirty_log_stop();
if (migrate_use_xbzrle()) {
cache_fini(XBZRLE.cache);
g_free(XBZRLE.cache);
g_free(XBZRLE.encoded_buf);
g_free(XBZRLE.current_buf);
g_free(XBZRLE.decoded_buf);
XBZRLE.cache = NULL;
}
}
static void ram_migration_cancel(void *opaque)
{
migration_end();
}
#define MAX_WAIT 50 /* ms, half buffered_file limit */
static int ram_save_setup(QEMUFile *f, void *opaque)
int ram_save_live(QEMUFile *f, int stage, void *opaque)
{
ram_addr_t addr;
RAMBlock *block;
bytes_transferred = 0;
last_block = NULL;
last_offset = 0;
sort_ram_list();
if (migrate_use_xbzrle()) {
XBZRLE.cache = cache_init(migrate_xbzrle_cache_size() /
TARGET_PAGE_SIZE,
TARGET_PAGE_SIZE);
if (!XBZRLE.cache) {
DPRINTF("Error creating cache\n");
return -1;
}
XBZRLE.encoded_buf = g_malloc0(TARGET_PAGE_SIZE);
XBZRLE.current_buf = g_malloc(TARGET_PAGE_SIZE);
acct_clear();
}
/* Make sure all dirty bits are set */
QLIST_FOREACH(block, &ram_list.blocks, next) {
for (addr = 0; addr < block->length; addr += TARGET_PAGE_SIZE) {
if (!memory_region_get_dirty(block->mr, addr, TARGET_PAGE_SIZE,
DIRTY_MEMORY_MIGRATION)) {
memory_region_set_dirty(block->mr, addr, TARGET_PAGE_SIZE);
}
}
}
memory_global_dirty_log_start();
qemu_put_be64(f, ram_bytes_total() | RAM_SAVE_FLAG_MEM_SIZE);
QLIST_FOREACH(block, &ram_list.blocks, next) {
qemu_put_byte(f, strlen(block->idstr));
qemu_put_buffer(f, (uint8_t *)block->idstr, strlen(block->idstr));
qemu_put_be64(f, block->length);
}
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
return 0;
}
static int ram_save_iterate(QEMUFile *f, void *opaque)
{
uint64_t bytes_transferred_last;
double bwidth = 0;
uint64_t expected_time = 0;
int ret;
int i;
uint64_t expected_time;
if (stage < 0) {
memory_global_dirty_log_stop();
return 0;
}
memory_global_sync_dirty_bitmap(get_system_memory());
if (stage == 1) {
RAMBlock *block;
bytes_transferred = 0;
last_block = NULL;
last_offset = 0;
sort_ram_list();
/* Make sure all dirty bits are set */
QLIST_FOREACH(block, &ram_list.blocks, next) {
for (addr = 0; addr < block->length; addr += TARGET_PAGE_SIZE) {
if (!memory_region_get_dirty(block->mr, addr, TARGET_PAGE_SIZE,
DIRTY_MEMORY_MIGRATION)) {
memory_region_set_dirty(block->mr, addr, TARGET_PAGE_SIZE);
}
}
}
memory_global_dirty_log_start();
qemu_put_be64(f, ram_bytes_total() | RAM_SAVE_FLAG_MEM_SIZE);
QLIST_FOREACH(block, &ram_list.blocks, next) {
qemu_put_byte(f, strlen(block->idstr));
qemu_put_buffer(f, (uint8_t *)block->idstr, strlen(block->idstr));
qemu_put_be64(f, block->length);
}
}
bytes_transferred_last = bytes_transferred;
bwidth = qemu_get_clock_ns(rt_clock);
i = 0;
while ((ret = qemu_file_rate_limit(f)) == 0) {
int bytes_sent;
bytes_sent = ram_save_block(f, false);
/* no more blocks to sent */
if (bytes_sent < 0) {
bytes_sent = ram_save_block(f);
bytes_transferred += bytes_sent;
if (bytes_sent == 0) { /* no more blocks */
break;
}
bytes_transferred += bytes_sent;
acct_info.iterations++;
/* we want to check in the 1st loop, just in case it was the 1st time
and we had to sync the dirty bitmap.
qemu_get_clock_ns() is a bit expensive, so we only check each some
iterations
*/
if ((i & 63) == 0) {
uint64_t t1 = (qemu_get_clock_ns(rt_clock) - bwidth) / 1000000;
if (t1 > MAX_WAIT) {
DPRINTF("big wait: %" PRIu64 " milliseconds, %d iterations\n",
t1, i);
break;
}
}
i++;
}
if (ret < 0) {
@@ -583,85 +363,22 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
bwidth = 0.000001;
}
/* try transferring iterative blocks of memory */
if (stage == 3) {
int bytes_sent;
/* flush all remaining blocks regardless of rate limiting */
while ((bytes_sent = ram_save_block(f)) != 0) {
bytes_transferred += bytes_sent;
}
memory_global_dirty_log_stop();
}
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
expected_time = ram_save_remaining() * TARGET_PAGE_SIZE / bwidth;
DPRINTF("ram_save_live: expected(%" PRIu64 ") <= max(%" PRIu64 ")?\n",
expected_time, migrate_max_downtime());
if (expected_time <= migrate_max_downtime()) {
memory_global_sync_dirty_bitmap(get_system_memory());
expected_time = ram_save_remaining() * TARGET_PAGE_SIZE / bwidth;
return expected_time <= migrate_max_downtime();
}
return 0;
}
static int ram_save_complete(QEMUFile *f, void *opaque)
{
memory_global_sync_dirty_bitmap(get_system_memory());
/* try transferring iterative blocks of memory */
/* flush all remaining blocks regardless of rate limiting */
while (true) {
int bytes_sent;
bytes_sent = ram_save_block(f, true);
/* no more blocks to sent */
if (bytes_sent < 0) {
break;
}
bytes_transferred += bytes_sent;
}
memory_global_dirty_log_stop();
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
return 0;
}
static int load_xbzrle(QEMUFile *f, ram_addr_t addr, void *host)
{
int ret, rc = 0;
unsigned int xh_len;
int xh_flags;
if (!XBZRLE.decoded_buf) {
XBZRLE.decoded_buf = g_malloc(TARGET_PAGE_SIZE);
}
/* extract RLE header */
xh_flags = qemu_get_byte(f);
xh_len = qemu_get_be16(f);
if (xh_flags != ENCODING_FLAG_XBZRLE) {
fprintf(stderr, "Failed to load XBZRLE page - wrong compression!\n");
return -1;
}
if (xh_len > TARGET_PAGE_SIZE) {
fprintf(stderr, "Failed to load XBZRLE page - len overflow!\n");
return -1;
}
/* load data and decode */
qemu_get_buffer(f, XBZRLE.decoded_buf, xh_len);
/* decode RLE */
ret = xbzrle_decode_buffer(XBZRLE.decoded_buf, xh_len, host,
TARGET_PAGE_SIZE);
if (ret == -1) {
fprintf(stderr, "Failed to load XBZRLE page - decode error!\n");
rc = -1;
} else if (ret > TARGET_PAGE_SIZE) {
fprintf(stderr, "Failed to load XBZRLE page - size %d exceeds %d!\n",
ret, TARGET_PAGE_SIZE);
abort();
}
return rc;
return (stage == 2) && (expected_time <= migrate_max_downtime());
}
static inline void *host_from_stream_offset(QEMUFile *f,
@@ -694,14 +411,11 @@ static inline void *host_from_stream_offset(QEMUFile *f,
return NULL;
}
static int ram_load(QEMUFile *f, void *opaque, int version_id)
int ram_load(QEMUFile *f, void *opaque, int version_id)
{
ram_addr_t addr;
int flags, ret = 0;
int flags;
int error;
static uint64_t seq_iter;
seq_iter++;
if (version_id < 4 || version_id > 4) {
return -EINVAL;
@@ -731,10 +445,8 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
QLIST_FOREACH(block, &ram_list.blocks, next) {
if (!strncmp(id, block->idstr, sizeof(id))) {
if (block->length != length) {
ret = -EINVAL;
goto done;
}
if (block->length != length)
return -EINVAL;
break;
}
}
@@ -742,8 +454,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
if (!block) {
fprintf(stderr, "Unknown ramblock \"%s\", cannot "
"accept migration\n", id);
ret = -EINVAL;
goto done;
return -EINVAL;
}
total_ram_bytes -= length;
@@ -772,46 +483,18 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
void *host;
host = host_from_stream_offset(f, addr, flags);
if (!host) {
return -EINVAL;
}
qemu_get_buffer(f, host, TARGET_PAGE_SIZE);
} else if (flags & RAM_SAVE_FLAG_XBZRLE) {
if (!migrate_use_xbzrle()) {
return -EINVAL;
}
void *host = host_from_stream_offset(f, addr, flags);
if (!host) {
return -EINVAL;
}
if (load_xbzrle(f, addr, host) < 0) {
ret = -EINVAL;
goto done;
}
}
error = qemu_file_get_error(f);
if (error) {
ret = error;
goto done;
return error;
}
} while (!(flags & RAM_SAVE_FLAG_EOS));
done:
DPRINTF("Completed load of VM with exit code %d seq iteration "
"%" PRIu64 "\n", ret, seq_iter);
return ret;
return 0;
}
SaveVMHandlers savevm_ram_handlers = {
.save_live_setup = ram_save_setup,
.save_live_iterate = ram_save_iterate,
.save_live_complete = ram_save_complete,
.load_state = ram_load,
.cancel = ram_migration_cancel,
};
#ifdef HAS_AUDIO
struct soundhw {
const char *name;
@@ -919,20 +602,15 @@ void select_soundhw(const char *optarg)
{
struct soundhw *c;
if (is_help_option(optarg)) {
if (*optarg == '?') {
show_valid_cards:
#ifdef HAS_AUDIO_CHOICE
printf("Valid sound card names (comma separated):\n");
for (c = soundhw; c->name; ++c) {
printf ("%-11s %s\n", c->name, c->descr);
}
printf("\n-soundhw all will enable all of the above\n");
#else
printf("Machine has no user-selectable audio hardware "
"(it may or may not have always-present audio hardware).\n");
#endif
exit(!is_help_option(optarg));
exit(*optarg != '?');
}
else {
size_t l;
@@ -1086,13 +764,3 @@ int xen_available(void)
return 0;
#endif
}
TargetInfo *qmp_query_target(Error **errp)
{
TargetInfo *info = g_malloc0(sizeof(*info));
info->arch = TARGET_TYPE;
return info;
}


@@ -1,8 +1,6 @@
#ifndef QEMU_ARCH_INIT_H
#define QEMU_ARCH_INIT_H
#include "qmp-commands.h"
enum {
QEMU_ARCH_ALL = -1,
QEMU_ARCH_ALPHA = 1,
@@ -18,8 +16,6 @@ enum {
QEMU_ARCH_SH4 = 1024,
QEMU_ARCH_SPARC = 2048,
QEMU_ARCH_XTENSA = 4096,
QEMU_ARCH_OPENRISC = 8192,
QEMU_ARCH_UNICORE32 = 0x4000,
};
extern const uint32_t arch_type;
@@ -34,6 +30,4 @@ int tcg_available(void);
int kvm_available(void);
int xen_available(void);
CpuDefinitionInfoList GCC_WEAK_DECL *arch_query_cpu_definitions(Error **errp);
#endif


@@ -194,19 +194,18 @@ uint32_t do_arm_semihosting(CPUARMState *env)
if (!(s = lock_user_string(ARG(0))))
/* FIXME - should this error code be -TARGET_EFAULT ? */
return (uint32_t)-1;
if (ARG(1) >= 12) {
unlock_user(s, ARG(0), 0);
if (ARG(1) >= 12)
return (uint32_t)-1;
}
if (strcmp(s, ":tt") == 0) {
int result_fileno = ARG(1) < 4 ? STDIN_FILENO : STDOUT_FILENO;
unlock_user(s, ARG(0), 0);
return result_fileno;
if (ARG(1) < 4)
return STDIN_FILENO;
else
return STDOUT_FILENO;
}
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "open,%s,%x,1a4", ARG(0),
(int)ARG(2)+1, gdb_open_modeflags[ARG(1)]);
ret = env->regs[0];
return env->regs[0];
} else {
ret = set_swi_errno(ts, open(s, open_modeflags[ARG(1)], 0644));
}
@@ -282,7 +281,7 @@ uint32_t do_arm_semihosting(CPUARMState *env)
return len - ret;
}
case TARGET_SYS_READC:
/* XXX: Read from debug console. Not implemented. */
/* XXX: Read from debug cosole. Not implemented. */
return 0;
case TARGET_SYS_ISTTY:
if (use_gdb_syscalls()) {


@@ -1,14 +0,0 @@
common-obj-y = audio.o noaudio.o wavaudio.o mixeng.o
common-obj-$(CONFIG_SDL) += sdlaudio.o
common-obj-$(CONFIG_OSS) += ossaudio.o
common-obj-$(CONFIG_SPICE) += spiceaudio.o
common-obj-$(CONFIG_COREAUDIO) += coreaudio.o
common-obj-$(CONFIG_ALSA) += alsaaudio.o
common-obj-$(CONFIG_DSOUND) += dsoundaudio.o
common-obj-$(CONFIG_FMOD) += fmodaudio.o
common-obj-$(CONFIG_ESD) += esdaudio.o
common-obj-$(CONFIG_PA) += paaudio.o
common-obj-$(CONFIG_WINWAVE) += winwaveaudio.o
common-obj-$(CONFIG_AUDIO_PT_INT) += audio_pt_int.o
common-obj-$(CONFIG_AUDIO_WIN_INT) += audio_win_int.o
common-obj-y += wavcapture.o


@@ -818,7 +818,6 @@ static int audio_attach_capture (HWVoiceOut *hw)
sw->active = hw->enabled;
sw->conv = noop_conv;
sw->ratio = ((int64_t) hw_cap->info.freq << 32) / sw->info.freq;
sw->vol = nominal_volume;
sw->rate = st_rate_start (sw->info.freq, hw_cap->info.freq);
if (!sw->rate) {
dolog ("Could not start rate conversion for `%s'\n", SW_NAME (sw));


@@ -410,15 +410,15 @@ SW *glue (AUD_open_, TYPE) (
SW *old_sw = NULL;
#endif
ldebug ("open %s, freq %d, nchannels %d, fmt %d\n",
name, as->freq, as->nchannels, as->fmt);
if (audio_bug (AUDIO_FUNC, !card || !name || !callback_fn || !as)) {
dolog ("card=%p name=%p callback_fn=%p as=%p\n",
card, name, callback_fn, as);
goto fail;
}
ldebug ("open %s, freq %d, nchannels %d, fmt %d\n",
name, as->freq, as->nchannels, as->fmt);
if (audio_bug (AUDIO_FUNC, audio_validate_settings (as))) {
audio_print_settings (as);
goto fail;


@@ -72,7 +72,7 @@ static void winwave_log_mmresult (MMRESULT mr)
break;
case MMSYSERR_NOMEM:
str = "Unable to allocate or lock memory";
str = "Unable to allocate or locl memory";
break;
case WAVERR_SYNC:
@@ -349,15 +349,21 @@ static int winwave_ctl_out (HWVoiceOut *hw, int cmd, ...)
else {
hw->poll_mode = 0;
}
wave->paused = 0;
if (wave->paused) {
mr = waveOutRestart (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutRestart");
}
wave->paused = 0;
}
}
return 0;
case VOICE_DISABLE:
if (!wave->paused) {
mr = waveOutReset (wave->hwo);
mr = waveOutPause (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutReset");
winwave_logerr (mr, "waveOutPause");
}
else {
wave->paused = 1;


@@ -30,7 +30,6 @@
#include "balloon.h"
#include "trace.h"
#include "qmp-commands.h"
#include "qjson.h"
static QEMUBalloonEvent *balloon_event_fn;
static QEMUBalloonStatus *balloon_stat_fn;
@@ -81,19 +80,6 @@ static int qemu_balloon_status(BalloonInfo *info)
return 1;
}
void qemu_balloon_changed(int64_t actual)
{
QObject *data;
data = qobject_from_jsonf("{ 'actual': %" PRId64 " }",
actual);
monitor_protocol_event(QEVENT_BALLOON_CHANGE, data);
qobject_decref(data);
}
BalloonInfo *qmp_query_balloon(Error **errp)
{
BalloonInfo *info;


@@ -24,6 +24,4 @@ int qemu_add_balloon_handler(QEMUBalloonEvent *event_func,
QEMUBalloonStatus *stat_func, void *opaque);
void qemu_remove_balloon_handler(void *opaque);
void qemu_balloon_changed(int64_t actual);
#endif

bitops.h

@@ -114,10 +114,10 @@ static inline unsigned long ffz(unsigned long word)
* @nr: the bit to set
* @addr: the address to start counting from
*/
static inline void set_bit(int nr, unsigned long *addr)
static inline void set_bit(int nr, volatile unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);
unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
*p |= mask;
}
@@ -127,10 +127,10 @@ static inline void set_bit(int nr, unsigned long *addr)
* @nr: Bit to clear
* @addr: Address to start counting from
*/
static inline void clear_bit(int nr, unsigned long *addr)
static inline void clear_bit(int nr, volatile unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);
unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
*p &= ~mask;
}
@@ -140,10 +140,10 @@ static inline void clear_bit(int nr, unsigned long *addr)
* @nr: Bit to change
* @addr: Address to start counting from
*/
static inline void change_bit(int nr, unsigned long *addr)
static inline void change_bit(int nr, volatile unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);
unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
*p ^= mask;
}
@@ -153,10 +153,10 @@ static inline void change_bit(int nr, unsigned long *addr)
* @nr: Bit to set
* @addr: Address to count from
*/
static inline int test_and_set_bit(int nr, unsigned long *addr)
static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);
unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
unsigned long old = *p;
*p = old | mask;
@@ -168,10 +168,10 @@ static inline int test_and_set_bit(int nr, unsigned long *addr)
* @nr: Bit to clear
* @addr: Address to count from
*/
static inline int test_and_clear_bit(int nr, unsigned long *addr)
static inline int test_and_clear_bit(int nr, volatile unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);
unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
unsigned long old = *p;
*p = old & ~mask;
@@ -183,10 +183,10 @@ static inline int test_and_clear_bit(int nr, unsigned long *addr)
* @nr: Bit to change
* @addr: Address to count from
*/
static inline int test_and_change_bit(int nr, unsigned long *addr)
static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);
unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
unsigned long old = *p;
*p = old ^ mask;
@@ -198,7 +198,7 @@ static inline int test_and_change_bit(int nr, unsigned long *addr)
* @nr: bit number to test
* @addr: Address to start counting from
*/
static inline int test_bit(int nr, const unsigned long *addr)
static inline int test_bit(int nr, const volatile unsigned long *addr)
{
return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG-1)));
}
@@ -269,94 +269,4 @@ static inline unsigned long hweight_long(unsigned long w)
return count;
}
/**
* extract32:
* @value: the value to extract the bit field from
* @start: the lowest bit in the bit field (numbered from 0)
* @length: the length of the bit field
*
* Extract from the 32 bit input @value the bit field specified by the
* @start and @length parameters, and return it. The bit field must
* lie entirely within the 32 bit word. It is valid to request that
* all 32 bits are returned (ie @length 32 and @start 0).
*
* Returns: the value of the bit field extracted from the input value.
*/
static inline uint32_t extract32(uint32_t value, int start, int length)
{
assert(start >= 0 && length > 0 && length <= 32 - start);
return (value >> start) & (~0U >> (32 - length));
}
/**
* extract64:
* @value: the value to extract the bit field from
* @start: the lowest bit in the bit field (numbered from 0)
* @length: the length of the bit field
*
* Extract from the 64 bit input @value the bit field specified by the
* @start and @length parameters, and return it. The bit field must
* lie entirely within the 64 bit word. It is valid to request that
* all 64 bits are returned (ie @length 64 and @start 0).
*
* Returns: the value of the bit field extracted from the input value.
*/
static inline uint64_t extract64(uint64_t value, int start, int length)
{
assert(start >= 0 && length > 0 && length <= 64 - start);
return (value >> start) & (~0ULL >> (64 - length));
}
/**
* deposit32:
* @value: initial value to insert bit field into
* @start: the lowest bit in the bit field (numbered from 0)
* @length: the length of the bit field
* @fieldval: the value to insert into the bit field
*
* Deposit @fieldval into the 32 bit @value at the bit field specified
* by the @start and @length parameters, and return the modified
* @value. Bits of @value outside the bit field are not modified.
* Bits of @fieldval above the least significant @length bits are
* ignored. The bit field must lie entirely within the 32 bit word.
* It is valid to request that all 32 bits are modified (ie @length
* 32 and @start 0).
*
* Returns: the modified @value.
*/
static inline uint32_t deposit32(uint32_t value, int start, int length,
uint32_t fieldval)
{
uint32_t mask;
assert(start >= 0 && length > 0 && length <= 32 - start);
mask = (~0U >> (32 - length)) << start;
return (value & ~mask) | ((fieldval << start) & mask);
}
/**
* deposit64:
* @value: initial value to insert bit field into
* @start: the lowest bit in the bit field (numbered from 0)
* @length: the length of the bit field
* @fieldval: the value to insert into the bit field
*
* Deposit @fieldval into the 64 bit @value at the bit field specified
* by the @start and @length parameters, and return the modified
* @value. Bits of @value outside the bit field are not modified.
* Bits of @fieldval above the least significant @length bits are
* ignored. The bit field must lie entirely within the 64 bit word.
* It is valid to request that all 64 bits are modified (ie @length
* 64 and @start 0).
*
* Returns: the modified @value.
*/
static inline uint64_t deposit64(uint64_t value, int start, int length,
uint64_t fieldval)
{
uint64_t mask;
assert(start >= 0 && length > 0 && length <= 64 - start);
mask = (~0ULL >> (64 - length)) << start;
return (value & ~mask) | ((fieldval << start) & mask);
}
#endif
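The extract/deposit helpers above are plain mask-and-shift arithmetic; the same
operations written out in Python, with a worked value, make the field
boundaries explicit (this is only an illustration, not code from the tree):

def extract32(value, start, length):
    assert 0 <= start and 0 < length <= 32 - start
    return (value >> start) & ((1 << length) - 1)

def deposit32(value, start, length, fieldval):
    assert 0 <= start and 0 < length <= 32 - start
    mask = ((1 << length) - 1) << start
    return (value & ~mask & 0xffffffff) | ((fieldval << start) & mask)

assert extract32(0xABCD1234, 8, 8) == 0x12             # bits 8..15 of the word
assert deposit32(0xABCD1234, 8, 8, 0xFF) == 0xABCDFF34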


@@ -536,22 +536,30 @@ static void blk_mig_cleanup(void)
}
}
static void block_migration_cancel(void *opaque)
{
blk_mig_cleanup();
}
static int block_save_setup(QEMUFile *f, void *opaque)
static int block_save_live(QEMUFile *f, int stage, void *opaque)
{
int ret;
DPRINTF("Enter save live setup submitted %d transferred %d\n",
block_mig_state.submitted, block_mig_state.transferred);
DPRINTF("Enter save live stage %d submitted %d transferred %d\n",
stage, block_mig_state.submitted, block_mig_state.transferred);
init_blk_migration(f);
if (stage < 0) {
blk_mig_cleanup();
return 0;
}
/* start track dirty blocks */
set_dirty_tracking(1);
if (block_mig_state.blk_enable != 1) {
/* no need to migrate storage */
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
return 1;
}
if (stage == 1) {
init_blk_migration(f);
/* start track dirty blocks */
set_dirty_tracking(1);
}
flush_blks(f);
@@ -563,98 +571,56 @@ static int block_save_setup(QEMUFile *f, void *opaque)
blk_mig_reset_dirty_cursor();
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
return 0;
}
static int block_save_iterate(QEMUFile *f, void *opaque)
{
int ret;
DPRINTF("Enter save live iterate submitted %d transferred %d\n",
block_mig_state.submitted, block_mig_state.transferred);
flush_blks(f);
ret = qemu_file_get_error(f);
if (ret) {
blk_mig_cleanup();
return ret;
}
blk_mig_reset_dirty_cursor();
/* control the rate of transfer */
while ((block_mig_state.submitted +
block_mig_state.read_done) * BLOCK_SIZE <
qemu_file_get_rate_limit(f)) {
if (block_mig_state.bulk_completed == 0) {
/* first finish the bulk phase */
if (blk_mig_save_bulked_block(f) == 0) {
/* finished saving bulk on all devices */
block_mig_state.bulk_completed = 1;
}
} else {
if (blk_mig_save_dirty_block(f, 1) == 0) {
/* no more dirty blocks */
break;
if (stage == 2) {
/* control the rate of transfer */
while ((block_mig_state.submitted +
block_mig_state.read_done) * BLOCK_SIZE <
qemu_file_get_rate_limit(f)) {
if (block_mig_state.bulk_completed == 0) {
/* first finish the bulk phase */
if (blk_mig_save_bulked_block(f) == 0) {
/* finished saving bulk on all devices */
block_mig_state.bulk_completed = 1;
}
} else {
if (blk_mig_save_dirty_block(f, 1) == 0) {
/* no more dirty blocks */
break;
}
}
}
flush_blks(f);
ret = qemu_file_get_error(f);
if (ret) {
blk_mig_cleanup();
return ret;
}
}
flush_blks(f);
if (stage == 3) {
/* we know for sure that save bulk is completed and
all async read completed */
assert(block_mig_state.submitted == 0);
ret = qemu_file_get_error(f);
if (ret) {
while (blk_mig_save_dirty_block(f, 0) != 0);
blk_mig_cleanup();
return ret;
/* report completion */
qemu_put_be64(f, (100 << BDRV_SECTOR_BITS) | BLK_MIG_FLAG_PROGRESS);
ret = qemu_file_get_error(f);
if (ret) {
return ret;
}
DPRINTF("Block migration completed\n");
}
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
return is_stage2_completed();
}
static int block_save_complete(QEMUFile *f, void *opaque)
{
int ret;
DPRINTF("Enter save live complete submitted %d transferred %d\n",
block_mig_state.submitted, block_mig_state.transferred);
flush_blks(f);
ret = qemu_file_get_error(f);
if (ret) {
blk_mig_cleanup();
return ret;
}
blk_mig_reset_dirty_cursor();
/* we know for sure that save bulk is completed and
all async read completed */
assert(block_mig_state.submitted == 0);
while (blk_mig_save_dirty_block(f, 0) != 0) {
/* Do nothing */
}
blk_mig_cleanup();
/* report completion */
qemu_put_be64(f, (100 << BDRV_SECTOR_BITS) | BLK_MIG_FLAG_PROGRESS);
ret = qemu_file_get_error(f);
if (ret) {
return ret;
}
DPRINTF("Block migration completed\n");
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
return 0;
return ((stage == 2) && is_stage2_completed());
}
static int block_load(QEMUFile *f, void *opaque, int version_id)
@@ -734,35 +700,20 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
return 0;
}
static void block_set_params(const MigrationParams *params, void *opaque)
static void block_set_params(int blk_enable, int shared_base, void *opaque)
{
block_mig_state.blk_enable = params->blk;
block_mig_state.shared_base = params->shared;
block_mig_state.blk_enable = blk_enable;
block_mig_state.shared_base = shared_base;
/* shared base means that blk_enable = 1 */
block_mig_state.blk_enable |= params->shared;
block_mig_state.blk_enable |= shared_base;
}
static bool block_is_active(void *opaque)
{
return block_mig_state.blk_enable == 1;
}
SaveVMHandlers savevm_block_handlers = {
.set_params = block_set_params,
.save_live_setup = block_save_setup,
.save_live_iterate = block_save_iterate,
.save_live_complete = block_save_complete,
.load_state = block_load,
.cancel = block_migration_cancel,
.is_active = block_is_active,
};
void blk_mig_init(void)
{
QSIMPLEQ_INIT(&block_mig_state.bmds_list);
QSIMPLEQ_INIT(&block_mig_state.blk_list);
register_savevm_live(NULL, "block", 0, 1, &savevm_block_handlers,
&block_mig_state);
register_savevm_live(NULL, "block", 0, 1, block_set_params,
block_save_live, NULL, block_load, &block_mig_state);
}

block.c

@@ -433,11 +433,7 @@ int get_tmp_filename(char *filename, int size)
return -EOVERFLOW;
}
fd = mkstemp(filename);
if (fd < 0) {
return -errno;
}
if (close(fd) != 0) {
unlink(filename);
if (fd < 0 || close(fd)) {
return -errno;
}
return 0;
@@ -653,13 +649,12 @@ static int bdrv_open_common(BlockDriverState *bs, const char *filename,
bs->opaque = g_malloc0(drv->instance_size);
bs->enable_write_cache = !!(flags & BDRV_O_CACHE_WB);
open_flags = flags | BDRV_O_CACHE_WB;
/*
* Clear flags that are internal to the block layer before opening the
* image.
*/
open_flags &= ~(BDRV_O_SNAPSHOT | BDRV_O_NO_BACKING);
open_flags = flags & ~(BDRV_O_SNAPSHOT | BDRV_O_NO_BACKING);
/*
* Snapshots should be writable.
@@ -668,7 +663,7 @@ static int bdrv_open_common(BlockDriverState *bs, const char *filename,
open_flags |= BDRV_O_RDWR;
}
bs->read_only = !(open_flags & BDRV_O_RDWR);
bs->keep_read_only = bs->read_only = !(open_flags & BDRV_O_RDWR);
/* Open the image, either directly or using a protocol */
if (drv->bdrv_file_open) {
@@ -739,8 +734,7 @@ int bdrv_open(BlockDriverState *bs, const char *filename, int flags,
BlockDriver *drv)
{
int ret;
/* TODO: extra byte is a hack to ensure MAX_PATH space on Windows. */
char tmp_filename[PATH_MAX + 1];
char tmp_filename[PATH_MAX];
if (flags & BDRV_O_SNAPSHOT) {
BlockDriverState *bs1;
@@ -809,12 +803,6 @@ int bdrv_open(BlockDriverState *bs, const char *filename, int flags,
goto unlink_and_fail;
}
if (flags & BDRV_O_RDWR) {
flags |= BDRV_O_ALLOW_RDWR;
}
bs->keep_read_only = !(flags & BDRV_O_ALLOW_RDWR);
/* Open the image */
ret = bdrv_open_common(bs, filename, flags, drv);
if (ret < 0) {
@@ -844,6 +832,12 @@ int bdrv_open(BlockDriverState *bs, const char *filename, int flags,
bdrv_close(bs);
return ret;
}
if (bs->is_temporary) {
bs->backing_hd->keep_read_only = !(flags & BDRV_O_RDWR);
} else {
/* base image inherits from "parent" */
bs->backing_hd->keep_read_only = bs->keep_read_only;
}
}
if (!bdrv_key_required(bs)) {
@@ -902,9 +896,9 @@ void bdrv_close(BlockDriverState *bs)
bdrv_delete(bs->file);
bs->file = NULL;
}
}
bdrv_dev_change_media_cb(bs, false);
bdrv_dev_change_media_cb(bs, false);
}
/*throttling disk I/O limits*/
if (bs->io_limits_enabled) {
@@ -976,101 +970,6 @@ static void bdrv_rebind(BlockDriverState *bs)
}
}
static void bdrv_move_feature_fields(BlockDriverState *bs_dest,
BlockDriverState *bs_src)
{
/* move some fields that need to stay attached to the device */
bs_dest->open_flags = bs_src->open_flags;
/* dev info */
bs_dest->dev_ops = bs_src->dev_ops;
bs_dest->dev_opaque = bs_src->dev_opaque;
bs_dest->dev = bs_src->dev;
bs_dest->buffer_alignment = bs_src->buffer_alignment;
bs_dest->copy_on_read = bs_src->copy_on_read;
bs_dest->enable_write_cache = bs_src->enable_write_cache;
/* i/o timing parameters */
bs_dest->slice_time = bs_src->slice_time;
bs_dest->slice_start = bs_src->slice_start;
bs_dest->slice_end = bs_src->slice_end;
bs_dest->io_limits = bs_src->io_limits;
bs_dest->io_base = bs_src->io_base;
bs_dest->throttled_reqs = bs_src->throttled_reqs;
bs_dest->block_timer = bs_src->block_timer;
bs_dest->io_limits_enabled = bs_src->io_limits_enabled;
/* r/w error */
bs_dest->on_read_error = bs_src->on_read_error;
bs_dest->on_write_error = bs_src->on_write_error;
/* i/o status */
bs_dest->iostatus_enabled = bs_src->iostatus_enabled;
bs_dest->iostatus = bs_src->iostatus;
/* dirty bitmap */
bs_dest->dirty_count = bs_src->dirty_count;
bs_dest->dirty_bitmap = bs_src->dirty_bitmap;
/* job */
bs_dest->in_use = bs_src->in_use;
bs_dest->job = bs_src->job;
/* keep the same entry in bdrv_states */
pstrcpy(bs_dest->device_name, sizeof(bs_dest->device_name),
bs_src->device_name);
bs_dest->list = bs_src->list;
}
/*
* Swap bs contents for two image chains while they are live,
* while keeping required fields on the BlockDriverState that is
* actually attached to a device.
*
* This will modify the BlockDriverState fields, and swap contents
* between bs_new and bs_old. Both bs_new and bs_old are modified.
*
* bs_new is required to be anonymous.
*
* This function does not create any image files.
*/
void bdrv_swap(BlockDriverState *bs_new, BlockDriverState *bs_old)
{
BlockDriverState tmp;
/* bs_new must be anonymous and shouldn't have anything fancy enabled */
assert(bs_new->device_name[0] == '\0');
assert(bs_new->dirty_bitmap == NULL);
assert(bs_new->job == NULL);
assert(bs_new->dev == NULL);
assert(bs_new->in_use == 0);
assert(bs_new->io_limits_enabled == false);
assert(bs_new->block_timer == NULL);
tmp = *bs_new;
*bs_new = *bs_old;
*bs_old = tmp;
/* there are some fields that should not be swapped, move them back */
bdrv_move_feature_fields(&tmp, bs_old);
bdrv_move_feature_fields(bs_old, bs_new);
bdrv_move_feature_fields(bs_new, &tmp);
/* bs_new shouldn't be in bdrv_states even after the swap! */
assert(bs_new->device_name[0] == '\0');
/* Check a few fields that should remain attached to the device */
assert(bs_new->dev == NULL);
assert(bs_new->job == NULL);
assert(bs_new->in_use == 0);
assert(bs_new->io_limits_enabled == false);
assert(bs_new->block_timer == NULL);
bdrv_rebind(bs_new);
bdrv_rebind(bs_old);
}
/*
* Add new bs contents at the top of an image chain while the chain is
* live, while keeping required fields on the top layer.
@@ -1084,16 +983,85 @@ void bdrv_swap(BlockDriverState *bs_new, BlockDriverState *bs_old)
*/
void bdrv_append(BlockDriverState *bs_new, BlockDriverState *bs_top)
{
bdrv_swap(bs_new, bs_top);
BlockDriverState tmp;
/* bs_new must be anonymous */
assert(bs_new->device_name[0] == '\0');
tmp = *bs_new;
/* there are some fields that need to stay on the top layer: */
tmp.open_flags = bs_top->open_flags;
/* dev info */
tmp.dev_ops = bs_top->dev_ops;
tmp.dev_opaque = bs_top->dev_opaque;
tmp.dev = bs_top->dev;
tmp.buffer_alignment = bs_top->buffer_alignment;
tmp.copy_on_read = bs_top->copy_on_read;
/* i/o timing parameters */
tmp.slice_time = bs_top->slice_time;
tmp.slice_start = bs_top->slice_start;
tmp.slice_end = bs_top->slice_end;
tmp.io_limits = bs_top->io_limits;
tmp.io_base = bs_top->io_base;
tmp.throttled_reqs = bs_top->throttled_reqs;
tmp.block_timer = bs_top->block_timer;
tmp.io_limits_enabled = bs_top->io_limits_enabled;
/* geometry */
tmp.cyls = bs_top->cyls;
tmp.heads = bs_top->heads;
tmp.secs = bs_top->secs;
tmp.translation = bs_top->translation;
/* r/w error */
tmp.on_read_error = bs_top->on_read_error;
tmp.on_write_error = bs_top->on_write_error;
/* i/o status */
tmp.iostatus_enabled = bs_top->iostatus_enabled;
tmp.iostatus = bs_top->iostatus;
/* keep the same entry in bdrv_states */
pstrcpy(tmp.device_name, sizeof(tmp.device_name), bs_top->device_name);
tmp.list = bs_top->list;
/* The contents of 'tmp' will become bs_top, as we are
* swapping bs_new and bs_top contents. */
bs_top->backing_hd = bs_new;
bs_top->open_flags &= ~BDRV_O_NO_BACKING;
pstrcpy(bs_top->backing_file, sizeof(bs_top->backing_file),
bs_new->filename);
pstrcpy(bs_top->backing_format, sizeof(bs_top->backing_format),
bs_new->drv ? bs_new->drv->format_name : "");
tmp.backing_hd = bs_new;
pstrcpy(tmp.backing_file, sizeof(tmp.backing_file), bs_top->filename);
bdrv_get_format(bs_top, tmp.backing_format, sizeof(tmp.backing_format));
/* swap contents of the fixed new bs and the current top */
*bs_new = *bs_top;
*bs_top = tmp;
/* device_name[] was carried over from the old bs_top. bs_new
* shouldn't be in bdrv_states, so we need to make device_name[]
* reflect the anonymity of bs_new
*/
bs_new->device_name[0] = '\0';
/* clear the copied fields in the new backing file */
bdrv_detach_dev(bs_new, bs_new->dev);
qemu_co_queue_init(&bs_new->throttled_reqs);
memset(&bs_new->io_base, 0, sizeof(bs_new->io_base));
memset(&bs_new->io_limits, 0, sizeof(bs_new->io_limits));
bdrv_iostatus_disable(bs_new);
/* we don't use bdrv_io_limits_disable() for this, because we don't want
* to affect or delete the block_timer, as it has been moved to bs_top */
bs_new->io_limits_enabled = false;
bs_new->block_timer = NULL;
bs_new->slice_time = 0;
bs_new->slice_start = 0;
bs_new->slice_end = 0;
bdrv_rebind(bs_new);
bdrv_rebind(bs_top);
}
void bdrv_delete(BlockDriverState *bs)
@@ -1254,14 +1222,14 @@ bool bdrv_dev_is_medium_locked(BlockDriverState *bs)
* free of errors) or -errno when an internal error occurred. The results of the
* check are stored in res.
*/
int bdrv_check(BlockDriverState *bs, BdrvCheckResult *res, BdrvCheckMode fix)
int bdrv_check(BlockDriverState *bs, BdrvCheckResult *res)
{
if (bs->drv->bdrv_check == NULL) {
return -ENOTSUP;
}
memset(res, 0, sizeof(*res));
return bs->drv->bdrv_check(bs, res, fix);
return bs->drv->bdrv_check(bs, res);
}
#define COMMIT_BUF_SECTORS 2048
@@ -1638,20 +1606,6 @@ int bdrv_read(BlockDriverState *bs, int64_t sector_num,
return bdrv_rw_co(bs, sector_num, buf, nb_sectors, false);
}
/* Just like bdrv_read(), but with I/O throttling temporarily disabled */
int bdrv_read_unthrottled(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
bool enabled;
int ret;
enabled = bs->io_limits_enabled;
bs->io_limits_enabled = false;
ret = bdrv_read(bs, 0, buf, 1);
bs->io_limits_enabled = enabled;
return ret;
}
#define BITS_PER_LONG (sizeof(unsigned long) * 8)
static void set_dirty_bitmap(BlockDriverState *bs, int64_t sector_num,
@@ -1804,8 +1758,8 @@ int bdrv_pwrite_sync(BlockDriverState *bs, int64_t offset,
return ret;
}
/* No flush needed for cache modes that already do it */
if (bs->enable_write_cache) {
/* No flush needed for cache modes that use O_DSYNC */
if ((bs->open_flags & BDRV_O_CACHE_WB) != 0) {
bdrv_flush(bs);
}
@@ -1854,9 +1808,6 @@ static int coroutine_fn bdrv_co_do_copy_on_readv(BlockDriverState *bs,
ret = bdrv_co_do_write_zeroes(bs, cluster_sector_num,
cluster_nb_sectors);
} else {
/* This does not change the data on the disk, so it is not necessary
* to flush even in cache=writethrough mode.
*/
ret = drv->bdrv_co_writev(bs, cluster_sector_num, cluster_nb_sectors,
&bounce_qiov);
}
@@ -1870,8 +1821,8 @@ static int coroutine_fn bdrv_co_do_copy_on_readv(BlockDriverState *bs,
}
skip_bytes = (sector_num - cluster_sector_num) * BDRV_SECTOR_SIZE;
qemu_iovec_from_buf(qiov, 0, bounce_buffer + skip_bytes,
nb_sectors * BDRV_SECTOR_SIZE);
qemu_iovec_from_buffer(qiov, bounce_buffer + skip_bytes,
nb_sectors * BDRV_SECTOR_SIZE);
err:
qemu_vfree(bounce_buffer);
@@ -2026,10 +1977,6 @@ static int coroutine_fn bdrv_co_do_writev(BlockDriverState *bs,
ret = drv->bdrv_co_writev(bs, sector_num, nb_sectors, qiov);
}
if (ret == 0 && !bs->enable_write_cache) {
ret = bdrv_co_flush(bs);
}
if (bs->dirty_bitmap) {
set_dirty_bitmap(bs, sector_num, nb_sectors, 1);
}
@@ -2131,6 +2078,152 @@ void bdrv_get_geometry(BlockDriverState *bs, uint64_t *nb_sectors_ptr)
*nb_sectors_ptr = length;
}
struct partition {
uint8_t boot_ind; /* 0x80 - active */
uint8_t head; /* starting head */
uint8_t sector; /* starting sector */
uint8_t cyl; /* starting cylinder */
uint8_t sys_ind; /* What partition type */
uint8_t end_head; /* end head */
uint8_t end_sector; /* end sector */
uint8_t end_cyl; /* end cylinder */
uint32_t start_sect; /* starting sector counting from 0 */
uint32_t nr_sects; /* nr of sectors in partition */
} QEMU_PACKED;
/* try to guess the disk logical geometry from the MSDOS partition table. Return 0 if OK, -1 if it could not guess */
static int guess_disk_lchs(BlockDriverState *bs,
int *pcylinders, int *pheads, int *psectors)
{
uint8_t buf[BDRV_SECTOR_SIZE];
int ret, i, heads, sectors, cylinders;
struct partition *p;
uint32_t nr_sects;
uint64_t nb_sectors;
bool enabled;
bdrv_get_geometry(bs, &nb_sectors);
/**
* This function may be invoked during startup in both sync and async I/O
* modes, so I/O throttling has to be disabled temporarily here rather than
* permanently.
*/
enabled = bs->io_limits_enabled;
bs->io_limits_enabled = false;
ret = bdrv_read(bs, 0, buf, 1);
bs->io_limits_enabled = enabled;
if (ret < 0)
return -1;
/* test msdos magic */
if (buf[510] != 0x55 || buf[511] != 0xaa)
return -1;
for(i = 0; i < 4; i++) {
p = ((struct partition *)(buf + 0x1be)) + i;
nr_sects = le32_to_cpu(p->nr_sects);
if (nr_sects && p->end_head) {
/* We make the assumption that the partition terminates on
a cylinder boundary */
heads = p->end_head + 1;
sectors = p->end_sector & 63;
if (sectors == 0)
continue;
cylinders = nb_sectors / (heads * sectors);
if (cylinders < 1 || cylinders > 16383)
continue;
*pheads = heads;
*psectors = sectors;
*pcylinders = cylinders;
#if 0
printf("guessed geometry: LCHS=%d %d %d\n",
cylinders, heads, sectors);
#endif
return 0;
}
}
return -1;
}
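/* Worked example with hypothetical values: a partition entry whose end_head
 * is 15 and whose end_sector field carries sector 63 gives heads = 16 and
 * sectors = 63 (the & 63 masks off the cylinder high bits). On a
 * 1008000-sector (~492 MiB) image this guesses
 * cylinders = 1008000 / (16 * 63) = 1000, i.e. LCHS 1000/16/63. */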
void bdrv_guess_geometry(BlockDriverState *bs, int *pcyls, int *pheads, int *psecs)
{
int translation, lba_detected = 0;
int cylinders, heads, secs;
uint64_t nb_sectors;
/* if a geometry hint is available, use it */
bdrv_get_geometry(bs, &nb_sectors);
bdrv_get_geometry_hint(bs, &cylinders, &heads, &secs);
translation = bdrv_get_translation_hint(bs);
if (cylinders != 0) {
*pcyls = cylinders;
*pheads = heads;
*psecs = secs;
} else {
if (guess_disk_lchs(bs, &cylinders, &heads, &secs) == 0) {
if (heads > 16) {
/* if heads > 16, it means that a BIOS LBA
translation was active, so the default
hardware geometry is OK */
lba_detected = 1;
goto default_geometry;
} else {
*pcyls = cylinders;
*pheads = heads;
*psecs = secs;
/* disable any translation to be in sync with
the logical geometry */
if (translation == BIOS_ATA_TRANSLATION_AUTO) {
bdrv_set_translation_hint(bs,
BIOS_ATA_TRANSLATION_NONE);
}
}
} else {
default_geometry:
/* if no geometry, use a standard physical disk geometry */
cylinders = nb_sectors / (16 * 63);
if (cylinders > 16383)
cylinders = 16383;
else if (cylinders < 2)
cylinders = 2;
*pcyls = cylinders;
*pheads = 16;
*psecs = 63;
if ((lba_detected == 1) && (translation == BIOS_ATA_TRANSLATION_AUTO)) {
if ((*pcyls * *pheads) <= 131072) {
bdrv_set_translation_hint(bs,
BIOS_ATA_TRANSLATION_LARGE);
} else {
bdrv_set_translation_hint(bs,
BIOS_ATA_TRANSLATION_LBA);
}
}
}
bdrv_set_geometry_hint(bs, *pcyls, *pheads, *psecs);
}
}
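/* Worked example for the default-geometry path, assuming neither a geometry
 * hint nor a usable partition table: a 4194304-sector (2 GiB) image gets
 * 16 heads, 63 sectors and cylinders = 4194304 / (16 * 63) = 4160, clamped
 * to the range [2, 16383]. The translation hint is only adjusted when the
 * MBR guess had indicated BIOS LBA translation (heads > 16): LARGE while
 * cyls * heads <= 131072, LBA above that. */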
void bdrv_set_geometry_hint(BlockDriverState *bs,
int cyls, int heads, int secs)
{
bs->cyls = cyls;
bs->heads = heads;
bs->secs = secs;
}
void bdrv_set_translation_hint(BlockDriverState *bs, int translation)
{
bs->translation = translation;
}
void bdrv_get_geometry_hint(BlockDriverState *bs,
int *pcyls, int *pheads, int *psecs)
{
*pcyls = bs->cyls;
*pheads = bs->heads;
*psecs = bs->secs;
}
/* throttling disk io limits */
void bdrv_set_io_limits(BlockDriverState *bs,
BlockIOLimit *io_limits)
@@ -2139,6 +2232,118 @@ void bdrv_set_io_limits(BlockDriverState *bs,
bs->io_limits_enabled = bdrv_io_limits_enabled(bs);
}
/* Recognize floppy formats */
typedef struct FDFormat {
FDriveType drive;
uint8_t last_sect;
uint8_t max_track;
uint8_t max_head;
FDriveRate rate;
} FDFormat;
static const FDFormat fd_formats[] = {
/* First entry is default format */
/* 1.44 MB 3"1/2 floppy disks */
{ FDRIVE_DRV_144, 18, 80, 1, FDRIVE_RATE_500K, },
{ FDRIVE_DRV_144, 20, 80, 1, FDRIVE_RATE_500K, },
{ FDRIVE_DRV_144, 21, 80, 1, FDRIVE_RATE_500K, },
{ FDRIVE_DRV_144, 21, 82, 1, FDRIVE_RATE_500K, },
{ FDRIVE_DRV_144, 21, 83, 1, FDRIVE_RATE_500K, },
{ FDRIVE_DRV_144, 22, 80, 1, FDRIVE_RATE_500K, },
{ FDRIVE_DRV_144, 23, 80, 1, FDRIVE_RATE_500K, },
{ FDRIVE_DRV_144, 24, 80, 1, FDRIVE_RATE_500K, },
/* 2.88 MB 3"1/2 floppy disks */
{ FDRIVE_DRV_288, 36, 80, 1, FDRIVE_RATE_1M, },
{ FDRIVE_DRV_288, 39, 80, 1, FDRIVE_RATE_1M, },
{ FDRIVE_DRV_288, 40, 80, 1, FDRIVE_RATE_1M, },
{ FDRIVE_DRV_288, 44, 80, 1, FDRIVE_RATE_1M, },
{ FDRIVE_DRV_288, 48, 80, 1, FDRIVE_RATE_1M, },
/* 720 kB 3"1/2 floppy disks */
{ FDRIVE_DRV_144, 9, 80, 1, FDRIVE_RATE_250K, },
{ FDRIVE_DRV_144, 10, 80, 1, FDRIVE_RATE_250K, },
{ FDRIVE_DRV_144, 10, 82, 1, FDRIVE_RATE_250K, },
{ FDRIVE_DRV_144, 10, 83, 1, FDRIVE_RATE_250K, },
{ FDRIVE_DRV_144, 13, 80, 1, FDRIVE_RATE_250K, },
{ FDRIVE_DRV_144, 14, 80, 1, FDRIVE_RATE_250K, },
/* 1.2 MB 5"1/4 floppy disks */
{ FDRIVE_DRV_120, 15, 80, 1, FDRIVE_RATE_500K, },
{ FDRIVE_DRV_120, 18, 80, 1, FDRIVE_RATE_500K, },
{ FDRIVE_DRV_120, 18, 82, 1, FDRIVE_RATE_500K, },
{ FDRIVE_DRV_120, 18, 83, 1, FDRIVE_RATE_500K, },
{ FDRIVE_DRV_120, 20, 80, 1, FDRIVE_RATE_500K, },
/* 720 kB 5"1/4 floppy disks */
{ FDRIVE_DRV_120, 9, 80, 1, FDRIVE_RATE_250K, },
{ FDRIVE_DRV_120, 11, 80, 1, FDRIVE_RATE_250K, },
/* 360 kB 5"1/4 floppy disks */
{ FDRIVE_DRV_120, 9, 40, 1, FDRIVE_RATE_300K, },
{ FDRIVE_DRV_120, 9, 40, 0, FDRIVE_RATE_300K, },
{ FDRIVE_DRV_120, 10, 41, 1, FDRIVE_RATE_300K, },
{ FDRIVE_DRV_120, 10, 42, 1, FDRIVE_RATE_300K, },
/* 320 kB 5"1/4 floppy disks */
{ FDRIVE_DRV_120, 8, 40, 1, FDRIVE_RATE_250K, },
{ FDRIVE_DRV_120, 8, 40, 0, FDRIVE_RATE_250K, },
/* 360 kB must match 5"1/4 better than 3"1/2... */
{ FDRIVE_DRV_144, 9, 80, 0, FDRIVE_RATE_250K, },
/* end */
{ FDRIVE_DRV_NONE, -1, -1, 0, 0, },
};
void bdrv_get_floppy_geometry_hint(BlockDriverState *bs, int *nb_heads,
int *max_track, int *last_sect,
FDriveType drive_in, FDriveType *drive,
FDriveRate *rate)
{
const FDFormat *parse;
uint64_t nb_sectors, size;
int i, first_match, match;
bdrv_get_geometry_hint(bs, nb_heads, max_track, last_sect);
if (*nb_heads != 0 && *max_track != 0 && *last_sect != 0) {
/* User defined disk */
*rate = FDRIVE_RATE_500K;
} else {
bdrv_get_geometry(bs, &nb_sectors);
match = -1;
first_match = -1;
for (i = 0; ; i++) {
parse = &fd_formats[i];
if (parse->drive == FDRIVE_DRV_NONE) {
break;
}
if (drive_in == parse->drive ||
drive_in == FDRIVE_DRV_NONE) {
size = (parse->max_head + 1) * parse->max_track *
parse->last_sect;
if (nb_sectors == size) {
match = i;
break;
}
if (first_match == -1) {
first_match = i;
}
}
}
if (match == -1) {
if (first_match == -1) {
match = 1;
} else {
match = first_match;
}
parse = &fd_formats[match];
}
*nb_heads = parse->max_head + 1;
*max_track = parse->max_track;
*last_sect = parse->last_sect;
*drive = parse->drive;
*rate = parse->rate;
}
}
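/* Worked example: a raw 1474560-byte (1.44 MB) floppy image is 2880 sectors,
 * which matches the first fd_formats entry exactly:
 * (max_head + 1) * max_track * last_sect = 2 * 80 * 18 = 2880,
 * so the drive is reported as FDRIVE_DRV_144 at FDRIVE_RATE_500K. */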
int bdrv_get_translation_hint(BlockDriverState *bs)
{
return bs->translation;
}
void bdrv_set_on_error(BlockDriverState *bs, BlockErrorAction on_read_error,
BlockErrorAction on_write_error)
{
@@ -2166,11 +2371,6 @@ int bdrv_enable_write_cache(BlockDriverState *bs)
return bs->enable_write_cache;
}
void bdrv_set_enable_write_cache(BlockDriverState *bs, bool wce)
{
bs->enable_write_cache = wce;
}
int bdrv_is_encrypted(BlockDriverState *bs)
{
if (bs->backing_hd && bs->backing_hd->encrypted)
@@ -2213,9 +2413,13 @@ int bdrv_set_key(BlockDriverState *bs, const char *key)
return ret;
}
const char *bdrv_get_format_name(BlockDriverState *bs)
void bdrv_get_format(BlockDriverState *bs, char *buf, int buf_size)
{
return bs->drv ? bs->drv->format_name : NULL;
if (!bs->drv) {
buf[0] = '\0';
} else {
pstrcpy(buf, buf_size, bs->drv->format_name);
}
}
void bdrv_iterate_format(void (*it)(void *opaque, const char *name),
@@ -2262,11 +2466,6 @@ const char *bdrv_get_device_name(BlockDriverState *bs)
return bs->device_name;
}
int bdrv_get_flags(BlockDriverState *bs)
{
return bs->open_flags;
}
void bdrv_flush_all(void)
{
BlockDriverState *bs;
@@ -2370,55 +2569,6 @@ int bdrv_is_allocated(BlockDriverState *bs, int64_t sector_num, int nb_sectors,
return data.ret;
}
/*
* Given an image chain: ... -> [BASE] -> [INTER1] -> [INTER2] -> [TOP]
*
* Return true if the given sector is allocated in any image above BASE, up
* to and including TOP (BASE itself is not checked). BASE can be NULL to
* check if the given sector is allocated in any image of the chain.
* Return false otherwise.
*
* 'pnum' is set to the number of sectors (including and immediately following
* the specified sector) that are known to be in the same
* allocated/unallocated state.
*
*/
int coroutine_fn bdrv_co_is_allocated_above(BlockDriverState *top,
BlockDriverState *base,
int64_t sector_num,
int nb_sectors, int *pnum)
{
BlockDriverState *intermediate;
int ret, n = nb_sectors;
intermediate = top;
while (intermediate && intermediate != base) {
int pnum_inter;
ret = bdrv_co_is_allocated(intermediate, sector_num, nb_sectors,
&pnum_inter);
if (ret < 0) {
return ret;
} else if (ret) {
*pnum = pnum_inter;
return 1;
}
/*
* [sector_num, nb_sectors] is unallocated on top but intermediate
* might have
*
* [sector_num+x, nb_sectors] allocated.
*/
if (n > pnum_inter) {
n = pnum_inter;
}
intermediate = intermediate->backing_hd;
}
*pnum = n;
return 0;
}
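/* A minimal usage sketch, assuming an already-open chain base <- top and a
 * caller running in coroutine context (the helper name is illustrative): */
static int coroutine_fn example_is_allocated_above_base(BlockDriverState *top,
                                                        BlockDriverState *base,
                                                        int64_t sector_num)
{
    int pnum;
    int ret = bdrv_co_is_allocated_above(top, base, sector_num, 1, &pnum);
    /* ret == 1: the sector is allocated in top or in a backing image above
     * base; ret == 0: a read would fall through to base (or to zeros). */
    return ret;
}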
BlockInfoList *qmp_query_block(Error **errp)
{
BlockInfoList *head = NULL, *cur_item = NULL;
@@ -2450,15 +2600,11 @@ BlockInfoList *qmp_query_block(Error **errp)
info->value->inserted->ro = bs->read_only;
info->value->inserted->drv = g_strdup(bs->drv->format_name);
info->value->inserted->encrypted = bs->encrypted;
info->value->inserted->encryption_key_missing = bdrv_key_required(bs);
if (bs->backing_file[0]) {
info->value->inserted->has_backing_file = true;
info->value->inserted->backing_file = g_strdup(bs->backing_file);
}
info->value->inserted->backing_file_depth =
bdrv_get_backing_file_depth(bs);
if (bs->io_limits_enabled) {
info->value->inserted->bps =
bs->io_limits.bps[BLOCK_IO_LIMIT_TOTAL];
@@ -2618,7 +2764,7 @@ void bdrv_debug_event(BlockDriverState *bs, BlkDebugEvent event)
return;
}
drv->bdrv_debug_event(bs, event);
return drv->bdrv_debug_event(bs, event);
}
@@ -2763,19 +2909,6 @@ BlockDriverState *bdrv_find_backing_image(BlockDriverState *bs,
return NULL;
}
int bdrv_get_backing_file_depth(BlockDriverState *bs)
{
if (!bs->drv) {
return 0;
}
if (!bs->backing_hd) {
return 0;
}
return 1 + bdrv_get_backing_file_depth(bs->backing_hd);
}
#define NB_SUFFIXES 4
char *get_human_readable_size(char *buf, int buf_size, int64_t size)
@@ -2968,13 +3101,13 @@ static int multiwrite_merge(BlockDriverState *bs, BlockRequest *reqs,
// Add the first request to the merged one. If the requests are
// overlapping, drop the last sectors of the first request.
size = (reqs[i].sector - reqs[outidx].sector) << 9;
qemu_iovec_concat(qiov, reqs[outidx].qiov, 0, size);
qemu_iovec_concat(qiov, reqs[outidx].qiov, size);
// We shouldn't need to add any zeros between the two requests
assert (reqs[i].sector <= oldreq_last);
// Add the second request
qemu_iovec_concat(qiov, reqs[i].qiov, 0, reqs[i].qiov->size);
qemu_iovec_concat(qiov, reqs[i].qiov, reqs[i].qiov->size);
reqs[outidx].nb_sectors = qiov->size >> 9;
reqs[outidx].qiov = qiov;
@@ -3249,7 +3382,7 @@ static void bdrv_aio_bh_cb(void *opaque)
BlockDriverAIOCBSync *acb = opaque;
if (!acb->is_write)
qemu_iovec_from_buf(acb->qiov, 0, acb->bounce, acb->qiov->size);
qemu_iovec_from_buffer(acb->qiov, acb->bounce, acb->qiov->size);
qemu_vfree(acb->bounce);
acb->common.cb(acb->common.opaque, acb->ret);
qemu_bh_delete(acb->bh);
@@ -3275,7 +3408,7 @@ static BlockDriverAIOCB *bdrv_aio_rw_vector(BlockDriverState *bs,
acb->bh = qemu_bh_new(bdrv_aio_bh_cb, acb);
if (is_write) {
qemu_iovec_to_buf(acb->qiov, 0, acb->bounce, qiov->size);
qemu_iovec_to_buffer(acb->qiov, acb->bounce);
acb->ret = bs->drv->bdrv_write(bs, sector_num, acb->bounce, nb_sectors);
} else {
acb->ret = bs->drv->bdrv_read(bs, sector_num, acb->bounce, nb_sectors);
@@ -3539,7 +3672,7 @@ int coroutine_fn bdrv_co_flush(BlockDriverState *bs)
/* But don't actually force it to the disk with cache=unsafe */
if (bs->open_flags & BDRV_O_NO_FLUSH) {
goto flush_parent;
return 0;
}
if (bs->drv->bdrv_co_flush_to_disk) {
@@ -3578,7 +3711,6 @@ int coroutine_fn bdrv_co_flush(BlockDriverState *bs)
/* Now flush the underlying protocol. It will also have BDRV_O_NO_FLUSH
* in the case of cache=unsafe, so there are no useless flushes.
*/
flush_parent:
return bdrv_co_flush(bs->file);
}

block.h

@@ -79,8 +79,6 @@ typedef struct BlockDevOps {
#define BDRV_O_NO_FLUSH 0x0200 /* disable flushing on this disk */
#define BDRV_O_COPY_ON_READ 0x0400 /* copy read backing sectors into image */
#define BDRV_O_INCOMING 0x0800 /* consistency hint for incoming migration */
#define BDRV_O_CHECK 0x1000 /* open solely for consistency check */
#define BDRV_O_ALLOW_RDWR 0x2000 /* allow reopen to change from r/o to r/w */
#define BDRV_O_CACHE_MASK (BDRV_O_NOCACHE | BDRV_O_CACHE_WB | BDRV_O_NO_FLUSH)
@@ -124,7 +122,6 @@ int bdrv_create(BlockDriver *drv, const char* filename,
int bdrv_create_file(const char* filename, QEMUOptionParameter *options);
BlockDriverState *bdrv_new(const char *device_name);
void bdrv_make_anon(BlockDriverState *bs);
void bdrv_swap(BlockDriverState *bs_new, BlockDriverState *bs_old);
void bdrv_append(BlockDriverState *bs_new, BlockDriverState *bs_top);
void bdrv_delete(BlockDriverState *bs);
int bdrv_parse_cache_flags(const char *mode, int *flags);
@@ -144,8 +141,6 @@ bool bdrv_dev_is_tray_open(BlockDriverState *bs);
bool bdrv_dev_is_medium_locked(BlockDriverState *bs);
int bdrv_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors);
int bdrv_read_unthrottled(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors);
int bdrv_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors);
int bdrv_pread(BlockDriverState *bs, int64_t offset,
@@ -170,17 +165,13 @@ int coroutine_fn bdrv_co_write_zeroes(BlockDriverState *bs, int64_t sector_num,
int nb_sectors);
int coroutine_fn bdrv_co_is_allocated(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum);
int coroutine_fn bdrv_co_is_allocated_above(BlockDriverState *top,
BlockDriverState *base,
int64_t sector_num,
int nb_sectors, int *pnum);
BlockDriverState *bdrv_find_backing_image(BlockDriverState *bs,
const char *backing_file);
int bdrv_get_backing_file_depth(BlockDriverState *bs);
int bdrv_truncate(BlockDriverState *bs, int64_t offset);
int64_t bdrv_getlength(BlockDriverState *bs);
int64_t bdrv_get_allocated_file_size(BlockDriverState *bs);
void bdrv_get_geometry(BlockDriverState *bs, uint64_t *nb_sectors_ptr);
void bdrv_guess_geometry(BlockDriverState *bs, int *pcyls, int *pheads, int *psecs);
int bdrv_commit(BlockDriverState *bs);
int bdrv_commit_all(void);
int bdrv_change_backing_file(BlockDriverState *bs,
@@ -192,17 +183,10 @@ typedef struct BdrvCheckResult {
int corruptions;
int leaks;
int check_errors;
int corruptions_fixed;
int leaks_fixed;
BlockFragInfo bfi;
} BdrvCheckResult;
typedef enum {
BDRV_FIX_LEAKS = 1,
BDRV_FIX_ERRORS = 2,
} BdrvCheckMode;
int bdrv_check(BlockDriverState *bs, BdrvCheckResult *res, BdrvCheckMode fix);
int bdrv_check(BlockDriverState *bs, BdrvCheckResult *res);
/* async block I/O */
typedef void BlockDriverDirtyHandler(BlockDriverState *bs, int64_t sector,
@@ -260,18 +244,47 @@ int bdrv_has_zero_init(BlockDriverState *bs);
int bdrv_is_allocated(BlockDriverState *bs, int64_t sector_num, int nb_sectors,
int *pnum);
#define BIOS_ATA_TRANSLATION_AUTO 0
#define BIOS_ATA_TRANSLATION_NONE 1
#define BIOS_ATA_TRANSLATION_LBA 2
#define BIOS_ATA_TRANSLATION_LARGE 3
#define BIOS_ATA_TRANSLATION_RECHS 4
void bdrv_set_geometry_hint(BlockDriverState *bs,
int cyls, int heads, int secs);
void bdrv_set_translation_hint(BlockDriverState *bs, int translation);
void bdrv_get_geometry_hint(BlockDriverState *bs,
int *pcyls, int *pheads, int *psecs);
typedef enum FDriveType {
FDRIVE_DRV_144 = 0x00, /* 1.44 MB 3"5 drive */
FDRIVE_DRV_288 = 0x01, /* 2.88 MB 3"5 drive */
FDRIVE_DRV_120 = 0x02, /* 1.2 MB 5"25 drive */
FDRIVE_DRV_NONE = 0x03, /* No drive connected */
} FDriveType;
typedef enum FDriveRate {
FDRIVE_RATE_500K = 0x00, /* 500 Kbps */
FDRIVE_RATE_300K = 0x01, /* 300 Kbps */
FDRIVE_RATE_250K = 0x02, /* 250 Kbps */
FDRIVE_RATE_1M = 0x03, /* 1 Mbps */
} FDriveRate;
void bdrv_get_floppy_geometry_hint(BlockDriverState *bs, int *nb_heads,
int *max_track, int *last_sect,
FDriveType drive_in, FDriveType *drive,
FDriveRate *rate);
int bdrv_get_translation_hint(BlockDriverState *bs);
void bdrv_set_on_error(BlockDriverState *bs, BlockErrorAction on_read_error,
BlockErrorAction on_write_error);
BlockErrorAction bdrv_get_on_error(BlockDriverState *bs, int is_read);
int bdrv_is_read_only(BlockDriverState *bs);
int bdrv_is_sg(BlockDriverState *bs);
int bdrv_enable_write_cache(BlockDriverState *bs);
void bdrv_set_enable_write_cache(BlockDriverState *bs, bool wce);
int bdrv_is_inserted(BlockDriverState *bs);
int bdrv_media_changed(BlockDriverState *bs);
void bdrv_lock_medium(BlockDriverState *bs, bool locked);
void bdrv_eject(BlockDriverState *bs, bool eject_flag);
const char *bdrv_get_format_name(BlockDriverState *bs);
void bdrv_get_format(BlockDriverState *bs, char *buf, int buf_size);
BlockDriverState *bdrv_find(const char *name);
BlockDriverState *bdrv_next(BlockDriverState *bs);
void bdrv_iterate(void (*it)(void *opaque, BlockDriverState *bs),
@@ -283,7 +296,6 @@ int bdrv_query_missing_keys(void);
void bdrv_iterate_format(void (*it)(void *opaque, const char *name),
void *opaque);
const char *bdrv_get_device_name(BlockDriverState *bs);
int bdrv_get_flags(BlockDriverState *bs);
int bdrv_write_compressed(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors);
int bdrv_get_info(BlockDriverState *bs, BlockDriverInfo *bdi);
@@ -370,7 +382,9 @@ typedef enum {
BLKDBG_L2_ALLOC_COW_READ,
BLKDBG_L2_ALLOC_WRITE,
BLKDBG_READ,
BLKDBG_READ_AIO,
BLKDBG_READ_BACKING,
BLKDBG_READ_BACKING_AIO,
BLKDBG_READ_COMPRESSED,
@@ -406,4 +420,43 @@ typedef enum {
#define BLKDBG_EVENT(bs, evt) bdrv_debug_event(bs, evt)
void bdrv_debug_event(BlockDriverState *bs, BlkDebugEvent event);
/* Convenience for block device models */
typedef struct BlockConf {
BlockDriverState *bs;
uint16_t physical_block_size;
uint16_t logical_block_size;
uint16_t min_io_size;
uint32_t opt_io_size;
int32_t bootindex;
uint32_t discard_granularity;
} BlockConf;
static inline unsigned int get_physical_block_exp(BlockConf *conf)
{
unsigned int exp = 0, size;
for (size = conf->physical_block_size;
size > conf->logical_block_size;
size >>= 1) {
exp++;
}
return exp;
}
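/* Worked example: with physical_block_size = 4096 and logical_block_size =
 * 512 the loop shifts 4096 -> 2048 -> 1024 -> 512 and returns 3, i.e. the
 * physical block size is (512 << 3). */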
#define DEFINE_BLOCK_PROPERTIES(_state, _conf) \
DEFINE_PROP_DRIVE("drive", _state, _conf.bs), \
DEFINE_PROP_BLOCKSIZE("logical_block_size", _state, \
_conf.logical_block_size, 512), \
DEFINE_PROP_BLOCKSIZE("physical_block_size", _state, \
_conf.physical_block_size, 512), \
DEFINE_PROP_UINT16("min_io_size", _state, _conf.min_io_size, 0), \
DEFINE_PROP_UINT32("opt_io_size", _state, _conf.opt_io_size, 0), \
DEFINE_PROP_INT32("bootindex", _state, _conf.bootindex, -1), \
DEFINE_PROP_UINT32("discard_granularity", _state, \
_conf.discard_granularity, 0)
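/* A minimal sketch of how a block device model would typically consume this
 * macro, assuming the qdev property macros are in scope; MyDiskState and
 * my_disk_properties are illustrative names, not taken from this tree: */
typedef struct MyDiskState {
    /* ... device-specific fields ... */
    BlockConf conf;
} MyDiskState;

static Property my_disk_properties[] = {
    DEFINE_BLOCK_PROPERTIES(MyDiskState, conf),
    DEFINE_PROP_END_OF_LIST(),
};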
#endif

block/Makefile.objs

@@ -1,11 +0,0 @@
block-obj-y += raw.o cow.o qcow.o vdi.o vmdk.o cloop.o dmg.o bochs.o vpc.o vvfat.o
block-obj-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-cache.o
block-obj-y += qed.o qed-gencb.o qed-l2-cache.o qed-table.o qed-cluster.o
block-obj-y += qed-check.o
block-obj-y += parallels.o nbd.o blkdebug.o sheepdog.o blkverify.o
block-obj-y += stream.o
block-obj-$(CONFIG_WIN32) += raw-win32.o
block-obj-$(CONFIG_POSIX) += raw-posix.o
block-obj-$(CONFIG_LIBISCSI) += iscsi.o
block-obj-$(CONFIG_CURL) += curl.o
block-obj-$(CONFIG_RBD) += rbd.o

block/blkdebug.c

@@ -26,10 +26,24 @@
#include "block_int.h"
#include "module.h"
typedef struct BDRVBlkdebugState {
typedef struct BlkdebugVars {
int state;
QLIST_HEAD(, BlkdebugRule) rules[BLKDBG_EVENT_MAX];
QSIMPLEQ_HEAD(, BlkdebugRule) active_rules;
/* If inject_errno != 0, an error is injected for requests */
int inject_errno;
/* If true, only the next request fails and inject_errno is then reset to 0;
* if false, all future requests fail */
bool inject_once;
/* If true, aio_readv/writev fails right away; if false, the error is only
* reported through the completion callback */
bool inject_immediately;
} BlkdebugVars;
typedef struct BDRVBlkdebugState {
BlkdebugVars vars;
QLIST_HEAD(list, BlkdebugRule) rules[BLKDBG_EVENT_MAX];
} BDRVBlkdebugState;
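/* An illustrative rule file for the error-injection state above; the section
 * and option names follow the inject-error QemuOptsList below, while the
 * file and image names are assumptions (errno 5 is EIO on Linux):
 *
 *   [inject-error]
 *   event = "read_aio"
 *   errno = "5"
 *   once = "on"
 *
 * Opening the drive as blkdebug:blkdebug.conf:test.img would then fail the
 * next matching request with -EIO and behave normally afterwards. */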
typedef struct BlkdebugAIOCB {
@@ -59,14 +73,12 @@ typedef struct BlkdebugRule {
int error;
int immediately;
int once;
int64_t sector;
} inject;
struct {
int new_state;
} set_state;
} options;
QLIST_ENTRY(BlkdebugRule) next;
QSIMPLEQ_ENTRY(BlkdebugRule) active_next;
} BlkdebugRule;
static QemuOptsList inject_error_opts = {
@@ -85,10 +97,6 @@ static QemuOptsList inject_error_opts = {
.name = "errno",
.type = QEMU_OPT_NUMBER,
},
{
.name = "sector",
.type = QEMU_OPT_NUMBER,
},
{
.name = "once",
.type = QEMU_OPT_BOOL,
@@ -139,7 +147,9 @@ static const char *event_names[BLKDBG_EVENT_MAX] = {
[BLKDBG_L2_ALLOC_COW_READ] = "l2_alloc.cow_read",
[BLKDBG_L2_ALLOC_WRITE] = "l2_alloc.write",
[BLKDBG_READ] = "read",
[BLKDBG_READ_AIO] = "read_aio",
[BLKDBG_READ_BACKING] = "read_backing",
[BLKDBG_READ_BACKING_AIO] = "read_backing_aio",
[BLKDBG_READ_COMPRESSED] = "read_compressed",
@@ -218,7 +228,6 @@ static int add_rule(QemuOpts *opts, void *opaque)
rule->options.inject.once = qemu_opt_get_bool(opts, "once", 0);
rule->options.inject.immediately =
qemu_opt_get_bool(opts, "immediately", 0);
rule->options.inject.sector = qemu_opt_get_number(opts, "sector", -1);
break;
case ACTION_SET_STATE:
@@ -293,7 +302,7 @@ static int blkdebug_open(BlockDriverState *bs, const char *filename, int flags)
filename = c + 1;
/* Set initial state */
s->state = 1;
s->vars.state = 1;
/* Open the backing file */
ret = bdrv_file_open(&bs->file, filename, flags);
@@ -319,18 +328,18 @@ static void blkdebug_aio_cancel(BlockDriverAIOCB *blockacb)
}
static BlockDriverAIOCB *inject_error(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque, BlkdebugRule *rule)
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVBlkdebugState *s = bs->opaque;
int error = rule->options.inject.error;
int error = s->vars.inject_errno;
struct BlkdebugAIOCB *acb;
QEMUBH *bh;
if (rule->options.inject.once) {
QSIMPLEQ_INIT(&s->active_rules);
if (s->vars.inject_once) {
s->vars.inject_errno = 0;
}
if (rule->options.inject.immediately) {
if (s->vars.inject_immediately) {
return NULL;
}
@@ -349,21 +358,14 @@ static BlockDriverAIOCB *blkdebug_aio_readv(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugRule *rule = NULL;
QSIMPLEQ_FOREACH(rule, &s->active_rules, active_next) {
if (rule->options.inject.sector == -1 ||
(rule->options.inject.sector >= sector_num &&
rule->options.inject.sector < sector_num + nb_sectors)) {
break;
}
if (s->vars.inject_errno) {
return inject_error(bs, cb, opaque);
}
if (rule && rule->options.inject.error) {
return inject_error(bs, cb, opaque, rule);
}
return bdrv_aio_readv(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
BlockDriverAIOCB *acb =
bdrv_aio_readv(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
return acb;
}
static BlockDriverAIOCB *blkdebug_aio_writev(BlockDriverState *bs,
@@ -371,21 +373,14 @@ static BlockDriverAIOCB *blkdebug_aio_writev(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugRule *rule = NULL;
QSIMPLEQ_FOREACH(rule, &s->active_rules, active_next) {
if (rule->options.inject.sector == -1 ||
(rule->options.inject.sector >= sector_num &&
rule->options.inject.sector < sector_num + nb_sectors)) {
break;
}
if (s->vars.inject_errno) {
return inject_error(bs, cb, opaque);
}
if (rule && rule->options.inject.error) {
return inject_error(bs, cb, opaque, rule);
}
return bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
BlockDriverAIOCB *acb =
bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
return acb;
}
static void blkdebug_close(BlockDriverState *bs)
@@ -402,53 +397,44 @@ static void blkdebug_close(BlockDriverState *bs)
}
}
static bool process_rule(BlockDriverState *bs, struct BlkdebugRule *rule,
int old_state, bool injected)
static void process_rule(BlockDriverState *bs, struct BlkdebugRule *rule,
BlkdebugVars *old_vars)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugVars *vars = &s->vars;
/* Only process rules for the current state */
if (rule->state && rule->state != old_state) {
return injected;
if (rule->state && rule->state != old_vars->state) {
return;
}
/* Take the action */
switch (rule->action) {
case ACTION_INJECT_ERROR:
if (!injected) {
QSIMPLEQ_INIT(&s->active_rules);
injected = true;
}
QSIMPLEQ_INSERT_HEAD(&s->active_rules, rule, active_next);
vars->inject_errno = rule->options.inject.error;
vars->inject_once = rule->options.inject.once;
vars->inject_immediately = rule->options.inject.immediately;
break;
case ACTION_SET_STATE:
s->state = rule->options.set_state.new_state;
vars->state = rule->options.set_state.new_state;
break;
}
return injected;
}
static void blkdebug_debug_event(BlockDriverState *bs, BlkDebugEvent event)
{
BDRVBlkdebugState *s = bs->opaque;
struct BlkdebugRule *rule;
int old_state = s->state;
bool injected;
BlkdebugVars old_vars = s->vars;
assert((int)event >= 0 && event < BLKDBG_EVENT_MAX);
injected = false;
QLIST_FOREACH(rule, &s->rules[event], next) {
injected = process_rule(bs, rule, old_state, injected);
process_rule(bs, rule, &old_vars);
}
}
static int64_t blkdebug_getlength(BlockDriverState *bs)
{
return bdrv_getlength(bs->file);
}
static BlockDriver bdrv_blkdebug = {
.format_name = "blkdebug",
.protocol_name = "blkdebug",
@@ -457,7 +443,6 @@ static BlockDriver bdrv_blkdebug = {
.bdrv_file_open = blkdebug_open,
.bdrv_close = blkdebug_close,
.bdrv_getlength = blkdebug_getlength,
.bdrv_aio_readv = blkdebug_aio_readv,
.bdrv_aio_writev = blkdebug_aio_writev,

block/curl.c

@@ -140,8 +140,8 @@ static size_t curl_read_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
continue;
if ((s->buf_off >= acb->end)) {
qemu_iovec_from_buf(acb->qiov, 0, s->orig_buf + acb->start,
acb->end - acb->start);
qemu_iovec_from_buffer(acb->qiov, s->orig_buf + acb->start,
acb->end - acb->start);
acb->common.cb(acb->common.opaque, 0);
qemu_aio_release(acb);
s->acb[i] = NULL;
@@ -176,7 +176,7 @@ static int curl_find_buf(BDRVCURLState *s, size_t start, size_t len,
{
char *buf = state->orig_buf + (start - state->buf_start);
qemu_iovec_from_buf(acb->qiov, 0, buf, len);
qemu_iovec_from_buffer(acb->qiov, buf, len);
acb->common.cb(acb->common.opaque, 0);
return FIND_RET_OK;
@@ -542,7 +542,8 @@ static void curl_close(BlockDriverState *bs)
}
if (s->multi)
curl_multi_cleanup(s->multi);
g_free(s->url);
if (s->url)
free(s->url);
}
static int64_t curl_getlength(BlockDriverState *bs)

block/iscsi.c

@@ -35,10 +35,6 @@
#include <iscsi/iscsi.h>
#include <iscsi/scsi-lowlevel.h>
#ifdef __linux__
#include <scsi/sg.h>
#include <hw/scsi-defs.h>
#endif
typedef struct IscsiLun {
struct iscsi_context *iscsi;
@@ -60,49 +56,19 @@ typedef struct IscsiAIOCB {
int canceled;
size_t read_size;
size_t read_offset;
#ifdef __linux__
sg_io_hdr_t *ioh;
#endif
} IscsiAIOCB;
static void
iscsi_bh_cb(void *p)
{
IscsiAIOCB *acb = p;
qemu_bh_delete(acb->bh);
if (acb->canceled == 0) {
acb->common.cb(acb->common.opaque, acb->status);
}
if (acb->task != NULL) {
scsi_free_scsi_task(acb->task);
acb->task = NULL;
}
qemu_aio_release(acb);
}
static void
iscsi_schedule_bh(IscsiAIOCB *acb)
{
if (acb->bh) {
return;
}
acb->bh = qemu_bh_new(iscsi_bh_cb, acb);
qemu_bh_schedule(acb->bh);
}
struct IscsiTask {
IscsiLun *iscsilun;
BlockDriverState *bs;
int status;
int complete;
};
static void
iscsi_abort_task_cb(struct iscsi_context *iscsi, int status, void *command_data,
void *private_data)
{
IscsiAIOCB *acb = private_data;
acb->status = -ECANCELED;
iscsi_schedule_bh(acb);
}
static void
@@ -111,19 +77,15 @@ iscsi_aio_cancel(BlockDriverAIOCB *blockacb)
IscsiAIOCB *acb = (IscsiAIOCB *)blockacb;
IscsiLun *iscsilun = acb->iscsilun;
if (acb->status != -EINPROGRESS) {
return;
}
acb->common.cb(acb->common.opaque, -ECANCELED);
acb->canceled = 1;
/* send a task mgmt call to the target to cancel the task on the target */
iscsi_task_mgmt_abort_task_async(iscsilun->iscsi, acb->task,
iscsi_abort_task_cb, acb);
iscsi_abort_task_cb, NULL);
while (acb->status == -EINPROGRESS) {
qemu_aio_wait();
}
/* then also cancel the task locally in libiscsi */
iscsi_scsi_task_cancel(iscsilun->iscsi, acb->task);
}
static AIOPool iscsi_aio_pool = {
@@ -160,6 +122,12 @@ iscsi_set_events(IscsiLun *iscsilun)
}
/* If we just added an event, the callback might be delayed
* unless we call qemu_notify_event().
*/
if (ev & ~iscsilun->events) {
qemu_notify_event();
}
iscsilun->events = ev;
}
@@ -184,6 +152,34 @@ iscsi_process_write(void *arg)
}
static int
iscsi_schedule_bh(QEMUBHFunc *cb, IscsiAIOCB *acb)
{
acb->bh = qemu_bh_new(cb, acb);
if (!acb->bh) {
error_report("oom: could not create iscsi bh");
return -EIO;
}
qemu_bh_schedule(acb->bh);
return 0;
}
static void
iscsi_readv_writev_bh_cb(void *p)
{
IscsiAIOCB *acb = p;
qemu_bh_delete(acb->bh);
if (acb->canceled == 0) {
acb->common.cb(acb->common.opaque, acb->status);
}
qemu_aio_release(acb);
}
static void
iscsi_aio_write16_cb(struct iscsi_context *iscsi, int status,
void *command_data, void *opaque)
@@ -195,6 +191,9 @@ iscsi_aio_write16_cb(struct iscsi_context *iscsi, int status,
g_free(acb->buf);
if (acb->canceled != 0) {
qemu_aio_release(acb);
scsi_free_scsi_task(acb->task);
acb->task = NULL;
return;
}
@@ -205,7 +204,9 @@ iscsi_aio_write16_cb(struct iscsi_context *iscsi, int status,
acb->status = -EIO;
}
iscsi_schedule_bh(acb);
iscsi_schedule_bh(iscsi_readv_writev_bh_cb, acb);
scsi_free_scsi_task(acb->task);
acb->task = NULL;
}
static int64_t sector_qemu2lun(int64_t sector, IscsiLun *iscsilun)
@@ -234,14 +235,13 @@ iscsi_aio_writev(BlockDriverState *bs, int64_t sector_num,
acb->qiov = qiov;
acb->canceled = 0;
acb->bh = NULL;
acb->status = -EINPROGRESS;
/* XXX we should pass the iovec to write16 to avoid the extra copy */
/* this will allow us to get rid of 'buf' completely */
size = nb_sectors * BDRV_SECTOR_SIZE;
acb->buf = g_malloc(size);
qemu_iovec_to_buf(acb->qiov, 0, acb->buf, size);
qemu_iovec_to_buffer(acb->qiov, acb->buf);
acb->task = malloc(sizeof(struct scsi_task));
if (acb->task == NULL) {
@@ -293,6 +293,9 @@ iscsi_aio_read16_cb(struct iscsi_context *iscsi, int status,
trace_iscsi_aio_read16_cb(iscsi, status, acb, acb->canceled);
if (acb->canceled != 0) {
qemu_aio_release(acb);
scsi_free_scsi_task(acb->task);
acb->task = NULL;
return;
}
@@ -303,7 +306,9 @@ iscsi_aio_read16_cb(struct iscsi_context *iscsi, int status,
acb->status = -EIO;
}
iscsi_schedule_bh(acb);
iscsi_schedule_bh(iscsi_readv_writev_bh_cb, acb);
scsi_free_scsi_task(acb->task);
acb->task = NULL;
}
static BlockDriverAIOCB *
@@ -329,8 +334,6 @@ iscsi_aio_readv(BlockDriverState *bs, int64_t sector_num,
acb->qiov = qiov;
acb->canceled = 0;
acb->bh = NULL;
acb->status = -EINPROGRESS;
acb->read_size = qemu_read_size;
acb->buf = NULL;
@@ -377,7 +380,7 @@ iscsi_aio_readv(BlockDriverState *bs, int64_t sector_num,
*(uint16_t *)&acb->task->cdb[7] = htons(num_sectors);
break;
}
if (iscsi_scsi_command_async(iscsi, iscsilun->lun, acb->task,
iscsi_aio_read16_cb,
NULL,
@@ -406,6 +409,9 @@ iscsi_synccache10_cb(struct iscsi_context *iscsi, int status,
IscsiAIOCB *acb = opaque;
if (acb->canceled != 0) {
qemu_aio_release(acb);
scsi_free_scsi_task(acb->task);
acb->task = NULL;
return;
}
@@ -416,7 +422,9 @@ iscsi_synccache10_cb(struct iscsi_context *iscsi, int status,
acb->status = -EIO;
}
iscsi_schedule_bh(acb);
iscsi_schedule_bh(iscsi_readv_writev_bh_cb, acb);
scsi_free_scsi_task(acb->task);
acb->task = NULL;
}
static BlockDriverAIOCB *
@@ -431,8 +439,6 @@ iscsi_aio_flush(BlockDriverState *bs,
acb->iscsilun = iscsilun;
acb->canceled = 0;
acb->bh = NULL;
acb->status = -EINPROGRESS;
acb->task = iscsi_synchronizecache10_task(iscsi, iscsilun->lun,
0, 0, 0, 0,
@@ -457,6 +463,9 @@ iscsi_unmap_cb(struct iscsi_context *iscsi, int status,
IscsiAIOCB *acb = opaque;
if (acb->canceled != 0) {
qemu_aio_release(acb);
scsi_free_scsi_task(acb->task);
acb->task = NULL;
return;
}
@@ -467,7 +476,9 @@ iscsi_unmap_cb(struct iscsi_context *iscsi, int status,
acb->status = -EIO;
}
iscsi_schedule_bh(acb);
iscsi_schedule_bh(iscsi_readv_writev_bh_cb, acb);
scsi_free_scsi_task(acb->task);
acb->task = NULL;
}
static BlockDriverAIOCB *
@@ -484,8 +495,6 @@ iscsi_aio_discard(BlockDriverState *bs,
acb->iscsilun = iscsilun;
acb->canceled = 0;
acb->bh = NULL;
acb->status = -EINPROGRESS;
list[0].lba = sector_qemu2lun(sector_num, iscsilun);
list[0].num = nb_sectors * BDRV_SECTOR_SIZE / iscsilun->block_size;
@@ -506,150 +515,6 @@ iscsi_aio_discard(BlockDriverState *bs,
return &acb->common;
}
#ifdef __linux__
static void
iscsi_aio_ioctl_cb(struct iscsi_context *iscsi, int status,
void *command_data, void *opaque)
{
IscsiAIOCB *acb = opaque;
if (acb->canceled != 0) {
return;
}
acb->status = 0;
if (status < 0) {
error_report("Failed to ioctl(SG_IO) to iSCSI lun. %s",
iscsi_get_error(iscsi));
acb->status = -EIO;
}
acb->ioh->driver_status = 0;
acb->ioh->host_status = 0;
acb->ioh->resid = 0;
#define SG_ERR_DRIVER_SENSE 0x08
if (status == SCSI_STATUS_CHECK_CONDITION && acb->task->datain.size >= 2) {
int ss;
acb->ioh->driver_status |= SG_ERR_DRIVER_SENSE;
acb->ioh->sb_len_wr = acb->task->datain.size - 2;
ss = (acb->ioh->mx_sb_len >= acb->ioh->sb_len_wr) ?
acb->ioh->mx_sb_len : acb->ioh->sb_len_wr;
memcpy(acb->ioh->sbp, &acb->task->datain.data[2], ss);
}
iscsi_schedule_bh(acb);
}
static BlockDriverAIOCB *iscsi_aio_ioctl(BlockDriverState *bs,
unsigned long int req, void *buf,
BlockDriverCompletionFunc *cb, void *opaque)
{
IscsiLun *iscsilun = bs->opaque;
struct iscsi_context *iscsi = iscsilun->iscsi;
struct iscsi_data data;
IscsiAIOCB *acb;
assert(req == SG_IO);
acb = qemu_aio_get(&iscsi_aio_pool, bs, cb, opaque);
acb->iscsilun = iscsilun;
acb->canceled = 0;
acb->bh = NULL;
acb->status = -EINPROGRESS;
acb->buf = NULL;
acb->ioh = buf;
acb->task = malloc(sizeof(struct scsi_task));
if (acb->task == NULL) {
error_report("iSCSI: Failed to allocate task for scsi command. %s",
iscsi_get_error(iscsi));
qemu_aio_release(acb);
return NULL;
}
memset(acb->task, 0, sizeof(struct scsi_task));
switch (acb->ioh->dxfer_direction) {
case SG_DXFER_TO_DEV:
acb->task->xfer_dir = SCSI_XFER_WRITE;
break;
case SG_DXFER_FROM_DEV:
acb->task->xfer_dir = SCSI_XFER_READ;
break;
default:
acb->task->xfer_dir = SCSI_XFER_NONE;
break;
}
acb->task->cdb_size = acb->ioh->cmd_len;
memcpy(&acb->task->cdb[0], acb->ioh->cmdp, acb->ioh->cmd_len);
acb->task->expxferlen = acb->ioh->dxfer_len;
if (acb->task->xfer_dir == SCSI_XFER_WRITE) {
data.data = acb->ioh->dxferp;
data.size = acb->ioh->dxfer_len;
}
if (iscsi_scsi_command_async(iscsi, iscsilun->lun, acb->task,
iscsi_aio_ioctl_cb,
(acb->task->xfer_dir == SCSI_XFER_WRITE) ?
&data : NULL,
acb) != 0) {
scsi_free_scsi_task(acb->task);
qemu_aio_release(acb);
return NULL;
}
/* tell libiscsi to read straight into the buffer we got from ioctl */
if (acb->task->xfer_dir == SCSI_XFER_READ) {
scsi_task_add_data_in_buffer(acb->task,
acb->ioh->dxfer_len,
acb->ioh->dxferp);
}
iscsi_set_events(iscsilun);
return &acb->common;
}
static void ioctl_cb(void *opaque, int status)
{
int *p_status = opaque;
*p_status = status;
}
static int iscsi_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
{
IscsiLun *iscsilun = bs->opaque;
int status;
switch (req) {
case SG_GET_VERSION_NUM:
*(int *)buf = 30000;
break;
case SG_GET_SCSI_ID:
((struct sg_scsi_id *)buf)->scsi_type = iscsilun->type;
break;
case SG_IO:
status = -EINPROGRESS;
iscsi_aio_ioctl(bs, req, buf, ioctl_cb, &status);
while (status == -EINPROGRESS) {
qemu_aio_wait();
}
return 0;
default:
return -1;
}
return 0;
}
#endif
static int64_t
iscsi_getlength(BlockDriverState *bs)
{
@@ -662,6 +527,158 @@ iscsi_getlength(BlockDriverState *bs)
return len;
}
static void
iscsi_readcapacity16_cb(struct iscsi_context *iscsi, int status,
void *command_data, void *opaque)
{
struct IscsiTask *itask = opaque;
struct scsi_readcapacity16 *rc16;
struct scsi_task *task = command_data;
if (status != 0) {
error_report("iSCSI: Failed to read capacity of iSCSI lun. %s",
iscsi_get_error(iscsi));
itask->status = 1;
itask->complete = 1;
scsi_free_scsi_task(task);
return;
}
rc16 = scsi_datain_unmarshall(task);
if (rc16 == NULL) {
error_report("iSCSI: Failed to unmarshall readcapacity16 data.");
itask->status = 1;
itask->complete = 1;
scsi_free_scsi_task(task);
return;
}
itask->iscsilun->block_size = rc16->block_length;
itask->iscsilun->num_blocks = rc16->returned_lba + 1;
itask->bs->total_sectors = itask->iscsilun->num_blocks *
itask->iscsilun->block_size / BDRV_SECTOR_SIZE ;
itask->status = 0;
itask->complete = 1;
scsi_free_scsi_task(task);
}
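/* Worked example with assumed values: a LUN reporting block_length 512 and
 * returned_lba 2097151 has 2097152 blocks, so total_sectors becomes
 * 2097152 * 512 / BDRV_SECTOR_SIZE = 2097152 (exactly 1 GiB). A 4K LUN
 * (block_length 4096) with the same block count would instead present
 * 16777216 sectors to the block layer. */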
static void
iscsi_readcapacity10_cb(struct iscsi_context *iscsi, int status,
void *command_data, void *opaque)
{
struct IscsiTask *itask = opaque;
struct scsi_readcapacity10 *rc10;
struct scsi_task *task = command_data;
if (status != 0) {
error_report("iSCSI: Failed to read capacity of iSCSI lun. %s",
iscsi_get_error(iscsi));
itask->status = 1;
itask->complete = 1;
scsi_free_scsi_task(task);
return;
}
rc10 = scsi_datain_unmarshall(task);
if (rc10 == NULL) {
error_report("iSCSI: Failed to unmarshall readcapacity10 data.");
itask->status = 1;
itask->complete = 1;
scsi_free_scsi_task(task);
return;
}
itask->iscsilun->block_size = rc10->block_size;
itask->iscsilun->num_blocks = rc10->lba + 1;
itask->bs->total_sectors = itask->iscsilun->num_blocks *
itask->iscsilun->block_size / BDRV_SECTOR_SIZE ;
itask->status = 0;
itask->complete = 1;
scsi_free_scsi_task(task);
}
static void
iscsi_inquiry_cb(struct iscsi_context *iscsi, int status, void *command_data,
void *opaque)
{
struct IscsiTask *itask = opaque;
struct scsi_task *task = command_data;
struct scsi_inquiry_standard *inq;
if (status != 0) {
itask->status = 1;
itask->complete = 1;
scsi_free_scsi_task(task);
return;
}
inq = scsi_datain_unmarshall(task);
if (inq == NULL) {
error_report("iSCSI: Failed to unmarshall inquiry data.");
itask->status = 1;
itask->complete = 1;
scsi_free_scsi_task(task);
return;
}
itask->iscsilun->type = inq->periperal_device_type;
scsi_free_scsi_task(task);
switch (itask->iscsilun->type) {
case TYPE_DISK:
task = iscsi_readcapacity16_task(iscsi, itask->iscsilun->lun,
iscsi_readcapacity16_cb, opaque);
if (task == NULL) {
error_report("iSCSI: failed to send readcapacity16 command.");
itask->status = 1;
itask->complete = 1;
return;
}
break;
case TYPE_ROM:
task = iscsi_readcapacity10_task(iscsi, itask->iscsilun->lun,
0, 0,
iscsi_readcapacity10_cb, opaque);
if (task == NULL) {
error_report("iSCSI: failed to send readcapacity16 command.");
itask->status = 1;
itask->complete = 1;
return;
}
break;
default:
itask->status = 0;
itask->complete = 1;
}
}
static void
iscsi_connect_cb(struct iscsi_context *iscsi, int status, void *command_data,
void *opaque)
{
struct IscsiTask *itask = opaque;
struct scsi_task *task;
if (status != 0) {
itask->status = 1;
itask->complete = 1;
return;
}
task = iscsi_inquiry_task(iscsi, itask->iscsilun->lun,
0, 0, 36,
iscsi_inquiry_cb, opaque);
if (task == NULL) {
error_report("iSCSI: failed to send inquiry command.");
itask->status = 1;
itask->complete = 1;
return;
}
}
static int parse_chap(struct iscsi_context *iscsi, const char *target)
{
QemuOptsList *list;
@@ -743,26 +760,26 @@ static char *parse_initiator_name(const char *target)
QemuOptsList *list;
QemuOpts *opts;
const char *name = NULL;
const char *iscsi_name = qemu_get_vm_name();
list = qemu_find_opts("iscsi");
if (list) {
opts = qemu_opts_find(list, target);
if (!list) {
return g_strdup("iqn.2008-11.org.linux-kvm");
}
opts = qemu_opts_find(list, target);
if (opts == NULL) {
opts = QTAILQ_FIRST(&list->head);
if (!opts) {
opts = QTAILQ_FIRST(&list->head);
}
if (opts) {
name = qemu_opt_get(opts, "initiator-name");
return g_strdup("iqn.2008-11.org.linux-kvm");
}
}
if (name) {
return g_strdup(name);
} else {
return g_strdup_printf("iqn.2008-11.org.linux-kvm%s%s",
iscsi_name ? ":" : "",
iscsi_name ? iscsi_name : "");
name = qemu_opt_get(opts, "initiator-name");
if (!name) {
return g_strdup("iqn.2008-11.org.linux-kvm");
}
return g_strdup(name);
}
/*
@@ -774,10 +791,7 @@ static int iscsi_open(BlockDriverState *bs, const char *filename, int flags)
IscsiLun *iscsilun = bs->opaque;
struct iscsi_context *iscsi = NULL;
struct iscsi_url *iscsi_url = NULL;
struct scsi_task *task = NULL;
struct scsi_inquiry_standard *inq = NULL;
struct scsi_readcapacity10 *rc10 = NULL;
struct scsi_readcapacity16 *rc16 = NULL;
struct IscsiTask task;
char *initiator_name = NULL;
int ret;
@@ -790,9 +804,10 @@ static int iscsi_open(BlockDriverState *bs, const char *filename, int flags)
iscsi_url = iscsi_parse_full_url(iscsi, filename);
if (iscsi_url == NULL) {
error_report("Failed to parse URL : %s", filename);
error_report("Failed to parse URL : %s %s", filename,
iscsi_get_error(iscsi));
ret = -EINVAL;
goto out;
goto failed;
}
memset(iscsilun, 0, sizeof(IscsiLun));
@@ -803,13 +818,13 @@ static int iscsi_open(BlockDriverState *bs, const char *filename, int flags)
if (iscsi == NULL) {
error_report("iSCSI: Failed to create iSCSI context.");
ret = -ENOMEM;
goto out;
goto failed;
}
if (iscsi_set_targetname(iscsi, iscsi_url->target)) {
error_report("iSCSI: Failed to set target name.");
ret = -EINVAL;
goto out;
goto failed;
}
if (iscsi_url->user != NULL) {
@@ -818,7 +833,7 @@ static int iscsi_open(BlockDriverState *bs, const char *filename, int flags)
if (ret != 0) {
error_report("Failed to set initiator username and password");
ret = -EINVAL;
goto out;
goto failed;
}
}
@@ -826,13 +841,13 @@ static int iscsi_open(BlockDriverState *bs, const char *filename, int flags)
if (parse_chap(iscsi, iscsi_url->target) != 0) {
error_report("iSCSI: Failed to set CHAP user/password");
ret = -EINVAL;
goto out;
goto failed;
}
if (iscsi_set_session_type(iscsi, ISCSI_SESSION_NORMAL) != 0) {
error_report("iSCSI: Failed to set session type to normal.");
ret = -EINVAL;
goto out;
goto failed;
}
iscsi_set_header_digest(iscsi, ISCSI_HEADER_DIGEST_NONE_CRC32C);
@@ -840,108 +855,49 @@ static int iscsi_open(BlockDriverState *bs, const char *filename, int flags)
/* check if we got HEADER_DIGEST via the options */
parse_header_digest(iscsi, iscsi_url->target);
if (iscsi_full_connect_sync(iscsi, iscsi_url->portal, iscsi_url->lun) != 0) {
error_report("iSCSI: Failed to connect to LUN : %s",
iscsi_get_error(iscsi));
ret = -EINVAL;
goto out;
}
task.iscsilun = iscsilun;
task.status = 0;
task.complete = 0;
task.bs = bs;
iscsilun->iscsi = iscsi;
iscsilun->lun = iscsi_url->lun;
task = iscsi_inquiry_sync(iscsi, iscsilun->lun, 0, 0, 36);
if (task == NULL || task->status != SCSI_STATUS_GOOD) {
error_report("iSCSI: failed to send inquiry command.");
if (iscsi_full_connect_async(iscsi, iscsi_url->portal, iscsi_url->lun,
iscsi_connect_cb, &task)
!= 0) {
error_report("iSCSI: Failed to start async connect.");
ret = -EINVAL;
goto out;
goto failed;
}
inq = scsi_datain_unmarshall(task);
if (inq == NULL) {
error_report("iSCSI: Failed to unmarshall inquiry data.");
while (!task.complete) {
iscsi_set_events(iscsilun);
qemu_aio_wait();
}
if (task.status != 0) {
error_report("iSCSI: Failed to connect to LUN : %s",
iscsi_get_error(iscsi));
ret = -EINVAL;
goto out;
goto failed;
}
iscsilun->type = inq->periperal_device_type;
scsi_free_scsi_task(task);
switch (iscsilun->type) {
case TYPE_DISK:
task = iscsi_readcapacity16_sync(iscsi, iscsilun->lun);
if (task == NULL || task->status != SCSI_STATUS_GOOD) {
error_report("iSCSI: failed to send readcapacity16 command.");
ret = -EINVAL;
goto out;
}
rc16 = scsi_datain_unmarshall(task);
if (rc16 == NULL) {
error_report("iSCSI: Failed to unmarshall readcapacity16 data.");
ret = -EINVAL;
goto out;
}
iscsilun->block_size = rc16->block_length;
iscsilun->num_blocks = rc16->returned_lba + 1;
break;
case TYPE_ROM:
task = iscsi_readcapacity10_sync(iscsi, iscsilun->lun, 0, 0);
if (task == NULL || task->status != SCSI_STATUS_GOOD) {
error_report("iSCSI: failed to send readcapacity10 command.");
ret = -EINVAL;
goto out;
}
rc10 = scsi_datain_unmarshall(task);
if (rc10 == NULL) {
error_report("iSCSI: Failed to unmarshall readcapacity10 data.");
ret = -EINVAL;
goto out;
}
iscsilun->block_size = rc10->block_size;
if (rc10->lba == 0) {
/* blank disk loaded */
iscsilun->num_blocks = 0;
} else {
iscsilun->num_blocks = rc10->lba + 1;
}
break;
default:
break;
if (iscsi_url != NULL) {
iscsi_destroy_url(iscsi_url);
}
return 0;
bs->total_sectors = iscsilun->num_blocks *
iscsilun->block_size / BDRV_SECTOR_SIZE ;
/* Medium changer or tape. We don't have any emulation for this, so it must
* be sg ioctl compatible. We force it to be sg, otherwise qemu will try
* to read from the device to guess the image format.
*/
if (iscsilun->type == TYPE_MEDIUM_CHANGER ||
iscsilun->type == TYPE_TAPE) {
bs->sg = 1;
}
ret = 0;
out:
failed:
if (initiator_name != NULL) {
g_free(initiator_name);
}
if (iscsi_url != NULL) {
iscsi_destroy_url(iscsi_url);
}
if (task != NULL) {
scsi_free_scsi_task(task);
}
if (ret) {
if (iscsi != NULL) {
iscsi_destroy_context(iscsi);
}
memset(iscsilun, 0, sizeof(IscsiLun));
if (iscsi != NULL) {
iscsi_destroy_context(iscsi);
}
memset(iscsilun, 0, sizeof(IscsiLun));
return ret;
}
@@ -955,11 +911,6 @@ static void iscsi_close(BlockDriverState *bs)
memset(iscsilun, 0, sizeof(IscsiLun));
}
static int iscsi_has_zero_init(BlockDriverState *bs)
{
return 0;
}
static BlockDriver bdrv_iscsi = {
.format_name = "iscsi",
.protocol_name = "iscsi",
@@ -975,12 +926,6 @@ static BlockDriver bdrv_iscsi = {
.bdrv_aio_flush = iscsi_aio_flush,
.bdrv_aio_discard = iscsi_aio_discard,
.bdrv_has_zero_init = iscsi_has_zero_init,
#ifdef __linux__
.bdrv_ioctl = iscsi_ioctl,
.bdrv_aio_ioctl = iscsi_aio_ioctl,
#endif
};
static void iscsi_block_init(void)

block/nbd.c

@@ -196,7 +196,7 @@ static void nbd_restart_write(void *opaque)
}
static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
QEMUIOVector *qiov, int offset)
struct iovec *iov, int offset)
{
int rc, ret;
@@ -205,9 +205,8 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write,
nbd_have_request, s);
rc = nbd_send_request(s->sock, request);
if (rc >= 0 && qiov) {
ret = qemu_co_sendv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (rc >= 0 && iov) {
ret = qemu_co_sendv(s->sock, iov, request->len, offset);
if (ret != request->len) {
return -EIO;
}
@@ -221,7 +220,7 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
static void nbd_co_receive_reply(BDRVNBDState *s, struct nbd_request *request,
struct nbd_reply *reply,
QEMUIOVector *qiov, int offset)
struct iovec *iov, int offset)
{
int ret;
@@ -232,9 +231,8 @@ static void nbd_co_receive_reply(BDRVNBDState *s, struct nbd_request *request,
if (reply->handle != request->handle) {
reply->error = EIO;
} else {
if (qiov && reply->error == 0) {
ret = qemu_co_recvv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (iov && reply->error == 0) {
ret = qemu_co_recvv(s->sock, iov, request->len, offset);
if (ret != request->len) {
reply->error = EIO;
}
@@ -351,7 +349,7 @@ static int nbd_co_readv_1(BlockDriverState *bs, int64_t sector_num,
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, qiov, offset);
nbd_co_receive_reply(s, &request, &reply, qiov->iov, offset);
}
nbd_coroutine_end(s, &request);
return -reply.error;
@@ -376,7 +374,7 @@ static int nbd_co_writev_1(BlockDriverState *bs, int64_t sector_num,
request.len = nb_sectors * 512;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, qiov, offset);
ret = nbd_co_send_request(s, &request, qiov->iov, offset);
if (ret < 0) {
reply.error = -ret;
} else {

block/qcow.c

@@ -540,7 +540,7 @@ done:
qemu_co_mutex_unlock(&s->lock);
if (qiov->niov > 1) {
qemu_iovec_from_buf(qiov, 0, orig_buf, qiov->size);
qemu_iovec_from_buffer(qiov, orig_buf, qiov->size);
qemu_vfree(orig_buf);
}
@@ -569,7 +569,7 @@ static coroutine_fn int qcow_co_writev(BlockDriverState *bs, int64_t sector_num,
if (qiov->niov > 1) {
buf = orig_buf = qemu_blockalign(bs, qiov->size);
qemu_iovec_to_buf(qiov, 0, buf, qiov->size);
qemu_iovec_to_buffer(qiov, buf);
} else {
orig_buf = NULL;
buf = (uint8_t *)qiov->iov->iov_base;

block/qcow2-cache.c

@@ -40,9 +40,11 @@ struct Qcow2Cache {
struct Qcow2Cache* depends;
int size;
bool depends_on_flush;
bool writethrough;
};
Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables)
Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables,
bool writethrough)
{
BDRVQcowState *s = bs->opaque;
Qcow2Cache *c;
@@ -51,6 +53,7 @@ Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables)
c = g_malloc0(sizeof(*c));
c->size = num_tables;
c->entries = g_malloc0(sizeof(*c->entries) * num_tables);
c->writethrough = writethrough;
for (i = 0; i < c->size; i++) {
c->entries[i].table = qemu_blockalign(bs, s->cluster_size);
@@ -304,7 +307,12 @@ found:
*table = NULL;
assert(c->entries[i].ref >= 0);
return 0;
if (c->writethrough) {
return qcow2_cache_entry_flush(bs, c, i);
} else {
return 0;
}
}
void qcow2_cache_entry_mark_dirty(Qcow2Cache *c, void *table)
@@ -321,3 +329,16 @@ void qcow2_cache_entry_mark_dirty(Qcow2Cache *c, void *table)
found:
c->entries[i].dirty = true;
}
bool qcow2_cache_set_writethrough(BlockDriverState *bs, Qcow2Cache *c,
bool enable)
{
bool old = c->writethrough;
if (!old && enable) {
qcow2_cache_flush(bs, c);
}
c->writethrough = enable;
return old;
}

block/qcow2-cluster.c

@@ -540,6 +540,7 @@ static int get_cluster_table(BlockDriverState *bs, uint64_t offset,
if (l2_offset) {
qcow2_free_clusters(bs, l2_offset, s->l2_size * sizeof(uint64_t));
}
l2_offset = s->l1_table[l1_index] & L1E_OFFSET_MASK;
}
/* find the cluster offset for the given disk offset */
@@ -642,10 +643,11 @@ int qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m)
}
if (m->nb_available & (s->cluster_sectors - 1)) {
uint64_t end = m->nb_available & ~(uint64_t)(s->cluster_sectors - 1);
cow = true;
qemu_co_mutex_unlock(&s->lock);
ret = copy_sectors(bs, start_sect, cluster_offset, m->nb_available,
align_offset(m->nb_available, s->cluster_sectors));
ret = copy_sectors(bs, start_sect + end, cluster_offset + (end << 9),
m->nb_available - end, s->cluster_sectors);
qemu_co_mutex_lock(&s->lock);
if (ret < 0)
goto err;
@@ -662,10 +664,7 @@ int qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m)
qcow2_cache_depends_on_flush(s->l2_table_cache);
}
if (qcow2_need_accurate_refcounts(s)) {
qcow2_cache_set_dependency(bs, s->l2_table_cache,
s->refcount_block_cache);
}
qcow2_cache_set_dependency(bs, s->l2_table_cache, s->refcount_block_cache);
ret = get_cluster_table(bs, m->offset, &l2_table, &l2_index);
if (ret < 0) {
goto err;
@@ -950,16 +949,8 @@ again:
/* save info needed for meta data update */
if (nb_clusters > 0) {
/*
* requested_sectors: Number of sectors from the start of the first
* newly allocated cluster to the end of the (possibly shortened
* before) write request.
*
* avail_sectors: Number of sectors from the start of the first
* newly allocated to the end of the last newly allocated cluster.
*/
int requested_sectors = n_end - keep_clusters * s->cluster_sectors;
int avail_sectors = nb_clusters
int avail_sectors = (keep_clusters + nb_clusters)
<< (s->cluster_bits - BDRV_SECTOR_BITS);
*m = (QCowL2Meta) {

block/qcow2-refcount.c

@@ -301,8 +301,7 @@ static int alloc_refcount_block(BlockDriverState *bs,
uint64_t last_table_size;
uint64_t blocks_clusters;
do {
uint64_t table_clusters =
size_to_clusters(s, table_size * sizeof(uint64_t));
uint64_t table_clusters = size_to_clusters(s, table_size);
blocks_clusters = 1 +
((table_clusters + refcount_block_clusters - 1)
/ refcount_block_clusters);
@@ -628,11 +627,10 @@ int64_t qcow2_alloc_bytes(BlockDriverState *bs, int size)
BLKDBG_EVENT(bs->file, BLKDBG_CLUSTER_ALLOC_BYTES);
assert(size > 0 && size <= s->cluster_size);
if (s->free_byte_offset == 0) {
offset = qcow2_alloc_clusters(bs, s->cluster_size);
if (offset < 0) {
return offset;
s->free_byte_offset = qcow2_alloc_clusters(bs, s->cluster_size);
if (s->free_byte_offset < 0) {
return s->free_byte_offset;
}
s->free_byte_offset = offset;
}
redo:
free_in_cluster = s->cluster_size -
@@ -728,6 +726,13 @@ int qcow2_update_snapshot_refcount(BlockDriverState *bs,
int64_t old_offset, old_l2_offset;
int i, j, l1_modified = 0, nb_csectors, refcount;
int ret;
bool old_l2_writethrough, old_refcount_writethrough;
/* Switch caches to writeback mode during update */
old_l2_writethrough =
qcow2_cache_set_writethrough(bs, s->l2_table_cache, false);
old_refcount_writethrough =
qcow2_cache_set_writethrough(bs, s->refcount_block_cache, false);
l2_table = NULL;
l1_table = NULL;
@@ -851,6 +856,11 @@ fail:
qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
}
/* Enable writethrough cache mode again */
qcow2_cache_set_writethrough(bs, s->l2_table_cache, old_l2_writethrough);
qcow2_cache_set_writethrough(bs, s->refcount_block_cache,
old_refcount_writethrough);
/* Update L1 only if it isn't deleted anyway (addend = -1) */
if (addend >= 0 && l1_modified) {
for(i = 0; i < l1_size; i++)
@@ -1112,12 +1122,11 @@ fail:
* Returns 0 if no errors are found, the number of errors in case the image is
* detected as corrupted, and -errno when an internal error occurred.
*/
int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix)
int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res)
{
BDRVQcowState *s = bs->opaque;
int64_t size, i;
int nb_clusters, refcount1, refcount2;
int64_t size;
int nb_clusters, refcount1, refcount2, i;
QCowSnapshot *sn;
uint16_t *refcount_table;
int ret;
@@ -1161,15 +1170,14 @@ int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
/* Refcount blocks are cluster aligned */
if (offset & (s->cluster_size - 1)) {
fprintf(stderr, "ERROR refcount block %" PRId64 " is not "
fprintf(stderr, "ERROR refcount block %d is not "
"cluster aligned; refcount table entry corrupted\n", i);
res->corruptions++;
continue;
}
if (cluster >= nb_clusters) {
fprintf(stderr, "ERROR refcount block %" PRId64
" is outside image\n", i);
fprintf(stderr, "ERROR refcount block %d is outside image\n", i);
res->corruptions++;
continue;
}
@@ -1178,8 +1186,7 @@ int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
inc_refcounts(bs, res, refcount_table, nb_clusters,
offset, s->cluster_size);
if (refcount_table[cluster] != 1) {
fprintf(stderr, "ERROR refcount block %" PRId64
" refcount=%d\n",
fprintf(stderr, "ERROR refcount block %d refcount=%d\n",
i, refcount_table[cluster]);
res->corruptions++;
}
@@ -1190,7 +1197,7 @@ int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
for(i = 0; i < nb_clusters; i++) {
refcount1 = get_refcount(bs, i);
if (refcount1 < 0) {
fprintf(stderr, "Can't get refcount for cluster %" PRId64 ": %s\n",
fprintf(stderr, "Can't get refcount for cluster %d: %s\n",
i, strerror(-refcount1));
res->check_errors++;
continue;
@@ -1198,31 +1205,9 @@ int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
refcount2 = refcount_table[i];
if (refcount1 != refcount2) {
/* Check if we're allowed to fix the mismatch */
int *num_fixed = NULL;
if (refcount1 > refcount2 && (fix & BDRV_FIX_LEAKS)) {
num_fixed = &res->leaks_fixed;
} else if (refcount1 < refcount2 && (fix & BDRV_FIX_ERRORS)) {
num_fixed = &res->corruptions_fixed;
}
fprintf(stderr, "%s cluster %" PRId64 " refcount=%d reference=%d\n",
num_fixed != NULL ? "Repairing" :
refcount1 < refcount2 ? "ERROR" :
"Leaked",
fprintf(stderr, "%s cluster %d refcount=%d reference=%d\n",
refcount1 < refcount2 ? "ERROR" : "Leaked",
i, refcount1, refcount2);
if (num_fixed) {
ret = update_refcount(bs, i << s->cluster_bits, 1,
refcount2 - refcount1);
if (ret >= 0) {
(*num_fixed)++;
continue;
}
}
/* And if we couldn't, print an error */
if (refcount1 < refcount2) {
res->corruptions++;
} else {


@@ -405,7 +405,7 @@ int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
#ifdef DEBUG_ALLOC
{
BdrvCheckResult result = {0};
qcow2_check_refcounts(bs, &result, 0);
qcow2_check_refcounts(bs, &result);
}
#endif
return 0;
@@ -522,7 +522,7 @@ int qcow2_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
#ifdef DEBUG_ALLOC
{
BdrvCheckResult result = {0};
qcow2_check_refcounts(bs, &result, 0);
qcow2_check_refcounts(bs, &result);
}
#endif
return 0;
@@ -582,7 +582,7 @@ int qcow2_snapshot_delete(BlockDriverState *bs, const char *snapshot_id)
#ifdef DEBUG_ALLOC
{
BdrvCheckResult result = {0};
qcow2_check_refcounts(bs, &result, 0);
qcow2_check_refcounts(bs, &result);
}
#endif
return 0;


@@ -214,82 +214,13 @@ static void report_unsupported_feature(BlockDriverState *bs,
}
}
/*
* Sets the dirty bit and flushes afterwards if necessary.
*
* The incompatible_features bit is only set if the image file header was
* updated successfully. Therefore it is not required to check the return
* value of this function.
*/
static int qcow2_mark_dirty(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
uint64_t val;
int ret;
assert(s->qcow_version >= 3);
if (s->incompatible_features & QCOW2_INCOMPAT_DIRTY) {
return 0; /* already dirty */
}
val = cpu_to_be64(s->incompatible_features | QCOW2_INCOMPAT_DIRTY);
ret = bdrv_pwrite(bs->file, offsetof(QCowHeader, incompatible_features),
&val, sizeof(val));
if (ret < 0) {
return ret;
}
ret = bdrv_flush(bs->file);
if (ret < 0) {
return ret;
}
/* Only treat image as dirty if the header was updated successfully */
s->incompatible_features |= QCOW2_INCOMPAT_DIRTY;
return 0;
}
/*
* Clears the dirty bit and flushes before if necessary. Only call this
* function when there are no pending requests, it does not guard against
* concurrent requests dirtying the image.
*/
static int qcow2_mark_clean(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
if (s->incompatible_features & QCOW2_INCOMPAT_DIRTY) {
int ret = bdrv_flush(bs);
if (ret < 0) {
return ret;
}
s->incompatible_features &= ~QCOW2_INCOMPAT_DIRTY;
return qcow2_update_header(bs);
}
return 0;
}
static int qcow2_check(BlockDriverState *bs, BdrvCheckResult *result,
BdrvCheckMode fix)
{
int ret = qcow2_check_refcounts(bs, result, fix);
if (ret < 0) {
return ret;
}
if (fix && result->check_errors == 0 && result->corruptions == 0) {
return qcow2_mark_clean(bs);
}
return ret;
}
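/*
 * A minimal stand-alone sketch (not QEMU code) of the dirty-bit protocol
 * behind the qcow2_mark_dirty()/qcow2_mark_clean()/qcow2_check() trio shown
 * above: set the incompatible "dirty" bit and flush it to the image header
 * before any metadata update is postponed, and clear it again only once all
 * deferred updates have reached the disk. The struct image type and the
 * write_header() helper below are simplified assumptions, not the qcow2
 * on-disk layout.
 */
#include <stdint.h>
#include <stdio.h>

#define INCOMPAT_DIRTY (1ULL << 0)

struct image {
    uint64_t incompatible_features;   /* in-memory copy of the header field */
};

/* stands in for bdrv_pwrite() of the header field followed by bdrv_flush() */
static int write_header(struct image *img, uint64_t features)
{
    printf("header: incompatible_features=%#llx\n",
           (unsigned long long)features);
    return 0;
}

static int mark_dirty(struct image *img)
{
    if (img->incompatible_features & INCOMPAT_DIRTY) {
        return 0;                     /* already dirty, nothing to do */
    }
    if (write_header(img, img->incompatible_features | INCOMPAT_DIRTY) < 0) {
        return -1;
    }
    /* update the in-memory state only after the header is safely on disk */
    img->incompatible_features |= INCOMPAT_DIRTY;
    return 0;
}

static int mark_clean(struct image *img)
{
    if (img->incompatible_features & INCOMPAT_DIRTY) {
        /* flush all deferred refcount updates here, then clear the bit */
        img->incompatible_features &= ~INCOMPAT_DIRTY;
        return write_header(img, img->incompatible_features);
    }
    return 0;
}

int main(void)
{
    struct image img = { 0 };

    mark_dirty(&img);   /* before the first write that postpones refcounts */
    /* ... guest writes happen, refcount updates are deferred ... */
    mark_clean(&img);   /* on flush/close, after the metadata reached disk */
    return 0;
}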
static int qcow2_open(BlockDriverState *bs, int flags)
{
BDRVQcowState *s = bs->opaque;
int len, i, ret = 0;
QCowHeader header;
uint64_t ext_end;
bool writethrough;
ret = bdrv_pread(bs->file, 0, &header, sizeof(header));
if (ret < 0) {
@@ -357,13 +288,12 @@ static int qcow2_open(BlockDriverState *bs, int flags)
s->compatible_features = header.compatible_features;
s->autoclear_features = header.autoclear_features;
if (s->incompatible_features & ~QCOW2_INCOMPAT_MASK) {
if (s->incompatible_features != 0) {
void *feature_table = NULL;
qcow2_read_extensions(bs, header.header_length, ext_end,
&feature_table);
report_unsupported_feature(bs, feature_table,
s->incompatible_features &
~QCOW2_INCOMPAT_MASK);
s->incompatible_features);
ret = -ENOTSUP;
goto fail;
}
@@ -429,8 +359,10 @@ static int qcow2_open(BlockDriverState *bs, int flags)
}
/* alloc L2 table/refcount block cache */
s->l2_table_cache = qcow2_cache_create(bs, L2_CACHE_SIZE);
s->refcount_block_cache = qcow2_cache_create(bs, REFCOUNT_CACHE_SIZE);
writethrough = ((flags & BDRV_O_CACHE_WB) == 0);
s->l2_table_cache = qcow2_cache_create(bs, L2_CACHE_SIZE, writethrough);
s->refcount_block_cache = qcow2_cache_create(bs, REFCOUNT_CACHE_SIZE,
writethrough);
s->cluster_cache = g_malloc(s->cluster_size);
/* one more sector for decompressed data alignment */
@@ -483,21 +415,10 @@ static int qcow2_open(BlockDriverState *bs, int flags)
/* Initialise locks */
qemu_co_mutex_init(&s->lock);
/* Repair image if dirty */
if (!(flags & BDRV_O_CHECK) && !bs->read_only &&
(s->incompatible_features & QCOW2_INCOMPAT_DIRTY)) {
BdrvCheckResult result = {0};
ret = qcow2_check(bs, &result, BDRV_FIX_ERRORS);
if (ret < 0) {
goto fail;
}
}
#ifdef DEBUG_ALLOC
{
BdrvCheckResult result = {0};
qcow2_check_refcounts(bs, &result, 0);
qcow2_check_refcounts(bs, &result);
}
#endif
return ret;
@@ -590,7 +511,7 @@ int qcow2_backing_read1(BlockDriverState *bs, QEMUIOVector *qiov,
else
n1 = bs->total_sectors - sector_num;
qemu_iovec_memset(qiov, 512 * n1, 0, 512 * (nb_sectors - n1));
qemu_iovec_memset_skip(qiov, 0, 512 * (nb_sectors - n1), 512 * n1);
return n1;
}
@@ -629,7 +550,7 @@ static coroutine_fn int qcow2_co_readv(BlockDriverState *bs, int64_t sector_num,
index_in_cluster = sector_num & (s->cluster_sectors - 1);
qemu_iovec_reset(&hd_qiov);
qemu_iovec_concat(&hd_qiov, qiov, bytes_done,
qemu_iovec_copy(&hd_qiov, qiov, bytes_done,
cur_nr_sectors * 512);
switch (ret) {
@@ -651,7 +572,7 @@ static coroutine_fn int qcow2_co_readv(BlockDriverState *bs, int64_t sector_num,
}
} else {
/* Note: in this case, no need to wait */
qemu_iovec_memset(&hd_qiov, 0, 0, 512 * cur_nr_sectors);
qemu_iovec_memset(&hd_qiov, 0, 512 * cur_nr_sectors);
}
break;
@@ -660,7 +581,7 @@ static coroutine_fn int qcow2_co_readv(BlockDriverState *bs, int64_t sector_num,
ret = -EIO;
goto fail;
}
qemu_iovec_memset(&hd_qiov, 0, 0, 512 * cur_nr_sectors);
qemu_iovec_memset(&hd_qiov, 0, 512 * cur_nr_sectors);
break;
case QCOW2_CLUSTER_COMPRESSED:
@@ -670,7 +591,7 @@ static coroutine_fn int qcow2_co_readv(BlockDriverState *bs, int64_t sector_num,
goto fail;
}
qemu_iovec_from_buf(&hd_qiov, 0,
qemu_iovec_from_buffer(&hd_qiov,
s->cluster_cache + index_in_cluster * 512,
512 * cur_nr_sectors);
break;
@@ -710,8 +631,11 @@ static coroutine_fn int qcow2_co_readv(BlockDriverState *bs, int64_t sector_num,
if (s->crypt_method) {
qcow2_encrypt_sectors(s, sector_num, cluster_data,
cluster_data, cur_nr_sectors, 0, &s->aes_decrypt_key);
qemu_iovec_from_buf(qiov, bytes_done,
cluster_data, 512 * cur_nr_sectors);
qemu_iovec_reset(&hd_qiov);
qemu_iovec_copy(&hd_qiov, qiov, bytes_done,
cur_nr_sectors * 512);
qemu_iovec_from_buffer(&hd_qiov, cluster_data,
512 * cur_nr_sectors);
}
break;
@@ -796,16 +720,11 @@ static coroutine_fn int qcow2_co_writev(BlockDriverState *bs,
goto fail;
}
if (l2meta.nb_clusters > 0 &&
(s->compatible_features & QCOW2_COMPAT_LAZY_REFCOUNTS)) {
qcow2_mark_dirty(bs);
}
cluster_offset = l2meta.cluster_offset;
assert((cluster_offset & 511) == 0);
qemu_iovec_reset(&hd_qiov);
qemu_iovec_concat(&hd_qiov, qiov, bytes_done,
qemu_iovec_copy(&hd_qiov, qiov, bytes_done,
cur_nr_sectors * 512);
if (s->crypt_method) {
@@ -816,7 +735,7 @@ static coroutine_fn int qcow2_co_writev(BlockDriverState *bs,
assert(hd_qiov.size <=
QCOW_MAX_CRYPT_CLUSTERS * s->cluster_size);
qemu_iovec_to_buf(&hd_qiov, 0, cluster_data, hd_qiov.size);
qemu_iovec_to_buffer(&hd_qiov, cluster_data);
qcow2_encrypt_sectors(s, sector_num, cluster_data,
cluster_data, cur_nr_sectors, 1, &s->aes_encrypt_key);
@@ -872,8 +791,6 @@ static void qcow2_close(BlockDriverState *bs)
qcow2_cache_flush(bs, s->l2_table_cache);
qcow2_cache_flush(bs, s->refcount_block_cache);
qcow2_mark_clean(bs);
qcow2_cache_destroy(bs, s->l2_table_cache);
qcow2_cache_destroy(bs, s->refcount_block_cache);
@@ -1038,16 +955,7 @@ int qcow2_update_header(BlockDriverState *bs)
/* Feature table */
Qcow2Feature features[] = {
{
.type = QCOW2_FEAT_TYPE_INCOMPATIBLE,
.bit = QCOW2_INCOMPAT_DIRTY_BITNR,
.name = "dirty bit",
},
{
.type = QCOW2_FEAT_TYPE_COMPATIBLE,
.bit = QCOW2_COMPAT_LAZY_REFCOUNTS_BITNR,
.name = "lazy refcounts",
},
/* no feature defined yet */
};
ret = header_ext_add(buf, QCOW2_EXT_MAGIC_FEATURE_TABLE,
@@ -1230,11 +1138,6 @@ static int qcow2_create2(const char *filename, int64_t total_size,
header.crypt_method = cpu_to_be32(QCOW_CRYPT_NONE);
}
if (flags & BLOCK_FLAG_LAZY_REFCOUNTS) {
header.compatible_features |=
cpu_to_be64(QCOW2_COMPAT_LAZY_REFCOUNTS);
}
ret = bdrv_pwrite(bs, 0, &header, sizeof(header));
if (ret < 0) {
goto out;
@@ -1348,8 +1251,6 @@ static int qcow2_create(const char *filename, QEMUOptionParameter *options)
options->value.s);
return -EINVAL;
}
} else if (!strcmp(options->name, BLOCK_OPT_LAZY_REFCOUNTS)) {
flags |= options->value.n ? BLOCK_FLAG_LAZY_REFCOUNTS : 0;
}
options++;
}
@@ -1360,12 +1261,6 @@ static int qcow2_create(const char *filename, QEMUOptionParameter *options)
return -EINVAL;
}
if (version < 3 && (flags & BLOCK_FLAG_LAZY_REFCOUNTS)) {
fprintf(stderr, "Lazy refcounts only supported with compatibility "
"level 1.1 and above (use compat=1.1 or greater)\n");
return -EINVAL;
}
return qcow2_create2(filename, sectors, backing_file, backing_fmt, flags,
cluster_size, prealloc, options, version);
}
@@ -1552,12 +1447,10 @@ static coroutine_fn int qcow2_co_flush_to_os(BlockDriverState *bs)
return ret;
}
if (qcow2_need_accurate_refcounts(s)) {
ret = qcow2_cache_flush(bs, s->refcount_block_cache);
if (ret < 0) {
qemu_co_mutex_unlock(&s->lock);
return ret;
}
ret = qcow2_cache_flush(bs, s->refcount_block_cache);
if (ret < 0) {
qemu_co_mutex_unlock(&s->lock);
return ret;
}
qemu_co_mutex_unlock(&s->lock);
@@ -1577,6 +1470,12 @@ static int qcow2_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
return 0;
}
static int qcow2_check(BlockDriverState *bs, BdrvCheckResult *result)
{
return qcow2_check_refcounts(bs, result);
}
#if 0
static void dump_refcounts(BlockDriverState *bs)
{
@@ -1665,11 +1564,6 @@ static QEMUOptionParameter qcow2_create_options[] = {
.type = OPT_STRING,
.help = "Preallocation mode (allowed values: off, metadata)"
},
{
.name = BLOCK_OPT_LAZY_REFCOUNTS,
.type = OPT_FLAG,
.help = "Postpone refcount updates",
},
{ NULL }
};


@@ -110,22 +110,6 @@ enum {
QCOW2_FEAT_TYPE_AUTOCLEAR = 2,
};
/* Incompatible feature bits */
enum {
QCOW2_INCOMPAT_DIRTY_BITNR = 0,
QCOW2_INCOMPAT_DIRTY = 1 << QCOW2_INCOMPAT_DIRTY_BITNR,
QCOW2_INCOMPAT_MASK = QCOW2_INCOMPAT_DIRTY,
};
/* Compatible feature bits */
enum {
QCOW2_COMPAT_LAZY_REFCOUNTS_BITNR = 0,
QCOW2_COMPAT_LAZY_REFCOUNTS = 1 << QCOW2_COMPAT_LAZY_REFCOUNTS_BITNR,
QCOW2_COMPAT_FEAT_MASK = QCOW2_COMPAT_LAZY_REFCOUNTS,
};
typedef struct Qcow2Feature {
uint8_t type;
uint8_t bit;
@@ -253,11 +237,6 @@ static inline int qcow2_get_cluster_type(uint64_t l2_entry)
}
}
/* Check whether refcounts are eager or lazy */
static inline bool qcow2_need_accurate_refcounts(BDRVQcowState *s)
{
return !(s->incompatible_features & QCOW2_INCOMPAT_DIRTY);
}
// FIXME Need qcow2_ prefix to global functions
@@ -282,8 +261,7 @@ void qcow2_free_any_clusters(BlockDriverState *bs,
int qcow2_update_snapshot_refcount(BlockDriverState *bs,
int64_t l1_table_offset, int l1_size, int addend);
int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix);
int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res);
/* qcow2-cluster.c functions */
int qcow2_grow_l1_table(BlockDriverState *bs, int min_size, bool exact_size);
@@ -318,8 +296,11 @@ void qcow2_free_snapshots(BlockDriverState *bs);
int qcow2_read_snapshots(BlockDriverState *bs);
/* qcow2-cache.c functions */
Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables);
Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables,
bool writethrough);
int qcow2_cache_destroy(BlockDriverState* bs, Qcow2Cache *c);
bool qcow2_cache_set_writethrough(BlockDriverState *bs, Qcow2Cache *c,
bool enable);
void qcow2_cache_entry_mark_dirty(Qcow2Cache *c, void *table);
int qcow2_cache_flush(BlockDriverState *bs, Qcow2Cache *c);


@@ -87,7 +87,6 @@ static unsigned int qed_check_l2_table(QEDCheck *check, QEDTable *table)
if (!qed_check_cluster_offset(s, offset)) {
if (check->fix) {
table->offsets[i] = 0;
check->result->corruptions_fixed++;
} else {
check->result->corruptions++;
}
@@ -128,7 +127,6 @@ static int qed_check_l1_table(QEDCheck *check, QEDTable *table)
/* Clear invalid offset */
if (check->fix) {
table->offsets[i] = 0;
check->result->corruptions_fixed++;
} else {
check->result->corruptions++;
}
@@ -194,28 +192,6 @@ static void qed_check_for_leaks(QEDCheck *check)
}
}
/**
* Mark an image clean once it passes check or has been repaired
*/
static void qed_check_mark_clean(BDRVQEDState *s, BdrvCheckResult *result)
{
/* Skip if there were unfixable corruptions or I/O errors */
if (result->corruptions > 0 || result->check_errors > 0) {
return;
}
/* Skip if image is already marked clean */
if (!(s->header.features & QED_F_NEED_CHECK)) {
return;
}
/* Ensure fixes reach storage before clearing check bit */
bdrv_flush(s->bs);
s->header.features &= ~QED_F_NEED_CHECK;
qed_write_header_sync(s);
}
int qed_check(BDRVQEDState *s, BdrvCheckResult *result, bool fix)
{
QEDCheck check = {
@@ -237,10 +213,6 @@ int qed_check(BDRVQEDState *s, BdrvCheckResult *result, bool fix)
if (ret == 0) {
/* Only check for leaks if entire image was scanned successfully */
qed_check_for_leaks(&check);
if (fix) {
qed_check_mark_clean(s, result);
}
}
g_free(check.used_clusters);


@@ -89,7 +89,7 @@ static void qed_header_cpu_to_le(const QEDHeader *cpu, QEDHeader *le)
le->backing_filename_size = cpu_to_le32(cpu->backing_filename_size);
}
int qed_write_header_sync(BDRVQEDState *s)
static int qed_write_header_sync(BDRVQEDState *s)
{
QEDHeader le;
int ret;
@@ -477,7 +477,7 @@ static int bdrv_qed_open(BlockDriverState *bs, int flags)
}
/* If image was not closed cleanly, check consistency */
if (!(flags & BDRV_O_CHECK) && (s->header.features & QED_F_NEED_CHECK)) {
if (s->header.features & QED_F_NEED_CHECK) {
/* Read-only images cannot be fixed. There is no risk of corruption
* since write operations are not possible. Therefore, allow
* potentially inconsistent images to be opened read-only. This can
@@ -491,6 +491,13 @@ static int bdrv_qed_open(BlockDriverState *bs, int flags)
if (ret) {
goto out;
}
if (!result.corruptions && !result.check_errors) {
/* Ensure fixes reach storage before clearing check bit */
bdrv_flush(s->bs);
s->header.features &= ~QED_F_NEED_CHECK;
qed_write_header_sync(s);
}
}
}
@@ -729,7 +736,7 @@ static void qed_read_backing_file(BDRVQEDState *s, uint64_t pos,
/* Zero all sectors if reading beyond the end of the backing file */
if (pos >= backing_length ||
pos + qiov->size > backing_length) {
qemu_iovec_memset(qiov, 0, 0, qiov->size);
qemu_iovec_memset(qiov, 0, qiov->size);
}
/* Complete now if there are no backing file sectors to read */
@@ -741,7 +748,7 @@ static void qed_read_backing_file(BDRVQEDState *s, uint64_t pos,
/* If the read straddles the end of the backing file, shorten it */
size = MIN((uint64_t)backing_length - pos, qiov->size);
BLKDBG_EVENT(s->bs->file, BLKDBG_READ_BACKING_AIO);
BLKDBG_EVENT(s->bs->file, BLKDBG_READ_BACKING);
bdrv_aio_readv(s->bs->backing_hd, pos / BDRV_SECTOR_SIZE,
qiov, size / BDRV_SECTOR_SIZE, cb, opaque);
}
@@ -1124,7 +1131,7 @@ static void qed_aio_write_alloc(QEDAIOCB *acb, size_t len)
acb->cur_nclusters = qed_bytes_to_clusters(s,
qed_offset_into_cluster(s, acb->cur_pos) + len);
qemu_iovec_concat(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
qemu_iovec_copy(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
if (acb->flags & QED_AIOCB_ZERO) {
/* Skip ahead if the clusters are already zero */
@@ -1170,7 +1177,7 @@ static void qed_aio_write_inplace(QEDAIOCB *acb, uint64_t offset, size_t len)
/* Calculate the I/O vector */
acb->cur_cluster = offset;
qemu_iovec_concat(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
qemu_iovec_copy(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
/* Do the actual write */
qed_aio_write_main(acb, 0);
@@ -1240,11 +1247,11 @@ static void qed_aio_read_data(void *opaque, int ret,
goto err;
}
qemu_iovec_concat(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
qemu_iovec_copy(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
/* Handle zero cluster and backing file reads */
if (ret == QED_CLUSTER_ZERO) {
qemu_iovec_memset(&acb->cur_qiov, 0, 0, acb->cur_qiov.size);
qemu_iovec_memset(&acb->cur_qiov, 0, acb->cur_qiov.size);
qed_aio_next_io(acb, 0);
return;
} else if (ret != QED_CLUSTER_FOUND) {
@@ -1363,21 +1370,10 @@ static int coroutine_fn bdrv_qed_co_write_zeroes(BlockDriverState *bs,
int nb_sectors)
{
BlockDriverAIOCB *blockacb;
BDRVQEDState *s = bs->opaque;
QEDWriteZeroesCB cb = { .done = false };
QEMUIOVector qiov;
struct iovec iov;
/* Refuse if there are untouched backing file sectors */
if (bs->backing_hd) {
if (qed_offset_into_cluster(s, sector_num * BDRV_SECTOR_SIZE) != 0) {
return -ENOTSUP;
}
if (qed_offset_into_cluster(s, nb_sectors * BDRV_SECTOR_SIZE) != 0) {
return -ENOTSUP;
}
}
/* Zero writes start without an I/O buffer. If a buffer becomes necessary
* then it will be allocated during request processing.
*/
@@ -1521,12 +1517,11 @@ static void bdrv_qed_invalidate_cache(BlockDriverState *bs)
bdrv_qed_open(bs, bs->open_flags);
}
static int bdrv_qed_check(BlockDriverState *bs, BdrvCheckResult *result,
BdrvCheckMode fix)
static int bdrv_qed_check(BlockDriverState *bs, BdrvCheckResult *result)
{
BDRVQEDState *s = bs->opaque;
return qed_check(s, result, !!fix);
return qed_check(s, result, false);
}
static QEMUOptionParameter qed_create_options[] = {


@@ -210,11 +210,6 @@ typedef struct {
void *gencb_alloc(size_t len, BlockDriverCompletionFunc *cb, void *opaque);
void gencb_complete(void *opaque, int ret);
/**
* Header functions
*/
int qed_write_header_sync(BDRVQEDState *s);
/**
* L2 cache functions
*/


@@ -52,10 +52,6 @@
#include <sys/param.h>
#include <linux/cdrom.h>
#include <linux/fd.h>
#include <linux/fs.h>
#endif
#ifdef CONFIG_FIEMAP
#include <linux/fiemap.h>
#endif
#if defined (__FreeBSD__) || defined(__FreeBSD_kernel__)
#include <sys/disk.h>
@@ -271,7 +267,7 @@ static int raw_open_common(BlockDriverState *bs, const char *filename,
out_free_buf:
qemu_vfree(s->aligned_buf);
out_close:
qemu_close(fd);
close(fd);
return -errno;
}
@@ -376,7 +372,7 @@ static void raw_close(BlockDriverState *bs)
{
BDRVRawState *s = bs->opaque;
if (s->fd >= 0) {
qemu_close(s->fd);
close(s->fd);
s->fd = -1;
if (s->aligned_buf != NULL)
qemu_vfree(s->aligned_buf);
@@ -572,121 +568,21 @@ static int raw_create(const char *filename, QEMUOptionParameter *options)
options++;
}
fd = qemu_open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY,
0644);
fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY,
0644);
if (fd < 0) {
result = -errno;
} else {
if (ftruncate(fd, total_size * BDRV_SECTOR_SIZE) != 0) {
result = -errno;
}
if (qemu_close(fd) != 0) {
if (close(fd) != 0) {
result = -errno;
}
}
return result;
}
/*
* Returns true iff the specified sector is present in the disk image. Drivers
* not implementing the functionality are assumed to not support backing files,
* hence all their sectors are reported as allocated.
*
* If 'sector_num' is beyond the end of the disk image the return value is 0
* and 'pnum' is set to 0.
*
* 'pnum' is set to the number of sectors (including and immediately following
* the specified sector) that are known to be in the same
* allocated/unallocated state.
*
* 'nb_sectors' is the max value 'pnum' should be set to. If nb_sectors goes
* beyond the end of the disk image it will be clamped.
*/
static int coroutine_fn raw_co_is_allocated(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors, int *pnum)
{
off_t start, data, hole;
int ret;
ret = fd_open(bs);
if (ret < 0) {
return ret;
}
start = sector_num * BDRV_SECTOR_SIZE;
#ifdef CONFIG_FIEMAP
BDRVRawState *s = bs->opaque;
struct {
struct fiemap fm;
struct fiemap_extent fe;
} f;
f.fm.fm_start = start;
f.fm.fm_length = (int64_t)nb_sectors * BDRV_SECTOR_SIZE;
f.fm.fm_flags = 0;
f.fm.fm_extent_count = 1;
f.fm.fm_reserved = 0;
if (ioctl(s->fd, FS_IOC_FIEMAP, &f) == -1) {
/* Assume everything is allocated. */
*pnum = nb_sectors;
return 1;
}
if (f.fm.fm_mapped_extents == 0) {
/* No extents found, data is beyond f.fm.fm_start + f.fm.fm_length.
* f.fm.fm_start + f.fm.fm_length must be clamped to the file size!
*/
off_t length = lseek(s->fd, 0, SEEK_END);
hole = f.fm.fm_start;
data = MIN(f.fm.fm_start + f.fm.fm_length, length);
} else {
data = f.fe.fe_logical;
hole = f.fe.fe_logical + f.fe.fe_length;
}
#elif defined SEEK_HOLE && defined SEEK_DATA
BDRVRawState *s = bs->opaque;
hole = lseek(s->fd, start, SEEK_HOLE);
if (hole == -1) {
/* -ENXIO indicates that sector_num was past the end of the file.
* There is a virtual hole there. */
assert(errno != -ENXIO);
/* Most likely EINVAL. Assume everything is allocated. */
*pnum = nb_sectors;
return 1;
}
if (hole > start) {
data = start;
} else {
/* On a hole. We need another syscall to find its end. */
data = lseek(s->fd, start, SEEK_DATA);
if (data == -1) {
data = lseek(s->fd, 0, SEEK_END);
}
}
#else
*pnum = nb_sectors;
return 1;
#endif
if (data <= start) {
/* On a data extent, compute sectors to the end of the extent. */
*pnum = MIN(nb_sectors, (hole - start) / BDRV_SECTOR_SIZE);
return 1;
} else {
/* On a hole, compute sectors to the beginning of the next extent. */
*pnum = MIN(nb_sectors, (data - start) / BDRV_SECTOR_SIZE);
return 0;
}
}
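/*
 * Stand-alone sketch of the SEEK_HOLE/SEEK_DATA probing technique used by
 * raw_co_is_allocated() above (assumes a Linux host whose lseek() supports
 * SEEK_HOLE and SEEK_DATA). Given a file and an offset, it reports whether
 * the offset sits in a data extent or in a hole and where that region ends.
 * Illustration only, not QEMU code.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <file> <offset>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    off_t start = strtoll(argv[2], NULL, 0);

    off_t hole = lseek(fd, start, SEEK_HOLE);
    if (hole == -1) {
        /* EINVAL (unsupported) or ENXIO (past EOF): fall back to "allocated",
         * which is what the block driver does as well */
        printf("allocated (no hole information)\n");
        close(fd);
        return 0;
    }

    if (hole > start) {
        /* inside a data extent that runs up to 'hole' */
        printf("data: %lld..%lld\n", (long long)start, (long long)hole);
    } else {
        /* inside a hole; another lseek() finds where the next data begins */
        off_t data = lseek(fd, start, SEEK_DATA);
        if (data == -1) {
            data = lseek(fd, 0, SEEK_END);   /* hole extends to end of file */
        }
        printf("hole: %lld..%lld\n", (long long)start, (long long)data);
    }
    close(fd);
    return 0;
}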
#ifdef CONFIG_XFS
static int xfs_discard(BDRVRawState *s, int64_t sector_num, int nb_sectors)
{
@@ -738,7 +634,6 @@ static BlockDriver bdrv_file = {
.bdrv_close = raw_close,
.bdrv_create = raw_create,
.bdrv_co_discard = raw_co_discard,
.bdrv_co_is_allocated = raw_co_is_allocated,
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
@@ -846,11 +741,11 @@ static int hdev_open(BlockDriverState *bs, const char *filename, int flags)
if ( bsdPath[ 0 ] != '\0' ) {
strcat(bsdPath,"s0");
/* some CDs don't have a partition 0 */
fd = qemu_open(bsdPath, O_RDONLY | O_BINARY | O_LARGEFILE);
fd = open(bsdPath, O_RDONLY | O_BINARY | O_LARGEFILE);
if (fd < 0) {
bsdPath[strlen(bsdPath)-1] = '1';
} else {
qemu_close(fd);
close(fd);
}
filename = bsdPath;
}
@@ -889,7 +784,7 @@ static int fd_open(BlockDriverState *bs)
last_media_present = (s->fd >= 0);
if (s->fd >= 0 &&
(get_clock() - s->fd_open_time) >= FD_OPEN_TIMEOUT) {
qemu_close(s->fd);
close(s->fd);
s->fd = -1;
#ifdef DEBUG_FLOPPY
printf("Floppy closed\n");
@@ -903,7 +798,7 @@ static int fd_open(BlockDriverState *bs)
#endif
return -EIO;
}
s->fd = qemu_open(bs->filename, s->open_flags & ~O_NONBLOCK);
s->fd = open(bs->filename, s->open_flags & ~O_NONBLOCK);
if (s->fd < 0) {
s->fd_error_time = get_clock();
s->fd_got_error = 1;
@@ -977,7 +872,7 @@ static int hdev_create(const char *filename, QEMUOptionParameter *options)
options++;
}
fd = qemu_open(filename, O_WRONLY | O_BINARY);
fd = open(filename, O_WRONLY | O_BINARY);
if (fd < 0)
return -errno;
@@ -988,7 +883,7 @@ static int hdev_create(const char *filename, QEMUOptionParameter *options)
else if (lseek(fd, 0, SEEK_END) < total_size * BDRV_SECTOR_SIZE)
ret = -ENOSPC;
qemu_close(fd);
close(fd);
return ret;
}
@@ -1038,7 +933,7 @@ static int floppy_open(BlockDriverState *bs, const char *filename, int flags)
return ret;
/* close fd so that we can reopen it as needed */
qemu_close(s->fd);
close(s->fd);
s->fd = -1;
s->fd_media_changed = 1;
@@ -1052,12 +947,10 @@ static int floppy_probe_device(const char *filename)
struct floppy_struct fdparam;
struct stat st;
if (strstart(filename, "/dev/fd", NULL) &&
!strstart(filename, "/dev/fdset/", NULL)) {
if (strstart(filename, "/dev/fd", NULL))
prio = 50;
}
fd = qemu_open(filename, O_RDONLY | O_NONBLOCK);
fd = open(filename, O_RDONLY | O_NONBLOCK);
if (fd < 0) {
goto out;
}
@@ -1072,7 +965,7 @@ static int floppy_probe_device(const char *filename)
prio = 100;
outc:
qemu_close(fd);
close(fd);
out:
return prio;
}
@@ -1107,14 +1000,14 @@ static void floppy_eject(BlockDriverState *bs, bool eject_flag)
int fd;
if (s->fd >= 0) {
qemu_close(s->fd);
close(s->fd);
s->fd = -1;
}
fd = qemu_open(bs->filename, s->open_flags | O_NONBLOCK);
fd = open(bs->filename, s->open_flags | O_NONBLOCK);
if (fd >= 0) {
if (ioctl(fd, FDEJECT, 0) < 0)
perror("FDEJECT");
qemu_close(fd);
close(fd);
}
}
@@ -1160,7 +1053,7 @@ static int cdrom_probe_device(const char *filename)
int prio = 0;
struct stat st;
fd = qemu_open(filename, O_RDONLY | O_NONBLOCK);
fd = open(filename, O_RDONLY | O_NONBLOCK);
if (fd < 0) {
goto out;
}
@@ -1175,7 +1068,7 @@ static int cdrom_probe_device(const char *filename)
prio = 100;
outc:
qemu_close(fd);
close(fd);
out:
return prio;
}
@@ -1283,8 +1176,8 @@ static int cdrom_reopen(BlockDriverState *bs)
* FreeBSD seems to not notice sometimes...
*/
if (s->fd >= 0)
qemu_close(s->fd);
fd = qemu_open(bs->filename, s->open_flags, 0644);
close(s->fd);
fd = open(bs->filename, s->open_flags, 0644);
if (fd < 0) {
s->fd = -1;
return -EIO;


@@ -255,13 +255,13 @@ static int raw_create(const char *filename, QEMUOptionParameter *options)
options++;
}
fd = qemu_open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY,
0644);
fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY,
0644);
if (fd < 0)
return -EIO;
set_sparse(fd);
ftruncate(fd, total_size * 512);
qemu_close(fd);
close(fd);
return 0;
}


@@ -12,14 +12,12 @@ static int raw_open(BlockDriverState *bs, int flags)
static int coroutine_fn raw_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
return bdrv_co_readv(bs->file, sector_num, nb_sectors, qiov);
}
static int coroutine_fn raw_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
BLKDBG_EVENT(bs->file, BLKDBG_WRITE_AIO);
return bdrv_co_writev(bs->file, sector_num, nb_sectors, qiov);
}
@@ -27,13 +25,6 @@ static void raw_close(BlockDriverState *bs)
{
}
static int coroutine_fn raw_co_is_allocated(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors, int *pnum)
{
return bdrv_co_is_allocated(bs->file, sector_num, nb_sectors, pnum);
}
static int64_t raw_getlength(BlockDriverState *bs)
{
return bdrv_getlength(bs->file);
@@ -117,7 +108,6 @@ static BlockDriver bdrv_raw = {
.bdrv_co_readv = raw_co_readv,
.bdrv_co_writev = raw_co_writev,
.bdrv_co_is_allocated = raw_co_is_allocated,
.bdrv_co_discard = raw_co_discard,
.bdrv_probe = raw_probe,


@@ -476,25 +476,6 @@ static int qemu_rbd_open(BlockDriverState *bs, const char *filename, int flags)
s->snap = g_strdup(snap_buf);
}
/*
* Fallback to more conservative semantics if setting cache
* options fails. Ignore errors from setting rbd_cache because the
* only possible error is that the option does not exist, and
* librbd defaults to no caching. If write through caching cannot
* be set up, fall back to no caching.
*/
if (flags & BDRV_O_NOCACHE) {
rados_conf_set(s->cluster, "rbd_cache", "false");
} else {
rados_conf_set(s->cluster, "rbd_cache", "true");
if (!(flags & BDRV_O_CACHE_WB)) {
r = rados_conf_set(s->cluster, "rbd_cache_max_dirty", "0");
if (r < 0) {
rados_conf_set(s->cluster, "rbd_cache", "false");
}
}
}
if (strstr(conf, "conf=") == NULL) {
/* try default location, but ignore failure */
rados_conf_read_file(s->cluster, NULL);
@@ -639,7 +620,7 @@ static void rbd_aio_bh_cb(void *opaque)
RBDAIOCB *acb = opaque;
if (acb->cmd == RBD_AIO_READ) {
qemu_iovec_from_buf(acb->qiov, 0, acb->bounce, acb->qiov->size);
qemu_iovec_from_buffer(acb->qiov, acb->bounce, acb->qiov->size);
}
qemu_vfree(acb->bounce);
acb->common.cb(acb->common.opaque, (acb->ret > 0 ? 0 : acb->ret));
@@ -693,7 +674,7 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
acb->bh = NULL;
if (cmd == RBD_AIO_WRITE) {
qemu_iovec_to_buf(acb->qiov, 0, acb->bounce, qiov->size);
qemu_iovec_to_buffer(acb->qiov, acb->bounce);
}
buf = acb->bounce;
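/*
 * Sketch of the bounce-buffer pattern used by rbd_start_aio() and
 * rbd_aio_bh_cb() above: gather a scatter/gather list into one linear buffer
 * before a write, scatter the linear buffer back into the iovec after a
 * read. gather()/scatter() are local stand-ins, not the QEMU qemu_iovec_*
 * API.
 */
#include <string.h>
#include <sys/uio.h>

/* copy iovec contents into a linear bounce buffer (before a write) */
static size_t gather(const struct iovec *iov, int niov, void *bounce)
{
    size_t done = 0;
    for (int i = 0; i < niov; i++) {
        memcpy((char *)bounce + done, iov[i].iov_base, iov[i].iov_len);
        done += iov[i].iov_len;
    }
    return done;
}

/* copy a linear bounce buffer back into the iovec (after a read) */
static size_t scatter(const struct iovec *iov, int niov, const void *bounce)
{
    size_t done = 0;
    for (int i = 0; i < niov; i++) {
        memcpy(iov[i].iov_base, (const char *)bounce + done, iov[i].iov_len);
        done += iov[i].iov_len;
    }
    return done;
}

int main(void)
{
    char a[4] = { 'a', 'b', 'c', 'd' };
    char b[3] = { 'e', 'f', 'g' };
    struct iovec iov[2] = { { a, sizeof(a) }, { b, sizeof(b) } };
    char bounce[7];

    gather(iov, 2, bounce);    /* like qemu_iovec_to_buffer() before a write */
    bounce[0] = 'X';           /* pretend the device modified the data */
    scatter(iov, 2, bounce);   /* like qemu_iovec_from_buffer() after a read */
    return a[0] == 'X' ? 0 : 1;
}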


@@ -259,7 +259,8 @@ typedef struct AIOReq {
uint8_t flags;
uint32_t id;
QLIST_ENTRY(AIOReq) aio_siblings;
QLIST_ENTRY(AIOReq) outstanding_aio_siblings;
QLIST_ENTRY(AIOReq) aioreq_siblings;
} AIOReq;
enum AIOCBState {
@@ -282,7 +283,8 @@ struct SheepdogAIOCB {
void (*aio_done_func)(SheepdogAIOCB *);
int canceled;
int nr_pending;
QLIST_HEAD(aioreq_head, AIOReq) aioreq_head;
};
typedef struct BDRVSheepdogState {
@@ -305,8 +307,7 @@ typedef struct BDRVSheepdogState {
Coroutine *co_recv;
uint32_t aioreq_seq_num;
QLIST_HEAD(inflight_aio_head, AIOReq) inflight_aio_head;
QLIST_HEAD(pending_aio_head, AIOReq) pending_aio_head;
QLIST_HEAD(outstanding_aio_head, AIOReq) outstanding_aio_head;
} BDRVSheepdogState;
static const char * sd_strerror(int err)
@@ -357,7 +358,7 @@ static const char * sd_strerror(int err)
* Sheepdog I/O handling:
*
* 1. In sd_co_rw_vector, we send the I/O requests to the server and
* link the requests to the inflight_list in the
* link the requests to the outstanding_list in the
* BDRVSheepdogState. The function exits without waiting for
* receiving the response.
*
@@ -385,18 +386,21 @@ static inline AIOReq *alloc_aio_req(BDRVSheepdogState *s, SheepdogAIOCB *acb,
aio_req->flags = flags;
aio_req->id = s->aioreq_seq_num++;
acb->nr_pending++;
QLIST_INSERT_HEAD(&s->outstanding_aio_head, aio_req,
outstanding_aio_siblings);
QLIST_INSERT_HEAD(&acb->aioreq_head, aio_req, aioreq_siblings);
return aio_req;
}
static inline void free_aio_req(BDRVSheepdogState *s, AIOReq *aio_req)
static inline int free_aio_req(BDRVSheepdogState *s, AIOReq *aio_req)
{
SheepdogAIOCB *acb = aio_req->aiocb;
QLIST_REMOVE(aio_req, aio_siblings);
QLIST_REMOVE(aio_req, outstanding_aio_siblings);
QLIST_REMOVE(aio_req, aioreq_siblings);
g_free(aio_req);
acb->nr_pending--;
return !QLIST_EMPTY(&acb->aioreq_head);
}
static void coroutine_fn sd_finish_aiocb(SheepdogAIOCB *acb)
@@ -442,7 +446,7 @@ static SheepdogAIOCB *sd_aio_setup(BlockDriverState *bs, QEMUIOVector *qiov,
acb->canceled = 0;
acb->coroutine = qemu_coroutine_self();
acb->ret = 0;
acb->nr_pending = 0;
QLIST_INIT(&acb->aioreq_head);
return acb;
}
@@ -485,7 +489,6 @@ static int connect_to_sdog(const char *addr, const char *port)
if (errno == EINTR) {
goto reconnect;
}
close(fd);
break;
}
@@ -499,8 +502,28 @@ success:
return fd;
}
static coroutine_fn int send_co_req(int sockfd, SheepdogReq *hdr, void *data,
unsigned int *wlen)
static int send_req(int sockfd, SheepdogReq *hdr, void *data,
unsigned int *wlen)
{
int ret;
ret = qemu_send_full(sockfd, hdr, sizeof(*hdr), 0);
if (ret < sizeof(*hdr)) {
error_report("failed to send a req, %s", strerror(errno));
return -errno;
}
ret = qemu_send_full(sockfd, data, *wlen, 0);
if (ret < *wlen) {
error_report("failed to send a req, %s", strerror(errno));
ret = -errno;
}
return ret;
}
static int send_co_req(int sockfd, SheepdogReq *hdr, void *data,
unsigned int *wlen)
{
int ret;
@@ -517,37 +540,46 @@ static coroutine_fn int send_co_req(int sockfd, SheepdogReq *hdr, void *data,
return ret;
}
static void restart_co_req(void *opaque)
static int do_req(int sockfd, SheepdogReq *hdr, void *data,
unsigned int *wlen, unsigned int *rlen)
{
Coroutine *co = opaque;
int ret;
qemu_coroutine_enter(co, NULL);
socket_set_block(sockfd);
ret = send_req(sockfd, hdr, data, wlen);
if (ret < 0) {
goto out;
}
ret = qemu_recv_full(sockfd, hdr, sizeof(*hdr), 0);
if (ret < sizeof(*hdr)) {
error_report("failed to get a rsp, %s", strerror(errno));
ret = -errno;
goto out;
}
if (*rlen > hdr->data_length) {
*rlen = hdr->data_length;
}
if (*rlen) {
ret = qemu_recv_full(sockfd, data, *rlen, 0);
if (ret < *rlen) {
error_report("failed to get the data, %s", strerror(errno));
ret = -errno;
goto out;
}
}
ret = 0;
out:
socket_set_nonblock(sockfd);
return ret;
}
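/*
 * Stand-alone sketch of the blocking request/response framing restored in
 * do_req() above: write a fixed-size header plus the payload, read back a
 * fixed-size reply header, clamp the expected reply length to what the
 * server announced in data_length, then read the reply payload. SimpleReq
 * and the send_full()/recv_full() helpers are local stand-ins, not the
 * Sheepdog wire format or the qemu_send_full()/qemu_recv_full() helpers.
 */
#include <errno.h>
#include <stdint.h>
#include <sys/socket.h>

typedef struct {
    uint32_t opcode;
    uint32_t data_length;     /* payload bytes that follow this header */
} SimpleReq;

int send_full(int fd, const void *buf, size_t len)
{
    size_t done = 0;
    while (done < len) {
        ssize_t r = send(fd, (const char *)buf + done, len - done, 0);
        if (r < 0) {
            if (errno == EINTR) {
                continue;
            }
            return -errno;
        }
        done += r;
    }
    return 0;
}

int recv_full(int fd, void *buf, size_t len)
{
    size_t done = 0;
    while (done < len) {
        ssize_t r = recv(fd, (char *)buf + done, len - done, 0);
        if (r == 0) {
            return -EPIPE;    /* peer closed the connection */
        }
        if (r < 0) {
            if (errno == EINTR) {
                continue;
            }
            return -errno;
        }
        done += r;
    }
    return 0;
}

int do_simple_req(int fd, SimpleReq *hdr, const void *wdata, uint32_t wlen,
                  void *rdata, uint32_t *rlen)
{
    int ret;

    hdr->data_length = wlen;
    ret = send_full(fd, hdr, sizeof(*hdr));
    if (ret < 0) {
        return ret;
    }
    if (wlen) {
        ret = send_full(fd, wdata, wlen);
        if (ret < 0) {
            return ret;
        }
    }

    ret = recv_full(fd, hdr, sizeof(*hdr));   /* reply reuses the header */
    if (ret < 0) {
        return ret;
    }
    if (*rlen > hdr->data_length) {
        *rlen = hdr->data_length;   /* never read more than the server sent */
    }
    if (*rlen) {
        ret = recv_full(fd, rdata, *rlen);
    }
    return ret;
}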
typedef struct SheepdogReqCo {
int sockfd;
SheepdogReq *hdr;
void *data;
unsigned int *wlen;
unsigned int *rlen;
int ret;
bool finished;
} SheepdogReqCo;
static coroutine_fn void do_co_req(void *opaque)
static int do_co_req(int sockfd, SheepdogReq *hdr, void *data,
unsigned int *wlen, unsigned int *rlen)
{
int ret;
Coroutine *co;
SheepdogReqCo *srco = opaque;
int sockfd = srco->sockfd;
SheepdogReq *hdr = srco->hdr;
void *data = srco->data;
unsigned int *wlen = srco->wlen;
unsigned int *rlen = srco->rlen;
co = qemu_coroutine_self();
qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, NULL, co);
socket_set_block(sockfd);
ret = send_co_req(sockfd, hdr, data, wlen);
@@ -555,8 +587,6 @@ static coroutine_fn void do_co_req(void *opaque)
goto out;
}
qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, NULL, co);
ret = qemu_co_recv(sockfd, hdr, sizeof(*hdr));
if (ret < sizeof(*hdr)) {
error_report("failed to get a rsp, %s", strerror(errno));
@@ -578,79 +608,40 @@ static coroutine_fn void do_co_req(void *opaque)
}
ret = 0;
out:
qemu_aio_set_fd_handler(sockfd, NULL, NULL, NULL, NULL);
socket_set_nonblock(sockfd);
srco->ret = ret;
srco->finished = true;
}
static int do_req(int sockfd, SheepdogReq *hdr, void *data,
unsigned int *wlen, unsigned int *rlen)
{
Coroutine *co;
SheepdogReqCo srco = {
.sockfd = sockfd,
.hdr = hdr,
.data = data,
.wlen = wlen,
.rlen = rlen,
.ret = 0,
.finished = false,
};
if (qemu_in_coroutine()) {
do_co_req(&srco);
} else {
co = qemu_coroutine_create(do_co_req);
qemu_coroutine_enter(co, &srco);
while (!srco.finished) {
qemu_aio_wait();
}
}
return srco.ret;
return ret;
}
static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
struct iovec *iov, int niov, int create,
enum AIOCBState aiocb_type);
static AIOReq *find_pending_req(BDRVSheepdogState *s, uint64_t oid)
{
AIOReq *aio_req;
QLIST_FOREACH(aio_req, &s->pending_aio_head, aio_siblings) {
if (aio_req->oid == oid) {
return aio_req;
}
}
return NULL;
}
/*
* This function searchs pending requests to the object `oid', and
* sends them.
*/
static void coroutine_fn send_pending_req(BDRVSheepdogState *s, uint64_t oid)
static void coroutine_fn send_pending_req(BDRVSheepdogState *s, uint64_t oid, uint32_t id)
{
AIOReq *aio_req;
AIOReq *aio_req, *next;
SheepdogAIOCB *acb;
int ret;
while ((aio_req = find_pending_req(s, oid)) != NULL) {
QLIST_FOREACH_SAFE(aio_req, &s->outstanding_aio_head,
outstanding_aio_siblings, next) {
if (id == aio_req->id) {
continue;
}
if (aio_req->oid != oid) {
continue;
}
acb = aio_req->aiocb;
/* move aio_req from pending list to inflight one */
QLIST_REMOVE(aio_req, aio_siblings);
QLIST_INSERT_HEAD(&s->inflight_aio_head, aio_req, aio_siblings);
ret = add_aio_request(s, aio_req, acb->qiov->iov,
acb->qiov->niov, 0, acb->aiocb_type);
if (ret < 0) {
error_report("add_aio_request is failed");
free_aio_req(s, aio_req);
if (!acb->nr_pending) {
if (QLIST_EMPTY(&acb->aioreq_head)) {
sd_finish_aiocb(acb);
}
}
@@ -671,9 +662,10 @@ static void coroutine_fn aio_read_response(void *opaque)
int ret;
AIOReq *aio_req = NULL;
SheepdogAIOCB *acb;
int rest;
unsigned long idx;
if (QLIST_EMPTY(&s->inflight_aio_head)) {
if (QLIST_EMPTY(&s->outstanding_aio_head)) {
goto out;
}
@@ -684,8 +676,8 @@ static void coroutine_fn aio_read_response(void *opaque)
goto out;
}
/* find the right aio_req from the inflight aio list */
QLIST_FOREACH(aio_req, &s->inflight_aio_head, aio_siblings) {
/* find the right aio_req from the outstanding_aio list */
QLIST_FOREACH(aio_req, &s->outstanding_aio_head, outstanding_aio_siblings) {
if (aio_req->id == rsp.id) {
break;
}
@@ -723,12 +715,12 @@ static void coroutine_fn aio_read_response(void *opaque)
* create requests are not allowed, so we search the
* pending requests here.
*/
send_pending_req(s, vid_to_data_oid(s->inode.vdi_id, idx));
send_pending_req(s, vid_to_data_oid(s->inode.vdi_id, idx), rsp.id);
}
break;
case AIOCB_READ_UDATA:
ret = qemu_co_recvv(fd, acb->qiov->iov, acb->qiov->niov,
aio_req->iov_offset, rsp.data_length);
ret = qemu_co_recvv(fd, acb->qiov->iov, rsp.data_length,
aio_req->iov_offset);
if (ret < 0) {
error_report("failed to get the data, %s", strerror(errno));
goto out;
@@ -741,8 +733,8 @@ static void coroutine_fn aio_read_response(void *opaque)
error_report("%s", sd_strerror(rsp.result));
}
free_aio_req(s, aio_req);
if (!acb->nr_pending) {
rest = free_aio_req(s, aio_req);
if (!rest) {
/*
* We've finished all requests which belong to the AIOCB, so
* we can switch back to sd_co_readv/writev now.
@@ -775,8 +767,7 @@ static int aio_flush_request(void *opaque)
{
BDRVSheepdogState *s = opaque;
return !QLIST_EMPTY(&s->inflight_aio_head) ||
!QLIST_EMPTY(&s->pending_aio_head);
return !QLIST_EMPTY(&s->outstanding_aio_head);
}
static int set_nodelay(int fd)
@@ -1001,7 +992,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
}
if (wlen) {
ret = qemu_co_sendv(s->fd, iov, niov, aio_req->iov_offset, wlen);
ret = qemu_co_sendv(s->fd, iov, wlen, aio_req->iov_offset);
if (ret < 0) {
qemu_co_mutex_unlock(&s->lock);
error_report("failed to send a data, %s", strerror(errno));
@@ -1093,8 +1084,7 @@ static int sd_open(BlockDriverState *bs, const char *filename, int flags)
strstart(filename, "sheepdog:", (const char **)&filename);
QLIST_INIT(&s->inflight_aio_head);
QLIST_INIT(&s->pending_aio_head);
QLIST_INIT(&s->outstanding_aio_head);
s->fd = -1;
memset(vdi, 0, sizeof(vdi));
@@ -1456,7 +1446,6 @@ static void coroutine_fn sd_write_done(SheepdogAIOCB *acb)
iov.iov_len = sizeof(s->inode);
aio_req = alloc_aio_req(s, acb, vid_to_vdi_oid(s->inode.vdi_id),
data_len, offset, 0, 0, offset);
QLIST_INSERT_HEAD(&s->inflight_aio_head, aio_req, aio_siblings);
ret = add_aio_request(s, aio_req, &iov, 1, 0, AIOCB_WRITE_UDATA);
if (ret) {
free_aio_req(s, aio_req);
@@ -1525,7 +1514,7 @@ out:
* Send I/O requests to the server.
*
* This function sends requests to the server, links the requests to
* the inflight_list in BDRVSheepdogState, and exits without
* the outstanding_list in BDRVSheepdogState, and exits without
* waiting the response. The responses are received in the
* `aio_read_response' function which is called from the main loop as
* a fd handler.
@@ -1557,12 +1546,6 @@ static int coroutine_fn sd_co_rw_vector(void *p)
}
}
/*
* Make sure we don't free the aiocb before we are done with all requests.
* This additional reference is dropped at the end of this function.
*/
acb->nr_pending++;
while (done != total) {
uint8_t flags = 0;
uint64_t old_oid = 0;
@@ -1572,40 +1555,37 @@ static int coroutine_fn sd_co_rw_vector(void *p)
len = MIN(total - done, SD_DATA_OBJ_SIZE - offset);
switch (acb->aiocb_type) {
case AIOCB_READ_UDATA:
if (!inode->data_vdi_id[idx]) {
qemu_iovec_memset(acb->qiov, done, 0, len);
if (!inode->data_vdi_id[idx]) {
if (acb->aiocb_type == AIOCB_READ_UDATA) {
goto done;
}
break;
case AIOCB_WRITE_UDATA:
if (!inode->data_vdi_id[idx]) {
create = 1;
} else if (!is_data_obj_writable(inode, idx)) {
/* Copy-On-Write */
create = 1;
old_oid = oid;
flags = SD_FLAG_CMD_COW;
}
break;
default:
break;
create = 1;
} else if (acb->aiocb_type == AIOCB_WRITE_UDATA
&& !is_data_obj_writable(inode, idx)) {
/* Copy-On-Write */
create = 1;
old_oid = oid;
flags = SD_FLAG_CMD_COW;
}
if (create) {
dprintf("update ino (%" PRIu32 ") %" PRIu64 " %" PRIu64 " %ld\n",
inode->vdi_id, oid,
dprintf("update ino (%" PRIu32") %" PRIu64 " %" PRIu64
" %" PRIu64 "\n", inode->vdi_id, oid,
vid_to_data_oid(inode->data_vdi_id[idx], idx), idx);
oid = vid_to_data_oid(inode->vdi_id, idx);
dprintf("new oid %" PRIx64 "\n", oid);
dprintf("new oid %lx\n", oid);
}
aio_req = alloc_aio_req(s, acb, oid, len, offset, flags, old_oid, done);
if (create) {
AIOReq *areq;
QLIST_FOREACH(areq, &s->inflight_aio_head, aio_siblings) {
QLIST_FOREACH(areq, &s->outstanding_aio_head,
outstanding_aio_siblings) {
if (areq == aio_req) {
continue;
}
if (areq->oid == oid) {
/*
* Sheepdog cannot handle simultaneous create
@@ -1615,14 +1595,11 @@ static int coroutine_fn sd_co_rw_vector(void *p)
*/
aio_req->flags = 0;
aio_req->base_oid = 0;
QLIST_INSERT_HEAD(&s->pending_aio_head, aio_req,
aio_siblings);
goto done;
}
}
}
QLIST_INSERT_HEAD(&s->inflight_aio_head, aio_req, aio_siblings);
ret = add_aio_request(s, aio_req, acb->qiov->iov, acb->qiov->niov,
create, acb->aiocb_type);
if (ret < 0) {
@@ -1637,7 +1614,7 @@ static int coroutine_fn sd_co_rw_vector(void *p)
done += len;
}
out:
if (!--acb->nr_pending) {
if (QLIST_EMPTY(&acb->aioreq_head)) {
return acb->ret;
}
return 1;
@@ -1650,6 +1627,7 @@ static coroutine_fn int sd_co_writev(BlockDriverState *bs, int64_t sector_num,
int ret;
if (bs->growable && sector_num + nb_sectors > bs->total_sectors) {
/* TODO: shouldn't block here */
ret = sd_truncate(bs, (sector_num + nb_sectors) * SECTOR_SIZE);
if (ret < 0) {
return ret;
@@ -1676,12 +1654,20 @@ static coroutine_fn int sd_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
SheepdogAIOCB *acb;
int ret;
int i, ret;
acb = sd_aio_setup(bs, qiov, sector_num, nb_sectors, NULL, NULL);
acb->aiocb_type = AIOCB_READ_UDATA;
acb->aio_done_func = sd_finish_aiocb;
/*
* TODO: we can do better; we don't need to initialize
* blindly.
*/
for (i = 0; i < qiov->niov; i++) {
memset(qiov->iov[i].iov_base, 0, qiov->iov[i].iov_len);
}
ret = sd_co_rw_vector(acb);
if (ret <= 0) {
qemu_aio_release(acb);
@@ -1709,7 +1695,7 @@ static int coroutine_fn sd_co_flush_to_disk(BlockDriverState *bs)
hdr.opcode = SD_OP_FLUSH_VDI;
hdr.oid = vid_to_vdi_oid(inode->vdi_id);
ret = do_req(s->flush_fd, (SheepdogReq *)&hdr, NULL, &wlen, &rlen);
ret = do_co_req(s->flush_fd, (SheepdogReq *)&hdr, NULL, &wlen, &rlen);
if (ret) {
error_report("failed to send a request to the sheep");
return ret;
@@ -1739,7 +1725,7 @@ static int sd_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
SheepdogInode *inode;
unsigned int datalen;
dprintf("sn_info: name %s id_str %s s: name %s vm_state_size %" PRId64 " "
dprintf("sn_info: name %s id_str %s s: name %s vm_state_size %d "
"is_snapshot %d\n", sn_info->name, sn_info->id_str,
s->name, sn_info->vm_state_size, s->is_snapshot);
@@ -1986,7 +1972,7 @@ static int do_load_save_vmstate(BDRVSheepdogState *s, uint8_t *data,
vdi_index = pos / SD_DATA_OBJ_SIZE;
offset = pos % SD_DATA_OBJ_SIZE;
data_len = MIN(remaining, SD_DATA_OBJ_SIZE - offset);
data_len = MIN(remaining, SD_DATA_OBJ_SIZE);
vmstate_oid = vid_to_vmstate_oid(s->inode.vdi_id, vdi_index);
@@ -2007,7 +1993,6 @@ static int do_load_save_vmstate(BDRVSheepdogState *s, uint8_t *data,
}
pos += data_len;
data += data_len;
remaining -= data_len;
}
ret = size;


@@ -13,7 +13,6 @@
#include "trace.h"
#include "block_int.h"
#include "qemu/ratelimit.h"
enum {
/*
@@ -26,6 +25,34 @@ enum {
#define SLICE_TIME 100000000ULL /* ns */
typedef struct {
int64_t next_slice_time;
uint64_t slice_quota;
uint64_t dispatched;
} RateLimit;
static int64_t ratelimit_calculate_delay(RateLimit *limit, uint64_t n)
{
int64_t now = qemu_get_clock_ns(rt_clock);
if (limit->next_slice_time < now) {
limit->next_slice_time = now + SLICE_TIME;
limit->dispatched = 0;
}
if (limit->dispatched == 0 || limit->dispatched + n <= limit->slice_quota) {
limit->dispatched += n;
return 0;
} else {
limit->dispatched = n;
return limit->next_slice_time - now;
}
}
static void ratelimit_set_speed(RateLimit *limit, uint64_t speed)
{
limit->slice_quota = speed / (1000000000ULL / SLICE_TIME);
}
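/*
 * Stand-alone sketch of how the RateLimit helper above is meant to be used
 * (a throttled copy loop). qemu_get_clock_ns()/co_sleep_ns() are replaced by
 * clock_gettime()/nanosleep() so the example runs outside QEMU; the slice
 * logic mirrors ratelimit_set_speed()/ratelimit_calculate_delay() above.
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define SLICE_TIME 100000000ULL          /* 100 ms, as in the block job code */

typedef struct {
    int64_t next_slice_time;
    uint64_t slice_quota;                /* units allowed per slice */
    uint64_t dispatched;                 /* units used in the current slice */
} RateLimit;

static int64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

static void ratelimit_set_speed(RateLimit *limit, uint64_t units_per_sec)
{
    limit->slice_quota = units_per_sec / (1000000000ULL / SLICE_TIME);
}

static int64_t ratelimit_calculate_delay(RateLimit *limit, uint64_t n)
{
    int64_t now = now_ns();

    if (limit->next_slice_time < now) {  /* new slice: reset the budget */
        limit->next_slice_time = now + SLICE_TIME;
        limit->dispatched = 0;
    }
    if (limit->dispatched == 0 || limit->dispatched + n <= limit->slice_quota) {
        limit->dispatched += n;
        return 0;                        /* within budget, no delay */
    }
    limit->dispatched = n;               /* charge n to the next slice */
    return limit->next_slice_time - now; /* caller should sleep this long */
}

int main(void)
{
    RateLimit limit = { 0, 0, 0 };
    ratelimit_set_speed(&limit, 1000);   /* e.g. 1000 sectors per second */

    for (int i = 0; i < 50; i++) {
        int64_t delay = ratelimit_calculate_delay(&limit, 64);
        if (delay > 0) {
            struct timespec ts = { delay / 1000000000LL, delay % 1000000000LL };
            nanosleep(&ts, NULL);        /* stream_run() yields instead */
        }
        /* ... copy 64 sectors here ... */
    }
    printf("done\n");
    return 0;
}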
typedef struct StreamBlockJob {
BlockJob common;
RateLimit limit;
@@ -71,6 +98,67 @@ static void close_unused_images(BlockDriverState *top, BlockDriverState *base,
top->backing_hd = base;
}
/*
* Given an image chain: [BASE] -> [INTER1] -> [INTER2] -> [TOP]
*
* Return true if the given sector is allocated in top.
* Return false if the given sector is allocated in intermediate images.
* Return true otherwise.
*
* 'pnum' is set to the number of sectors (including and immediately following
* the specified sector) that are known to be in the same
* allocated/unallocated state.
*
*/
static int coroutine_fn is_allocated_base(BlockDriverState *top,
BlockDriverState *base,
int64_t sector_num,
int nb_sectors, int *pnum)
{
BlockDriverState *intermediate;
int ret, n;
ret = bdrv_co_is_allocated(top, sector_num, nb_sectors, &n);
if (ret) {
*pnum = n;
return ret;
}
/*
* Is the unallocated chunk [sector_num, n] also
* unallocated between base and top?
*/
intermediate = top->backing_hd;
while (intermediate != base) {
int pnum_inter;
ret = bdrv_co_is_allocated(intermediate, sector_num, nb_sectors,
&pnum_inter);
if (ret < 0) {
return ret;
} else if (ret) {
*pnum = pnum_inter;
return 0;
}
/*
* [sector_num, nb_sectors] is unallocated on top but intermediate
* might have
*
* [sector_num+x, nr_sectors] allocated.
*/
if (n > pnum_inter) {
n = pnum_inter;
}
intermediate = intermediate->backing_hd;
}
*pnum = n;
return 1;
}
static void coroutine_fn stream_run(void *opaque)
{
StreamBlockJob *s = opaque;
@@ -101,7 +189,6 @@ static void coroutine_fn stream_run(void *opaque)
for (sector_num = 0; sector_num < end; sector_num += n) {
uint64_t delay_ns = 0;
bool copy;
wait:
/* Note that even when no rate limit is applied we need to yield
@@ -112,26 +199,10 @@ wait:
break;
}
ret = bdrv_co_is_allocated(bs, sector_num,
STREAM_BUFFER_SIZE / BDRV_SECTOR_SIZE, &n);
if (ret == 1) {
/* Allocated in the top, no need to copy. */
copy = false;
} else {
/* Copy if allocated in the intermediate images. Limit to the
* known-unallocated area [sector_num, sector_num+n). */
ret = bdrv_co_is_allocated_above(bs->backing_hd, base,
sector_num, n, &n);
/* Finish early if end of backing file has been reached */
if (ret == 0 && n == 0) {
n = end - sector_num;
}
copy = (ret == 1);
}
ret = is_allocated_base(bs, base, sector_num,
STREAM_BUFFER_SIZE / BDRV_SECTOR_SIZE, &n);
trace_stream_one_iteration(s, sector_num, n, ret);
if (ret >= 0 && copy) {
if (ret == 0) {
if (s->common.speed) {
delay_ns = ratelimit_calculate_delay(&s->limit, n);
if (delay_ns > 0) {
@@ -177,7 +248,7 @@ static void stream_set_speed(BlockJob *job, int64_t speed, Error **errp)
error_set(errp, QERR_INVALID_PARAMETER, "speed");
return;
}
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE);
}
static BlockJobType stream_job_type = {


@@ -277,8 +277,7 @@ static void vdi_header_print(VdiHeader *header)
}
#endif
static int vdi_check(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix)
static int vdi_check(BlockDriverState *bs, BdrvCheckResult *res)
{
/* TODO: additional checks possible. */
BDRVVdiState *s = (BDRVVdiState *)bs->opaque;
@@ -287,10 +286,6 @@ static int vdi_check(BlockDriverState *bs, BdrvCheckResult *res,
uint32_t *bmap;
logout("\n");
if (fix) {
return -ENOTSUP;
}
bmap = g_malloc(s->header.blocks_in_image * sizeof(uint32_t));
memset(bmap, 0xff, s->header.blocks_in_image * sizeof(uint32_t));
@@ -628,6 +623,7 @@ static int vdi_create(const char *filename, QEMUOptionParameter *options)
VdiHeader header;
size_t i;
size_t bmap_size;
uint32_t *bmap;
logout("\n");
@@ -652,9 +648,8 @@ static int vdi_create(const char *filename, QEMUOptionParameter *options)
options++;
}
fd = qemu_open(filename,
O_WRONLY | O_CREAT | O_TRUNC | O_BINARY | O_LARGEFILE,
0644);
fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY | O_LARGEFILE,
0644);
if (fd < 0) {
return -errno;
}
@@ -692,21 +687,21 @@ static int vdi_create(const char *filename, QEMUOptionParameter *options)
result = -errno;
}
bmap = NULL;
if (bmap_size > 0) {
uint32_t *bmap = g_malloc0(bmap_size);
for (i = 0; i < blocks; i++) {
if (image_type == VDI_TYPE_STATIC) {
bmap[i] = i;
} else {
bmap[i] = VDI_UNALLOCATED;
}
}
if (write(fd, bmap, bmap_size) < 0) {
result = -errno;
}
g_free(bmap);
bmap = (uint32_t *)g_malloc0(bmap_size);
}
for (i = 0; i < blocks; i++) {
if (image_type == VDI_TYPE_STATIC) {
bmap[i] = i;
} else {
bmap[i] = VDI_UNALLOCATED;
}
}
if (write(fd, bmap, bmap_size) < 0) {
result = -errno;
}
g_free(bmap);
if (image_type == VDI_TYPE_STATIC) {
if (ftruncate(fd, sizeof(header) + bmap_size + blocks * block_size)) {
result = -errno;


@@ -35,7 +35,6 @@
#define VMDK4_FLAG_RGD (1 << 1)
#define VMDK4_FLAG_COMPRESS (1 << 16)
#define VMDK4_FLAG_MARKER (1 << 17)
#define VMDK4_GD_AT_END 0xffffffffffffffffULL
typedef struct {
uint32_t version;
@@ -58,8 +57,8 @@ typedef struct {
int64_t desc_offset;
int64_t desc_size;
int32_t num_gtes_per_gte;
int64_t rgd_offset;
int64_t gd_offset;
int64_t rgd_offset;
int64_t grain_offset;
char filler[1];
char check_bytes[4];
@@ -116,13 +115,6 @@ typedef struct VmdkGrainMarker {
uint8_t data[0];
} VmdkGrainMarker;
enum {
MARKER_END_OF_STREAM = 0,
MARKER_GRAIN_TABLE = 1,
MARKER_GRAIN_DIRECTORY = 2,
MARKER_FOOTER = 3,
};
static int vmdk_probe(const uint8_t *buf, int buf_size, const char *filename)
{
uint32_t magic;
@@ -459,54 +451,6 @@ static int vmdk_open_vmdk4(BlockDriverState *bs,
if (header.capacity == 0 && header.desc_offset) {
return vmdk_open_desc_file(bs, flags, header.desc_offset << 9);
}
if (le64_to_cpu(header.gd_offset) == VMDK4_GD_AT_END) {
/*
* The footer takes precedence over the header, so read it in. The
* footer starts at offset -1024 from the end: One sector for the
* footer, and another one for the end-of-stream marker.
*/
struct {
struct {
uint64_t val;
uint32_t size;
uint32_t type;
uint8_t pad[512 - 16];
} QEMU_PACKED footer_marker;
uint32_t magic;
VMDK4Header header;
uint8_t pad[512 - 4 - sizeof(VMDK4Header)];
struct {
uint64_t val;
uint32_t size;
uint32_t type;
uint8_t pad[512 - 16];
} QEMU_PACKED eos_marker;
} QEMU_PACKED footer;
ret = bdrv_pread(file,
bs->file->total_sectors * 512 - 1536,
&footer, sizeof(footer));
if (ret < 0) {
return ret;
}
/* Some sanity checks for the footer */
if (be32_to_cpu(footer.magic) != VMDK4_MAGIC ||
le32_to_cpu(footer.footer_marker.size) != 0 ||
le32_to_cpu(footer.footer_marker.type) != MARKER_FOOTER ||
le64_to_cpu(footer.eos_marker.val) != 0 ||
le32_to_cpu(footer.eos_marker.size) != 0 ||
le32_to_cpu(footer.eos_marker.type) != MARKER_END_OF_STREAM)
{
return -EINVAL;
}
header = footer.header;
}
l1_entry_sectors = le32_to_cpu(header.num_gtes_per_gte)
* le64_to_cpu(header.granularity);
if (l1_entry_sectors == 0) {
@@ -1217,9 +1161,10 @@ static int vmdk_create_extent(const char *filename, int64_t filesize,
VMDK4Header header;
uint32_t tmp, magic, grains, gd_size, gt_size, gt_count;
fd = qemu_open(filename,
O_WRONLY | O_CREAT | O_TRUNC | O_BINARY | O_LARGEFILE,
0644);
fd = open(
filename,
O_WRONLY | O_CREAT | O_TRUNC | O_BINARY | O_LARGEFILE,
0644);
if (fd < 0) {
return -errno;
}
@@ -1314,7 +1259,7 @@ static int vmdk_create_extent(const char *filename, int64_t filesize,
ret = 0;
exit:
qemu_close(fd);
close(fd);
return ret;
}
@@ -1539,13 +1484,15 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options)
(flags & BLOCK_FLAG_COMPAT6 ? 6 : 4),
total_size / (int64_t)(63 * 16 * 512));
if (split || flat) {
fd = qemu_open(filename,
O_WRONLY | O_CREAT | O_TRUNC | O_BINARY | O_LARGEFILE,
0644);
fd = open(
filename,
O_WRONLY | O_CREAT | O_TRUNC | O_BINARY | O_LARGEFILE,
0644);
} else {
fd = qemu_open(filename,
O_WRONLY | O_BINARY | O_LARGEFILE,
0644);
fd = open(
filename,
O_WRONLY | O_BINARY | O_LARGEFILE,
0644);
}
if (fd < 0) {
return -errno;
@@ -1562,7 +1509,7 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options)
}
ret = 0;
exit:
qemu_close(fd);
close(fd);
return ret;
}


@@ -678,7 +678,7 @@ static int vpc_create(const char *filename, QEMUOptionParameter *options)
}
/* Create the file */
fd = qemu_open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, 0644);
fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, 0644);
if (fd < 0) {
return -EIO;
}
@@ -744,7 +744,7 @@ static int vpc_create(const char *filename, QEMUOptionParameter *options)
}
fail:
qemu_close(fd);
close(fd);
return ret;
}


@@ -359,12 +359,11 @@ typedef struct BDRVVVFATState {
* if the position is outside the specified geometry, fill maximum value for CHS
* and return 1 to signal overflow.
*/
static int sector2CHS(mbr_chs_t *chs, int spos, int cyls, int heads, int secs)
{
static int sector2CHS(BlockDriverState* bs, mbr_chs_t * chs, int spos){
int head,sector;
sector = spos % secs; spos /= secs;
head = spos % heads; spos /= heads;
if (spos >= cyls) {
sector = spos % (bs->secs); spos/= bs->secs;
head = spos % (bs->heads); spos/= bs->heads;
if(spos >= bs->cyls){
/* Overflow,
it happens if 32bit sector positions are used, while CHS is only 24bit.
Windows/Dos is said to take 1023/255/63 as nonrepresentable CHS */
@@ -379,7 +378,7 @@ static int sector2CHS(mbr_chs_t *chs, int spos, int cyls, int heads, int secs)
return 0;
}
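/*
 * Worked example of the division above, assuming the 1.44MB floppy geometry
 * set up in vvfat_open() below (cyls=80, heads=2, secs=18): for linear
 * sector 35,
 *     sector   = 35 % 18 = 17,   spos = 35 / 18 = 1
 *     head     =  1 %  2 =  1,   spos =  1 /  2 = 0
 *     cylinder = spos = 0, which is < 80, so no overflow is signalled.
 * How the values are then packed into the on-disk mbr_chs_t (the MBR stores
 * a 1-based, 6-bit sector number) is outside this hunk.
 */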
static void init_mbr(BDRVVVFATState *s, int cyls, int heads, int secs)
static void init_mbr(BDRVVVFATState* s)
{
/* TODO: if the files mbr.img and bootsect.img exist, use them */
mbr_t* real_mbr=(mbr_t*)s->first_sectors;
@@ -394,15 +393,12 @@ static void init_mbr(BDRVVVFATState *s, int cyls, int heads, int secs)
partition->attributes=0x80; /* bootable */
/* LBA is used when partition is outside the CHS geometry */
lba = sector2CHS(&partition->start_CHS, s->first_sectors_number - 1,
cyls, heads, secs);
lba |= sector2CHS(&partition->end_CHS, s->bs->total_sectors - 1,
cyls, heads, secs);
lba = sector2CHS(s->bs, &partition->start_CHS, s->first_sectors_number-1);
lba|= sector2CHS(s->bs, &partition->end_CHS, s->sector_count);
/*LBA partitions are identified only by start/length_sector_long not by CHS*/
partition->start_sector_long = cpu_to_le32(s->first_sectors_number - 1);
partition->length_sector_long = cpu_to_le32(s->bs->total_sectors
- s->first_sectors_number + 1);
partition->start_sector_long =cpu_to_le32(s->first_sectors_number-1);
partition->length_sector_long=cpu_to_le32(s->sector_count - s->first_sectors_number+1);
/* FAT12/FAT16/FAT32 */
/* DOS uses different types when partition is LBA,
@@ -834,7 +830,7 @@ static inline off_t cluster2sector(BDRVVVFATState* s, uint32_t cluster_num)
}
static int init_directories(BDRVVVFATState* s,
const char *dirname, int heads, int secs)
const char* dirname)
{
bootsector_t* bootsector;
mapping_t* mapping;
@@ -961,8 +957,8 @@ static int init_directories(BDRVVVFATState* s,
bootsector->media_type=(s->first_sectors_number>1?0xf8:0xf0); /* media descriptor (f8=hd, f0=3.5 fd)*/
s->fat.pointer[0] = bootsector->media_type;
bootsector->sectors_per_fat=cpu_to_le16(s->sectors_per_fat);
bootsector->sectors_per_track = cpu_to_le16(secs);
bootsector->number_of_heads = cpu_to_le16(heads);
bootsector->sectors_per_track=cpu_to_le16(s->bs->secs);
bootsector->number_of_heads=cpu_to_le16(s->bs->heads);
bootsector->hidden_sectors=cpu_to_le32(s->first_sectors_number==1?0:0x3f);
bootsector->total_sectors=cpu_to_le32(s->sector_count>0xffff?s->sector_count:0);
@@ -995,7 +991,7 @@ static void vvfat_rebind(BlockDriverState *bs)
static int vvfat_open(BlockDriverState *bs, const char* dirname, int flags)
{
BDRVVVFATState *s = bs->opaque;
int i, cyls, heads, secs;
int i;
#ifdef DEBUG
vvv = s;
@@ -1037,28 +1033,24 @@ DLOG(if (stderr == NULL) {
/* 1.44MB or 2.88MB floppy. 2.88MB can be FAT12 (default) or FAT16. */
if (!s->fat_type) {
s->fat_type = 12;
secs = 36;
bs->secs = 36;
s->sectors_per_cluster=2;
} else {
secs = s->fat_type == 12 ? 18 : 36;
bs->secs=(s->fat_type == 12 ? 18 : 36);
s->sectors_per_cluster=1;
}
s->first_sectors_number = 1;
cyls = 80;
heads = 2;
bs->cyls=80; bs->heads=2;
} else {
/* 32MB or 504MB disk*/
if (!s->fat_type) {
s->fat_type = 16;
}
cyls = s->fat_type == 12 ? 64 : 1024;
heads = 16;
secs = 63;
bs->cyls=(s->fat_type == 12 ? 64 : 1024);
bs->heads=16; bs->secs=63;
}
fprintf(stderr, "vvfat %s chs %d,%d,%d\n",
dirname, cyls, heads, secs);
s->sector_count = cyls * heads * secs - (s->first_sectors_number - 1);
s->sector_count=bs->cyls*bs->heads*bs->secs-(s->first_sectors_number-1);
if (strstr(dirname, ":rw:")) {
if (enable_write_target(s))
@@ -1074,16 +1066,18 @@ DLOG(if (stderr == NULL) {
else
dirname += i+1;
bs->total_sectors = cyls * heads * secs;
bs->total_sectors=bs->cyls*bs->heads*bs->secs;
if (init_directories(s, dirname, heads, secs)) {
if(init_directories(s, dirname))
return -1;
}
s->sector_count = s->faked_sectors + s->sectors_per_cluster*s->cluster_count;
if (s->first_sectors_number == 0x40) {
init_mbr(s, cyls, heads, secs);
if(s->first_sectors_number==0x40)
init_mbr(s);
else {
/* MS-DOS does not like to know about CHS (?). */
bs->heads = bs->cyls = bs->secs = 0;
}
// assert(is_consistent(s));
@@ -1105,7 +1099,7 @@ static inline void vvfat_close_current_file(BDRVVVFATState *s)
if(s->current_mapping) {
s->current_mapping = NULL;
if (s->current_fd) {
qemu_close(s->current_fd);
close(s->current_fd);
s->current_fd = 0;
}
}
@@ -1162,7 +1156,7 @@ static int open_file(BDRVVVFATState* s,mapping_t* mapping)
if(!s->current_mapping ||
strcmp(s->current_mapping->path,mapping->path)) {
/* open file */
int fd = qemu_open(mapping->path, O_RDONLY | O_BINARY | O_LARGEFILE);
int fd = open(mapping->path, O_RDONLY | O_BINARY | O_LARGEFILE);
if(fd<0)
return -1;
vvfat_close_current_file(s);
@@ -2221,7 +2215,7 @@ static int commit_one_file(BDRVVVFATState* s,
for (i = s->cluster_size; i < offset; i += s->cluster_size)
c = modified_fat_get(s, c);
fd = qemu_open(mapping->path, O_RDWR | O_CREAT | O_BINARY, 0666);
fd = open(mapping->path, O_RDWR | O_CREAT | O_BINARY, 0666);
if (fd < 0) {
fprintf(stderr, "Could not open %s... (%s, %d)\n", mapping->path,
strerror(errno), errno);
@@ -2230,7 +2224,7 @@ static int commit_one_file(BDRVVVFATState* s,
}
if (offset > 0) {
if (lseek(fd, offset, SEEK_SET) != offset) {
qemu_close(fd);
close(fd);
g_free(cluster);
return -3;
}
@@ -2251,13 +2245,13 @@ static int commit_one_file(BDRVVVFATState* s,
(uint8_t*)cluster, (rest_size + 0x1ff) / 0x200);
if (ret < 0) {
qemu_close(fd);
close(fd);
g_free(cluster);
return ret;
}
if (write(fd, cluster, rest_size) < 0) {
qemu_close(fd);
close(fd);
g_free(cluster);
return -2;
}
@@ -2268,11 +2262,11 @@ static int commit_one_file(BDRVVVFATState* s,
if (ftruncate(fd, size)) {
perror("ftruncate()");
qemu_close(fd);
close(fd);
g_free(cluster);
return -4;
}
qemu_close(fd);
close(fd);
g_free(cluster);
return commit_mappings(s, first_cluster, dir_index);


@@ -30,11 +30,9 @@
#include "qemu-coroutine.h"
#include "qemu-timer.h"
#include "qapi-types.h"
#include "qerror.h"
#define BLOCK_FLAG_ENCRYPT 1
#define BLOCK_FLAG_COMPAT6 4
#define BLOCK_FLAG_LAZY_REFCOUNTS 8
#define BLOCK_FLAG_ENCRYPT 1
#define BLOCK_FLAG_COMPAT6 4
#define BLOCK_IO_LIMIT_READ 0
#define BLOCK_IO_LIMIT_WRITE 1
@@ -43,17 +41,16 @@
#define BLOCK_IO_SLICE_TIME 100000000
#define NANOSECONDS_PER_SECOND 1000000000.0
#define BLOCK_OPT_SIZE "size"
#define BLOCK_OPT_ENCRYPT "encryption"
#define BLOCK_OPT_COMPAT6 "compat6"
#define BLOCK_OPT_BACKING_FILE "backing_file"
#define BLOCK_OPT_BACKING_FMT "backing_fmt"
#define BLOCK_OPT_CLUSTER_SIZE "cluster_size"
#define BLOCK_OPT_TABLE_SIZE "table_size"
#define BLOCK_OPT_PREALLOC "preallocation"
#define BLOCK_OPT_SUBFMT "subformat"
#define BLOCK_OPT_COMPAT_LEVEL "compat"
#define BLOCK_OPT_LAZY_REFCOUNTS "lazy_refcounts"
#define BLOCK_OPT_SIZE "size"
#define BLOCK_OPT_ENCRYPT "encryption"
#define BLOCK_OPT_COMPAT6 "compat6"
#define BLOCK_OPT_BACKING_FILE "backing_file"
#define BLOCK_OPT_BACKING_FMT "backing_fmt"
#define BLOCK_OPT_CLUSTER_SIZE "cluster_size"
#define BLOCK_OPT_TABLE_SIZE "table_size"
#define BLOCK_OPT_PREALLOC "preallocation"
#define BLOCK_OPT_SUBFMT "subformat"
#define BLOCK_OPT_COMPAT_LEVEL "compat"
typedef struct BdrvTrackedRequest BdrvTrackedRequest;
@@ -244,8 +241,7 @@ struct BlockDriver {
* Returns 0 for completed check, -errno for internal errors.
* The check results are stored in result.
*/
int (*bdrv_check)(BlockDriverState* bs, BdrvCheckResult *result,
BdrvCheckMode fix);
int (*bdrv_check)(BlockDriverState* bs, BdrvCheckResult *result);
void (*bdrv_debug_event)(BlockDriverState *bs, BlkDebugEvent event);
@@ -323,6 +319,7 @@ struct BlockDriverState {
/* NOTE: the following infos are only hints for real hardware
drivers. They are not used by the block driver */
int cyls, heads, secs, translation;
BlockErrorAction on_read_error, on_write_error;
bool iostatus_enabled;
BlockDeviceIoStatus iostatus;


@@ -7,8 +7,8 @@
* later. See the COPYING file in the top-level directory.
*/
#include "block.h"
#include "blockdev.h"
#include "hw/block-common.h"
#include "monitor.h"
#include "qerror.h"
#include "qemu-option.h"
@@ -278,6 +278,7 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
{
const char *buf;
const char *file = NULL;
char devname[128];
const char *serial;
const char *mediastr = "";
BlockInterfaceType type;
@@ -317,6 +318,7 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
serial = qemu_opt_get(opts, "serial");
if ((buf = qemu_opt_get(opts, "if")) != NULL) {
pstrcpy(devname, sizeof(devname), buf);
for (type = 0; type < IF_COUNT && strcmp(buf, if_name[type]); type++)
;
if (type == IF_COUNT) {
@@ -325,20 +327,21 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
}
} else {
type = default_to_scsi ? IF_SCSI : IF_IDE;
pstrcpy(devname, sizeof(devname), if_name[type]);
}
max_devs = if_max_devs[type];
if (cyls || heads || secs) {
if (cyls < 1) {
if (cyls < 1 || (type == IF_IDE && cyls > 16383)) {
error_report("invalid physical cyls number");
return NULL;
}
if (heads < 1) {
if (heads < 1 || (type == IF_IDE && heads > 16)) {
error_report("invalid physical heads number");
return NULL;
}
if (secs < 1) {
if (secs < 1 || (type == IF_IDE && secs > 63)) {
error_report("invalid physical secs number");
return NULL;
}
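
For context on the new IF_IDE bounds: 16383 cylinders x 16 heads x 63 sectors x 512 bytes is roughly 8.4 GB, the classical ATA CHS addressing limit, so any geometry a guest-visible IDE disk reports has to stay within these values. A command line that exercises the checks would look something like -drive file=disk.img,if=ide,cyls=16383,heads=16,secs=63 (illustrative only; the cyls/heads/secs option names correspond to the geometry fields parsed by drive_init above).
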
@@ -377,7 +380,6 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
}
}
bdrv_flags |= BDRV_O_CACHE_WB;
if ((buf = qemu_opt_get(opts, "cache")) != NULL) {
if (bdrv_parse_cache_flags(buf, &bdrv_flags) != 0) {
error_report("invalid cache option");
@@ -399,11 +401,11 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
#endif
if ((buf = qemu_opt_get(opts, "format")) != NULL) {
if (is_help_option(buf)) {
error_printf("Supported formats:");
bdrv_iterate_format(bdrv_format_print, NULL);
error_printf("\n");
return NULL;
if (strcmp(buf, "?") == 0) {
error_printf("Supported formats:");
bdrv_iterate_format(bdrv_format_print, NULL);
error_printf("\n");
return NULL;
}
drv = bdrv_find_whitelisted_format(buf);
if (!drv) {
@@ -521,25 +523,21 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
mediastr = (media == MEDIA_CDROM) ? "-cd" : "-hd";
if (max_devs)
snprintf(dinfo->id, 32, "%s%i%s%i",
if_name[type], bus_id, mediastr, unit_id);
devname, bus_id, mediastr, unit_id);
else
snprintf(dinfo->id, 32, "%s%s%i",
if_name[type], mediastr, unit_id);
devname, mediastr, unit_id);
}
dinfo->bdrv = bdrv_new(dinfo->id);
dinfo->bdrv->open_flags = snapshot ? BDRV_O_SNAPSHOT : 0;
dinfo->bdrv->read_only = ro;
dinfo->devaddr = devaddr;
dinfo->type = type;
dinfo->bus = bus_id;
dinfo->unit = unit_id;
dinfo->cyls = cyls;
dinfo->heads = heads;
dinfo->secs = secs;
dinfo->trans = translation;
dinfo->opts = opts;
dinfo->refcount = 1;
dinfo->serial = serial;
if (serial) {
pstrcpy(dinfo->serial, sizeof(dinfo->serial), serial);
}
QTAILQ_INSERT_TAIL(&drives, dinfo, next);
bdrv_set_on_error(dinfo->bdrv, on_read_error, on_write_error);
@@ -552,7 +550,17 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
case IF_SCSI:
case IF_XEN:
case IF_NONE:
dinfo->media_cd = media == MEDIA_CDROM;
switch(media) {
case MEDIA_DISK:
if (cyls != 0) {
bdrv_set_geometry_hint(dinfo->bdrv, cyls, heads, secs);
bdrv_set_translation_hint(dinfo->bdrv, translation);
}
break;
case MEDIA_CDROM:
dinfo->media_cd = 1;
break;
}
break;
case IF_SD:
case IF_FLOPPY:
@@ -561,7 +569,7 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
break;
case IF_VIRTIO:
/* add virtio block device */
opts = qemu_opts_create(qemu_find_opts("device"), NULL, 0, NULL);
opts = qemu_opts_create(qemu_find_opts("device"), NULL, 0);
if (arch_type == QEMU_ARCH_S390X) {
qemu_opt_set(opts, "driver", "virtio-blk-s390");
} else {
@@ -604,10 +612,6 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
bdrv_flags |= ro ? 0 : BDRV_O_RDWR;
if (ro && copy_on_read) {
error_report("warning: disabling copy_on_read on readonly drive");
}
ret = bdrv_open(dinfo->bdrv, file, bdrv_flags, drv);
if (ret < 0) {
error_report("could not open disk image %s: %s",


@@ -17,6 +17,8 @@
void blockdev_mark_auto_del(BlockDriverState *bs);
void blockdev_auto_del(BlockDriverState *bs);
#define BLOCK_SERIAL_STRLEN 20
typedef enum {
IF_DEFAULT = -1, /* for use with drive_add() only */
IF_NONE,
@@ -33,9 +35,8 @@ struct DriveInfo {
int unit;
int auto_del; /* see blockdev_mark_auto_del() */
int media_cd;
int cyls, heads, secs, trans;
QemuOpts *opts;
const char *serial;
char serial[BLOCK_SERIAL_STRLEN + 1];
QTAILQ_ENTRY(DriveInfo) next;
int refcount;
};


@@ -1,2 +0,0 @@
obj-y = main.o bsdload.o elfload.o mmap.o signal.o strace.o syscall.o \
uaccess.o


@@ -681,7 +681,7 @@ static void usage(void)
"-g port wait gdb connection to port\n"
"-L path set the elf interpreter prefix (default=%s)\n"
"-s size set the stack size in bytes (default=%ld)\n"
"-cpu model select CPU (-cpu help for list)\n"
"-cpu model select CPU (-cpu ? for list)\n"
"-drop-ld-preload drop LD_PRELOAD for target process\n"
"-E var=value sets/modifies targets environment variable(s)\n"
"-U var unsets targets environment variable(s)\n"
@@ -825,7 +825,7 @@ int main(int argc, char **argv)
qemu_uname_release = argv[optind++];
} else if (!strcmp(r, "cpu")) {
cpu_model = argv[optind++];
if (is_help_option(cpu_model)) {
if (strcmp(cpu_model, "?") == 0) {
/* XXX: implement xxx_cpu_list for targets that still miss it */
#if defined(cpu_list)
cpu_list(stdout, &fprintf);
@@ -918,7 +918,7 @@ int main(int argc, char **argv)
exit(1);
}
#if defined(TARGET_I386) || defined(TARGET_SPARC) || defined(TARGET_PPC)
cpu_reset(ENV_GET_CPU(env));
cpu_state_reset(env);
#endif
thread_env = env;


@@ -44,19 +44,7 @@
/* Use gnu_printf when supported (qemu uses standard format strings). */
# define GCC_ATTR __attribute__((__unused__, format(gnu_printf, 1, 2)))
# define GCC_FMT_ATTR(n, m) __attribute__((format(gnu_printf, n, m)))
# if defined(_WIN32)
/* Map __printf__ to __gnu_printf__ because we want standard format strings
* even when MinGW or GLib include files use __printf__. */
# define __printf__ __gnu_printf__
# endif
# endif
#if defined(_WIN32)
#define GCC_WEAK __attribute__((weak))
#define GCC_WEAK_DECL GCC_WEAK
#else
#define GCC_WEAK __attribute__((weak))
#define GCC_WEAK_DECL
#endif
#else
#define GCC_ATTR /**/
#define GCC_FMT_ATTR(n, m)

576 configure vendored

File diff suppressed because it is too large


@@ -28,7 +28,6 @@
//#define DEBUG_CONSOLE
#define DEFAULT_BACKSCROLL 512
#define MAX_CONSOLES 12
#define CONSOLE_CURSOR_PERIOD 500
#define QEMU_RGBA(r, g, b, a) (((a) << 24) | ((r) << 16) | ((g) << 8) | (b))
#define QEMU_RGB(r, g, b) QEMU_RGBA(r, g, b, 0xff)
@@ -140,8 +139,6 @@ struct TextConsole {
TextCell *cells;
int text_x[2], text_y[2], cursor_invalidate;
int echo;
bool cursor_visible_phase;
QEMUTimer *cursor_timer;
int update_x0;
int update_y0;
@@ -618,7 +615,7 @@ static void console_show_cursor(TextConsole *s, int show)
y += s->total_height;
if (y < s->height) {
c = &s->cells[y1 * s->width + x];
if (show && s->cursor_visible_phase) {
if (show) {
TextAttributes t_attrib = s->t_attrib_default;
t_attrib.invers = !(t_attrib.invers); /* invert fg and bg */
vga_putcharxy(s->ds, x, y, c->ch, &t_attrib);
@@ -850,26 +847,6 @@ static void console_clear_xy(TextConsole *s, int x, int y)
update_xy(s, x, y);
}
/* set cursor, checking bounds */
static void set_cursor(TextConsole *s, int x, int y)
{
if (x < 0) {
x = 0;
}
if (y < 0) {
y = 0;
}
if (y >= s->height) {
y = s->height - 1;
}
if (x >= s->width) {
x = s->width - 1;
}
s->x = x;
s->y = y;
}
static void console_putchar(TextConsole *s, int ch)
{
TextCell *c;
@@ -937,15 +914,11 @@ static void console_putchar(TextConsole *s, int ch)
case TTY_STATE_CSI: /* handle escape sequence parameters */
if (ch >= '0' && ch <= '9') {
if (s->nb_esc_params < MAX_ESC_PARAMS) {
int *param = &s->esc_params[s->nb_esc_params];
int digit = (ch - '0');
*param = (*param <= (INT_MAX - digit) / 10) ?
*param * 10 + digit : INT_MAX;
s->esc_params[s->nb_esc_params] =
s->esc_params[s->nb_esc_params] * 10 + ch - '0';
}
} else {
if (s->nb_esc_params < MAX_ESC_PARAMS)
s->nb_esc_params++;
s->nb_esc_params++;
if (ch == ';')
break;
#ifdef DEBUG_CONSOLE
@@ -959,37 +932,59 @@ static void console_putchar(TextConsole *s, int ch)
if (s->esc_params[0] == 0) {
s->esc_params[0] = 1;
}
set_cursor(s, s->x, s->y - s->esc_params[0]);
s->y -= s->esc_params[0];
if (s->y < 0) {
s->y = 0;
}
break;
case 'B':
/* move cursor down */
if (s->esc_params[0] == 0) {
s->esc_params[0] = 1;
}
set_cursor(s, s->x, s->y + s->esc_params[0]);
s->y += s->esc_params[0];
if (s->y >= s->height) {
s->y = s->height - 1;
}
break;
case 'C':
/* move cursor right */
if (s->esc_params[0] == 0) {
s->esc_params[0] = 1;
}
set_cursor(s, s->x + s->esc_params[0], s->y);
s->x += s->esc_params[0];
if (s->x >= s->width) {
s->x = s->width - 1;
}
break;
case 'D':
/* move cursor left */
if (s->esc_params[0] == 0) {
s->esc_params[0] = 1;
}
set_cursor(s, s->x - s->esc_params[0], s->y);
s->x -= s->esc_params[0];
if (s->x < 0) {
s->x = 0;
}
break;
case 'G':
/* move cursor to column */
set_cursor(s, s->esc_params[0] - 1, s->y);
s->x = s->esc_params[0] - 1;
if (s->x < 0) {
s->x = 0;
}
break;
case 'f':
case 'H':
/* move cursor to row, column */
set_cursor(s, s->esc_params[1] - 1, s->esc_params[0] - 1);
s->x = s->esc_params[1] - 1;
if (s->x < 0) {
s->x = 0;
}
s->y = s->esc_params[0] - 1;
if (s->y < 0) {
s->y = 0;
}
break;
case 'J':
switch (s->esc_params[0]) {
@@ -1088,10 +1083,6 @@ void console_select(unsigned int index)
s = consoles[index];
if (s) {
DisplayState *ds = s->ds;
if (active_console && active_console->cursor_timer) {
qemu_del_timer(active_console->cursor_timer);
}
active_console = s;
if (ds_get_bits_per_pixel(s->ds)) {
ds->surface = qemu_resize_displaysurface(ds, s->g_width, s->g_height);
@@ -1099,10 +1090,6 @@ void console_select(unsigned int index)
s->ds->surface->width = s->width;
s->ds->surface->height = s->height;
}
if (s->cursor_timer) {
qemu_mod_timer(s->cursor_timer,
qemu_get_clock_ms(rt_clock) + CONSOLE_CURSOR_PERIOD / 2);
}
dpy_resize(s->ds);
vga_hw_invalidate();
}
@@ -1467,16 +1454,6 @@ static void text_console_set_echo(CharDriverState *chr, bool echo)
s->echo = echo;
}
static void text_console_update_cursor(void *opaque)
{
TextConsole *s = opaque;
s->cursor_visible_phase = !s->cursor_visible_phase;
vga_hw_invalidate();
qemu_mod_timer(s->cursor_timer,
qemu_get_clock_ms(rt_clock) + CONSOLE_CURSOR_PERIOD / 2);
}
static void text_console_do_init(CharDriverState *chr, DisplayState *ds)
{
TextConsole *s;
@@ -1505,9 +1482,6 @@ static void text_console_do_init(CharDriverState *chr, DisplayState *ds)
s->g_height = ds_get_height(s->ds);
}
s->cursor_timer =
qemu_new_timer_ms(rt_clock, text_console_update_cursor, s);
s->hw_invalidate = text_console_invalidate;
s->hw_text_update = text_console_update;
s->hw = s;
@@ -1614,7 +1588,7 @@ PixelFormat qemu_different_endianness_pixelformat(int bpp)
memset(&pf, 0x00, sizeof(PixelFormat));
pf.bits_per_pixel = bpp;
pf.bytes_per_pixel = DIV_ROUND_UP(bpp, 8);
pf.bytes_per_pixel = bpp / 8;
pf.depth = bpp == 32 ? 24 : bpp;
switch (bpp) {
@@ -1663,12 +1637,13 @@ PixelFormat qemu_default_pixelformat(int bpp)
memset(&pf, 0x00, sizeof(PixelFormat));
pf.bits_per_pixel = bpp;
pf.bytes_per_pixel = DIV_ROUND_UP(bpp, 8);
pf.bytes_per_pixel = bpp / 8;
pf.depth = bpp == 32 ? 24 : bpp;
switch (bpp) {
case 15:
pf.bits_per_pixel = 16;
pf.bytes_per_pixel = 2;
pf.rmask = 0x00007c00;
pf.gmask = 0x000003E0;
pf.bmask = 0x0000001F;
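
A quick check of the bytes_per_pixel expressions above: for bpp = 15, bpp / 8 truncates to 1 while DIV_ROUND_UP(15, 8) yields 2, which is the value the 15-bpp case actually needs; for 8, 16, 24 and 32 bpp the two expressions agree, so only the 15-bpp path behaves differently.
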


@@ -30,10 +30,6 @@
#include "qemu-common.h"
#include "qemu-coroutine-int.h"
#ifdef CONFIG_VALGRIND_H
#include <valgrind/valgrind.h>
#endif
enum {
/* Maximum free pool size prevents holding too many freed coroutines */
POOL_MAX_SIZE = 64,
@@ -47,11 +43,6 @@ typedef struct {
Coroutine base;
void *stack;
jmp_buf env;
#ifdef CONFIG_VALGRIND_H
unsigned int valgrind_stack_id;
#endif
} CoroutineUContext;
/**
@@ -168,11 +159,6 @@ static Coroutine *coroutine_new(void)
uc.uc_stack.ss_size = stack_size;
uc.uc_stack.ss_flags = 0;
#ifdef CONFIG_VALGRIND_H
co->valgrind_stack_id =
VALGRIND_STACK_REGISTER(co->stack, co->stack + stack_size);
#endif
arg.p = co;
makecontext(&uc, (void (*)(void))coroutine_trampoline,
@@ -199,20 +185,6 @@ Coroutine *qemu_coroutine_new(void)
return co;
}
#ifdef CONFIG_VALGRIND_H
#ifdef CONFIG_PRAGMA_DISABLE_UNUSED_BUT_SET
/* Work around an unused variable in the valgrind.h macro... */
#pragma GCC diagnostic ignored "-Wunused-but-set-variable"
#endif
static inline void valgrind_stack_deregister(CoroutineUContext *co)
{
VALGRIND_STACK_DEREGISTER(co->valgrind_stack_id);
}
#ifdef CONFIG_PRAGMA_DISABLE_UNUSED_BUT_SET
#pragma GCC diagnostic error "-Wunused-but-set-variable"
#endif
#endif
void qemu_coroutine_delete(Coroutine *co_)
{
CoroutineUContext *co = DO_UPCAST(CoroutineUContext, base, co_);
@@ -224,10 +196,6 @@ void qemu_coroutine_delete(Coroutine *co_)
return;
}
#ifdef CONFIG_VALGRIND_H
valgrind_stack_deregister(co);
#endif
g_free(co->stack);
g_free(co);
}


@@ -260,34 +260,21 @@ extern unsigned long reserved_va;
#define stfl(p, v) stfl_raw(p, v)
#define stfq(p, v) stfq_raw(p, v)
#ifndef CONFIG_TCG_PASS_AREG0
#define ldub_code(p) ldub_raw(p)
#define ldsb_code(p) ldsb_raw(p)
#define lduw_code(p) lduw_raw(p)
#define ldsw_code(p) ldsw_raw(p)
#define ldl_code(p) ldl_raw(p)
#define ldq_code(p) ldq_raw(p)
#else
#define cpu_ldub_code(env1, p) ldub_raw(p)
#define cpu_ldsb_code(env1, p) ldsb_raw(p)
#define cpu_lduw_code(env1, p) lduw_raw(p)
#define cpu_ldsw_code(env1, p) ldsw_raw(p)
#define cpu_ldl_code(env1, p) ldl_raw(p)
#define cpu_ldq_code(env1, p) ldq_raw(p)
#define cpu_ldub_data(env, addr) ldub_raw(addr)
#define cpu_lduw_data(env, addr) lduw_raw(addr)
#define cpu_ldsw_data(env, addr) ldsw_raw(addr)
#define cpu_ldl_data(env, addr) ldl_raw(addr)
#define cpu_ldq_data(env, addr) ldq_raw(addr)
#define cpu_stb_data(env, addr, data) stb_raw(addr, data)
#define cpu_stw_data(env, addr, data) stw_raw(addr, data)
#define cpu_stl_data(env, addr, data) stl_raw(addr, data)
#define cpu_stq_data(env, addr, data) stq_raw(addr, data)
#define cpu_ldub_kernel(env, addr) ldub_raw(addr)
#define cpu_lduw_kernel(env, addr) lduw_raw(addr)
#define cpu_ldsw_kernel(env, addr) ldsw_raw(addr)
#define cpu_ldl_kernel(env, addr) ldl_raw(addr)
#define cpu_ldq_kernel(env, addr) ldq_raw(addr)
#define cpu_stb_kernel(env, addr, data) stb_raw(addr, data)
#define cpu_stw_kernel(env, addr, data) stw_raw(addr, data)
#define cpu_stl_kernel(env, addr, data) stl_raw(addr, data)
#define cpu_stq_kernel(env, addr, data) stq_raw(addr, data)
#endif
#define ldub_kernel(p) ldub_raw(p)
#define ldsb_kernel(p) ldsb_raw(p)
@@ -304,13 +291,6 @@ extern unsigned long reserved_va;
#define stfl_kernel(p, v) stfl_raw(p, v)
#define stfq_kernel(p, vt) stfq_raw(p, v)
#define cpu_ldub_data(env, addr) ldub_raw(addr)
#define cpu_lduw_data(env, addr) lduw_raw(addr)
#define cpu_ldl_data(env, addr) ldl_raw(addr)
#define cpu_stb_data(env, addr, data) stb_raw(addr, data)
#define cpu_stw_data(env, addr, data) stw_raw(addr, data)
#define cpu_stl_data(env, addr, data) stl_raw(addr, data)
#endif /* defined(CONFIG_USER_ONLY) */
/* page related stuff */
@@ -463,9 +443,34 @@ void cpu_watchpoint_remove_all(CPUArchState *env, int mask);
#define SSTEP_NOTIMER 0x4 /* Do not Timers while single stepping */
void cpu_single_step(CPUArchState *env, int enabled);
void cpu_state_reset(CPUArchState *s);
int cpu_is_stopped(CPUArchState *env);
void run_on_cpu(CPUArchState *env, void (*func)(void *data), void *data);
#define CPU_LOG_TB_OUT_ASM (1 << 0)
#define CPU_LOG_TB_IN_ASM (1 << 1)
#define CPU_LOG_TB_OP (1 << 2)
#define CPU_LOG_TB_OP_OPT (1 << 3)
#define CPU_LOG_INT (1 << 4)
#define CPU_LOG_EXEC (1 << 5)
#define CPU_LOG_PCALL (1 << 6)
#define CPU_LOG_IOPORT (1 << 7)
#define CPU_LOG_TB_CPU (1 << 8)
#define CPU_LOG_RESET (1 << 9)
/* define log items */
typedef struct CPULogItem {
int mask;
const char *name;
const char *help;
} CPULogItem;
extern const CPULogItem cpu_log_items[];
void cpu_set_log(int log_flags);
void cpu_set_log_filename(const char *filename);
int cpu_str_to_log_mask(const char *str);
#if !defined(CONFIG_USER_ONLY)
/* Return the physical page corresponding to a virtual one. Use it
@@ -497,7 +502,6 @@ typedef struct RAMBlock {
typedef struct RAMList {
uint8_t *phys_dirty;
QLIST_HEAD(, RAMBlock) blocks;
uint64_t dirty_pages;
} RAMList;
extern RAMList ram_list;


@@ -3,7 +3,9 @@
/* CPU interfaces that are target independent. */
#ifdef TARGET_PHYS_ADDR_BITS
#include "targphys.h"
#endif
#ifndef NEED_CPU_H
#include "poison.h"
@@ -69,8 +71,6 @@ void cpu_physical_memory_unmap(void *buffer, target_phys_addr_t len,
void *cpu_register_map_client(void *opaque, void (*callback)(void *opaque));
void cpu_unregister_map_client(void *cookie);
bool cpu_physical_memory_is_io(target_phys_addr_t phys_addr);
/* Coalesced MMIO regions are areas where write operations can be reordered.
* This usually implies that write operations are side-effect free. This allows
* batching which can make a major impact on performance when using


@@ -151,6 +151,14 @@ typedef struct CPUWatchpoint {
QTAILQ_ENTRY(CPUWatchpoint) entry;
} CPUWatchpoint;
#ifdef _WIN32
#define CPU_COMMON_THREAD \
void *hThread;
#else
#define CPU_COMMON_THREAD
#endif
#define CPU_TEMP_BUF_NLONGS 128
#define CPU_COMMON \
struct TranslationBlock *current_tb; /* currently executing TB */ \
@@ -208,7 +216,10 @@ typedef struct CPUWatchpoint {
uint32_t created; \
uint32_t stop; /* Stop request */ \
uint32_t stopped; /* Artificially stopped */ \
struct QemuThread *thread; \
CPU_COMMON_THREAD \
struct QemuCond *halt_cond; \
int thread_kicked; \
struct qemu_work_item *queued_work_first, *queued_work_last; \
const char *cpu_model_str; \
struct KVMState *kvm_state; \


@@ -156,9 +156,12 @@ static inline TranslationBlock *tb_find_fast(CPUArchState *env)
static CPUDebugExcpHandler *debug_excp_handler;
void cpu_set_debug_excp_handler(CPUDebugExcpHandler *handler)
CPUDebugExcpHandler *cpu_set_debug_excp_handler(CPUDebugExcpHandler *handler)
{
CPUDebugExcpHandler *old_handler = debug_excp_handler;
debug_excp_handler = handler;
return old_handler;
}
static void cpu_handle_debug_exception(CPUArchState *env)
@@ -181,9 +184,6 @@ volatile sig_atomic_t exit_request;
int cpu_exec(CPUArchState *env)
{
#ifdef TARGET_PPC
CPUState *cpu = ENV_GET_CPU(env);
#endif
int ret, interrupt_request;
TranslationBlock *tb;
uint8_t *tc_ptr;
@@ -222,7 +222,6 @@ int cpu_exec(CPUArchState *env)
#elif defined(TARGET_LM32)
#elif defined(TARGET_MICROBLAZE)
#elif defined(TARGET_MIPS)
#elif defined(TARGET_OPENRISC)
#elif defined(TARGET_SH4)
#elif defined(TARGET_CRIS)
#elif defined(TARGET_S390X)
@@ -286,25 +285,17 @@ int cpu_exec(CPUArchState *env)
}
#endif
#if defined(TARGET_I386)
#if !defined(CONFIG_USER_ONLY)
if (interrupt_request & CPU_INTERRUPT_POLL) {
env->interrupt_request &= ~CPU_INTERRUPT_POLL;
apic_poll_irq(env->apic_state);
}
#endif
if (interrupt_request & CPU_INTERRUPT_INIT) {
cpu_svm_check_intercept_param(env, SVM_EXIT_INIT,
0);
do_cpu_init(x86_env_get_cpu(env));
svm_check_intercept(env, SVM_EXIT_INIT);
do_cpu_init(env);
env->exception_index = EXCP_HALTED;
cpu_loop_exit(env);
} else if (interrupt_request & CPU_INTERRUPT_SIPI) {
do_cpu_sipi(x86_env_get_cpu(env));
do_cpu_sipi(env);
} else if (env->hflags2 & HF2_GIF_MASK) {
if ((interrupt_request & CPU_INTERRUPT_SMI) &&
!(env->hflags & HF_SMM_MASK)) {
cpu_svm_check_intercept_param(env, SVM_EXIT_SMI,
0);
svm_check_intercept(env, SVM_EXIT_SMI);
env->interrupt_request &= ~CPU_INTERRUPT_SMI;
do_smm_enter(env);
next_tb = 0;
@@ -325,8 +316,7 @@ int cpu_exec(CPUArchState *env)
(env->eflags & IF_MASK &&
!(env->hflags & HF_INHIBIT_IRQ_MASK))))) {
int intno;
cpu_svm_check_intercept_param(env, SVM_EXIT_INTR,
0);
svm_check_intercept(env, SVM_EXIT_INTR);
env->interrupt_request &= ~(CPU_INTERRUPT_HARD | CPU_INTERRUPT_VIRQ);
intno = cpu_get_pic_interrupt(env);
qemu_log_mask(CPU_LOG_TB_IN_ASM, "Servicing hardware INT=0x%02x\n", intno);
@@ -340,8 +330,7 @@ int cpu_exec(CPUArchState *env)
!(env->hflags & HF_INHIBIT_IRQ_MASK)) {
int intno;
/* FIXME: this should respect TPR */
cpu_svm_check_intercept_param(env, SVM_EXIT_VINTR,
0);
svm_check_intercept(env, SVM_EXIT_VINTR);
intno = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_vector));
qemu_log_mask(CPU_LOG_TB_IN_ASM, "Servicing virtual hardware INT=0x%02x\n", intno);
do_interrupt_x86_hardirq(env, intno, 1);
@@ -352,7 +341,7 @@ int cpu_exec(CPUArchState *env)
}
#elif defined(TARGET_PPC)
if ((interrupt_request & CPU_INTERRUPT_RESET)) {
cpu_reset(cpu);
cpu_state_reset(env);
}
if (interrupt_request & CPU_INTERRUPT_HARD) {
ppc_hw_interrupt(env);
@@ -385,23 +374,6 @@ int cpu_exec(CPUArchState *env)
do_interrupt(env);
next_tb = 0;
}
#elif defined(TARGET_OPENRISC)
{
int idx = -1;
if ((interrupt_request & CPU_INTERRUPT_HARD)
&& (env->sr & SR_IEE)) {
idx = EXCP_INT;
}
if ((interrupt_request & CPU_INTERRUPT_TIMER)
&& (env->sr & SR_TEE)) {
idx = EXCP_TICK;
}
if (idx >= 0) {
env->exception_index = idx;
do_interrupt(env);
next_tb = 0;
}
}
#elif defined(TARGET_SPARC)
if (interrupt_request & CPU_INTERRUPT_HARD) {
if (cpu_interrupts_enabled(env) &&
@@ -444,7 +416,6 @@ int cpu_exec(CPUArchState *env)
#elif defined(TARGET_UNICORE32)
if (interrupt_request & CPU_INTERRUPT_HARD
&& !(env->uncached_asr & ASR_I)) {
env->exception_index = UC32_EXCP_INTR;
do_interrupt(env);
next_tb = 0;
}
@@ -493,18 +464,11 @@ int cpu_exec(CPUArchState *env)
do_interrupt(env);
next_tb = 0;
}
if (interrupt_request & CPU_INTERRUPT_NMI) {
unsigned int m_flag_archval;
if (env->pregs[PR_VR] < 32) {
m_flag_archval = M_FLAG_V10;
} else {
m_flag_archval = M_FLAG_V32;
}
if ((env->pregs[PR_CCS] & m_flag_archval)) {
env->exception_index = EXCP_NMI;
do_interrupt(env);
next_tb = 0;
}
if (interrupt_request & CPU_INTERRUPT_NMI
&& (env->pregs[PR_CCS] & M_FLAG)) {
env->exception_index = EXCP_NMI;
do_interrupt(env);
next_tb = 0;
}
#elif defined(TARGET_M68K)
if (interrupt_request & CPU_INTERRUPT_HARD
@@ -656,7 +620,6 @@ int cpu_exec(CPUArchState *env)
| env->cc_dest | (env->cc_x << 4);
#elif defined(TARGET_MICROBLAZE)
#elif defined(TARGET_MIPS)
#elif defined(TARGET_OPENRISC)
#elif defined(TARGET_SH4)
#elif defined(TARGET_ALPHA)
#elif defined(TARGET_CRIS)

112 cpus.c

@@ -36,7 +36,6 @@
#include "cpus.h"
#include "qtest.h"
#include "main-loop.h"
#include "bitmap.h"
#ifndef _WIN32
#include "compatfd.h"
@@ -62,33 +61,6 @@
static CPUArchState *next_cpu;
static bool cpu_thread_is_idle(CPUArchState *env)
{
if (env->stop || env->queued_work_first) {
return false;
}
if (env->stopped || !runstate_is_running()) {
return true;
}
if (!env->halted || qemu_cpu_has_work(env) ||
kvm_async_interrupts_enabled()) {
return false;
}
return true;
}
static bool all_cpu_threads_idle(void)
{
CPUArchState *env;
for (env = first_cpu; env != NULL; env = env->next_cpu) {
if (!cpu_thread_is_idle(env)) {
return false;
}
}
return true;
}
/***********************************************************/
/* guest cycle counter */
@@ -461,6 +433,32 @@ static int cpu_can_run(CPUArchState *env)
return 1;
}
static bool cpu_thread_is_idle(CPUArchState *env)
{
if (env->stop || env->queued_work_first) {
return false;
}
if (env->stopped || !runstate_is_running()) {
return true;
}
if (!env->halted || qemu_cpu_has_work(env) || kvm_irqchip_in_kernel()) {
return false;
}
return true;
}
bool all_cpu_threads_idle(void)
{
CPUArchState *env;
for (env = first_cpu; env != NULL; env = env->next_cpu) {
if (!cpu_thread_is_idle(env)) {
return false;
}
}
return true;
}
static void cpu_handle_guest_debug(CPUArchState *env)
{
gdb_set_stop_cpu(env);
@@ -688,15 +686,13 @@ static void flush_queued_work(CPUArchState *env)
static void qemu_wait_io_event_common(CPUArchState *env)
{
CPUState *cpu = ENV_GET_CPU(env);
if (env->stop) {
env->stop = 0;
env->stopped = 1;
qemu_cond_signal(&qemu_pause_cond);
}
flush_queued_work(env);
cpu->thread_kicked = false;
env->thread_kicked = false;
}
static void qemu_tcg_wait_io_event(void)
@@ -732,11 +728,10 @@ static void qemu_kvm_wait_io_event(CPUArchState *env)
static void *qemu_kvm_cpu_thread_fn(void *arg)
{
CPUArchState *env = arg;
CPUState *cpu = ENV_GET_CPU(env);
int r;
qemu_mutex_lock(&qemu_global_mutex);
qemu_thread_get_self(cpu->thread);
qemu_thread_get_self(env->thread);
env->thread_id = qemu_get_thread_id();
cpu_single_env = env;
@@ -772,12 +767,11 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
exit(1);
#else
CPUArchState *env = arg;
CPUState *cpu = ENV_GET_CPU(env);
sigset_t waitset;
int r;
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
qemu_thread_get_self(env->thread);
env->thread_id = qemu_get_thread_id();
sigemptyset(&waitset);
@@ -813,10 +807,9 @@ static void tcg_exec_all(void);
static void *qemu_tcg_cpu_thread_fn(void *arg)
{
CPUArchState *env = arg;
CPUState *cpu = ENV_GET_CPU(env);
qemu_tcg_init_cpu_signals();
qemu_thread_get_self(cpu->thread);
qemu_thread_get_self(env->thread);
/* signal CPU creation */
qemu_mutex_lock(&qemu_global_mutex);
@@ -849,20 +842,19 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
static void qemu_cpu_kick_thread(CPUArchState *env)
{
CPUState *cpu = ENV_GET_CPU(env);
#ifndef _WIN32
int err;
err = pthread_kill(cpu->thread->thread, SIG_IPI);
err = pthread_kill(env->thread->thread, SIG_IPI);
if (err) {
fprintf(stderr, "qemu:%s: %s", __func__, strerror(err));
exit(1);
}
#else /* _WIN32 */
if (!qemu_cpu_is_self(env)) {
SuspendThread(cpu->hThread);
SuspendThread(env->hThread);
cpu_signal(0);
ResumeThread(cpu->hThread);
ResumeThread(env->hThread);
}
#endif
}
@@ -870,12 +862,11 @@ static void qemu_cpu_kick_thread(CPUArchState *env)
void qemu_cpu_kick(void *_env)
{
CPUArchState *env = _env;
CPUState *cpu = ENV_GET_CPU(env);
qemu_cond_broadcast(env->halt_cond);
if (!tcg_enabled() && !cpu->thread_kicked) {
if (!tcg_enabled() && !env->thread_kicked) {
qemu_cpu_kick_thread(env);
cpu->thread_kicked = true;
env->thread_kicked = true;
}
}
@@ -883,11 +874,10 @@ void qemu_cpu_kick_self(void)
{
#ifndef _WIN32
assert(cpu_single_env);
CPUState *cpu_single_cpu = ENV_GET_CPU(cpu_single_env);
if (!cpu_single_cpu->thread_kicked) {
if (!cpu_single_env->thread_kicked) {
qemu_cpu_kick_thread(cpu_single_env);
cpu_single_cpu->thread_kicked = true;
cpu_single_env->thread_kicked = true;
}
#else
abort();
@@ -897,9 +887,8 @@ void qemu_cpu_kick_self(void)
int qemu_cpu_is_self(void *_env)
{
CPUArchState *env = _env;
CPUState *cpu = ENV_GET_CPU(env);
return qemu_thread_is_self(cpu->thread);
return qemu_thread_is_self(env->thread);
}
void qemu_mutex_lock_iothread(void)
@@ -985,37 +974,34 @@ void resume_all_vcpus(void)
static void qemu_tcg_init_vcpu(void *_env)
{
CPUArchState *env = _env;
CPUState *cpu = ENV_GET_CPU(env);
/* share a single thread for all cpus with TCG */
if (!tcg_cpu_thread) {
cpu->thread = g_malloc0(sizeof(QemuThread));
env->thread = g_malloc0(sizeof(QemuThread));
env->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(env->halt_cond);
tcg_halt_cond = env->halt_cond;
qemu_thread_create(cpu->thread, qemu_tcg_cpu_thread_fn, env,
qemu_thread_create(env->thread, qemu_tcg_cpu_thread_fn, env,
QEMU_THREAD_JOINABLE);
#ifdef _WIN32
cpu->hThread = qemu_thread_get_handle(cpu->thread);
env->hThread = qemu_thread_get_handle(env->thread);
#endif
while (env->created == 0) {
qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
}
tcg_cpu_thread = cpu->thread;
tcg_cpu_thread = env->thread;
} else {
cpu->thread = tcg_cpu_thread;
env->thread = tcg_cpu_thread;
env->halt_cond = tcg_halt_cond;
}
}
static void qemu_kvm_start_vcpu(CPUArchState *env)
{
CPUState *cpu = ENV_GET_CPU(env);
cpu->thread = g_malloc0(sizeof(QemuThread));
env->thread = g_malloc0(sizeof(QemuThread));
env->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(env->halt_cond);
qemu_thread_create(cpu->thread, qemu_kvm_cpu_thread_fn, env,
qemu_thread_create(env->thread, qemu_kvm_cpu_thread_fn, env,
QEMU_THREAD_JOINABLE);
while (env->created == 0) {
qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
@@ -1024,12 +1010,10 @@ static void qemu_kvm_start_vcpu(CPUArchState *env)
static void qemu_dummy_start_vcpu(CPUArchState *env)
{
CPUState *cpu = ENV_GET_CPU(env);
cpu->thread = g_malloc0(sizeof(QemuThread));
env->thread = g_malloc0(sizeof(QemuThread));
env->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(env->halt_cond);
qemu_thread_create(cpu->thread, qemu_dummy_cpu_thread_fn, env,
qemu_thread_create(env->thread, qemu_dummy_cpu_thread_fn, env,
QEMU_THREAD_JOINABLE);
while (env->created == 0) {
qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
@@ -1161,7 +1145,7 @@ void set_numa_modes(void)
for (env = first_cpu; env != NULL; env = env->next_cpu) {
for (i = 0; i < nb_numa_nodes; i++) {
if (test_bit(env->cpu_index, node_cpumask[i])) {
if (node_cpumask[i] & (1 << env->cpu_index)) {
env->numa_node = i;
}
}

View File

@@ -312,9 +312,7 @@ void tlb_set_page(CPUArchState *env, target_ulong vaddr,
/* NOTE: this function can trigger an exception */
/* NOTE2: the returned address is not exactly the physical address: it
* is actually a ram_addr_t (in system mode; the user mode emulation
* version of this function returns a guest virtual address).
*/
is the offset relative to phys_ram_base */
tb_page_addr_t get_page_addr_code(CPUArchState *env1, target_ulong addr)
{
int mmu_idx, page_index, pd;
@@ -325,7 +323,11 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env1, target_ulong addr)
mmu_idx = cpu_mmu_index(env1);
if (unlikely(env1->tlb_table[mmu_idx][page_index].addr_code !=
(addr & TARGET_PAGE_MASK))) {
#ifdef CONFIG_TCG_PASS_AREG0
cpu_ldub_code(env1, addr);
#else
ldub_code(addr);
#endif
}
pd = env1->iotlb[mmu_idx][page_index] & ~TARGET_PAGE_MASK;
mr = iotlb_to_region(pd);
@@ -344,6 +346,7 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env1, target_ulong addr)
#define MMUSUFFIX _cmmu
#undef GETPC
#define GETPC() ((uintptr_t)0)
#define env cpu_single_env
#define SOFTMMU_CODE_ACCESS
#define SHIFT 0

272 cutils.c

@@ -26,14 +26,6 @@
#include <math.h>
#include "qemu_socket.h"
#include "iov.h"
void strpadcpy(char *buf, int buf_size, const char *str, char pad)
{
int len = qemu_strnlen(str, buf_size);
memcpy(buf, str, len);
memset(buf + len, pad, buf_size - len);
}
void pstrcpy(char *buf, int buf_size, const char *str)
{
@@ -115,7 +107,7 @@ time_t mktimegm(struct tm *tm)
m += 12;
y--;
}
t = 86400ULL * (d + (153 * m - 457) / 5 + 365 * y + y / 4 - y / 100 +
t = 86400 * (d + (153 * m - 457) / 5 + 365 * y + y / 4 - y / 100 +
y / 400 - 719469);
t += 3600 * tm->tm_hour + 60 * tm->tm_min + tm->tm_sec;
return t;
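
The ULL suffix is what matters in the hunk above: assuming the usual int locals for d, m and y, the parenthesised day count passes 2^31 / 86400 ~ 24855 days around 19 January 2038, after which a plain 86400 * days multiplication overflows a 32-bit int before the result is widened to time_t; writing the constant as 86400ULL forces the whole multiplication into 64-bit arithmetic.
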
@@ -179,34 +171,48 @@ void qemu_iovec_add(QEMUIOVector *qiov, void *base, size_t len)
}
/*
* Concatenates (partial) iovecs from src to the end of dst.
* It starts copying after skipping `soffset' bytes at the
* beginning of src and adds individual vectors from src to
* dst copies up to `sbytes' bytes total, or up to the end
* of src if it comes first. This way, it is okay to specify
* very large value for `sbytes' to indicate "up to the end
* of src".
* Only vector pointers are processed, not the actual data buffers.
* Copies iovecs from src to the end of dst. It starts copying after skipping
* the given number of bytes in src and copies until src is completely copied
* or the total size of the copied iovec reaches size. The size of the last
* copied iovec is changed in order to fit the specified total size if it isn't
* a perfect fit already.
*/
void qemu_iovec_concat(QEMUIOVector *dst,
QEMUIOVector *src, size_t soffset, size_t sbytes)
void qemu_iovec_copy(QEMUIOVector *dst, QEMUIOVector *src, uint64_t skip,
size_t size)
{
int i;
size_t done;
struct iovec *siov = src->iov;
void *iov_base;
uint64_t iov_len;
assert(dst->nalloc != -1);
assert(src->size >= soffset);
for (i = 0, done = 0; done < sbytes && i < src->niov; i++) {
if (soffset < siov[i].iov_len) {
size_t len = MIN(siov[i].iov_len - soffset, sbytes - done);
qemu_iovec_add(dst, siov[i].iov_base + soffset, len);
done += len;
soffset = 0;
done = 0;
for (i = 0; (i < src->niov) && (done != size); i++) {
if (skip >= src->iov[i].iov_len) {
/* Skip the whole iov */
skip -= src->iov[i].iov_len;
continue;
} else {
soffset -= siov[i].iov_len;
/* Skip only part (or nothing) of the iov */
iov_base = (uint8_t*) src->iov[i].iov_base + skip;
iov_len = src->iov[i].iov_len - skip;
skip = 0;
}
if (done + iov_len > size) {
qemu_iovec_add(dst, iov_base, size - done);
break;
} else {
qemu_iovec_add(dst, iov_base, iov_len);
}
done += iov_len;
}
/* return done; */
}
void qemu_iovec_concat(QEMUIOVector *dst, QEMUIOVector *src, size_t size)
{
qemu_iovec_copy(dst, src, 0, size);
}
void qemu_iovec_destroy(QEMUIOVector *qiov)
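
A usage sketch of the skip/size semantics implemented above (illustrative only: buf_a and buf_b are hypothetical 512-byte buffers, and the four-argument qemu_iovec_concat(dst, src, soffset, sbytes) variant shown in the same hunk follows the same rules):

    QEMUIOVector src, dst;
    qemu_iovec_init(&src, 2);
    qemu_iovec_add(&src, buf_a, 512);
    qemu_iovec_add(&src, buf_b, 512);
    qemu_iovec_init(&dst, 2);
    /* Skip the first 768 bytes of src, then take 256 bytes: the whole first
     * vector is skipped and dst gets a single entry pointing 256 bytes into
     * buf_b.  Only the iovec pointers are copied, never the data itself. */
    qemu_iovec_copy(&dst, &src, 768, 256);
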
@@ -227,22 +233,74 @@ void qemu_iovec_reset(QEMUIOVector *qiov)
qiov->size = 0;
}
size_t qemu_iovec_to_buf(QEMUIOVector *qiov, size_t offset,
void *buf, size_t bytes)
void qemu_iovec_to_buffer(QEMUIOVector *qiov, void *buf)
{
return iov_to_buf(qiov->iov, qiov->niov, offset, buf, bytes);
uint8_t *p = (uint8_t *)buf;
int i;
for (i = 0; i < qiov->niov; ++i) {
memcpy(p, qiov->iov[i].iov_base, qiov->iov[i].iov_len);
p += qiov->iov[i].iov_len;
}
}
size_t qemu_iovec_from_buf(QEMUIOVector *qiov, size_t offset,
const void *buf, size_t bytes)
void qemu_iovec_from_buffer(QEMUIOVector *qiov, const void *buf, size_t count)
{
return iov_from_buf(qiov->iov, qiov->niov, offset, buf, bytes);
const uint8_t *p = (const uint8_t *)buf;
size_t copy;
int i;
for (i = 0; i < qiov->niov && count; ++i) {
copy = count;
if (copy > qiov->iov[i].iov_len)
copy = qiov->iov[i].iov_len;
memcpy(qiov->iov[i].iov_base, p, copy);
p += copy;
count -= copy;
}
}
size_t qemu_iovec_memset(QEMUIOVector *qiov, size_t offset,
int fillc, size_t bytes)
void qemu_iovec_memset(QEMUIOVector *qiov, int c, size_t count)
{
return iov_memset(qiov->iov, qiov->niov, offset, fillc, bytes);
size_t n;
int i;
for (i = 0; i < qiov->niov && count; ++i) {
n = MIN(count, qiov->iov[i].iov_len);
memset(qiov->iov[i].iov_base, c, n);
count -= n;
}
}
void qemu_iovec_memset_skip(QEMUIOVector *qiov, int c, size_t count,
size_t skip)
{
int i;
size_t done;
void *iov_base;
uint64_t iov_len;
done = 0;
for (i = 0; (i < qiov->niov) && (done != count); i++) {
if (skip >= qiov->iov[i].iov_len) {
/* Skip the whole iov */
skip -= qiov->iov[i].iov_len;
continue;
} else {
/* Skip only part (or nothing) of the iov */
iov_base = (uint8_t*) qiov->iov[i].iov_base + skip;
iov_len = qiov->iov[i].iov_len - skip;
skip = 0;
}
if (done + iov_len > count) {
memset(iov_base, c, count - done);
break;
} else {
memset(iov_base, c, iov_len);
}
done += iov_len;
}
}
/*
@@ -383,49 +441,111 @@ int qemu_parse_fd(const char *param)
return fd;
}
int qemu_parse_fdset(const char *param)
{
return qemu_parse_fd(param);
}
/* round down to the nearest power of 2*/
int64_t pow2floor(int64_t value)
{
if (!is_power_of_2(value)) {
value = 0x8000000000000000ULL >> clz64(value);
}
return value;
}
/*
* Implementation of ULEB128 (http://en.wikipedia.org/wiki/LEB128)
* Input is limited to 14-bit numbers
* Send/recv data with iovec buffers
*
* This function sends/receives data from/to the iovec buffer directly.
* The first `offset' bytes in the iovec buffer are skipped and the next
* `len' bytes are used.
*
* For example,
*
* do_sendv_recvv(sockfd, iov, len, offset, 1);
*
* is equal to
*
* char *buf = malloc(size);
* iov_to_buf(iov, iovcnt, buf, offset, size);
* send(sockfd, buf, size, 0);
* free(buf);
*/
int uleb128_encode_small(uint8_t *out, uint32_t n)
static int do_sendv_recvv(int sockfd, struct iovec *iov, int len, int offset,
int do_sendv)
{
g_assert(n <= 0x3fff);
if (n < 0x80) {
*out++ = n;
return 1;
} else {
*out++ = (n & 0x7f) | 0x80;
*out++ = n >> 7;
return 2;
int ret, diff, iovlen;
struct iovec *last_iov;
/* last_iov is inclusive, so count from one. */
iovlen = 1;
last_iov = iov;
len += offset;
while (last_iov->iov_len < len) {
len -= last_iov->iov_len;
last_iov++;
iovlen++;
}
diff = last_iov->iov_len - len;
last_iov->iov_len -= diff;
while (iov->iov_len <= offset) {
offset -= iov->iov_len;
iov++;
iovlen--;
}
iov->iov_base = (char *) iov->iov_base + offset;
iov->iov_len -= offset;
{
#if defined CONFIG_IOVEC && defined CONFIG_POSIX
struct msghdr msg;
memset(&msg, 0, sizeof(msg));
msg.msg_iov = iov;
msg.msg_iovlen = iovlen;
do {
if (do_sendv) {
ret = sendmsg(sockfd, &msg, 0);
} else {
ret = recvmsg(sockfd, &msg, 0);
}
} while (ret == -1 && errno == EINTR);
#else
struct iovec *p = iov;
ret = 0;
while (iovlen > 0) {
int rc;
if (do_sendv) {
rc = send(sockfd, p->iov_base, p->iov_len, 0);
} else {
rc = qemu_recv(sockfd, p->iov_base, p->iov_len, 0);
}
if (rc == -1) {
if (errno == EINTR) {
continue;
}
if (ret == 0) {
ret = -1;
}
break;
}
if (rc == 0) {
break;
}
ret += rc;
iovlen--, p++;
}
#endif
}
/* Undo the changes above */
iov->iov_base = (char *) iov->iov_base - offset;
iov->iov_len += offset;
last_iov->iov_len += diff;
return ret;
}
int uleb128_decode_small(const uint8_t *in, uint32_t *n)
int qemu_recvv(int sockfd, struct iovec *iov, int len, int iov_offset)
{
if (!(*in & 0x80)) {
*n = *in++;
return 1;
} else {
*n = *in++ & 0x7f;
/* we exceed 14 bit number */
if (*in & 0x80) {
return -1;
}
*n |= *in++ << 7;
return 2;
}
return do_sendv_recvv(sockfd, iov, len, iov_offset, 0);
}
int qemu_sendv(int sockfd, struct iovec *iov, int len, int iov_offset)
{
return do_sendv_recvv(sockfd, iov, len, iov_offset, 1);
}
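
Two quick checks of the small helpers shown above (a sketch, e.g. inside a test function with <assert.h> included; not part of the diff):

    uint8_t buf[2];
    uint32_t v;
    assert(pow2floor(1000) == 512);                 /* largest power of two <= 1000 */
    assert(uleb128_encode_small(buf, 300) == 2);    /* 300 encodes as 0xac 0x02 */
    assert(buf[0] == 0xac && buf[1] == 0x02);
    assert(uleb128_decode_small(buf, &v) == 2 && v == 300);
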


@@ -128,8 +128,6 @@
#define DEF_HELPER_5(name, ret, t1, t2, t3, t4, t5) \
DEF_HELPER_FLAGS_5(name, 0, ret, t1, t2, t3, t4, t5)
/* MAX_OPC_PARAM_IARGS must be set to n if last entry is DEF_HELPER_FLAGS_n. */
#endif /* DEF_HELPER_H */
#ifndef GEN_HELPER


@@ -27,21 +27,3 @@ CONFIG_SMC91C111=y
CONFIG_DS1338=y
CONFIG_PFLASH_CFI01=y
CONFIG_PFLASH_CFI02=y
CONFIG_ARM_TIMER=y
CONFIG_PL011=y
CONFIG_PL022=y
CONFIG_PL031=y
CONFIG_PL041=y
CONFIG_PL050=y
CONFIG_PL061=y
CONFIG_PL080=y
CONFIG_PL110=y
CONFIG_PL181=y
CONFIG_PL190=y
CONFIG_PL310=y
CONFIG_CADENCE=y
CONFIG_XGMAC=y
CONFIG_VERSATILE_PCI=y
CONFIG_VERSATILE_I2C=y


@@ -3,5 +3,3 @@
CONFIG_PTIMER=y
CONFIG_PFLASH_CFI01=y
CONFIG_SERIAL=y
CONFIG_XILINX=y
CONFIG_XILINX_AXI=y


@@ -3,5 +3,3 @@
CONFIG_PTIMER=y
CONFIG_PFLASH_CFI01=y
CONFIG_SERIAL=y
CONFIG_XILINX=y
CONFIG_XILINX_AXI=y


@@ -1 +0,0 @@
# Default configuration for or32-linux-user


@@ -1,4 +0,0 @@
# Default configuration for or32-softmmu
CONFIG_SERIAL=y
CONFIG_OPENCORES_ETH=y


@@ -10,12 +10,9 @@ CONFIG_EEPRO100_PCI=y
CONFIG_PCNET_PCI=y
CONFIG_PCNET_COMMON=y
CONFIG_LSI_SCSI_PCI=y
CONFIG_MEGASAS_SCSI_PCI=y
CONFIG_RTL8139_PCI=y
CONFIG_E1000_PCI=y
CONFIG_IDE_CORE=y
CONFIG_IDE_QDEV=y
CONFIG_IDE_PCI=y
CONFIG_AHCI=y
CONFIG_ESP=y
CONFIG_ESP_PCI=y


@@ -36,4 +36,3 @@ CONFIG_PFLASH_CFI01=y
CONFIG_PFLASH_CFI02=y
CONFIG_PTIMER=y
CONFIG_I8259=y
CONFIG_XILINX=y


@@ -33,4 +33,3 @@ CONFIG_PFLASH_CFI01=y
CONFIG_PFLASH_CFI02=y
CONFIG_PTIMER=y
CONFIG_I8259=y
CONFIG_XILINX=y


@@ -33,4 +33,3 @@ CONFIG_PFLASH_CFI01=y
CONFIG_PFLASH_CFI02=y
CONFIG_PTIMER=y
CONFIG_I8259=y
CONFIG_XILINX=y


@@ -6,6 +6,7 @@ CONFIG_M48T59=y
CONFIG_PTIMER=y
CONFIG_VGA=y
CONFIG_VGA_PCI=y
CONFIG_VGA_CIRRUS=y
CONFIG_SERIAL=y
CONFIG_PARALLEL=y
CONFIG_PCKBD=y


@@ -1,4 +0,0 @@
# Default configuration for unicore32-softmmu
CONFIG_PUV3=y
CONFIG_PTIMER=y
CONFIG_PCKBD=y


@@ -22,48 +22,9 @@
#include "qemu-common.h"
#include "device_tree.h"
#include "hw/loader.h"
#include "qemu-option.h"
#include "qemu-config.h"
#include <libfdt.h>
#define FDT_MAX_SIZE 0x10000
void *create_device_tree(int *sizep)
{
void *fdt;
int ret;
*sizep = FDT_MAX_SIZE;
fdt = g_malloc0(FDT_MAX_SIZE);
ret = fdt_create(fdt, FDT_MAX_SIZE);
if (ret < 0) {
goto fail;
}
ret = fdt_begin_node(fdt, "");
if (ret < 0) {
goto fail;
}
ret = fdt_end_node(fdt);
if (ret < 0) {
goto fail;
}
ret = fdt_finish(fdt);
if (ret < 0) {
goto fail;
}
ret = fdt_open_into(fdt, fdt, *sizep);
if (ret) {
fprintf(stderr, "Unable to copy device tree in memory\n");
exit(1);
}
return fdt;
fail:
fprintf(stderr, "%s Couldn't create dt: %s\n", __func__, fdt_strerror(ret));
exit(1);
}
void *load_device_tree(const char *filename_path, int *sizep)
{
int dt_size;
@@ -127,7 +88,7 @@ static int findnode_nofail(void *fdt, const char *node_path)
}
int qemu_devtree_setprop(void *fdt, const char *node_path,
const char *property, const void *val_array, int size)
const char *property, void *val_array, int size)
{
int r;
@@ -156,13 +117,6 @@ int qemu_devtree_setprop_cell(void *fdt, const char *node_path,
return r;
}
int qemu_devtree_setprop_u64(void *fdt, const char *node_path,
const char *property, uint64_t val)
{
val = cpu_to_be64(val);
return qemu_devtree_setprop(fdt, node_path, property, &val, sizeof(val));
}
int qemu_devtree_setprop_string(void *fdt, const char *node_path,
const char *property, const char *string)
{
@@ -178,89 +132,6 @@ int qemu_devtree_setprop_string(void *fdt, const char *node_path,
return r;
}
const void *qemu_devtree_getprop(void *fdt, const char *node_path,
const char *property, int *lenp)
{
int len;
const void *r;
if (!lenp) {
lenp = &len;
}
r = fdt_getprop(fdt, findnode_nofail(fdt, node_path), property, lenp);
if (!r) {
fprintf(stderr, "%s: Couldn't get %s/%s: %s\n", __func__,
node_path, property, fdt_strerror(*lenp));
exit(1);
}
return r;
}
uint32_t qemu_devtree_getprop_cell(void *fdt, const char *node_path,
const char *property)
{
int len;
const uint32_t *p = qemu_devtree_getprop(fdt, node_path, property, &len);
if (len != 4) {
fprintf(stderr, "%s: %s/%s not 4 bytes long (not a cell?)\n",
__func__, node_path, property);
exit(1);
}
return be32_to_cpu(*p);
}
uint32_t qemu_devtree_get_phandle(void *fdt, const char *path)
{
uint32_t r;
r = fdt_get_phandle(fdt, findnode_nofail(fdt, path));
if (r <= 0) {
fprintf(stderr, "%s: Couldn't get phandle for %s: %s\n", __func__,
path, fdt_strerror(r));
exit(1);
}
return r;
}
int qemu_devtree_setprop_phandle(void *fdt, const char *node_path,
const char *property,
const char *target_node_path)
{
uint32_t phandle = qemu_devtree_get_phandle(fdt, target_node_path);
return qemu_devtree_setprop_cell(fdt, node_path, property, phandle);
}
uint32_t qemu_devtree_alloc_phandle(void *fdt)
{
static int phandle = 0x0;
/*
* We need to find out if the user gave us special instruction at
* which phandle id to start allocating phandles.
*/
if (!phandle) {
QemuOpts *machine_opts;
machine_opts = qemu_opts_find(qemu_find_opts("machine"), 0);
if (machine_opts) {
const char *phandle_start;
phandle_start = qemu_opt_get(machine_opts, "phandle_start");
if (phandle_start) {
phandle = strtoul(phandle_start, NULL, 0);
}
}
}
if (!phandle) {
/*
* None or invalid phandle given on the command line, so fall back to
* default starting point.
*/
phandle = 0x8000;
}
return phandle++;
}
int qemu_devtree_nop_node(void *fdt, const char *node_path)
{
int r;
@@ -280,7 +151,6 @@ int qemu_devtree_add_subnode(void *fdt, const char *name)
char *dupname = g_strdup(name);
char *basename = strrchr(dupname, '/');
int retval;
int parent = 0;
if (!basename) {
g_free(dupname);
@@ -290,11 +160,7 @@ int qemu_devtree_add_subnode(void *fdt, const char *name)
basename[0] = '\0';
basename++;
if (dupname[0]) {
parent = findnode_nofail(fdt, dupname);
}
retval = fdt_add_subnode(fdt, parent, basename);
retval = fdt_add_subnode(fdt, findnode_nofail(fdt, dupname), basename);
if (retval < 0) {
fprintf(stderr, "FDT: Failed to create subnode %s: %s\n", name,
fdt_strerror(retval));
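
The phandle_start lookup in qemu_devtree_alloc_phandle() above implies the starting id can be supplied on the command line, presumably as something like -machine <type>,phandle_start=0x4000 (only the option name comes from the code shown; the exact syntax is an assumption). If nothing valid is given, allocation falls back to 0x8000 and counts upward from there.
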


@@ -14,39 +14,15 @@
#ifndef __DEVICE_TREE_H__
#define __DEVICE_TREE_H__
void *create_device_tree(int *sizep);
void *load_device_tree(const char *filename_path, int *sizep);
int qemu_devtree_setprop(void *fdt, const char *node_path,
const char *property, const void *val_array, int size);
const char *property, void *val_array, int size);
int qemu_devtree_setprop_cell(void *fdt, const char *node_path,
const char *property, uint32_t val);
int qemu_devtree_setprop_u64(void *fdt, const char *node_path,
const char *property, uint64_t val);
int qemu_devtree_setprop_string(void *fdt, const char *node_path,
const char *property, const char *string);
int qemu_devtree_setprop_phandle(void *fdt, const char *node_path,
const char *property,
const char *target_node_path);
const void *qemu_devtree_getprop(void *fdt, const char *node_path,
const char *property, int *lenp);
uint32_t qemu_devtree_getprop_cell(void *fdt, const char *node_path,
const char *property);
uint32_t qemu_devtree_get_phandle(void *fdt, const char *path);
uint32_t qemu_devtree_alloc_phandle(void *fdt);
int qemu_devtree_nop_node(void *fdt, const char *node_path);
int qemu_devtree_add_subnode(void *fdt, const char *name);
#define qemu_devtree_setprop_cells(fdt, node_path, property, ...) \
do { \
uint32_t qdt_tmp[] = { __VA_ARGS__ }; \
int i; \
\
for (i = 0; i < ARRAY_SIZE(qdt_tmp); i++) { \
qdt_tmp[i] = cpu_to_be32(qdt_tmp[i]); \
} \
qemu_devtree_setprop(fdt, node_path, property, qdt_tmp, \
sizeof(qdt_tmp)); \
} while (0)
#endif /* __DEVICE_TREE_H__ */

21 disas.c

@@ -64,22 +64,6 @@ generic_print_address (bfd_vma addr, struct disassemble_info *info)
(*info->fprintf_func) (info->stream, "0x%" PRIx64, addr);
}
/* Print address in hex, truncated to the width of a target virtual address. */
static void
generic_print_target_address(bfd_vma addr, struct disassemble_info *info)
{
uint64_t mask = ~0ULL >> (64 - TARGET_VIRT_ADDR_SPACE_BITS);
generic_print_address(addr & mask, info);
}
/* Print address in hex, truncated to the width of a host virtual address. */
static void
generic_print_host_address(bfd_vma addr, struct disassemble_info *info)
{
uint64_t mask = ~0ULL >> (64 - (sizeof(void *) * 8));
generic_print_address(addr & mask, info);
}
/* Just return the given address. */
int
@@ -170,7 +154,6 @@ void target_disas(FILE *out, target_ulong code, target_ulong size, int flags)
disasm_info.read_memory_func = target_read_memory;
disasm_info.buffer_vma = code;
disasm_info.buffer_length = size;
disasm_info.print_address_func = generic_print_target_address;
#ifdef TARGET_WORDS_BIGENDIAN
disasm_info.endian = BFD_ENDIAN_BIG;
@@ -291,7 +274,6 @@ void disas(FILE *out, void *code, unsigned long size)
int (*print_insn)(bfd_vma pc, disassemble_info *info);
INIT_DISASSEMBLE_INFO(disasm_info, out, fprintf);
disasm_info.print_address_func = generic_print_host_address;
disasm_info.buffer = code;
disasm_info.buffer_vma = (uintptr_t)code;
@@ -316,7 +298,9 @@ void disas(FILE *out, void *code, unsigned long size)
print_insn = print_insn_alpha;
#elif defined(__sparc__)
print_insn = print_insn_sparc;
#if defined(__sparc_v8plus__) || defined(__sparc_v8plusa__) || defined(__sparc_v9__)
disasm_info.mach = bfd_mach_sparc_v9b;
#endif
#elif defined(__arm__)
print_insn = print_insn_arm;
#elif defined(__MIPSEB__)
@@ -402,7 +386,6 @@ void monitor_disas(Monitor *mon, CPUArchState *env,
monitor_disas_env = env;
monitor_disas_is_physical = is_physical;
disasm_info.read_memory_func = monitor_read_memory;
disasm_info.print_address_func = generic_print_target_address;
disasm_info.buffer_vma = pc;


@@ -9,45 +9,13 @@
#include "dma.h"
#include "trace.h"
#include "range.h"
#include "qemu-thread.h"
/* #define DEBUG_IOMMU */
static void do_dma_memory_set(dma_addr_t addr, uint8_t c, dma_addr_t len)
{
#define FILLBUF_SIZE 512
uint8_t fillbuf[FILLBUF_SIZE];
int l;
memset(fillbuf, c, FILLBUF_SIZE);
while (len > 0) {
l = len < FILLBUF_SIZE ? len : FILLBUF_SIZE;
cpu_physical_memory_rw(addr, fillbuf, l, true);
len -= l;
addr += l;
}
}
int dma_memory_set(DMAContext *dma, dma_addr_t addr, uint8_t c, dma_addr_t len)
{
dma_barrier(dma, DMA_DIRECTION_FROM_DEVICE);
if (dma_has_iommu(dma)) {
return iommu_dma_memory_set(dma, addr, c, len);
}
do_dma_memory_set(addr, c, len);
return 0;
}
void qemu_sglist_init(QEMUSGList *qsg, int alloc_hint, DMAContext *dma)
void qemu_sglist_init(QEMUSGList *qsg, int alloc_hint)
{
qsg->sg = g_malloc(alloc_hint * sizeof(ScatterGatherEntry));
qsg->nsg = 0;
qsg->nalloc = alloc_hint;
qsg->size = 0;
qsg->dma = dma;
}
void qemu_sglist_add(QEMUSGList *qsg, dma_addr_t base, dma_addr_t len)
@@ -65,7 +33,6 @@ void qemu_sglist_add(QEMUSGList *qsg, dma_addr_t base, dma_addr_t len)
void qemu_sglist_destroy(QEMUSGList *qsg)
{
g_free(qsg->sg);
memset(qsg, 0, sizeof(*qsg));
}
typedef struct {
@@ -107,9 +74,10 @@ static void dma_bdrv_unmap(DMAAIOCB *dbs)
int i;
for (i = 0; i < dbs->iov.niov; ++i) {
dma_memory_unmap(dbs->sg->dma, dbs->iov.iov[i].iov_base,
dbs->iov.iov[i].iov_len, dbs->dir,
dbs->iov.iov[i].iov_len);
cpu_physical_memory_unmap(dbs->iov.iov[i].iov_base,
dbs->iov.iov[i].iov_len,
dbs->dir != DMA_DIRECTION_TO_DEVICE,
dbs->iov.iov[i].iov_len);
}
qemu_iovec_reset(&dbs->iov);
}
@@ -138,7 +106,7 @@ static void dma_complete(DMAAIOCB *dbs, int ret)
static void dma_bdrv_cb(void *opaque, int ret)
{
DMAAIOCB *dbs = (DMAAIOCB *)opaque;
dma_addr_t cur_addr, cur_len;
target_phys_addr_t cur_addr, cur_len;
void *mem;
trace_dma_bdrv_cb(dbs, ret);
@@ -155,7 +123,8 @@ static void dma_bdrv_cb(void *opaque, int ret)
while (dbs->sg_cur_index < dbs->sg->nsg) {
cur_addr = dbs->sg->sg[dbs->sg_cur_index].base + dbs->sg_cur_byte;
cur_len = dbs->sg->sg[dbs->sg_cur_index].len - dbs->sg_cur_byte;
mem = dma_memory_map(dbs->sg->dma, cur_addr, &cur_len, dbs->dir);
mem = cpu_physical_memory_map(cur_addr, &cur_len,
dbs->dir != DMA_DIRECTION_TO_DEVICE);
if (!mem)
break;
qemu_iovec_add(&dbs->iov, mem, cur_len);
@@ -240,8 +209,7 @@ BlockDriverAIOCB *dma_bdrv_write(BlockDriverState *bs,
}
static uint64_t dma_buf_rw(uint8_t *ptr, int32_t len, QEMUSGList *sg,
DMADirection dir)
static uint64_t dma_buf_rw(uint8_t *ptr, int32_t len, QEMUSGList *sg, bool to_dev)
{
uint64_t resid;
int sg_cur_index;
@@ -252,7 +220,7 @@ static uint64_t dma_buf_rw(uint8_t *ptr, int32_t len, QEMUSGList *sg,
while (len > 0) {
ScatterGatherEntry entry = sg->sg[sg_cur_index++];
int32_t xfer = MIN(len, entry.len);
dma_memory_rw(sg->dma, entry.base, ptr, xfer, dir);
cpu_physical_memory_rw(entry.base, ptr, xfer, !to_dev);
ptr += xfer;
len -= xfer;
resid -= xfer;
@@ -263,12 +231,12 @@ static uint64_t dma_buf_rw(uint8_t *ptr, int32_t len, QEMUSGList *sg,
uint64_t dma_buf_read(uint8_t *ptr, int32_t len, QEMUSGList *sg)
{
return dma_buf_rw(ptr, len, sg, DMA_DIRECTION_FROM_DEVICE);
return dma_buf_rw(ptr, len, sg, 0);
}
uint64_t dma_buf_write(uint8_t *ptr, int32_t len, QEMUSGList *sg)
{
return dma_buf_rw(ptr, len, sg, DMA_DIRECTION_TO_DEVICE);
return dma_buf_rw(ptr, len, sg, 1);
}
void dma_acct_start(BlockDriverState *bs, BlockAcctCookie *cookie,
@@ -276,160 +244,3 @@ void dma_acct_start(BlockDriverState *bs, BlockAcctCookie *cookie,
{
bdrv_acct_start(bs, cookie, sg->size, type);
}
bool iommu_dma_memory_valid(DMAContext *dma, dma_addr_t addr, dma_addr_t len,
DMADirection dir)
{
target_phys_addr_t paddr, plen;
#ifdef DEBUG_IOMMU
fprintf(stderr, "dma_memory_check context=%p addr=0x" DMA_ADDR_FMT
" len=0x" DMA_ADDR_FMT " dir=%d\n", dma, addr, len, dir);
#endif
while (len) {
if (dma->translate(dma, addr, &paddr, &plen, dir) != 0) {
return false;
}
/* The translation might be valid for larger regions. */
if (plen > len) {
plen = len;
}
len -= plen;
addr += plen;
}
return true;
}
int iommu_dma_memory_rw(DMAContext *dma, dma_addr_t addr,
void *buf, dma_addr_t len, DMADirection dir)
{
target_phys_addr_t paddr, plen;
int err;
#ifdef DEBUG_IOMMU
fprintf(stderr, "dma_memory_rw context=%p addr=0x" DMA_ADDR_FMT " len=0x"
DMA_ADDR_FMT " dir=%d\n", dma, addr, len, dir);
#endif
while (len) {
err = dma->translate(dma, addr, &paddr, &plen, dir);
if (err) {
/*
* In case of failure on reads from the guest, we clean the
* destination buffer so that a device that doesn't test
* for errors will not expose qemu internal memory.
*/
memset(buf, 0, len);
return -1;
}
/* The translation might be valid for larger regions. */
if (plen > len) {
plen = len;
}
cpu_physical_memory_rw(paddr, buf, plen,
dir == DMA_DIRECTION_FROM_DEVICE);
len -= plen;
addr += plen;
buf += plen;
}
return 0;
}
int iommu_dma_memory_set(DMAContext *dma, dma_addr_t addr, uint8_t c,
dma_addr_t len)
{
target_phys_addr_t paddr, plen;
int err;
#ifdef DEBUG_IOMMU
fprintf(stderr, "dma_memory_set context=%p addr=0x" DMA_ADDR_FMT
" len=0x" DMA_ADDR_FMT "\n", dma, addr, len);
#endif
while (len) {
err = dma->translate(dma, addr, &paddr, &plen,
DMA_DIRECTION_FROM_DEVICE);
if (err) {
return err;
}
/* The translation might be valid for larger regions. */
if (plen > len) {
plen = len;
}
do_dma_memory_set(paddr, c, plen);
len -= plen;
addr += plen;
}
return 0;
}
void dma_context_init(DMAContext *dma, DMATranslateFunc translate,
DMAMapFunc map, DMAUnmapFunc unmap)
{
#ifdef DEBUG_IOMMU
fprintf(stderr, "dma_context_init(%p, %p, %p, %p)\n",
dma, translate, map, unmap);
#endif
dma->translate = translate;
dma->map = map;
dma->unmap = unmap;
}
void *iommu_dma_memory_map(DMAContext *dma, dma_addr_t addr, dma_addr_t *len,
DMADirection dir)
{
int err;
target_phys_addr_t paddr, plen;
void *buf;
if (dma->map) {
return dma->map(dma, addr, len, dir);
}
plen = *len;
err = dma->translate(dma, addr, &paddr, &plen, dir);
if (err) {
return NULL;
}
/*
* If this is true, the virtual region is contiguous,
* but the translated physical region isn't. We just
* clamp *len, much like cpu_physical_memory_map() does.
*/
if (plen < *len) {
*len = plen;
}
buf = cpu_physical_memory_map(paddr, &plen,
dir == DMA_DIRECTION_FROM_DEVICE);
*len = plen;
return buf;
}
void iommu_dma_memory_unmap(DMAContext *dma, void *buffer, dma_addr_t len,
DMADirection dir, dma_addr_t access_len)
{
if (dma->unmap) {
dma->unmap(dma, buffer, len, dir, access_len);
return;
}
cpu_physical_memory_unmap(buffer, len,
dir == DMA_DIRECTION_FROM_DEVICE,
access_len);
}

dma.h

@@ -13,9 +13,7 @@
#include <stdio.h>
#include "hw/hw.h"
#include "block.h"
#include "kvm.h"
typedef struct DMAContext DMAContext;
typedef struct ScatterGatherEntry ScatterGatherEntry;
typedef enum {
@@ -28,229 +26,19 @@ struct QEMUSGList {
int nsg;
int nalloc;
size_t size;
DMAContext *dma;
};
#if defined(TARGET_PHYS_ADDR_BITS)
typedef target_phys_addr_t dma_addr_t;
/*
* When an IOMMU is present, bus addresses become distinct from
* CPU/memory physical addresses and may be a different size. Because
* the IOVA size depends more on the bus than on the platform, we more
* or less have to treat these as 64-bit always to cover all (or at
* least most) cases.
*/
typedef uint64_t dma_addr_t;
#define DMA_ADDR_BITS 64
#define DMA_ADDR_FMT "%" PRIx64
typedef int DMATranslateFunc(DMAContext *dma,
dma_addr_t addr,
target_phys_addr_t *paddr,
target_phys_addr_t *len,
DMADirection dir);
typedef void* DMAMapFunc(DMAContext *dma,
dma_addr_t addr,
dma_addr_t *len,
DMADirection dir);
typedef void DMAUnmapFunc(DMAContext *dma,
void *buffer,
dma_addr_t len,
DMADirection dir,
dma_addr_t access_len);
struct DMAContext {
DMATranslateFunc *translate;
DMAMapFunc *map;
DMAUnmapFunc *unmap;
};
static inline void dma_barrier(DMAContext *dma, DMADirection dir)
{
/*
* This is called before DMA read and write operations
* unless the _relaxed form is used and is responsible
* for providing some sane ordering of accesses vs
* concurrently running VCPUs.
*
* Users of map(), unmap() or lower level st/ld_*
* operations are responsible for providing their own
* ordering via barriers.
*
* This primitive implementation does a simple smp_mb()
* before each operation which provides pretty much full
* ordering.
*
* A smarter implementation can be devised if needed to
* use lighter barriers based on the direction of the
* transfer, the DMA context, etc...
*/
if (kvm_enabled()) {
smp_mb();
}
}
static inline bool dma_has_iommu(DMAContext *dma)
{
return !!dma;
}
/* Checks that the given range of addresses is valid for DMA. This is
* useful for certain cases, but usually you should just use
* dma_memory_{read,write}() and check for errors */
bool iommu_dma_memory_valid(DMAContext *dma, dma_addr_t addr, dma_addr_t len,
DMADirection dir);
static inline bool dma_memory_valid(DMAContext *dma,
dma_addr_t addr, dma_addr_t len,
DMADirection dir)
{
if (!dma_has_iommu(dma)) {
return true;
} else {
return iommu_dma_memory_valid(dma, addr, len, dir);
}
}
int iommu_dma_memory_rw(DMAContext *dma, dma_addr_t addr,
void *buf, dma_addr_t len, DMADirection dir);
static inline int dma_memory_rw_relaxed(DMAContext *dma, dma_addr_t addr,
void *buf, dma_addr_t len,
DMADirection dir)
{
if (!dma_has_iommu(dma)) {
/* Fast-path for no IOMMU */
cpu_physical_memory_rw(addr, buf, len,
dir == DMA_DIRECTION_FROM_DEVICE);
return 0;
} else {
return iommu_dma_memory_rw(dma, addr, buf, len, dir);
}
}
static inline int dma_memory_read_relaxed(DMAContext *dma, dma_addr_t addr,
void *buf, dma_addr_t len)
{
return dma_memory_rw_relaxed(dma, addr, buf, len, DMA_DIRECTION_TO_DEVICE);
}
static inline int dma_memory_write_relaxed(DMAContext *dma, dma_addr_t addr,
const void *buf, dma_addr_t len)
{
return dma_memory_rw_relaxed(dma, addr, (void *)buf, len,
DMA_DIRECTION_FROM_DEVICE);
}
static inline int dma_memory_rw(DMAContext *dma, dma_addr_t addr,
void *buf, dma_addr_t len,
DMADirection dir)
{
dma_barrier(dma, dir);
return dma_memory_rw_relaxed(dma, addr, buf, len, dir);
}
static inline int dma_memory_read(DMAContext *dma, dma_addr_t addr,
void *buf, dma_addr_t len)
{
return dma_memory_rw(dma, addr, buf, len, DMA_DIRECTION_TO_DEVICE);
}
static inline int dma_memory_write(DMAContext *dma, dma_addr_t addr,
const void *buf, dma_addr_t len)
{
return dma_memory_rw(dma, addr, (void *)buf, len,
DMA_DIRECTION_FROM_DEVICE);
}
int iommu_dma_memory_set(DMAContext *dma, dma_addr_t addr, uint8_t c,
dma_addr_t len);
int dma_memory_set(DMAContext *dma, dma_addr_t addr, uint8_t c, dma_addr_t len);
void *iommu_dma_memory_map(DMAContext *dma,
dma_addr_t addr, dma_addr_t *len,
DMADirection dir);
static inline void *dma_memory_map(DMAContext *dma,
dma_addr_t addr, dma_addr_t *len,
DMADirection dir)
{
if (!dma_has_iommu(dma)) {
target_phys_addr_t xlen = *len;
void *p;
p = cpu_physical_memory_map(addr, &xlen,
dir == DMA_DIRECTION_FROM_DEVICE);
*len = xlen;
return p;
} else {
return iommu_dma_memory_map(dma, addr, len, dir);
}
}
void iommu_dma_memory_unmap(DMAContext *dma,
void *buffer, dma_addr_t len,
DMADirection dir, dma_addr_t access_len);
static inline void dma_memory_unmap(DMAContext *dma,
void *buffer, dma_addr_t len,
DMADirection dir, dma_addr_t access_len)
{
if (!dma_has_iommu(dma)) {
cpu_physical_memory_unmap(buffer, (target_phys_addr_t)len,
dir == DMA_DIRECTION_FROM_DEVICE,
access_len);
} else {
iommu_dma_memory_unmap(dma, buffer, len, dir, access_len);
}
}
#define DEFINE_LDST_DMA(_lname, _sname, _bits, _end) \
static inline uint##_bits##_t ld##_lname##_##_end##_dma(DMAContext *dma, \
dma_addr_t addr) \
{ \
uint##_bits##_t val; \
dma_memory_read(dma, addr, &val, (_bits) / 8); \
return _end##_bits##_to_cpu(val); \
} \
static inline void st##_sname##_##_end##_dma(DMAContext *dma, \
dma_addr_t addr, \
uint##_bits##_t val) \
{ \
val = cpu_to_##_end##_bits(val); \
dma_memory_write(dma, addr, &val, (_bits) / 8); \
}
static inline uint8_t ldub_dma(DMAContext *dma, dma_addr_t addr)
{
uint8_t val;
dma_memory_read(dma, addr, &val, 1);
return val;
}
static inline void stb_dma(DMAContext *dma, dma_addr_t addr, uint8_t val)
{
dma_memory_write(dma, addr, &val, 1);
}
DEFINE_LDST_DMA(uw, w, 16, le);
DEFINE_LDST_DMA(l, l, 32, le);
DEFINE_LDST_DMA(q, q, 64, le);
DEFINE_LDST_DMA(uw, w, 16, be);
DEFINE_LDST_DMA(l, l, 32, be);
DEFINE_LDST_DMA(q, q, 64, be);
#undef DEFINE_LDST_DMA
void dma_context_init(DMAContext *dma, DMATranslateFunc translate,
DMAMapFunc map, DMAUnmapFunc unmap);
#define DMA_ADDR_FMT TARGET_FMT_plx
struct ScatterGatherEntry {
dma_addr_t base;
dma_addr_t len;
};
void qemu_sglist_init(QEMUSGList *qsg, int alloc_hint, DMAContext *dma);
void qemu_sglist_init(QEMUSGList *qsg, int alloc_hint);
void qemu_sglist_add(QEMUSGList *qsg, dma_addr_t base, dma_addr_t len);
void qemu_sglist_destroy(QEMUSGList *qsg);
#endif
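
For orientation, a device model built on the API above would normally go
through dma_memory_read()/dma_memory_write() or the ld*/st*_dma helpers
rather than cpu_physical_memory_rw() directly, so that a DMAContext carrying
an IOMMU translate hook is honoured. A minimal sketch, assuming a made-up
descriptor layout (this is not code from the tree):

    #include "dma.h"

    /* Hypothetical 16-byte descriptor; the layout is invented for this sketch. */
    typedef struct {
        uint64_t buf_addr;
        uint32_t buf_len;
        uint32_t flags;
    } HwDesc;

    static void fetch_desc(DMAContext *dma, dma_addr_t desc_addr, HwDesc *desc)
    {
        /* The ld*_le_dma helpers come from DEFINE_LDST_DMA above; each call
         * goes through dma_memory_read(), i.e. the dma_barrier() plus the
         * IOMMU path whenever dma_has_iommu(dma) is true. */
        desc->buf_addr = ldq_le_dma(dma, desc_addr);
        desc->buf_len  = ldl_le_dma(dma, desc_addr + 8);
        desc->flags    = ldl_le_dma(dma, desc_addr + 12);
    }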


@@ -1,4 +1,4 @@
= Bootindex property =
= Bootindex propery =
Block and net devices have bootindex property. This property is used to
determine the order in which firmware will consider devices for booting
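
For example (illustrative command line; the drive and netdev backends are
assumed to be defined elsewhere), a guest could be told to try a virtio disk
first and a network device second with:

    qemu ${other_vm_args} \
       -device virtio-blk-pci,drive=disk0,bootindex=1 \
       -device virtio-net-pci,netdev=net0,bootindex=2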


@@ -220,8 +220,6 @@ Example:
#endif
mdroth@illuin:~/w/qemu2.git$
(The actual structure of the visit_type_* functions is a bit more complex
in order to propagate errors correctly and avoid leaking memory).
=== scripts/qapi-commands.py ===


@@ -1,78 +0,0 @@
When used with the "pseries" machine type, QEMU-system-ppc64 implements
a set of hypervisor calls using a subset of the server "PAPR" specification
(IBM internal at this point), which is also what IBM's proprietary hypervisor
adheres to.
The subset is selected based on the requirements of Linux as a guest.
In addition to those calls, we have added our own private hypervisor
calls which are mostly used as a private interface between the firmware
running in the guest and QEMU.
All those hypercalls start at hcall number 0xf000, which corresponds
to an implementation-specific range in PAPR.
- H_RTAS (0xf000)
RTAS is a set of runtime services generally provided by the firmware
inside the guest to the operating system. It predates the existence
of hypervisors (it was originally an extension to Open Firmware) and
is still used by PAPR to provide various services that aren't performance
sensitive.
We currently implement the RTAS services in QEMU itself. The actual RTAS
"firmware" blob in the guest is a small stub of a few instructions which
calls our private H_RTAS hypervisor call to pass the RTAS calls to QEMU.
Arguments:
r3 : H_RTAS (0xf000)
r4 : Guest physical address of RTAS parameter block
Returns:
H_SUCCESS : Successfully called the RTAS function (RTAS result
will have been stored in the parameter block)
H_PARAMETER : Unknown token
- H_LOGICAL_MEMOP (0xf001)
When the guest runs in "real mode" (in powerpc lingua this means
with MMU disabled, ie guest effective == guest physical), it only
has access to a subset of memory and no IOs.
PAPR provides a set of hypervisor calls to perform cachable or
non-cachable accesses to any guest physical addresses that the
guest can use in order to access IO devices while in real mode.
This is typically used by the firmware running in the guest.
However, doing a hypercall for each access is extremely inefficient
(even more so when running KVM) when accessing the frame buffer. In
that case, things like scrolling become unusably slow.
This hypercall allows the guest to request a "memory op" to be applied
to memory. The supported memory ops at this point are to copy a range
of memory (supports overlap of source and destination) and XOR which
is used by our SLOF firmware to invert the screen.
Arguments:
r3: H_LOGICAL_MEMOP (0xf001)
r4: Guest physical address of destination
r5: Guest physical address of source
r6: Individual element size
0 = 1 byte
1 = 2 bytes
2 = 4 bytes
3 = 8 bytes
r7: Number of elements
r8: Operation
0 = copy
1 = xor
Returns:
H_SUCCESS : Success
H_PARAMETER : Invalid argument
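
As a rough guest-side illustration of this calling convention (not taken from
the SLOF sources; hcall() is an assumed firmware wrapper that loads its
arguments into r3-r8 and executes the hypercall instruction):

    #define H_LOGICAL_MEMOP 0xf001

    /* Copy 'count' elements of (1 << elem_log2) bytes from guest physical
     * 'src' to 'dst' while in real mode; passing op 1 instead of 0 would
     * XOR the source into the destination (used to invert the screen). */
    static long logical_memop_copy(unsigned long dst, unsigned long src,
                                   unsigned long elem_log2, unsigned long count)
    {
        return hcall(H_LOGICAL_MEMOP, dst, src, elem_log2, count, 0 /* copy */);
    }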


@@ -75,23 +75,13 @@ in the description of a field.
Bitmask of incompatible features. An implementation must
fail to open an image if an unknown bit is set.
Bit 0: Dirty bit. If this bit is set then refcounts
may be inconsistent, make sure to scan L1/L2
tables to repair refcounts before accessing the
image.
Bits 1-63: Reserved (set to 0)
Bits 0-63: Reserved (set to 0)
80 - 87: compatible_features
Bitmask of compatible features. An implementation can
safely ignore any unknown bits that are set.
Bit 0: Lazy refcounts bit. If this bit is set then
lazy refcount updates can be used. This means
marking the image file dirty and postponing
refcount metadata updates.
Bits 1-63: Reserved (set to 0)
Bits 0-63: Reserved (set to 0)
88 - 95: autoclear_features
Bitmask of auto-clear features. An implementation may only
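
A minimal sketch of how an implementation might honour the bitmask rules
above when opening an image (illustrative only; the argument stands for the
incompatible_features field described above):

    #include <stdint.h>

    static int qcow2_check_incompat(uint64_t incompatible_features)
    {
        const uint64_t known = 1ULL << 0;        /* bit 0: dirty bit */

        if (incompatible_features & ~known) {
            return -1;       /* unknown incompatible bit set: refuse to open */
        }
        if (incompatible_features & (1ULL << 0)) {
            /* dirty: scan the L1/L2 tables and repair refcounts before use */
        }
        return 0;            /* unknown compatible bits can simply be ignored */
    }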


@@ -1,38 +0,0 @@
qemu usb storage emulation
--------------------------
QEMU has two emulations for usb storage devices.
Number one emulates the classic bulk-only transport protocol which is
used by 99% of the usb sticks on the market today and is called
"usb-storage". Usage (hooking up to xhci, other host controllers work
too):
qemu ${other_vm_args} \
-drive if=none,id=stick,file=/path/to/file.img \
-device nec-usb-xhci,id=xhci \
-device usb-storage,bus=xhci.0,drive=stick
Number two is the newer usb attached scsi transport. This one doesn't
automagically create a scsi disk, so you have to explicitly attach one
manually. Multiple logical units are supported. Here is an example
with three logical units:
qemu ${other_vm_args} \
-drive if=none,id=uas-disk1,file=/path/to/file1.img \
-drive if=none,id=uas-disk2,file=/path/to/file2.img \
-drive if=none,id=uas-cdrom,media=cdrom,file=/path/to/image.iso \
-device nec-usb-xhci,id=xhci \
-device usb-uas,id=uas,bus=xhci.0 \
-device scsi-hd,bus=uas.0,scsi-id=0,lun=0,drive=uas-disk1 \
-device scsi-hd,bus=uas.0,scsi-id=0,lun=1,drive=uas-disk2 \
-device scsi-cd,bus=uas.0,scsi-id=0,lun=5,drive=uas-cdrom
enjoy,
Gerd
--
Gerd Hoffmann <kraxel@redhat.com>


@@ -58,11 +58,11 @@ try ...
xhci controller support
-----------------------
There is also xhci host controller support available. It got a lot
There also is xhci host controller support available. It got alot
less testing than ehci and there are a bunch of known limitations, so
ehci may work better for you. On the other hand the xhci hardware
design is much more virtualization-friendly, thus xhci emulation uses
less resources (especially cpu). If you want to give xhci a try
less ressources (especially cpu). If you wanna give xhci a try
use this to add the host controller ...
qemu -device nec-usb-xhci,id=xhci


@@ -210,17 +210,19 @@ if you don't see these strings, then something went wrong.
=== Errors ===
QMP commands should use the error interface exported by the error.h header
file. Basically, errors are set by calling the error_set() function.
file. The basic function used to set an error is the error_set() one.
Let's say we don't accept the string "message" to contain the word "love". If
it does contain it, we want the "hello-world" command to return an error:
it does contain it, we want the "hello-world" command to the return the
InvalidParameter error.
Only one change is required, and it's in the C implementation:
void qmp_hello_world(bool has_message, const char *message, Error **errp)
{
if (has_message) {
if (strstr(message, "love")) {
error_set(errp, ERROR_CLASS_GENERIC_ERROR,
"the word 'love' is not allowed");
error_set(errp, QERR_INVALID_PARAMETER, "message");
return;
}
printf("%s\n", message);
@@ -229,40 +231,30 @@ void qmp_hello_world(bool has_message, const char *message, Error **errp)
}
}
The first argument to the error_set() function is the Error pointer to pointer,
which is passed to all QMP functions. The second argument is an ErrorClass
value, which should be ERROR_CLASS_GENERIC_ERROR most of the time (more
details about error classes are given below). The third argument is a human-readable
description of the error; this is a free-form printf-like string.
Let's test it. Build qemu, run it as defined in the "Testing" section, and
then issue the following command:
Let's test the example above. Build qemu, run it as defined in the "Testing"
section, and then issue the following command:
{ "execute": "hello-world", "arguments": { "message": "all you need is love" } }
{ "execute": "hello-world", "arguments": { "message": "we love qemu" } }
The QMP server's response should be:
{
"error": {
"class": "GenericError",
"desc": "the word 'love' is not allowed"
"class": "InvalidParameter",
"desc": "Invalid parameter 'message'",
"data": {
"name": "message"
}
}
}
As a general rule, all QMP errors should use ERROR_CLASS_GENERIC_ERROR. There
are two exceptions to this rule:
Which is the InvalidParameter error.
1. A non-generic ErrorClass value exists* for the failure you want to report
(eg. DeviceNotFound)
When you have to return an error but you're unsure what error to return or
which arguments an error takes, you should look at the qerror.h file. Note
that you might be required to add new errors if needed.
2. Management applications have to take special action on the failure you
want to report, hence you have to add a new ErrorClass value so that they
can check for it
If the failure you want to report doesn't fall in one of the two cases above,
just report ERROR_CLASS_GENERIC_ERROR.
* All existing ErrorClass values are defined in the qapi-schema.json file
FIXME: describe better the error API and how to add new errors.
=== Command Documentation ===
@@ -283,6 +275,7 @@ here goes "hello-world"'s new entry for the qapi-schema.json file:
# @message: #optional string to be printed
#
# Returns: Nothing on success.
# If @message contains "love", InvalidParameter
#
# Notes: if @message is not provided, the "Hello, world" string will
# be printed instead


@@ -1,128 +0,0 @@
XBZRLE (Xor Based Zero Run Length Encoding)
===========================================
Using XBZRLE (Xor Based Zero Run Length Encoding) allows for the reduction
of VM downtime and the total live-migration time of Virtual machines.
It is particularly useful for virtual machines running memory write intensive
workloads that are typical of large enterprise applications such as SAP ERP
Systems, and generally speaking for any application that uses a sparse memory
update pattern.
Instead of sending the changed guest memory page this solution will send a
compressed version of the updates, thus reducing the amount of data sent during
live migration.
In order to be able to calculate the update, the previous memory pages need to
be stored on the source. Those pages are stored in a dedicated cache
(hash table) and are accessed by their address.
The larger the cache size the better the chances are that the page has already
been stored in the cache.
A small cache size will result in a high cache miss rate.
Cache size can be changed before and during migration.
Format
=======
The compression format performs a XOR between the previous and current content
of the page, where zero represents an unchanged value.
The page data delta is represented by zero and non zero runs.
A zero run is represented by its length (in bytes).
A non zero run is represented by its length (in bytes) and the new data.
The run length is encoded using ULEB128 (http://en.wikipedia.org/wiki/LEB128)
There can be more than one valid encoding, the sender may send a longer encoding
for the benefit of reducing computation cost.
page = zrun nzrun
| zrun nzrun page
zrun = length
nzrun = length byte...
length = uleb128 encoded integer
On the sender side XBZRLE is used as a compact delta encoding of page updates,
retrieving the old page content from the cache (default size of 512 MB). The
receiving side uses the existing page's content and XBZRLE to decode the new
page's content.
This work was originally based on research results published at
VEE 2011: Evaluation of Delta Compression Techniques for Efficient Live
Migration of Large Virtual Machines by Benoit, Svard, Tordsson and Elmroth.
Additionally, the delta encoder XBRLE from that work was improved further,
resulting in the XBZRLE encoding used here.
XBZRLE has a sustained bandwidth of 2-2.5 GB/s for typical workloads making it
ideal for in-line, real-time encoding such as is needed for live-migration.
Example
old buffer:
1001 zeros
05 06 07 08 09 0a 0b 0c 0d 0e 0f 10 11 12 13 68 00 00 6b 00 6d
3074 zeros
new buffer:
1001 zeros
01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f 68 00 00 67 00 69
3074 zeros
encoded buffer:
encoded length 24
e9 07 0f 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f 03 01 67 01 01 69
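
To make the encoding concrete, here is a small, unoptimized sketch of an
encoder following the grammar above (illustrative only; this is not the code
used for migration):

    #include <stdint.h>
    #include <string.h>

    /* ULEB128: 7 bits per byte, high bit set on all but the last byte. */
    static int uleb128_put(uint8_t *out, uint32_t v)
    {
        int n = 0;
        do {
            uint8_t b = v & 0x7f;
            v >>= 7;
            out[n++] = b | (v ? 0x80 : 0);
        } while (v);
        return n;
    }

    /* Encode one page as alternating zrun/nzrun pairs per the grammar above.
     * 'old_buf' is the cached previous content, 'new_buf' the current page.
     * Returns the number of encoded bytes written to 'out'. */
    static int xbzrle_encode_sketch(const uint8_t *old_buf, const uint8_t *new_buf,
                                    int len, uint8_t *out)
    {
        int i = 0, o = 0;

        while (i < len) {
            int run = 0;

            /* zero run: bytes whose XOR against the cached page is zero */
            while (i + run < len && old_buf[i + run] == new_buf[i + run]) {
                run++;
            }
            i += run;
            if (i >= len) {
                break;          /* trailing zeros need not be encoded; the
                                 * receiver keeps the existing page content */
            }
            o += uleb128_put(out + o, run);

            /* non-zero run: length followed by the new data */
            run = 0;
            while (i + run < len && old_buf[i + run] != new_buf[i + run]) {
                run++;
            }
            o += uleb128_put(out + o, run);
            memcpy(out + o, new_buf + i, run);
            o += run;
            i += run;
        }
        return o;
    }

Running this scheme over the example buffers above reproduces the 24-byte
encoding shown (e9 07 is the ULEB128 form of the 1001-byte zero run).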
Usage
======================
1. Verify the destination QEMU version is able to decode the new format.
{qemu} info migrate_capabilities
{qemu} xbzrle: off , ...
2. Activate xbzrle on both source and destination:
{qemu} migrate_set_capability xbzrle on
3. Set the XBZRLE cache size - the cache size is in MBytes and should be a
power of 2. The cache default value is 64MBytes. (on source only)
{qemu} migrate_set_cache_size 256m
4. Start outgoing migration
{qemu} migrate -d tcp:destination.host:4444
{qemu} info migrate
capabilities: xbzrle: on
Migration status: active
transferred ram: A kbytes
remaining ram: B kbytes
total ram: C kbytes
total time: D milliseconds
duplicate: E pages
normal: F pages
normal bytes: G kbytes
cache size: H bytes
xbzrle transferred: I kbytes
xbzrle pages: J pages
xbzrle cache miss: K
xbzrle overflow: L
xbzrle cache-miss: the number of cache misses to date - high cache-miss rate
indicates that the cache size is set too low.
xbzrle overflow: the number of overflows in the decoding, where the delta
could not be compressed. This can happen if the changes in the pages are too
large or there are many short changes; for example, changing every second byte
(half a page).
Testing: Testing indicated that live migration with XBZRLE completed in 110
seconds, whereas without it the migration would not complete.
A simple synthetic memory r/w load generator:
.. #include <stdlib.h>
.. #include <stdio.h>
.. int main()
.. {
.. char *buf = (char *) calloc(4096, 4096);
.. while (1) {
.. int i;
.. for (i = 0; i < 4096 * 4; i++) {
.. buf[i * 4096 / 4]++;
.. }
.. printf(".");
.. }
.. }


@@ -1,64 +0,0 @@
/*
* QEMU dump
*
* Copyright Fujitsu, Corp. 2011, 2012
*
* Authors:
* Wen Congyang <wency@cn.fujitsu.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu-common.h"
#include "dump.h"
#include "qerror.h"
#include "qmp-commands.h"
/* we need this function in hmp.c */
void qmp_dump_guest_memory(bool paging, const char *file, bool has_begin,
int64_t begin, bool has_length, int64_t length,
Error **errp)
{
error_set(errp, QERR_UNSUPPORTED);
}
int cpu_write_elf64_note(write_core_dump_function f,
CPUArchState *env, int cpuid,
void *opaque)
{
return -1;
}
int cpu_write_elf32_note(write_core_dump_function f,
CPUArchState *env, int cpuid,
void *opaque)
{
return -1;
}
int cpu_write_elf64_qemunote(write_core_dump_function f,
CPUArchState *env,
void *opaque)
{
return -1;
}
int cpu_write_elf32_qemunote(write_core_dump_function f,
CPUArchState *env,
void *opaque)
{
return -1;
}
int cpu_get_dump_info(ArchDumpInfo *info)
{
return -1;
}
ssize_t cpu_get_note_size(int class, int machine, int nr_cpus)
{
return -1;
}

dump.c

@@ -1,873 +0,0 @@
/*
* QEMU dump
*
* Copyright Fujitsu, Corp. 2011, 2012
*
* Authors:
* Wen Congyang <wency@cn.fujitsu.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu-common.h"
#include "elf.h"
#include "cpu.h"
#include "cpu-all.h"
#include "targphys.h"
#include "monitor.h"
#include "kvm.h"
#include "dump.h"
#include "sysemu.h"
#include "memory_mapping.h"
#include "error.h"
#include "qmp-commands.h"
#include "gdbstub.h"
static uint16_t cpu_convert_to_target16(uint16_t val, int endian)
{
if (endian == ELFDATA2LSB) {
val = cpu_to_le16(val);
} else {
val = cpu_to_be16(val);
}
return val;
}
static uint32_t cpu_convert_to_target32(uint32_t val, int endian)
{
if (endian == ELFDATA2LSB) {
val = cpu_to_le32(val);
} else {
val = cpu_to_be32(val);
}
return val;
}
static uint64_t cpu_convert_to_target64(uint64_t val, int endian)
{
if (endian == ELFDATA2LSB) {
val = cpu_to_le64(val);
} else {
val = cpu_to_be64(val);
}
return val;
}
typedef struct DumpState {
ArchDumpInfo dump_info;
MemoryMappingList list;
uint16_t phdr_num;
uint32_t sh_info;
bool have_section;
bool resume;
size_t note_size;
target_phys_addr_t memory_offset;
int fd;
RAMBlock *block;
ram_addr_t start;
bool has_filter;
int64_t begin;
int64_t length;
Error **errp;
} DumpState;
static int dump_cleanup(DumpState *s)
{
int ret = 0;
memory_mapping_list_free(&s->list);
if (s->fd != -1) {
close(s->fd);
}
if (s->resume) {
vm_start();
}
return ret;
}
static void dump_error(DumpState *s, const char *reason)
{
dump_cleanup(s);
}
static int fd_write_vmcore(void *buf, size_t size, void *opaque)
{
DumpState *s = opaque;
int fd = s->fd;
size_t writen_size;
/* The fd may be passed from user, and it can be non-blocked */
while (size) {
writen_size = qemu_write_full(fd, buf, size);
if (writen_size != size && errno != EAGAIN) {
return -1;
}
buf += writen_size;
size -= writen_size;
}
return 0;
}
static int write_elf64_header(DumpState *s)
{
Elf64_Ehdr elf_header;
int ret;
int endian = s->dump_info.d_endian;
memset(&elf_header, 0, sizeof(Elf64_Ehdr));
memcpy(&elf_header, ELFMAG, SELFMAG);
elf_header.e_ident[EI_CLASS] = ELFCLASS64;
elf_header.e_ident[EI_DATA] = s->dump_info.d_endian;
elf_header.e_ident[EI_VERSION] = EV_CURRENT;
elf_header.e_type = cpu_convert_to_target16(ET_CORE, endian);
elf_header.e_machine = cpu_convert_to_target16(s->dump_info.d_machine,
endian);
elf_header.e_version = cpu_convert_to_target32(EV_CURRENT, endian);
elf_header.e_ehsize = cpu_convert_to_target16(sizeof(elf_header), endian);
elf_header.e_phoff = cpu_convert_to_target64(sizeof(Elf64_Ehdr), endian);
elf_header.e_phentsize = cpu_convert_to_target16(sizeof(Elf64_Phdr),
endian);
elf_header.e_phnum = cpu_convert_to_target16(s->phdr_num, endian);
if (s->have_section) {
uint64_t shoff = sizeof(Elf64_Ehdr) + sizeof(Elf64_Phdr) * s->sh_info;
elf_header.e_shoff = cpu_convert_to_target64(shoff, endian);
elf_header.e_shentsize = cpu_convert_to_target16(sizeof(Elf64_Shdr),
endian);
elf_header.e_shnum = cpu_convert_to_target16(1, endian);
}
ret = fd_write_vmcore(&elf_header, sizeof(elf_header), s);
if (ret < 0) {
dump_error(s, "dump: failed to write elf header.\n");
return -1;
}
return 0;
}
static int write_elf32_header(DumpState *s)
{
Elf32_Ehdr elf_header;
int ret;
int endian = s->dump_info.d_endian;
memset(&elf_header, 0, sizeof(Elf32_Ehdr));
memcpy(&elf_header, ELFMAG, SELFMAG);
elf_header.e_ident[EI_CLASS] = ELFCLASS32;
elf_header.e_ident[EI_DATA] = endian;
elf_header.e_ident[EI_VERSION] = EV_CURRENT;
elf_header.e_type = cpu_convert_to_target16(ET_CORE, endian);
elf_header.e_machine = cpu_convert_to_target16(s->dump_info.d_machine,
endian);
elf_header.e_version = cpu_convert_to_target32(EV_CURRENT, endian);
elf_header.e_ehsize = cpu_convert_to_target16(sizeof(elf_header), endian);
elf_header.e_phoff = cpu_convert_to_target32(sizeof(Elf32_Ehdr), endian);
elf_header.e_phentsize = cpu_convert_to_target16(sizeof(Elf32_Phdr),
endian);
elf_header.e_phnum = cpu_convert_to_target16(s->phdr_num, endian);
if (s->have_section) {
uint32_t shoff = sizeof(Elf32_Ehdr) + sizeof(Elf32_Phdr) * s->sh_info;
elf_header.e_shoff = cpu_convert_to_target32(shoff, endian);
elf_header.e_shentsize = cpu_convert_to_target16(sizeof(Elf32_Shdr),
endian);
elf_header.e_shnum = cpu_convert_to_target16(1, endian);
}
ret = fd_write_vmcore(&elf_header, sizeof(elf_header), s);
if (ret < 0) {
dump_error(s, "dump: failed to write elf header.\n");
return -1;
}
return 0;
}
static int write_elf64_load(DumpState *s, MemoryMapping *memory_mapping,
int phdr_index, target_phys_addr_t offset)
{
Elf64_Phdr phdr;
int ret;
int endian = s->dump_info.d_endian;
memset(&phdr, 0, sizeof(Elf64_Phdr));
phdr.p_type = cpu_convert_to_target32(PT_LOAD, endian);
phdr.p_offset = cpu_convert_to_target64(offset, endian);
phdr.p_paddr = cpu_convert_to_target64(memory_mapping->phys_addr, endian);
if (offset == -1) {
/* When the memory is not stored into vmcore, offset will be -1 */
phdr.p_filesz = 0;
} else {
phdr.p_filesz = cpu_convert_to_target64(memory_mapping->length, endian);
}
phdr.p_memsz = cpu_convert_to_target64(memory_mapping->length, endian);
phdr.p_vaddr = cpu_convert_to_target64(memory_mapping->virt_addr, endian);
ret = fd_write_vmcore(&phdr, sizeof(Elf64_Phdr), s);
if (ret < 0) {
dump_error(s, "dump: failed to write program header table.\n");
return -1;
}
return 0;
}
static int write_elf32_load(DumpState *s, MemoryMapping *memory_mapping,
int phdr_index, target_phys_addr_t offset)
{
Elf32_Phdr phdr;
int ret;
int endian = s->dump_info.d_endian;
memset(&phdr, 0, sizeof(Elf32_Phdr));
phdr.p_type = cpu_convert_to_target32(PT_LOAD, endian);
phdr.p_offset = cpu_convert_to_target32(offset, endian);
phdr.p_paddr = cpu_convert_to_target32(memory_mapping->phys_addr, endian);
if (offset == -1) {
/* When the memory is not stored into vmcore, offset will be -1 */
phdr.p_filesz = 0;
} else {
phdr.p_filesz = cpu_convert_to_target32(memory_mapping->length, endian);
}
phdr.p_memsz = cpu_convert_to_target32(memory_mapping->length, endian);
phdr.p_vaddr = cpu_convert_to_target32(memory_mapping->virt_addr, endian);
ret = fd_write_vmcore(&phdr, sizeof(Elf32_Phdr), s);
if (ret < 0) {
dump_error(s, "dump: failed to write program header table.\n");
return -1;
}
return 0;
}
static int write_elf64_note(DumpState *s)
{
Elf64_Phdr phdr;
int endian = s->dump_info.d_endian;
target_phys_addr_t begin = s->memory_offset - s->note_size;
int ret;
memset(&phdr, 0, sizeof(Elf64_Phdr));
phdr.p_type = cpu_convert_to_target32(PT_NOTE, endian);
phdr.p_offset = cpu_convert_to_target64(begin, endian);
phdr.p_paddr = 0;
phdr.p_filesz = cpu_convert_to_target64(s->note_size, endian);
phdr.p_memsz = cpu_convert_to_target64(s->note_size, endian);
phdr.p_vaddr = 0;
ret = fd_write_vmcore(&phdr, sizeof(Elf64_Phdr), s);
if (ret < 0) {
dump_error(s, "dump: failed to write program header table.\n");
return -1;
}
return 0;
}
static int write_elf64_notes(DumpState *s)
{
CPUArchState *env;
int ret;
int id;
for (env = first_cpu; env != NULL; env = env->next_cpu) {
id = cpu_index(env);
ret = cpu_write_elf64_note(fd_write_vmcore, env, id, s);
if (ret < 0) {
dump_error(s, "dump: failed to write elf notes.\n");
return -1;
}
}
for (env = first_cpu; env != NULL; env = env->next_cpu) {
ret = cpu_write_elf64_qemunote(fd_write_vmcore, env, s);
if (ret < 0) {
dump_error(s, "dump: failed to write CPU status.\n");
return -1;
}
}
return 0;
}
static int write_elf32_note(DumpState *s)
{
target_phys_addr_t begin = s->memory_offset - s->note_size;
Elf32_Phdr phdr;
int endian = s->dump_info.d_endian;
int ret;
memset(&phdr, 0, sizeof(Elf32_Phdr));
phdr.p_type = cpu_convert_to_target32(PT_NOTE, endian);
phdr.p_offset = cpu_convert_to_target32(begin, endian);
phdr.p_paddr = 0;
phdr.p_filesz = cpu_convert_to_target32(s->note_size, endian);
phdr.p_memsz = cpu_convert_to_target32(s->note_size, endian);
phdr.p_vaddr = 0;
ret = fd_write_vmcore(&phdr, sizeof(Elf32_Phdr), s);
if (ret < 0) {
dump_error(s, "dump: failed to write program header table.\n");
return -1;
}
return 0;
}
static int write_elf32_notes(DumpState *s)
{
CPUArchState *env;
int ret;
int id;
for (env = first_cpu; env != NULL; env = env->next_cpu) {
id = cpu_index(env);
ret = cpu_write_elf32_note(fd_write_vmcore, env, id, s);
if (ret < 0) {
dump_error(s, "dump: failed to write elf notes.\n");
return -1;
}
}
for (env = first_cpu; env != NULL; env = env->next_cpu) {
ret = cpu_write_elf32_qemunote(fd_write_vmcore, env, s);
if (ret < 0) {
dump_error(s, "dump: failed to write CPU status.\n");
return -1;
}
}
return 0;
}
static int write_elf_section(DumpState *s, int type)
{
Elf32_Shdr shdr32;
Elf64_Shdr shdr64;
int endian = s->dump_info.d_endian;
int shdr_size;
void *shdr;
int ret;
if (type == 0) {
shdr_size = sizeof(Elf32_Shdr);
memset(&shdr32, 0, shdr_size);
shdr32.sh_info = cpu_convert_to_target32(s->sh_info, endian);
shdr = &shdr32;
} else {
shdr_size = sizeof(Elf64_Shdr);
memset(&shdr64, 0, shdr_size);
shdr64.sh_info = cpu_convert_to_target32(s->sh_info, endian);
shdr = &shdr64;
}
ret = fd_write_vmcore(&shdr, shdr_size, s);
if (ret < 0) {
dump_error(s, "dump: failed to write section header table.\n");
return -1;
}
return 0;
}
static int write_data(DumpState *s, void *buf, int length)
{
int ret;
ret = fd_write_vmcore(buf, length, s);
if (ret < 0) {
dump_error(s, "dump: failed to save memory.\n");
return -1;
}
return 0;
}
/* write the memory to vmcore. 1 page per I/O. */
static int write_memory(DumpState *s, RAMBlock *block, ram_addr_t start,
int64_t size)
{
int64_t i;
int ret;
for (i = 0; i < size / TARGET_PAGE_SIZE; i++) {
ret = write_data(s, block->host + start + i * TARGET_PAGE_SIZE,
TARGET_PAGE_SIZE);
if (ret < 0) {
return ret;
}
}
if ((size % TARGET_PAGE_SIZE) != 0) {
ret = write_data(s, block->host + start + i * TARGET_PAGE_SIZE,
size % TARGET_PAGE_SIZE);
if (ret < 0) {
return ret;
}
}
return 0;
}
/* get the memory's offset in the vmcore */
static target_phys_addr_t get_offset(target_phys_addr_t phys_addr,
DumpState *s)
{
RAMBlock *block;
target_phys_addr_t offset = s->memory_offset;
int64_t size_in_block, start;
if (s->has_filter) {
if (phys_addr < s->begin || phys_addr >= s->begin + s->length) {
return -1;
}
}
QLIST_FOREACH(block, &ram_list.blocks, next) {
if (s->has_filter) {
if (block->offset >= s->begin + s->length ||
block->offset + block->length <= s->begin) {
/* This block is out of the range */
continue;
}
if (s->begin <= block->offset) {
start = block->offset;
} else {
start = s->begin;
}
size_in_block = block->length - (start - block->offset);
if (s->begin + s->length < block->offset + block->length) {
size_in_block -= block->offset + block->length -
(s->begin + s->length);
}
} else {
start = block->offset;
size_in_block = block->length;
}
if (phys_addr >= start && phys_addr < start + size_in_block) {
return phys_addr - start + offset;
}
offset += size_in_block;
}
return -1;
}
static int write_elf_loads(DumpState *s)
{
target_phys_addr_t offset;
MemoryMapping *memory_mapping;
uint32_t phdr_index = 1;
int ret;
uint32_t max_index;
if (s->have_section) {
max_index = s->sh_info;
} else {
max_index = s->phdr_num;
}
QTAILQ_FOREACH(memory_mapping, &s->list.head, next) {
offset = get_offset(memory_mapping->phys_addr, s);
if (s->dump_info.d_class == ELFCLASS64) {
ret = write_elf64_load(s, memory_mapping, phdr_index++, offset);
} else {
ret = write_elf32_load(s, memory_mapping, phdr_index++, offset);
}
if (ret < 0) {
return -1;
}
if (phdr_index >= max_index) {
break;
}
}
return 0;
}
/* write elf header, PT_NOTE and elf note to vmcore. */
static int dump_begin(DumpState *s)
{
int ret;
/*
* the vmcore's format is:
* --------------
* | elf header |
* --------------
* | PT_NOTE |
* --------------
* | PT_LOAD |
* --------------
* | ...... |
* --------------
* | PT_LOAD |
* --------------
* | sec_hdr |
* --------------
* | elf note |
* --------------
* | memory |
* --------------
*
* we only know where the memory is saved after we write elf note into
* vmcore.
*/
/* write elf header to vmcore */
if (s->dump_info.d_class == ELFCLASS64) {
ret = write_elf64_header(s);
} else {
ret = write_elf32_header(s);
}
if (ret < 0) {
return -1;
}
if (s->dump_info.d_class == ELFCLASS64) {
/* write PT_NOTE to vmcore */
if (write_elf64_note(s) < 0) {
return -1;
}
/* write all PT_LOAD to vmcore */
if (write_elf_loads(s) < 0) {
return -1;
}
/* write section to vmcore */
if (s->have_section) {
if (write_elf_section(s, 1) < 0) {
return -1;
}
}
/* write notes to vmcore */
if (write_elf64_notes(s) < 0) {
return -1;
}
} else {
/* write PT_NOTE to vmcore */
if (write_elf32_note(s) < 0) {
return -1;
}
/* write all PT_LOAD to vmcore */
if (write_elf_loads(s) < 0) {
return -1;
}
/* write section to vmcore */
if (s->have_section) {
if (write_elf_section(s, 0) < 0) {
return -1;
}
}
/* write notes to vmcore */
if (write_elf32_notes(s) < 0) {
return -1;
}
}
return 0;
}
/* write PT_LOAD to vmcore */
static int dump_completed(DumpState *s)
{
dump_cleanup(s);
return 0;
}
static int get_next_block(DumpState *s, RAMBlock *block)
{
while (1) {
block = QLIST_NEXT(block, next);
if (!block) {
/* no more block */
return 1;
}
s->start = 0;
s->block = block;
if (s->has_filter) {
if (block->offset >= s->begin + s->length ||
block->offset + block->length <= s->begin) {
/* This block is out of the range */
continue;
}
if (s->begin > block->offset) {
s->start = s->begin - block->offset;
}
}
return 0;
}
}
/* write all memory to vmcore */
static int dump_iterate(DumpState *s)
{
RAMBlock *block;
int64_t size;
int ret;
while (1) {
block = s->block;
size = block->length;
if (s->has_filter) {
size -= s->start;
if (s->begin + s->length < block->offset + block->length) {
size -= block->offset + block->length - (s->begin + s->length);
}
}
ret = write_memory(s, block, s->start, size);
if (ret == -1) {
return ret;
}
ret = get_next_block(s, block);
if (ret == 1) {
dump_completed(s);
return 0;
}
}
}
static int create_vmcore(DumpState *s)
{
int ret;
ret = dump_begin(s);
if (ret < 0) {
return -1;
}
ret = dump_iterate(s);
if (ret < 0) {
return -1;
}
return 0;
}
static ram_addr_t get_start_block(DumpState *s)
{
RAMBlock *block;
if (!s->has_filter) {
s->block = QLIST_FIRST(&ram_list.blocks);
return 0;
}
QLIST_FOREACH(block, &ram_list.blocks, next) {
if (block->offset >= s->begin + s->length ||
block->offset + block->length <= s->begin) {
/* This block is out of the range */
continue;
}
s->block = block;
if (s->begin > block->offset) {
s->start = s->begin - block->offset;
} else {
s->start = 0;
}
return s->start;
}
return -1;
}
static int dump_init(DumpState *s, int fd, bool paging, bool has_filter,
int64_t begin, int64_t length, Error **errp)
{
CPUArchState *env;
int nr_cpus;
int ret;
if (runstate_is_running()) {
vm_stop(RUN_STATE_SAVE_VM);
s->resume = true;
} else {
s->resume = false;
}
s->errp = errp;
s->fd = fd;
s->has_filter = has_filter;
s->begin = begin;
s->length = length;
s->start = get_start_block(s);
if (s->start == -1) {
error_set(errp, QERR_INVALID_PARAMETER, "begin");
goto cleanup;
}
/*
* get dump info: endian, class and architecture.
* If the target architecture is not supported, cpu_get_dump_info() will
* return -1.
*
* if we use kvm, we should synchronize the register before we get dump
* info.
*/
nr_cpus = 0;
for (env = first_cpu; env != NULL; env = env->next_cpu) {
cpu_synchronize_state(env);
nr_cpus++;
}
ret = cpu_get_dump_info(&s->dump_info);
if (ret < 0) {
error_set(errp, QERR_UNSUPPORTED);
goto cleanup;
}
s->note_size = cpu_get_note_size(s->dump_info.d_class,
s->dump_info.d_machine, nr_cpus);
if (ret < 0) {
error_set(errp, QERR_UNSUPPORTED);
goto cleanup;
}
/* get memory mapping */
memory_mapping_list_init(&s->list);
if (paging) {
qemu_get_guest_memory_mapping(&s->list);
} else {
qemu_get_guest_simple_memory_mapping(&s->list);
}
if (s->has_filter) {
memory_mapping_filter(&s->list, s->begin, s->length);
}
/*
* calculate phdr_num
*
* the type of ehdr->e_phnum is uint16_t, so we should avoid overflow
*/
s->phdr_num = 1; /* PT_NOTE */
if (s->list.num < UINT16_MAX - 2) {
s->phdr_num += s->list.num;
s->have_section = false;
} else {
s->have_section = true;
s->phdr_num = PN_XNUM;
s->sh_info = 1; /* PT_NOTE */
/* the type of shdr->sh_info is uint32_t, so we should avoid overflow */
if (s->list.num <= UINT32_MAX - 1) {
s->sh_info += s->list.num;
} else {
s->sh_info = UINT32_MAX;
}
}
if (s->dump_info.d_class == ELFCLASS64) {
if (s->have_section) {
s->memory_offset = sizeof(Elf64_Ehdr) +
sizeof(Elf64_Phdr) * s->sh_info +
sizeof(Elf64_Shdr) + s->note_size;
} else {
s->memory_offset = sizeof(Elf64_Ehdr) +
sizeof(Elf64_Phdr) * s->phdr_num + s->note_size;
}
} else {
if (s->have_section) {
s->memory_offset = sizeof(Elf32_Ehdr) +
sizeof(Elf32_Phdr) * s->sh_info +
sizeof(Elf32_Shdr) + s->note_size;
} else {
s->memory_offset = sizeof(Elf32_Ehdr) +
sizeof(Elf32_Phdr) * s->phdr_num + s->note_size;
}
}
return 0;
cleanup:
if (s->resume) {
vm_start();
}
return -1;
}
void qmp_dump_guest_memory(bool paging, const char *file, bool has_begin,
int64_t begin, bool has_length, int64_t length,
Error **errp)
{
const char *p;
int fd = -1;
DumpState *s;
int ret;
if (has_begin && !has_length) {
error_set(errp, QERR_MISSING_PARAMETER, "length");
return;
}
if (!has_begin && has_length) {
error_set(errp, QERR_MISSING_PARAMETER, "begin");
return;
}
#if !defined(WIN32)
if (strstart(file, "fd:", &p)) {
fd = monitor_get_fd(cur_mon, p);
if (fd == -1) {
error_set(errp, QERR_FD_NOT_FOUND, p);
return;
}
}
#endif
if (strstart(file, "file:", &p)) {
fd = qemu_open(p, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR);
if (fd < 0) {
error_set(errp, QERR_OPEN_FILE_FAILED, p);
return;
}
}
if (fd == -1) {
error_set(errp, QERR_INVALID_PARAMETER, "protocol");
return;
}
s = g_malloc(sizeof(DumpState));
ret = dump_init(s, fd, paging, has_begin, begin, length, errp);
if (ret < 0) {
g_free(s);
return;
}
if (create_vmcore(s) < 0 && !error_is_set(s->errp)) {
error_set(errp, QERR_IO_ERROR);
}
g_free(s);
}

dump.h

@@ -1,35 +0,0 @@
/*
* QEMU dump
*
* Copyright Fujitsu, Corp. 2011, 2012
*
* Authors:
* Wen Congyang <wency@cn.fujitsu.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#ifndef DUMP_H
#define DUMP_H
typedef struct ArchDumpInfo {
int d_machine; /* Architecture */
int d_endian; /* ELFDATA2LSB or ELFDATA2MSB */
int d_class; /* ELFCLASS32 or ELFCLASS64 */
} ArchDumpInfo;
typedef int (*write_core_dump_function)(void *buf, size_t size, void *opaque);
int cpu_write_elf64_note(write_core_dump_function f, CPUArchState *env,
int cpuid, void *opaque);
int cpu_write_elf32_note(write_core_dump_function f, CPUArchState *env,
int cpuid, void *opaque);
int cpu_write_elf64_qemunote(write_core_dump_function f, CPUArchState *env,
void *opaque);
int cpu_write_elf32_qemunote(write_core_dump_function f, CPUArchState *env,
void *opaque);
int cpu_get_dump_info(ArchDumpInfo *info);
ssize_t cpu_get_note_size(int class, int machine, int nr_cpus);
#endif

dyngen-exec.h

@@ -0,0 +1,70 @@
/*
* dyngen defines for micro operation code
*
* Copyright (c) 2003 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#if !defined(__DYNGEN_EXEC_H__)
#define __DYNGEN_EXEC_H__
#if defined(CONFIG_TCG_INTERPRETER)
/* The TCG interpreter does not need a special register AREG0,
* but it is possible to use one by defining AREG0.
* On i386, register edi seems to work. */
/* Run without special register AREG0 or use a value defined elsewhere. */
#elif defined(__i386__)
#define AREG0 "ebp"
#elif defined(__x86_64__)
#define AREG0 "r14"
#elif defined(_ARCH_PPC)
#define AREG0 "r27"
#elif defined(__arm__)
#define AREG0 "r6"
#elif defined(__hppa__)
#define AREG0 "r17"
#elif defined(__mips__)
#define AREG0 "s0"
#elif defined(__sparc__)
#ifdef CONFIG_SOLARIS
#define AREG0 "g2"
#else
#ifdef __sparc_v9__
#define AREG0 "g5"
#else
#define AREG0 "g6"
#endif
#endif
#elif defined(__s390__)
#define AREG0 "r10"
#elif defined(__alpha__)
/* Note $15 is the frame pointer, so anything in op-i386.c that would
require a frame pointer, like alloca, would probably lose. */
#define AREG0 "$15"
#elif defined(__mc68000)
#define AREG0 "%a5"
#elif defined(__ia64__)
#define AREG0 "r7"
#else
#error unsupported CPU
#endif
#if defined(AREG0)
register CPUArchState *env asm(AREG0);
#else
/* TODO: Try env = cpu_single_env. */
extern CPUArchState *env;
#endif
#endif /* !defined(__DYNGEN_EXEC_H__) */

elf.h

@@ -106,8 +106,6 @@ typedef int64_t Elf64_Sxword;
#define EM_H8S 48 /* Hitachi H8S */
#define EM_LATTICEMICO32 138 /* LatticeMico32 */
#define EM_OPENRISC 92 /* OpenCores OpenRISC */
#define EM_UNICORE32 110 /* UniCore32 */
/*
@@ -1039,11 +1037,6 @@ typedef struct elf64_sym {
#define EI_NIDENT 16
/* Special value for e_phnum. This indicates that the real number of
program headers is too large to fit into e_phnum. Instead the real
value is in the field sh_info of section 0. */
#define PN_XNUM 0xffff
typedef struct elf32_hdr{
unsigned char e_ident[EI_NIDENT];
Elf32_Half e_type;

error.c

@@ -14,16 +14,17 @@
#include "error.h"
#include "qjson.h"
#include "qdict.h"
#include "qapi-types.h"
#include "error_int.h"
#include "qerror.h"
struct Error
{
QDict *obj;
const char *fmt;
char *msg;
ErrorClass err_class;
};
void error_set(Error **errp, ErrorClass err_class, const char *fmt, ...)
void error_set(Error **errp, const char *fmt, ...)
{
Error *err;
va_list ap;
@@ -31,14 +32,13 @@ void error_set(Error **errp, ErrorClass err_class, const char *fmt, ...)
if (errp == NULL) {
return;
}
assert(*errp == NULL);
err = g_malloc0(sizeof(*err));
va_start(ap, fmt);
err->msg = g_strdup_vprintf(fmt, ap);
err->obj = qobject_to_qdict(qobject_from_jsonv(fmt, &ap));
va_end(ap);
err->err_class = err_class;
err->fmt = fmt;
*errp = err;
}
@@ -49,7 +49,9 @@ Error *error_copy(const Error *err)
err_new = g_malloc0(sizeof(*err));
err_new->msg = g_strdup(err->msg);
err_new->err_class = err->err_class;
err_new->fmt = err->fmt;
err_new->obj = err->obj;
QINCREF(err_new->obj);
return err_new;
}
@@ -59,29 +61,99 @@ bool error_is_set(Error **errp)
return (errp && *errp);
}
ErrorClass error_get_class(const Error *err)
{
return err->err_class;
}
const char *error_get_pretty(Error *err)
{
if (err->msg == NULL) {
QString *str;
str = qerror_format(err->fmt, err->obj);
err->msg = g_strdup(qstring_get_str(str));
QDECREF(str);
}
return err->msg;
}
const char *error_get_field(Error *err, const char *field)
{
if (strcmp(field, "class") == 0) {
return qdict_get_str(err->obj, field);
} else {
QDict *dict = qdict_get_qdict(err->obj, "data");
return qdict_get_str(dict, field);
}
}
QDict *error_get_data(Error *err)
{
QDict *data = qdict_get_qdict(err->obj, "data");
QINCREF(data);
return data;
}
void error_set_field(Error *err, const char *field, const char *value)
{
QDict *dict = qdict_get_qdict(err->obj, "data");
qdict_put(dict, field, qstring_from_str(value));
}
void error_free(Error *err)
{
if (err) {
QDECREF(err->obj);
g_free(err->msg);
g_free(err);
}
}
bool error_is_type(Error *err, const char *fmt)
{
const char *error_class;
char *ptr;
char *end;
if (!err) {
return false;
}
ptr = strstr(fmt, "'class': '");
assert(ptr != NULL);
ptr += strlen("'class': '");
end = strchr(ptr, '\'');
assert(end != NULL);
error_class = error_get_field(err, "class");
if (strlen(error_class) != end - ptr) {
return false;
}
return strncmp(ptr, error_class, end - ptr) == 0;
}
void error_propagate(Error **dst_err, Error *local_err)
{
if (dst_err && !*dst_err) {
if (dst_err) {
*dst_err = local_err;
} else if (local_err) {
error_free(local_err);
}
}
QObject *error_get_qobject(Error *err)
{
QINCREF(err->obj);
return QOBJECT(err->obj);
}
void error_set_qobject(Error **errp, QObject *obj)
{
Error *err;
if (errp == NULL) {
return;
}
err = g_malloc0(sizeof(*err));
err->obj = qobject_to_qdict(obj);
qobject_incref(obj);
*errp = err;
}

error.h

@@ -13,21 +13,20 @@
#define ERROR_H
#include "compiler.h"
#include "qapi-types.h"
#include <stdbool.h>
/**
* A class representing internal errors within QEMU. An error has a ErrorClass
* code and a human message.
* A class representing internal errors within QEMU. An error has a string
* typename and optionally a set of named string parameters.
*/
typedef struct Error Error;
/**
* Set an indirect pointer to an error given a ErrorClass value and a
* printf-style human message. This function is not meant to be used outside
* of QEMU.
* Set an indirect pointer to an error given a printf-style format parameter.
* Currently, qerror.h defines these error formats. This function is not
* meant to be used outside of QEMU.
*/
void error_set(Error **err, ErrorClass err_class, const char *fmt, ...) GCC_FMT_ATTR(3, 4);
void error_set(Error **err, const char *fmt, ...) GCC_FMT_ATTR(2, 3);
/**
* Returns true if an indirect pointer to an error is pointing to a valid
@@ -35,11 +34,6 @@ void error_set(Error **err, ErrorClass err_class, const char *fmt, ...) GCC_FMT_
*/
bool error_is_set(Error **err);
/*
* Get the error class of an error object.
*/
ErrorClass error_get_class(const Error *err);
/**
* Returns an exact copy of the error passed as an argument.
*/
@@ -50,10 +44,20 @@ Error *error_copy(const Error *err);
*/
const char *error_get_pretty(Error *err);
/**
* Get an individual named error field.
*/
const char *error_get_field(Error *err, const char *field);
/**
* Get an individual named error field.
*/
void error_set_field(Error *err, const char *field, const char *value);
/**
* Propagate an error to an indirect pointer to an error. This function will
* always transfer ownership of the error reference and handles the case where
* dst_err is NULL correctly. Errors after the first are discarded.
* dst_err is NULL correctly.
*/
void error_propagate(Error **dst_err, Error *local_err);
@@ -62,4 +66,10 @@ void error_propagate(Error **dst_err, Error *local_err);
*/
void error_free(Error *err);
/**
* Determine if an error is of a specific type (based on the qerror format).
* Non-QEMU users should get the `class' field to identify the error type.
*/
bool error_is_type(Error *err, const char *fmt);
#endif
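
A minimal sketch of the intended calling convention (illustrative only;
do_substep() is a hypothetical helper that also reports through an Error **,
and the QERR_ macro is one of the format strings from qerror.h):

    /* Report a parameter error to our caller, or pass along an error raised
     * by a helper, following the error_propagate() contract above. */
    static void frob(int arg, Error **errp)
    {
        Error *local_err = NULL;

        if (arg < 0) {
            error_set(errp, QERR_INVALID_PARAMETER, "arg");
            return;
        }

        do_substep(arg, &local_err);
        if (error_is_set(&local_err)) {
            error_propagate(errp, local_err);   /* ownership moves to errp */
            return;
        }
    }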

error_int.h

@@ -0,0 +1,29 @@
/*
* QEMU Error Objects
*
* Copyright IBM, Corp. 2011
*
* Authors:
* Anthony Liguori <aliguori@us.ibm.com>
*
* This work is licensed under the terms of the GNU LGPL, version 2. See
* the COPYING.LIB file in the top-level directory.
*/
#ifndef QEMU_ERROR_INT_H
#define QEMU_ERROR_INT_H
#include "qemu-common.h"
#include "qobject.h"
#include "qdict.h"
#include "error.h"
/**
* Internal QEMU functions for working with Error.
*
* These are used to convert QErrors to Errors
*/
QDict *error_get_data(Error *err);
QObject *error_get_qobject(Error *err);
void error_set_qobject(Error **errp, QObject *obj);
#endif


@@ -10,19 +10,11 @@
* See the COPYING file in the top-level directory.
*/
#include "qemu-common.h"
#include "event_notifier.h"
#include "qemu-char.h"
#ifdef CONFIG_EVENTFD
#include <sys/eventfd.h>
#endif
void event_notifier_init_fd(EventNotifier *e, int fd)
{
e->fd = fd;
}
int event_notifier_init(EventNotifier *e, int active)
{
#ifdef CONFIG_EVENTFD
@@ -46,22 +38,24 @@ int event_notifier_get_fd(EventNotifier *e)
return e->fd;
}
int event_notifier_set_handler(EventNotifier *e,
EventNotifierHandler *handler)
{
return qemu_set_fd_handler(e->fd, (IOHandler *)handler, NULL, e);
}
int event_notifier_set(EventNotifier *e)
{
uint64_t value = 1;
int r = write(e->fd, &value, sizeof(value));
return r == sizeof(value);
}
int event_notifier_test_and_clear(EventNotifier *e)
{
uint64_t value;
int r = read(e->fd, &value, sizeof(value));
return r == sizeof(value);
}
int event_notifier_test(EventNotifier *e)
{
uint64_t value;
int r = read(e->fd, &value, sizeof(value));
if (r == sizeof(value)) {
/* restore previous value. */
int s = write(e->fd, &value, sizeof(value));
/* never blocks because we use EFD_SEMAPHORE.
* If we didn't we'd get EAGAIN on overflow
* and we'd have to write code to ignore it. */
assert(s == sizeof(value));
}
return r == sizeof(value);
}


@@ -16,17 +16,13 @@
#include "qemu-common.h"
struct EventNotifier {
int fd;
int fd;
};
typedef void EventNotifierHandler(EventNotifier *);
void event_notifier_init_fd(EventNotifier *, int fd);
int event_notifier_init(EventNotifier *, int active);
void event_notifier_cleanup(EventNotifier *);
int event_notifier_get_fd(EventNotifier *);
int event_notifier_set(EventNotifier *);
int event_notifier_test_and_clear(EventNotifier *);
int event_notifier_set_handler(EventNotifier *, EventNotifierHandler *);
int event_notifier_test(EventNotifier *);
#endif
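
A minimal usage sketch of this API (illustrative only): one side signals the
notifier, the other consumes it from the handler registered on the notifier's
file descriptor.

    static EventNotifier notifier;

    static void on_event(EventNotifier *e)
    {
        if (event_notifier_test_and_clear(e)) {
            /* handle the wakeup */
        }
    }

    static void setup(void)
    {
        event_notifier_init(&notifier, 0);                /* start cleared */
        event_notifier_set_handler(&notifier, on_event);
    }

    static void producer(void)
    {
        event_notifier_set(&notifier);                    /* wake the consumer */
    }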

Some files were not shown because too many files have changed in this diff.