Compare commits

..

65 Commits

Author SHA1 Message Date
Michael Roth
dfa83a6bae Update version for 2.3.1 release 2015-08-10 16:09:34 -05:00
Paolo Bonzini
35a616edef qemu-char: handle EINTR for TCP character devices
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 9172f428af)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-08-10 15:31:16 -05:00
Stefan Hajnoczi
35c30d3efd rtl8139: check TCP Data Offset field (CVE-2015-5165)
The TCP Data Offset field contains the length of the header.  Make sure
it is valid and does not exceed the IP data length.

Reported-by: 朱东海(启路) <donghai.zdh@alibaba-inc.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 8357946b15)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-08-04 12:34:00 -05:00
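A minimal standalone sketch of the kind of check described in the commit above (the helper name and hard-coded minimum are illustrative, not the hw/net/rtl8139.c code): the Data Offset field counts 32-bit words, so the header length in bytes must be at least the minimum TCP header size and must not exceed the IP payload that actually follows.

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Hypothetical helper: validate the TCP header length derived from the
   * Data Offset field against the remaining IP data length. */
  static bool tcp_header_len_ok(uint8_t data_offset_words, size_t ip_data_len)
  {
      size_t tcp_hlen = (size_t)data_offset_words * 4;

      return tcp_hlen >= 20 && tcp_hlen <= ip_data_len;
  }
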
Stefan Hajnoczi
f4c861fd68 rtl8139: skip offload on short TCP header (CVE-2015-5165)
TCP Large Segment Offload accesses the TCP header in the packet.  If the
packet is too short we must not attempt to access header fields:

  tcp_header *p_tcp_hdr = (tcp_header*)(eth_payload_data + hlen);
  int tcp_hlen = TCP_HEADER_DATA_OFFSET(p_tcp_hdr);

Reported-by: 朱东海(启路) <donghai.zdh@alibaba-inc.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 4240be4563)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-08-04 12:33:54 -05:00
Stefan Hajnoczi
b7a197c39e rtl8139: check IP Total Length field (CVE-2015-5165)
The IP Total Length field includes the IP header and data.  Make sure it
is valid and does not exceed the Ethernet payload size.

Reported-by: 朱东海(启路) <donghai.zdh@alibaba-inc.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit c6296ea88d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-08-04 12:33:48 -05:00
Stefan Hajnoczi
85611098ff rtl8139: check IP Header Length field (CVE-2015-5165)
The IP Header Length field was only checked in the IP checksum case, but
is used in other cases too.

Reported-by: 朱东海(启路) <donghai.zdh@alibaba-inc.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 03247d43c5)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-08-04 12:33:42 -05:00
Stefan Hajnoczi
ce4f451bbb rtl8139: skip offload on short Ethernet/IP header (CVE-2015-5165)
Transmit offload features access the Ethernet and IP headers in the packet.  If
the packet is too short we must not attempt to access header fields:

  int proto = be16_to_cpu(*(uint16_t *)(saved_buffer + 12));
  ...
  eth_payload_data = saved_buffer + ETH_HLEN;
  ...
  ip = (ip_header*)eth_payload_data;
  if (IP_HEADER_VERSION(ip) != IP_HEADER_VERSION_4) {

Reported-by: 朱东海(启路) <donghai.zdh@alibaba-inc.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit e1c120a9c5)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-08-04 12:33:36 -05:00
Stefan Hajnoczi
6722c126f3 rtl8139: drop tautologous if (ip) {...} statement (CVE-2015-5165)
The previous patch stopped using the ip pointer as an indicator that the
IP header is present.  When we reach the if (ip) {...} statement we know
ip is always non-NULL.

Remove the if statement to reduce nesting.

Reported-by: 朱东海(启路) <donghai.zdh@alibaba-inc.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit d6812d60e7)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-08-04 12:33:30 -05:00
Stefan Hajnoczi
8dd45dcd83 rtl8139: avoid nested ifs in IP header parsing (CVE-2015-5165)
Transmit offload needs to parse packet headers.  If header fields have
unexpected values the offload processing is skipped.

The code currently uses nested ifs because there is relatively little
input validation.  The next patches will add missing input validation
and a goto label is more appropriate to avoid deep if statement nesting.

Reported-by: 朱东海(启路) <donghai.zdh@alibaba-inc.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 39b8e7dcaf)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-08-04 12:32:40 -05:00
Aurelien Jarno
e750591c8a tcg/mips: fix add2
The add2 code in the tcg_out_addsub2 function doesn't take into account
the case where rl == al == bl. In that case we can't compute the carry
after the addition. As it corresponds to a multiplication by 2, the
carry bit is bit 31.

While this is a corner case, it prevents x86-64 guests from booting on a
MIPS host.

Cc: qemu-stable@nongnu.org
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit c99d69694a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-08-04 12:30:37 -05:00
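A small standalone illustration of the carry argument in the commit above (plain C, not TCG code): when both low-word inputs are the same value, the addition is a doubling, so the carry out equals bit 31 of the original value and can be read before the destination clobbers the source.

  #include <assert.h>
  #include <stdint.h>

  /* Carry out of a 32-bit doubling is simply the old bit 31. */
  static unsigned carry_of_doubling(uint32_t a)
  {
      return a >> 31;
  }

  int main(void)
  {
      uint32_t a = 0x90000000u;
      uint64_t wide = (uint64_t)a + a;      /* reference 64-bit result */

      assert((unsigned)(wide >> 32) == carry_of_doubling(a));
      return 0;
  }
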
Aurelien Jarno
f9c0ae2723 tcg/mips: fix TLB loading for BE host with 32-bit guests
For 32-bit guests, we load a 32-bit address from the TLB, so there is no
need to compensate for the low or high part. This fixes 32-bit guests on
big-endian hosts.

Cc: qemu-stable@nongnu.org
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit e72c4fb81d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-08-04 12:30:20 -05:00
Stefano Stabellini
c8bd74d1d5 Fix release_drive on unplugged devices (pci_piix3_xen_ide_unplug)
pci_piix3_xen_ide_unplug should completely unhook the unplugged
IDEDevice from the corresponding BlockBackend, otherwise the next call
to release_drive will try to detach the drive again.

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
(cherry picked from commit 6cd387833d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-08-04 12:25:11 -05:00
Kevin Wolf
d1557697fd ide: Clear DRQ after handling all expected accesses
This is additional hardening against an end_transfer_func that fails to
clear the DRQ status bit. The bit must be unset as soon as the PIO
transfer has completed, so it's better to do this in a central place
instead of duplicating the code in all commands (and forgetting it in
some).

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
(cherry picked from commit cb72cba830)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 22:19:55 -05:00
Kevin Wolf
86d6fe4cb0 ide/atapi: Fix START STOP UNIT command completion
The command must be completed on all code paths. START STOP UNIT with
pwrcnd set should succeed without doing anything.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
(cherry picked from commit 03441c3a4a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 22:19:39 -05:00
Kevin Wolf
9634e45e0b ide: Check array bounds before writing to io_buffer (CVE-2015-5154)
If the end_transfer_func of a command is called because enough data has
been read or written for the current PIO transfer, and it fails to
correctly call the command completion functions, the DRQ bit in the
status register and s->end_transfer_func may remain set. This allows the
guest to access further bytes in s->io_buffer beyond s->data_end,
eventually overflowing the io_buffer.

One case where this currently happens is emulation of the ATAPI command
START STOP UNIT.

This patch fixes the problem by adding explicit array bounds checks
before accessing the buffer instead of relying on end_transfer_func to
function correctly.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
(cherry picked from commit d2ff858545)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 22:19:29 -05:00
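A generic sketch of the hardening described above (field names are illustrative, not the actual hw/ide state): reject a PIO access that would run past the end of the buffer instead of trusting end_transfer_func to have stopped the transfer in time.

  #include <stdbool.h>
  #include <stddef.h>

  /* Hypothetical bounds check: data_ptr is the current position,
   * data_end the end of valid data, len the size of this PIO access. */
  static bool pio_access_ok(size_t data_ptr, size_t data_end, size_t len)
  {
      return data_ptr <= data_end && len <= data_end - data_ptr;
  }
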
Jeff Cody
0dc545e977 block: qemu-iotests - add check for multiplication overflow in vpc
This checks that VPC is able to successfully fail (without segfault)
on an image file with a max_table_entries that exceeds 0x40000000.

This table entry is within the valid range for VPC (although too large
for this sample image).

Cc: qemu-stable@nongnu.org
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 77c102c26e)
Conflicts:
	tests/qemu-iotests/group

* removed context dependency on iotest not present in 2.3.0 group
  file

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 22:18:23 -05:00
Jeff Cody
358f0ee234 block: vpc - prevent overflow if max_table_entries >= 0x40000000
When we allocate the pagetable based on max_table_entries, we multiply
the max table entry value by 4 to accommodate a table of 32-bit integers.
However, max_table_entries is a uint32_t, and the VPC driver accepts
ranges for that entry over 0x40000000.  So during this allocation:

s->pagetable = qemu_try_blockalign(bs->file, s->max_table_entries * 4);

The size arg overflows, allocating significantly less memory than
expected.

Since qemu_try_blockalign()'s size argument is size_t, cast the
multiplication correctly to prevent overflow.

The value of "max_table_entries * 4" is used elsewhere in the code as
well, so store the correct value for use in all those cases.

We also check the Max Table Entries value to make sure that it is <
SIZE_MAX / 4, so we know the pagetable size will fit in size_t.

Cc: qemu-stable@nongnu.org
Reported-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit b15deac795)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 22:16:01 -05:00
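A worked example of the overflow described above (standalone C, not the block/vpc.c code): multiplying a uint32_t max_table_entries of 0x40000000 by 4 wraps to 0 in 32-bit arithmetic, while casting to size_t first, guarded by the SIZE_MAX / 4 check from the commit, keeps the full value.

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint32_t max_table_entries = 0x40000000u;

      uint32_t wrapped = max_table_entries * 4;        /* wraps to 0 */

      size_t checked = 0;
      if (max_table_entries < SIZE_MAX / 4) {          /* fits in size_t? */
          checked = (size_t)max_table_entries * 4;
      }

      printf("wrapped: %" PRIu32 "  checked: %zu\n", wrapped, checked);
      return 0;
  }
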
Paolo Bonzini
961c74a841 scsi: fix buffer overflow in scsi_req_parse_cdb (CVE-2015-5158)
This is a guest-triggerable buffer overflow present in QEMU 2.2.0
and newer.  scsi_cdb_length returns -1 as an error value, but the
caller does not check it.

Luckily, the massive overflow means that QEMU will just SIGSEGV,
making the impact much smaller.

Reported-by: Zhu Donghai (朱东海) <donghai.zdh@alibaba-inc.com>
Fixes: 1894df0281
Reviewed-by: Fam Zheng <famz@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit c170aad8b0)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 22:13:25 -05:00
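A hedged sketch of the general pattern behind the fix above (not the actual hw/scsi code): a length helper that can return -1 must be checked before the value is used as a copy size, otherwise the negative value converts to a huge unsigned size.

  #include <stdint.h>
  #include <string.h>

  /* Hypothetical parse helper: reject a negative or oversized length
   * instead of passing it straight to memcpy. */
  static int copy_cdb(uint8_t *dst, size_t dst_size,
                      const uint8_t *src, int cdb_len)
  {
      if (cdb_len < 0 || (size_t)cdb_len > dst_size) {
          return -1;
      }
      memcpy(dst, src, (size_t)cdb_len);
      return cdb_len;
  }
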
Alex Williamson
98fe91ed66 vfio/pci: Fix bootindex
bootindex was incorrectly changed to a device Property during the
platform code split, resulting in it no longer working.  Remove it.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-stable@nongnu.org # v2.3+
(cherry picked from commit 759b484c5d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 22:12:30 -05:00
Jason Wang
46addaa0b5 virtio-net: unbreak any layout
Commit 032a74a1c0
("virtio-net: byteswap virtio-net header") breaks any layout by
requiring out_sg[0].iov_len >= n->guest_hdr_len. Fix this by copying
the header to a temporary buffer if a swap is needed, and then using
this buffer as part of out_sg.

Fixes 032a74a1c0
("virtio-net: byteswap virtio-net header")
Cc: qemu-stable@nongnu.org
Cc: clg@fr.ibm.com
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>

(cherry picked from commit feb93f3617)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 22:10:36 -05:00
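A generic, standalone sketch of the approach the commit describes (not the hw/net/virtio-net.c code, which uses QEMU's iov helpers): gather the first hdr_len bytes from however many iovec elements they span into a contiguous temporary buffer, which can then be byteswapped and used in place of the original fragments.

  #include <string.h>
  #include <sys/uio.h>

  /* Copy up to hdr_len header bytes out of a scatter list into hdr.
   * Returns the number of bytes actually gathered; the caller must
   * check that the full header was available. */
  static size_t gather_header(const struct iovec *sg, unsigned sg_cnt,
                              void *hdr, size_t hdr_len)
  {
      size_t copied = 0;

      for (unsigned i = 0; i < sg_cnt && copied < hdr_len; i++) {
          size_t n = sg[i].iov_len;

          if (n > hdr_len - copied) {
              n = hdr_len - copied;
          }
          memcpy((char *)hdr + copied, sg[i].iov_base, n);
          copied += n;
      }
      return copied;
  }
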
Alex Williamson
5a4568717c vfio/pci: Fix RTL8168 NIC quirks
The RTL8168 quirk correctly describes using bit 31 as a signal to
mark a latch/completion, but the code mistakenly uses bit 28.  This
causes the Realtek driver to spin on this register for quite a while,
20k cycles with the Windows 7 v7.092 driver.  Then it gets frustrated and
tries to set the bit itself and spins for another 20k cycles.  For
some this still results in a working driver, for others not.  About
the only thing the code really does in its current form is prevent
the guest from sneaking writes into the real hardware MSI-X table.
The fix is obviously to use bit 31, as we document that we should.

The other problem doesn't seem to affect current drivers as nobody
seems to use these window registers for writes to the MSI-X table, but
we need to use the stored data when a write is triggered, not the
value of the current write, which only provides the offset.

Note that only the Windows drivers from Realtek seem to use these
registers, the Microsoft drivers provided with Windows 8.1 do not
access them, nor do Linux in-kernel drivers.

Link: https://bugs.launchpad.net/qemu/+bug/1384892
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-stable@nongnu.org # v2.1+
(cherry picked from commit 69970fcef9)
Conflicts:
	hw/vfio/pci.c

* removed dependency on 3b643495

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 22:07:43 -05:00
James Hogan
87740cecc3 mips/kvm: Sign extend registers written to KVM
In case we're running on a 64-bit host, be sure to sign extend the
general purpose registers and hi/lo/pc before writing them to KVM, so as
to take advantage of MIPS32/MIPS64 compatibility.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Leon Alrae <leon.alrae@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: kvm@vger.kernel.org
Cc: qemu-stable@nongnu.org
Message-Id: <1429871214-23514-3-git-send-email-james.hogan@imgtec.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 02dae26ac4)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 22:00:11 -05:00
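A small standalone illustration of the sign extension the commit relies on (not the target-mips/kvm.c code): a MIPS32 value stored into a 64-bit register slot must have bit 31 replicated upward so the kernel sees a canonical MIPS64 value.

  #include <assert.h>
  #include <stdint.h>

  /* Sign-extend a 32-bit value to 64 bits. */
  static uint64_t sign_extend32(uint32_t v)
  {
      return (uint64_t)(int64_t)(int32_t)v;
  }

  int main(void)
  {
      assert(sign_extend32(0x80000000u) == 0xffffffff80000000ull);
      assert(sign_extend32(0x7fffffffu) == 0x000000007fffffffull);
      return 0;
  }
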
James Hogan
8df2a9acd2 mips/kvm: Fix Big endian 32-bit register access
Fix access to 32-bit registers on big endian targets. The pointer passed
to the kernel must be for the actual 32-bit value, not a temporary
64-bit value, otherwise on big endian systems the kernel will only
interpret the upper half.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Leon Alrae <leon.alrae@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: kvm@vger.kernel.org
Cc: qemu-stable@nongnu.org
Message-Id: <1429871214-23514-2-git-send-email-james.hogan@imgtec.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit f8b3e48b2d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 22:00:07 -05:00
Fam Zheng
c5c71e87aa block: Initialize local_err in bdrv_append_temp_snapshot
Cc: qemu-stable@nongnu.org
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-id: 1436156684-16526-1-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit c2e0dbbfd7)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 21:56:40 -05:00
马文霜
2060efae47 Fix irq route entries exceeding KVM_MAX_IRQ_ROUTES
Last month we experienced several guest crashes (on 6-8 core guests); the QEMU
logs displayed the following message:

qemu-system-x86_64: /build/qemu-2.1.2/kvm-all.c:976:
kvm_irqchip_commit_routes: Assertion `ret == 0' failed.

After analysis and verification, we can confirm that the irq-balance
daemon (in the guest) leads to the assertion failure. Starting an 8-core guest
with two disks and executing the following script will reproduce the bug quickly:

irq_affinity.sh
========================================================================

vda_irq_num=25
vdb_irq_num=27
while [ 1 ]
do
    for irq in {1,2,4,8,10,20,40,80}
        do
            echo $irq > /proc/irq/$vda_irq_num/smp_affinity
            echo $irq > /proc/irq/$vdb_irq_num/smp_affinity
            dd if=/dev/vda of=/dev/zero bs=4K count=100 iflag=direct
            dd if=/dev/vdb of=/dev/zero bs=4K count=100 iflag=direct
        done
done
========================================================================

QEMU sets up static irq route entries in kvm_pc_setup_irq_routing(): the PIC and
IOAPIC share the first 15 GSI numbers, so together they take up 23 GSI numbers
but 38 irq route entries. When irq smp_affinity is changed in the guest, a dynamic
route entry may be set up; the current logic is: if allocating a GSI number succeeds,
a new route entry can be added. The number of available dynamic GSI numbers is
1021(KVM_MAX_IRQ_ROUTES-23), but the number of available irq route entries is only
986(KVM_MAX_IRQ_ROUTES-38), i.e. there are more GSI numbers than route entries.
irq-balance's behavior will eventually lead to the total number of irq route entries
exceeding KVM_MAX_IRQ_ROUTES, ioctl(KVM_SET_GSI_ROUTING) failing and
kvm_irqchip_commit_routes() triggering the assertion failure.

This patch fixes the bug.

Signed-off-by: Wenshuang Ma <kevinnma@tencent.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit bdf026317d)
Conflicts:
	kvm-all.c

* remove context dependency on bd2a8884
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 21:54:31 -05:00
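A minimal sketch of the distinction the commit draws (the names and the split into two counters are illustrative, not the kvm-all.c data structures): GSI numbers and route entries are separate resources, and because the shared PIC/IOAPIC pins consume more route entries than GSIs, a new dynamic route must fit within the route-entry budget as well as the GSI budget.

  #include <stdbool.h>

  /* Hypothetical guard: only add a dynamic route when both the GSI
   * allocator and the route-entry table have room left. */
  static bool can_add_dynamic_route(int used_gsis, int max_gsis,
                                    int used_routes, int max_routes)
  {
      return used_gsis < max_gsis && used_routes < max_routes;
  }
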
Michael Roth
8d64975c98 target-ppc: fix hugepage support when using memory-backend-file
Current PPC code relies on -mem-path being used in order for
hugepage support to be detected. With the introduction of
MemoryBackendFile we can now handle this via:
  -object memory-backend-file,mem-path=...,id=hugemem0 \
  -numa node,id=mem0,memdev=hugemem0

Management tools like libvirt treat the 2 approaches as
interchangeable in some cases, which can lead to user-visible
regressions even for previously supported guest configurations.

Fix these by also iterating through any configured memory
backends that may be backed by hugepages.

Since the old code assumed hugepages always backed the entirety
of guest memory, play it safe and pick the minimum across the
maximum page sizes for all backends, even ones that aren't backed
by hugepages.

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
(cherry picked from commit 2d103aae87)
Conflicts:
	target-ppc/kvm.c

*remove context dependency on header includes not in 2.3.0

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 21:50:55 -05:00
David Gibson
9b4420ad62 spapr_vty: lookup should only return valid VTY objects
If a guest passes the reg property of a valid VIO object that is not a VTY
to either H_GET_TERM_CHAR or H_PUT_TERM_CHAR, QEMU hits a dynamic cast
assertion and aborts.

PAPR+ says "Hypervisor checks the termno parameter for validity against the
Vterm IOA unit addresses assigned to the partition, else return H_Parameter."

This patch adds a type check to ensure vty_lookup() either returns a pointer
to a valid VTY object or NULL.  H_GET_TERM_CHAR and H_PUT_TERM_CHAR will
now return H_PARAMETER to the guest instead of crashing.

The patch has no effect on the reg == 0 hack used to implement the RTAS call
display-character.

Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
(cherry picked from commit 0f888bfadd)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 21:48:27 -05:00
Christian Borntraeger
99c3468d8f s390x/ipl: Fix boot if no bootindex was specified
commit fa92e218df ("s390x/ipl: avoid sign extension") introduced
a regression:

qemu-system-s390x -drive file=image.qcow,format=qcow2
does not boot; the bios states
"No virtio-blk device found!"

Adding bootindex=1 makes it boot.

The reason is that a uint32_t return value will not do the right
thing for the returned -1 (the default without a bootindex).
The bios itself will interpret a 64-bit -1 as autodetect (but it will
interpret a 32-bit -1 as ccw device address ff.ff.ffff).

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: qemu-stable@nongnu.org # v2.3.0
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
(cherry picked from commit 6efd2c2a12)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 21:47:15 -05:00
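A standalone illustration of the truncation described above (not the hw/s390x code): returning -1 through a uint32_t yields 0xffffffff, which the bios reads as a ccw device address, whereas only the full 64-bit -1 means autodetect.

  #include <stdint.h>
  #include <stdio.h>

  static uint32_t boot_index_narrow(void) { return (uint32_t)-1; }
  static uint64_t boot_index_wide(void)   { return (uint64_t)-1; }

  int main(void)
  {
      printf("narrow: %#llx\n", (unsigned long long)boot_index_narrow());
      printf("wide:   %#llx\n", (unsigned long long)boot_index_wide());
      /* narrow: 0xffffffff         -> treated as ccw address ff.ff.ffff
       * wide:   0xffffffffffffffff -> treated as autodetect */
      return 0;
  }
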
Peter Lieven
1c17e8c7d3 block/nfs: limit maximum readahead size to 1MB
A malicious caller could otherwise specify a very
large value via the URI and force libnfs to allocate
a large amount of memory for the readahead buffer.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Lieven <pl@kamp.de>
Message-id: 1435317241-25585-1-git-send-email-pl@kamp.de
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 29c838cdc9)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 21:46:36 -05:00
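A minimal sketch of one way to enforce the limit the commit describes (the constant and names are illustrative, not the block/nfs.c code): never let a URI-supplied readahead value exceed a fixed upper bound.

  #include <stdint.h>

  #define NFS_MAX_READAHEAD_EXAMPLE (1024 * 1024)   /* 1 MB cap */

  /* Clamp a caller-supplied readahead size to the maximum. */
  static uint64_t clamp_readahead(uint64_t requested)
  {
      return requested > NFS_MAX_READAHEAD_EXAMPLE ?
             NFS_MAX_READAHEAD_EXAMPLE : requested;
  }
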
John Snow
ffd060d51f iotests: add QMP event waiting queue
A filter is added to allow callers to request very specific
events to be pulled from the event queue, while leaving undesired
events still in the stream.

This allows us to poll for completion data for multiple asynchronous
events in any arbitrary order.

A new timeout context is added to the qmp pull_event method's
wait parameter to allow tests to fail if they do not complete
within some expected period of time.

Also fixed is a bug in qmp.pull_event where we try to retrieve an event
from an empty list if we attempt to retrieve an event with wait=False
but no events have occurred.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1429314609-29776-19-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 7898f74e78)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 21:46:08 -05:00
Fam Zheng
e4fb4bea37 iotests: Use event_wait in wait_ready
Only poll the specific type of event we are interested in, to avoid
stealing events that should be consumed by someone else.

Suggested-by: John Snow <jsnow@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit d7b2529792)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 21:46:08 -05:00
Fam Zheng
edc0a65326 qemu-iotests: Add test case for mirror with unmap
This checks that a discard on the mirror source that effectively zeroes
data is also reflected in the data of the target.

Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit c615091793)
Conflicts:
	tests/qemu-iotests/group

*remove context dependencies on newer block tests

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 21:46:07 -05:00
Fam Zheng
c62f6c8f67 qemu-iotests: Make block job methods common
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 866323f39d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 21:46:07 -05:00
Fam Zheng
3d8b7aed60 block: Fix dirty bitmap in bdrv_co_discard
Unsetting dirty bits globally on discard is not very correct. The discard may zero
out sectors (depending on can_write_zeroes_with_unmap), and we should replicate
this change to the destination side to make sure that the guest sees the same data.

Calling bdrv_reset_dirty also troubles the mirror job because the hbitmap iterator
doesn't expect unsetting of bits after the current position.

So let's do it the opposite way, which fixes both problems: set the dirty bits
if we are about to discard the data.

Reported-by: wangxiaolong@ucloud.cn
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 508249952c)
Conflicts:
	block/io.c

* applied manually to avoid dependency on 61007b316
* squashed in 6e82e4b bdrv_reset_dirty() is static in
  2.3.0 and becomes unused as of this patch
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 21:46:07 -05:00
Fam Zheng
27ed14c4d7 mirror: Do zero write on target if sectors not allocated
If the guest discards a source cluster, mirroring it with bdrv_aio_readv is overkill.
Some protocols do zero upon discard, where it's best to use
bdrv_aio_write_zeroes; otherwise, bdrv_aio_discard will be enough.

Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit dcfb3beb51)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 21:46:07 -05:00
Fam Zheng
6a45a1b8e4 qmp: Add optional bool "unmap" to drive-mirror
If specified as "true", it allows discarding sectors on the target where the source
is not allocated.

Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 0fc9f8ea28)

* added to maintain any interdependencies between patches in the
  set. not intended as a new feature for 2.3.1, though it's there
  for anyone interested

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 21:43:36 -05:00
Fam Zheng
6cacd2651a block: Add bdrv_get_block_status_above
Like bdrv_is_allocated_above, this function follows the backing chain until it sees
BDRV_BLOCK_ALLOCATED.  The base is not included.

Reimplement bdrv_is_allocated on top.

[Initialized bdrv_co_get_block_status_above() ret to 0 to silence
mingw64 compiler warning about the uninitialized variable.  assert(bs !=
base) prevents that case but I suppose the program could be compiled
with -DNDEBUG.
--Stefan]

Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit ba3f0e2545)
Conflicts:
	block/io.c

* applied manually to avoid dependency on 61007b316
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 20:37:30 -05:00
Cornelia Huck
e8248a5af1 virtio-ccw: complete handling of guest-initiated resets
For a guest-initiated reset, we need to not only reset the virtio device,
but also reset the VirtioCcwDevice into a clean state. This includes
resetting the indicators, or else a guest will not be able to e.g.
switch from classic interrupts to adapter interrupts.

Split off this routine into a new function virtio_ccw_reset_virtio()
to make the distinction between resetting the virtio-related devices
and the base subchannel device clear.

CC: qemu-stable@nongnu.org
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
(cherry picked from commit fa8b0ca5d1)
Conflicts:
	hw/s390x/virtio-ccw.c

*removed context dependency on 0b352fd

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 18:50:08 -05:00
Jason Wang
81cb0a5657 vhost: correctly pass error to caller in vhost_dev_enable_notifiers()
We override the error value r in fail_vq; this means the caller
can't detect the failure, which may cause the caller to disable the
notifiers twice if vhost fails to start. Fix this by using another
variable to keep track of the return value of set_host_notifier().

Fixes b0b3db7955 ("vhost-net: cleanup
host notifiers at last step")

Cc: qemu-stable@nongnu.org
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 16617e36b0)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 18:44:36 -05:00
Laszlo Ersek
6130c46232 hw/core: rebase sysbus_get_fw_dev_path() to g_strdup_printf()
This is done mainly for improving readability, and in preparation for the
next patch, but Markus pointed out another bonus for the string being
returned:

"No arbitrary length limit. Before the patch, it's 39 characters, and the
code breaks catastrophically when qdev_fw_name() is longer: the second
snprintf() is called with its first argument pointing beyond path[], and
its second argument underflowing to a huge size."

Cc: qemu-stable@nongnu.org
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 5ba03e2dd7)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 18:41:25 -05:00
Petr Matousek
49ef542e41 i8254: fix out-of-bounds memory access in pit_ioport_read()
Due to converting PIO to the new memory read/write API we no longer provide
separate I/O region lengths for read and write operations. As a result,
reading from the PIT Mode/Command register will end up accessing
pit->channels with an invalid index.

Fix this by ignoring reads from the Mode/Command register.

This is CVE-2015-3214.

Reported-by: Matt Tait <matttait@google.com>
Fixes: 0505bcdec8
Cc: qemu-stable@nongnu.org
Signed-off-by: Petr Matousek <pmatouse@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit d4862a87e3)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 18:40:21 -05:00
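A sketch of the guard described above (illustrative, not the hw/timer/i8254.c code): the PIT has three channels at offsets 0-2, and offset 3 is the write-only Mode/Command register, so a read there must not be used to index the channel array.

  #include <stdint.h>

  /* Hypothetical read handler for a 4-port PIT region. */
  static uint8_t pit_read_example(const uint8_t channels[3], uint32_t addr)
  {
      uint32_t index = addr & 3;

      if (index == 3) {
          return 0;     /* ignore reads from the Mode/Command port */
      }
      return channels[index];
  }
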
Gerd Hoffmann
c270245a53 spice-display: fix segfault in qemu_spice_create_update
Although it is pretty unusual, the stride for the guest image and the
mirror image maintained by spice-display can be different.  So use
separate variables for them.

https://bugzilla.redhat.com/show_bug.cgi?id=1163047

Cc: qemu-stable@nongnu.org
Reported-by: perrier vincent <clownix@clownix.net>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit c6e484707f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 18:34:12 -05:00
Alberto Garcia
9272707a1f sdl2: fix crash in handle_windowevent() when restoring the screen size
The Ctrl-Alt-u keyboard shortcut restores the screen to its original
size. In the SDL2 UI this is done by destroying the window and
creating a new one. The old window emits SDL_WINDOWEVENT_HIDDEN when
it's destroyed, but trying to call SDL_GetWindowFromID() from that
event's window ID returns a null pointer. handle_windowevent() assumes
that the pointer is never null so it results in a crash.

Cc: qemu-stable@nongnu.org
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 08d49df0db)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 18:33:50 -05:00
Fam Zheng
c759f1a078 vmdk: Use vmdk_find_index_in_cluster everywhere
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 90df601f06)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 18:33:01 -05:00
Fam Zheng
714b54401c vmdk: Fix index_in_cluster calculation in vmdk_co_get_block_status
It has a similar issue to b1649fae49. Since the calculation
is already repeated a few times, introduce a function so it can be
reused.

Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 61f0ed1d54)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 18:32:49 -05:00
Max Reitz
e7e08380c3 iotests: qcow2 COW with minimal L2 cache size
This adds a test case to test 103 for performing a COW operation in a
qcow2 image using an L2 cache with minimal size (which should be at
least two clusters so the COW can access both source and destination
simultaneously).

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit a4291eafc5)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 18:27:34 -05:00
Max Reitz
c631ee6520 qcow2: Set MIN_L2_CACHE_SIZE to 2
The L2 cache must cover at least two L2 tables, because during COW two
L2 tables are accessed simultaneously.

Reported-by: Alexander Graf <agraf@suse.de>
Cc: qemu-stable <qemu-stable@nongnu.org>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Tested-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 57e2166959)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 18:26:34 -05:00
Gerd Hoffmann
b153c8d3f3 kbd: add brazil kbd keys to x11 evdev map
This patch adds the two extra Brazilian keys to the evdev keymap for
X11.  It gets the two keys working with the vnc, gtk and sdl1
UIs.

The SDL2 library complains it doesn't know these keys, so the SDL2
library must be fixed before we can update ui/sdl2-keymap.h

Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit 33aa30cafc)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 18:25:10 -05:00
Gerd Hoffmann
f45048225a kbd: add brazil kbd keys to qemu
The Brazilian computer keyboard layout has two extra keys (compared to
the usual 105-key intl ps/2 keyboard).  This patch makes these two keys
known to qemu.

For historic reasons qemu has two ways to specify a key:  A QKeyCode
(name-based) or a number (ps/2 scancode based).  Therefore we have to
update multiple places to make new keys known to qemu:

  (1) The QKeyCode definition in qapi-schema.json
  (2) The QKeyCode <-> number mapping table in ui/input-keymap.c

This patch does just that.  With this patch applied you can send those
two keys to the guest using the send-key monitor command.

Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit b771f470f3)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 18:25:03 -05:00
Justin Ossevoort
ae0fa48f51 qga/commands-posix: Fix bug in guest-fstrim
The FITRIM ioctl updates the fstrim_range structure it receives. This
way the caller can determine how many bytes were trimmed. The
guest-fstrim logic reuses the same fstrim_range for each filesystem,
effectively limiting each filesystem to trim at most as much as the
previous was able to trim.

If a previous filesystem trimmed 0 bytes, then the next
filesystem would report an 'Invalid argument' error because a FITRIM
request with length 0 is not valid.

This change resets the fstrim_range structure for each filesystem.

Signed-off-by: Justin Ossevoort <justin@quarantainenet.nl>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
(cherry picked from commit 73a652a1b0)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 18:24:00 -05:00
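A standalone sketch of the pattern the fix restores (not the qga code itself): FITRIM updates range.len with the number of bytes trimmed, so the structure must be re-initialized for every filesystem instead of being carried over.

  #include <fcntl.h>
  #include <linux/fs.h>     /* FITRIM, struct fstrim_range */
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  static void trim_one(const char *mountpoint)
  {
      struct fstrim_range range;
      int fd = open(mountpoint, O_RDONLY);

      if (fd < 0) {
          return;
      }
      memset(&range, 0, sizeof(range));   /* reset for each filesystem */
      range.len = (uint64_t)-1;           /* trim the whole filesystem */
      if (ioctl(fd, FITRIM, &range) == 0) {
          printf("%s: trimmed %llu bytes\n", mountpoint,
                 (unsigned long long)range.len);
      }
      close(fd);
  }
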
Shannon Zhao
bb3a1da4d4 hw/acpi/aml-build: Fix memory leak
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
(cherry picked from commit afcf905cff)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 18:21:41 -05:00
Fam Zheng
b48a391cff qemu-iotests: Test unaligned sub-block zero write
Test zero write in byte range 512~1024 for 4k alignment.

Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1431522721-3266-4-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit ab53c44718)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 18:18:59 -05:00
Fam Zheng
cc883fe42d block: Fix NULL deference for unaligned write if qiov is NULL
For zero writes, callers pass in a NULL qiov (qemu-io "write -z" or
scsi-disk "write same").

Commit fc3959e466 fixed bdrv_co_write_zeroes, which is the common case
for this bug, but it still exists in bdrv_aio_write_zeroes. A simpler
fix is in bdrv_co_do_pwritev, which is the NULL dereference point
and covers both cases.

So don't access the qiov in bdrv_co_do_pwritev in this case; use three aligned
writes.

[Initialize ret to 0 in bdrv_co_do_zero_pwritev() to avoid uninitialized
variable warning with gcc 4.9.2.
--Stefan]

Signed-off-by: Fam Zheng <famz@redhat.com>
Message-id: 1431522721-3266-3-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 9eeb6dd1b2)
Conflicts:
	block/io.c

* moved hunks into corresponding location in block.c due to lack of
  61007b316 in v2.3.0
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 18:15:27 -05:00
Michael Roth
4072585ecf Revert "block: Fix unaligned zero write"
This reverts commit fc3959e466.

From upstream commit d01c07f:
  This reverts commit fc3959e466.

  The core write code already handles the case, so remove this
  duplication.

  Because commit 61007b316 moved the touched code from block.c to
  block/io.c, the change is manually reverted.

  Signed-off-by: Fam Zheng <famz@redhat.com>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>

v2.3.0 does not contain 61007b316 so we can revert the change
directly.

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-29 17:05:56 -05:00
Petr Matousek
959fad0ff1 fdc: force the fifo access to be in bounds of the allocated buffer
During processing of certain commands such as FD_CMD_READ_ID and
FD_CMD_DRIVE_SPECIFICATION_COMMAND the fifo memory access could
get out of bounds leading to memory corruption with values coming
from the guest.

Fix this by making sure that the index is always bounded by the
allocated memory.

This is CVE-2015-3456.

Signed-off-by: Petr Matousek <pmatouse@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit e907746266)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-28 18:26:06 -05:00
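A generic sketch of the CVE-2015-3456 hardening described above (names and size are illustrative): wrap the fifo position so it can never index past the allocated buffer, whatever state the command machinery is in.

  #include <stdint.h>

  #define FIFO_SIZE_EXAMPLE 512

  /* Hypothetical accessor: the modulo keeps the index in bounds even if
   * the caller's bookkeeping has gone wrong. */
  static uint8_t fifo_read(const uint8_t fifo[FIFO_SIZE_EXAMPLE], uint32_t pos)
  {
      return fifo[pos % FIFO_SIZE_EXAMPLE];
  }
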
Peter Maydell
a4bb522ee5 target-arm: Avoid buffer overrun on UNPREDICTABLE ldrd/strd
A LDRD or STRD where rd is not an even number is UNPREDICTABLE.
We were letting this fall through, which is OK unless rd is 15,
in which case we would attempt to do a load_reg or store_reg
to a nonexistent r16 for the second half of the double-word.
Catch the odd-numbered-rd cases and UNDEF them instead.

To do this we rearrange the structure of the code a little
so we can put the UNDEF catches at the top before we've
allocated TCG temporaries.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1431348973-21315-1-git-send-email-peter.maydell@linaro.org
(cherry picked from commit 3960c336ad)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-28 18:23:18 -05:00
Jason Wang
cf6c213981 virtio-net: fix the upper bound when trying to delete queues
Virtqueues are indexed from zero, so don't delete the virtqueue whose
index is n->max_queues * 2 + 1.

Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: qemu-stable <qemu-stable@nongnu.org>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

(cherry picked from commit 27a46dcf50)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-28 18:22:35 -05:00
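An off-by-one illustration for the fix above (standalone, not the hw/net code): with zero-based indexing, a virtio-net device exposing max_queues rx/tx pairs plus a control queue has valid virtqueue indexes 0 .. max_queues * 2, so max_queues * 2 + 1 must never be passed to a per-queue delete helper.

  #include <stdbool.h>

  /* Hypothetical bounds check for a virtqueue index. */
  static bool vq_index_valid(int index, int max_queues)
  {
      int num_vqs = max_queues * 2 + 1;   /* rx/tx pairs + control vq */

      return index >= 0 && index < num_vqs;
  }
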
Michal Kazior
cf3297868c usb: fix usb-net segfault
The dev->config pointer isn't set until the guest
system initializes USB devices (via
usb_desc_set_config). However, QEMU networking can
go through some motions prior to that, e.g.:

 #0  is_rndis (s=0x555557261970) at hw/usb/dev-network.c:653
 #1  0x000055555585f723 in usbnet_can_receive (nc=0x55555641e820) at hw/usb/dev-network.c:1315
 #2  0x000055555587635e in qemu_can_send_packet (sender=0x5555572660a0) at net/net.c:470
 #3  0x0000555555878e34 in net_hub_port_can_receive (nc=0x5555562d7800) at net/hub.c:101
 #4  0x000055555587635e in qemu_can_send_packet (sender=0x5555562d7980) at net/net.c:470
 #5  0x000055555587dbca in tap_can_send (opaque=0x5555562d7980) at net/tap.c:172

The command to reproduce most reliably was:

 qemu-system-i386 -usb -device usb-net,vlan=0 -net tap,vlan=0

This wasn't strictly a problem with tap. Other
networking endpoints (vde, user) could trigger
this problem as well.

Fixes: https://bugs.launchpad.net/qemu/+bug/1050823
Cc: qemu-stable@nongnu.org
Signed-off-by: Michal Kazior <michal.kazior@tieto.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 278412d0e7)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-28 18:20:04 -05:00
Kevin Wolf
ad9c167fd2 qcow2: Flush pending discards before allocating cluster
Before a freed cluster can be reused, pending discards for this cluster
must be processed.

The original assumption was that this was not a problem because discards
are only cached during discard/write zeroes operations, which are
synchronous so that no concurrent write requests can cause cluster
allocations.

However, the discard/write zeroes operation itself can allocate a new L2
table (and it has to in order to put zero flags there), so make sure we
can cope with the situation.

This fixes https://bugs.launchpad.net/bugs/1349972.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit ecbda7a225)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-28 18:19:33 -05:00
Fam Zheng
d8e231fce2 vmdk: Fix overflow if l1_size is 0x20000000
Richard Jones caught this bug with the afl fuzzer.

In fact, extent->l1_size = 0x20000000 is the only possible value that
overflows l1_size:

l1_size = extent->l1_size * sizeof(long) => 0x80000000;

g_try_malloc returns NULL because l1_size is interpreted as negative
during the type cast from 'int' to 'gsize', which yields an enormous
value. Hence, by coincidence, we get a "not too bad" behavior:

qemu-img: Could not open '/tmp/afl6.img': Could not open
'/tmp/afl6.img': Cannot allocate memory

Values larger than 0x20000000 will be refused by the validation in
vmdk_add_extent.

Values smaller than 0x20000000 will not overflow l1_size.

Cc: qemu-stable@nongnu.org
Reported-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 13c4941cdd)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-28 18:19:02 -05:00
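A worked example of the conversion described above (standalone, numbers taken from the commit message): 0x20000000 four-byte entries is 0x80000000 bytes, which is negative when held in a signed 32-bit variable, and converting that negative value to an unsigned size type yields an enormous allocation request that simply fails with ENOMEM.

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      /* INT32_MIN is the bit pattern 0x80000000 that the overflow produces. */
      int32_t l1_size_bytes = INT32_MIN;
      uint64_t as_gsize = (uint64_t)(int64_t)l1_size_bytes;

      printf("signed:   %" PRId32 "\n", l1_size_bytes);   /* -2147483648 */
      printf("unsigned: %#" PRIx64 "\n", as_gsize);       /* 0xffffffff80000000 */
      return 0;
  }
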
Fam Zheng
53cd79c117 vmdk: Fix next_cluster_sector for compressed write
This fixes the bug introduced by commit c6ac36e (vmdk: Optimize cluster
allocation).

Sometimes, write_len could be larger than the cluster size, because it
contains both data and a marker.  We must advance next_cluster_sector in
this case; otherwise the image gets corrupted.

Cc: qemu-stable@nongnu.org
Reported-by: Antoni Villalonga <qemu-list@friki.cat>
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 5e82a31eb9)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-28 18:18:27 -05:00
Bogdan Purcareata
3dd15f3e58 nbd/trivial: fix type cast for ioctl
This fixes ioctl behavior on powerpc e6500 platforms with a 64-bit kernel and 32-bit
userspace. The current type cast has no effect there and the value passed to the
kernel is still 0. This is probably a compiler-related issue, since I'm assuming
the same configuration works on a similar setup on x86.

Also ensure consistency with the previous type cast in the TRACE message.

Signed-off-by: Bogdan Purcareata <bogdan.purcareata@freescale.com>
Message-Id: <1428058914-32050-1-git-send-email-bogdan.purcareata@freescale.com>
Cc: qemu-stable@nongnu.org
[Fix parens as noticed by Michael. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

(cherry picked from commit d064d9f381)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-28 18:14:07 -05:00
Ján Tomko
4c59860506 Strip brackets from vnc host
Commit v2.2.0-1530-ge556032 ("vnc: switch to inet_listen_opts")
bypassed the use of inet_parse in inet_listen, making literal
IPv6 addresses enclosed in brackets fail:

qemu-kvm: -vnc [::1]:0: Failed to start VNC server on `(null)': address
resolution failed for [::1]:5900: Name or service not known

Strip the brackets to make it work again.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 274c3b52e1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-28 18:13:19 -05:00
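A minimal sketch of the bracket stripping described above (not the actual ui/vnc.c code): turn "[::1]" into "::1" before handing the host to the resolver, leaving non-bracketed hosts untouched.

  #include <stdio.h>
  #include <string.h>

  /* Strip a single pair of enclosing brackets in place. */
  static void strip_brackets(char *host)
  {
      size_t len = strlen(host);

      if (len >= 2 && host[0] == '[' && host[len - 1] == ']') {
          memmove(host, host + 1, len - 2);
          host[len - 2] = '\0';
      }
  }

  int main(void)
  {
      char host[] = "[::1]";

      strip_brackets(host);
      printf("%s\n", host);   /* ::1 */
      return 0;
  }
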
Peter Lieven
b575af0730 block/iscsi: do not forget to logout from target
We actually were always impolitely dropping the connection and
not cleanly logging out.

CC: qemu-stable@nongnu.org
Signed-off-by: Peter Lieven <pl@kamp.de>
Message-id: 1429193313-4263-2-git-send-email-pl@kamp.de
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 20474e9aa0)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-28 18:08:49 -05:00
Stefan Hajnoczi
d3b59789e8 bt-sdp: fix broken uuids power-of-2 calculation
The binary search in sdp_uuid_match() only works when the number of
elements to search is a power of two.

  lo = record->uuid;
  hi = record->uuids;
  while (hi >>= 1)
      if (lo[hi] <= val)
          lo += hi;

  return *lo == val;

I noticed that the record->uuids calculation in
sdp_service_record_build() was suspect:

  record->uuids = 1 << ffs(record->uuids - 1);

Unlike most ffs(val) - 1 users, the expression is ffs(val - 1)!

Actually ffs() is the wrong function to use for power-of-2.  Use
pow2ceil() to achieve the correct effect.  Now the record->uuid[] array
is sized correctly and the binary search in sdp_uuid_match() should
work.

I'm not sure how to run/test this code.

Cc: Andrzej Zaborowski <balrog@zabor.org>
Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1427124571-28598-2-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 588ef9d411)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-07-28 17:46:44 -05:00
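A small standalone comparison of the two expressions discussed above. pow2ceil_example() mimics what pow2ceil() is used for here, rounding a count up to the next power of two; the old expression 1 << ffs(n - 1) picks a power of two from the position of the lowest set bit of n - 1, which is often far too small for the binary search to cover the array.

  #include <stdio.h>
  #include <strings.h>    /* ffs() */

  /* Round n up to the next power of two (simple reference version). */
  static unsigned pow2ceil_example(unsigned n)
  {
      unsigned p = 1;

      while (p < n) {
          p <<= 1;
      }
      return p;
  }

  int main(void)
  {
      for (unsigned n = 2; n <= 12; n++) {
          printf("n=%2u  old: %2u  pow2ceil: %2u\n",
                 n, 1u << ffs((int)(n - 1)), pow2ceil_example(n));
      }
      return 0;
  }
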
1397 changed files with 25542 additions and 71735 deletions

.gitignore

@@ -17,8 +17,6 @@
/trace/generated-tcg-tracers.h
/trace/generated-ust-provider.h
/trace/generated-ust.c
/ui/shader/texture-blit-frag.h
/ui/shader/texture-blit-vert.h
/libcacard/trace/generated-tracers.c
*-timestamp
/*-softmmu

MAINTAINERS

@@ -172,8 +172,7 @@ F: hw/unicore32/
X86
M: Paolo Bonzini <pbonzini@redhat.com>
M: Richard Henderson <rth@twiddle.net>
M: Eduardo Habkost <ehabkost@redhat.com>
S: Maintained
S: Odd Fixes
F: target-i386/
F: hw/i386/
@@ -356,13 +355,6 @@ F: hw/misc/zynq_slcr.c
F: hw/*/cadence_*
F: hw/ssi/xilinx_spips.c
ARM ACPI Subsystem
M: Shannon Zhao <zhaoshenglong@huawei.com>
M: Shannon Zhao <shannon.zhao@linaro.org>
S: Maintained
F: hw/arm/virt-acpi-build.c
F: include/hw/arm/virt-acpi-build.h
CRIS Machines
-------------
Axis Dev88
@@ -493,8 +485,7 @@ F: hw/ppc/prep.c
F: hw/pci-host/prep.[hc]
F: hw/isa/pc87312.[hc]
sPAPR (pseries)
M: David Gibson <david@gibson.dropbear.id.au>
sPAPR
M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Supported
@@ -644,19 +635,7 @@ M: Michael S. Tsirkin <mst@redhat.com>
S: Supported
F: include/hw/pci/*
F: hw/pci/*
ACPI
M: Michael S. Tsirkin <mst@redhat.com>
M: Igor Mammedov <imammedo@redhat.com>
S: Supported
F: include/hw/acpi/*
F: hw/mem/*
F: hw/acpi/*
F: hw/i386/acpi-build.[hc]
F: hw/i386/*dsl
F: hw/arm/virt-acpi-build.c
F: include/hw/arm/virt-acpi-build.h
F: scripts/acpi*py
ppc4xx
M: Alexander Graf <agraf@suse.de>
@@ -714,7 +693,6 @@ virtio
M: Michael S. Tsirkin <mst@redhat.com>
S: Supported
F: hw/*/virtio*
F: net/vhost-user.c
virtio-9p
M: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
@@ -725,13 +703,10 @@ F: tests/virtio-9p-test.c
T: git git://github.com/kvaneesh/QEMU.git
virtio-blk
M: Kevin Wolf <kwolf@redhat.com>
M: Stefan Hajnoczi <stefanha@redhat.com>
L: qemu-block@nongnu.org
S: Supported
F: hw/block/virtio-blk.c
F: hw/block/dataplane/*
F: hw/virtio/dataplane/*
T: git git://github.com/stefanha/qemu.git block
virtio-ccw
M: Cornelia Huck <cornelia.huck@de.ibm.com>
@@ -740,12 +715,6 @@ S: Supported
F: hw/s390x/virtio-ccw.[hc]
T: git git://github.com/cohuck/qemu virtio-ccw-upstr
virtio-input
M: Gerd Hoffmann <kraxel@redhat.com>
S: Maintained
F: hw/input/virtio-input*.c
F: include/hw/virtio/virtio-input.h
virtio-serial
M: Amit Shah <amit.shah@redhat.com>
S: Supported
@@ -762,14 +731,12 @@ F: backends/rng*.c
nvme
M: Keith Busch <keith.busch@intel.com>
L: qemu-block@nongnu.org
S: Supported
F: hw/block/nvme*
F: tests/nvme-test.c
megasas
M: Hannes Reinecke <hare@suse.de>
L: qemu-block@nongnu.org
S: Supported
F: hw/scsi/megasas.c
F: hw/scsi/mfi.h
@@ -787,15 +754,10 @@ S: Maintained
F: hw/net/vmxnet*
F: hw/scsi/vmw_pvscsi*
Rocker
M: Scott Feldman <sfeldma@gmail.com>
M: Jiri Pirko <jiri@resnulli.us>
S: Maintained
F: hw/net/rocker/
Subsystems
----------
Audio
M: Vassili Karpov (malc) <av1474@comtv.ru>
M: Gerd Hoffmann <kraxel@redhat.com>
S: Maintained
F: audio/
@@ -804,27 +766,21 @@ F: tests/ac97-test.c
F: tests/es1370-test.c
F: tests/intel-hda-test.c
Block layer core
Block
M: Kevin Wolf <kwolf@redhat.com>
L: qemu-block@nongnu.org
S: Supported
F: block*
F: block/
F: hw/block/
F: include/block/
F: qemu-img*
F: qemu-io*
F: tests/qemu-iotests/
T: git git://repo.or.cz/qemu/kevin.git block
Block I/O path
M: Stefan Hajnoczi <stefanha@redhat.com>
L: qemu-block@nongnu.org
S: Supported
F: async.c
F: aio-*.c
F: block/io.c
F: block*
F: block/
F: hw/block/
F: migration/block*
F: qemu-img*
F: qemu-io*
F: tests/image-fuzzer/
F: tests/qemu-iotests/
T: git git://repo.or.cz/qemu/kevin.git block
T: git git://github.com/stefanha/qemu.git block
Block Jobs
@@ -839,14 +795,6 @@ F: block/stream.h
F: block/mirror.c
T: git git://github.com/codyprime/qemu-kvm-jtc.git block
Block QAPI, monitor, command line
M: Markus Armbruster <armbru@redhat.com>
S: Supported
F: blockdev.c
F: block/qapi.c
F: qapi/block*.json
T: git git://repo.or.cz/qemu/armbru.git block-next
Character Devices
M: Paolo Bonzini <pbonzini@redhat.com>
S: Maintained
@@ -957,33 +905,21 @@ F: nbd.*
F: qemu-nbd.c
T: git git://github.com/bonzini/qemu.git nbd-next
NUMA
M: Eduardo Habkost <ehabkost@redhat.com>
S: Maintained
F: numa.c
F: include/sysemu/numa.h
K: numa|NUMA
K: srat|SRAT
T: git git://github.com/ehabkost/qemu.git numa
QAPI
M: Markus Armbruster <armbru@redhat.com>
M: Luiz Capitulino <lcapitulino@redhat.com>
M: Michael Roth <mdroth@linux.vnet.ibm.com>
S: Supported
S: Maintained
F: qapi/
X: qapi/*.json
F: tests/qapi-schema/
F: scripts/qapi*
F: docs/qapi*
T: git git://repo.or.cz/qemu/armbru.git qapi-next
T: git git://repo.or.cz/qemu/qmp-unstable.git queue/qmp
QAPI Schema
M: Eric Blake <eblake@redhat.com>
M: Luiz Capitulino <lcapitulino@redhat.com>
M: Markus Armbruster <armbru@redhat.com>
S: Supported
F: qapi-schema.json
F: qapi/*.json
T: git git://repo.or.cz/qemu/armbru.git qapi-next
T: git git://repo.or.cz/qemu/qmp-unstable.git queue/qmp
QObject
M: Luiz Capitulino <lcapitulino@redhat.com>
@@ -1008,14 +944,13 @@ X: qom/cpu.c
F: tests/qom-test.c
QMP
M: Markus Armbruster <armbru@redhat.com>
S: Supported
M: Luiz Capitulino <lcapitulino@redhat.com>
S: Maintained
F: qmp.c
F: monitor.c
F: qmp-commands.hx
F: docs/qmp/
F: scripts/qmp/
T: git git://repo.or.cz/qemu/armbru.git qapi-next
F: QMP/
T: git git://repo.or.cz/qemu/qmp-unstable.git queue/qmp
SLIRP
M: Jan Kiszka <jan.kiszka@siemens.com>
@@ -1043,6 +978,8 @@ M: Amit Shah <amit.shah@redhat.com>
S: Maintained
F: include/migration/
F: migration/
F: savevm.c
F: arch_init.c
F: scripts/vmstate-static-checker.py
F: tests/vmstate-static-checker-data/
@@ -1052,13 +989,6 @@ S: Supported
F: qemu-seccomp.c
F: include/sysemu/seccomp.h
Cryptography
M: Daniel P. Berrange <berrange@redhat.com>
S: Maintained
F: crypto/
F: include/crypto/
F: tests/test-crypto-*
Usermode Emulation
------------------
Overall
@@ -1164,12 +1094,11 @@ Block drivers
-------------
VMDK
M: Fam Zheng <famz@redhat.com>
L: qemu-block@nongnu.org
S: Supported
F: block/vmdk.c
RBD
M: Josh Durgin <jdurgin@redhat.com>
M: Josh Durgin <josh.durgin@inktank.com>
M: Jeff Cody <jcody@redhat.com>
L: qemu-block@nongnu.org
S: Supported
@@ -1195,7 +1124,6 @@ T: git git://github.com/codyprime/qemu-kvm-jtc.git block
VDI
M: Stefan Weil <sw@weilnetz.de>
L: qemu-block@nongnu.org
S: Maintained
F: block/vdi.c
@@ -1203,7 +1131,6 @@ iSCSI
M: Ronnie Sahlberg <ronniesahlberg@gmail.com>
M: Paolo Bonzini <pbonzini@redhat.com>
M: Peter Lieven <pl@kamp.de>
L: qemu-block@nongnu.org
S: Supported
F: block/iscsi.c
@@ -1245,102 +1172,7 @@ S: Supported
F: block/gluster.c
T: git git://github.com/codyprime/qemu-kvm-jtc.git block
Null Block Driver
M: Fam Zheng <famz@redhat.com>
L: qemu-block@nongnu.org
S: Supported
F: block/null.c
Bootdevice
M: Gonglei <arei.gonglei@huawei.com>
S: Maintained
F: bootdevice.c
Quorum
M: Alberto Garcia <berto@igalia.com>
S: Supported
F: block/quorum.c
L: qemu-block@nongnu.org
blkverify
M: Stefan Hajnoczi <stefanha@redhat.com>
L: qemu-block@nongnu.org
S: Supported
F: block/blkverify.c
bochs
M: Stefan Hajnoczi <stefanha@redhat.com>
L: qemu-block@nongnu.org
S: Supported
F: block/bochs.c
cloop
M: Stefan Hajnoczi <stefanha@redhat.com>
L: qemu-block@nongnu.org
S: Supported
F: block/cloop.c
dmg
M: Stefan Hajnoczi <stefanha@redhat.com>
L: qemu-block@nongnu.org
S: Supported
F: block/dmg.c
parallels
M: Stefan Hajnoczi <stefanha@redhat.com>
L: qemu-block@nongnu.org
S: Supported
F: block/parallels.c
qed
M: Stefan Hajnoczi <stefanha@redhat.com>
L: qemu-block@nongnu.org
S: Supported
F: block/qed.c
raw
M: Kevin Wolf <kwolf@redhat.com>
L: qemu-block@nongnu.org
S: Supported
F: block/linux-aio.c
F: block/raw-aio.h
F: block/raw-posix.c
F: block/raw-win32.c
F: block/raw_bsd.c
F: block/win32-aio.c
qcow2
M: Kevin Wolf <kwolf@redhat.com>
L: qemu-block@nongnu.org
S: Supported
F: block/qcow2*
qcow
M: Kevin Wolf <kwolf@redhat.com>
L: qemu-block@nongnu.org
S: Supported
F: block/qcow.c
blkdebug
M: Kevin Wolf <kwolf@redhat.com>
L: qemu-block@nongnu.org
S: Supported
F: block/blkdebug.c
vpc
M: Kevin Wolf <kwolf@redhat.com>
L: qemu-block@nongnu.org
S: Supported
F: block/vpc.c
vvfat
M: Kevin Wolf <kwolf@redhat.com>
L: qemu-block@nongnu.org
S: Supported
F: block/vvfat.c
Image format fuzzer
M: Stefan Hajnoczi <stefanha@redhat.com>
L: qemu-block@nongnu.org
S: Supported
F: tests/image-fuzzer/

Makefile

@@ -3,11 +3,6 @@
# Always point to the root of the build tree (needs GNU make).
BUILD_DIR=$(CURDIR)
# Before including a proper config-host.mak, assume we are in the source tree
SRC_PATH=.
UNCHECKED_GOALS := %clean TAGS cscope ctags
# All following code might depend on configuration variables
ifneq ($(wildcard config-host.mak),)
# Put the all: rule here so that config-host.mak can contain dependencies.
@@ -43,7 +38,7 @@ config-host.mak: $(SRC_PATH)/configure
fi
else
config-host.mak:
ifneq ($(filter-out $(UNCHECKED_GOALS),$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
ifneq ($(filter-out %clean,$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
@echo "Please call configure before running make!"
@exit 1
endif
@@ -79,7 +74,7 @@ Makefile: ;
configure: ;
.PHONY: all clean cscope distclean dvi html info install install-doc \
pdf recurse-all speed test dist msi
pdf recurse-all speed test dist
$(call set-vpath, $(SRC_PATH))
@@ -135,7 +130,7 @@ endif
else \
mv $@.tmp $@; \
cp -p $@ $@.old; \
fi, " GEN $@");
fi, " GEN $@");
defconfig:
rm -f config-all-devices.mak $(SUBDIR_DEVICES_MAK)
@@ -248,17 +243,17 @@ qapi-py = $(SRC_PATH)/scripts/qapi.py $(SRC_PATH)/scripts/ordereddict.py
qga/qapi-generated/qga-qapi-types.c qga/qapi-generated/qga-qapi-types.h :\
$(SRC_PATH)/qga/qapi-schema.json $(SRC_PATH)/scripts/qapi-types.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py \
$(gen-out-type) -o qga/qapi-generated -p "qga-" $<, \
$(gen-out-type) -o qga/qapi-generated -p "qga-" -i $<, \
" GEN $@")
qga/qapi-generated/qga-qapi-visit.c qga/qapi-generated/qga-qapi-visit.h :\
$(SRC_PATH)/qga/qapi-schema.json $(SRC_PATH)/scripts/qapi-visit.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py \
$(gen-out-type) -o qga/qapi-generated -p "qga-" $<, \
$(gen-out-type) -o qga/qapi-generated -p "qga-" -i $<, \
" GEN $@")
qga/qapi-generated/qga-qmp-commands.h qga/qapi-generated/qga-qmp-marshal.c :\
$(SRC_PATH)/qga/qapi-schema.json $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py \
$(gen-out-type) -o qga/qapi-generated -p "qga-" $<, \
$(gen-out-type) -o qga/qapi-generated -p "qga-" -i $<, \
" GEN $@")
qapi-modules = $(SRC_PATH)/qapi-schema.json $(SRC_PATH)/qapi/common.json \
@@ -268,22 +263,22 @@ qapi-modules = $(SRC_PATH)/qapi-schema.json $(SRC_PATH)/qapi/common.json \
qapi-types.c qapi-types.h :\
$(qapi-modules) $(SRC_PATH)/scripts/qapi-types.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py \
$(gen-out-type) -o "." -b $<, \
$(gen-out-type) -o "." -b -i $<, \
" GEN $@")
qapi-visit.c qapi-visit.h :\
$(qapi-modules) $(SRC_PATH)/scripts/qapi-visit.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py \
$(gen-out-type) -o "." -b $<, \
$(gen-out-type) -o "." -b -i $<, \
" GEN $@")
qapi-event.c qapi-event.h :\
$(qapi-modules) $(SRC_PATH)/scripts/qapi-event.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-event.py \
$(gen-out-type) -o "." $<, \
$(gen-out-type) -o "." -b -i $<, \
" GEN $@")
qmp-commands.h qmp-marshal.c :\
$(qapi-modules) $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py \
$(gen-out-type) -o "." -m $<, \
$(gen-out-type) -o "." -m -i $<, \
" GEN $@")
QGALIB_GEN=$(addprefix qga/qapi-generated/, qga-qapi-types.h qga-qapi-visit.h qga-qmp-commands.h)
@@ -292,38 +287,15 @@ $(qga-obj-y) qemu-ga.o: $(QGALIB_GEN)
qemu-ga$(EXESUF): $(qga-obj-y) libqemuutil.a libqemustub.a
$(call LINK, $^)
ifdef QEMU_GA_MSI_ENABLED
QEMU_GA_MSI=qemu-ga-$(ARCH).msi
msi: ${QEMU_GA_MSI}
$(QEMU_GA_MSI): qemu-ga.exe
ifdef QEMU_GA_MSI_WITH_VSS
$(QEMU_GA_MSI): qga/vss-win32/qga-vss.dll
endif
$(QEMU_GA_MSI): config-host.mak
$(QEMU_GA_MSI): qga/installer/qemu-ga.wxs
$(call quiet-command,QEMU_GA_VERSION="$(QEMU_GA_VERSION)" QEMU_GA_MANUFACTURER="$(QEMU_GA_MANUFACTURER)" QEMU_GA_DISTRO="$(QEMU_GA_DISTRO)" \
wixl -o $@ $(QEMU_GA_MSI_ARCH) $(QEMU_GA_MSI_WITH_VSS) $(QEMU_GA_MSI_MINGW_DLL_PATH) $<, " WIXL $@")
else
msi:
@echo MSI build not configured or dependency resolution failed (reconfigure with --enable-guest-agent-msi option)
endif
clean:
# avoid old build problems by removing potentially incorrect old files
rm -f config.mak op-i386.h opc-i386.h gen-op-i386.h op-arm.h opc-arm.h gen-op-arm.h
rm -f qemu-options.def
rm -f *.msi
find . \( -name '*.l[oa]' -o -name '*.so' -o -name '*.dll' -o -name '*.mo' -o -name '*.[oda]' \) -type f -exec rm {} +
rm -f $(filter-out %.tlb,$(TOOLS)) $(HELPERS-y) qemu-ga TAGS cscope.* *.pod *~ */*~
rm -f fsdev/*.pod
rm -rf .libs */.libs
rm -f qemu-img-cmds.h
rm -f ui/shader/*-vert.h ui/shader/*-frag.h
@# May not be present in GENERATED_HEADERS
rm -f trace/generated-tracers-dtrace.dtrace*
rm -f trace/generated-tracers-dtrace.h*
@@ -369,7 +341,7 @@ bepo cz
ifdef INSTALL_BLOBS
BLOBS=bios.bin bios-256k.bin sgabios.bin vgabios.bin vgabios-cirrus.bin \
vgabios-stdvga.bin vgabios-vmware.bin vgabios-qxl.bin vgabios-virtio.bin \
vgabios-stdvga.bin vgabios-vmware.bin vgabios-qxl.bin \
acpi-dsdt.aml q35-acpi-dsdt.aml \
ppc_rom.bin openbios-sparc32 openbios-sparc64 openbios-ppc QEMU,tcx.bin QEMU,cgthree.bin \
pxe-e1000.rom pxe-eepro100.rom pxe-ne2k_pci.rom \
@@ -416,8 +388,13 @@ ifneq (,$(findstring qemu-ga,$(TOOLS)))
endif
endif
install-confdir:
$(INSTALL_DIR) "$(DESTDIR)$(qemu_confdir)"
install: all $(if $(BUILD_DOCS),install-doc) \
install-sysconfig: install-datadir install-confdir
$(INSTALL_DATA) $(SRC_PATH)/sysconfigs/target/target-x86_64.conf "$(DESTDIR)$(qemu_confdir)"
install: all $(if $(BUILD_DOCS),install-doc) install-sysconfig \
install-datadir install-localstatedir
ifneq ($(TOOLS),)
$(call install-prog,$(TOOLS),$(DESTDIR)$(bindir))
@@ -454,36 +431,15 @@ endif
test speed: all
$(MAKE) -C tests/tcg $@
.PHONY: ctags
ctags:
rm -f $@
find "$(SRC_PATH)" -name '*.[hc]' -exec ctags --append {} +
.PHONY: TAGS
TAGS:
rm -f $@
find "$(SRC_PATH)" -name '*.[hc]' -exec etags --append {} +
cscope:
rm -f "$(SRC_PATH)"/cscope.*
find "$(SRC_PATH)/" -name "*.[chsS]" -print | sed 's,^\./,,' > "$(SRC_PATH)/cscope.files"
cscope -b -i"$(SRC_PATH)/cscope.files"
# opengl shader programs
ui/shader/%-vert.h: $(SRC_PATH)/ui/shader/%.vert $(SRC_PATH)/scripts/shaderinclude.pl
@mkdir -p $(dir $@)
$(call quiet-command,\
perl $(SRC_PATH)/scripts/shaderinclude.pl $< > $@,\
" VERT $@")
ui/shader/%-frag.h: $(SRC_PATH)/ui/shader/%.frag $(SRC_PATH)/scripts/shaderinclude.pl
@mkdir -p $(dir $@)
$(call quiet-command,\
perl $(SRC_PATH)/scripts/shaderinclude.pl $< > $@,\
" FRAG $@")
ui/console-gl.o: $(SRC_PATH)/ui/console-gl.c \
ui/shader/texture-blit-vert.h ui/shader/texture-blit-frag.h
rm -f ./cscope.*
find "$(SRC_PATH)" -name "*.[chsS]" -print | sed 's,^\./,,' > ./cscope.files
cscope -b
# documentation
MAKEINFO=makeinfo
@@ -610,7 +566,7 @@ endif # CONFIG_WIN
# Add a dependency on the generated files, so that they are always
# rebuilt before other object files
ifneq ($(filter-out $(UNCHECKED_GOALS),$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
ifneq ($(filter-out %clean,$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
Makefile: $(GENERATED_HEADERS)
endif


@@ -2,7 +2,6 @@
# Common libraries for tools and emulators
stub-obj-y = stubs/
util-obj-y = util/ qobject/ qapi/ qapi-types.o qapi-visit.o qapi-event.o
util-obj-y += crypto/
#######################################################################
# block-obj-y is code used by both qemu system emulation and qemu-img
@@ -77,8 +76,6 @@ common-obj-$(CONFIG_SECCOMP) += qemu-seccomp.o
common-obj-$(CONFIG_SMARTCARD_NSS) += $(libcacard-y)
common-obj-$(CONFIG_FDT) += device_tree.o
######################################################################
# qapi


@@ -1,7 +1,5 @@
# -*- Mode: makefile -*-
BUILD_DIR?=$(CURDIR)/..
include ../config-host.mak
include config-target.mak
include config-devices.mak
@@ -131,12 +129,12 @@ ifdef CONFIG_SOFTMMU
obj-y += arch_init.o cpus.o monitor.o gdbstub.o balloon.o ioport.o numa.o
obj-y += qtest.o bootdevice.o
obj-y += hw/
obj-$(CONFIG_FDT) += device_tree.o
obj-$(CONFIG_KVM) += kvm-all.o
obj-y += memory.o cputlb.o
obj-y += memory.o savevm.o cputlb.o
obj-y += memory_mapping.o
obj-y += dump.o
obj-y += migration/ram.o migration/savevm.o
LIBS := $(libs_softmmu) $(LIBS)
LIBS+=$(libs_softmmu)
# xen support
obj-$(CONFIG_XEN) += xen-common.o
@@ -182,10 +180,6 @@ $(QEMU_PROG_BUILD): config-devices.mak
# build either PROG or PROGW
$(QEMU_PROG_BUILD): $(all-obj-y) ../libqemuutil.a ../libqemustub.a
$(call LINK, $(filter-out %.mak, $^))
ifdef CONFIG_DARWIN
$(call quiet-command,Rez -append $(SRC_PATH)/pc-bios/qemu.rsrc -o $@," REZ $(TARGET_DIR)$@")
$(call quiet-command,SetFile -a C $@," SETFILE $(TARGET_DIR)$@")
endif
gdbstub-xml.c: $(TARGET_XML_FILES) $(SRC_PATH)/scripts/feature_to_c.sh
$(call quiet-command,rm -f $@ && $(SHELL) $(SRC_PATH)/scripts/feature_to_c.sh $@ $(TARGET_XML_FILES)," GEN $(TARGET_DIR)$@")


@@ -1 +1 @@
2.4.0.1
2.3.1


@@ -24,6 +24,7 @@ struct AioHandler
IOHandler *io_read;
IOHandler *io_write;
int deleted;
int pollfds_idx;
void *opaque;
QLIST_ENTRY(AioHandler) node;
};
@@ -82,6 +83,7 @@ void aio_set_fd_handler(AioContext *ctx,
node->io_read = io_read;
node->io_write = io_write;
node->opaque = opaque;
node->pollfds_idx = -1;
node->pfd.events = (io_read ? G_IO_IN | G_IO_HUP | G_IO_ERR : 0);
node->pfd.events |= (io_write ? G_IO_OUT | G_IO_ERR : 0);
@@ -184,116 +186,69 @@ bool aio_dispatch(AioContext *ctx)
return progress;
}
/* These thread-local variables are used only in a small part of aio_poll
* around the call to the poll() system call. In particular they are not
* used while aio_poll is performing callbacks, which makes it much easier
* to think about reentrancy!
*
* Stack-allocated arrays would be perfect but they have size limitations;
* heap allocation is expensive enough that we want to reuse arrays across
* calls to aio_poll(). And because poll() has to be called without holding
* any lock, the arrays cannot be stored in AioContext. Thread-local data
* has none of the disadvantages of these three options.
*/
static __thread GPollFD *pollfds;
static __thread AioHandler **nodes;
static __thread unsigned npfd, nalloc;
static __thread Notifier pollfds_cleanup_notifier;
static void pollfds_cleanup(Notifier *n, void *unused)
{
g_assert(npfd == 0);
g_free(pollfds);
g_free(nodes);
nalloc = 0;
}
static void add_pollfd(AioHandler *node)
{
if (npfd == nalloc) {
if (nalloc == 0) {
pollfds_cleanup_notifier.notify = pollfds_cleanup;
qemu_thread_atexit_add(&pollfds_cleanup_notifier);
nalloc = 8;
} else {
g_assert(nalloc <= INT_MAX);
nalloc *= 2;
}
pollfds = g_renew(GPollFD, pollfds, nalloc);
nodes = g_renew(AioHandler *, nodes, nalloc);
}
nodes[npfd] = node;
pollfds[npfd] = (GPollFD) {
.fd = node->pfd.fd,
.events = node->pfd.events,
};
npfd++;
}
bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
int i, ret;
bool was_dispatching;
int ret;
bool progress;
int64_t timeout;
aio_context_acquire(ctx);
was_dispatching = ctx->dispatching;
progress = false;
/* aio_notify can avoid the expensive event_notifier_set if
* everything (file descriptors, bottom halves, timers) will
* be re-evaluated before the next blocking poll(). This is
* already true when aio_poll is called with blocking == false;
* if blocking == true, it is only true after poll() returns,
* so disable the optimization now.
* if blocking == true, it is only true after poll() returns.
*
* If we're in a nested event loop, ctx->dispatching might be true.
* In that case we can restore it just before returning, but we
* have to clear it now.
*/
if (blocking) {
atomic_add(&ctx->notify_me, 2);
}
aio_set_dispatching(ctx, !blocking);
ctx->walking_handlers++;
assert(npfd == 0);
g_array_set_size(ctx->pollfds, 0);
/* fill pollfds */
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
node->pollfds_idx = -1;
if (!node->deleted && node->pfd.events) {
add_pollfd(node);
GPollFD pfd = {
.fd = node->pfd.fd,
.events = node->pfd.events,
};
node->pollfds_idx = ctx->pollfds->len;
g_array_append_val(ctx->pollfds, pfd);
}
}
timeout = blocking ? aio_compute_timeout(ctx) : 0;
ctx->walking_handlers--;
/* wait until next event */
if (timeout) {
aio_context_release(ctx);
}
ret = qemu_poll_ns((GPollFD *)pollfds, npfd, timeout);
if (blocking) {
atomic_sub(&ctx->notify_me, 2);
}
if (timeout) {
aio_context_acquire(ctx);
}
aio_notify_accept(ctx);
ret = qemu_poll_ns((GPollFD *)ctx->pollfds->data,
ctx->pollfds->len,
blocking ? aio_compute_timeout(ctx) : 0);
/* if we have any readable fds, dispatch event */
if (ret > 0) {
for (i = 0; i < npfd; i++) {
nodes[i]->pfd.revents = pollfds[i].revents;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->pollfds_idx != -1) {
GPollFD *pfd = &g_array_index(ctx->pollfds, GPollFD,
node->pollfds_idx);
node->pfd.revents = pfd->revents;
}
}
}
npfd = 0;
ctx->walking_handlers--;
/* Run dispatch even if there were no readable fds to run timers */
aio_set_dispatching(ctx, true);
if (aio_dispatch(ctx)) {
progress = true;
}
aio_context_release(ctx);
aio_set_dispatching(ctx, was_dispatching);
return progress;
}
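
One side of the hunk above (see the add_pollfd() lines) keeps the poll arrays in thread-local storage and grows them geometrically instead of allocating them on every call. A minimal standalone sketch of that pattern, assuming plain realloc() in place of g_renew() and omitting the at-exit cleanup notifier; tls_add_pollfd is an illustrative name, not a QEMU function:

#include <assert.h>
#include <poll.h>
#include <stdlib.h>

static __thread struct pollfd *tls_pollfds;    /* reused across calls */
static __thread unsigned tls_npfd, tls_nalloc;

static void tls_add_pollfd(int fd, short events)
{
    if (tls_npfd == tls_nalloc) {
        /* Double the capacity (starting at 8) so repeated polls amortize
         * to O(1) allocations; the buffer survives between calls because
         * it is thread-local rather than stack- or context-owned. */
        tls_nalloc = tls_nalloc ? tls_nalloc * 2 : 8;
        tls_pollfds = realloc(tls_pollfds, tls_nalloc * sizeof(*tls_pollfds));
        assert(tls_pollfds);
    }
    tls_pollfds[tls_npfd++] = (struct pollfd){ .fd = fd, .events = events };
}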


@@ -279,25 +279,29 @@ bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
bool progress, have_select_revents, first;
bool was_dispatching, progress, have_select_revents, first;
int count;
int timeout;
aio_context_acquire(ctx);
have_select_revents = aio_prepare(ctx);
if (have_select_revents) {
blocking = false;
}
was_dispatching = ctx->dispatching;
progress = false;
/* aio_notify can avoid the expensive event_notifier_set if
* everything (file descriptors, bottom halves, timers) will
* be re-evaluated before the next blocking poll(). This is
* already true when aio_poll is called with blocking == false;
* if blocking == true, it is only true after poll() returns,
* so disable the optimization now.
* if blocking == true, it is only true after poll() returns.
*
* If we're in a nested event loop, ctx->dispatching might be true.
* In that case we can restore it just before returning, but we
* have to clear it now.
*/
if (blocking) {
atomic_add(&ctx->notify_me, 2);
}
have_select_revents = aio_prepare(ctx);
aio_set_dispatching(ctx, !blocking);
ctx->walking_handlers++;
@@ -312,36 +316,20 @@ bool aio_poll(AioContext *ctx, bool blocking)
ctx->walking_handlers--;
first = true;
/* ctx->notifier is always registered. */
assert(count > 0);
/* Multiple iterations, all of them non-blocking except the first,
* may be necessary to process all pending events. After the first
* WaitForMultipleObjects call ctx->notify_me will be decremented.
*/
do {
/* wait until next event */
while (count > 0) {
HANDLE event;
int ret;
timeout = blocking && !have_select_revents
timeout = blocking
? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
if (timeout) {
aio_context_release(ctx);
}
ret = WaitForMultipleObjects(count, events, FALSE, timeout);
if (blocking) {
assert(first);
atomic_sub(&ctx->notify_me, 2);
}
if (timeout) {
aio_context_acquire(ctx);
}
aio_set_dispatching(ctx, true);
if (first) {
aio_notify_accept(ctx);
progress |= aio_bh_poll(ctx);
first = false;
if (first && aio_bh_poll(ctx)) {
progress = true;
}
first = false;
/* if we have any signaled events, dispatch event */
event = NULL;
@@ -356,10 +344,10 @@ bool aio_poll(AioContext *ctx, bool blocking)
blocking = false;
progress |= aio_dispatch_handlers(ctx, event);
} while (count > 0);
}
progress |= timerlistgroup_run_timers(&ctx->tlg);
aio_context_release(ctx);
aio_set_dispatching(ctx, was_dispatching);
return progress;
}
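
The comment above notes that only the first WaitForMultipleObjects() call may block; later iterations must poll with a zero timeout until nothing more is signalled. A rough sketch of that loop shape, with wait_for_event() and dispatch() as hypothetical stand-ins for WaitForMultipleObjects() and aio_dispatch_handlers():

#include <stdbool.h>

bool drain_events(bool blocking,
                  int (*wait_for_event)(int timeout_ms),   /* <0: nothing */
                  bool (*dispatch)(int event))
{
    bool progress = false;
    bool first = true;

    for (;;) {
        /* Only the very first wait is allowed to block; afterwards we poll
         * with a zero timeout so every already-signalled event is consumed
         * before returning to the caller. */
        int timeout_ms = (blocking && first) ? -1 : 0;
        int event = wait_for_event(timeout_ms);
        first = false;
        if (event < 0) {
            break;
        }
        progress |= dispatch(event);
    }
    return progress;
}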

File diff suppressed because it is too large

66 async.c

@@ -79,10 +79,8 @@ int aio_bh_poll(AioContext *ctx)
* aio_notify again if necessary.
*/
if (!bh->deleted && atomic_xchg(&bh->scheduled, 0)) {
/* Idle BHs and the notify BH don't count as progress */
if (!bh->idle && bh != ctx->notify_dummy_bh) {
if (!bh->idle)
ret = 1;
}
bh->idle = 0;
bh->cb(bh->opaque);
}
@@ -186,8 +184,6 @@ aio_ctx_prepare(GSource *source, gint *timeout)
{
AioContext *ctx = (AioContext *) source;
atomic_or(&ctx->notify_me, 1);
/* We assume there is no timeout already supplied */
*timeout = qemu_timeout_ns_to_ms(aio_compute_timeout(ctx));
@@ -204,9 +200,6 @@ aio_ctx_check(GSource *source)
AioContext *ctx = (AioContext *) source;
QEMUBH *bh;
atomic_and(&ctx->notify_me, ~1);
aio_notify_accept(ctx);
for (bh = ctx->first_bh; bh; bh = bh->next) {
if (!bh->deleted && bh->scheduled) {
return true;
@@ -232,25 +225,12 @@ aio_ctx_finalize(GSource *source)
{
AioContext *ctx = (AioContext *) source;
qemu_bh_delete(ctx->notify_dummy_bh);
thread_pool_free(ctx->thread_pool);
qemu_mutex_lock(&ctx->bh_lock);
while (ctx->first_bh) {
QEMUBH *next = ctx->first_bh->next;
/* qemu_bh_delete() must have been called on BHs in this AioContext */
assert(ctx->first_bh->deleted);
g_free(ctx->first_bh);
ctx->first_bh = next;
}
qemu_mutex_unlock(&ctx->bh_lock);
aio_set_event_notifier(ctx, &ctx->notifier, NULL);
event_notifier_cleanup(&ctx->notifier);
rfifolock_destroy(&ctx->lock);
qemu_mutex_destroy(&ctx->bh_lock);
g_array_free(ctx->pollfds, TRUE);
timerlistgroup_deinit(&ctx->tlg);
}
@@ -275,22 +255,24 @@ ThreadPool *aio_get_thread_pool(AioContext *ctx)
return ctx->thread_pool;
}
void aio_notify(AioContext *ctx)
void aio_set_dispatching(AioContext *ctx, bool dispatching)
{
/* Write e.g. bh->scheduled before reading ctx->notify_me. Pairs
* with atomic_or in aio_ctx_prepare or atomic_add in aio_poll.
*/
smp_mb();
if (ctx->notify_me) {
event_notifier_set(&ctx->notifier);
atomic_mb_set(&ctx->notified, true);
ctx->dispatching = dispatching;
if (!dispatching) {
/* Write ctx->dispatching before reading e.g. bh->scheduled.
* Optimization: this is only needed when we're entering the "unsafe"
* phase where other threads must call event_notifier_set.
*/
smp_mb();
}
}
void aio_notify_accept(AioContext *ctx)
void aio_notify(AioContext *ctx)
{
if (atomic_xchg(&ctx->notified, false)) {
event_notifier_test_and_clear(&ctx->notifier);
/* Write e.g. bh->scheduled before reading ctx->dispatching. */
smp_mb();
if (!ctx->dispatching) {
event_notifier_set(&ctx->notifier);
}
}
@@ -301,19 +283,8 @@ static void aio_timerlist_notify(void *opaque)
static void aio_rfifolock_cb(void *opaque)
{
AioContext *ctx = opaque;
/* Kick owner thread in case they are blocked in aio_poll() */
qemu_bh_schedule(ctx->notify_dummy_bh);
}
static void notify_dummy_bh(void *opaque)
{
/* Do nothing, we were invoked just to force the event loop to iterate */
}
static void event_notifier_dummy_cb(EventNotifier *e)
{
aio_notify(opaque);
}
AioContext *aio_context_new(Error **errp)
@@ -330,14 +301,13 @@ AioContext *aio_context_new(Error **errp)
g_source_set_can_recurse(&ctx->source, true);
aio_set_event_notifier(ctx, &ctx->notifier,
(EventNotifierHandler *)
event_notifier_dummy_cb);
event_notifier_test_and_clear);
ctx->pollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
ctx->thread_pool = NULL;
qemu_mutex_init(&ctx->bh_lock);
rfifolock_init(&ctx->lock, aio_rfifolock_cb, ctx);
timerlistgroup_init(&ctx->tlg, aio_timerlist_notify, ctx);
ctx->notify_dummy_bh = aio_bh_new(ctx, notify_dummy_bh, NULL);
return ctx;
}
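
The barrier comments above describe a store-buffering style handshake: the notifier publishes work and then checks whether the poller might block, while the poller announces that it may block and then re-checks for pending work, so with the paired full barriers at least one side sees the other's write and a wakeup is never lost. An illustrative-only model in C11 atomics (the names and the kick() callback are placeholders, not QEMU's API; the decrement after poll() returns is left out for brevity):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int  notify_me;   /* poller is about to block in poll()   */
static atomic_bool scheduled;   /* pending work, e.g. a bottom half     */

void notifier_side(void (*kick)(void))
{
    atomic_store(&scheduled, true);     /* publish the work first        */
    if (atomic_load(&notify_me)) {      /* seq_cst pair: full barrier    */
        kick();                         /* wake the blocked poller       */
    }
}

bool poller_may_block(void)
{
    atomic_fetch_add(&notify_me, 2);    /* announce intent to block      */
    if (atomic_load(&scheduled)) {      /* re-check before blocking      */
        atomic_fetch_sub(&notify_me, 2);
        return false;                   /* dispatch the work instead     */
    }
    return true;                        /* now safe to block in poll()   */
}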


@@ -5,9 +5,13 @@ common-obj-$(CONFIG_SPICE) += spiceaudio.o
common-obj-$(CONFIG_COREAUDIO) += coreaudio.o
common-obj-$(CONFIG_ALSA) += alsaaudio.o
common-obj-$(CONFIG_DSOUND) += dsoundaudio.o
common-obj-$(CONFIG_FMOD) += fmodaudio.o
common-obj-$(CONFIG_ESD) += esdaudio.o
common-obj-$(CONFIG_PA) += paaudio.o
common-obj-$(CONFIG_WINWAVE) += winwaveaudio.o
common-obj-$(CONFIG_AUDIO_PT_INT) += audio_pt_int.o
common-obj-$(CONFIG_AUDIO_WIN_INT) += audio_win_int.o
common-obj-y += wavcapture.o
$(obj)/audio.o $(obj)/fmodaudio.o: QEMU_CFLAGS += $(FMOD_CFLAGS)
sdlaudio.o-cflags := $(SDL_CFLAGS)


@@ -25,7 +25,6 @@
#include "qemu-common.h"
#include "qemu/main-loop.h"
#include "audio.h"
#include "trace.h"
#if QEMU_GNUC_PREREQ(4, 3)
#pragma GCC diagnostic ignored "-Waddress"
@@ -34,28 +33,9 @@
#define AUDIO_CAP "alsa"
#include "audio_int.h"
typedef struct ALSAConf {
int size_in_usec_in;
int size_in_usec_out;
const char *pcm_name_in;
const char *pcm_name_out;
unsigned int buffer_size_in;
unsigned int period_size_in;
unsigned int buffer_size_out;
unsigned int period_size_out;
unsigned int threshold;
int buffer_size_in_overridden;
int period_size_in_overridden;
int buffer_size_out_overridden;
int period_size_out_overridden;
} ALSAConf;
struct pollhlp {
snd_pcm_t *handle;
struct pollfd *pfds;
ALSAConf *conf;
int count;
int mask;
};
@@ -76,6 +56,30 @@ typedef struct ALSAVoiceIn {
struct pollhlp pollhlp;
} ALSAVoiceIn;
static struct {
int size_in_usec_in;
int size_in_usec_out;
const char *pcm_name_in;
const char *pcm_name_out;
unsigned int buffer_size_in;
unsigned int period_size_in;
unsigned int buffer_size_out;
unsigned int period_size_out;
unsigned int threshold;
int buffer_size_in_overridden;
int period_size_in_overridden;
int buffer_size_out_overridden;
int period_size_out_overridden;
int verbose;
} conf = {
.buffer_size_out = 4096,
.period_size_out = 1024,
.pcm_name_out = "default",
.pcm_name_in = "default",
};
struct alsa_params_req {
int freq;
snd_pcm_format_t fmt;
@@ -201,7 +205,9 @@ static void alsa_poll_handler (void *opaque)
}
if (!(revents & hlp->mask)) {
trace_alsa_revents(revents);
if (conf.verbose) {
dolog ("revents = %d\n", revents);
}
return;
}
@@ -260,14 +266,31 @@ static int alsa_poll_helper (snd_pcm_t *handle, struct pollhlp *hlp, int mask)
for (i = 0; i < count; ++i) {
if (pfds[i].events & POLLIN) {
qemu_set_fd_handler (pfds[i].fd, alsa_poll_handler, NULL, hlp);
err = qemu_set_fd_handler (pfds[i].fd, alsa_poll_handler,
NULL, hlp);
}
if (pfds[i].events & POLLOUT) {
trace_alsa_pollout(i, pfds[i].fd);
qemu_set_fd_handler (pfds[i].fd, NULL, alsa_poll_handler, hlp);
if (conf.verbose) {
dolog ("POLLOUT %d %d\n", i, pfds[i].fd);
}
err = qemu_set_fd_handler (pfds[i].fd, NULL,
alsa_poll_handler, hlp);
}
if (conf.verbose) {
dolog ("Set handler events=%#x index=%d fd=%d err=%d\n",
pfds[i].events, i, pfds[i].fd, err);
}
trace_alsa_set_handler(pfds[i].events, i, pfds[i].fd, err);
if (err) {
dolog ("Failed to set handler events=%#x index=%d fd=%d err=%d\n",
pfds[i].events, i, pfds[i].fd, err);
while (i--) {
qemu_set_fd_handler (pfds[i].fd, NULL, NULL, NULL);
}
g_free (pfds);
return -1;
}
}
hlp->pfds = pfds;
hlp->count = count;
@@ -453,15 +476,14 @@ static void alsa_set_threshold (snd_pcm_t *handle, snd_pcm_uframes_t threshold)
}
static int alsa_open (int in, struct alsa_params_req *req,
struct alsa_params_obt *obt, snd_pcm_t **handlep,
ALSAConf *conf)
struct alsa_params_obt *obt, snd_pcm_t **handlep)
{
snd_pcm_t *handle;
snd_pcm_hw_params_t *hw_params;
int err;
int size_in_usec;
unsigned int freq, nchannels;
const char *pcm_name = in ? conf->pcm_name_in : conf->pcm_name_out;
const char *pcm_name = in ? conf.pcm_name_in : conf.pcm_name_out;
snd_pcm_uframes_t obt_buffer_size;
const char *typ = in ? "ADC" : "DAC";
snd_pcm_format_t obtfmt;
@@ -500,7 +522,7 @@ static int alsa_open (int in, struct alsa_params_req *req,
}
err = snd_pcm_hw_params_set_format (handle, hw_params, req->fmt);
if (err < 0) {
if (err < 0 && conf.verbose) {
alsa_logerr2 (err, typ, "Failed to set format %d\n", req->fmt);
}
@@ -632,7 +654,7 @@ static int alsa_open (int in, struct alsa_params_req *req,
goto err;
}
if (!in && conf->threshold) {
if (!in && conf.threshold) {
snd_pcm_uframes_t threshold;
int bytes_per_sec;
@@ -654,7 +676,7 @@ static int alsa_open (int in, struct alsa_params_req *req,
break;
}
threshold = (conf->threshold * bytes_per_sec) / 1000;
threshold = (conf.threshold * bytes_per_sec) / 1000;
alsa_set_threshold (handle, threshold);
}
@@ -664,9 +686,10 @@ static int alsa_open (int in, struct alsa_params_req *req,
*handlep = handle;
if (obtfmt != req->fmt ||
if (conf.verbose &&
(obtfmt != req->fmt ||
obt->nchannels != req->nchannels ||
obt->freq != req->freq) {
obt->freq != req->freq)) {
dolog ("Audio parameters for %s\n", typ);
alsa_dump_info (req, obt, obtfmt);
}
@@ -720,7 +743,9 @@ static void alsa_write_pending (ALSAVoiceOut *alsa)
if (written <= 0) {
switch (written) {
case 0:
trace_alsa_wrote_zero(len);
if (conf.verbose) {
dolog ("Failed to write %d frames (wrote zero)\n", len);
}
return;
case -EPIPE:
@@ -729,7 +754,9 @@ static void alsa_write_pending (ALSAVoiceOut *alsa)
len);
return;
}
trace_alsa_xrun_out();
if (conf.verbose) {
dolog ("Recovering from playback xrun\n");
}
continue;
case -ESTRPIPE:
@@ -740,7 +767,9 @@ static void alsa_write_pending (ALSAVoiceOut *alsa)
len);
return;
}
trace_alsa_resume_out();
if (conf.verbose) {
dolog ("Resuming suspended output stream\n");
}
continue;
case -EAGAIN:
@@ -790,27 +819,25 @@ static void alsa_fini_out (HWVoiceOut *hw)
alsa->pcm_buf = NULL;
}
static int alsa_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int alsa_init_out (HWVoiceOut *hw, struct audsettings *as)
{
ALSAVoiceOut *alsa = (ALSAVoiceOut *) hw;
struct alsa_params_req req;
struct alsa_params_obt obt;
snd_pcm_t *handle;
struct audsettings obt_as;
ALSAConf *conf = drv_opaque;
req.fmt = aud_to_alsafmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.period_size = conf->period_size_out;
req.buffer_size = conf->buffer_size_out;
req.size_in_usec = conf->size_in_usec_out;
req.period_size = conf.period_size_out;
req.buffer_size = conf.buffer_size_out;
req.size_in_usec = conf.size_in_usec_out;
req.override_mask =
(conf->period_size_out_overridden ? 1 : 0) |
(conf->buffer_size_out_overridden ? 2 : 0);
(conf.period_size_out_overridden ? 1 : 0) |
(conf.buffer_size_out_overridden ? 2 : 0);
if (alsa_open (0, &req, &obt, &handle, conf)) {
if (alsa_open (0, &req, &obt, &handle)) {
return -1;
}
@@ -831,7 +858,6 @@ static int alsa_init_out(HWVoiceOut *hw, struct audsettings *as,
}
alsa->handle = handle;
alsa->pollhlp.conf = conf;
return 0;
}
@@ -902,26 +928,25 @@ static int alsa_ctl_out (HWVoiceOut *hw, int cmd, ...)
return -1;
}
static int alsa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int alsa_init_in (HWVoiceIn *hw, struct audsettings *as)
{
ALSAVoiceIn *alsa = (ALSAVoiceIn *) hw;
struct alsa_params_req req;
struct alsa_params_obt obt;
snd_pcm_t *handle;
struct audsettings obt_as;
ALSAConf *conf = drv_opaque;
req.fmt = aud_to_alsafmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.period_size = conf->period_size_in;
req.buffer_size = conf->buffer_size_in;
req.size_in_usec = conf->size_in_usec_in;
req.period_size = conf.period_size_in;
req.buffer_size = conf.buffer_size_in;
req.size_in_usec = conf.size_in_usec_in;
req.override_mask =
(conf->period_size_in_overridden ? 1 : 0) |
(conf->buffer_size_in_overridden ? 2 : 0);
(conf.period_size_in_overridden ? 1 : 0) |
(conf.buffer_size_in_overridden ? 2 : 0);
if (alsa_open (1, &req, &obt, &handle, conf)) {
if (alsa_open (1, &req, &obt, &handle)) {
return -1;
}
@@ -942,7 +967,6 @@ static int alsa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
}
alsa->handle = handle;
alsa->pollhlp.conf = conf;
return 0;
}
@@ -998,10 +1022,14 @@ static int alsa_run_in (HWVoiceIn *hw)
dolog ("Failed to resume suspended input stream\n");
return 0;
}
trace_alsa_resume_in();
if (conf.verbose) {
dolog ("Resuming suspended input stream\n");
}
break;
default:
trace_alsa_no_frames(state);
if (conf.verbose) {
dolog ("No frames available and ALSA state is %d\n", state);
}
return 0;
}
}
@@ -1036,7 +1064,9 @@ static int alsa_run_in (HWVoiceIn *hw)
if (nread <= 0) {
switch (nread) {
case 0:
trace_alsa_read_zero(len);
if (conf.verbose) {
dolog ("Failed to read %ld frames (read zero)\n", len);
}
goto exit;
case -EPIPE:
@@ -1044,7 +1074,9 @@ static int alsa_run_in (HWVoiceIn *hw)
alsa_logerr (nread, "Failed to read %ld frames\n", len);
goto exit;
}
trace_alsa_xrun_in();
if (conf.verbose) {
dolog ("Recovering from capture xrun\n");
}
continue;
case -EAGAIN:
@@ -1116,85 +1148,82 @@ static int alsa_ctl_in (HWVoiceIn *hw, int cmd, ...)
return -1;
}
static ALSAConf glob_conf = {
.buffer_size_out = 4096,
.period_size_out = 1024,
.pcm_name_out = "default",
.pcm_name_in = "default",
};
static void *alsa_audio_init (void)
{
ALSAConf *conf = g_malloc(sizeof(ALSAConf));
*conf = glob_conf;
return conf;
return &conf;
}
static void alsa_audio_fini (void *opaque)
{
g_free(opaque);
(void) opaque;
}
static struct audio_option alsa_options[] = {
{
.name = "DAC_SIZE_IN_USEC",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.size_in_usec_out,
.valp = &conf.size_in_usec_out,
.descr = "DAC period/buffer size in microseconds (otherwise in frames)"
},
{
.name = "DAC_PERIOD_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.period_size_out,
.valp = &conf.period_size_out,
.descr = "DAC period size (0 to go with system default)",
.overriddenp = &glob_conf.period_size_out_overridden
.overriddenp = &conf.period_size_out_overridden
},
{
.name = "DAC_BUFFER_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.buffer_size_out,
.valp = &conf.buffer_size_out,
.descr = "DAC buffer size (0 to go with system default)",
.overriddenp = &glob_conf.buffer_size_out_overridden
.overriddenp = &conf.buffer_size_out_overridden
},
{
.name = "ADC_SIZE_IN_USEC",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.size_in_usec_in,
.valp = &conf.size_in_usec_in,
.descr =
"ADC period/buffer size in microseconds (otherwise in frames)"
},
{
.name = "ADC_PERIOD_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.period_size_in,
.valp = &conf.period_size_in,
.descr = "ADC period size (0 to go with system default)",
.overriddenp = &glob_conf.period_size_in_overridden
.overriddenp = &conf.period_size_in_overridden
},
{
.name = "ADC_BUFFER_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.buffer_size_in,
.valp = &conf.buffer_size_in,
.descr = "ADC buffer size (0 to go with system default)",
.overriddenp = &glob_conf.buffer_size_in_overridden
.overriddenp = &conf.buffer_size_in_overridden
},
{
.name = "THRESHOLD",
.tag = AUD_OPT_INT,
.valp = &glob_conf.threshold,
.valp = &conf.threshold,
.descr = "(undocumented)"
},
{
.name = "DAC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.pcm_name_out,
.valp = &conf.pcm_name_out,
.descr = "DAC device name (for instance dmix)"
},
{
.name = "ADC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.pcm_name_in,
.valp = &conf.pcm_name_in,
.descr = "ADC device name"
},
{
.name = "VERBOSE",
.tag = AUD_OPT_BOOL,
.valp = &conf.verbose,
.descr = "Behave in a more verbose way"
},
{ /* End of list */ }
};


@@ -30,6 +30,7 @@
#define AUDIO_CAP "audio"
#include "audio_int.h"
/* #define DEBUG_PLIVE */
/* #define DEBUG_LIVE */
/* #define DEBUG_OUT */
/* #define DEBUG_CAPTURE */
@@ -65,6 +66,8 @@ static struct {
int hertz;
int64_t ticks;
} period;
int plive;
int log_to_monitor;
int try_poll_in;
int try_poll_out;
} conf = {
@@ -93,6 +96,8 @@ static struct {
},
.period = { .hertz = 100 },
.plive = 0,
.log_to_monitor = 0,
.try_poll_in = 1,
.try_poll_out = 1,
};
@@ -326,11 +331,20 @@ static const char *audio_get_conf_str (const char *key,
void AUD_vlog (const char *cap, const char *fmt, va_list ap)
{
if (cap) {
fprintf(stderr, "%s: ", cap);
}
if (conf.log_to_monitor) {
if (cap) {
monitor_printf(default_mon, "%s: ", cap);
}
vfprintf(stderr, fmt, ap);
monitor_vprintf(default_mon, fmt, ap);
}
else {
if (cap) {
fprintf (stderr, "%s: ", cap);
}
vfprintf (stderr, fmt, ap);
}
}
void AUD_log (const char *cap, const char *fmt, ...)
@@ -1440,6 +1454,9 @@ static void audio_run_out (AudioState *s)
while (sw) {
sw1 = sw->entries.le_next;
if (!sw->active && !sw->callback.fn) {
#ifdef DEBUG_PLIVE
dolog ("Finishing with old voice\n");
#endif
audio_close_out (sw);
}
sw = sw1;
@@ -1631,6 +1648,18 @@ static struct audio_option audio_options[] = {
.valp = &conf.period.hertz,
.descr = "Timer period in HZ (0 - use lowest possible)"
},
{
.name = "PLIVE",
.tag = AUD_OPT_BOOL,
.valp = &conf.plive,
.descr = "(undocumented)"
},
{
.name = "LOG_TO_MONITOR",
.tag = AUD_OPT_BOOL,
.valp = &conf.log_to_monitor,
.descr = "Print logging messages to monitor instead of stderr"
},
{ /* End of list */ }
};


@@ -156,13 +156,13 @@ struct audio_driver {
};
struct audio_pcm_ops {
int (*init_out)(HWVoiceOut *hw, struct audsettings *as, void *drv_opaque);
int (*init_out)(HWVoiceOut *hw, struct audsettings *as);
void (*fini_out)(HWVoiceOut *hw);
int (*run_out) (HWVoiceOut *hw, int live);
int (*write) (SWVoiceOut *sw, void *buf, int size);
int (*ctl_out) (HWVoiceOut *hw, int cmd, ...);
int (*init_in) (HWVoiceIn *hw, struct audsettings *as, void *drv_opaque);
int (*init_in) (HWVoiceIn *hw, struct audsettings *as);
void (*fini_in) (HWVoiceIn *hw);
int (*run_in) (HWVoiceIn *hw);
int (*read) (SWVoiceIn *sw, void *buf, int size);
@@ -206,11 +206,14 @@ extern struct audio_driver no_audio_driver;
extern struct audio_driver oss_audio_driver;
extern struct audio_driver sdl_audio_driver;
extern struct audio_driver wav_audio_driver;
extern struct audio_driver fmod_audio_driver;
extern struct audio_driver alsa_audio_driver;
extern struct audio_driver coreaudio_audio_driver;
extern struct audio_driver dsound_audio_driver;
extern struct audio_driver esd_audio_driver;
extern struct audio_driver pa_audio_driver;
extern struct audio_driver spice_audio_driver;
extern struct audio_driver winwave_audio_driver;
extern const struct mixeng_volume nominal_volume;
void audio_pcm_init_info (struct audio_pcm_info *info, struct audsettings *as);


@@ -262,7 +262,7 @@ static HW *glue (audio_pcm_hw_add_new_, TYPE) (struct audsettings *as)
#ifdef DAC
QLIST_INIT (&hw->cap_head);
#endif
if (glue (hw->pcm_ops->init_, TYPE) (hw, as, s->drv_opaque)) {
if (glue (hw->pcm_ops->init_, TYPE) (hw, as)) {
goto err0;
}
@@ -398,6 +398,10 @@ SW *glue (AUD_open_, TYPE) (
)
{
AudioState *s = &glob_audio_state;
#ifdef DAC
int live = 0;
SW *old_sw = NULL;
#endif
if (audio_bug (AUDIO_FUNC, !card || !name || !callback_fn || !as)) {
dolog ("card=%p name=%p callback_fn=%p as=%p\n",
@@ -422,6 +426,29 @@ SW *glue (AUD_open_, TYPE) (
return sw;
}
#ifdef DAC
if (conf.plive && sw && (!sw->active && !sw->empty)) {
live = sw->total_hw_samples_mixed;
#ifdef DEBUG_PLIVE
dolog ("Replacing voice %s with %d live samples\n", SW_NAME (sw), live);
dolog ("Old %s freq %d, bits %d, channels %d\n",
SW_NAME (sw), sw->info.freq, sw->info.bits, sw->info.nchannels);
dolog ("New %s freq %d, bits %d, channels %d\n",
name,
as->freq,
(as->fmt == AUD_FMT_S16 || as->fmt == AUD_FMT_U16) ? 16 : 8,
as->nchannels);
#endif
if (live) {
old_sw = sw;
old_sw->callback.fn = NULL;
sw = NULL;
}
}
#endif
if (!glue (conf.fixed_, TYPE).enabled && sw) {
glue (AUD_close_, TYPE) (card, sw);
sw = NULL;
@@ -454,6 +481,20 @@ SW *glue (AUD_open_, TYPE) (
sw->callback.fn = callback_fn;
sw->callback.opaque = callback_opaque;
#ifdef DAC
if (live) {
int mixed =
(live << old_sw->info.shift)
* old_sw->info.bytes_per_second
/ sw->info.bytes_per_second;
#ifdef DEBUG_PLIVE
dolog ("Silence will be mixed %d\n", mixed);
#endif
sw->total_hw_samples_mixed += mixed;
}
#endif
#ifdef DEBUG_AUDIO
dolog ("%s\n", name);
audio_pcm_print_info ("hw", &sw->hw->info);


@@ -32,16 +32,20 @@
#define AUDIO_CAP "coreaudio"
#include "audio_int.h"
static int isAtexit;
typedef struct {
struct {
int buffer_frames;
int nbuffers;
} CoreaudioConf;
int isAtexit;
} conf = {
.buffer_frames = 512,
.nbuffers = 4,
.isAtexit = 0
};
typedef struct coreaudioVoiceOut {
HWVoiceOut hw;
pthread_mutex_t mutex;
int isAtexit;
AudioDeviceID outputDeviceID;
UInt32 audioDevicePropertyBufferFrameSize;
AudioStreamBasicDescription outputStreamBasicDescription;
@@ -157,7 +161,7 @@ static inline UInt32 isPlaying (AudioDeviceID outputDeviceID)
static void coreaudio_atexit (void)
{
isAtexit = 1;
conf.isAtexit = 1;
}
static int coreaudio_lock (coreaudioVoiceOut *core, const char *fn_name)
@@ -283,8 +287,7 @@ static int coreaudio_write (SWVoiceOut *sw, void *buf, int len)
return audio_pcm_sw_write (sw, buf, len);
}
static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int coreaudio_init_out (HWVoiceOut *hw, struct audsettings *as)
{
OSStatus status;
coreaudioVoiceOut *core = (coreaudioVoiceOut *) hw;
@@ -292,7 +295,6 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
int err;
const char *typ = "playback";
AudioValueRange frameRange;
CoreaudioConf *conf = drv_opaque;
/* create mutex */
err = pthread_mutex_init(&core->mutex, NULL);
@@ -334,16 +336,16 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
return -1;
}
if (frameRange.mMinimum > conf->buffer_frames) {
if (frameRange.mMinimum > conf.buffer_frames) {
core->audioDevicePropertyBufferFrameSize = (UInt32) frameRange.mMinimum;
dolog ("warning: Upsizing Buffer Frames to %f\n", frameRange.mMinimum);
}
else if (frameRange.mMaximum < conf->buffer_frames) {
else if (frameRange.mMaximum < conf.buffer_frames) {
core->audioDevicePropertyBufferFrameSize = (UInt32) frameRange.mMaximum;
dolog ("warning: Downsizing Buffer Frames to %f\n", frameRange.mMaximum);
}
else {
core->audioDevicePropertyBufferFrameSize = conf->buffer_frames;
core->audioDevicePropertyBufferFrameSize = conf.buffer_frames;
}
/* set Buffer Frame Size */
@@ -377,7 +379,7 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
"Could not get device buffer frame size\n");
return -1;
}
hw->samples = conf->nbuffers * core->audioDevicePropertyBufferFrameSize;
hw->samples = conf.nbuffers * core->audioDevicePropertyBufferFrameSize;
/* get StreamFormat */
propertySize = sizeof(core->outputStreamBasicDescription);
@@ -441,7 +443,7 @@ static void coreaudio_fini_out (HWVoiceOut *hw)
int err;
coreaudioVoiceOut *core = (coreaudioVoiceOut *) hw;
if (!isAtexit) {
if (!conf.isAtexit) {
/* stop playback */
if (isPlaying(core->outputDeviceID)) {
status = AudioDeviceStop(core->outputDeviceID, audioDeviceIOProc);
@@ -484,7 +486,7 @@ static int coreaudio_ctl_out (HWVoiceOut *hw, int cmd, ...)
case VOICE_DISABLE:
/* stop playback */
if (!isAtexit) {
if (!conf.isAtexit) {
if (isPlaying(core->outputDeviceID)) {
status = AudioDeviceStop(core->outputDeviceID, audioDeviceIOProc);
if (status != kAudioHardwareNoError) {
@@ -497,36 +499,28 @@ static int coreaudio_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static CoreaudioConf glob_conf = {
.buffer_frames = 512,
.nbuffers = 4,
};
static void *coreaudio_audio_init (void)
{
CoreaudioConf *conf = g_malloc(sizeof(CoreaudioConf));
*conf = glob_conf;
atexit(coreaudio_atexit);
return conf;
return &coreaudio_audio_init;
}
static void coreaudio_audio_fini (void *opaque)
{
g_free(opaque);
(void) opaque;
}
static struct audio_option coreaudio_options[] = {
{
.name = "BUFFER_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.buffer_frames,
.valp = &conf.buffer_frames,
.descr = "Size of the buffer in frames"
},
{
.name = "BUFFER_COUNT",
.tag = AUD_OPT_INT,
.valp = &glob_conf.nbuffers,
.valp = &conf.nbuffers,
.descr = "Number of buffers"
},
{ /* End of list */ }


@@ -67,11 +67,11 @@ static int glue (dsound_lock_, TYPE) (
LPVOID *p2p,
DWORD *blen1p,
DWORD *blen2p,
int entire,
dsound *s
int entire
)
{
HRESULT hr;
int i;
LPVOID p1 = NULL, p2 = NULL;
DWORD blen1 = 0, blen2 = 0;
DWORD flag;
@@ -81,18 +81,37 @@ static int glue (dsound_lock_, TYPE) (
#else
flag = entire ? DSBLOCK_ENTIREBUFFER : 0;
#endif
hr = glue(IFACE, _Lock)(buf, pos, len, &p1, &blen1, &p2, &blen2, flag);
for (i = 0; i < conf.lock_retries; ++i) {
hr = glue (IFACE, _Lock) (
buf,
pos,
len,
&p1,
&blen1,
&p2,
&blen2,
flag
);
if (FAILED (hr)) {
if (FAILED (hr)) {
#ifndef DSBTYPE_IN
if (hr == DSERR_BUFFERLOST) {
if (glue (dsound_restore_, TYPE) (buf, s)) {
dsound_logerr (hr, "Could not lock " NAME "\n");
if (hr == DSERR_BUFFERLOST) {
if (glue (dsound_restore_, TYPE) (buf)) {
dsound_logerr (hr, "Could not lock " NAME "\n");
goto fail;
}
continue;
}
#endif
dsound_logerr (hr, "Could not lock " NAME "\n");
goto fail;
}
#endif
dsound_logerr (hr, "Could not lock " NAME "\n");
break;
}
if (i == conf.lock_retries) {
dolog ("%d attempts to lock " NAME " failed\n", i);
goto fail;
}
@@ -155,19 +174,16 @@ static void dsound_fini_out (HWVoiceOut *hw)
}
#ifdef DSBTYPE_IN
static int dsound_init_in(HWVoiceIn *hw, struct audsettings *as,
void *drv_opaque)
static int dsound_init_in (HWVoiceIn *hw, struct audsettings *as)
#else
static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int dsound_init_out (HWVoiceOut *hw, struct audsettings *as)
#endif
{
int err;
HRESULT hr;
dsound *s = drv_opaque;
dsound *s = &glob_dsound;
WAVEFORMATEX wfx;
struct audsettings obt_as;
DSoundConf *conf = &s->conf;
#ifdef DSBTYPE_IN
const char *typ = "ADC";
DSoundVoiceIn *ds = (DSoundVoiceIn *) hw;
@@ -194,7 +210,7 @@ static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
bd.dwSize = sizeof (bd);
bd.lpwfxFormat = &wfx;
#ifdef DSBTYPE_IN
bd.dwBufferBytes = conf->bufsize_in;
bd.dwBufferBytes = conf.bufsize_in;
hr = IDirectSoundCapture_CreateCaptureBuffer (
s->dsound_capture,
&bd,
@@ -203,7 +219,7 @@ static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
);
#else
bd.dwFlags = DSBCAPS_STICKYFOCUS | DSBCAPS_GETCURRENTPOSITION2;
bd.dwBufferBytes = conf->bufsize_out;
bd.dwBufferBytes = conf.bufsize_out;
hr = IDirectSound_CreateSoundBuffer (
s->dsound,
&bd,
@@ -253,7 +269,6 @@ static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
);
}
hw->samples = bc.dwBufferBytes >> hw->info.shift;
ds->s = s;
#ifdef DEBUG_DSOUND
dolog ("caps %ld, desc %ld\n",


@@ -41,25 +41,42 @@
/* #define DEBUG_DSOUND */
typedef struct {
static struct {
int lock_retries;
int restore_retries;
int getstatus_retries;
int set_primary;
int bufsize_in;
int bufsize_out;
struct audsettings settings;
int latency_millis;
} DSoundConf;
} conf = {
.lock_retries = 1,
.restore_retries = 1,
.getstatus_retries = 1,
.set_primary = 0,
.bufsize_in = 16384,
.bufsize_out = 16384,
.settings.freq = 44100,
.settings.nchannels = 2,
.settings.fmt = AUD_FMT_S16,
.latency_millis = 10
};
typedef struct {
LPDIRECTSOUND dsound;
LPDIRECTSOUNDCAPTURE dsound_capture;
LPDIRECTSOUNDBUFFER dsound_primary_buffer;
struct audsettings settings;
DSoundConf conf;
} dsound;
static dsound glob_dsound;
typedef struct {
HWVoiceOut hw;
LPDIRECTSOUNDBUFFER dsound_buffer;
DWORD old_pos;
int first_time;
dsound *s;
#ifdef DEBUG_DSOUND
DWORD old_ppos;
DWORD played;
@@ -71,7 +88,6 @@ typedef struct {
HWVoiceIn hw;
int first_time;
LPDIRECTSOUNDCAPTUREBUFFER dsound_capture_buffer;
dsound *s;
} DSoundVoiceIn;
static void dsound_log_hresult (HRESULT hr)
@@ -265,17 +281,29 @@ static void print_wave_format (WAVEFORMATEX *wfx)
}
#endif
static int dsound_restore_out (LPDIRECTSOUNDBUFFER dsb, dsound *s)
static int dsound_restore_out (LPDIRECTSOUNDBUFFER dsb)
{
HRESULT hr;
int i;
hr = IDirectSoundBuffer_Restore (dsb);
for (i = 0; i < conf.restore_retries; ++i) {
hr = IDirectSoundBuffer_Restore (dsb);
if (hr != DS_OK) {
dsound_logerr (hr, "Could not restore playback buffer\n");
return -1;
switch (hr) {
case DS_OK:
return 0;
case DSERR_BUFFERLOST:
continue;
default:
dsound_logerr (hr, "Could not restore playback buffer\n");
return -1;
}
}
return 0;
dolog ("%d attempts to restore playback buffer failed\n", i);
return -1;
}
#include "dsound_template.h"
@@ -283,20 +311,25 @@ static int dsound_restore_out (LPDIRECTSOUNDBUFFER dsb, dsound *s)
#include "dsound_template.h"
#undef DSBTYPE_IN
static int dsound_get_status_out (LPDIRECTSOUNDBUFFER dsb, DWORD *statusp,
dsound *s)
static int dsound_get_status_out (LPDIRECTSOUNDBUFFER dsb, DWORD *statusp)
{
HRESULT hr;
int i;
hr = IDirectSoundBuffer_GetStatus (dsb, statusp);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not get playback buffer status\n");
return -1;
}
for (i = 0; i < conf.getstatus_retries; ++i) {
hr = IDirectSoundBuffer_GetStatus (dsb, statusp);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not get playback buffer status\n");
return -1;
}
if (*statusp & DSERR_BUFFERLOST) {
dsound_restore_out(dsb, s);
return -1;
if (*statusp & DSERR_BUFFERLOST) {
if (dsound_restore_out (dsb)) {
return -1;
}
continue;
}
break;
}
return 0;
@@ -343,8 +376,7 @@ static void dsound_write_sample (HWVoiceOut *hw, uint8_t *dst, int dst_len)
hw->rpos = pos % hw->samples;
}
static void dsound_clear_sample (HWVoiceOut *hw, LPDIRECTSOUNDBUFFER dsb,
dsound *s)
static void dsound_clear_sample (HWVoiceOut *hw, LPDIRECTSOUNDBUFFER dsb)
{
int err;
LPVOID p1, p2;
@@ -357,8 +389,7 @@ static void dsound_clear_sample (HWVoiceOut *hw, LPDIRECTSOUNDBUFFER dsb,
hw->samples << hw->info.shift,
&p1, &p2,
&blen1, &blen2,
1,
s
1
);
if (err) {
return;
@@ -384,9 +415,25 @@ static void dsound_clear_sample (HWVoiceOut *hw, LPDIRECTSOUNDBUFFER dsb,
dsound_unlock_out (dsb, p1, p2, blen1, blen2);
}
static int dsound_open (dsound *s)
static void dsound_close (dsound *s)
{
HRESULT hr;
if (s->dsound_primary_buffer) {
hr = IDirectSoundBuffer_Release (s->dsound_primary_buffer);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not release primary buffer\n");
}
s->dsound_primary_buffer = NULL;
}
}
static int dsound_open (dsound *s)
{
int err;
HRESULT hr;
WAVEFORMATEX wfx;
DSBUFFERDESC dsbd;
HWND hwnd;
hwnd = GetForegroundWindow ();
@@ -402,7 +449,63 @@ static int dsound_open (dsound *s)
return -1;
}
if (!conf.set_primary) {
return 0;
}
err = waveformat_from_audio_settings (&wfx, &conf.settings);
if (err) {
return -1;
}
memset (&dsbd, 0, sizeof (dsbd));
dsbd.dwSize = sizeof (dsbd);
dsbd.dwFlags = DSBCAPS_PRIMARYBUFFER;
dsbd.dwBufferBytes = 0;
dsbd.lpwfxFormat = NULL;
hr = IDirectSound_CreateSoundBuffer (
s->dsound,
&dsbd,
&s->dsound_primary_buffer,
NULL
);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not create primary playback buffer\n");
return -1;
}
hr = IDirectSoundBuffer_SetFormat (s->dsound_primary_buffer, &wfx);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not set primary playback buffer format\n");
}
hr = IDirectSoundBuffer_GetFormat (
s->dsound_primary_buffer,
&wfx,
sizeof (wfx),
NULL
);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not get primary playback buffer format\n");
goto fail0;
}
#ifdef DEBUG_DSOUND
dolog ("Primary\n");
print_wave_format (&wfx);
#endif
err = waveformat_to_audio_settings (&wfx, &s->settings);
if (err) {
goto fail0;
}
return 0;
fail0:
dsound_close (s);
return -1;
}
static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
@@ -411,7 +514,6 @@ static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
DWORD status;
DSoundVoiceOut *ds = (DSoundVoiceOut *) hw;
LPDIRECTSOUNDBUFFER dsb = ds->dsound_buffer;
dsound *s = ds->s;
if (!dsb) {
dolog ("Attempt to control voice without a buffer\n");
@@ -420,7 +522,7 @@ static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
switch (cmd) {
case VOICE_ENABLE:
if (dsound_get_status_out (dsb, &status, s)) {
if (dsound_get_status_out (dsb, &status)) {
return -1;
}
@@ -429,7 +531,7 @@ static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
dsound_clear_sample (hw, dsb, s);
dsound_clear_sample (hw, dsb);
hr = IDirectSoundBuffer_Play (dsb, 0, 0, DSBPLAY_LOOPING);
if (FAILED (hr)) {
@@ -439,7 +541,7 @@ static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
break;
case VOICE_DISABLE:
if (dsound_get_status_out (dsb, &status, s)) {
if (dsound_get_status_out (dsb, &status)) {
return -1;
}
@@ -476,8 +578,6 @@ static int dsound_run_out (HWVoiceOut *hw, int live)
DWORD wpos, ppos, old_pos;
LPVOID p1, p2;
int bufsize;
dsound *s = ds->s;
DSoundConf *conf = &s->conf;
if (!dsb) {
dolog ("Attempt to run empty with playback buffer\n");
@@ -500,14 +600,14 @@ static int dsound_run_out (HWVoiceOut *hw, int live)
len = live << hwshift;
if (ds->first_time) {
if (conf->latency_millis) {
if (conf.latency_millis) {
DWORD cur_blat;
cur_blat = audio_ring_dist (wpos, ppos, bufsize);
ds->first_time = 0;
old_pos = wpos;
old_pos +=
millis_to_bytes (&hw->info, conf->latency_millis) - cur_blat;
millis_to_bytes (&hw->info, conf.latency_millis) - cur_blat;
old_pos %= bufsize;
old_pos &= ~hw->info.align;
}
@@ -563,8 +663,7 @@ static int dsound_run_out (HWVoiceOut *hw, int live)
len,
&p1, &p2,
&blen1, &blen2,
0,
s
0
);
if (err) {
return 0;
@@ -667,7 +766,6 @@ static int dsound_run_in (HWVoiceIn *hw)
DWORD cpos, rpos;
LPVOID p1, p2;
int hwshift;
dsound *s = ds->s;
if (!dscb) {
dolog ("Attempt to run without capture buffer\n");
@@ -722,8 +820,7 @@ static int dsound_run_in (HWVoiceIn *hw)
&p2,
&blen1,
&blen2,
0,
s
0
);
if (err) {
return 0;
@@ -746,19 +843,12 @@ static int dsound_run_in (HWVoiceIn *hw)
return decr;
}
static DSoundConf glob_conf = {
.bufsize_in = 16384,
.bufsize_out = 16384,
.latency_millis = 10
};
static void dsound_audio_fini (void *opaque)
{
HRESULT hr;
dsound *s = opaque;
if (!s->dsound) {
g_free(s);
return;
}
@@ -769,7 +859,6 @@ static void dsound_audio_fini (void *opaque)
s->dsound = NULL;
if (!s->dsound_capture) {
g_free(s);
return;
}
@@ -778,21 +867,17 @@ static void dsound_audio_fini (void *opaque)
dsound_logerr (hr, "Could not release DirectSoundCapture\n");
}
s->dsound_capture = NULL;
g_free(s);
}
static void *dsound_audio_init (void)
{
int err;
HRESULT hr;
dsound *s = g_malloc0(sizeof(dsound));
dsound *s = &glob_dsound;
s->conf = glob_conf;
hr = CoInitialize (NULL);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not initialize COM\n");
g_free(s);
return NULL;
}
@@ -805,7 +890,6 @@ static void *dsound_audio_init (void)
);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not create DirectSound instance\n");
g_free(s);
return NULL;
}
@@ -817,7 +901,7 @@ static void *dsound_audio_init (void)
if (FAILED (hr)) {
dsound_logerr (hr, "Could not release DirectSound\n");
}
g_free(s);
s->dsound = NULL;
return NULL;
}
@@ -854,22 +938,64 @@ static void *dsound_audio_init (void)
}
static struct audio_option dsound_options[] = {
{
.name = "LOCK_RETRIES",
.tag = AUD_OPT_INT,
.valp = &conf.lock_retries,
.descr = "Number of times to attempt locking the buffer"
},
{
.name = "RESTOURE_RETRIES",
.tag = AUD_OPT_INT,
.valp = &conf.restore_retries,
.descr = "Number of times to attempt restoring the buffer"
},
{
.name = "GETSTATUS_RETRIES",
.tag = AUD_OPT_INT,
.valp = &conf.getstatus_retries,
.descr = "Number of times to attempt getting status of the buffer"
},
{
.name = "SET_PRIMARY",
.tag = AUD_OPT_BOOL,
.valp = &conf.set_primary,
.descr = "Set the parameters of primary buffer"
},
{
.name = "LATENCY_MILLIS",
.tag = AUD_OPT_INT,
.valp = &glob_conf.latency_millis,
.valp = &conf.latency_millis,
.descr = "(undocumented)"
},
{
.name = "PRIMARY_FREQ",
.tag = AUD_OPT_INT,
.valp = &conf.settings.freq,
.descr = "Primary buffer frequency"
},
{
.name = "PRIMARY_CHANNELS",
.tag = AUD_OPT_INT,
.valp = &conf.settings.nchannels,
.descr = "Primary buffer number of channels (1 - mono, 2 - stereo)"
},
{
.name = "PRIMARY_FMT",
.tag = AUD_OPT_FMT,
.valp = &conf.settings.fmt,
.descr = "Primary buffer format"
},
{
.name = "BUFSIZE_OUT",
.tag = AUD_OPT_INT,
.valp = &glob_conf.bufsize_out,
.valp = &conf.bufsize_out,
.descr = "(undocumented)"
},
{
.name = "BUFSIZE_IN",
.tag = AUD_OPT_INT,
.valp = &glob_conf.bufsize_in,
.valp = &conf.bufsize_in,
.descr = "(undocumented)"
},
{ /* End of list */ }

557 audio/esdaudio.c Normal file

@@ -0,0 +1,557 @@
/*
* QEMU ESD audio driver
*
* Copyright (c) 2006 Frederick Reeve (brushed up by malc)
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <esd.h>
#include "qemu-common.h"
#include "audio.h"
#define AUDIO_CAP "esd"
#include "audio_int.h"
#include "audio_pt_int.h"
typedef struct {
HWVoiceOut hw;
int done;
int live;
int decr;
int rpos;
void *pcm_buf;
int fd;
struct audio_pt pt;
} ESDVoiceOut;
typedef struct {
HWVoiceIn hw;
int done;
int dead;
int incr;
int wpos;
void *pcm_buf;
int fd;
struct audio_pt pt;
} ESDVoiceIn;
static struct {
int samples;
int divisor;
char *dac_host;
char *adc_host;
} conf = {
.samples = 1024,
.divisor = 2,
};
static void GCC_FMT_ATTR (2, 3) qesd_logerr (int err, const char *fmt, ...)
{
va_list ap;
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
AUD_log (AUDIO_CAP, "Reason: %s\n", strerror (err));
}
/* playback */
static void *qesd_thread_out (void *arg)
{
ESDVoiceOut *esd = arg;
HWVoiceOut *hw = &esd->hw;
int threshold;
threshold = conf.divisor ? hw->samples / conf.divisor : 0;
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
for (;;) {
int decr, to_mix, rpos;
for (;;) {
if (esd->done) {
goto exit;
}
if (esd->live > threshold) {
break;
}
if (audio_pt_wait (&esd->pt, AUDIO_FUNC)) {
goto exit;
}
}
decr = to_mix = esd->live;
rpos = hw->rpos;
if (audio_pt_unlock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
while (to_mix) {
ssize_t written;
int chunk = audio_MIN (to_mix, hw->samples - rpos);
struct st_sample *src = hw->mix_buf + rpos;
hw->clip (esd->pcm_buf, src, chunk);
again:
written = write (esd->fd, esd->pcm_buf, chunk << hw->info.shift);
if (written == -1) {
if (errno == EINTR || errno == EAGAIN) {
goto again;
}
qesd_logerr (errno, "write failed\n");
return NULL;
}
if (written != chunk << hw->info.shift) {
int wsamples = written >> hw->info.shift;
int wbytes = wsamples << hw->info.shift;
if (wbytes != written) {
dolog ("warning: Misaligned write %d (requested %zd), "
"alignment %d\n",
wbytes, written, hw->info.align + 1);
}
to_mix -= wsamples;
rpos = (rpos + wsamples) % hw->samples;
break;
}
rpos = (rpos + chunk) % hw->samples;
to_mix -= chunk;
}
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
esd->rpos = rpos;
esd->live -= decr;
esd->decr += decr;
}
exit:
audio_pt_unlock (&esd->pt, AUDIO_FUNC);
return NULL;
}
static int qesd_run_out (HWVoiceOut *hw, int live)
{
int decr;
ESDVoiceOut *esd = (ESDVoiceOut *) hw;
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return 0;
}
decr = audio_MIN (live, esd->decr);
esd->decr -= decr;
esd->live = live - decr;
hw->rpos = esd->rpos;
if (esd->live > 0) {
audio_pt_unlock_and_signal (&esd->pt, AUDIO_FUNC);
}
else {
audio_pt_unlock (&esd->pt, AUDIO_FUNC);
}
return decr;
}
static int qesd_write (SWVoiceOut *sw, void *buf, int len)
{
return audio_pcm_sw_write (sw, buf, len);
}
static int qesd_init_out (HWVoiceOut *hw, struct audsettings *as)
{
ESDVoiceOut *esd = (ESDVoiceOut *) hw;
struct audsettings obt_as = *as;
int esdfmt = ESD_STREAM | ESD_PLAY;
esdfmt |= (as->nchannels == 2) ? ESD_STEREO : ESD_MONO;
switch (as->fmt) {
case AUD_FMT_S8:
case AUD_FMT_U8:
esdfmt |= ESD_BITS8;
obt_as.fmt = AUD_FMT_U8;
break;
case AUD_FMT_S32:
case AUD_FMT_U32:
dolog ("Will use 16 instead of 32 bit samples\n");
/* fall through */
case AUD_FMT_S16:
case AUD_FMT_U16:
deffmt:
esdfmt |= ESD_BITS16;
obt_as.fmt = AUD_FMT_S16;
break;
default:
dolog ("Internal logic error: Bad audio format %d\n", as->fmt);
goto deffmt;
}
obt_as.endianness = AUDIO_HOST_ENDIANNESS;
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = conf.samples;
esd->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
if (!esd->pcm_buf) {
dolog ("Could not allocate buffer (%d bytes)\n",
hw->samples << hw->info.shift);
return -1;
}
esd->fd = esd_play_stream (esdfmt, as->freq, conf.dac_host, NULL);
if (esd->fd < 0) {
qesd_logerr (errno, "esd_play_stream failed\n");
goto fail1;
}
if (audio_pt_init (&esd->pt, qesd_thread_out, esd, AUDIO_CAP, AUDIO_FUNC)) {
goto fail2;
}
return 0;
fail2:
if (close (esd->fd)) {
qesd_logerr (errno, "%s: close on esd socket(%d) failed\n",
AUDIO_FUNC, esd->fd);
}
esd->fd = -1;
fail1:
g_free (esd->pcm_buf);
esd->pcm_buf = NULL;
return -1;
}
static void qesd_fini_out (HWVoiceOut *hw)
{
void *ret;
ESDVoiceOut *esd = (ESDVoiceOut *) hw;
audio_pt_lock (&esd->pt, AUDIO_FUNC);
esd->done = 1;
audio_pt_unlock_and_signal (&esd->pt, AUDIO_FUNC);
audio_pt_join (&esd->pt, &ret, AUDIO_FUNC);
if (esd->fd >= 0) {
if (close (esd->fd)) {
qesd_logerr (errno, "failed to close esd socket\n");
}
esd->fd = -1;
}
audio_pt_fini (&esd->pt, AUDIO_FUNC);
g_free (esd->pcm_buf);
esd->pcm_buf = NULL;
}
static int qesd_ctl_out (HWVoiceOut *hw, int cmd, ...)
{
(void) hw;
(void) cmd;
return 0;
}
/* capture */
static void *qesd_thread_in (void *arg)
{
ESDVoiceIn *esd = arg;
HWVoiceIn *hw = &esd->hw;
int threshold;
threshold = conf.divisor ? hw->samples / conf.divisor : 0;
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
for (;;) {
int incr, to_grab, wpos;
for (;;) {
if (esd->done) {
goto exit;
}
if (esd->dead > threshold) {
break;
}
if (audio_pt_wait (&esd->pt, AUDIO_FUNC)) {
goto exit;
}
}
incr = to_grab = esd->dead;
wpos = hw->wpos;
if (audio_pt_unlock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
while (to_grab) {
ssize_t nread;
int chunk = audio_MIN (to_grab, hw->samples - wpos);
void *buf = advance (esd->pcm_buf, wpos);
again:
nread = read (esd->fd, buf, chunk << hw->info.shift);
if (nread == -1) {
if (errno == EINTR || errno == EAGAIN) {
goto again;
}
qesd_logerr (errno, "read failed\n");
return NULL;
}
if (nread != chunk << hw->info.shift) {
int rsamples = nread >> hw->info.shift;
int rbytes = rsamples << hw->info.shift;
if (rbytes != nread) {
dolog ("warning: Misaligned write %d (requested %zd), "
"alignment %d\n",
rbytes, nread, hw->info.align + 1);
}
to_grab -= rsamples;
wpos = (wpos + rsamples) % hw->samples;
break;
}
hw->conv (hw->conv_buf + wpos, buf, nread >> hw->info.shift);
wpos = (wpos + chunk) % hw->samples;
to_grab -= chunk;
}
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
esd->wpos = wpos;
esd->dead -= incr;
esd->incr += incr;
}
exit:
audio_pt_unlock (&esd->pt, AUDIO_FUNC);
return NULL;
}
static int qesd_run_in (HWVoiceIn *hw)
{
int live, incr, dead;
ESDVoiceIn *esd = (ESDVoiceIn *) hw;
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return 0;
}
live = audio_pcm_hw_get_live_in (hw);
dead = hw->samples - live;
incr = audio_MIN (dead, esd->incr);
esd->incr -= incr;
esd->dead = dead - incr;
hw->wpos = esd->wpos;
if (esd->dead > 0) {
audio_pt_unlock_and_signal (&esd->pt, AUDIO_FUNC);
}
else {
audio_pt_unlock (&esd->pt, AUDIO_FUNC);
}
return incr;
}
static int qesd_read (SWVoiceIn *sw, void *buf, int len)
{
return audio_pcm_sw_read (sw, buf, len);
}
static int qesd_init_in (HWVoiceIn *hw, struct audsettings *as)
{
ESDVoiceIn *esd = (ESDVoiceIn *) hw;
struct audsettings obt_as = *as;
int esdfmt = ESD_STREAM | ESD_RECORD;
esdfmt |= (as->nchannels == 2) ? ESD_STEREO : ESD_MONO;
switch (as->fmt) {
case AUD_FMT_S8:
case AUD_FMT_U8:
esdfmt |= ESD_BITS8;
obt_as.fmt = AUD_FMT_U8;
break;
case AUD_FMT_S16:
case AUD_FMT_U16:
esdfmt |= ESD_BITS16;
obt_as.fmt = AUD_FMT_S16;
break;
case AUD_FMT_S32:
case AUD_FMT_U32:
dolog ("Will use 16 instead of 32 bit samples\n");
esdfmt |= ESD_BITS16;
obt_as.fmt = AUD_FMT_S16;
break;
}
obt_as.endianness = AUDIO_HOST_ENDIANNESS;
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = conf.samples;
esd->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
if (!esd->pcm_buf) {
dolog ("Could not allocate buffer (%d bytes)\n",
hw->samples << hw->info.shift);
return -1;
}
esd->fd = esd_record_stream (esdfmt, as->freq, conf.adc_host, NULL);
if (esd->fd < 0) {
qesd_logerr (errno, "esd_record_stream failed\n");
goto fail1;
}
if (audio_pt_init (&esd->pt, qesd_thread_in, esd, AUDIO_CAP, AUDIO_FUNC)) {
goto fail2;
}
return 0;
fail2:
if (close (esd->fd)) {
qesd_logerr (errno, "%s: close on esd socket(%d) failed\n",
AUDIO_FUNC, esd->fd);
}
esd->fd = -1;
fail1:
g_free (esd->pcm_buf);
esd->pcm_buf = NULL;
return -1;
}
static void qesd_fini_in (HWVoiceIn *hw)
{
void *ret;
ESDVoiceIn *esd = (ESDVoiceIn *) hw;
audio_pt_lock (&esd->pt, AUDIO_FUNC);
esd->done = 1;
audio_pt_unlock_and_signal (&esd->pt, AUDIO_FUNC);
audio_pt_join (&esd->pt, &ret, AUDIO_FUNC);
if (esd->fd >= 0) {
if (close (esd->fd)) {
qesd_logerr (errno, "failed to close esd socket\n");
}
esd->fd = -1;
}
audio_pt_fini (&esd->pt, AUDIO_FUNC);
g_free (esd->pcm_buf);
esd->pcm_buf = NULL;
}
static int qesd_ctl_in (HWVoiceIn *hw, int cmd, ...)
{
(void) hw;
(void) cmd;
return 0;
}
/* common */
static void *qesd_audio_init (void)
{
return &conf;
}
static void qesd_audio_fini (void *opaque)
{
(void) opaque;
ldebug ("esd_fini");
}
struct audio_option qesd_options[] = {
{
.name = "SAMPLES",
.tag = AUD_OPT_INT,
.valp = &conf.samples,
.descr = "buffer size in samples"
},
{
.name = "DIVISOR",
.tag = AUD_OPT_INT,
.valp = &conf.divisor,
.descr = "threshold divisor"
},
{
.name = "DAC_HOST",
.tag = AUD_OPT_STR,
.valp = &conf.dac_host,
.descr = "playback host"
},
{
.name = "ADC_HOST",
.tag = AUD_OPT_STR,
.valp = &conf.adc_host,
.descr = "capture host"
},
{ /* End of list */ }
};
static struct audio_pcm_ops qesd_pcm_ops = {
.init_out = qesd_init_out,
.fini_out = qesd_fini_out,
.run_out = qesd_run_out,
.write = qesd_write,
.ctl_out = qesd_ctl_out,
.init_in = qesd_init_in,
.fini_in = qesd_fini_in,
.run_in = qesd_run_in,
.read = qesd_read,
.ctl_in = qesd_ctl_in,
};
struct audio_driver esd_audio_driver = {
.name = "esd",
.descr = "http://en.wikipedia.org/wiki/Esound",
.options = qesd_options,
.init = qesd_audio_init,
.fini = qesd_audio_fini,
.pcm_ops = &qesd_pcm_ops,
.can_be_default = 0,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (ESDVoiceOut),
.voice_size_in = sizeof (ESDVoiceIn)
};
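The four ESD options above are read from the environment at startup. A minimal usage sketch, assuming the usual QEMU_<DRIVER>_<OPTION> variable mapping; the host name, sample count and disk image are illustrative:

    # pick the ESD backend and point it at a local esound daemon (illustrative values)
    QEMU_AUDIO_DRV=esd QEMU_ESD_SAMPLES=1024 QEMU_ESD_DAC_HOST=localhost \
        qemu-system-x86_64 -soundhw ac97 disk.img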

audio/fmodaudio.c (new file, 685 lines)

@@ -0,0 +1,685 @@
/*
* QEMU FMOD audio driver
*
* Copyright (c) 2004-2005 Vassili Karpov (malc)
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <fmod.h>
#include <fmod_errors.h>
#include "qemu-common.h"
#include "audio.h"
#define AUDIO_CAP "fmod"
#include "audio_int.h"
typedef struct FMODVoiceOut {
HWVoiceOut hw;
unsigned int old_pos;
FSOUND_SAMPLE *fmod_sample;
int channel;
} FMODVoiceOut;
typedef struct FMODVoiceIn {
HWVoiceIn hw;
FSOUND_SAMPLE *fmod_sample;
} FMODVoiceIn;
static struct {
const char *drvname;
int nb_samples;
int freq;
int nb_channels;
int bufsize;
int broken_adc;
} conf = {
.nb_samples = 2048 * 2,
.freq = 44100,
.nb_channels = 2,
};
static void GCC_FMT_ATTR (1, 2) fmod_logerr (const char *fmt, ...)
{
va_list ap;
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
AUD_log (AUDIO_CAP, "Reason: %s\n",
FMOD_ErrorString (FSOUND_GetError ()));
}
static void GCC_FMT_ATTR (2, 3) fmod_logerr2 (
const char *typ,
const char *fmt,
...
)
{
va_list ap;
AUD_log (AUDIO_CAP, "Could not initialize %s\n", typ);
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
AUD_log (AUDIO_CAP, "Reason: %s\n",
FMOD_ErrorString (FSOUND_GetError ()));
}
static int fmod_write (SWVoiceOut *sw, void *buf, int len)
{
return audio_pcm_sw_write (sw, buf, len);
}
static void fmod_clear_sample (FMODVoiceOut *fmd)
{
HWVoiceOut *hw = &fmd->hw;
int status;
void *p1 = 0, *p2 = 0;
unsigned int len1 = 0, len2 = 0;
status = FSOUND_Sample_Lock (
fmd->fmod_sample,
0,
hw->samples << hw->info.shift,
&p1,
&p2,
&len1,
&len2
);
if (!status) {
fmod_logerr ("Failed to lock sample\n");
return;
}
if ((len1 & hw->info.align) || (len2 & hw->info.align)) {
dolog ("Lock returned misaligned length %d, %d, alignment %d\n",
len1, len2, hw->info.align + 1);
goto fail;
}
if ((len1 + len2) - (hw->samples << hw->info.shift)) {
dolog ("Lock returned incomplete length %d, %d\n",
len1 + len2, hw->samples << hw->info.shift);
goto fail;
}
audio_pcm_info_clear_buf (&hw->info, p1, hw->samples);
fail:
status = FSOUND_Sample_Unlock (fmd->fmod_sample, p1, p2, len1, len2);
if (!status) {
fmod_logerr ("Failed to unlock sample\n");
}
}
static void fmod_write_sample (HWVoiceOut *hw, uint8_t *dst, int dst_len)
{
int src_len1 = dst_len;
int src_len2 = 0;
int pos = hw->rpos + dst_len;
struct st_sample *src1 = hw->mix_buf + hw->rpos;
struct st_sample *src2 = NULL;
if (pos > hw->samples) {
src_len1 = hw->samples - hw->rpos;
src2 = hw->mix_buf;
src_len2 = dst_len - src_len1;
pos = src_len2;
}
if (src_len1) {
hw->clip (dst, src1, src_len1);
}
if (src_len2) {
dst = advance (dst, src_len1 << hw->info.shift);
hw->clip (dst, src2, src_len2);
}
hw->rpos = pos % hw->samples;
}
static int fmod_unlock_sample (FSOUND_SAMPLE *sample, void *p1, void *p2,
unsigned int blen1, unsigned int blen2)
{
int status = FSOUND_Sample_Unlock (sample, p1, p2, blen1, blen2);
if (!status) {
fmod_logerr ("Failed to unlock sample\n");
return -1;
}
return 0;
}
static int fmod_lock_sample (
FSOUND_SAMPLE *sample,
struct audio_pcm_info *info,
int pos,
int len,
void **p1,
void **p2,
unsigned int *blen1,
unsigned int *blen2
)
{
int status;
status = FSOUND_Sample_Lock (
sample,
pos << info->shift,
len << info->shift,
p1,
p2,
blen1,
blen2
);
if (!status) {
fmod_logerr ("Failed to lock sample\n");
return -1;
}
if ((*blen1 & info->align) || (*blen2 & info->align)) {
dolog ("Lock returned misaligned length %d, %d, alignment %d\n",
*blen1, *blen2, info->align + 1);
fmod_unlock_sample (sample, *p1, *p2, *blen1, *blen2);
*p1 = NULL - 1;
*p2 = NULL - 1;
*blen1 = ~0U;
*blen2 = ~0U;
return -1;
}
if (!*p1 && *blen1) {
dolog ("warning: !p1 && blen1=%d\n", *blen1);
*blen1 = 0;
}
if (!p2 && *blen2) {
dolog ("warning: !p2 && blen2=%d\n", *blen2);
*blen2 = 0;
}
return 0;
}
static int fmod_run_out (HWVoiceOut *hw, int live)
{
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
int decr;
void *p1 = 0, *p2 = 0;
unsigned int blen1 = 0, blen2 = 0;
unsigned int len1 = 0, len2 = 0;
if (!hw->pending_disable) {
return 0;
}
decr = live;
if (fmd->channel >= 0) {
int len = decr;
int old_pos = fmd->old_pos;
int ppos = FSOUND_GetCurrentPosition (fmd->channel);
if (ppos == old_pos || !ppos) {
return 0;
}
if ((old_pos < ppos) && ((old_pos + len) > ppos)) {
len = ppos - old_pos;
}
else {
if ((old_pos > ppos) && ((old_pos + len) > (ppos + hw->samples))) {
len = hw->samples - old_pos + ppos;
}
}
decr = len;
if (audio_bug (AUDIO_FUNC, decr < 0)) {
dolog ("decr=%d live=%d ppos=%d old_pos=%d len=%d\n",
decr, live, ppos, old_pos, len);
return 0;
}
}
if (!decr) {
return 0;
}
if (fmod_lock_sample (fmd->fmod_sample, &fmd->hw.info,
fmd->old_pos, decr,
&p1, &p2,
&blen1, &blen2)) {
return 0;
}
len1 = blen1 >> hw->info.shift;
len2 = blen2 >> hw->info.shift;
ldebug ("%p %p %d %d %d %d\n", p1, p2, len1, len2, blen1, blen2);
decr = len1 + len2;
if (p1 && len1) {
fmod_write_sample (hw, p1, len1);
}
if (p2 && len2) {
fmod_write_sample (hw, p2, len2);
}
fmod_unlock_sample (fmd->fmod_sample, p1, p2, blen1, blen2);
fmd->old_pos = (fmd->old_pos + decr) % hw->samples;
return decr;
}
static int aud_to_fmodfmt (audfmt_e fmt, int stereo)
{
int mode = FSOUND_LOOP_NORMAL;
switch (fmt) {
case AUD_FMT_S8:
mode |= FSOUND_SIGNED | FSOUND_8BITS;
break;
case AUD_FMT_U8:
mode |= FSOUND_UNSIGNED | FSOUND_8BITS;
break;
case AUD_FMT_S16:
mode |= FSOUND_SIGNED | FSOUND_16BITS;
break;
case AUD_FMT_U16:
mode |= FSOUND_UNSIGNED | FSOUND_16BITS;
break;
default:
dolog ("Internal logic error: Bad audio format %d\n", fmt);
#ifdef DEBUG_FMOD
abort ();
#endif
mode |= FSOUND_8BITS;
}
mode |= stereo ? FSOUND_STEREO : FSOUND_MONO;
return mode;
}
static void fmod_fini_out (HWVoiceOut *hw)
{
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
if (fmd->fmod_sample) {
FSOUND_Sample_Free (fmd->fmod_sample);
fmd->fmod_sample = 0;
if (fmd->channel >= 0) {
FSOUND_StopSound (fmd->channel);
}
}
}
static int fmod_init_out (HWVoiceOut *hw, struct audsettings *as)
{
int mode, channel;
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
struct audsettings obt_as = *as;
mode = aud_to_fmodfmt (as->fmt, as->nchannels == 2 ? 1 : 0);
fmd->fmod_sample = FSOUND_Sample_Alloc (
FSOUND_FREE, /* index */
conf.nb_samples, /* length */
mode, /* mode */
as->freq, /* freq */
255, /* volume */
128, /* pan */
255 /* priority */
);
if (!fmd->fmod_sample) {
fmod_logerr2 ("DAC", "Failed to allocate FMOD sample\n");
return -1;
}
channel = FSOUND_PlaySoundEx (FSOUND_FREE, fmd->fmod_sample, 0, 1);
if (channel < 0) {
fmod_logerr2 ("DAC", "Failed to start playing sound\n");
FSOUND_Sample_Free (fmd->fmod_sample);
return -1;
}
fmd->channel = channel;
/* FMOD always operates on little endian frames? */
obt_as.endianness = 0;
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = conf.nb_samples;
return 0;
}
static int fmod_ctl_out (HWVoiceOut *hw, int cmd, ...)
{
int status;
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
switch (cmd) {
case VOICE_ENABLE:
fmod_clear_sample (fmd);
status = FSOUND_SetPaused (fmd->channel, 0);
if (!status) {
fmod_logerr ("Failed to resume channel %d\n", fmd->channel);
}
break;
case VOICE_DISABLE:
status = FSOUND_SetPaused (fmd->channel, 1);
if (!status) {
fmod_logerr ("Failed to pause channel %d\n", fmd->channel);
}
break;
}
return 0;
}
static int fmod_init_in (HWVoiceIn *hw, struct audsettings *as)
{
int mode;
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
struct audsettings obt_as = *as;
if (conf.broken_adc) {
return -1;
}
mode = aud_to_fmodfmt (as->fmt, as->nchannels == 2 ? 1 : 0);
fmd->fmod_sample = FSOUND_Sample_Alloc (
FSOUND_FREE, /* index */
conf.nb_samples, /* length */
mode, /* mode */
as->freq, /* freq */
255, /* volume */
128, /* pan */
255 /* priority */
);
if (!fmd->fmod_sample) {
fmod_logerr2 ("ADC", "Failed to allocate FMOD sample\n");
return -1;
}
/* FMOD always operates on little endian frames? */
obt_as.endianness = 0;
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = conf.nb_samples;
return 0;
}
static void fmod_fini_in (HWVoiceIn *hw)
{
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
if (fmd->fmod_sample) {
FSOUND_Record_Stop ();
FSOUND_Sample_Free (fmd->fmod_sample);
fmd->fmod_sample = 0;
}
}
static int fmod_run_in (HWVoiceIn *hw)
{
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
int hwshift = hw->info.shift;
int live, dead, new_pos, len;
unsigned int blen1 = 0, blen2 = 0;
unsigned int len1, len2;
unsigned int decr;
void *p1, *p2;
live = audio_pcm_hw_get_live_in (hw);
dead = hw->samples - live;
if (!dead) {
return 0;
}
new_pos = FSOUND_Record_GetPosition ();
if (new_pos < 0) {
fmod_logerr ("Could not get recording position\n");
return 0;
}
len = audio_ring_dist (new_pos, hw->wpos, hw->samples);
if (!len) {
return 0;
}
len = audio_MIN (len, dead);
if (fmod_lock_sample (fmd->fmod_sample, &fmd->hw.info,
hw->wpos, len,
&p1, &p2,
&blen1, &blen2)) {
return 0;
}
len1 = blen1 >> hwshift;
len2 = blen2 >> hwshift;
decr = len1 + len2;
if (p1 && blen1) {
hw->conv (hw->conv_buf + hw->wpos, p1, len1);
}
if (p2 && len2) {
hw->conv (hw->conv_buf, p2, len2);
}
fmod_unlock_sample (fmd->fmod_sample, p1, p2, blen1, blen2);
hw->wpos = (hw->wpos + decr) % hw->samples;
return decr;
}
static struct {
const char *name;
int type;
} drvtab[] = {
{ .name = "none", .type = FSOUND_OUTPUT_NOSOUND },
#ifdef _WIN32
{ .name = "winmm", .type = FSOUND_OUTPUT_WINMM },
{ .name = "dsound", .type = FSOUND_OUTPUT_DSOUND },
{ .name = "a3d", .type = FSOUND_OUTPUT_A3D },
{ .name = "asio", .type = FSOUND_OUTPUT_ASIO },
#endif
#ifdef __linux__
{ .name = "oss", .type = FSOUND_OUTPUT_OSS },
{ .name = "alsa", .type = FSOUND_OUTPUT_ALSA },
{ .name = "esd", .type = FSOUND_OUTPUT_ESD },
#endif
#ifdef __APPLE__
{ .name = "mac", .type = FSOUND_OUTPUT_MAC },
#endif
#if 0
{ .name = "xbox", .type = FSOUND_OUTPUT_XBOX },
{ .name = "ps2", .type = FSOUND_OUTPUT_PS2 },
{ .name = "gcube", .type = FSOUND_OUTPUT_GC },
#endif
{ .name = "none-realtime", .type = FSOUND_OUTPUT_NOSOUND_NONREALTIME }
};
static void *fmod_audio_init (void)
{
size_t i;
double ver;
int status;
int output_type = -1;
const char *drv = conf.drvname;
ver = FSOUND_GetVersion ();
if (ver < FMOD_VERSION) {
dolog ("Wrong FMOD version %f, need at least %f\n", ver, FMOD_VERSION);
return NULL;
}
#ifdef __linux__
if (ver < 3.75) {
dolog ("FMOD before 3.75 has bug preventing ADC from working\n"
"ADC will be disabled.\n");
conf.broken_adc = 1;
}
#endif
if (drv) {
int found = 0;
for (i = 0; i < ARRAY_SIZE (drvtab); i++) {
if (!strcmp (drv, drvtab[i].name)) {
output_type = drvtab[i].type;
found = 1;
break;
}
}
if (!found) {
dolog ("Unknown FMOD driver `%s'\n", drv);
dolog ("Valid drivers:\n");
for (i = 0; i < ARRAY_SIZE (drvtab); i++) {
dolog (" %s\n", drvtab[i].name);
}
}
}
if (output_type != -1) {
status = FSOUND_SetOutput (output_type);
if (!status) {
fmod_logerr ("FSOUND_SetOutput(%d) failed\n", output_type);
return NULL;
}
}
if (conf.bufsize) {
status = FSOUND_SetBufferSize (conf.bufsize);
if (!status) {
fmod_logerr ("FSOUND_SetBufferSize (%d) failed\n", conf.bufsize);
}
}
status = FSOUND_Init (conf.freq, conf.nb_channels, 0);
if (!status) {
fmod_logerr ("FSOUND_Init failed\n");
return NULL;
}
return &conf;
}
static int fmod_read (SWVoiceIn *sw, void *buf, int size)
{
return audio_pcm_sw_read (sw, buf, size);
}
static int fmod_ctl_in (HWVoiceIn *hw, int cmd, ...)
{
int status;
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
switch (cmd) {
case VOICE_ENABLE:
status = FSOUND_Record_StartSample (fmd->fmod_sample, 1);
if (!status) {
fmod_logerr ("Failed to start recording\n");
}
break;
case VOICE_DISABLE:
status = FSOUND_Record_Stop ();
if (!status) {
fmod_logerr ("Failed to stop recording\n");
}
break;
}
return 0;
}
static void fmod_audio_fini (void *opaque)
{
(void) opaque;
FSOUND_Close ();
}
static struct audio_option fmod_options[] = {
{
.name = "DRV",
.tag = AUD_OPT_STR,
.valp = &conf.drvname,
.descr = "FMOD driver"
},
{
.name = "FREQ",
.tag = AUD_OPT_INT,
.valp = &conf.freq,
.descr = "Default frequency"
},
{
.name = "SAMPLES",
.tag = AUD_OPT_INT,
.valp = &conf.nb_samples,
.descr = "Buffer size in samples"
},
{
.name = "CHANNELS",
.tag = AUD_OPT_INT,
.valp = &conf.nb_channels,
.descr = "Number of default channels (1 - mono, 2 - stereo)"
},
{
.name = "BUFSIZE",
.tag = AUD_OPT_INT,
.valp = &conf.bufsize,
.descr = "(undocumented)"
},
{ /* End of list */ }
};
static struct audio_pcm_ops fmod_pcm_ops = {
.init_out = fmod_init_out,
.fini_out = fmod_fini_out,
.run_out = fmod_run_out,
.write = fmod_write,
.ctl_out = fmod_ctl_out,
.init_in = fmod_init_in,
.fini_in = fmod_fini_in,
.run_in = fmod_run_in,
.read = fmod_read,
.ctl_in = fmod_ctl_in
};
struct audio_driver fmod_audio_driver = {
.name = "fmod",
.descr = "FMOD 3.xx http://www.fmod.org",
.options = fmod_options,
.init = fmod_audio_init,
.fini = fmod_audio_fini,
.pcm_ops = &fmod_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (FMODVoiceOut),
.voice_size_in = sizeof (FMODVoiceIn)
};
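The FMOD backend in turn hands playback to one of the host drivers listed in drvtab, selected through its DRV option. A hedged sketch, assuming the QEMU_FMOD_* variable names follow the same mapping; driver choice and frequency are illustrative:

    # illustrative: use the FMOD backend and ask FMOD for its ALSA output at 44.1 kHz
    QEMU_AUDIO_DRV=fmod QEMU_FMOD_DRV=alsa QEMU_FMOD_FREQ=44100 \
        qemu-system-x86_64 -soundhw es1370 disk.img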

audio/noaudio.c

@@ -63,7 +63,7 @@ static int no_write (SWVoiceOut *sw, void *buf, int len)
return audio_pcm_sw_write (sw, buf, len);
}
static int no_init_out(HWVoiceOut *hw, struct audsettings *as, void *drv_opaque)
static int no_init_out (HWVoiceOut *hw, struct audsettings *as)
{
audio_pcm_init_info (&hw->info, as);
hw->samples = 1024;
@@ -82,7 +82,7 @@ static int no_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static int no_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int no_init_in (HWVoiceIn *hw, struct audsettings *as)
{
audio_pcm_init_info (&hw->info, as);
hw->samples = 1024;

audio/ossaudio.c

@@ -30,7 +30,6 @@
#include "qemu/main-loop.h"
#include "qemu/host-utils.h"
#include "audio.h"
#include "trace.h"
#define AUDIO_CAP "oss"
#include "audio_int.h"
@@ -39,16 +38,6 @@
#define USE_DSP_POLICY
#endif
typedef struct OSSConf {
int try_mmap;
int nfrags;
int fragsize;
const char *devpath_out;
const char *devpath_in;
int exclusive;
int policy;
} OSSConf;
typedef struct OSSVoiceOut {
HWVoiceOut hw;
void *pcm_buf;
@@ -58,7 +47,6 @@ typedef struct OSSVoiceOut {
int fragsize;
int mmapped;
int pending;
OSSConf *conf;
} OSSVoiceOut;
typedef struct OSSVoiceIn {
@@ -67,9 +55,28 @@ typedef struct OSSVoiceIn {
int fd;
int nfrags;
int fragsize;
OSSConf *conf;
} OSSVoiceIn;
static struct {
int try_mmap;
int nfrags;
int fragsize;
const char *devpath_out;
const char *devpath_in;
int debug;
int exclusive;
int policy;
} conf = {
.try_mmap = 0,
.nfrags = 4,
.fragsize = 4096,
.devpath_out = "/dev/dsp",
.devpath_in = "/dev/dsp",
.debug = 0,
.exclusive = 0,
.policy = 5
};
struct oss_params {
int freq;
audfmt_e fmt;
@@ -131,18 +138,18 @@ static void oss_helper_poll_in (void *opaque)
audio_run ("oss_poll_in");
}
static void oss_poll_out (HWVoiceOut *hw)
static int oss_poll_out (HWVoiceOut *hw)
{
OSSVoiceOut *oss = (OSSVoiceOut *) hw;
qemu_set_fd_handler (oss->fd, NULL, oss_helper_poll_out, NULL);
return qemu_set_fd_handler (oss->fd, NULL, oss_helper_poll_out, NULL);
}
static void oss_poll_in (HWVoiceIn *hw)
static int oss_poll_in (HWVoiceIn *hw)
{
OSSVoiceIn *oss = (OSSVoiceIn *) hw;
qemu_set_fd_handler (oss->fd, oss_helper_poll_in, NULL, NULL);
return qemu_set_fd_handler (oss->fd, oss_helper_poll_in, NULL, NULL);
}
static int oss_write (SWVoiceOut *sw, void *buf, int len)
@@ -265,18 +272,18 @@ static int oss_get_version (int fd, int *version, const char *typ)
#endif
static int oss_open (int in, struct oss_params *req,
struct oss_params *obt, int *pfd, OSSConf* conf)
struct oss_params *obt, int *pfd)
{
int fd;
int oflags = conf->exclusive ? O_EXCL : 0;
int oflags = conf.exclusive ? O_EXCL : 0;
audio_buf_info abinfo;
int fmt, freq, nchannels;
int setfragment = 1;
const char *dspname = in ? conf->devpath_in : conf->devpath_out;
const char *dspname = in ? conf.devpath_in : conf.devpath_out;
const char *typ = in ? "ADC" : "DAC";
/* Kludge needed to have working mmap on Linux */
oflags |= conf->try_mmap ? O_RDWR : (in ? O_RDONLY : O_WRONLY);
oflags |= conf.try_mmap ? O_RDWR : (in ? O_RDONLY : O_WRONLY);
fd = open (dspname, oflags | O_NONBLOCK);
if (-1 == fd) {
@@ -310,18 +317,20 @@ static int oss_open (int in, struct oss_params *req,
}
#ifdef USE_DSP_POLICY
if (conf->policy >= 0) {
if (conf.policy >= 0) {
int version;
if (!oss_get_version (fd, &version, typ)) {
trace_oss_version(version);
if (conf.debug) {
dolog ("OSS version = %#x\n", version);
}
if (version >= 0x040000) {
int policy = conf->policy;
int policy = conf.policy;
if (ioctl (fd, SNDCTL_DSP_POLICY, &policy)) {
oss_logerr2 (errno, typ,
"Failed to set timing policy to %d\n",
conf->policy);
conf.policy);
goto err;
}
setfragment = 0;
@@ -449,12 +458,19 @@ static int oss_run_out (HWVoiceOut *hw, int live)
}
if (abinfo.bytes > bufsize) {
trace_oss_invalid_available_size(abinfo.bytes, bufsize);
if (conf.debug) {
dolog ("warning: Invalid available size, size=%d bufsize=%d\n"
"please report your OS/audio hw to av1474@comtv.ru\n",
abinfo.bytes, bufsize);
}
abinfo.bytes = bufsize;
}
if (abinfo.bytes < 0) {
trace_oss_invalid_available_size(abinfo.bytes, bufsize);
if (conf.debug) {
dolog ("warning: Invalid available size, size=%d bufsize=%d\n",
abinfo.bytes, bufsize);
}
return 0;
}
@@ -494,8 +510,7 @@ static void oss_fini_out (HWVoiceOut *hw)
}
}
static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int oss_init_out (HWVoiceOut *hw, struct audsettings *as)
{
OSSVoiceOut *oss = (OSSVoiceOut *) hw;
struct oss_params req, obt;
@@ -504,17 +519,16 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
int fd;
audfmt_e effective_fmt;
struct audsettings obt_as;
OSSConf *conf = drv_opaque;
oss->fd = -1;
req.fmt = aud_to_ossfmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.fragsize = conf->fragsize;
req.nfrags = conf->nfrags;
req.fragsize = conf.fragsize;
req.nfrags = conf.nfrags;
if (oss_open (0, &req, &obt, &fd, conf)) {
if (oss_open (0, &req, &obt, &fd)) {
return -1;
}
@@ -541,7 +555,7 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
hw->samples = (obt.nfrags * obt.fragsize) >> hw->info.shift;
oss->mmapped = 0;
if (conf->try_mmap) {
if (conf.try_mmap) {
oss->pcm_buf = mmap (
NULL,
hw->samples << hw->info.shift,
@@ -601,7 +615,6 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
}
oss->fd = fd;
oss->conf = conf;
return 0;
}
@@ -621,8 +634,7 @@ static int oss_ctl_out (HWVoiceOut *hw, int cmd, ...)
va_end (ap);
ldebug ("enabling voice\n");
if (poll_mode) {
oss_poll_out (hw);
if (poll_mode && oss_poll_out (hw)) {
poll_mode = 0;
}
hw->poll_mode = poll_mode;
@@ -664,7 +676,7 @@ static int oss_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static int oss_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int oss_init_in (HWVoiceIn *hw, struct audsettings *as)
{
OSSVoiceIn *oss = (OSSVoiceIn *) hw;
struct oss_params req, obt;
@@ -673,16 +685,15 @@ static int oss_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
int fd;
audfmt_e effective_fmt;
struct audsettings obt_as;
OSSConf *conf = drv_opaque;
oss->fd = -1;
req.fmt = aud_to_ossfmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.fragsize = conf->fragsize;
req.nfrags = conf->nfrags;
if (oss_open (1, &req, &obt, &fd, conf)) {
req.fragsize = conf.fragsize;
req.nfrags = conf.nfrags;
if (oss_open (1, &req, &obt, &fd)) {
return -1;
}
@@ -716,7 +727,6 @@ static int oss_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
}
oss->fd = fd;
oss->conf = conf;
return 0;
}
@@ -818,8 +828,7 @@ static int oss_ctl_in (HWVoiceIn *hw, int cmd, ...)
poll_mode = va_arg (ap, int);
va_end (ap);
if (poll_mode) {
oss_poll_in (hw);
if (poll_mode && oss_poll_in (hw)) {
poll_mode = 0;
}
hw->poll_mode = poll_mode;
@@ -836,79 +845,71 @@ static int oss_ctl_in (HWVoiceIn *hw, int cmd, ...)
return 0;
}
static OSSConf glob_conf = {
.try_mmap = 0,
.nfrags = 4,
.fragsize = 4096,
.devpath_out = "/dev/dsp",
.devpath_in = "/dev/dsp",
.exclusive = 0,
.policy = 5
};
static void *oss_audio_init (void)
{
OSSConf *conf = g_malloc(sizeof(OSSConf));
*conf = glob_conf;
if (access(conf->devpath_in, R_OK | W_OK) < 0 ||
access(conf->devpath_out, R_OK | W_OK) < 0) {
g_free(conf);
if (access(conf.devpath_in, R_OK | W_OK) < 0 ||
access(conf.devpath_out, R_OK | W_OK) < 0) {
return NULL;
}
return conf;
return &conf;
}
static void oss_audio_fini (void *opaque)
{
g_free(opaque);
(void) opaque;
}
static struct audio_option oss_options[] = {
{
.name = "FRAGSIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.fragsize,
.valp = &conf.fragsize,
.descr = "Fragment size in bytes"
},
{
.name = "NFRAGS",
.tag = AUD_OPT_INT,
.valp = &glob_conf.nfrags,
.valp = &conf.nfrags,
.descr = "Number of fragments"
},
{
.name = "MMAP",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.try_mmap,
.valp = &conf.try_mmap,
.descr = "Try using memory mapped access"
},
{
.name = "DAC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.devpath_out,
.valp = &conf.devpath_out,
.descr = "Path to DAC device"
},
{
.name = "ADC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.devpath_in,
.valp = &conf.devpath_in,
.descr = "Path to ADC device"
},
{
.name = "EXCLUSIVE",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.exclusive,
.valp = &conf.exclusive,
.descr = "Open device in exclusive mode (vmix wont work)"
},
#ifdef USE_DSP_POLICY
{
.name = "POLICY",
.tag = AUD_OPT_INT,
.valp = &glob_conf.policy,
.valp = &conf.policy,
.descr = "Set the timing policy of the device, -1 to use fragment mode",
},
#endif
{
.name = "DEBUG",
.tag = AUD_OPT_BOOL,
.valp = &conf.debug,
.descr = "Turn on some debugging messages"
},
{ /* End of list */ }
};
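With the per-driver opaque gone, the OSS settings again live in the file-scope conf and come from the environment. A sketch, assuming the documented QEMU_OSS_* variables; the device path and fragment size are illustrative:

    # illustrative: second DSP device, no mmap, 4 KiB fragments
    QEMU_AUDIO_DRV=oss QEMU_OSS_DAC_DEV=/dev/dsp1 QEMU_OSS_MMAP=0 QEMU_OSS_FRAGSIZE=4096 \
        qemu-system-x86_64 -soundhw ac97 disk.img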

audio/paaudio.c

@@ -8,19 +8,6 @@
#include "audio_int.h"
#include "audio_pt_int.h"
typedef struct {
int samples;
char *server;
char *sink;
char *source;
} PAConf;
typedef struct {
PAConf conf;
pa_threaded_mainloop *mainloop;
pa_context *context;
} paaudio;
typedef struct {
HWVoiceOut hw;
int done;
@@ -30,7 +17,6 @@ typedef struct {
pa_stream *stream;
void *pcm_buf;
struct audio_pt pt;
paaudio *g;
} PAVoiceOut;
typedef struct {
@@ -44,10 +30,20 @@ typedef struct {
struct audio_pt pt;
const void *read_data;
size_t read_index, read_length;
paaudio *g;
} PAVoiceIn;
static void qpa_audio_fini(void *opaque);
typedef struct {
int samples;
char *server;
char *sink;
char *source;
pa_threaded_mainloop *mainloop;
pa_context *context;
} paaudio;
static paaudio glob_paaudio = {
.samples = 4096,
};
static void GCC_FMT_ATTR (2, 3) qpa_logerr (int err, const char *fmt, ...)
{
@@ -110,7 +106,7 @@ static inline int PA_STREAM_IS_GOOD(pa_stream_state_t x)
static int qpa_simple_read (PAVoiceIn *p, void *data, size_t length, int *rerror)
{
paaudio *g = p->g;
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_lock (g->mainloop);
@@ -164,7 +160,7 @@ unlock_and_fail:
static int qpa_simple_write (PAVoiceOut *p, const void *data, size_t length, int *rerror)
{
paaudio *g = p->g;
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_lock (g->mainloop);
@@ -226,7 +222,7 @@ static void *qpa_thread_out (void *arg)
}
}
decr = to_mix = audio_MIN (pa->live, pa->g->conf.samples >> 2);
decr = to_mix = audio_MIN (pa->live, glob_paaudio.samples >> 2);
rpos = pa->rpos;
if (audio_pt_unlock (&pa->pt, AUDIO_FUNC)) {
@@ -318,7 +314,7 @@ static void *qpa_thread_in (void *arg)
}
}
incr = to_grab = audio_MIN (pa->dead, pa->g->conf.samples >> 2);
incr = to_grab = audio_MIN (pa->dead, glob_paaudio.samples >> 2);
wpos = pa->wpos;
if (audio_pt_unlock (&pa->pt, AUDIO_FUNC)) {
@@ -434,7 +430,7 @@ static audfmt_e pa_to_audfmt (pa_sample_format_t fmt, int *endianness)
static void context_state_cb (pa_context *c, void *userdata)
{
paaudio *g = userdata;
paaudio *g = &glob_paaudio;
switch (pa_context_get_state(c)) {
case PA_CONTEXT_READY:
@@ -453,7 +449,7 @@ static void context_state_cb (pa_context *c, void *userdata)
static void stream_state_cb (pa_stream *s, void * userdata)
{
paaudio *g = userdata;
paaudio *g = &glob_paaudio;
switch (pa_stream_get_state (s)) {
@@ -471,21 +467,23 @@ static void stream_state_cb (pa_stream *s, void * userdata)
static void stream_request_cb (pa_stream *s, size_t length, void *userdata)
{
paaudio *g = userdata;
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_signal (g->mainloop, 0);
}
static pa_stream *qpa_simple_new (
paaudio *g,
const char *server,
const char *name,
pa_stream_direction_t dir,
const char *dev,
const char *stream_name,
const pa_sample_spec *ss,
const pa_channel_map *map,
const pa_buffer_attr *attr,
int *rerror)
{
paaudio *g = &glob_paaudio;
int r;
pa_stream *stream;
@@ -536,15 +534,13 @@ fail:
return NULL;
}
static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int qpa_init_out (HWVoiceOut *hw, struct audsettings *as)
{
int error;
pa_sample_spec ss;
pa_buffer_attr ba;
static pa_sample_spec ss;
static pa_buffer_attr ba;
struct audsettings obt_as = *as;
PAVoiceOut *pa = (PAVoiceOut *) hw;
paaudio *g = pa->g = drv_opaque;
ss.format = audfmt_to_pa (as->fmt, as->endianness);
ss.channels = as->nchannels;
@@ -562,10 +558,11 @@ static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
obt_as.fmt = pa_to_audfmt (ss.format, &obt_as.endianness);
pa->stream = qpa_simple_new (
g,
glob_paaudio.server,
"qemu",
PA_STREAM_PLAYBACK,
g->conf.sink,
glob_paaudio.sink,
"pcm.playback",
&ss,
NULL, /* channel map */
&ba, /* buffering attributes */
@@ -577,7 +574,7 @@ static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
}
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = g->conf.samples;
hw->samples = glob_paaudio.samples;
pa->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
pa->rpos = hw->rpos;
if (!pa->pcm_buf) {
@@ -604,13 +601,12 @@ static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
return -1;
}
static int qpa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int qpa_init_in (HWVoiceIn *hw, struct audsettings *as)
{
int error;
pa_sample_spec ss;
static pa_sample_spec ss;
struct audsettings obt_as = *as;
PAVoiceIn *pa = (PAVoiceIn *) hw;
paaudio *g = pa->g = drv_opaque;
ss.format = audfmt_to_pa (as->fmt, as->endianness);
ss.channels = as->nchannels;
@@ -619,10 +615,11 @@ static int qpa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
obt_as.fmt = pa_to_audfmt (ss.format, &obt_as.endianness);
pa->stream = qpa_simple_new (
g,
glob_paaudio.server,
"qemu",
PA_STREAM_RECORD,
g->conf.source,
glob_paaudio.source,
"pcm.capture",
&ss,
NULL, /* channel map */
NULL, /* buffering attributes */
@@ -634,7 +631,7 @@ static int qpa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
}
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = g->conf.samples;
hw->samples = glob_paaudio.samples;
pa->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
pa->wpos = hw->wpos;
if (!pa->pcm_buf) {
@@ -706,7 +703,7 @@ static int qpa_ctl_out (HWVoiceOut *hw, int cmd, ...)
PAVoiceOut *pa = (PAVoiceOut *) hw;
pa_operation *op;
pa_cvolume v;
paaudio *g = pa->g;
paaudio *g = &glob_paaudio;
#ifdef PA_CHECK_VERSION /* macro is present in 0.9.16+ */
pa_cvolume_init (&v); /* function is present in 0.9.13+ */
@@ -758,7 +755,7 @@ static int qpa_ctl_in (HWVoiceIn *hw, int cmd, ...)
PAVoiceIn *pa = (PAVoiceIn *) hw;
pa_operation *op;
pa_cvolume v;
paaudio *g = pa->g;
paaudio *g = &glob_paaudio;
#ifdef PA_CHECK_VERSION
pa_cvolume_init (&v);
@@ -808,31 +805,23 @@ static int qpa_ctl_in (HWVoiceIn *hw, int cmd, ...)
}
/* common */
static PAConf glob_conf = {
.samples = 4096,
};
static void *qpa_audio_init (void)
{
paaudio *g = g_malloc(sizeof(paaudio));
g->conf = glob_conf;
g->mainloop = NULL;
g->context = NULL;
paaudio *g = &glob_paaudio;
g->mainloop = pa_threaded_mainloop_new ();
if (!g->mainloop) {
goto fail;
}
g->context = pa_context_new (pa_threaded_mainloop_get_api (g->mainloop),
g->conf.server);
g->context = pa_context_new (pa_threaded_mainloop_get_api (g->mainloop), glob_paaudio.server);
if (!g->context) {
goto fail;
}
pa_context_set_state_callback (g->context, context_state_cb, g);
if (pa_context_connect (g->context, g->conf.server, 0, NULL) < 0) {
if (pa_context_connect (g->context, glob_paaudio.server, 0, NULL) < 0) {
qpa_logerr (pa_context_errno (g->context),
"pa_context_connect() failed\n");
goto fail;
@@ -865,13 +854,12 @@ static void *qpa_audio_init (void)
pa_threaded_mainloop_unlock (g->mainloop);
return g;
return &glob_paaudio;
unlock_and_fail:
pa_threaded_mainloop_unlock (g->mainloop);
fail:
AUD_log (AUDIO_CAP, "Failed to initialize PA context");
qpa_audio_fini(g);
return NULL;
}
@@ -886,38 +874,39 @@ static void qpa_audio_fini (void *opaque)
if (g->context) {
pa_context_disconnect (g->context);
pa_context_unref (g->context);
g->context = NULL;
}
if (g->mainloop) {
pa_threaded_mainloop_free (g->mainloop);
}
g_free(g);
g->mainloop = NULL;
}
struct audio_option qpa_options[] = {
{
.name = "SAMPLES",
.tag = AUD_OPT_INT,
.valp = &glob_conf.samples,
.valp = &glob_paaudio.samples,
.descr = "buffer size in samples"
},
{
.name = "SERVER",
.tag = AUD_OPT_STR,
.valp = &glob_conf.server,
.valp = &glob_paaudio.server,
.descr = "server address"
},
{
.name = "SINK",
.tag = AUD_OPT_STR,
.valp = &glob_conf.sink,
.valp = &glob_paaudio.sink,
.descr = "sink device name"
},
{
.name = "SOURCE",
.tag = AUD_OPT_STR,
.valp = &glob_conf.source,
.valp = &glob_paaudio.source,
.descr = "source device name"
},
{ /* End of list */ }
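The PulseAudio server, sink and source now come from glob_paaudio and are set the same way. A sketch, assuming the QEMU_PA_* variables; the server address and sink name are illustrative:

    # illustrative: remote PulseAudio server and a named sink
    QEMU_AUDIO_DRV=pa QEMU_PA_SERVER=pa-host.example.org QEMU_PA_SINK=alsa_output.analog-stereo \
        qemu-system-x86_64 -soundhw hda disk.img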

audio/sdlaudio.c

@@ -55,7 +55,6 @@ static struct SDLAudioState {
SDL_mutex *mutex;
SDL_sem *sem;
int initialized;
bool driver_created;
} glob_sdl;
typedef struct SDLAudioState SDLAudioState;
@@ -333,8 +332,7 @@ static void sdl_fini_out (HWVoiceOut *hw)
sdl_close (&glob_sdl);
}
static int sdl_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int sdl_init_out (HWVoiceOut *hw, struct audsettings *as)
{
SDLVoiceOut *sdl = (SDLVoiceOut *) hw;
SDLAudioState *s = &glob_sdl;
@@ -394,10 +392,6 @@ static int sdl_ctl_out (HWVoiceOut *hw, int cmd, ...)
static void *sdl_audio_init (void)
{
SDLAudioState *s = &glob_sdl;
if (s->driver_created) {
sdl_logerr("Can't create multiple sdl backends\n");
return NULL;
}
if (SDL_InitSubSystem (SDL_INIT_AUDIO)) {
sdl_logerr ("SDL failed to initialize audio subsystem\n");
@@ -419,7 +413,6 @@ static void *sdl_audio_init (void)
return NULL;
}
s->driver_created = true;
return s;
}
@@ -430,7 +423,6 @@ static void sdl_audio_fini (void *opaque)
SDL_DestroySemaphore (s->sem);
SDL_DestroyMutex (s->mutex);
SDL_QuitSubSystem (SDL_INIT_AUDIO);
s->driver_created = false;
}
static struct audio_option sdl_options[] = {

audio/spiceaudio.c

@@ -18,7 +18,6 @@
*/
#include "hw/hw.h"
#include "qemu/error-report.h"
#include "qemu/timer.h"
#include "ui/qemu-spice.h"
@@ -116,8 +115,7 @@ static int rate_get_samples (struct audio_pcm_info *info, SpiceRateCtl *rate)
/* playback */
static int line_out_init(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int line_out_init (HWVoiceOut *hw, struct audsettings *as)
{
SpiceVoiceOut *out = container_of (hw, SpiceVoiceOut, hw);
struct audsettings settings;
@@ -245,7 +243,7 @@ static int line_out_ctl (HWVoiceOut *hw, int cmd, ...)
/* record */
static int line_in_init(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int line_in_init (HWVoiceIn *hw, struct audsettings *as)
{
SpiceVoiceIn *in = container_of (hw, SpiceVoiceIn, hw);
struct audsettings settings;

audio/wavaudio.c

@@ -36,10 +36,15 @@ typedef struct WAVVoiceOut {
int total_samples;
} WAVVoiceOut;
typedef struct {
static struct {
struct audsettings settings;
const char *wav_path;
} WAVConf;
} conf = {
.settings.freq = 44100,
.settings.nchannels = 2,
.settings.fmt = AUD_FMT_S16,
.wav_path = "qemu.wav"
};
static int wav_run_out (HWVoiceOut *hw, int live)
{
@@ -100,8 +105,7 @@ static void le_store (uint8_t *buf, uint32_t val, int len)
}
}
static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int wav_init_out (HWVoiceOut *hw, struct audsettings *as)
{
WAVVoiceOut *wav = (WAVVoiceOut *) hw;
int bits16 = 0, stereo = 0;
@@ -111,8 +115,9 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
0x02, 0x00, 0x44, 0xac, 0x00, 0x00, 0x10, 0xb1, 0x02, 0x00, 0x04,
0x00, 0x10, 0x00, 0x64, 0x61, 0x74, 0x61, 0x00, 0x00, 0x00, 0x00
};
WAVConf *conf = drv_opaque;
struct audsettings wav_as = conf->settings;
struct audsettings wav_as = conf.settings;
(void) as;
stereo = wav_as.nchannels == 2;
switch (wav_as.fmt) {
@@ -150,10 +155,10 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
le_store (hdr + 28, hw->info.freq << (bits16 + stereo), 4);
le_store (hdr + 32, 1 << (bits16 + stereo), 2);
wav->f = fopen (conf->wav_path, "wb");
wav->f = fopen (conf.wav_path, "wb");
if (!wav->f) {
dolog ("Failed to open wave file `%s'\nReason: %s\n",
conf->wav_path, strerror (errno));
conf.wav_path, strerror (errno));
g_free (wav->pcm_buf);
wav->pcm_buf = NULL;
return -1;
@@ -221,49 +226,40 @@ static int wav_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static WAVConf glob_conf = {
.settings.freq = 44100,
.settings.nchannels = 2,
.settings.fmt = AUD_FMT_S16,
.wav_path = "qemu.wav"
};
static void *wav_audio_init (void)
{
WAVConf *conf = g_malloc(sizeof(WAVConf));
*conf = glob_conf;
return conf;
return &conf;
}
static void wav_audio_fini (void *opaque)
{
(void) opaque;
ldebug ("wav_fini");
g_free(opaque);
}
static struct audio_option wav_options[] = {
{
.name = "FREQUENCY",
.tag = AUD_OPT_INT,
.valp = &glob_conf.settings.freq,
.valp = &conf.settings.freq,
.descr = "Frequency"
},
{
.name = "FORMAT",
.tag = AUD_OPT_FMT,
.valp = &glob_conf.settings.fmt,
.valp = &conf.settings.fmt,
.descr = "Format"
},
{
.name = "DAC_FIXED_CHANNELS",
.tag = AUD_OPT_INT,
.valp = &glob_conf.settings.nchannels,
.valp = &conf.settings.nchannels,
.descr = "Number of channels (1 - mono, 2 - stereo)"
},
{
.name = "PATH",
.tag = AUD_OPT_STR,
.valp = &glob_conf.wav_path,
.valp = &conf.wav_path,
.descr = "Path to wave file"
},
{ /* End of list */ }
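The wav writer's options map to the environment in the same way. A sketch, assuming the QEMU_WAV_* variables; the output path and frequency are illustrative:

    # illustrative: capture guest audio to a 48 kHz stereo wave file
    QEMU_AUDIO_DRV=wav QEMU_WAV_PATH=/tmp/guest.wav QEMU_WAV_FREQUENCY=48000 \
        qemu-system-x86_64 -soundhw ac97 disk.img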

audio/wavcapture.c

@@ -1,6 +1,5 @@
#include "hw/hw.h"
#include "monitor/monitor.h"
#include "qemu/error-report.h"
#include "audio.h"
typedef struct {

audio/winwaveaudio.c (new file, 717 lines)

@@ -0,0 +1,717 @@
/* public domain */
#include "qemu-common.h"
#include "sysemu/sysemu.h"
#include "audio.h"
#define AUDIO_CAP "winwave"
#include "audio_int.h"
#include <windows.h>
#include <mmsystem.h>
#include "audio_win_int.h"
static struct {
int dac_headers;
int dac_samples;
int adc_headers;
int adc_samples;
} conf = {
.dac_headers = 4,
.dac_samples = 1024,
.adc_headers = 4,
.adc_samples = 1024
};
typedef struct {
HWVoiceOut hw;
HWAVEOUT hwo;
WAVEHDR *hdrs;
HANDLE event;
void *pcm_buf;
int avail;
int pending;
int curhdr;
int paused;
CRITICAL_SECTION crit_sect;
} WaveVoiceOut;
typedef struct {
HWVoiceIn hw;
HWAVEIN hwi;
WAVEHDR *hdrs;
HANDLE event;
void *pcm_buf;
int curhdr;
int paused;
int rpos;
int avail;
CRITICAL_SECTION crit_sect;
} WaveVoiceIn;
static void winwave_log_mmresult (MMRESULT mr)
{
const char *str = "BUG";
switch (mr) {
case MMSYSERR_NOERROR:
str = "Success";
break;
case MMSYSERR_INVALHANDLE:
str = "Specified device handle is invalid";
break;
case MMSYSERR_BADDEVICEID:
str = "Specified device id is out of range";
break;
case MMSYSERR_NODRIVER:
str = "No device driver is present";
break;
case MMSYSERR_NOMEM:
str = "Unable to allocate or lock memory";
break;
case WAVERR_SYNC:
str = "Device is synchronous but waveOutOpen was called "
"without using the WINWAVE_ALLOWSYNC flag";
break;
case WAVERR_UNPREPARED:
str = "The data block pointed to by the pwh parameter "
"hasn't been prepared";
break;
case WAVERR_STILLPLAYING:
str = "There are still buffers in the queue";
break;
default:
dolog ("Reason: Unknown (MMRESULT %#x)\n", mr);
return;
}
dolog ("Reason: %s\n", str);
}
static void GCC_FMT_ATTR (2, 3) winwave_logerr (
MMRESULT mr,
const char *fmt,
...
)
{
va_list ap;
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
AUD_log (NULL, " failed\n");
winwave_log_mmresult (mr);
}
static void winwave_anal_close_out (WaveVoiceOut *wave)
{
MMRESULT mr;
mr = waveOutClose (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutClose");
}
wave->hwo = NULL;
}
static void CALLBACK winwave_callback_out (
HWAVEOUT hwo,
UINT msg,
DWORD_PTR dwInstance,
DWORD_PTR dwParam1,
DWORD_PTR dwParam2
)
{
WaveVoiceOut *wave = (WaveVoiceOut *) dwInstance;
switch (msg) {
case WOM_DONE:
{
WAVEHDR *h = (WAVEHDR *) dwParam1;
if (!h->dwUser) {
h->dwUser = 1;
EnterCriticalSection (&wave->crit_sect);
{
wave->avail += conf.dac_samples;
}
LeaveCriticalSection (&wave->crit_sect);
if (wave->hw.poll_mode) {
if (!SetEvent (wave->event)) {
dolog ("DAC SetEvent failed %lx\n", GetLastError ());
}
}
}
}
break;
case WOM_CLOSE:
case WOM_OPEN:
break;
default:
dolog ("unknown wave out callback msg %x\n", msg);
}
}
static int winwave_init_out (HWVoiceOut *hw, struct audsettings *as)
{
int i;
int err;
MMRESULT mr;
WAVEFORMATEX wfx;
WaveVoiceOut *wave;
wave = (WaveVoiceOut *) hw;
InitializeCriticalSection (&wave->crit_sect);
err = waveformat_from_audio_settings (&wfx, as);
if (err) {
goto err0;
}
mr = waveOutOpen (&wave->hwo, WAVE_MAPPER, &wfx,
(DWORD_PTR) winwave_callback_out,
(DWORD_PTR) wave, CALLBACK_FUNCTION);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutOpen");
goto err1;
}
wave->hdrs = audio_calloc (AUDIO_FUNC, conf.dac_headers,
sizeof (*wave->hdrs));
if (!wave->hdrs) {
goto err2;
}
audio_pcm_init_info (&hw->info, as);
hw->samples = conf.dac_samples * conf.dac_headers;
wave->avail = hw->samples;
wave->pcm_buf = audio_calloc (AUDIO_FUNC, conf.dac_samples,
conf.dac_headers << hw->info.shift);
if (!wave->pcm_buf) {
goto err3;
}
for (i = 0; i < conf.dac_headers; ++i) {
WAVEHDR *h = &wave->hdrs[i];
h->dwUser = 0;
h->dwBufferLength = conf.dac_samples << hw->info.shift;
h->lpData = advance (wave->pcm_buf, i * h->dwBufferLength);
h->dwFlags = 0;
mr = waveOutPrepareHeader (wave->hwo, h, sizeof (*h));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutPrepareHeader(%d)", i);
goto err4;
}
}
return 0;
err4:
g_free (wave->pcm_buf);
err3:
g_free (wave->hdrs);
err2:
winwave_anal_close_out (wave);
err1:
err0:
return -1;
}
static int winwave_write (SWVoiceOut *sw, void *buf, int len)
{
return audio_pcm_sw_write (sw, buf, len);
}
static int winwave_run_out (HWVoiceOut *hw, int live)
{
WaveVoiceOut *wave = (WaveVoiceOut *) hw;
int decr;
int doreset;
EnterCriticalSection (&wave->crit_sect);
{
decr = audio_MIN (live, wave->avail);
decr = audio_pcm_hw_clip_out (hw, wave->pcm_buf, decr, wave->pending);
wave->pending += decr;
wave->avail -= decr;
}
LeaveCriticalSection (&wave->crit_sect);
doreset = hw->poll_mode && (wave->pending >= conf.dac_samples);
if (doreset && !ResetEvent (wave->event)) {
dolog ("DAC ResetEvent failed %lx\n", GetLastError ());
}
while (wave->pending >= conf.dac_samples) {
MMRESULT mr;
WAVEHDR *h = &wave->hdrs[wave->curhdr];
h->dwUser = 0;
mr = waveOutWrite (wave->hwo, h, sizeof (*h));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutWrite(%d)", wave->curhdr);
break;
}
wave->pending -= conf.dac_samples;
wave->curhdr = (wave->curhdr + 1) % conf.dac_headers;
}
return decr;
}
static void winwave_poll (void *opaque)
{
(void) opaque;
audio_run ("winwave_poll");
}
static void winwave_fini_out (HWVoiceOut *hw)
{
int i;
MMRESULT mr;
WaveVoiceOut *wave = (WaveVoiceOut *) hw;
mr = waveOutReset (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutReset");
}
for (i = 0; i < conf.dac_headers; ++i) {
mr = waveOutUnprepareHeader (wave->hwo, &wave->hdrs[i],
sizeof (wave->hdrs[i]));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutUnprepareHeader(%d)", i);
}
}
winwave_anal_close_out (wave);
if (wave->event) {
qemu_del_wait_object (wave->event, winwave_poll, wave);
if (!CloseHandle (wave->event)) {
dolog ("DAC CloseHandle failed %lx\n", GetLastError ());
}
wave->event = NULL;
}
g_free (wave->pcm_buf);
wave->pcm_buf = NULL;
g_free (wave->hdrs);
wave->hdrs = NULL;
}
static int winwave_ctl_out (HWVoiceOut *hw, int cmd, ...)
{
MMRESULT mr;
WaveVoiceOut *wave = (WaveVoiceOut *) hw;
switch (cmd) {
case VOICE_ENABLE:
{
va_list ap;
int poll_mode;
va_start (ap, cmd);
poll_mode = va_arg (ap, int);
va_end (ap);
if (poll_mode && !wave->event) {
wave->event = CreateEvent (NULL, TRUE, TRUE, NULL);
if (!wave->event) {
dolog ("DAC CreateEvent: %lx, poll mode will be disabled\n",
GetLastError ());
}
}
if (wave->event) {
int ret;
ret = qemu_add_wait_object (wave->event, winwave_poll, wave);
hw->poll_mode = (ret == 0);
}
else {
hw->poll_mode = 0;
}
wave->paused = 0;
}
return 0;
case VOICE_DISABLE:
if (!wave->paused) {
mr = waveOutReset (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutReset");
}
else {
wave->paused = 1;
}
}
if (wave->event) {
qemu_del_wait_object (wave->event, winwave_poll, wave);
}
return 0;
}
return -1;
}
static void winwave_anal_close_in (WaveVoiceIn *wave)
{
MMRESULT mr;
mr = waveInClose (wave->hwi);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInClose");
}
wave->hwi = NULL;
}
static void CALLBACK winwave_callback_in (
HWAVEIN *hwi,
UINT msg,
DWORD_PTR dwInstance,
DWORD_PTR dwParam1,
DWORD_PTR dwParam2
)
{
WaveVoiceIn *wave = (WaveVoiceIn *) dwInstance;
switch (msg) {
case WIM_DATA:
{
WAVEHDR *h = (WAVEHDR *) dwParam1;
if (!h->dwUser) {
h->dwUser = 1;
EnterCriticalSection (&wave->crit_sect);
{
wave->avail += conf.adc_samples;
}
LeaveCriticalSection (&wave->crit_sect);
if (wave->hw.poll_mode) {
if (!SetEvent (wave->event)) {
dolog ("ADC SetEvent failed %lx\n", GetLastError ());
}
}
}
}
break;
case WIM_CLOSE:
case WIM_OPEN:
break;
default:
dolog ("unknown wave in callback msg %x\n", msg);
}
}
static void winwave_add_buffers (WaveVoiceIn *wave, int samples)
{
int doreset;
doreset = wave->hw.poll_mode && (samples >= conf.adc_samples);
if (doreset && !ResetEvent (wave->event)) {
dolog ("ADC ResetEvent failed %lx\n", GetLastError ());
}
while (samples >= conf.adc_samples) {
MMRESULT mr;
WAVEHDR *h = &wave->hdrs[wave->curhdr];
h->dwUser = 0;
mr = waveInAddBuffer (wave->hwi, h, sizeof (*h));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInAddBuffer(%d)", wave->curhdr);
}
wave->curhdr = (wave->curhdr + 1) % conf.adc_headers;
samples -= conf.adc_samples;
}
}
static int winwave_init_in (HWVoiceIn *hw, struct audsettings *as)
{
int i;
int err;
MMRESULT mr;
WAVEFORMATEX wfx;
WaveVoiceIn *wave;
wave = (WaveVoiceIn *) hw;
InitializeCriticalSection (&wave->crit_sect);
err = waveformat_from_audio_settings (&wfx, as);
if (err) {
goto err0;
}
mr = waveInOpen (&wave->hwi, WAVE_MAPPER, &wfx,
(DWORD_PTR) winwave_callback_in,
(DWORD_PTR) wave, CALLBACK_FUNCTION);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInOpen");
goto err1;
}
wave->hdrs = audio_calloc (AUDIO_FUNC, conf.dac_headers,
sizeof (*wave->hdrs));
if (!wave->hdrs) {
goto err2;
}
audio_pcm_init_info (&hw->info, as);
hw->samples = conf.adc_samples * conf.adc_headers;
wave->avail = 0;
wave->pcm_buf = audio_calloc (AUDIO_FUNC, conf.adc_samples,
conf.adc_headers << hw->info.shift);
if (!wave->pcm_buf) {
goto err3;
}
for (i = 0; i < conf.adc_headers; ++i) {
WAVEHDR *h = &wave->hdrs[i];
h->dwUser = 0;
h->dwBufferLength = conf.adc_samples << hw->info.shift;
h->lpData = advance (wave->pcm_buf, i * h->dwBufferLength);
h->dwFlags = 0;
mr = waveInPrepareHeader (wave->hwi, h, sizeof (*h));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInPrepareHeader(%d)", i);
goto err4;
}
}
wave->paused = 1;
winwave_add_buffers (wave, hw->samples);
return 0;
err4:
g_free (wave->pcm_buf);
err3:
g_free (wave->hdrs);
err2:
winwave_anal_close_in (wave);
err1:
err0:
return -1;
}
static void winwave_fini_in (HWVoiceIn *hw)
{
int i;
MMRESULT mr;
WaveVoiceIn *wave = (WaveVoiceIn *) hw;
mr = waveInReset (wave->hwi);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInReset");
}
for (i = 0; i < conf.adc_headers; ++i) {
mr = waveInUnprepareHeader (wave->hwi, &wave->hdrs[i],
sizeof (wave->hdrs[i]));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInUnprepareHeader(%d)", i);
}
}
winwave_anal_close_in (wave);
if (wave->event) {
qemu_del_wait_object (wave->event, winwave_poll, wave);
if (!CloseHandle (wave->event)) {
dolog ("ADC CloseHandle failed %lx\n", GetLastError ());
}
wave->event = NULL;
}
g_free (wave->pcm_buf);
wave->pcm_buf = NULL;
g_free (wave->hdrs);
wave->hdrs = NULL;
}
static int winwave_run_in (HWVoiceIn *hw)
{
WaveVoiceIn *wave = (WaveVoiceIn *) hw;
int live = audio_pcm_hw_get_live_in (hw);
int dead = hw->samples - live;
int decr, ret;
if (!dead) {
return 0;
}
EnterCriticalSection (&wave->crit_sect);
{
decr = audio_MIN (dead, wave->avail);
wave->avail -= decr;
}
LeaveCriticalSection (&wave->crit_sect);
ret = decr;
while (decr) {
int left = hw->samples - hw->wpos;
int conv = audio_MIN (left, decr);
hw->conv (hw->conv_buf + hw->wpos,
advance (wave->pcm_buf, wave->rpos << hw->info.shift),
conv);
wave->rpos = (wave->rpos + conv) % hw->samples;
hw->wpos = (hw->wpos + conv) % hw->samples;
decr -= conv;
}
winwave_add_buffers (wave, ret);
return ret;
}
static int winwave_read (SWVoiceIn *sw, void *buf, int size)
{
return audio_pcm_sw_read (sw, buf, size);
}
static int winwave_ctl_in (HWVoiceIn *hw, int cmd, ...)
{
MMRESULT mr;
WaveVoiceIn *wave = (WaveVoiceIn *) hw;
switch (cmd) {
case VOICE_ENABLE:
{
va_list ap;
int poll_mode;
va_start (ap, cmd);
poll_mode = va_arg (ap, int);
va_end (ap);
if (poll_mode && !wave->event) {
wave->event = CreateEvent (NULL, TRUE, TRUE, NULL);
if (!wave->event) {
dolog ("ADC CreateEvent: %lx, poll mode will be disabled\n",
GetLastError ());
}
}
if (wave->event) {
int ret;
ret = qemu_add_wait_object (wave->event, winwave_poll, wave);
hw->poll_mode = (ret == 0);
}
else {
hw->poll_mode = 0;
}
if (wave->paused) {
mr = waveInStart (wave->hwi);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInStart");
}
wave->paused = 0;
}
}
return 0;
case VOICE_DISABLE:
if (!wave->paused) {
mr = waveInStop (wave->hwi);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInStop");
}
else {
wave->paused = 1;
}
}
if (wave->event) {
qemu_del_wait_object (wave->event, winwave_poll, wave);
}
return 0;
}
return 0;
}
static void *winwave_audio_init (void)
{
return &conf;
}
static void winwave_audio_fini (void *opaque)
{
(void) opaque;
}
static struct audio_option winwave_options[] = {
{
.name = "DAC_HEADERS",
.tag = AUD_OPT_INT,
.valp = &conf.dac_headers,
.descr = "DAC number of headers",
},
{
.name = "DAC_SAMPLES",
.tag = AUD_OPT_INT,
.valp = &conf.dac_samples,
.descr = "DAC number of samples per header",
},
{
.name = "ADC_HEADERS",
.tag = AUD_OPT_INT,
.valp = &conf.adc_headers,
.descr = "ADC number of headers",
},
{
.name = "ADC_SAMPLES",
.tag = AUD_OPT_INT,
.valp = &conf.adc_samples,
.descr = "ADC number of samples per header",
},
{ /* End of list */ }
};
static struct audio_pcm_ops winwave_pcm_ops = {
.init_out = winwave_init_out,
.fini_out = winwave_fini_out,
.run_out = winwave_run_out,
.write = winwave_write,
.ctl_out = winwave_ctl_out,
.init_in = winwave_init_in,
.fini_in = winwave_fini_in,
.run_in = winwave_run_in,
.read = winwave_read,
.ctl_in = winwave_ctl_in
};
struct audio_driver winwave_audio_driver = {
.name = "winwave",
.descr = "Windows Waveform Audio http://msdn.microsoft.com",
.options = winwave_options,
.init = winwave_audio_init,
.fini = winwave_audio_fini,
.pcm_ops = &winwave_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (WaveVoiceOut),
.voice_size_in = sizeof (WaveVoiceIn)
};
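Selecting the winwave backend works the same way on a Windows host. A hedged sketch, assuming QEMU_WINWAVE_* variable names; the header and sample counts are illustrative, and on cmd.exe the variables would be set with 'set VAR=value' before launching:

    # illustrative: larger DAC queue, then start a guest with an AC'97 device
    QEMU_AUDIO_DRV=winwave QEMU_WINWAVE_DAC_HEADERS=8 QEMU_WINWAVE_DAC_SAMPLES=2048 \
        qemu-system-x86_64 -soundhw ac97 disk.img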

backends/hostmem-file.c

@@ -43,7 +43,7 @@ file_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
return;
}
if (!fb->mem_path) {
error_setg(errp, "mem-path property not set");
error_setg(errp, "mem_path property not set");
return;
}
#ifndef CONFIG_LINUX
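The mem-path check above fires when a memory-backend-file object is created without a backing path. A usage sketch of the option it guards; the hugepage mount point and sizes are illustrative:

    # illustrative: back guest RAM with a file on a hugetlbfs mount
    qemu-system-x86_64 -m 1G \
        -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages \
        -numa node,memdev=mem0 disk.img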

backends/hostmem.c

@@ -10,10 +10,10 @@
* See the COPYING file in the top-level directory.
*/
#include "sysemu/hostmem.h"
#include "hw/boards.h"
#include "qapi/visitor.h"
#include "qapi-types.h"
#include "qapi-visit.h"
#include "qapi/qmp/qerror.h"
#include "qemu/config-file.h"
#include "qom/object_interfaces.h"
@@ -113,17 +113,24 @@ host_memory_backend_set_host_nodes(Object *obj, Visitor *v, void *opaque,
#endif
}
static int
host_memory_backend_get_policy(Object *obj, Error **errp G_GNUC_UNUSED)
static void
host_memory_backend_get_policy(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
return backend->policy;
int policy = backend->policy;
visit_type_enum(v, &policy, HostMemPolicy_lookup, NULL, name, errp);
}
static void
host_memory_backend_set_policy(Object *obj, int policy, Error **errp)
host_memory_backend_set_policy(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
int policy;
visit_type_enum(v, &policy, HostMemPolicy_lookup, NULL, name, errp);
backend->policy = policy;
#ifndef CONFIG_NUMA
@@ -223,10 +230,11 @@ static void host_memory_backend_set_prealloc(Object *obj, bool value,
static void host_memory_backend_init(Object *obj)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
MachineState *machine = MACHINE(qdev_get_machine());
backend->merge = machine_mem_merge(machine);
backend->dump = machine_dump_guest_core(machine);
backend->merge = qemu_opt_get_bool(qemu_get_machine_opts(),
"mem-merge", true);
backend->dump = qemu_opt_get_bool(qemu_get_machine_opts(),
"dump-guest-core", true);
backend->prealloc = mem_prealloc;
object_property_add_bool(obj, "merge",
@@ -244,10 +252,9 @@ static void host_memory_backend_init(Object *obj)
object_property_add(obj, "host-nodes", "int",
host_memory_backend_get_host_nodes,
host_memory_backend_set_host_nodes, NULL, NULL, NULL);
object_property_add_enum(obj, "policy", "HostMemPolicy",
HostMemPolicy_lookup,
host_memory_backend_get_policy,
host_memory_backend_set_policy, NULL);
object_property_add(obj, "policy", "str",
host_memory_backend_get_policy,
host_memory_backend_set_policy, NULL, NULL, NULL);
}
MemoryRegion *
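The policy property reworked above is the host NUMA binding policy of a memory backend and is normally used together with host-nodes. A sketch with illustrative values:

    # illustrative: bind the backend's pages to host node 0
    qemu-system-x86_64 -m 2G \
        -object memory-backend-ram,id=mem0,size=2G,host-nodes=0,policy=bind \
        -numa node,memdev=mem0 disk.img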

backends/rng-egd.c

@@ -140,20 +140,19 @@ static void rng_egd_opened(RngBackend *b, Error **errp)
RngEgd *s = RNG_EGD(b);
if (s->chr_name == NULL) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
"chardev", "a valid character device");
error_set(errp, QERR_INVALID_PARAMETER_VALUE,
"chardev", "a valid character device");
return;
}
s->chr = qemu_chr_find(s->chr_name);
if (s->chr == NULL) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", s->chr_name);
error_set(errp, QERR_DEVICE_NOT_FOUND, s->chr_name);
return;
}
if (qemu_chr_fe_claim(s->chr) != 0) {
error_setg(errp, QERR_DEVICE_IN_USE, s->chr_name);
error_set(errp, QERR_DEVICE_IN_USE, s->chr_name);
return;
}
@@ -168,7 +167,7 @@ static void rng_egd_set_chardev(Object *obj, const char *value, Error **errp)
RngEgd *s = RNG_EGD(b);
if (b->opened) {
error_setg(errp, QERR_PERMISSION_DENIED);
error_set(errp, QERR_PERMISSION_DENIED);
} else {
g_free(s->chr_name);
s->chr_name = g_strdup(value);
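The chardev property validated above names a character device speaking the EGD protocol. A sketch wiring it to a virtio-rng device; the host and port are illustrative:

    # illustrative: feed the guest RNG from an EGD daemon on localhost:8953
    qemu-system-x86_64 \
        -chardev socket,host=localhost,port=8953,id=chr0 \
        -object rng-egd,chardev=chr0,id=rng0 \
        -device virtio-rng-pci,rng=rng0 disk.img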

backends/rng-random.c

@@ -74,8 +74,8 @@ static void rng_random_opened(RngBackend *b, Error **errp)
RndRandom *s = RNG_RANDOM(b);
if (s->filename == NULL) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
"filename", "a valid filename");
error_set(errp, QERR_INVALID_PARAMETER_VALUE,
"filename", "a valid filename");
} else {
s->fd = qemu_open(s->filename, O_RDONLY | O_NONBLOCK);
if (s->fd == -1) {
@@ -98,7 +98,7 @@ static void rng_random_set_filename(Object *obj, const char *filename,
RndRandom *s = RNG_RANDOM(obj);
if (b->opened) {
error_setg(errp, QERR_PERMISSION_DENIED);
error_set(errp, QERR_PERMISSION_DENIED);
return;
}
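The filename property checked above points rng-random at a host entropy source, for example:

    # /dev/urandom is a common choice; any readable character device works
    qemu-system-x86_64 \
        -object rng-random,id=rng0,filename=/dev/urandom \
        -device virtio-rng-pci,rng=rng0 disk.img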

backends/rng.c

@@ -57,7 +57,7 @@ static void rng_backend_prop_set_opened(Object *obj, bool value, Error **errp)
}
if (!value && s->opened) {
error_setg(errp, QERR_PERMISSION_DENIED);
error_set(errp, QERR_PERMISSION_DENIED);
return;
}

backends/tpm.c

@@ -96,20 +96,6 @@ bool tpm_backend_get_tpm_established_flag(TPMBackend *s)
return k->ops->get_tpm_established_flag(s);
}
int tpm_backend_reset_tpm_established_flag(TPMBackend *s, uint8_t locty)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->reset_tpm_established_flag(s, locty);
}
TPMVersion tpm_backend_get_tpm_version(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->get_tpm_version(s);
}
static bool tpm_backend_prop_get_opened(Object *obj, Error **errp)
{
TPMBackend *s = TPM_BACKEND(obj);
@@ -133,7 +119,7 @@ static void tpm_backend_prop_set_opened(Object *obj, bool value, Error **errp)
}
if (!value && s->opened) {
error_setg(errp, QERR_PERMISSION_DENIED);
error_set(errp, QERR_PERMISSION_DENIED);
return;
}
@@ -179,6 +165,17 @@ void tpm_backend_thread_end(TPMBackendThread *tbt)
}
}
void tpm_backend_thread_tpm_reset(TPMBackendThread *tbt,
GFunc func, gpointer user_data)
{
if (!tbt->pool) {
tpm_backend_thread_create(tbt, func, user_data);
} else {
g_thread_pool_push(tbt->pool, (gpointer)TPM_BACKEND_CMD_TPM_RESET,
NULL);
}
}
static const TypeInfo tpm_backend_info = {
.name = TYPE_TPM_BACKEND,
.parent = TYPE_OBJECT,


@@ -24,13 +24,12 @@
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "monitor/monitor.h"
#include "exec/cpu-common.h"
#include "sysemu/kvm.h"
#include "sysemu/balloon.h"
#include "trace.h"
#include "qmp-commands.h"
#include "qapi/qmp/qerror.h"
#include "qapi/qmp/qjson.h"
static QEMUBalloonEvent *balloon_event_fn;
@@ -59,6 +58,7 @@ int qemu_add_balloon_handler(QEMUBalloonEvent *event_func,
/* We're already registered one balloon handler. How many can
* a guest really have?
*/
error_report("Another balloon device already registered");
return -1;
}
balloon_event_fn = event_func;
@@ -97,7 +97,7 @@ void qmp_balloon(int64_t target, Error **errp)
}
if (target <= 0) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "target", "a size");
error_set(errp, QERR_INVALID_PARAMETER_VALUE, "target", "a size");
return;
}

block.c: 3221 changed lines (file diff suppressed because it is too large)


@@ -1,16 +1,15 @@
block-obj-y += raw_bsd.o qcow.o vdi.o vmdk.o cloop.o bochs.o vpc.o vvfat.o
block-obj-y += raw_bsd.o qcow.o vdi.o vmdk.o cloop.o dmg.o bochs.o vpc.o vvfat.o
block-obj-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-cache.o
block-obj-y += qed.o qed-gencb.o qed-l2-cache.o qed-table.o qed-cluster.o
block-obj-y += qed-check.o
block-obj-$(CONFIG_VHDX) += vhdx.o vhdx-endian.o vhdx-log.o
block-obj-y += quorum.o
block-obj-$(CONFIG_QUORUM) += quorum.o
block-obj-y += parallels.o blkdebug.o blkverify.o
block-obj-y += block-backend.o snapshot.o qapi.o
block-obj-$(CONFIG_WIN32) += raw-win32.o win32-aio.o
block-obj-$(CONFIG_POSIX) += raw-posix.o
block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
block-obj-y += null.o mirror.o io.o
block-obj-y += throttle-groups.o
block-obj-y += null.o mirror.o
block-obj-y += nbd.o nbd-client.o sheepdog.o
block-obj-$(CONFIG_LIBISCSI) += iscsi.o
@@ -38,7 +37,6 @@ gluster.o-libs := $(GLUSTERFS_LIBS)
ssh.o-cflags := $(LIBSSH2_CFLAGS)
ssh.o-libs := $(LIBSSH2_LIBS)
archipelago.o-libs := $(ARCHIPELAGO_LIBS)
block-obj-m += dmg.o
dmg.o-libs := $(BZIP2_LIBS)
qcow.o-libs := -lz
linux-aio.o-libs := -laio


@@ -19,7 +19,6 @@
#include "block/block.h"
#include "block/block_int.h"
#include "block/blockjob.h"
#include "qapi/qmp/qerror.h"
#include "qemu/ratelimit.h"
#define BACKUP_CLUSTER_BITS 16
@@ -38,8 +37,6 @@ typedef struct CowRequest {
typedef struct BackupBlockJob {
BlockJob common;
BlockDriverState *target;
/* bitmap for sync=incremental */
BdrvDirtyBitmap *sync_bitmap;
MirrorSyncMode sync_mode;
RateLimit limit;
BlockdevOnError on_source_error;
@@ -198,7 +195,7 @@ static void backup_set_speed(BlockJob *job, int64_t speed, Error **errp)
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
if (speed < 0) {
error_setg(errp, QERR_INVALID_PARAMETER, "speed");
error_set(errp, QERR_INVALID_PARAMETER, "speed");
return;
}
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
@@ -245,91 +242,6 @@ static void backup_complete(BlockJob *job, void *opaque)
g_free(data);
}
static bool coroutine_fn yield_and_check(BackupBlockJob *job)
{
if (block_job_is_cancelled(&job->common)) {
return true;
}
/* we need to yield so that bdrv_drain_all() returns.
* (without, VM does not reboot)
*/
if (job->common.speed) {
uint64_t delay_ns = ratelimit_calculate_delay(&job->limit,
job->sectors_read);
job->sectors_read = 0;
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, delay_ns);
} else {
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, 0);
}
if (block_job_is_cancelled(&job->common)) {
return true;
}
return false;
}
static int coroutine_fn backup_run_incremental(BackupBlockJob *job)
{
bool error_is_read;
int ret = 0;
int clusters_per_iter;
uint32_t granularity;
int64_t sector;
int64_t cluster;
int64_t end;
int64_t last_cluster = -1;
BlockDriverState *bs = job->common.bs;
HBitmapIter hbi;
granularity = bdrv_dirty_bitmap_granularity(job->sync_bitmap);
clusters_per_iter = MAX((granularity / BACKUP_CLUSTER_SIZE), 1);
bdrv_dirty_iter_init(job->sync_bitmap, &hbi);
/* Find the next dirty sector(s) */
while ((sector = hbitmap_iter_next(&hbi)) != -1) {
cluster = sector / BACKUP_SECTORS_PER_CLUSTER;
/* Fake progress updates for any clusters we skipped */
if (cluster != last_cluster + 1) {
job->common.offset += ((cluster - last_cluster - 1) *
BACKUP_CLUSTER_SIZE);
}
for (end = cluster + clusters_per_iter; cluster < end; cluster++) {
do {
if (yield_and_check(job)) {
return ret;
}
ret = backup_do_cow(bs, cluster * BACKUP_SECTORS_PER_CLUSTER,
BACKUP_SECTORS_PER_CLUSTER, &error_is_read);
if ((ret < 0) &&
backup_error_action(job, error_is_read, -ret) ==
BLOCK_ERROR_ACTION_REPORT) {
return ret;
}
} while (ret < 0);
}
/* If the bitmap granularity is smaller than the backup granularity,
* we need to advance the iterator pointer to the next cluster. */
if (granularity < BACKUP_CLUSTER_SIZE) {
bdrv_set_dirty_iter(&hbi, cluster * BACKUP_SECTORS_PER_CLUSTER);
}
last_cluster = cluster - 1;
}
/* Play some final catchup with the progress meter */
end = DIV_ROUND_UP(job->common.len, BACKUP_CLUSTER_SIZE);
if (last_cluster + 1 < end) {
job->common.offset += ((end - last_cluster - 1) * BACKUP_CLUSTER_SIZE);
}
return ret;
}
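
The incremental-backup code in this hunk walks dirty sectors from a bitmap iterator and converts them into backup clusters before copying. A small standalone sketch of the sector/cluster arithmetic involved, using the 64 KiB cluster implied by BACKUP_CLUSTER_BITS 16 and 512-byte sectors (the granularity and sector values are illustrative):

    #include <stdio.h>
    #include <stdint.h>

    #define SECTOR_SIZE                512
    #define BACKUP_CLUSTER_BITS        16
    #define BACKUP_CLUSTER_SIZE        (1 << BACKUP_CLUSTER_BITS)            /* 65536 bytes */
    #define BACKUP_SECTORS_PER_CLUSTER (BACKUP_CLUSTER_SIZE / SECTOR_SIZE)   /* 128 */
    #define MAX(a, b) ((a) > (b) ? (a) : (b))

    int main(void)
    {
        uint32_t granularity = 32768;            /* example dirty-bitmap granularity, bytes */
        int clusters_per_iter = MAX(granularity / BACKUP_CLUSTER_SIZE, 1);
        int64_t dirty_sector = 300000;           /* example sector from the iterator */
        int64_t cluster = dirty_sector / BACKUP_SECTORS_PER_CLUSTER;

        printf("clusters per iteration: %d\n", clusters_per_iter);   /* 1 */
        printf("sector %lld lives in backup cluster %lld\n",
               (long long)dirty_sector, (long long)cluster);         /* 2343 */
        return 0;
    }
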
static void coroutine_fn backup_run(void *opaque)
{
BackupBlockJob *job = opaque;
@@ -347,7 +259,8 @@ static void coroutine_fn backup_run(void *opaque)
qemu_co_rwlock_init(&job->flush_rwlock);
start = 0;
end = DIV_ROUND_UP(job->common.len, BACKUP_CLUSTER_SIZE);
end = DIV_ROUND_UP(job->common.len / BDRV_SECTOR_SIZE,
BACKUP_SECTORS_PER_CLUSTER);
job->bitmap = hbitmap_alloc(end, 0);
@@ -365,13 +278,28 @@ static void coroutine_fn backup_run(void *opaque)
qemu_coroutine_yield();
job->common.busy = true;
}
} else if (job->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
ret = backup_run_incremental(job);
} else {
/* Both FULL and TOP SYNC_MODE's require copying.. */
for (; start < end; start++) {
bool error_is_read;
if (yield_and_check(job)) {
if (block_job_is_cancelled(&job->common)) {
break;
}
/* we need to yield so that qemu_aio_flush() returns.
* (without, VM does not reboot)
*/
if (job->common.speed) {
uint64_t delay_ns = ratelimit_calculate_delay(
&job->limit, job->sectors_read);
job->sectors_read = 0;
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, delay_ns);
} else {
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, 0);
}
if (block_job_is_cancelled(&job->common)) {
break;
}
@@ -429,18 +357,6 @@ static void coroutine_fn backup_run(void *opaque)
qemu_co_rwlock_wrlock(&job->flush_rwlock);
qemu_co_rwlock_unlock(&job->flush_rwlock);
if (job->sync_bitmap) {
BdrvDirtyBitmap *bm;
if (ret < 0 || block_job_is_cancelled(&job->common)) {
/* Merge the successor back into the parent, delete nothing. */
bm = bdrv_reclaim_dirty_bitmap(bs, job->sync_bitmap, NULL);
assert(bm);
} else {
/* Everything is fine, delete this bitmap and install the backup. */
bm = bdrv_dirty_bitmap_abdicate(bs, job->sync_bitmap, NULL);
assert(bm);
}
}
hbitmap_free(job->bitmap);
bdrv_iostatus_disable(target);
@@ -453,7 +369,6 @@ static void coroutine_fn backup_run(void *opaque)
void backup_start(BlockDriverState *bs, BlockDriverState *target,
int64_t speed, MirrorSyncMode sync_mode,
BdrvDirtyBitmap *sync_bitmap,
BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
BlockCompletionFunc *cb, void *opaque,
@@ -473,7 +388,7 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target,
if ((on_source_error == BLOCKDEV_ON_ERROR_STOP ||
on_source_error == BLOCKDEV_ON_ERROR_ENOSPC) &&
!bdrv_iostatus_is_enabled(bs)) {
error_setg(errp, QERR_INVALID_PARAMETER, "on-source-error");
error_set(errp, QERR_INVALID_PARAMETER, "on-source-error");
return;
}
@@ -497,36 +412,17 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target,
return;
}
if (sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
if (!sync_bitmap) {
error_setg(errp, "must provide a valid bitmap name for "
"\"incremental\" sync mode");
return;
}
/* Create a new bitmap, and freeze/disable this one. */
if (bdrv_dirty_bitmap_create_successor(bs, sync_bitmap, errp) < 0) {
return;
}
} else if (sync_bitmap) {
error_setg(errp,
"a sync_bitmap was provided to backup_run, "
"but received an incompatible sync_mode (%s)",
MirrorSyncMode_lookup[sync_mode]);
return;
}
len = bdrv_getlength(bs);
if (len < 0) {
error_setg_errno(errp, -len, "unable to get length for '%s'",
bdrv_get_device_name(bs));
goto error;
return;
}
BackupBlockJob *job = block_job_create(&backup_job_driver, bs, speed,
cb, opaque, errp);
if (!job) {
goto error;
return;
}
bdrv_op_block_all(target, job->common.blocker);
@@ -535,15 +431,7 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target,
job->on_target_error = on_target_error;
job->target = target;
job->sync_mode = sync_mode;
job->sync_bitmap = sync_mode == MIRROR_SYNC_MODE_INCREMENTAL ?
sync_bitmap : NULL;
job->common.len = len;
job->common.co = qemu_coroutine_create(backup_run);
qemu_coroutine_enter(job->common.co, job);
return;
error:
if (sync_bitmap) {
bdrv_reclaim_dirty_bitmap(bs, sync_bitmap, NULL);
}
}


@@ -216,9 +216,10 @@ static int get_event_by_name(const char *name, BlkDebugEvent *event)
struct add_rule_data {
BDRVBlkdebugState *s;
int action;
Error **errp;
};
static int add_rule(void *opaque, QemuOpts *opts, Error **errp)
static int add_rule(QemuOpts *opts, void *opaque)
{
struct add_rule_data *d = opaque;
BDRVBlkdebugState *s = d->s;
@@ -229,10 +230,10 @@ static int add_rule(void *opaque, QemuOpts *opts, Error **errp)
/* Find the right event for the rule */
event_name = qemu_opt_get(opts, "event");
if (!event_name) {
error_setg(errp, "Missing event name for rule");
error_setg(d->errp, "Missing event name for rule");
return -1;
} else if (get_event_by_name(event_name, &event) < 0) {
error_setg(errp, "Invalid event name \"%s\"", event_name);
error_setg(d->errp, "Invalid event name \"%s\"", event_name);
return -1;
}
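
The two versions of add_rule() above differ only in how an error escapes the callback: one receives an Error ** argument from qemu_opts_foreach(), the other stashes the pointer in the opaque struct because the older iterator signature has no errp. A minimal standalone sketch of the stash-it-in-the-opaque pattern follows; the toy Error type and for_each_item() iterator are hypothetical, not QEMU's APIs.

    #include <stdio.h>

    typedef struct { const char *msg; } Error;   /* toy error object */

    struct cb_data {
        int action;
        Error **errp;        /* error channel smuggled through the opaque */
    };

    /* Iterator with no Error ** parameter of its own. */
    static int for_each_item(int n, int (*cb)(int item, void *opaque), void *opaque)
    {
        for (int i = 0; i < n; i++) {
            if (cb(i, opaque) < 0) {
                return -1;
            }
        }
        return 0;
    }

    static int add_rule_cb(int item, void *opaque)
    {
        struct cb_data *d = opaque;
        if (item == 2) {                          /* pretend item 2 is malformed */
            static Error bad = { "Missing event name for rule" };
            *d->errp = &bad;
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        Error *local_err = NULL;
        struct cb_data d = { .action = 0, .errp = &local_err };

        for_each_item(5, add_rule_cb, &d);
        if (local_err) {
            fprintf(stderr, "error: %s\n", local_err->msg);
            return 1;
        }
        return 0;
    }
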
@@ -318,7 +319,8 @@ static int read_config(BDRVBlkdebugState *s, const char *filename,
d.s = s;
d.action = ACTION_INJECT_ERROR;
qemu_opts_foreach(&inject_error_opts, add_rule, &d, &local_err);
d.errp = &local_err;
qemu_opts_foreach(&inject_error_opts, add_rule, &d, 1);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
@@ -326,7 +328,7 @@ static int read_config(BDRVBlkdebugState *s, const char *filename,
}
d.action = ACTION_SET_STATE;
qemu_opts_foreach(&set_state_opts, add_rule, &d, &local_err);
qemu_opts_foreach(&set_state_opts, add_rule, &d, 1);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
@@ -429,7 +431,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
/* Open the backing file */
assert(bs->file == NULL);
ret = bdrv_open_image(&bs->file, qemu_opt_get(opts, "x-image"), options, "image",
bs, &child_file, false, &local_err);
flags | BDRV_O_PROTOCOL, false, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
goto out;
@@ -719,11 +721,6 @@ static int64_t blkdebug_getlength(BlockDriverState *bs)
return bdrv_getlength(bs->file);
}
static int blkdebug_truncate(BlockDriverState *bs, int64_t offset)
{
return bdrv_truncate(bs->file, offset);
}
static void blkdebug_refresh_filename(BlockDriverState *bs)
{
QDict *opts;
@@ -782,7 +779,6 @@ static BlockDriver bdrv_blkdebug = {
.bdrv_file_open = blkdebug_open,
.bdrv_close = blkdebug_close,
.bdrv_getlength = blkdebug_getlength,
.bdrv_truncate = blkdebug_truncate,
.bdrv_refresh_filename = blkdebug_refresh_filename,
.bdrv_aio_readv = blkdebug_aio_readv,


@@ -125,7 +125,7 @@ static int blkverify_open(BlockDriverState *bs, QDict *options, int flags,
/* Open the raw file */
assert(bs->file == NULL);
ret = bdrv_open_image(&bs->file, qemu_opt_get(opts, "x-raw"), options,
"raw", bs, &child_file, false, &local_err);
"raw", flags | BDRV_O_PROTOCOL, false, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
goto fail;
@@ -134,7 +134,7 @@ static int blkverify_open(BlockDriverState *bs, QDict *options, int flags,
/* Open the test file */
assert(s->test_file == NULL);
ret = bdrv_open_image(&s->test_file, qemu_opt_get(opts, "x-image"), options,
"test", bs, &child_format, false, &local_err);
"test", flags, false, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
s->test_file = NULL;


@@ -515,17 +515,6 @@ int blk_write(BlockBackend *blk, int64_t sector_num, const uint8_t *buf,
return bdrv_write(blk->bs, sector_num, buf, nb_sectors);
}
int blk_write_zeroes(BlockBackend *blk, int64_t sector_num,
int nb_sectors, BdrvRequestFlags flags)
{
int ret = blk_check_request(blk, sector_num, nb_sectors);
if (ret < 0) {
return ret;
}
return bdrv_write_zeroes(blk->bs, sector_num, nb_sectors, flags);
}
static void error_callback_bh(void *opaque)
{
struct BlockBackendAIOCB *acb = opaque;
@@ -700,11 +689,6 @@ int blk_flush_all(void)
return bdrv_flush_all();
}
void blk_drain(BlockBackend *blk)
{
bdrv_drain(blk->bs);
}
void blk_drain_all(void)
{
bdrv_drain_all();


@@ -15,7 +15,6 @@
#include "trace.h"
#include "block/block_int.h"
#include "block/blockjob.h"
#include "qapi/qmp/qerror.h"
#include "qemu/ratelimit.h"
enum {
@@ -187,7 +186,7 @@ static void commit_set_speed(BlockJob *job, int64_t speed, Error **errp)
CommitBlockJob *s = container_of(job, CommitBlockJob, common);
if (speed < 0) {
error_setg(errp, QERR_INVALID_PARAMETER, "speed");
error_set(errp, QERR_INVALID_PARAMETER, "speed");
return;
}
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);


@@ -22,10 +22,8 @@
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "qemu/error-report.h"
#include "block/block_int.h"
#include "qapi/qmp/qbool.h"
#include "qapi/qmp/qstring.h"
#include <curl/curl.h>
// #define DEBUG_CURL
@@ -299,18 +297,6 @@ static void curl_multi_check_completion(BDRVCURLState *s)
/* ACBs for successful messages get completed in curl_read_cb */
if (msg->data.result != CURLE_OK) {
int i;
static int errcount = 100;
/* Don't lose the original error message from curl, since
* it contains extra data.
*/
if (errcount > 0) {
error_report("curl: %s", state->errmsg);
if (--errcount == 0) {
error_report("curl: further errors suppressed");
}
}
for (i = 0; i < CURL_NUM_ACB; i++) {
CURLAIOCB *acb = state->acb[i];
@@ -318,7 +304,7 @@ static void curl_multi_check_completion(BDRVCURLState *s)
continue;
}
acb->common.cb(acb->common.opaque, -EPROTO);
acb->common.cb(acb->common.opaque, -EIO);
qemu_aio_unref(acb);
state->acb[i] = NULL;
}


@@ -24,7 +24,6 @@
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/bswap.h"
#include "qemu/error-report.h"
#include "qemu/module.h"
#include <zlib.h>
#ifdef CONFIG_BZIP2

block/io.c: 2610 changed lines (file diff suppressed because it is too large)


@@ -2,7 +2,7 @@
* QEMU Block driver for iSCSI images
*
* Copyright (c) 2010-2011 Ronnie Sahlberg <ronniesahlberg@gmail.com>
* Copyright (c) 2012-2015 Peter Lieven <pl@kamp.de>
* Copyright (c) 2012-2014 Peter Lieven <pl@kamp.de>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
@@ -38,7 +38,6 @@
#include "qemu/iov.h"
#include "sysemu/sysemu.h"
#include "qmp-commands.h"
#include "qapi/qmp/qstring.h"
#include <iscsi/iscsi.h>
#include <iscsi/scsi-lowlevel.h>
@@ -58,6 +57,9 @@ typedef struct IscsiLun {
int events;
QEMUTimer *nop_timer;
QEMUTimer *event_timer;
uint8_t lbpme;
uint8_t lbprz;
uint8_t has_write_same;
struct scsi_inquiry_logical_block_provisioning lbp;
struct scsi_inquiry_block_limits bl;
unsigned char *zeroblock;
@@ -65,12 +67,6 @@ typedef struct IscsiLun {
int cluster_sectors;
bool use_16_for_rw;
bool write_protected;
bool lbpme;
bool lbprz;
bool dpofua;
bool has_write_same;
bool force_next_flush;
bool request_timed_out;
} IscsiLun;
typedef struct IscsiTask {
@@ -83,7 +79,6 @@ typedef struct IscsiTask {
QEMUBH *bh;
IscsiLun *iscsilun;
QEMUTimer retry_timer;
bool force_next_flush;
} IscsiTask;
typedef struct IscsiAIOCB {
@@ -101,12 +96,11 @@ typedef struct IscsiAIOCB {
#endif
} IscsiAIOCB;
/* libiscsi uses time_t so its enough to process events every second */
#define EVENT_INTERVAL 1000
#define EVENT_INTERVAL 250
#define NOP_INTERVAL 5000
#define MAX_NOP_FAILURES 3
#define ISCSI_CMD_RETRIES ARRAY_SIZE(iscsi_retry_times)
static const unsigned iscsi_retry_times[] = {8, 32, 128, 512, 2048, 8192, 32768};
static const unsigned iscsi_retry_times[] = {8, 32, 128, 512, 2048};
/* this threshold is a trade-off knob to choose between
* the potential additional overhead of an extra GET_LBA_STATUS request
@@ -169,19 +163,6 @@ static inline unsigned exp_random(double mean)
return -mean * log((double)rand() / RAND_MAX);
}
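
exp_random() draws an exponentially distributed delay whose mean comes from the retry table, so successive retries back off roughly geometrically with added jitter. A standalone sketch of how the sampled delays grow across retries (the table is copied from this hunk; a small guard against log(0) is added, and the seeding is illustrative). Build with -lm.

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Mean delays (ms) per retry, as in the hunk above. */
    static const unsigned retry_times[] = {8, 32, 128, 512, 2048, 8192, 32768};

    /* Exponentially distributed sample with the given mean (guarding log(0)). */
    static unsigned exp_random(double mean)
    {
        double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        return (unsigned)(-mean * log(u));
    }

    int main(void)
    {
        srand((unsigned)time(NULL));
        for (unsigned retry = 1;
             retry <= sizeof(retry_times) / sizeof(retry_times[0]); retry++) {
            unsigned delay = exp_random(retry_times[retry - 1]);
            printf("retry #%u: mean %u ms, sampled delay %u ms\n",
                   retry, retry_times[retry - 1], delay);
        }
        return 0;
    }
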
/* SCSI_STATUS_TASK_SET_FULL and SCSI_STATUS_TIMEOUT were introduced
* in libiscsi 1.10.0 as part of an enum. The LIBISCSI_API_VERSION
* macro was introduced in 1.11.0. So use the API_VERSION macro as
* a hint that the macros are defined and define them ourselves
* otherwise to keep the required libiscsi version at 1.9.0 */
#if !defined(LIBISCSI_API_VERSION)
#define QEMU_SCSI_STATUS_TASK_SET_FULL 0x28
#define QEMU_SCSI_STATUS_TIMEOUT 0x0f000002
#else
#define QEMU_SCSI_STATUS_TASK_SET_FULL SCSI_STATUS_TASK_SET_FULL
#define QEMU_SCSI_STATUS_TIMEOUT SCSI_STATUS_TIMEOUT
#endif
static void
iscsi_co_generic_cb(struct iscsi_context *iscsi, int status,
void *command_data, void *opaque)
@@ -202,19 +183,10 @@ iscsi_co_generic_cb(struct iscsi_context *iscsi, int status,
iTask->do_retry = 1;
goto out;
}
if (status == SCSI_STATUS_BUSY ||
status == QEMU_SCSI_STATUS_TIMEOUT ||
status == QEMU_SCSI_STATUS_TASK_SET_FULL) {
if (status == SCSI_STATUS_BUSY) {
unsigned retry_time =
exp_random(iscsi_retry_times[iTask->retries - 1]);
if (status == QEMU_SCSI_STATUS_TIMEOUT) {
/* make sure the request is rescheduled AFTER the
* reconnect is initiated */
retry_time = EVENT_INTERVAL * 2;
iTask->iscsilun->request_timed_out = true;
}
error_report("iSCSI Busy/TaskSetFull/TimeOut"
" (retry #%u in %u ms): %s",
error_report("iSCSI Busy (retry #%u in %u ms): %s",
iTask->retries, retry_time,
iscsi_get_error(iscsi));
aio_timer_init(iTask->iscsilun->aio_context,
@@ -227,8 +199,6 @@ iscsi_co_generic_cb(struct iscsi_context *iscsi, int status,
}
}
error_report("iSCSI Failure: %s", iscsi_get_error(iscsi));
} else {
iTask->iscsilun->force_next_flush |= iTask->force_next_flush;
}
out:
@@ -298,26 +268,20 @@ iscsi_set_events(IscsiLun *iscsilun)
iscsilun);
iscsilun->events = ev;
}
/* newer versions of libiscsi may return zero events. In this
* case start a timer to ensure we are able to return to service
* once this situation changes. */
if (!ev) {
timer_mod(iscsilun->event_timer,
qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + EVENT_INTERVAL);
}
}
static void iscsi_timed_check_events(void *opaque)
static void iscsi_timed_set_events(void *opaque)
{
IscsiLun *iscsilun = opaque;
/* check for timed out requests */
iscsi_service(iscsilun->iscsi, 0);
if (iscsilun->request_timed_out) {
iscsilun->request_timed_out = false;
iscsi_reconnect(iscsilun->iscsi);
}
/* newer versions of libiscsi may return zero events. Ensure we are able
* to return to service once this situation changes. */
iscsi_set_events(iscsilun);
timer_mod(iscsilun->event_timer,
qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + EVENT_INTERVAL);
}
static void
@@ -405,7 +369,6 @@ static int coroutine_fn iscsi_co_writev(BlockDriverState *bs,
struct IscsiTask iTask;
uint64_t lba;
uint32_t num_sectors;
int fua;
if (!is_request_lun_aligned(sector_num, nb_sectors, iscsilun)) {
return -EINVAL;
@@ -421,17 +384,15 @@ static int coroutine_fn iscsi_co_writev(BlockDriverState *bs,
num_sectors = sector_qemu2lun(nb_sectors, iscsilun);
iscsi_co_init_iscsitask(iscsilun, &iTask);
retry:
fua = iscsilun->dpofua && !bs->enable_write_cache;
iTask.force_next_flush = !fua;
if (iscsilun->use_16_for_rw) {
iTask.task = iscsi_write16_task(iscsilun->iscsi, iscsilun->lun, lba,
NULL, num_sectors * iscsilun->block_size,
iscsilun->block_size, 0, 0, fua, 0, 0,
iscsilun->block_size, 0, 0, 0, 0, 0,
iscsi_co_generic_cb, &iTask);
} else {
iTask.task = iscsi_write10_task(iscsilun->iscsi, iscsilun->lun, lba,
NULL, num_sectors * iscsilun->block_size,
iscsilun->block_size, 0, 0, fua, 0, 0,
iscsilun->block_size, 0, 0, 0, 0, 0,
iscsi_co_generic_cb, &iTask);
}
if (iTask.task == NULL) {
@@ -499,7 +460,7 @@ static int64_t coroutine_fn iscsi_co_get_block_status(BlockDriverState *bs,
*pnum = nb_sectors;
/* LUN does not support logical block provisioning */
if (!iscsilun->lbpme) {
if (iscsilun->lbpme == 0) {
goto out;
}
@@ -655,12 +616,12 @@ static int coroutine_fn iscsi_co_flush(BlockDriverState *bs)
IscsiLun *iscsilun = bs->opaque;
struct IscsiTask iTask;
if (!iscsilun->force_next_flush) {
if (bs->sg) {
return 0;
}
iscsilun->force_next_flush = false;
iscsi_co_init_iscsitask(iscsilun, &iTask);
retry:
if (iscsi_synchronizecache10_task(iscsilun->iscsi, iscsilun->lun, 0, 0, 0,
0, iscsi_co_generic_cb, &iTask) == NULL) {
@@ -956,7 +917,6 @@ coroutine_fn iscsi_co_write_zeroes(BlockDriverState *bs, int64_t sector_num,
}
iscsi_co_init_iscsitask(iscsilun, &iTask);
iTask.force_next_flush = true;
retry:
if (use_16_for_ws) {
iTask.task = iscsi_writesame16_task(iscsilun->iscsi, iscsilun->lun, lba,
@@ -1120,37 +1080,16 @@ static char *parse_initiator_name(const char *target)
return iscsi_name;
}
static int parse_timeout(const char *target)
{
QemuOptsList *list;
QemuOpts *opts;
const char *timeout;
list = qemu_find_opts("iscsi");
if (list) {
opts = qemu_opts_find(list, target);
if (!opts) {
opts = QTAILQ_FIRST(&list->head);
}
if (opts) {
timeout = qemu_opt_get(opts, "timeout");
if (timeout) {
return atoi(timeout);
}
}
}
return 0;
}
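
parse_timeout() reads a "timeout" value from the "iscsi" option group, falling back to the group's first entry when no section matches the target. Assuming the usual mapping of that group to the -iscsi command-line option, using the timeout shown here would look roughly like the following (initiator and target names are placeholders):

    qemu-system-x86_64 \
        -iscsi initiator-name=iqn.2015-08.org.example:client,timeout=30 \
        -drive file=iscsi://192.0.2.1/iqn.2015-08.org.example:storage/0
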
static void iscsi_nop_timed_event(void *opaque)
{
IscsiLun *iscsilun = opaque;
if (iscsi_get_nops_in_flight(iscsilun->iscsi) >= MAX_NOP_FAILURES) {
if (iscsi_get_nops_in_flight(iscsilun->iscsi) > MAX_NOP_FAILURES) {
error_report("iSCSI: NOP timeout. Reconnecting...");
iscsilun->request_timed_out = true;
} else if (iscsi_nop_out_async(iscsilun->iscsi, NULL, NULL, 0, NULL) != 0) {
iscsi_reconnect(iscsilun->iscsi);
}
if (iscsi_nop_out_async(iscsilun->iscsi, NULL, NULL, 0, NULL) != 0) {
error_report("iSCSI: failed to sent NOP-Out. Disabling NOP messages.");
return;
}
@@ -1182,8 +1121,8 @@ static void iscsi_readcapacity_sync(IscsiLun *iscsilun, Error **errp)
} else {
iscsilun->block_size = rc16->block_length;
iscsilun->num_blocks = rc16->returned_lba + 1;
iscsilun->lbpme = !!rc16->lbpme;
iscsilun->lbprz = !!rc16->lbprz;
iscsilun->lbpme = rc16->lbpme;
iscsilun->lbprz = rc16->lbprz;
iscsilun->use_16_for_rw = (rc16->returned_lba > 0xffffffff);
}
}
@@ -1308,21 +1247,17 @@ static void iscsi_attach_aio_context(BlockDriverState *bs,
timer_mod(iscsilun->nop_timer,
qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + NOP_INTERVAL);
/* Set up a timer for periodic calls to iscsi_set_events and to
* scan for command timeout */
/* Prepare a timer for a delayed call to iscsi_set_events */
iscsilun->event_timer = aio_timer_new(iscsilun->aio_context,
QEMU_CLOCK_REALTIME, SCALE_MS,
iscsi_timed_check_events, iscsilun);
timer_mod(iscsilun->event_timer,
qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + EVENT_INTERVAL);
iscsi_timed_set_events, iscsilun);
}
static void iscsi_modesense_sync(IscsiLun *iscsilun)
static bool iscsi_is_write_protected(IscsiLun *iscsilun)
{
struct scsi_task *task;
struct scsi_mode_sense *ms = NULL;
iscsilun->write_protected = false;
iscsilun->dpofua = false;
bool wrprotected = false;
task = iscsi_modesense6_sync(iscsilun->iscsi, iscsilun->lun,
1, SCSI_MODESENSE_PC_CURRENT,
@@ -1343,13 +1278,13 @@ static void iscsi_modesense_sync(IscsiLun *iscsilun)
iscsi_get_error(iscsilun->iscsi));
goto out;
}
iscsilun->write_protected = ms->device_specific_parameter & 0x80;
iscsilun->dpofua = ms->device_specific_parameter & 0x10;
wrprotected = ms->device_specific_parameter & 0x80;
out:
if (task) {
scsi_free_scsi_task(task);
}
return wrprotected;
}
/*
@@ -1369,7 +1304,14 @@ static int iscsi_open(BlockDriverState *bs, QDict *options, int flags,
QemuOpts *opts;
Error *local_err = NULL;
const char *filename;
int i, ret = 0, timeout = 0;
int i, ret = 0;
if ((BDRV_SECTOR_SIZE % 512) != 0) {
error_setg(errp, "iSCSI: Invalid BDRV_SECTOR_SIZE. "
"BDRV_SECTOR_SIZE(%lld) is not a multiple "
"of 512", BDRV_SECTOR_SIZE);
return -EINVAL;
}
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
@@ -1439,16 +1381,6 @@ static int iscsi_open(BlockDriverState *bs, QDict *options, int flags,
goto out;
}
/* timeout handling is broken in libiscsi before 1.15.0 */
timeout = parse_timeout(iscsi_url->target);
#if defined(LIBISCSI_API_VERSION) && LIBISCSI_API_VERSION >= 20150621
iscsi_set_timeout(iscsi, timeout);
#else
if (timeout) {
error_report("iSCSI: ignoring timeout value for libiscsi <1.15.0");
}
#endif
if (iscsi_full_connect_sync(iscsi, iscsi_url->portal, iscsi_url->lun) != 0) {
error_setg(errp, "iSCSI: Failed to connect to LUN : %s",
iscsi_get_error(iscsi));
@@ -1471,8 +1403,7 @@ static int iscsi_open(BlockDriverState *bs, QDict *options, int flags,
scsi_free_scsi_task(task);
task = NULL;
iscsi_modesense_sync(iscsilun);
iscsilun->write_protected = iscsi_is_write_protected(iscsilun);
/* Check the write protect flag of the LUN if we want to write */
if (iscsilun->type == TYPE_DISK && (flags & BDRV_O_RDWR) &&
iscsilun->write_protected) {
@@ -1550,7 +1481,7 @@ static int iscsi_open(BlockDriverState *bs, QDict *options, int flags,
iscsilun->bl.opt_unmap_gran * iscsilun->block_size <= 16 * 1024 * 1024) {
iscsilun->cluster_sectors = (iscsilun->bl.opt_unmap_gran *
iscsilun->block_size) >> BDRV_SECTOR_BITS;
if (iscsilun->lbprz) {
if (iscsilun->lbprz && !(bs->open_flags & BDRV_O_NOCACHE)) {
iscsilun->allocationmap = iscsi_allocationmap_init(iscsilun);
if (iscsilun->allocationmap == NULL) {
ret = -ENOMEM;
@@ -1724,7 +1655,7 @@ out:
static int iscsi_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
{
IscsiLun *iscsilun = bs->opaque;
bdi->unallocated_blocks_are_zero = iscsilun->lbprz;
bdi->unallocated_blocks_are_zero = !!iscsilun->lbprz;
bdi->can_write_zeroes_with_unmap = iscsilun->lbprz && iscsilun->lbp.lbpws;
bdi->cluster_size = iscsilun->cluster_sectors * BDRV_SECTOR_SIZE;
return 0;
@@ -1797,10 +1728,6 @@ static QemuOptsList qemu_iscsi_opts = {
.name = "initiator-name",
.type = QEMU_OPT_STRING,
.help = "Initiator iqn name to use when connecting",
},{
.name = "timeout",
.type = QEMU_OPT_NUMBER,
.help = "Request timeout in seconds (default 0 = no timeout)",
},
{ /* end of list */ }
},


@@ -14,13 +14,11 @@
#include "trace.h"
#include "block/blockjob.h"
#include "block/block_int.h"
#include "qapi/qmp/qerror.h"
#include "qemu/ratelimit.h"
#include "qemu/bitmap.h"
#define SLICE_TIME 100000000ULL /* ns */
#define MAX_IN_FLIGHT 16
#define DEFAULT_MIRROR_BUF_SIZE (10 << 20)
/* The mirroring buffer is a list of granularity-sized chunks.
* Free chunks are organized in a list.
@@ -128,9 +126,11 @@ static void mirror_write_complete(void *opaque, int ret)
MirrorOp *op = opaque;
MirrorBlockJob *s = op->s;
if (ret < 0) {
BlockDriverState *source = s->common.bs;
BlockErrorAction action;
bdrv_set_dirty_bitmap(s->dirty_bitmap, op->sector_num, op->nb_sectors);
bdrv_set_dirty_bitmap(source, s->dirty_bitmap, op->sector_num,
op->nb_sectors);
action = mirror_error_action(s, false, -ret);
if (action == BLOCK_ERROR_ACTION_REPORT && s->ret >= 0) {
s->ret = ret;
@@ -144,9 +144,11 @@ static void mirror_read_complete(void *opaque, int ret)
MirrorOp *op = opaque;
MirrorBlockJob *s = op->s;
if (ret < 0) {
BlockDriverState *source = s->common.bs;
BlockErrorAction action;
bdrv_set_dirty_bitmap(s->dirty_bitmap, op->sector_num, op->nb_sectors);
bdrv_set_dirty_bitmap(source, s->dirty_bitmap, op->sector_num,
op->nb_sectors);
action = mirror_error_action(s, true, -ret);
if (action == BLOCK_ERROR_ACTION_REPORT && s->ret >= 0) {
s->ret = ret;
@@ -171,9 +173,10 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
s->sector_num = hbitmap_iter_next(&s->hbi);
if (s->sector_num < 0) {
bdrv_dirty_iter_init(s->dirty_bitmap, &s->hbi);
bdrv_dirty_iter_init(source, s->dirty_bitmap, &s->hbi);
s->sector_num = hbitmap_iter_next(&s->hbi);
trace_mirror_restart_iter(s, bdrv_get_dirty_count(s->dirty_bitmap));
trace_mirror_restart_iter(s,
bdrv_get_dirty_count(source, s->dirty_bitmap));
assert(s->sector_num >= 0);
}
@@ -288,7 +291,8 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
next_sector += sectors_per_chunk;
}
bdrv_reset_dirty_bitmap(s->dirty_bitmap, sector_num, nb_sectors);
bdrv_reset_dirty_bitmap(source, s->dirty_bitmap, sector_num,
nb_sectors);
/* Copy the dirty cluster. */
s->in_flight++;
@@ -388,7 +392,7 @@ static void coroutine_fn mirror_run(void *opaque)
MirrorBlockJob *s = opaque;
MirrorExitData *data;
BlockDriverState *bs = s->common.bs;
int64_t sector_num, end, length;
int64_t sector_num, end, sectors_per_chunk, length;
uint64_t last_pause_ns;
BlockDriverInfo bdi;
char backing_filename[2]; /* we only need 2 characters because we are only
@@ -442,28 +446,16 @@ static void coroutine_fn mirror_run(void *opaque)
goto immediate_exit;
}
sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
mirror_free_init(s);
last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
if (!s->is_none_mode) {
/* First part, loop on the sectors and initialize the dirty bitmap. */
BlockDriverState *base = s->base;
for (sector_num = 0; sector_num < end; ) {
/* Just to make sure we are not exceeding int limit. */
int nb_sectors = MIN(INT_MAX >> BDRV_SECTOR_BITS,
end - sector_num);
int64_t now = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
if (now - last_pause_ns > SLICE_TIME) {
last_pause_ns = now;
block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, 0);
}
if (block_job_is_cancelled(&s->common)) {
goto immediate_exit;
}
ret = bdrv_is_allocated_above(bs, base, sector_num, nb_sectors, &n);
int64_t next = (sector_num | (sectors_per_chunk - 1)) + 1;
ret = bdrv_is_allocated_above(bs, base,
sector_num, next - sector_num, &n);
if (ret < 0) {
goto immediate_exit;
@@ -471,13 +463,16 @@ static void coroutine_fn mirror_run(void *opaque)
assert(n > 0);
if (ret == 1) {
bdrv_set_dirty_bitmap(s->dirty_bitmap, sector_num, n);
bdrv_set_dirty_bitmap(bs, s->dirty_bitmap, sector_num, n);
sector_num = next;
} else {
sector_num += n;
}
sector_num += n;
}
}
bdrv_dirty_iter_init(s->dirty_bitmap, &s->hbi);
bdrv_dirty_iter_init(bs, s->dirty_bitmap, &s->hbi);
last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
for (;;) {
uint64_t delay_ns = 0;
int64_t cnt;
@@ -488,7 +483,7 @@ static void coroutine_fn mirror_run(void *opaque)
goto immediate_exit;
}
cnt = bdrv_get_dirty_count(s->dirty_bitmap);
cnt = bdrv_get_dirty_count(bs, s->dirty_bitmap);
/* s->common.offset contains the number of bytes already processed so
* far, cnt is the number of dirty sectors remaining and
* s->sectors_in_flight is the number of sectors currently being
@@ -497,7 +492,7 @@ static void coroutine_fn mirror_run(void *opaque)
(cnt + s->sectors_in_flight) * BDRV_SECTOR_SIZE;
/* Note that even when no rate limit is applied we need to yield
* periodically with no pending I/O so that bdrv_drain_all() returns.
* periodically with no pending I/O so that qemu_aio_flush() returns.
* We do so every SLICE_TIME nanoseconds, or when there is an error,
* or when the source is clean, whichever comes first.
*/
@@ -510,6 +505,9 @@ static void coroutine_fn mirror_run(void *opaque)
continue;
} else if (cnt != 0) {
delay_ns = mirror_iteration(s);
if (delay_ns == 0) {
continue;
}
}
}
@@ -535,7 +533,7 @@ static void coroutine_fn mirror_run(void *opaque)
should_complete = s->should_complete ||
block_job_is_cancelled(&s->common);
cnt = bdrv_get_dirty_count(s->dirty_bitmap);
cnt = bdrv_get_dirty_count(bs, s->dirty_bitmap);
}
}
@@ -550,7 +548,7 @@ static void coroutine_fn mirror_run(void *opaque)
*/
trace_mirror_before_drain(s, cnt);
bdrv_drain(bs);
cnt = bdrv_get_dirty_count(s->dirty_bitmap);
cnt = bdrv_get_dirty_count(bs, s->dirty_bitmap);
}
ret = 0;
@@ -601,7 +599,7 @@ static void mirror_set_speed(BlockJob *job, int64_t speed, Error **errp)
MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
if (speed < 0) {
error_setg(errp, QERR_INVALID_PARAMETER, "speed");
error_set(errp, QERR_INVALID_PARAMETER, "speed");
return;
}
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
@@ -626,8 +624,8 @@ static void mirror_complete(BlockJob *job, Error **errp)
return;
}
if (!s->synced) {
error_setg(errp, QERR_BLOCK_JOB_NOT_READY,
bdrv_get_device_name(job->bs));
error_set(errp, QERR_BLOCK_JOB_NOT_READY,
bdrv_get_device_name(job->bs));
return;
}
@@ -653,7 +651,7 @@ static void mirror_complete(BlockJob *job, Error **errp)
}
s->should_complete = true;
block_job_enter(&s->common);
block_job_resume(job);
}
static const BlockJobDriver mirror_job_driver = {
@@ -675,7 +673,7 @@ static const BlockJobDriver commit_active_job_driver = {
static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
const char *replaces,
int64_t speed, uint32_t granularity,
int64_t speed, int64_t granularity,
int64_t buf_size,
BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
@@ -688,7 +686,15 @@ static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
MirrorBlockJob *s;
if (granularity == 0) {
granularity = bdrv_get_default_bitmap_granularity(target);
/* Choose the default granularity based on the target file's cluster
* size, clamped between 4k and 64k. */
BlockDriverInfo bdi;
if (bdrv_get_info(target, &bdi) >= 0 && bdi.cluster_size != 0) {
granularity = MAX(4096, bdi.cluster_size);
granularity = MIN(65536, granularity);
} else {
granularity = 65536;
}
}
assert ((granularity & (granularity - 1)) == 0);
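
One side of this hunk derives the default mirror granularity from the target image's cluster size, clamped to the 4 KiB-64 KiB range, with a 64 KiB fallback when the cluster size is unknown. A tiny standalone check of that clamp for a few illustrative cluster sizes:

    #include <stdio.h>

    #define MAX(a, b) ((a) > (b) ? (a) : (b))
    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    static int default_granularity(int cluster_size)
    {
        if (cluster_size <= 0) {
            return 65536;                 /* unknown cluster size */
        }
        return MIN(65536, MAX(4096, cluster_size));
    }

    int main(void)
    {
        int sizes[] = {0, 512, 4096, 65536, 2 * 1024 * 1024};
        for (unsigned i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
            printf("cluster %7d -> granularity %5d\n",
                   sizes[i], default_granularity(sizes[i]));
        }
        return 0;
    }
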
@@ -696,18 +702,10 @@ static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
if ((on_source_error == BLOCKDEV_ON_ERROR_STOP ||
on_source_error == BLOCKDEV_ON_ERROR_ENOSPC) &&
!bdrv_iostatus_is_enabled(bs)) {
error_setg(errp, QERR_INVALID_PARAMETER, "on-source-error");
error_set(errp, QERR_INVALID_PARAMETER, "on-source-error");
return;
}
if (buf_size < 0) {
error_setg(errp, "Invalid parameter 'buf-size'");
return;
}
if (buf_size == 0) {
buf_size = DEFAULT_MIRROR_BUF_SIZE;
}
s = block_job_create(driver, bs, speed, cb, opaque, errp);
if (!s) {
@@ -721,13 +719,11 @@ static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
s->is_none_mode = is_none_mode;
s->base = base;
s->granularity = granularity;
s->buf_size = ROUND_UP(buf_size, granularity);
s->buf_size = MAX(buf_size, granularity);
s->unmap = unmap;
s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, NULL, errp);
s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, errp);
if (!s->dirty_bitmap) {
g_free(s->replaces);
block_job_release(bs);
return;
}
bdrv_set_enable_write_cache(s->target, true);
@@ -740,7 +736,7 @@ static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
void mirror_start(BlockDriverState *bs, BlockDriverState *target,
const char *replaces,
int64_t speed, uint32_t granularity, int64_t buf_size,
int64_t speed, int64_t granularity, int64_t buf_size,
MirrorSyncMode mode, BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
bool unmap,
@@ -750,10 +746,6 @@ void mirror_start(BlockDriverState *bs, BlockDriverState *target,
bool is_none_mode;
BlockDriverState *base;
if (mode == MIRROR_SYNC_MODE_INCREMENTAL) {
error_setg(errp, "Sync mode 'incremental' not supported");
return;
}
is_none_mode = mode == MIRROR_SYNC_MODE_NONE;
base = mode == MIRROR_SYNC_MODE_TOP ? bs->backing_hd : NULL;
mirror_start_job(bs, target, replaces,


@@ -12,11 +12,8 @@
#include "block/block_int.h"
#define NULL_OPT_LATENCY "latency-ns"
typedef struct {
int64_t length;
int64_t latency_ns;
} BDRVNullState;
static QemuOptsList runtime_opts = {
@@ -33,12 +30,6 @@ static QemuOptsList runtime_opts = {
.type = QEMU_OPT_SIZE,
.help = "size of the null block",
},
{
.name = NULL_OPT_LATENCY,
.type = QEMU_OPT_NUMBER,
.help = "nanoseconds (approximated) to wait "
"before completing request",
},
{ /* end of list */ }
},
};
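
The latency-ns option in this hunk makes the null driver delay request completion by a fixed number of nanoseconds, which is handy for benchmarking the block layer against a predictable backend. On the side of the tree that has the option, attaching such a device would presumably look something like the following (the id and values are illustrative, and this assumes -drive passes driver-specific options through):

    qemu-system-x86_64 -drive if=none,id=nulldev,driver=null-co,size=1G,latency-ns=100000
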
@@ -48,20 +39,13 @@ static int null_file_open(BlockDriverState *bs, QDict *options, int flags,
{
QemuOpts *opts;
BDRVNullState *s = bs->opaque;
int ret = 0;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &error_abort);
s->length =
qemu_opt_get_size(opts, BLOCK_OPT_SIZE, 1 << 30);
s->latency_ns =
qemu_opt_get_number(opts, NULL_OPT_LATENCY, 0);
if (s->latency_ns < 0) {
error_setg(errp, "latency-ns is invalid");
ret = -EINVAL;
}
qemu_opts_del(opts);
return ret;
return 0;
}
static void null_close(BlockDriverState *bs)
@@ -74,40 +58,28 @@ static int64_t null_getlength(BlockDriverState *bs)
return s->length;
}
static coroutine_fn int null_co_common(BlockDriverState *bs)
{
BDRVNullState *s = bs->opaque;
if (s->latency_ns) {
co_aio_sleep_ns(bdrv_get_aio_context(bs), QEMU_CLOCK_REALTIME,
s->latency_ns);
}
return 0;
}
static coroutine_fn int null_co_readv(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
QEMUIOVector *qiov)
{
return null_co_common(bs);
return 0;
}
static coroutine_fn int null_co_writev(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
QEMUIOVector *qiov)
{
return null_co_common(bs);
return 0;
}
static coroutine_fn int null_co_flush(BlockDriverState *bs)
{
return null_co_common(bs);
return 0;
}
typedef struct {
BlockAIOCB common;
QEMUBH *bh;
QEMUTimer timer;
} NullAIOCB;
static const AIOCBInfo null_aiocb_info = {
@@ -122,33 +94,15 @@ static void null_bh_cb(void *opaque)
qemu_aio_unref(acb);
}
static void null_timer_cb(void *opaque)
{
NullAIOCB *acb = opaque;
acb->common.cb(acb->common.opaque, 0);
timer_deinit(&acb->timer);
qemu_aio_unref(acb);
}
static inline BlockAIOCB *null_aio_common(BlockDriverState *bs,
BlockCompletionFunc *cb,
void *opaque)
{
NullAIOCB *acb;
BDRVNullState *s = bs->opaque;
acb = qemu_aio_get(&null_aiocb_info, bs, cb, opaque);
/* Only emulate latency after vcpu is running. */
if (s->latency_ns) {
aio_timer_init(bdrv_get_aio_context(bs), &acb->timer,
QEMU_CLOCK_REALTIME, SCALE_NS,
null_timer_cb, acb);
timer_mod_ns(&acb->timer,
qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + s->latency_ns);
} else {
acb->bh = aio_bh_new(bdrv_get_aio_context(bs), null_bh_cb, acb);
qemu_bh_schedule(acb->bh);
}
acb->bh = aio_bh_new(bdrv_get_aio_context(bs), null_bh_cb, acb);
qemu_bh_schedule(acb->bh);
return &acb->common;
}
@@ -177,12 +131,6 @@ static BlockAIOCB *null_aio_flush(BlockDriverState *bs,
return null_aio_common(bs, cb, opaque);
}
static int null_reopen_prepare(BDRVReopenState *reopen_state,
BlockReopenQueue *queue, Error **errp)
{
return 0;
}
static BlockDriver bdrv_null_co = {
.format_name = "null-co",
.protocol_name = "null-co",
@@ -195,7 +143,6 @@ static BlockDriver bdrv_null_co = {
.bdrv_co_readv = null_co_readv,
.bdrv_co_writev = null_co_writev,
.bdrv_co_flush_to_disk = null_co_flush,
.bdrv_reopen_prepare = null_reopen_prepare,
};
static BlockDriver bdrv_null_aio = {
@@ -210,7 +157,6 @@ static BlockDriver bdrv_null_aio = {
.bdrv_aio_readv = null_aio_readv,
.bdrv_aio_writev = null_aio_writev,
.bdrv_aio_flush = null_aio_flush,
.bdrv_reopen_prepare = null_reopen_prepare,
};
static void bdrv_null_init(void)


@@ -2,12 +2,8 @@
* Block driver for Parallels disk image format
*
* Copyright (c) 2007 Alex Beregszaszi
* Copyright (c) 2015 Denis V. Lunev <den@openvz.org>
*
* This code was originally based on comparing different disk images created
* by Parallels. Currently it is based on opened OpenVZ sources
* available at
* http://git.openvz.org/?p=ploop;a=summary
* This code is based on comparing different disk images created by Parallels.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
@@ -30,539 +26,63 @@
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "qemu/bitmap.h"
#include "qapi/util.h"
/**************************************************************/
#define HEADER_MAGIC "WithoutFreeSpace"
#define HEADER_MAGIC2 "WithouFreSpacExt"
#define HEADER_VERSION 2
#define HEADER_INUSE_MAGIC (0x746F6E59)
#define DEFAULT_CLUSTER_SIZE 1048576 /* 1 MiB */
#define HEADER_SIZE 64
// always little-endian
typedef struct ParallelsHeader {
struct parallels_header {
char magic[16]; // "WithoutFreeSpace"
uint32_t version;
uint32_t heads;
uint32_t cylinders;
uint32_t tracks;
uint32_t bat_entries;
uint32_t catalog_entries;
uint64_t nb_sectors;
uint32_t inuse;
uint32_t data_off;
char padding[12];
} QEMU_PACKED ParallelsHeader;
typedef enum ParallelsPreallocMode {
PRL_PREALLOC_MODE_FALLOCATE = 0,
PRL_PREALLOC_MODE_TRUNCATE = 1,
PRL_PREALLOC_MODE_MAX = 2,
} ParallelsPreallocMode;
static const char *prealloc_mode_lookup[] = {
"falloc",
"truncate",
NULL,
};
} QEMU_PACKED;
typedef struct BDRVParallelsState {
/** Locking is conservative, the lock protects
* - image file extending (truncate, fallocate)
* - any access to block allocation table
*/
CoMutex lock;
ParallelsHeader *header;
uint32_t header_size;
bool header_unclean;
unsigned long *bat_dirty_bmap;
unsigned int bat_dirty_block;
uint32_t *bat_bitmap;
unsigned int bat_size;
int64_t data_end;
uint64_t prealloc_size;
ParallelsPreallocMode prealloc_mode;
uint32_t *catalog_bitmap;
unsigned int catalog_size;
unsigned int tracks;
unsigned int off_multiplier;
} BDRVParallelsState;
#define PARALLELS_OPT_PREALLOC_MODE "prealloc-mode"
#define PARALLELS_OPT_PREALLOC_SIZE "prealloc-size"
static QemuOptsList parallels_runtime_opts = {
.name = "parallels",
.head = QTAILQ_HEAD_INITIALIZER(parallels_runtime_opts.head),
.desc = {
{
.name = PARALLELS_OPT_PREALLOC_SIZE,
.type = QEMU_OPT_SIZE,
.help = "Preallocation size on image expansion",
.def_value_str = "128MiB",
},
{
.name = PARALLELS_OPT_PREALLOC_MODE,
.type = QEMU_OPT_STRING,
.help = "Preallocation mode on image expansion "
"(allowed values: falloc, truncate)",
.def_value_str = "falloc",
},
{ /* end of list */ },
},
};
static int64_t bat2sect(BDRVParallelsState *s, uint32_t idx)
static int parallels_probe(const uint8_t *buf, int buf_size, const char *filename)
{
return (uint64_t)le32_to_cpu(s->bat_bitmap[idx]) * s->off_multiplier;
}
const struct parallels_header *ph = (const void *)buf;
static uint32_t bat_entry_off(uint32_t idx)
{
return sizeof(ParallelsHeader) + sizeof(uint32_t) * idx;
}
static int64_t seek_to_sector(BDRVParallelsState *s, int64_t sector_num)
{
uint32_t index, offset;
index = sector_num / s->tracks;
offset = sector_num % s->tracks;
/* not allocated */
if ((index >= s->bat_size) || (s->bat_bitmap[index] == 0)) {
return -1;
}
return bat2sect(s, index) + offset;
}
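
seek_to_sector() and bat2sect() above map a guest sector to a BAT (block allocation table) entry: the index is sector_num / tracks, the remainder is the offset inside the cluster, and the stored entry is scaled by off_multiplier to get the host position. A standalone worked example with illustrative numbers:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t tracks = 2048;            /* sectors per cluster (1 MiB clusters) */
        uint32_t off_multiplier = 1;       /* as stored in the image header */
        uint32_t bat_entry = 6144;         /* example raw BAT value for this index */

        int64_t sector_num = 5000;
        uint32_t index  = sector_num / tracks;        /* 2 */
        uint32_t offset = sector_num % tracks;        /* 904 */

        /* bat2sect(): host sector where the cluster starts */
        int64_t cluster_start = (uint64_t)bat_entry * off_multiplier;
        printf("guest sector %lld -> BAT index %u, offset %u, host sector %lld\n",
               (long long)sector_num, index, offset,
               (long long)(cluster_start + offset));                 /* 7048 */
        return 0;
    }
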
static int cluster_remainder(BDRVParallelsState *s, int64_t sector_num,
int nb_sectors)
{
int ret = s->tracks - sector_num % s->tracks;
return MIN(nb_sectors, ret);
}
static int64_t block_status(BDRVParallelsState *s, int64_t sector_num,
int nb_sectors, int *pnum)
{
int64_t start_off = -2, prev_end_off = -2;
*pnum = 0;
while (nb_sectors > 0 || start_off == -2) {
int64_t offset = seek_to_sector(s, sector_num);
int to_end;
if (start_off == -2) {
start_off = offset;
prev_end_off = offset;
} else if (offset != prev_end_off) {
break;
}
to_end = cluster_remainder(s, sector_num, nb_sectors);
nb_sectors -= to_end;
sector_num += to_end;
*pnum += to_end;
if (offset > 0) {
prev_end_off += to_end;
}
}
return start_off;
}
static int64_t allocate_clusters(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum)
{
BDRVParallelsState *s = bs->opaque;
uint32_t idx, to_allocate, i;
int64_t pos, space;
pos = block_status(s, sector_num, nb_sectors, pnum);
if (pos > 0) {
return pos;
}
idx = sector_num / s->tracks;
if (idx >= s->bat_size) {
return -EINVAL;
}
to_allocate = (sector_num + *pnum + s->tracks - 1) / s->tracks - idx;
space = to_allocate * s->tracks;
if (s->data_end + space > bdrv_getlength(bs->file) >> BDRV_SECTOR_BITS) {
int ret;
space += s->prealloc_size;
if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE) {
ret = bdrv_write_zeroes(bs->file, s->data_end, space, 0);
} else {
ret = bdrv_truncate(bs->file,
(s->data_end + space) << BDRV_SECTOR_BITS);
}
if (ret < 0) {
return ret;
}
}
for (i = 0; i < to_allocate; i++) {
s->bat_bitmap[idx + i] = cpu_to_le32(s->data_end / s->off_multiplier);
s->data_end += s->tracks;
bitmap_set(s->bat_dirty_bmap,
bat_entry_off(idx) / s->bat_dirty_block, 1);
}
return bat2sect(s, idx) + sector_num % s->tracks;
}
static coroutine_fn int parallels_co_flush_to_os(BlockDriverState *bs)
{
BDRVParallelsState *s = bs->opaque;
unsigned long size = DIV_ROUND_UP(s->header_size, s->bat_dirty_block);
unsigned long bit;
qemu_co_mutex_lock(&s->lock);
bit = find_first_bit(s->bat_dirty_bmap, size);
while (bit < size) {
uint32_t off = bit * s->bat_dirty_block;
uint32_t to_write = s->bat_dirty_block;
int ret;
if (off + to_write > s->header_size) {
to_write = s->header_size - off;
}
ret = bdrv_pwrite(bs->file, off, (uint8_t *)s->header + off, to_write);
if (ret < 0) {
qemu_co_mutex_unlock(&s->lock);
return ret;
}
bit = find_next_bit(s->bat_dirty_bmap, size, bit + 1);
}
bitmap_zero(s->bat_dirty_bmap, size);
qemu_co_mutex_unlock(&s->lock);
return 0;
}
static int64_t coroutine_fn parallels_co_get_block_status(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *pnum)
{
BDRVParallelsState *s = bs->opaque;
int64_t offset;
qemu_co_mutex_lock(&s->lock);
offset = block_status(s, sector_num, nb_sectors, pnum);
qemu_co_mutex_unlock(&s->lock);
if (offset < 0) {
if (buf_size < HEADER_SIZE)
return 0;
}
return (offset << BDRV_SECTOR_BITS) |
BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID;
}
static coroutine_fn int parallels_co_writev(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
{
BDRVParallelsState *s = bs->opaque;
uint64_t bytes_done = 0;
QEMUIOVector hd_qiov;
int ret = 0;
qemu_iovec_init(&hd_qiov, qiov->niov);
while (nb_sectors > 0) {
int64_t position;
int n, nbytes;
qemu_co_mutex_lock(&s->lock);
position = allocate_clusters(bs, sector_num, nb_sectors, &n);
qemu_co_mutex_unlock(&s->lock);
if (position < 0) {
ret = (int)position;
break;
}
nbytes = n << BDRV_SECTOR_BITS;
qemu_iovec_reset(&hd_qiov);
qemu_iovec_concat(&hd_qiov, qiov, bytes_done, nbytes);
ret = bdrv_co_writev(bs->file, position, n, &hd_qiov);
if (ret < 0) {
break;
}
nb_sectors -= n;
sector_num += n;
bytes_done += nbytes;
}
qemu_iovec_destroy(&hd_qiov);
return ret;
}
static coroutine_fn int parallels_co_readv(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
{
BDRVParallelsState *s = bs->opaque;
uint64_t bytes_done = 0;
QEMUIOVector hd_qiov;
int ret = 0;
qemu_iovec_init(&hd_qiov, qiov->niov);
while (nb_sectors > 0) {
int64_t position;
int n, nbytes;
qemu_co_mutex_lock(&s->lock);
position = block_status(s, sector_num, nb_sectors, &n);
qemu_co_mutex_unlock(&s->lock);
nbytes = n << BDRV_SECTOR_BITS;
if (position < 0) {
qemu_iovec_memset(qiov, bytes_done, 0, nbytes);
} else {
qemu_iovec_reset(&hd_qiov);
qemu_iovec_concat(&hd_qiov, qiov, bytes_done, nbytes);
ret = bdrv_co_readv(bs->file, position, n, &hd_qiov);
if (ret < 0) {
break;
}
}
nb_sectors -= n;
sector_num += n;
bytes_done += nbytes;
}
qemu_iovec_destroy(&hd_qiov);
return ret;
}
static int parallels_check(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix)
{
BDRVParallelsState *s = bs->opaque;
int64_t size, prev_off, high_off;
int ret;
uint32_t i;
bool flush_bat = false;
int cluster_size = s->tracks << BDRV_SECTOR_BITS;
size = bdrv_getlength(bs->file);
if (size < 0) {
res->check_errors++;
return size;
}
if (s->header_unclean) {
fprintf(stderr, "%s image was not closed correctly\n",
fix & BDRV_FIX_ERRORS ? "Repairing" : "ERROR");
res->corruptions++;
if (fix & BDRV_FIX_ERRORS) {
/* parallels_close will do the job right */
res->corruptions_fixed++;
s->header_unclean = false;
}
}
res->bfi.total_clusters = s->bat_size;
res->bfi.compressed_clusters = 0; /* compression is not supported */
high_off = 0;
prev_off = 0;
for (i = 0; i < s->bat_size; i++) {
int64_t off = bat2sect(s, i) << BDRV_SECTOR_BITS;
if (off == 0) {
prev_off = 0;
continue;
}
/* cluster outside the image */
if (off > size) {
fprintf(stderr, "%s cluster %u is outside image\n",
fix & BDRV_FIX_ERRORS ? "Repairing" : "ERROR", i);
res->corruptions++;
if (fix & BDRV_FIX_ERRORS) {
prev_off = 0;
s->bat_bitmap[i] = 0;
res->corruptions_fixed++;
flush_bat = true;
continue;
}
}
res->bfi.allocated_clusters++;
if (off > high_off) {
high_off = off;
}
if (prev_off != 0 && (prev_off + cluster_size) != off) {
res->bfi.fragmented_clusters++;
}
prev_off = off;
}
if (flush_bat) {
ret = bdrv_pwrite_sync(bs->file, 0, s->header, s->header_size);
if (ret < 0) {
res->check_errors++;
return ret;
}
}
res->image_end_offset = high_off + cluster_size;
if (size > res->image_end_offset) {
int64_t count;
count = DIV_ROUND_UP(size - res->image_end_offset, cluster_size);
fprintf(stderr, "%s space leaked at the end of the image %" PRId64 "\n",
fix & BDRV_FIX_LEAKS ? "Repairing" : "ERROR",
size - res->image_end_offset);
res->leaks += count;
if (fix & BDRV_FIX_LEAKS) {
ret = bdrv_truncate(bs->file, res->image_end_offset);
if (ret < 0) {
res->check_errors++;
return ret;
}
res->leaks_fixed += count;
}
}
return 0;
}
static int parallels_create(const char *filename, QemuOpts *opts, Error **errp)
{
int64_t total_size, cl_size;
uint8_t tmp[BDRV_SECTOR_SIZE];
Error *local_err = NULL;
BlockDriverState *file;
uint32_t bat_entries, bat_sectors;
ParallelsHeader header;
int ret;
total_size = ROUND_UP(qemu_opt_get_size_del(opts, BLOCK_OPT_SIZE, 0),
BDRV_SECTOR_SIZE);
cl_size = ROUND_UP(qemu_opt_get_size_del(opts, BLOCK_OPT_CLUSTER_SIZE,
DEFAULT_CLUSTER_SIZE), BDRV_SECTOR_SIZE);
ret = bdrv_create_file(filename, opts, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
return ret;
}
file = NULL;
ret = bdrv_open(&file, filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, NULL, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
return ret;
}
ret = bdrv_truncate(file, 0);
if (ret < 0) {
goto exit;
}
bat_entries = DIV_ROUND_UP(total_size, cl_size);
bat_sectors = DIV_ROUND_UP(bat_entry_off(bat_entries), cl_size);
bat_sectors = (bat_sectors * cl_size) >> BDRV_SECTOR_BITS;
memset(&header, 0, sizeof(header));
memcpy(header.magic, HEADER_MAGIC2, sizeof(header.magic));
header.version = cpu_to_le32(HEADER_VERSION);
/* don't care much about geometry, it is not used on image level */
header.heads = cpu_to_le32(16);
header.cylinders = cpu_to_le32(total_size / BDRV_SECTOR_SIZE / 16 / 32);
header.tracks = cpu_to_le32(cl_size >> BDRV_SECTOR_BITS);
header.bat_entries = cpu_to_le32(bat_entries);
header.nb_sectors = cpu_to_le64(DIV_ROUND_UP(total_size, BDRV_SECTOR_SIZE));
header.data_off = cpu_to_le32(bat_sectors);
/* write all the data */
memset(tmp, 0, sizeof(tmp));
memcpy(tmp, &header, sizeof(header));
ret = bdrv_pwrite(file, 0, tmp, BDRV_SECTOR_SIZE);
if (ret < 0) {
goto exit;
}
ret = bdrv_write_zeroes(file, 1, bat_sectors - 1, 0);
if (ret < 0) {
goto exit;
}
ret = 0;
done:
bdrv_unref(file);
return ret;
exit:
error_setg_errno(errp, -ret, "Failed to create Parallels image");
goto done;
}
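
parallels_create() sizes the BAT from the virtual disk size and the cluster size: bat_entries = ceil(total_size / cl_size), and the 64-byte header plus BAT is rounded up to whole clusters to obtain data_off in sectors. A standalone worked example for a 1 GiB image with the 1 MiB default cluster (DIV_ROUND_UP is spelled out locally; the constants mirror the hunk):

    #include <stdio.h>
    #include <stdint.h>

    #define BDRV_SECTOR_SIZE   512
    #define HEADER_SIZE        64          /* bytes before the BAT */
    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
        uint64_t total_size = 1ULL << 30;          /* 1 GiB virtual disk */
        uint64_t cl_size    = 1 << 20;             /* 1 MiB cluster */

        uint32_t bat_entries  = DIV_ROUND_UP(total_size, cl_size);          /* 1024 */
        uint64_t bat_bytes    = HEADER_SIZE + 4ULL * bat_entries;           /* 4160 */
        uint32_t bat_clusters = DIV_ROUND_UP(bat_bytes, cl_size);           /* 1 */
        uint32_t bat_sectors  = bat_clusters * cl_size / BDRV_SECTOR_SIZE;  /* 2048 */

        printf("bat_entries=%u, header+BAT=%llu bytes, data_off=%u sectors\n",
               bat_entries, (unsigned long long)bat_bytes, bat_sectors);
        return 0;
    }
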
static int parallels_probe(const uint8_t *buf, int buf_size,
const char *filename)
{
const ParallelsHeader *ph = (const void *)buf;
if (buf_size < sizeof(ParallelsHeader)) {
return 0;
}
if ((!memcmp(ph->magic, HEADER_MAGIC, 16) ||
!memcmp(ph->magic, HEADER_MAGIC2, 16)) &&
(le32_to_cpu(ph->version) == HEADER_VERSION)) {
!memcmp(ph->magic, HEADER_MAGIC2, 16)) &&
(le32_to_cpu(ph->version) == HEADER_VERSION))
return 100;
}
return 0;
}
static int parallels_update_header(BlockDriverState *bs)
{
BDRVParallelsState *s = bs->opaque;
unsigned size = MAX(bdrv_opt_mem_align(bs->file), sizeof(ParallelsHeader));
if (size > s->header_size) {
size = s->header_size;
}
return bdrv_pwrite_sync(bs->file, 0, s->header, size);
}
static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVParallelsState *s = bs->opaque;
ParallelsHeader ph;
int ret, size, i;
QemuOpts *opts = NULL;
Error *local_err = NULL;
char *buf;
int i;
struct parallels_header ph;
int ret;
bs->read_only = 1; // no write support yet
ret = bdrv_pread(bs->file, 0, &ph, sizeof(ph));
if (ret < 0) {
@@ -595,90 +115,25 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
goto fail;
}
s->bat_size = le32_to_cpu(ph.bat_entries);
if (s->bat_size > INT_MAX / sizeof(uint32_t)) {
s->catalog_size = le32_to_cpu(ph.catalog_entries);
if (s->catalog_size > INT_MAX / 4) {
error_setg(errp, "Catalog too large");
ret = -EFBIG;
goto fail;
}
size = bat_entry_off(s->bat_size);
s->header_size = ROUND_UP(size, bdrv_opt_mem_align(bs->file));
s->header = qemu_try_blockalign(bs->file, s->header_size);
if (s->header == NULL) {
s->catalog_bitmap = g_try_new(uint32_t, s->catalog_size);
if (s->catalog_size && s->catalog_bitmap == NULL) {
ret = -ENOMEM;
goto fail;
}
s->data_end = le32_to_cpu(ph.data_off);
if (s->data_end == 0) {
s->data_end = ROUND_UP(bat_entry_off(s->bat_size), BDRV_SECTOR_SIZE);
}
if (s->data_end < s->header_size) {
/* there is not enough unused space to fit to block align between BAT
and actual data. We can't avoid read-modify-write... */
s->header_size = size;
}
ret = bdrv_pread(bs->file, 0, s->header, s->header_size);
ret = bdrv_pread(bs->file, 64, s->catalog_bitmap, s->catalog_size * 4);
if (ret < 0) {
goto fail;
}
s->bat_bitmap = (uint32_t *)(s->header + 1);
for (i = 0; i < s->bat_size; i++) {
int64_t off = bat2sect(s, i);
if (off >= s->data_end) {
s->data_end = off + s->tracks;
}
}
if (le32_to_cpu(ph.inuse) == HEADER_INUSE_MAGIC) {
/* Image was not closed correctly. The check is mandatory */
s->header_unclean = true;
if ((flags & BDRV_O_RDWR) && !(flags & BDRV_O_CHECK)) {
error_setg(errp, "parallels: Image was not closed correctly; "
"cannot be opened read/write");
ret = -EACCES;
goto fail;
}
}
opts = qemu_opts_create(&parallels_runtime_opts, NULL, 0, &local_err);
if (local_err != NULL) {
goto fail_options;
}
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err != NULL) {
goto fail_options;
}
s->prealloc_size =
qemu_opt_get_size_del(opts, PARALLELS_OPT_PREALLOC_SIZE, 0);
s->prealloc_size = MAX(s->tracks, s->prealloc_size >> BDRV_SECTOR_BITS);
buf = qemu_opt_get_del(opts, PARALLELS_OPT_PREALLOC_MODE);
s->prealloc_mode = qapi_enum_parse(prealloc_mode_lookup, buf,
PRL_PREALLOC_MODE_MAX, PRL_PREALLOC_MODE_FALLOCATE, &local_err);
g_free(buf);
if (local_err != NULL) {
goto fail_options;
}
if (!bdrv_has_zero_init(bs->file) ||
bdrv_truncate(bs->file, bdrv_getlength(bs->file)) != 0) {
s->prealloc_mode = PRL_PREALLOC_MODE_FALLOCATE;
}
if (flags & BDRV_O_RDWR) {
s->header->inuse = cpu_to_le32(HEADER_INUSE_MAGIC);
ret = parallels_update_header(bs);
if (ret < 0) {
goto fail;
}
}
s->bat_dirty_block = 4 * getpagesize();
s->bat_dirty_bmap =
bitmap_new(DIV_ROUND_UP(s->header_size, s->bat_dirty_block));
for (i = 0; i < s->catalog_size; i++)
le32_to_cpus(&s->catalog_bitmap[i]);
qemu_co_mutex_init(&s->lock);
return 0;
@@ -687,67 +142,67 @@ fail_format:
error_setg(errp, "Image not in Parallels format");
ret = -EINVAL;
fail:
qemu_vfree(s->header);
g_free(s->catalog_bitmap);
return ret;
fail_options:
error_propagate(errp, local_err);
ret = -EINVAL;
goto fail;
}
static int64_t seek_to_sector(BlockDriverState *bs, int64_t sector_num)
{
BDRVParallelsState *s = bs->opaque;
uint32_t index, offset;
index = sector_num / s->tracks;
offset = sector_num % s->tracks;
/* not allocated */
if ((index >= s->catalog_size) || (s->catalog_bitmap[index] == 0))
return -1;
return
((uint64_t)s->catalog_bitmap[index] * s->off_multiplier + offset) * 512;
}
static int parallels_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
while (nb_sectors > 0) {
int64_t position = seek_to_sector(bs, sector_num);
if (position >= 0) {
if (bdrv_pread(bs->file, position, buf, 512) != 512)
return -1;
} else {
memset(buf, 0, 512);
}
nb_sectors--;
sector_num++;
buf += 512;
}
return 0;
}
static coroutine_fn int parallels_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVParallelsState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = parallels_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static void parallels_close(BlockDriverState *bs)
{
BDRVParallelsState *s = bs->opaque;
if (bs->open_flags & BDRV_O_RDWR) {
s->header->inuse = 0;
parallels_update_header(bs);
}
if (bs->open_flags & BDRV_O_RDWR) {
bdrv_truncate(bs->file, s->data_end << BDRV_SECTOR_BITS);
}
g_free(s->bat_dirty_bmap);
qemu_vfree(s->header);
g_free(s->catalog_bitmap);
}
static QemuOptsList parallels_create_opts = {
.name = "parallels-create-opts",
.head = QTAILQ_HEAD_INITIALIZER(parallels_create_opts.head),
.desc = {
{
.name = BLOCK_OPT_SIZE,
.type = QEMU_OPT_SIZE,
.help = "Virtual disk size",
},
{
.name = BLOCK_OPT_CLUSTER_SIZE,
.type = QEMU_OPT_SIZE,
.help = "Parallels image cluster size",
.def_value_str = stringify(DEFAULT_CLUSTER_SIZE),
},
{ /* end of list */ }
}
};
static BlockDriver bdrv_parallels = {
.format_name = "parallels",
.instance_size = sizeof(BDRVParallelsState),
.bdrv_probe = parallels_probe,
.bdrv_open = parallels_open,
.bdrv_read = parallels_co_read,
.bdrv_close = parallels_close,
.bdrv_co_get_block_status = parallels_co_get_block_status,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_co_flush_to_os = parallels_co_flush_to_os,
.bdrv_co_readv = parallels_co_readv,
.bdrv_co_writev = parallels_co_writev,
.bdrv_create = parallels_create,
.bdrv_check = parallels_check,
.create_opts = &parallels_create_opts,
};
static void bdrv_parallels_init(void)


@@ -24,7 +24,6 @@
#include "block/qapi.h"
#include "block/block_int.h"
#include "block/throttle-groups.h"
#include "block/write-threshold.h"
#include "qmp-commands.h"
#include "qapi-visit.h"
@@ -32,10 +31,8 @@
#include "qapi/qmp/types.h"
#include "sysemu/block-backend.h"
BlockDeviceInfo *bdrv_block_device_info(BlockDriverState *bs, Error **errp)
BlockDeviceInfo *bdrv_block_device_info(BlockDriverState *bs)
{
ImageInfo **p_image_info;
BlockDriverState *bs0;
BlockDeviceInfo *info = g_malloc0(sizeof(*info));
info->file = g_strdup(bs->filename);
@@ -66,9 +63,7 @@ BlockDeviceInfo *bdrv_block_device_info(BlockDriverState *bs, Error **errp)
if (bs->io_limits_enabled) {
ThrottleConfig cfg;
throttle_group_get_config(bs, &cfg);
throttle_get_config(&bs->throttle_state, &cfg);
info->bps = cfg.buckets[THROTTLE_BPS_TOTAL].avg;
info->bps_rd = cfg.buckets[THROTTLE_BPS_READ].avg;
info->bps_wr = cfg.buckets[THROTTLE_BPS_WRITE].avg;
@@ -93,32 +88,10 @@ BlockDeviceInfo *bdrv_block_device_info(BlockDriverState *bs, Error **errp)
info->has_iops_size = cfg.op_size;
info->iops_size = cfg.op_size;
info->has_group = true;
info->group = g_strdup(throttle_group_get_name(bs));
}
info->write_threshold = bdrv_write_threshold_get(bs);
bs0 = bs;
p_image_info = &info->image;
while (1) {
Error *local_err = NULL;
bdrv_query_image_info(bs0, p_image_info, &local_err);
if (local_err) {
error_propagate(errp, local_err);
qapi_free_BlockDeviceInfo(info);
return NULL;
}
if (bs0->drv && bs0->backing_hd) {
bs0 = bs0->backing_hd;
(*p_image_info)->has_backing_image = true;
p_image_info = &((*p_image_info)->backing_image);
} else {
break;
}
}
return info;
}
@@ -291,6 +264,9 @@ static void bdrv_query_info(BlockBackend *blk, BlockInfo **p_info,
{
BlockInfo *info = g_malloc0(sizeof(*info));
BlockDriverState *bs = blk_bs(blk);
BlockDriverState *bs0;
ImageInfo **p_image_info;
Error *local_err = NULL;
info->device = g_strdup(blk_name(blk));
info->type = g_strdup("unknown");
info->locked = blk_dev_is_medium_locked(blk);
@@ -313,9 +289,23 @@ static void bdrv_query_info(BlockBackend *blk, BlockInfo **p_info,
if (bs->drv) {
info->has_inserted = true;
info->inserted = bdrv_block_device_info(bs, errp);
if (info->inserted == NULL) {
goto err;
info->inserted = bdrv_block_device_info(bs);
bs0 = bs;
p_image_info = &info->inserted->image;
while (1) {
bdrv_query_image_info(bs0, p_image_info, &local_err);
if (local_err) {
error_propagate(errp, local_err);
goto err;
}
if (bs0->drv && bs0->backing_hd) {
bs0 = bs0->backing_hd;
(*p_image_info)->has_backing_image = true;
p_image_info = &((*p_image_info)->backing_image);
} else {
break;
}
}
}
@@ -520,9 +510,18 @@ static void dump_qobject(fprintf_function func_fprintf, void *f,
}
case QTYPE_QBOOL: {
QBool *value = qobject_to_qbool(obj);
func_fprintf(f, "%s", qbool_get_bool(value) ? "true" : "false");
func_fprintf(f, "%s", qbool_get_int(value) ? "true" : "false");
break;
}
case QTYPE_QERROR: {
QString *value = qerror_human((QError *)obj);
func_fprintf(f, "%s", qstring_get_str(value));
QDECREF(value);
break;
}
case QTYPE_NONE:
break;
case QTYPE_MAX:
default:
abort();
}


@@ -25,8 +25,7 @@
#include "block/block_int.h"
#include "qemu/module.h"
#include <zlib.h>
#include "qapi/qmp/qerror.h"
#include "crypto/cipher.h"
#include "qemu/aes.h"
#include "migration/migration.h"
/**************************************************************/
@@ -72,8 +71,10 @@ typedef struct BDRVQcowState {
uint8_t *cluster_cache;
uint8_t *cluster_data;
uint64_t cluster_cache_offset;
QCryptoCipher *cipher; /* NULL if no key yet */
uint32_t crypt_method; /* current crypt method, 0 if no key yet */
uint32_t crypt_method_header;
AES_KEY aes_encrypt_key;
AES_KEY aes_decrypt_key;
CoMutex lock;
Error *migration_blocker;
} BDRVQcowState;
@@ -122,8 +123,8 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
char version[64];
snprintf(version, sizeof(version), "QCOW version %" PRIu32,
header.version);
error_setg(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bdrv_get_device_or_node_name(bs), "qcow", version);
error_set(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bdrv_get_device_name(bs), "qcow", version);
ret = -ENOTSUP;
goto fail;
}
@@ -152,11 +153,6 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
ret = -EINVAL;
goto fail;
}
if (!qcrypto_cipher_supports(QCRYPTO_CIPHER_ALG_AES_128)) {
error_setg(errp, "AES cipher not available");
ret = -EINVAL;
goto fail;
}
s->crypt_method_header = header.crypt_method;
if (s->crypt_method_header) {
bs->encrypted = 1;
@@ -233,9 +229,9 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
}
/* Disable migration when qcow images are used */
error_setg(&s->migration_blocker, "The qcow format used by node '%s' "
"does not support live migration",
bdrv_get_device_or_node_name(bs));
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"qcow", bdrv_get_device_name(bs), "live migration");
migrate_add_blocker(s->migration_blocker);
qemu_co_mutex_init(&s->lock);
@@ -263,7 +259,6 @@ static int qcow_set_key(BlockDriverState *bs, const char *key)
BDRVQcowState *s = bs->opaque;
uint8_t keybuf[16];
int len, i;
Error *err;
memset(keybuf, 0, 16);
len = strlen(key);
@@ -274,68 +269,38 @@ static int qcow_set_key(BlockDriverState *bs, const char *key)
for(i = 0;i < len;i++) {
keybuf[i] = key[i];
}
assert(bs->encrypted);
s->crypt_method = s->crypt_method_header;
qcrypto_cipher_free(s->cipher);
s->cipher = qcrypto_cipher_new(
QCRYPTO_CIPHER_ALG_AES_128,
QCRYPTO_CIPHER_MODE_CBC,
keybuf, G_N_ELEMENTS(keybuf),
&err);
if (!s->cipher) {
/* XXX would be nice if errors in this method could
* be properly propagated to the caller. Would need
* the bdrv_set_key() API signature to be fixed. */
error_free(err);
if (AES_set_encrypt_key(keybuf, 128, &s->aes_encrypt_key) != 0)
return -1;
if (AES_set_decrypt_key(keybuf, 128, &s->aes_decrypt_key) != 0)
return -1;
}
return 0;
}
/* The crypt function is compatible with the linux cryptoloop
algorithm for < 4 GB images. NOTE: out_buf == in_buf is
supported */
static int encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
uint8_t *out_buf, const uint8_t *in_buf,
int nb_sectors, bool enc, Error **errp)
static void encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
uint8_t *out_buf, const uint8_t *in_buf,
int nb_sectors, int enc,
const AES_KEY *key)
{
union {
uint64_t ll[2];
uint8_t b[16];
} ivec;
int i;
int ret;
for(i = 0; i < nb_sectors; i++) {
ivec.ll[0] = cpu_to_le64(sector_num);
ivec.ll[1] = 0;
if (qcrypto_cipher_setiv(s->cipher,
ivec.b, G_N_ELEMENTS(ivec.b),
errp) < 0) {
return -1;
}
if (enc) {
ret = qcrypto_cipher_encrypt(s->cipher,
in_buf,
out_buf,
512,
errp);
} else {
ret = qcrypto_cipher_decrypt(s->cipher,
in_buf,
out_buf,
512,
errp);
}
if (ret < 0) {
return -1;
}
AES_cbc_encrypt(in_buf, out_buf, 512, key,
ivec.b, enc);
sector_num++;
in_buf += 512;
out_buf += 512;
}
return 0;
}
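
Both the qcrypto path and the AES_cbc_encrypt() path above build the per-sector IV the same way the Linux cryptoloop scheme does: the 512-byte sector number, little-endian, fills the first 8 bytes of a 16-byte IV and the remaining 8 bytes stay zero. A small sketch of just that IV construction (the helper name is invented for illustration; no cipher call is made):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Build the 16-byte CBC IV for one 512-byte sector: bytes 0..7 hold the
 * sector number little-endian, bytes 8..15 stay zero (cryptoloop layout). */
static void cryptoloop_iv(uint64_t sector_num, uint8_t iv[16])
{
    memset(iv, 0, 16);
    for (int i = 0; i < 8; i++) {
        iv[i] = (uint8_t)(sector_num >> (8 * i));
    }
}

int main(void)
{
    uint8_t iv[16];

    cryptoloop_iv(0x0102, iv);
    for (int i = 0; i < 16; i++) {
        printf("%02x", iv[i]);          /* prints 02010000... */
    }
    printf("\n");
    return 0;
}
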
/* 'allocate' is:
@@ -446,23 +411,17 @@ static uint64_t get_cluster_offset(BlockDriverState *bs,
bdrv_truncate(bs->file, cluster_offset + s->cluster_size);
/* if encrypted, we must initialize the cluster
content which won't be written */
if (bs->encrypted &&
if (s->crypt_method &&
(n_end - n_start) < s->cluster_sectors) {
uint64_t start_sect;
assert(s->cipher);
start_sect = (offset & ~(s->cluster_size - 1)) >> 9;
memset(s->cluster_data + 512, 0x00, 512);
for(i = 0; i < s->cluster_sectors; i++) {
if (i < n_start || i >= n_end) {
Error *err = NULL;
if (encrypt_sectors(s, start_sect + i,
s->cluster_data,
s->cluster_data + 512, 1,
true, &err) < 0) {
error_free(err);
errno = EIO;
return -1;
}
encrypt_sectors(s, start_sect + i,
s->cluster_data,
s->cluster_data + 512, 1, 1,
&s->aes_encrypt_key);
if (bdrv_pwrite(bs->file, cluster_offset + i * 512,
s->cluster_data, 512) != 512)
return -1;
@@ -502,7 +461,7 @@ static int64_t coroutine_fn qcow_co_get_block_status(BlockDriverState *bs,
if (!cluster_offset) {
return 0;
}
if ((cluster_offset & QCOW_OFLAG_COMPRESSED) || s->cipher) {
if ((cluster_offset & QCOW_OFLAG_COMPRESSED) || s->crypt_method) {
return BDRV_BLOCK_DATA;
}
cluster_offset |= (index_in_cluster << BDRV_SECTOR_BITS);
@@ -569,7 +528,6 @@ static coroutine_fn int qcow_co_readv(BlockDriverState *bs, int64_t sector_num,
QEMUIOVector hd_qiov;
uint8_t *buf;
void *orig_buf;
Error *err = NULL;
if (qiov->niov > 1) {
buf = orig_buf = qemu_try_blockalign(bs, qiov->size);
@@ -632,12 +590,10 @@ static coroutine_fn int qcow_co_readv(BlockDriverState *bs, int64_t sector_num,
if (ret < 0) {
break;
}
if (bs->encrypted) {
assert(s->cipher);
if (encrypt_sectors(s, sector_num, buf, buf,
n, false, &err) < 0) {
goto fail;
}
if (s->crypt_method) {
encrypt_sectors(s, sector_num, buf, buf,
n, 0,
&s->aes_decrypt_key);
}
}
ret = 0;
@@ -658,7 +614,6 @@ done:
return ret;
fail:
error_free(err);
ret = -EIO;
goto done;
}
@@ -706,18 +661,12 @@ static coroutine_fn int qcow_co_writev(BlockDriverState *bs, int64_t sector_num,
ret = -EIO;
break;
}
if (bs->encrypted) {
Error *err = NULL;
assert(s->cipher);
if (s->crypt_method) {
if (!cluster_data) {
cluster_data = g_malloc0(s->cluster_size);
}
if (encrypt_sectors(s, sector_num, cluster_data, buf,
n, true, &err) < 0) {
error_free(err);
ret = -EIO;
break;
}
encrypt_sectors(s, sector_num, cluster_data, buf,
n, 1, &s->aes_encrypt_key);
src_buf = cluster_data;
} else {
src_buf = buf;
@@ -754,8 +703,6 @@ static void qcow_close(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
qcrypto_cipher_free(s->cipher);
s->cipher = NULL;
g_free(s->l1_table);
qemu_vfree(s->l2_cache);
g_free(s->cluster_cache);


@@ -28,68 +28,62 @@
#include "trace.h"
typedef struct Qcow2CachedTable {
int64_t offset;
bool dirty;
uint64_t lru_counter;
int ref;
void* table;
int64_t offset;
bool dirty;
int cache_hits;
int ref;
} Qcow2CachedTable;
struct Qcow2Cache {
Qcow2CachedTable *entries;
struct Qcow2Cache *depends;
Qcow2CachedTable* entries;
struct Qcow2Cache* depends;
int size;
bool depends_on_flush;
void *table_array;
uint64_t lru_counter;
};
static inline void *qcow2_cache_get_table_addr(BlockDriverState *bs,
Qcow2Cache *c, int table)
{
BDRVQcowState *s = bs->opaque;
return (uint8_t *) c->table_array + (size_t) table * s->cluster_size;
}
static inline int qcow2_cache_get_table_idx(BlockDriverState *bs,
Qcow2Cache *c, void *table)
{
BDRVQcowState *s = bs->opaque;
ptrdiff_t table_offset = (uint8_t *) table - (uint8_t *) c->table_array;
int idx = table_offset / s->cluster_size;
assert(idx >= 0 && idx < c->size && table_offset % s->cluster_size == 0);
return idx;
}
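
qcow2_cache_get_table_addr() and qcow2_cache_get_table_idx() above replace per-entry allocations with offset arithmetic into one contiguous buffer. A stripped-down sketch of that index/address round trip, using an invented helper pair and a made-up cluster size rather than anything read from an image:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

#define CLUSTER_SIZE 65536   /* hypothetical cluster size */
#define NUM_TABLES   16

static void *table_addr(void *array, int idx)
{
    return (uint8_t *)array + (size_t)idx * CLUSTER_SIZE;
}

static int table_idx(void *array, void *table)
{
    ptrdiff_t off = (uint8_t *)table - (uint8_t *)array;

    assert(off >= 0 && off % CLUSTER_SIZE == 0);
    return off / CLUSTER_SIZE;
}

int main(void)
{
    void *array = calloc(NUM_TABLES, CLUSTER_SIZE);
    void *t5 = table_addr(array, 5);

    assert(table_idx(array, t5) == 5);   /* index -> address -> index */
    free(array);
    return 0;
}
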
Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables)
{
BDRVQcowState *s = bs->opaque;
Qcow2Cache *c;
int i;
c = g_new0(Qcow2Cache, 1);
c->size = num_tables;
c->entries = g_try_new0(Qcow2CachedTable, num_tables);
c->table_array = qemu_try_blockalign(bs->file,
(size_t) num_tables * s->cluster_size);
if (!c->entries) {
goto fail;
}
if (!c->entries || !c->table_array) {
qemu_vfree(c->table_array);
g_free(c->entries);
g_free(c);
c = NULL;
for (i = 0; i < c->size; i++) {
c->entries[i].table = qemu_try_blockalign(bs->file, s->cluster_size);
if (c->entries[i].table == NULL) {
goto fail;
}
}
return c;
fail:
if (c->entries) {
for (i = 0; i < c->size; i++) {
qemu_vfree(c->entries[i].table);
}
}
g_free(c->entries);
g_free(c);
return NULL;
}
int qcow2_cache_destroy(BlockDriverState *bs, Qcow2Cache *c)
int qcow2_cache_destroy(BlockDriverState* bs, Qcow2Cache *c)
{
int i;
for (i = 0; i < c->size; i++) {
assert(c->entries[i].ref == 0);
qemu_vfree(c->entries[i].table);
}
qemu_vfree(c->table_array);
g_free(c->entries);
g_free(c);
@@ -157,8 +151,8 @@ static int qcow2_cache_entry_flush(BlockDriverState *bs, Qcow2Cache *c, int i)
BLKDBG_EVENT(bs->file, BLKDBG_L2_UPDATE);
}
ret = bdrv_pwrite(bs->file, c->entries[i].offset,
qcow2_cache_get_table_addr(bs, c, i), s->cluster_size);
ret = bdrv_pwrite(bs->file, c->entries[i].offset, c->entries[i].table,
s->cluster_size);
if (ret < 0) {
return ret;
}
@@ -234,53 +228,68 @@ int qcow2_cache_empty(BlockDriverState *bs, Qcow2Cache *c)
for (i = 0; i < c->size; i++) {
assert(c->entries[i].ref == 0);
c->entries[i].offset = 0;
c->entries[i].lru_counter = 0;
c->entries[i].cache_hits = 0;
}
c->lru_counter = 0;
return 0;
}
static int qcow2_cache_find_entry_to_replace(Qcow2Cache *c)
{
int i;
int min_count = INT_MAX;
int min_index = -1;
for (i = 0; i < c->size; i++) {
if (c->entries[i].ref) {
continue;
}
if (c->entries[i].cache_hits < min_count) {
min_index = i;
min_count = c->entries[i].cache_hits;
}
/* Give newer hits priority */
/* TODO Check how to optimize the replacement strategy */
if (c->entries[i].cache_hits > 1) {
c->entries[i].cache_hits /= 2;
}
}
if (min_index == -1) {
/* This can't happen in current synchronous code, but leave the check
* here as a reminder for whoever starts using AIO with the cache */
abort();
}
return min_index;
}
static int qcow2_cache_do_get(BlockDriverState *bs, Qcow2Cache *c,
uint64_t offset, void **table, bool read_from_disk)
{
BDRVQcowState *s = bs->opaque;
int i;
int ret;
int lookup_index;
uint64_t min_lru_counter = UINT64_MAX;
int min_lru_index = -1;
trace_qcow2_cache_get(qemu_coroutine_self(), c == s->l2_table_cache,
offset, read_from_disk);
/* Check if the table is already cached */
i = lookup_index = (offset / s->cluster_size * 4) % c->size;
do {
const Qcow2CachedTable *t = &c->entries[i];
if (t->offset == offset) {
for (i = 0; i < c->size; i++) {
if (c->entries[i].offset == offset) {
goto found;
}
if (t->ref == 0 && t->lru_counter < min_lru_counter) {
min_lru_counter = t->lru_counter;
min_lru_index = i;
}
if (++i == c->size) {
i = 0;
}
} while (i != lookup_index);
if (min_lru_index == -1) {
/* This can't happen in current synchronous code, but leave the check
* here as a reminder for whoever starts using AIO with the cache */
abort();
}
/* Cache miss: write a table back and replace it */
i = min_lru_index;
/* If not, write a table back and replace it */
i = qcow2_cache_find_entry_to_replace(c);
trace_qcow2_cache_get_replace_entry(qemu_coroutine_self(),
c == s->l2_table_cache, i);
if (i < 0) {
return i;
}
ret = qcow2_cache_entry_flush(bs, c, i);
if (ret < 0) {
@@ -295,19 +304,22 @@ static int qcow2_cache_do_get(BlockDriverState *bs, Qcow2Cache *c,
BLKDBG_EVENT(bs->file, BLKDBG_L2_LOAD);
}
ret = bdrv_pread(bs->file, offset, qcow2_cache_get_table_addr(bs, c, i),
s->cluster_size);
ret = bdrv_pread(bs->file, offset, c->entries[i].table, s->cluster_size);
if (ret < 0) {
return ret;
}
}
/* Give the table some hits for the start so that it won't be replaced
* immediately. The number 32 is completely arbitrary. */
c->entries[i].cache_hits = 32;
c->entries[i].offset = offset;
/* And return the right table */
found:
c->entries[i].cache_hits++;
c->entries[i].ref++;
*table = qcow2_cache_get_table_addr(bs, c, i);
*table = c->entries[i].table;
trace_qcow2_cache_get_done(qemu_coroutine_self(),
c == s->l2_table_cache, i);
@@ -327,24 +339,36 @@ int qcow2_cache_get_empty(BlockDriverState *bs, Qcow2Cache *c, uint64_t offset,
return qcow2_cache_do_get(bs, c, offset, table, false);
}
void qcow2_cache_put(BlockDriverState *bs, Qcow2Cache *c, void **table)
int qcow2_cache_put(BlockDriverState *bs, Qcow2Cache *c, void **table)
{
int i = qcow2_cache_get_table_idx(bs, c, *table);
int i;
for (i = 0; i < c->size; i++) {
if (c->entries[i].table == *table) {
goto found;
}
}
return -ENOENT;
found:
c->entries[i].ref--;
*table = NULL;
if (c->entries[i].ref == 0) {
c->entries[i].lru_counter = ++c->lru_counter;
}
assert(c->entries[i].ref >= 0);
return 0;
}
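
qcow2_cache_put() above is what drives the new LRU ordering: when the last reference to an entry is dropped, the entry is stamped with a monotonically increasing counter, and the miss path in qcow2_cache_do_get() evicts the unreferenced entry with the smallest stamp. A toy version of that victim selection, independent of the real cache structures (names and values are made up):

#include <stdint.h>
#include <stdio.h>

typedef struct {
    int ref;               /* entry is in use while ref > 0 */
    uint64_t lru_counter;  /* stamp from the last put that dropped ref to 0 */
} Entry;

/* Return the index of the least recently used unreferenced entry, -1 if none. */
static int pick_victim(const Entry *e, int n)
{
    uint64_t min_lru = UINT64_MAX;
    int victim = -1;

    for (int i = 0; i < n; i++) {
        if (e[i].ref == 0 && e[i].lru_counter < min_lru) {
            min_lru = e[i].lru_counter;
            victim = i;
        }
    }
    return victim;
}

int main(void)
{
    Entry e[4] = { { 1, 7 }, { 0, 5 }, { 0, 2 }, { 0, 9 } };

    printf("evict entry %d\n", pick_victim(e, 4));   /* entry 2 */
    return 0;
}
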
void qcow2_cache_entry_mark_dirty(BlockDriverState *bs, Qcow2Cache *c,
void *table)
void qcow2_cache_entry_mark_dirty(Qcow2Cache *c, void *table)
{
int i = qcow2_cache_get_table_idx(bs, c, table);
assert(c->entries[i].offset != 0);
int i;
for (i = 0; i < c->size; i++) {
if (c->entries[i].table == table) {
goto found;
}
}
abort();
found:
c->entries[i].dirty = true;
}


@@ -253,14 +253,17 @@ static int l2_allocate(BlockDriverState *bs, int l1_index, uint64_t **table)
memcpy(l2_table, old_table, s->cluster_size);
qcow2_cache_put(bs, s->l2_table_cache, (void **) &old_table);
ret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &old_table);
if (ret < 0) {
goto fail;
}
}
/* write the l2 table to the file */
BLKDBG_EVENT(bs->file, BLKDBG_L2_ALLOC_WRITE);
trace_qcow2_l2_allocate_write_l2(bs, l1_index);
qcow2_cache_entry_mark_dirty(bs, s->l2_table_cache, l2_table);
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
ret = qcow2_cache_flush(bs, s->l2_table_cache);
if (ret < 0) {
goto fail;
@@ -339,47 +342,26 @@ static int count_contiguous_free_clusters(uint64_t nb_clusters, uint64_t *l2_tab
/* The crypt function is compatible with the linux cryptoloop
algorithm for < 4 GB images. NOTE: out_buf == in_buf is
supported */
int qcow2_encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
uint8_t *out_buf, const uint8_t *in_buf,
int nb_sectors, bool enc,
Error **errp)
void qcow2_encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
uint8_t *out_buf, const uint8_t *in_buf,
int nb_sectors, int enc,
const AES_KEY *key)
{
union {
uint64_t ll[2];
uint8_t b[16];
} ivec;
int i;
int ret;
for(i = 0; i < nb_sectors; i++) {
ivec.ll[0] = cpu_to_le64(sector_num);
ivec.ll[1] = 0;
if (qcrypto_cipher_setiv(s->cipher,
ivec.b, G_N_ELEMENTS(ivec.b),
errp) < 0) {
return -1;
}
if (enc) {
ret = qcrypto_cipher_encrypt(s->cipher,
in_buf,
out_buf,
512,
errp);
} else {
ret = qcrypto_cipher_decrypt(s->cipher,
in_buf,
out_buf,
512,
errp);
}
if (ret < 0) {
return -1;
}
AES_cbc_encrypt(in_buf, out_buf, 512, key,
ivec.b, enc);
sector_num++;
in_buf += 512;
out_buf += 512;
}
return 0;
}
static int coroutine_fn copy_sectors(BlockDriverState *bs,
@@ -421,16 +403,10 @@ static int coroutine_fn copy_sectors(BlockDriverState *bs,
goto out;
}
if (bs->encrypted) {
Error *err = NULL;
assert(s->cipher);
if (qcow2_encrypt_sectors(s, start_sect + n_start,
iov.iov_base, iov.iov_base, n,
true, &err) < 0) {
ret = -EIO;
error_free(err);
goto out;
}
if (s->crypt_method) {
qcow2_encrypt_sectors(s, start_sect + n_start,
iov.iov_base, iov.iov_base, n, 1,
&s->aes_encrypt_key);
}
ret = qcow2_pre_write_overlap_check(bs, 0,
@@ -716,9 +692,12 @@ uint64_t qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs,
/* compressed clusters never have the copied flag */
BLKDBG_EVENT(bs->file, BLKDBG_L2_UPDATE_COMPRESSED);
qcow2_cache_entry_mark_dirty(bs, s->l2_table_cache, l2_table);
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
l2_table[l2_index] = cpu_to_be64(cluster_offset);
qcow2_cache_put(bs, s->l2_table_cache, (void **) &l2_table);
ret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
if (ret < 0) {
return 0;
}
return cluster_offset;
}
@@ -792,7 +771,7 @@ int qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m)
if (ret < 0) {
goto err;
}
qcow2_cache_entry_mark_dirty(bs, s->l2_table_cache, l2_table);
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
assert(l2_index + m->nb_clusters <= s->l2_size);
for (i = 0; i < m->nb_clusters; i++) {
@@ -810,7 +789,10 @@ int qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m)
}
qcow2_cache_put(bs, s->l2_table_cache, (void **) &l2_table);
ret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
if (ret < 0) {
goto err;
}
/*
* If this was a COW, we need to decrease the refcount of the old cluster.
@@ -962,7 +944,7 @@ static int handle_copied(BlockDriverState *bs, uint64_t guest_offset,
uint64_t *l2_table;
unsigned int nb_clusters;
unsigned int keep_clusters;
int ret;
int ret, pret;
trace_qcow2_handle_copied(qemu_coroutine_self(), guest_offset, *host_offset,
*bytes);
@@ -1029,7 +1011,10 @@ static int handle_copied(BlockDriverState *bs, uint64_t guest_offset,
/* Cleanup */
out:
qcow2_cache_put(bs, s->l2_table_cache, (void **) &l2_table);
pret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
if (pret < 0) {
return pret;
}
/* Only return a host offset if we actually made progress. Otherwise we
* would make requirements for handle_alloc() that it can't fulfill */
@@ -1154,7 +1139,10 @@ static int handle_alloc(BlockDriverState *bs, uint64_t guest_offset,
* wrong with our code. */
assert(nb_clusters > 0);
qcow2_cache_put(bs, s->l2_table_cache, (void **) &l2_table);
ret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
if (ret < 0) {
return ret;
}
/* Allocate, if necessary at a given offset in the image file */
alloc_cluster_offset = start_of_cluster(s, *host_offset);
@@ -1482,7 +1470,7 @@ static int discard_single_l2(BlockDriverState *bs, uint64_t offset,
}
/* First remove L2 entries */
qcow2_cache_entry_mark_dirty(bs, s->l2_table_cache, l2_table);
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
if (!full_discard && s->qcow_version >= 3) {
l2_table[l2_index + i] = cpu_to_be64(QCOW_OFLAG_ZERO);
} else {
@@ -1493,7 +1481,10 @@ static int discard_single_l2(BlockDriverState *bs, uint64_t offset,
qcow2_free_any_clusters(bs, old_l2_entry, 1, type);
}
qcow2_cache_put(bs, s->l2_table_cache, (void **) &l2_table);
ret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
if (ret < 0) {
return ret;
}
return nb_clusters;
}
@@ -1567,7 +1558,7 @@ static int zero_single_l2(BlockDriverState *bs, uint64_t offset,
old_offset = be64_to_cpu(l2_table[l2_index + i]);
/* Update L2 entries */
qcow2_cache_entry_mark_dirty(bs, s->l2_table_cache, l2_table);
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
if (old_offset & QCOW_OFLAG_COMPRESSED) {
l2_table[l2_index + i] = cpu_to_be64(QCOW_OFLAG_ZERO);
qcow2_free_any_clusters(bs, old_offset, 1, QCOW2_DISCARD_REQUEST);
@@ -1576,7 +1567,10 @@ static int zero_single_l2(BlockDriverState *bs, uint64_t offset,
}
}
qcow2_cache_put(bs, s->l2_table_cache, (void **) &l2_table);
ret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
if (ret < 0) {
return ret;
}
return nb_clusters;
}
@@ -1766,10 +1760,14 @@ static int expand_zero_clusters_in_l1(BlockDriverState *bs, uint64_t *l1_table,
if (is_active_l1) {
if (l2_dirty) {
qcow2_cache_entry_mark_dirty(bs, s->l2_table_cache, l2_table);
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
qcow2_cache_depends_on_flush(s->l2_table_cache);
}
qcow2_cache_put(bs, s->l2_table_cache, (void **) &l2_table);
ret = qcow2_cache_put(bs, s->l2_table_cache, (void **)&l2_table);
if (ret < 0) {
l2_table = NULL;
goto fail;
}
} else {
if (l2_dirty) {
ret = qcow2_pre_write_overlap_check(bs,
@@ -1800,7 +1798,12 @@ fail:
if (!is_active_l1) {
qemu_vfree(l2_table);
} else {
qcow2_cache_put(bs, s->l2_table_cache, (void **) &l2_table);
if (ret < 0) {
qcow2_cache_put(bs, s->l2_table_cache, (void **)&l2_table);
} else {
ret = qcow2_cache_put(bs, s->l2_table_cache,
(void **)&l2_table);
}
}
}
return ret;


@@ -265,7 +265,10 @@ int qcow2_get_refcount(BlockDriverState *bs, int64_t cluster_index,
block_index = cluster_index & (s->refcount_block_size - 1);
*refcount = s->get_refcount(refcount_block, block_index);
qcow2_cache_put(bs, s->refcount_block_cache, &refcount_block);
ret = qcow2_cache_put(bs, s->refcount_block_cache, &refcount_block);
if (ret < 0) {
return ret;
}
return 0;
}
@@ -421,7 +424,7 @@ static int alloc_refcount_block(BlockDriverState *bs,
/* Now the new refcount block needs to be written to disk */
BLKDBG_EVENT(bs->file, BLKDBG_REFBLOCK_ALLOC_WRITE);
qcow2_cache_entry_mark_dirty(bs, s->refcount_block_cache, *refcount_block);
qcow2_cache_entry_mark_dirty(s->refcount_block_cache, *refcount_block);
ret = qcow2_cache_flush(bs, s->refcount_block_cache);
if (ret < 0) {
goto fail_block;
@@ -445,7 +448,10 @@ static int alloc_refcount_block(BlockDriverState *bs,
return -EAGAIN;
}
qcow2_cache_put(bs, s->refcount_block_cache, refcount_block);
ret = qcow2_cache_put(bs, s->refcount_block_cache, refcount_block);
if (ret < 0) {
goto fail_block;
}
/*
* If we come here, we need to grow the refcount table. Again, a new
@@ -717,8 +723,13 @@ static int QEMU_WARN_UNUSED_RESULT update_refcount(BlockDriverState *bs,
/* Load the refcount block and allocate it if needed */
if (table_index != old_table_index) {
if (refcount_block) {
qcow2_cache_put(bs, s->refcount_block_cache, &refcount_block);
ret = qcow2_cache_put(bs, s->refcount_block_cache,
&refcount_block);
if (ret < 0) {
goto fail;
}
}
ret = alloc_refcount_block(bs, cluster_index, &refcount_block);
if (ret < 0) {
goto fail;
@@ -726,8 +737,7 @@ static int QEMU_WARN_UNUSED_RESULT update_refcount(BlockDriverState *bs,
}
old_table_index = table_index;
qcow2_cache_entry_mark_dirty(bs, s->refcount_block_cache,
refcount_block);
qcow2_cache_entry_mark_dirty(s->refcount_block_cache, refcount_block);
/* we can update the count and save it */
block_index = cluster_index & (s->refcount_block_size - 1);
@@ -763,7 +773,11 @@ fail:
/* Write last changed block to disk */
if (refcount_block) {
qcow2_cache_put(bs, s->refcount_block_cache, &refcount_block);
int wret;
wret = qcow2_cache_put(bs, s->refcount_block_cache, &refcount_block);
if (wret < 0) {
return ret < 0 ? ret : wret;
}
}
/*
@@ -940,21 +954,19 @@ int64_t qcow2_alloc_bytes(BlockDriverState *bs, int size)
}
free_in_cluster = s->cluster_size - offset_into_cluster(s, offset);
do {
if (!offset || free_in_cluster < size) {
int64_t new_cluster = alloc_clusters_noref(bs, s->cluster_size);
if (new_cluster < 0) {
return new_cluster;
}
if (!offset || ROUND_UP(offset, s->cluster_size) != new_cluster) {
offset = new_cluster;
}
if (!offset || free_in_cluster < size) {
int64_t new_cluster = alloc_clusters_noref(bs, s->cluster_size);
if (new_cluster < 0) {
return new_cluster;
}
assert(offset);
ret = update_refcount(bs, offset, size, 1, false, QCOW2_DISCARD_NEVER);
} while (ret == -EAGAIN);
if (!offset || ROUND_UP(offset, s->cluster_size) != new_cluster) {
offset = new_cluster;
}
}
assert(offset);
ret = update_refcount(bs, offset, size, 1, false, QCOW2_DISCARD_NEVER);
if (ret < 0) {
return ret;
}
@@ -1170,12 +1182,15 @@ int qcow2_update_snapshot_refcount(BlockDriverState *bs,
s->refcount_block_cache);
}
l2_table[j] = cpu_to_be64(offset);
qcow2_cache_entry_mark_dirty(bs, s->l2_table_cache,
l2_table);
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
}
}
qcow2_cache_put(bs, s->l2_table_cache, (void **) &l2_table);
ret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
if (ret < 0) {
goto fail;
}
if (addend != 0) {
ret = qcow2_update_cluster_refcount(bs, l2_offset >>
@@ -2440,7 +2455,7 @@ int qcow2_pre_write_overlap_check(BlockDriverState *bs, int ign, int64_t offset,
if (ret < 0) {
return ret;
} else if (ret > 0) {
int metadata_ol_bitnr = ctz32(ret);
int metadata_ol_bitnr = ffs(ret) - 1;
assert(metadata_ol_bitnr < QCOW2_OL_MAX_BITNR);
qcow2_signal_corruption(bs, true, offset, size, "Preventing invalid "


@@ -25,7 +25,6 @@
#include "qemu-common.h"
#include "block/block_int.h"
#include "block/qcow2.h"
#include "qemu/error-report.h"
void qcow2_free_snapshots(BlockDriverState *bs)
{
@@ -352,8 +351,10 @@ int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
memset(sn, 0, sizeof(*sn));
/* Generate an ID */
find_new_snapshot_id(bs, sn_info->id_str, sizeof(sn_info->id_str));
/* Generate an ID if it wasn't passed */
if (sn_info->id_str[0] == '\0') {
find_new_snapshot_id(bs, sn_info->id_str, sizeof(sn_info->id_str));
}
/* Check that the ID is unique */
if (find_snapshot_by_id_and_name(bs, sn_info->id_str, NULL) >= 0) {


@@ -25,6 +25,7 @@
#include "block/block_int.h"
#include "qemu/module.h"
#include <zlib.h>
#include "qemu/aes.h"
#include "block/qcow2.h"
#include "qemu/error-report.h"
#include "qapi/qmp/qerror.h"
@@ -206,8 +207,8 @@ static void GCC_FMT_ATTR(3, 4) report_unsupported(BlockDriverState *bs,
vsnprintf(msg, sizeof(msg), fmt, ap);
va_end(ap);
error_setg(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bdrv_get_device_or_node_name(bs), "qcow2", msg);
error_set(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bdrv_get_device_name(bs), "qcow2", msg);
}
static void report_unsupported_feature(BlockDriverState *bs,
@@ -482,11 +483,9 @@ static const char *overlap_bool_option_names[QCOW2_OL_MAX_BITNR] = {
[QCOW2_OL_INACTIVE_L2_BITNR] = QCOW2_OPT_OVERLAP_INACTIVE_L2,
};
static void read_cache_sizes(BlockDriverState *bs, QemuOpts *opts,
uint64_t *l2_cache_size,
static void read_cache_sizes(QemuOpts *opts, uint64_t *l2_cache_size,
uint64_t *refcount_cache_size, Error **errp)
{
BDRVQcowState *s = bs->opaque;
uint64_t combined_cache_size;
bool l2_cache_size_set, refcount_cache_size_set, combined_cache_size_set;
@@ -526,9 +525,7 @@ static void read_cache_sizes(BlockDriverState *bs, QemuOpts *opts,
}
} else {
if (!l2_cache_size_set && !refcount_cache_size_set) {
*l2_cache_size = MAX(DEFAULT_L2_CACHE_BYTE_SIZE,
(uint64_t)DEFAULT_L2_CACHE_CLUSTERS
* s->cluster_size);
*l2_cache_size = DEFAULT_L2_CACHE_BYTE_SIZE;
*refcount_cache_size = *l2_cache_size
/ DEFAULT_L2_REFCOUNT_SIZE_RATIO;
} else if (!l2_cache_size_set) {
@@ -698,11 +695,6 @@ static int qcow2_open(BlockDriverState *bs, QDict *options, int flags,
ret = -EINVAL;
goto fail;
}
if (!qcrypto_cipher_supports(QCRYPTO_CIPHER_ALG_AES_128)) {
error_setg(errp, "AES cipher not available");
ret = -EINVAL;
goto fail;
}
s->crypt_method_header = header.crypt_method;
if (s->crypt_method_header) {
bs->encrypted = 1;
@@ -811,8 +803,7 @@ static int qcow2_open(BlockDriverState *bs, QDict *options, int flags,
goto fail;
}
read_cache_sizes(bs, opts, &l2_cache_size, &refcount_cache_size,
&local_err);
read_cache_sizes(opts, &l2_cache_size, &refcount_cache_size, &local_err);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
@@ -1036,7 +1027,6 @@ static int qcow2_set_key(BlockDriverState *bs, const char *key)
BDRVQcowState *s = bs->opaque;
uint8_t keybuf[16];
int len, i;
Error *err = NULL;
memset(keybuf, 0, 16);
len = strlen(key);
@@ -1047,22 +1037,30 @@ static int qcow2_set_key(BlockDriverState *bs, const char *key)
for(i = 0;i < len;i++) {
keybuf[i] = key[i];
}
assert(bs->encrypted);
s->crypt_method = s->crypt_method_header;
qcrypto_cipher_free(s->cipher);
s->cipher = qcrypto_cipher_new(
QCRYPTO_CIPHER_ALG_AES_128,
QCRYPTO_CIPHER_MODE_CBC,
keybuf, G_N_ELEMENTS(keybuf),
&err);
if (!s->cipher) {
/* XXX would be nice if errors in this method could
* be properly propagated to the caller. Would need
* the bdrv_set_key() API signature to be fixed. */
error_free(err);
if (AES_set_encrypt_key(keybuf, 128, &s->aes_encrypt_key) != 0)
return -1;
if (AES_set_decrypt_key(keybuf, 128, &s->aes_decrypt_key) != 0)
return -1;
#if 0
/* test */
{
uint8_t in[16];
uint8_t out[16];
uint8_t tmp[16];
for(i=0;i<16;i++)
in[i] = i;
AES_encrypt(in, tmp, &s->aes_encrypt_key);
AES_decrypt(tmp, out, &s->aes_decrypt_key);
for(i = 0; i < 16; i++)
printf(" %02x", tmp[i]);
printf("\n");
for(i = 0; i < 16; i++)
printf(" %02x", out[i]);
printf("\n");
}
#endif
return 0;
}
@@ -1105,7 +1103,7 @@ static int64_t coroutine_fn qcow2_co_get_block_status(BlockDriverState *bs,
}
if (cluster_offset != 0 && ret != QCOW2_CLUSTER_COMPRESSED &&
!s->cipher) {
!s->crypt_method) {
index_in_cluster = sector_num & (s->cluster_sectors - 1);
cluster_offset |= (index_in_cluster << BDRV_SECTOR_BITS);
status |= BDRV_BLOCK_OFFSET_VALID | cluster_offset;
@@ -1155,7 +1153,7 @@ static coroutine_fn int qcow2_co_readv(BlockDriverState *bs, int64_t sector_num,
/* prepare next request */
cur_nr_sectors = remaining_sectors;
if (s->cipher) {
if (s->crypt_method) {
cur_nr_sectors = MIN(cur_nr_sectors,
QCOW_MAX_CRYPT_CLUSTERS * s->cluster_sectors);
}
@@ -1226,9 +1224,7 @@ static coroutine_fn int qcow2_co_readv(BlockDriverState *bs, int64_t sector_num,
goto fail;
}
if (bs->encrypted) {
assert(s->cipher);
if (s->crypt_method) {
/*
* For encrypted images, read everything into a temporary
* contiguous buffer on which the AES functions can work.
@@ -1259,16 +1255,9 @@ static coroutine_fn int qcow2_co_readv(BlockDriverState *bs, int64_t sector_num,
if (ret < 0) {
goto fail;
}
if (bs->encrypted) {
assert(s->cipher);
Error *err = NULL;
if (qcow2_encrypt_sectors(s, sector_num, cluster_data,
cluster_data, cur_nr_sectors, false,
&err) < 0) {
error_free(err);
ret = -EIO;
goto fail;
}
if (s->crypt_method) {
qcow2_encrypt_sectors(s, sector_num, cluster_data,
cluster_data, cur_nr_sectors, 0, &s->aes_decrypt_key);
qemu_iovec_from_buf(qiov, bytes_done,
cluster_data, 512 * cur_nr_sectors);
}
@@ -1326,7 +1315,7 @@ static coroutine_fn int qcow2_co_writev(BlockDriverState *bs,
trace_qcow2_writev_start_part(qemu_coroutine_self());
index_in_cluster = sector_num & (s->cluster_sectors - 1);
cur_nr_sectors = remaining_sectors;
if (bs->encrypted &&
if (s->crypt_method &&
cur_nr_sectors >
QCOW_MAX_CRYPT_CLUSTERS * s->cluster_sectors - index_in_cluster) {
cur_nr_sectors =
@@ -1345,9 +1334,7 @@ static coroutine_fn int qcow2_co_writev(BlockDriverState *bs,
qemu_iovec_concat(&hd_qiov, qiov, bytes_done,
cur_nr_sectors * 512);
if (bs->encrypted) {
Error *err = NULL;
assert(s->cipher);
if (s->crypt_method) {
if (!cluster_data) {
cluster_data = qemu_try_blockalign(bs->file,
QCOW_MAX_CRYPT_CLUSTERS
@@ -1362,13 +1349,8 @@ static coroutine_fn int qcow2_co_writev(BlockDriverState *bs,
QCOW_MAX_CRYPT_CLUSTERS * s->cluster_size);
qemu_iovec_to_buf(&hd_qiov, 0, cluster_data, hd_qiov.size);
if (qcow2_encrypt_sectors(s, sector_num, cluster_data,
cluster_data, cur_nr_sectors,
true, &err) < 0) {
error_free(err);
ret = -EIO;
goto fail;
}
qcow2_encrypt_sectors(s, sector_num, cluster_data,
cluster_data, cur_nr_sectors, 1, &s->aes_encrypt_key);
qemu_iovec_reset(&hd_qiov);
qemu_iovec_add(&hd_qiov, cluster_data,
@@ -1474,9 +1456,6 @@ static void qcow2_close(BlockDriverState *bs)
qcow2_cache_destroy(bs, s->l2_table_cache);
qcow2_cache_destroy(bs, s->refcount_block_cache);
qcrypto_cipher_free(s->cipher);
s->cipher = NULL;
g_free(s->unknown_header_fields);
cleanup_unknown_header_ext(bs);
@@ -1493,7 +1472,9 @@ static void qcow2_invalidate_cache(BlockDriverState *bs, Error **errp)
{
BDRVQcowState *s = bs->opaque;
int flags = s->flags;
QCryptoCipher *cipher = NULL;
AES_KEY aes_encrypt_key;
AES_KEY aes_decrypt_key;
uint32_t crypt_method = 0;
QDict *options;
Error *local_err = NULL;
int ret;
@@ -1503,8 +1484,11 @@ static void qcow2_invalidate_cache(BlockDriverState *bs, Error **errp)
* that means we don't have to worry about reopening them here.
*/
cipher = s->cipher;
s->cipher = NULL;
if (s->crypt_method) {
crypt_method = s->crypt_method;
memcpy(&aes_encrypt_key, &s->aes_encrypt_key, sizeof(aes_encrypt_key));
memcpy(&aes_decrypt_key, &s->aes_decrypt_key, sizeof(aes_decrypt_key));
}
qcow2_close(bs);
@@ -1529,7 +1513,11 @@ static void qcow2_invalidate_cache(BlockDriverState *bs, Error **errp)
return;
}
s->cipher = cipher;
if (crypt_method) {
s->crypt_method = crypt_method;
memcpy(&s->aes_encrypt_key, &aes_encrypt_key, sizeof(aes_encrypt_key));
memcpy(&s->aes_decrypt_key, &aes_decrypt_key, sizeof(aes_decrypt_key));
}
}
static size_t header_ext_add(char *buf, uint32_t magic, const void *s,
@@ -1814,7 +1802,7 @@ static int qcow2_create2(const char *filename, int64_t total_size,
{
/* Calculate cluster_bits */
int cluster_bits;
cluster_bits = ctz32(cluster_size);
cluster_bits = ffs(cluster_size) - 1;
if (cluster_bits < MIN_CLUSTER_BITS || cluster_bits > MAX_CLUSTER_BITS ||
(1 << cluster_bits) != cluster_size)
{
@@ -2122,7 +2110,7 @@ static int qcow2_create(const char *filename, QemuOpts *opts, Error **errp)
goto finish;
}
refcount_order = ctz32(refcount_bits);
refcount_order = ffs(refcount_bits) - 1;
ret = qcow2_create2(filename, size, backing_file, backing_fmt, flags,
cluster_size, prealloc, opts, version, refcount_order,
@@ -2730,9 +2718,8 @@ static int qcow2_amend_options(BlockDriverState *bs, QemuOpts *opts,
backing_format = qemu_opt_get(opts, BLOCK_OPT_BACKING_FMT);
} else if (!strcmp(desc->name, BLOCK_OPT_ENCRYPT)) {
encrypt = qemu_opt_get_bool(opts, BLOCK_OPT_ENCRYPT,
!!s->cipher);
if (encrypt != !!s->cipher) {
s->crypt_method);
if (encrypt != !!s->crypt_method) {
fprintf(stderr, "Changing the encryption flag is not "
"supported.\n");
return -ENOTSUP;
@@ -2837,7 +2824,6 @@ void qcow2_signal_corruption(BlockDriverState *bs, bool fatal, int64_t offset,
int64_t size, const char *message_format, ...)
{
BDRVQcowState *s = bs->opaque;
const char *node_name;
char *message;
va_list ap;
@@ -2861,11 +2847,8 @@ void qcow2_signal_corruption(BlockDriverState *bs, bool fatal, int64_t offset,
"corruption events will be suppressed\n", message);
}
node_name = bdrv_get_node_name(bs);
qapi_event_send_block_image_corrupted(bdrv_get_device_name(bs),
*node_name != '\0', node_name,
message, offset >= 0, offset,
size >= 0, size,
qapi_event_send_block_image_corrupted(bdrv_get_device_name(bs), message,
offset >= 0, offset, size >= 0, size,
fatal, &error_abort);
g_free(message);


@@ -25,7 +25,7 @@
#ifndef BLOCK_QCOW2_H
#define BLOCK_QCOW2_H
#include "crypto/cipher.h"
#include "qemu/aes.h"
#include "block/coroutine.h"
//#define DEBUG_ALLOC
@@ -68,8 +68,6 @@
/* Must be at least 4 to cover all cases of refcount table growth */
#define MIN_REFCOUNT_CACHE_SIZE 4 /* clusters */
/* Whichever is more */
#define DEFAULT_L2_CACHE_CLUSTERS 8 /* clusters */
#define DEFAULT_L2_CACHE_BYTE_SIZE 1048576 /* bytes */
/* The refblock cache needs only a fourth of the L2 cache size to cover as many
@@ -253,8 +251,10 @@ typedef struct BDRVQcowState {
CoMutex lock;
QCryptoCipher *cipher; /* current cipher, NULL if no key yet */
uint32_t crypt_method; /* current crypt method, 0 if no key yet */
uint32_t crypt_method_header;
AES_KEY aes_encrypt_key;
AES_KEY aes_decrypt_key;
uint64_t snapshots_offset;
int snapshots_size;
unsigned int nb_snapshots;
@@ -534,9 +534,10 @@ int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
int qcow2_write_l1_entry(BlockDriverState *bs, int l1_index);
void qcow2_l2_cache_reset(BlockDriverState *bs);
int qcow2_decompress_cluster(BlockDriverState *bs, uint64_t cluster_offset);
int qcow2_encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
uint8_t *out_buf, const uint8_t *in_buf,
int nb_sectors, bool enc, Error **errp);
void qcow2_encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
uint8_t *out_buf, const uint8_t *in_buf,
int nb_sectors, int enc,
const AES_KEY *key);
int qcow2_get_cluster_offset(BlockDriverState *bs, uint64_t offset,
int *num, uint64_t *cluster_offset);
@@ -574,8 +575,7 @@ int qcow2_read_snapshots(BlockDriverState *bs);
Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables);
int qcow2_cache_destroy(BlockDriverState* bs, Qcow2Cache *c);
void qcow2_cache_entry_mark_dirty(BlockDriverState *bs, Qcow2Cache *c,
void *table);
void qcow2_cache_entry_mark_dirty(Qcow2Cache *c, void *table);
int qcow2_cache_flush(BlockDriverState *bs, Qcow2Cache *c);
int qcow2_cache_set_dependency(BlockDriverState *bs, Qcow2Cache *c,
Qcow2Cache *dependency);
@@ -587,6 +587,6 @@ int qcow2_cache_get(BlockDriverState *bs, Qcow2Cache *c, uint64_t offset,
void **table);
int qcow2_cache_get_empty(BlockDriverState *bs, Qcow2Cache *c, uint64_t offset,
void **table);
void qcow2_cache_put(BlockDriverState *bs, Qcow2Cache *c, void **table);
int qcow2_cache_put(BlockDriverState *bs, Qcow2Cache *c, void **table);
#endif


@@ -407,8 +407,8 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
char buf[64];
snprintf(buf, sizeof(buf), "%" PRIx64,
s->header.features & ~QED_FEATURE_MASK);
error_setg(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bdrv_get_device_or_node_name(bs), "QED", buf);
error_set(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bdrv_get_device_name(bs), "QED", buf);
return -ENOTSUP;
}
if (!qed_is_cluster_size_valid(s->header.cluster_size)) {
@@ -436,9 +436,9 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
s->table_nelems = (s->header.cluster_size * s->header.table_size) /
sizeof(uint64_t);
s->l2_shift = ctz32(s->header.cluster_size);
s->l2_shift = ffs(s->header.cluster_size) - 1;
s->l2_mask = s->table_nelems - 1;
s->l1_shift = s->l2_shift + ctz32(s->table_nelems);
s->l1_shift = s->l2_shift + ffs(s->table_nelems) - 1;
/* Header size calculation must not overflow uint32_t */
if (s->header.header_size > UINT32_MAX / s->header.cluster_size) {


@@ -13,16 +13,16 @@
* See the COPYING file in the top-level directory.
*/
#include <gnutls/gnutls.h>
#include <gnutls/crypto.h>
#include "block/block_int.h"
#include "qapi/qmp/qbool.h"
#include "qapi/qmp/qdict.h"
#include "qapi/qmp/qerror.h"
#include "qapi/qmp/qint.h"
#include "qapi/qmp/qjson.h"
#include "qapi/qmp/qlist.h"
#include "qapi/qmp/qstring.h"
#include "qapi-event.h"
#include "crypto/hash.h"
#define HASH_LENGTH 32
@@ -33,7 +33,7 @@
/* This union holds a vote hash value */
typedef union QuorumVoteValue {
uint8_t h[HASH_LENGTH]; /* SHA-256 hash */
char h[HASH_LENGTH]; /* SHA-256 hash */
int64_t l; /* simpler 64 bits hash */
} QuorumVoteValue;
@@ -226,7 +226,10 @@ static void quorum_report_bad(QuorumAIOCB *acb, char *node_name, int ret)
static void quorum_report_failure(QuorumAIOCB *acb)
{
const char *reference = bdrv_get_device_or_node_name(acb->common.bs);
const char *reference = bdrv_get_device_name(acb->common.bs)[0] ?
bdrv_get_device_name(acb->common.bs) :
acb->common.bs->node_name;
qapi_event_send_quorum_failure(reference, acb->sector_num,
acb->nb_sectors, &error_abort);
}
@@ -427,21 +430,25 @@ static void quorum_free_vote_list(QuorumVotes *votes)
static int quorum_compute_hash(QuorumAIOCB *acb, int i, QuorumVoteValue *hash)
{
int j, ret;
gnutls_hash_hd_t dig;
QEMUIOVector *qiov = &acb->qcrs[i].qiov;
size_t len = sizeof(hash->h);
uint8_t *data = hash->h;
/* XXX - would be nice if we could pass in the Error **
* and propagate that back, but this quorum code is
* restricted to just errno values currently */
if (qcrypto_hash_bytesv(QCRYPTO_HASH_ALG_SHA256,
qiov->iov, qiov->niov,
&data, &len,
NULL) < 0) {
return -EINVAL;
ret = gnutls_hash_init(&dig, GNUTLS_DIG_SHA256);
if (ret < 0) {
return ret;
}
return 0;
for (j = 0; j < qiov->niov; j++) {
ret = gnutls_hash(dig, qiov->iov[j].iov_base, qiov->iov[j].iov_len);
if (ret < 0) {
break;
}
}
gnutls_hash_deinit(dig, (void *) hash);
return ret;
}
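
Both versions of quorum_compute_hash() fold every element of the request's iovec into a single SHA-256 digest; only the hash backend differs. As a point of reference, a minimal sketch of the gnutls flavour over one flat buffer (assumes a system with the gnutls development headers; the input string is arbitrary and error handling is trimmed):

#include <gnutls/gnutls.h>
#include <gnutls/crypto.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char buf[] = "quorum test data";
    unsigned char digest[32];            /* SHA-256 output */
    gnutls_hash_hd_t dig;

    if (gnutls_hash_init(&dig, GNUTLS_DIG_SHA256) < 0) {
        return 1;
    }
    gnutls_hash(dig, buf, strlen(buf));  /* may be called once per iovec element */
    gnutls_hash_deinit(dig, digest);     /* finalizes and stores the digest */

    for (int i = 0; i < 32; i++) {
        printf("%02x", digest[i]);
    }
    printf("\n");
    return 0;
}
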
static QuorumVoteVersion *quorum_get_vote_winner(QuorumVotes *votes)
@@ -796,8 +803,8 @@ static int quorum_valid_threshold(int threshold, int num_children, Error **errp)
{
if (threshold < 1) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
"vote-threshold", "value >= 1");
error_set(errp, QERR_INVALID_PARAMETER_VALUE,
"vote-threshold", "value >= 1");
return -ERANGE;
}
@@ -862,18 +869,25 @@ static int quorum_open(BlockDriverState *bs, QDict *options, int flags,
Error *local_err = NULL;
QemuOpts *opts = NULL;
bool *opened;
QDict *sub = NULL;
QList *list = NULL;
const QListEntry *lentry;
int i;
int ret = 0;
qdict_flatten(options);
qdict_extract_subqdict(options, &sub, "children.");
qdict_array_split(sub, &list);
/* count how many different children are present */
s->num_children = qdict_array_entries(options, "children.");
if (s->num_children < 0) {
error_setg(&local_err, "Option children is not a valid array");
if (qdict_size(sub)) {
error_setg(&local_err, "Invalid option children.%s",
qdict_first(sub)->key);
ret = -EINVAL;
goto exit;
}
/* count how many different children are present */
s->num_children = qlist_size(list);
if (s->num_children < 2) {
error_setg(&local_err,
"Number of provided children must be greater than 1");
@@ -926,17 +940,37 @@ static int quorum_open(BlockDriverState *bs, QDict *options, int flags,
s->bs = g_new0(BlockDriverState *, s->num_children);
opened = g_new0(bool, s->num_children);
for (i = 0; i < s->num_children; i++) {
char indexstr[32];
ret = snprintf(indexstr, 32, "children.%d", i);
assert(ret < 32);
for (i = 0, lentry = qlist_first(list); lentry;
lentry = qlist_next(lentry), i++) {
QDict *d;
QString *string;
switch (qobject_type(lentry->value))
{
/* List of options */
case QTYPE_QDICT:
d = qobject_to_qdict(lentry->value);
QINCREF(d);
ret = bdrv_open(&s->bs[i], NULL, NULL, d, flags, NULL,
&local_err);
break;
/* QMP reference */
case QTYPE_QSTRING:
string = qobject_to_qstring(lentry->value);
ret = bdrv_open(&s->bs[i], NULL, qstring_get_str(string), NULL,
flags, NULL, &local_err);
break;
default:
error_setg(&local_err, "Specification of child block device %i "
"is invalid", i);
ret = -EINVAL;
}
ret = bdrv_open_image(&s->bs[i], NULL, options, indexstr, bs,
&child_format, false, &local_err);
if (ret < 0) {
goto close_exit;
}
opened[i] = true;
}
@@ -959,6 +993,8 @@ exit:
if (local_err) {
error_propagate(errp, local_err);
}
QDECREF(list);
QDECREF(sub);
return ret;
}
@@ -1020,9 +1056,9 @@ static void quorum_refresh_filename(BlockDriverState *bs)
qdict_put_obj(opts, QUORUM_OPT_VOTE_THRESHOLD,
QOBJECT(qint_from_int(s->threshold)));
qdict_put_obj(opts, QUORUM_OPT_BLKVERIFY,
QOBJECT(qbool_from_bool(s->is_blkverify)));
QOBJECT(qbool_from_int(s->is_blkverify)));
qdict_put_obj(opts, QUORUM_OPT_REWRITE,
QOBJECT(qbool_from_bool(s->rewrite_corrupted)));
QOBJECT(qbool_from_int(s->rewrite_corrupted)));
qdict_put_obj(opts, "children", QOBJECT(children));
bs->full_open_options = opts;
@@ -1055,10 +1091,6 @@ static BlockDriver bdrv_quorum = {
static void bdrv_quorum_init(void)
{
if (!qcrypto_hash_supports(QCRYPTO_HASH_ALG_SHA256)) {
/* SHA256 hash support is required for quorum device */
return;
}
bdrv_register(&bdrv_quorum);
}

View File

@@ -22,7 +22,6 @@
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "qemu/error-report.h"
#include "qemu/timer.h"
#include "qemu/log.h"
#include "block/block_int.h"
@@ -32,7 +31,6 @@
#include "qemu/iov.h"
#include "raw-aio.h"
#include "qapi/util.h"
#include "qapi/qmp/qstring.h"
#if defined(__APPLE__) && (__MACH__)
#include <paths.h>
@@ -59,7 +57,6 @@
#include <linux/fd.h>
#include <linux/fs.h>
#include <linux/hdreg.h>
#include <scsi/sg.h>
#ifdef __s390__
#include <asm/dasd.h>
#endif
@@ -97,19 +94,15 @@
#include <xfs/xfs.h>
#endif
//#define DEBUG_BLOCK
//#define DEBUG_FLOPPY
#ifdef DEBUG_BLOCK
# define DEBUG_BLOCK_PRINT 1
//#define DEBUG_BLOCK
#if defined(DEBUG_BLOCK)
#define DEBUG_BLOCK_PRINT(formatCstr, ...) do { if (qemu_log_enabled()) \
{ qemu_log(formatCstr, ## __VA_ARGS__); qemu_log_flush(); } } while (0)
#else
# define DEBUG_BLOCK_PRINT 0
#define DEBUG_BLOCK_PRINT(formatCstr, ...)
#endif
#define DPRINTF(fmt, ...) \
do { \
if (DEBUG_BLOCK_PRINT) { \
printf(fmt, ## __VA_ARGS__); \
} \
} while (0)
/* OS X does not have O_DSYNC */
#ifndef O_DSYNC
@@ -308,11 +301,10 @@ static void raw_probe_alignment(BlockDriverState *bs, int fd, Error **errp)
{
BDRVRawState *s = bs->opaque;
char *buf;
size_t max_align = MAX(MAX_BLOCKSIZE, getpagesize());
/* For SCSI generic devices the alignment is not really used.
/* For /dev/sg devices the alignment is not really used.
With buffered I/O, we don't have any restrictions. */
if (bdrv_is_sg(bs) || !s->needs_alignment) {
if (bs->sg || !s->needs_alignment) {
bs->request_alignment = 1;
s->buf_align = 1;
return;
@@ -338,9 +330,9 @@ static void raw_probe_alignment(BlockDriverState *bs, int fd, Error **errp)
/* If we could not get the sizes so far, we can only guess them */
if (!s->buf_align) {
size_t align;
buf = qemu_memalign(max_align, 2 * max_align);
for (align = 512; align <= max_align; align <<= 1) {
if (raw_is_io_aligned(fd, buf + align, max_align)) {
buf = qemu_memalign(MAX_BLOCKSIZE, 2 * MAX_BLOCKSIZE);
for (align = 512; align <= MAX_BLOCKSIZE; align <<= 1) {
if (raw_is_io_aligned(fd, buf + align, MAX_BLOCKSIZE)) {
s->buf_align = align;
break;
}
@@ -350,8 +342,8 @@ static void raw_probe_alignment(BlockDriverState *bs, int fd, Error **errp)
if (!bs->request_alignment) {
size_t align;
buf = qemu_memalign(s->buf_align, max_align);
for (align = 512; align <= max_align; align <<= 1) {
buf = qemu_memalign(s->buf_align, MAX_BLOCKSIZE);
for (align = 512; align <= MAX_BLOCKSIZE; align <<= 1) {
if (raw_is_io_aligned(fd, buf, align)) {
bs->request_alignment = align;
break;
@@ -733,8 +725,7 @@ static void raw_refresh_limits(BlockDriverState *bs, Error **errp)
BDRVRawState *s = bs->opaque;
raw_probe_alignment(bs, s->fd, errp);
bs->bl.min_mem_alignment = s->buf_align;
bs->bl.opt_mem_alignment = MAX(s->buf_align, getpagesize());
bs->bl.opt_mem_alignment = s->buf_align;
}
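
raw_probe_alignment() above falls back to probing when the kernel does not report sizes: it retries direct I/O at power-of-two alignments starting from 512 bytes and keeps the first one that works. A self-contained, Linux-only sketch of that idea; the file path and the upper bound are placeholders, not values from QEMU:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define MAX_ALIGN 65536   /* arbitrary upper bound for the probe */

int main(void)
{
    int fd = open("/tmp/probe.img", O_RDONLY | O_DIRECT);
    void *buf;
    size_t align;

    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (posix_memalign(&buf, MAX_ALIGN, MAX_ALIGN) != 0) {
        close(fd);
        return 1;
    }

    for (align = 512; align <= MAX_ALIGN; align <<= 1) {
        /* With O_DIRECT, pread() fails with EINVAL unless buffer, offset and
         * length all satisfy the device's alignment requirement. */
        if (pread(fd, buf, align, 0) >= 0) {
            printf("first working alignment: %zu\n", align);
            break;
        }
    }

    free(buf);
    close(fd);
    return 0;
}
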
static int check_for_dasd(int fd)
@@ -1025,7 +1016,6 @@ static ssize_t handle_aiocb_rw(RawPosixAIOData *aiocb)
static int xfs_write_zeroes(BDRVRawState *s, int64_t offset, uint64_t bytes)
{
struct xfs_flock64 fl;
int err;
memset(&fl, 0, sizeof(fl));
fl.l_whence = SEEK_SET;
@@ -1033,9 +1023,8 @@ static int xfs_write_zeroes(BDRVRawState *s, int64_t offset, uint64_t bytes)
fl.l_len = bytes;
if (xfsctl(NULL, s->fd, XFS_IOC_ZERO_RANGE, &fl) < 0) {
err = errno;
DPRINTF("cannot write zero range (%s)\n", strerror(errno));
return -err;
DEBUG_BLOCK_PRINT("cannot write zero range (%s)\n", strerror(errno));
return -errno;
}
return 0;
@@ -1044,7 +1033,6 @@ static int xfs_write_zeroes(BDRVRawState *s, int64_t offset, uint64_t bytes)
static int xfs_discard(BDRVRawState *s, int64_t offset, uint64_t bytes)
{
struct xfs_flock64 fl;
int err;
memset(&fl, 0, sizeof(fl));
fl.l_whence = SEEK_SET;
@@ -1052,9 +1040,8 @@ static int xfs_discard(BDRVRawState *s, int64_t offset, uint64_t bytes)
fl.l_len = bytes;
if (xfsctl(NULL, s->fd, XFS_IOC_UNRESVSP64, &fl) < 0) {
err = errno;
DPRINTF("cannot punch hole (%s)\n", strerror(errno));
return -err;
DEBUG_BLOCK_PRINT("cannot punch hole (%s)\n", strerror(errno));
return -errno;
}
return 0;
@@ -1859,9 +1846,8 @@ static int64_t coroutine_fn raw_co_get_block_status(BlockDriverState *bs,
*pnum = nb_sectors;
ret = BDRV_BLOCK_DATA;
} else if (data == start) {
/* On a data extent, compute sectors to the end of the extent,
* possibly including a partial sector at EOF. */
*pnum = MIN(nb_sectors, DIV_ROUND_UP(hole - start, BDRV_SECTOR_SIZE));
/* On a data extent, compute sectors to the end of the extent. */
*pnum = MIN(nb_sectors, (hole - start) / BDRV_SECTOR_SIZE);
ret = BDRV_BLOCK_DATA;
} else {
/* On a hole, compute sectors to the beginning of the next extent. */
@@ -2087,38 +2073,15 @@ static void hdev_parse_filename(const char *filename, QDict *options,
qdict_put_obj(options, "filename", QOBJECT(qstring_from_str(filename)));
}
static bool hdev_is_sg(BlockDriverState *bs)
{
#if defined(__linux__)
struct stat st;
struct sg_scsi_id scsiid;
int sg_version;
if (stat(bs->filename, &st) >= 0 && S_ISCHR(st.st_mode) &&
!bdrv_ioctl(bs, SG_GET_VERSION_NUM, &sg_version) &&
!bdrv_ioctl(bs, SG_GET_SCSI_ID, &scsiid)) {
DPRINTF("SG device found: type=%d, version=%d\n",
scsiid.scsi_type, sg_version);
return true;
}
#endif
return false;
}
static int hdev_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVRawState *s = bs->opaque;
Error *local_err = NULL;
int ret;
#if defined(__APPLE__) && defined(__MACH__)
const char *filename = qdict_get_str(options, "filename");
#if defined(__APPLE__) && defined(__MACH__)
if (strstart(filename, "/dev/cdrom", NULL)) {
kern_return_t kernResult;
io_iterator_t mediaIterator;
@@ -2147,6 +2110,16 @@ static int hdev_open(BlockDriverState *bs, QDict *options, int flags,
#endif
s->type = FTYPE_FILE;
#if defined(__linux__)
{
char resolved_path[ MAXPATHLEN ], *temp;
temp = realpath(filename, resolved_path);
if (temp && strstart(temp, "/dev/sg", NULL)) {
bs->sg = 1;
}
}
#endif
ret = raw_open_common(bs, options, flags, 0, &local_err);
if (ret < 0) {
@@ -2156,9 +2129,6 @@ static int hdev_open(BlockDriverState *bs, QDict *options, int flags,
return ret;
}
/* Since this does ioctl the device must be already opened */
bs->sg = hdev_is_sg(bs);
if (flags & BDRV_O_RDWR) {
ret = check_hdev_writable(s);
if (ret < 0) {
@@ -2187,12 +2157,16 @@ static int fd_open(BlockDriverState *bs)
(qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - s->fd_open_time) >= FD_OPEN_TIMEOUT) {
qemu_close(s->fd);
s->fd = -1;
DPRINTF("Floppy closed\n");
#ifdef DEBUG_FLOPPY
printf("Floppy closed\n");
#endif
}
if (s->fd < 0) {
if (s->fd_got_error &&
(qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - s->fd_error_time) < FD_OPEN_TIMEOUT) {
DPRINTF("No floppy (open delayed)\n");
#ifdef DEBUG_FLOPPY
printf("No floppy (open delayed)\n");
#endif
return -EIO;
}
s->fd = qemu_open(bs->filename, s->open_flags & ~O_NONBLOCK);
@@ -2201,10 +2175,14 @@ static int fd_open(BlockDriverState *bs)
s->fd_got_error = 1;
if (last_media_present)
s->fd_media_changed = 1;
DPRINTF("No floppy\n");
#ifdef DEBUG_FLOPPY
printf("No floppy\n");
#endif
return -EIO;
}
DPRINTF("Floppy opened\n");
#ifdef DEBUG_FLOPPY
printf("Floppy opened\n");
#endif
}
if (!last_media_present)
s->fd_media_changed = 1;
@@ -2430,8 +2408,7 @@ static int floppy_probe_device(const char *filename)
struct stat st;
if (strstart(filename, "/dev/fd", NULL) &&
!strstart(filename, "/dev/fdset/", NULL) &&
!strstart(filename, "/dev/fd/", NULL)) {
!strstart(filename, "/dev/fdset/", NULL)) {
prio = 50;
}
@@ -2473,7 +2450,9 @@ static int floppy_media_changed(BlockDriverState *bs)
fd_open(bs);
ret = s->fd_media_changed;
s->fd_media_changed = 0;
DPRINTF("Floppy changed=%d\n", ret);
#ifdef DEBUG_FLOPPY
printf("Floppy changed=%d\n", ret);
#endif
return ret;
}


@@ -29,7 +29,6 @@
#include "trace.h"
#include "block/thread-pool.h"
#include "qemu/iov.h"
#include "qapi/qmp/qstring.h"
#include <windows.h>
#include <winioctl.h>


@@ -74,18 +74,25 @@ typedef struct RBDAIOCB {
QEMUIOVector *qiov;
char *bounce;
RBDAIOCmd cmd;
int64_t sector_num;
int error;
struct BDRVRBDState *s;
int status;
} RBDAIOCB;
typedef struct RADOSCB {
int rcbid;
RBDAIOCB *acb;
struct BDRVRBDState *s;
int done;
int64_t size;
char *buf;
int64_t ret;
} RADOSCB;
#define RBD_FD_READ 0
#define RBD_FD_WRITE 1
typedef struct BDRVRBDState {
rados_t cluster;
rados_ioctx_t io_ctx;
@@ -228,9 +235,7 @@ static char *qemu_rbd_parse_clientname(const char *conf, char *clientname)
return NULL;
}
static int qemu_rbd_set_conf(rados_t cluster, const char *conf,
bool only_read_conf_file,
Error **errp)
static int qemu_rbd_set_conf(rados_t cluster, const char *conf, Error **errp)
{
char *p, *buf;
char name[RBD_MAX_CONF_NAME_SIZE];
@@ -262,18 +267,14 @@ static int qemu_rbd_set_conf(rados_t cluster, const char *conf,
qemu_rbd_unescape(value);
if (strcmp(name, "conf") == 0) {
/* read the conf file alone, so it doesn't override more
specific settings for a particular device */
if (only_read_conf_file) {
ret = rados_conf_read_file(cluster, value);
if (ret < 0) {
error_setg(errp, "error reading conf file %s", value);
break;
}
ret = rados_conf_read_file(cluster, value);
if (ret < 0) {
error_setg(errp, "error reading conf file %s", value);
break;
}
} else if (strcmp(name, "id") == 0) {
/* ignore, this is parsed by qemu_rbd_parse_clientname() */
} else if (!only_read_conf_file) {
} else {
ret = rados_conf_set(cluster, name, value);
if (ret < 0) {
error_setg(errp, "invalid conf option %s", name);
@@ -324,7 +325,7 @@ static int qemu_rbd_create(const char *filename, QemuOpts *opts, Error **errp)
error_setg(errp, "obj size too small");
return -EINVAL;
}
obj_order = ctz32(objsize);
obj_order = ffs(objsize) - 1;
}
clientname = qemu_rbd_parse_clientname(conf, clientname_buf);
@@ -336,15 +337,10 @@ static int qemu_rbd_create(const char *filename, QemuOpts *opts, Error **errp)
if (strstr(conf, "conf=") == NULL) {
/* try default location, but ignore failure */
rados_conf_read_file(cluster, NULL);
} else if (conf[0] != '\0' &&
qemu_rbd_set_conf(cluster, conf, true, &local_err) < 0) {
rados_shutdown(cluster);
error_propagate(errp, local_err);
return -EIO;
}
if (conf[0] != '\0' &&
qemu_rbd_set_conf(cluster, conf, false, &local_err) < 0) {
qemu_rbd_set_conf(cluster, conf, &local_err) < 0) {
rados_shutdown(cluster);
error_propagate(errp, local_err);
return -EIO;
@@ -409,6 +405,7 @@ static void qemu_rbd_complete_aio(RADOSCB *rcb)
}
qemu_vfree(acb->bounce);
acb->common.cb(acb->common.opaque, (acb->ret > 0 ? 0 : acb->ret));
acb->status = 0;
qemu_aio_unref(acb);
}
@@ -471,23 +468,6 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags,
s->snap = g_strdup(snap_buf);
}
if (strstr(conf, "conf=") == NULL) {
/* try default location, but ignore failure */
rados_conf_read_file(s->cluster, NULL);
} else if (conf[0] != '\0') {
r = qemu_rbd_set_conf(s->cluster, conf, true, errp);
if (r < 0) {
goto failed_shutdown;
}
}
if (conf[0] != '\0') {
r = qemu_rbd_set_conf(s->cluster, conf, false, errp);
if (r < 0) {
goto failed_shutdown;
}
}
/*
* Fallback to more conservative semantics if setting cache
* options fails. Ignore errors from setting rbd_cache because the
@@ -501,6 +481,18 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags,
rados_conf_set(s->cluster, "rbd_cache", "true");
}
if (strstr(conf, "conf=") == NULL) {
/* try default location, but ignore failure */
rados_conf_read_file(s->cluster, NULL);
}
if (conf[0] != '\0') {
r = qemu_rbd_set_conf(s->cluster, conf, errp);
if (r < 0) {
goto failed_shutdown;
}
}
r = rados_connect(s->cluster);
if (r < 0) {
error_setg(errp, "error connecting");
@@ -629,6 +621,7 @@ static BlockAIOCB *rbd_start_aio(BlockDriverState *bs,
acb->error = 0;
acb->s = s;
acb->bh = NULL;
acb->status = -EINPROGRESS;
if (cmd == RBD_AIO_WRITE) {
qemu_iovec_to_buf(acb->qiov, 0, acb->bounce, qiov->size);
@@ -640,6 +633,7 @@ static BlockAIOCB *rbd_start_aio(BlockDriverState *bs,
size = nb_sectors * BDRV_SECTOR_SIZE;
rcb = g_new(RADOSCB, 1);
rcb->done = 0;
rcb->acb = acb;
rcb->buf = buf;
rcb->s = acb->s;
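
Two hunks in this comparison, the rbd obj_order line above and the sheepdog parse_block_size_shift hunk further down, trade ctz32(x) for ffs(x) - 1. For a power-of-two value both expressions yield log2(x). A minimal standalone sketch, assuming only the standard <strings.h> ffs() and a hand-rolled count-trailing-zeros loop:

  #include <assert.h>
  #include <stdint.h>
  #include <strings.h>   /* ffs() */

  /* Portable stand-in for ctz32(): count trailing zero bits. */
  static int my_ctz32(uint32_t v)
  {
      int n = 0;
      while (v && !(v & 1)) {
          v >>= 1;
          n++;
      }
      return n;
  }

  int main(void)
  {
      uint32_t objsize = UINT32_C(1) << 22;            /* power-of-two size */
      assert(my_ctz32(objsize) == ffs(objsize) - 1);   /* both equal 22 */
      return 0;
  }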

View File

@@ -318,10 +318,6 @@ enum AIOCBState {
AIOCB_DISCARD_OBJ,
};
#define AIOCBOverwrapping(x, y) \
(!(x->max_affect_data_idx < y->min_affect_data_idx \
|| y->max_affect_data_idx < x->min_affect_data_idx))
struct SheepdogAIOCB {
BlockAIOCB common;
@@ -338,11 +334,6 @@ struct SheepdogAIOCB {
bool cancelable;
int nr_pending;
uint32_t min_affect_data_idx;
uint32_t max_affect_data_idx;
QLIST_ENTRY(SheepdogAIOCB) aiocb_siblings;
};
typedef struct BDRVSheepdogState {
@@ -371,10 +362,8 @@ typedef struct BDRVSheepdogState {
/* Every aio request must be linked to either of these queues. */
QLIST_HEAD(inflight_aio_head, AIOReq) inflight_aio_head;
QLIST_HEAD(pending_aio_head, AIOReq) pending_aio_head;
QLIST_HEAD(failed_aio_head, AIOReq) failed_aio_head;
CoQueue overwrapping_queue;
QLIST_HEAD(inflight_aiocb_head, SheepdogAIOCB) inflight_aiocb_head;
} BDRVSheepdogState;
static const char * sd_strerror(int err)
@@ -509,7 +498,13 @@ static void sd_aio_cancel(BlockAIOCB *blockacb)
AIOReq *aioreq, *next;
if (sd_acb_cancelable(acb)) {
/* Remove outstanding requests from failed queue. */
/* Remove outstanding requests from pending and failed queues. */
QLIST_FOREACH_SAFE(aioreq, &s->pending_aio_head, aio_siblings,
next) {
if (aioreq->aiocb == acb) {
free_aio_req(s, aioreq);
}
}
QLIST_FOREACH_SAFE(aioreq, &s->failed_aio_head, aio_siblings,
next) {
if (aioreq->aiocb == acb) {
@@ -534,10 +529,6 @@ static SheepdogAIOCB *sd_aio_setup(BlockDriverState *bs, QEMUIOVector *qiov,
int64_t sector_num, int nb_sectors)
{
SheepdogAIOCB *acb;
uint32_t object_size;
BDRVSheepdogState *s = bs->opaque;
object_size = (UINT32_C(1) << s->inode.block_size_shift);
acb = qemu_aio_get(&sd_aiocb_info, bs, NULL, NULL);
@@ -551,11 +542,6 @@ static SheepdogAIOCB *sd_aio_setup(BlockDriverState *bs, QEMUIOVector *qiov,
acb->coroutine = qemu_coroutine_self();
acb->ret = 0;
acb->nr_pending = 0;
acb->min_affect_data_idx = acb->sector_num * BDRV_SECTOR_SIZE / object_size;
acb->max_affect_data_idx = (acb->sector_num * BDRV_SECTOR_SIZE +
acb->nb_sectors * BDRV_SECTOR_SIZE) / object_size;
return acb;
}
@@ -717,6 +703,38 @@ static int reload_inode(BDRVSheepdogState *s, uint32_t snapid, const char *tag);
static int get_sheep_fd(BDRVSheepdogState *s, Error **errp);
static void co_write_request(void *opaque);
static AIOReq *find_pending_req(BDRVSheepdogState *s, uint64_t oid)
{
AIOReq *aio_req;
QLIST_FOREACH(aio_req, &s->pending_aio_head, aio_siblings) {
if (aio_req->oid == oid) {
return aio_req;
}
}
return NULL;
}
/*
* This function searches pending requests to the object `oid', and
* sends them.
*/
static void coroutine_fn send_pending_req(BDRVSheepdogState *s, uint64_t oid)
{
AIOReq *aio_req;
SheepdogAIOCB *acb;
while ((aio_req = find_pending_req(s, oid)) != NULL) {
acb = aio_req->aiocb;
/* move aio_req from pending list to inflight one */
QLIST_REMOVE(aio_req, aio_siblings);
QLIST_INSERT_HEAD(&s->inflight_aio_head, aio_req, aio_siblings);
add_aio_request(s, aio_req, acb->qiov->iov, acb->qiov->niov,
acb->aiocb_type);
}
}
static coroutine_fn void reconnect_to_sdog(void *opaque)
{
BDRVSheepdogState *s = opaque;
@@ -822,6 +840,12 @@ static void coroutine_fn aio_read_response(void *opaque)
s->max_dirty_data_idx = MAX(idx, s->max_dirty_data_idx);
s->min_dirty_data_idx = MIN(idx, s->min_dirty_data_idx);
}
/*
* Some requests may be blocked because simultaneous
* create requests are not allowed, so we search the
* pending requests here.
*/
send_pending_req(s, aio_req->oid);
}
break;
case AIOCB_READ_UDATA:
@@ -1317,6 +1341,30 @@ out:
return ret;
}
/* Return true if the specified request is linked to the pending list. */
static bool check_simultaneous_create(BDRVSheepdogState *s, AIOReq *aio_req)
{
AIOReq *areq;
QLIST_FOREACH(areq, &s->inflight_aio_head, aio_siblings) {
if (areq != aio_req && areq->oid == aio_req->oid) {
/*
* Sheepdog cannot handle simultaneous create requests to the same
* object, so we cannot send the request until the previous request
* finishes.
*/
DPRINTF("simultaneous create to %" PRIx64 "\n", aio_req->oid);
aio_req->flags = 0;
aio_req->base_oid = 0;
aio_req->create = false;
QLIST_REMOVE(aio_req, aio_siblings);
QLIST_INSERT_HEAD(&s->pending_aio_head, aio_req, aio_siblings);
return true;
}
}
return false;
}
static void coroutine_fn resend_aioreq(BDRVSheepdogState *s, AIOReq *aio_req)
{
SheepdogAIOCB *acb = aio_req->aiocb;
@@ -1331,6 +1379,10 @@ static void coroutine_fn resend_aioreq(BDRVSheepdogState *s, AIOReq *aio_req)
goto out;
}
if (check_simultaneous_create(s, aio_req)) {
return;
}
if (s->inode.data_vdi_id[idx]) {
aio_req->base_oid = vid_to_data_oid(s->inode.data_vdi_id[idx], idx);
aio_req->flags |= SD_FLAG_CMD_COW;
@@ -1406,8 +1458,8 @@ static int sd_open(BlockDriverState *bs, QDict *options, int flags,
filename = qemu_opt_get(opts, "filename");
QLIST_INIT(&s->inflight_aio_head);
QLIST_INIT(&s->pending_aio_head);
QLIST_INIT(&s->failed_aio_head);
QLIST_INIT(&s->inflight_aiocb_head);
s->fd = -1;
memset(vdi, 0, sizeof(vdi));
@@ -1472,7 +1524,6 @@ static int sd_open(BlockDriverState *bs, QDict *options, int flags,
bs->total_sectors = s->inode.vdi_size / BDRV_SECTOR_SIZE;
pstrcpy(s->name, sizeof(s->name), vdi);
qemu_co_mutex_init(&s->lock);
qemu_co_queue_init(&s->overwrapping_queue);
qemu_opts_del(opts);
g_free(buf);
return 0;
@@ -1665,7 +1716,7 @@ static int parse_block_size_shift(BDRVSheepdogState *s, QemuOpts *opt)
if ((object_size - 1) & object_size) { /* not a power of 2? */
return -EINVAL;
}
obj_order = ctz32(object_size);
obj_order = ffs(object_size) - 1;
if (obj_order < 20 || obj_order > 31) {
return -EINVAL;
}
@@ -2144,6 +2195,12 @@ static int coroutine_fn sd_co_rw_vector(void *p)
old_oid, done);
QLIST_INSERT_HEAD(&s->inflight_aio_head, aio_req, aio_siblings);
if (create) {
if (check_simultaneous_create(s, aio_req)) {
goto done;
}
}
add_aio_request(s, aio_req, acb->qiov->iov, acb->qiov->niov,
acb->aiocb_type);
done:
@@ -2158,20 +2215,6 @@ out:
return 1;
}
static bool check_overwrapping_aiocb(BDRVSheepdogState *s, SheepdogAIOCB *aiocb)
{
SheepdogAIOCB *cb;
QLIST_FOREACH(cb, &s->inflight_aiocb_head, aiocb_siblings) {
if (AIOCBOverwrapping(aiocb, cb)) {
return true;
}
}
QLIST_INSERT_HEAD(&s->inflight_aiocb_head, aiocb, aiocb_siblings);
return false;
}
static coroutine_fn int sd_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
@@ -2191,25 +2234,14 @@ static coroutine_fn int sd_co_writev(BlockDriverState *bs, int64_t sector_num,
acb->aio_done_func = sd_write_done;
acb->aiocb_type = AIOCB_WRITE_UDATA;
retry:
if (check_overwrapping_aiocb(s, acb)) {
qemu_co_queue_wait(&s->overwrapping_queue);
goto retry;
}
ret = sd_co_rw_vector(acb);
if (ret <= 0) {
QLIST_REMOVE(acb, aiocb_siblings);
qemu_co_queue_restart_all(&s->overwrapping_queue);
qemu_aio_unref(acb);
return ret;
}
qemu_coroutine_yield();
QLIST_REMOVE(acb, aiocb_siblings);
qemu_co_queue_restart_all(&s->overwrapping_queue);
return acb->ret;
}
@@ -2218,30 +2250,19 @@ static coroutine_fn int sd_co_readv(BlockDriverState *bs, int64_t sector_num,
{
SheepdogAIOCB *acb;
int ret;
BDRVSheepdogState *s = bs->opaque;
acb = sd_aio_setup(bs, qiov, sector_num, nb_sectors);
acb->aiocb_type = AIOCB_READ_UDATA;
acb->aio_done_func = sd_finish_aiocb;
retry:
if (check_overwrapping_aiocb(s, acb)) {
qemu_co_queue_wait(&s->overwrapping_queue);
goto retry;
}
ret = sd_co_rw_vector(acb);
if (ret <= 0) {
QLIST_REMOVE(acb, aiocb_siblings);
qemu_co_queue_restart_all(&s->overwrapping_queue);
qemu_aio_unref(acb);
return ret;
}
qemu_coroutine_yield();
QLIST_REMOVE(acb, aiocb_siblings);
qemu_co_queue_restart_all(&s->overwrapping_queue);
return acb->ret;
}
@@ -2320,7 +2341,6 @@ static int sd_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
if (ret < 0) {
error_report("failed to create inode for snapshot: %s",
error_get_pretty(local_err));
error_free(local_err);
goto cleanup;
}
@@ -2589,25 +2609,14 @@ static coroutine_fn int sd_co_discard(BlockDriverState *bs, int64_t sector_num,
acb->aiocb_type = AIOCB_DISCARD_OBJ;
acb->aio_done_func = sd_finish_aiocb;
retry:
if (check_overwrapping_aiocb(s, acb)) {
qemu_co_queue_wait(&s->overwrapping_queue);
goto retry;
}
ret = sd_co_rw_vector(acb);
if (ret <= 0) {
QLIST_REMOVE(acb, aiocb_siblings);
qemu_co_queue_restart_all(&s->overwrapping_queue);
qemu_aio_unref(acb);
return ret;
}
qemu_coroutine_yield();
QLIST_REMOVE(acb, aiocb_siblings);
qemu_co_queue_restart_all(&s->overwrapping_queue);
return acb->ret;
}
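
The sheepdog hunks above trade the coroutine "overwrapping" queue for a per-object pending list (or vice versa, depending on which side of the comparison you read): a create request that would collide with an in-flight request to the same oid is parked on pending_aio_head and resent from send_pending_req() once the first one completes. A hedged sketch of that defer-duplicate-key pattern in isolation; the list layout and request struct below are assumptions, not the driver's real ones:

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  typedef struct Req {
      uint64_t oid;
      struct Req *next;
  } Req;

  static Req *inflight, *pending;

  /* Park the request if another request to the same oid is in flight. */
  static bool defer_if_colliding(Req *req)
  {
      for (Req *r = inflight; r; r = r->next) {
          if (r->oid == req->oid) {
              req->next = pending;
              pending = req;
              return true;              /* caller must not send it yet */
          }
      }
      req->next = inflight;
      inflight = req;
      return false;                     /* safe to send immediately */
  }

  /* On completion of an oid, move one parked request back to in-flight. */
  static Req *resend_pending(uint64_t oid)
  {
      for (Req **p = &pending; *p; p = &(*p)->next) {
          if ((*p)->oid == oid) {
              Req *r = *p;
              *p = r->next;
              r->next = inflight;
              inflight = r;
              return r;                 /* caller re-submits this one */
          }
      }
      return NULL;
  }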

View File

@@ -24,7 +24,6 @@
#include "block/snapshot.h"
#include "block/block_int.h"
#include "qapi/qmp/qerror.h"
QemuOptsList internal_snapshot_opts = {
.name = "snapshot",
@@ -230,7 +229,7 @@ int bdrv_snapshot_delete(BlockDriverState *bs,
{
BlockDriver *drv = bs->drv;
if (!drv) {
error_setg(errp, QERR_DEVICE_HAS_NO_MEDIUM, bdrv_get_device_name(bs));
error_set(errp, QERR_DEVICE_HAS_NO_MEDIUM, bdrv_get_device_name(bs));
return -ENOMEDIUM;
}
if (!snapshot_id && !name) {
@@ -239,7 +238,7 @@ int bdrv_snapshot_delete(BlockDriverState *bs,
}
/* drain all pending i/o before deleting snapshot */
bdrv_drain(bs);
bdrv_drain_all();
if (drv->bdrv_snapshot_delete) {
return drv->bdrv_snapshot_delete(bs, snapshot_id, name, errp);
@@ -247,9 +246,9 @@ int bdrv_snapshot_delete(BlockDriverState *bs,
if (bs->file) {
return bdrv_snapshot_delete(bs->file, snapshot_id, name, errp);
}
error_setg(errp, "Block format '%s' used by device '%s' "
"does not support internal snapshot deletion",
drv->format_name, bdrv_get_device_name(bs));
error_set(errp, QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
drv->format_name, bdrv_get_device_name(bs),
"internal snapshot deletion");
return -ENOTSUP;
}
@@ -316,7 +315,7 @@ int bdrv_snapshot_load_tmp(BlockDriverState *bs,
BlockDriver *drv = bs->drv;
if (!drv) {
error_setg(errp, QERR_DEVICE_HAS_NO_MEDIUM, bdrv_get_device_name(bs));
error_set(errp, QERR_DEVICE_HAS_NO_MEDIUM, bdrv_get_device_name(bs));
return -ENOMEDIUM;
}
if (!snapshot_id && !name) {
@@ -330,9 +329,9 @@ int bdrv_snapshot_load_tmp(BlockDriverState *bs,
if (drv->bdrv_snapshot_load_tmp) {
return drv->bdrv_snapshot_load_tmp(bs, snapshot_id, name, errp);
}
error_setg(errp, "Block format '%s' used by device '%s' "
"does not support temporarily loading internal snapshots",
drv->format_name, bdrv_get_device_name(bs));
error_set(errp, QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
drv->format_name, bdrv_get_device_name(bs),
"temporarily load internal snapshot");
return -ENOTSUP;
}
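
Most hunks in this file (and in the stream, vdi, vhdx, vmdk, vpc and vvfat files below) only switch between error_set() with a QERR_ macro and error_setg() with a printf-style message. A hedged sketch of the Error calling convention both variants share; do_thing(), its failure condition and the header path are assumptions for illustration:

  #include <errno.h>
  #include <stdbool.h>
  #include "qapi/error.h"   /* error_setg(), error_propagate() */

  static int do_thing(bool ok, Error **errp)
  {
      if (!ok) {
          error_setg(errp, "Device '%s' is read only", "ide0-hd0");
          return -EACCES;
      }
      return 0;
  }

  static int caller(Error **errp)
  {
      Error *local_err = NULL;
      int ret = do_thing(false, &local_err);

      if (ret < 0) {
          error_propagate(errp, local_err);   /* hand the error upward */
      }
      return ret;
  }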

View File

@@ -30,11 +30,9 @@
#include <libssh2_sftp.h>
#include "block/block_int.h"
#include "qemu/error-report.h"
#include "qemu/sockets.h"
#include "qemu/uri.h"
#include "qapi/qmp/qint.h"
#include "qapi/qmp/qstring.h"
/* DEBUG_SSH=1 enables the DPRINTF (debugging printf) statements in
* this block driver code.
@@ -563,7 +561,7 @@ static int connect_to_ssh(BDRVSSHState *s, QDict *options,
/* Open the socket and connect. */
s->sock = inet_connect(s->hostport, errp);
if (s->sock < 0) {
ret = -EIO;
ret = -errno;
goto err;
}
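
The ssh hunk above trades ret = -errno for a fixed ret = -EIO. Whichever direction one reads the diff, the usual caution with errno is that it must be captured immediately after the failing call, before anything else can clobber it; a minimal sketch:

  #include <errno.h>
  #include <unistd.h>

  static int read_one(int fd, char *buf)
  {
      ssize_t n = read(fd, buf, 1);
      if (n < 0) {
          int saved_errno = errno;   /* capture before any other call */
          /* ... cleanup that may itself touch errno ... */
          return -saved_errno;       /* negative-errno convention */
      }
      return (int)n;
  }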

View File

@@ -14,7 +14,6 @@
#include "trace.h"
#include "block/block_int.h"
#include "block/blockjob.h"
#include "qapi/qmp/qerror.h"
#include "qemu/ratelimit.h"
enum {
@@ -228,7 +227,7 @@ static void stream_set_speed(BlockJob *job, int64_t speed, Error **errp)
StreamBlockJob *s = container_of(job, StreamBlockJob, common);
if (speed < 0) {
error_setg(errp, QERR_INVALID_PARAMETER, "speed");
error_set(errp, QERR_INVALID_PARAMETER, "speed");
return;
}
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
@@ -251,7 +250,7 @@ void stream_start(BlockDriverState *bs, BlockDriverState *base,
if ((on_error == BLOCKDEV_ON_ERROR_STOP ||
on_error == BLOCKDEV_ON_ERROR_ENOSPC) &&
!bdrv_iostatus_is_enabled(bs)) {
error_setg(errp, QERR_INVALID_PARAMETER, "on-error");
error_set(errp, QERR_INVALID_PARAMETER, "on-error");
return;
}

View File

@@ -1,501 +0,0 @@
/*
* QEMU block throttling group infrastructure
*
* Copyright (C) Nodalink, EURL. 2014
* Copyright (C) Igalia, S.L. 2015
*
* Authors:
* Benoît Canet <benoit.canet@nodalink.com>
* Alberto Garcia <berto@igalia.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2 or
* (at your option) version 3 of the License.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*/
#include "block/throttle-groups.h"
#include "qemu/queue.h"
#include "qemu/thread.h"
#include "sysemu/qtest.h"
/* The ThrottleGroup structure (with its ThrottleState) is shared
* among different BlockDriverStates and is independent of
* AioContext, so in order to use it from different threads it needs
* its own locking.
*
* This locking is however handled internally in this file, so it's
* mostly transparent to outside users (but see the documentation in
* throttle_groups_lock()).
*
* The whole ThrottleGroup structure is private and invisible to
* outside users, which only use it through its ThrottleState.
*
* In addition to the ThrottleGroup structure, BlockDriverState has
* fields that need to be accessed by other members of the group and
* therefore also need to be protected by this lock. Once a BDS is
* registered in a group those fields can be accessed by other threads
* any time.
*
* Again, all this is handled internally and is mostly transparent to
* the outside. The 'throttle_timers' field however has an additional
* constraint because it may be temporarily invalid (see for example
* bdrv_set_aio_context()). Therefore in this file a thread will
* access some other BDS's timers only after verifying that that BDS
* has throttled requests in the queue.
*/
typedef struct ThrottleGroup {
char *name; /* This is constant during the lifetime of the group */
QemuMutex lock; /* This lock protects the following four fields */
ThrottleState ts;
QLIST_HEAD(, BlockDriverState) head;
BlockDriverState *tokens[2];
bool any_timer_armed[2];
/* These two are protected by the global throttle_groups_lock */
unsigned refcount;
QTAILQ_ENTRY(ThrottleGroup) list;
} ThrottleGroup;
static QemuMutex throttle_groups_lock;
static QTAILQ_HEAD(, ThrottleGroup) throttle_groups =
QTAILQ_HEAD_INITIALIZER(throttle_groups);
/* Increments the reference count of a ThrottleGroup given its name.
*
* If no ThrottleGroup is found with the given name a new one is
* created.
*
* @name: the name of the ThrottleGroup
* @ret: the ThrottleGroup
*/
static ThrottleGroup *throttle_group_incref(const char *name)
{
ThrottleGroup *tg = NULL;
ThrottleGroup *iter;
qemu_mutex_lock(&throttle_groups_lock);
/* Look for an existing group with that name */
QTAILQ_FOREACH(iter, &throttle_groups, list) {
if (!strcmp(name, iter->name)) {
tg = iter;
break;
}
}
/* Create a new one if not found */
if (!tg) {
tg = g_new0(ThrottleGroup, 1);
tg->name = g_strdup(name);
qemu_mutex_init(&tg->lock);
throttle_init(&tg->ts);
QLIST_INIT(&tg->head);
QTAILQ_INSERT_TAIL(&throttle_groups, tg, list);
}
tg->refcount++;
qemu_mutex_unlock(&throttle_groups_lock);
return tg;
}
/* Decrease the reference count of a ThrottleGroup.
*
* When the reference count reaches zero the ThrottleGroup is
* destroyed.
*
* @tg: The ThrottleGroup to unref
*/
static void throttle_group_unref(ThrottleGroup *tg)
{
qemu_mutex_lock(&throttle_groups_lock);
if (--tg->refcount == 0) {
QTAILQ_REMOVE(&throttle_groups, tg, list);
qemu_mutex_destroy(&tg->lock);
g_free(tg->name);
g_free(tg);
}
qemu_mutex_unlock(&throttle_groups_lock);
}
/* Get the name from a BlockDriverState's ThrottleGroup. The name (and
* the pointer) is guaranteed to remain constant during the lifetime
* of the group.
*
* @bs: a BlockDriverState that is member of a throttling group
* @ret: the name of the group.
*/
const char *throttle_group_get_name(BlockDriverState *bs)
{
ThrottleGroup *tg = container_of(bs->throttle_state, ThrottleGroup, ts);
return tg->name;
}
/* Return the next BlockDriverState in the round-robin sequence,
* simulating a circular list.
*
* This assumes that tg->lock is held.
*
* @bs: the current BlockDriverState
* @ret: the next BlockDriverState in the sequence
*/
static BlockDriverState *throttle_group_next_bs(BlockDriverState *bs)
{
ThrottleState *ts = bs->throttle_state;
ThrottleGroup *tg = container_of(ts, ThrottleGroup, ts);
BlockDriverState *next = QLIST_NEXT(bs, round_robin);
if (!next) {
return QLIST_FIRST(&tg->head);
}
return next;
}
/* Return the next BlockDriverState in the round-robin sequence with
* pending I/O requests.
*
* This assumes that tg->lock is held.
*
* @bs: the current BlockDriverState
* @is_write: the type of operation (read/write)
* @ret: the next BlockDriverState with pending requests, or bs
* if there is none.
*/
static BlockDriverState *next_throttle_token(BlockDriverState *bs,
bool is_write)
{
ThrottleGroup *tg = container_of(bs->throttle_state, ThrottleGroup, ts);
BlockDriverState *token, *start;
start = token = tg->tokens[is_write];
/* get next bs round in round robin style */
token = throttle_group_next_bs(token);
while (token != start && !token->pending_reqs[is_write]) {
token = throttle_group_next_bs(token);
}
/* If no I/O is queued for scheduling on the next round-robin token
* then make the token the current bs, because chances are the
* current bs will get the current request queued.
*/
if (token == start && !token->pending_reqs[is_write]) {
token = bs;
}
return token;
}
/* Check if the next I/O request for a BlockDriverState needs to be
* throttled or not. If there's no timer set in this group, set one
* and update the token accordingly.
*
* This assumes that tg->lock is held.
*
* @bs: the current BlockDriverState
* @is_write: the type of operation (read/write)
* @ret: whether the I/O request needs to be throttled or not
*/
static bool throttle_group_schedule_timer(BlockDriverState *bs,
bool is_write)
{
ThrottleState *ts = bs->throttle_state;
ThrottleTimers *tt = &bs->throttle_timers;
ThrottleGroup *tg = container_of(ts, ThrottleGroup, ts);
bool must_wait;
/* Check if any of the timers in this group is already armed */
if (tg->any_timer_armed[is_write]) {
return true;
}
must_wait = throttle_schedule_timer(ts, tt, is_write);
/* If a timer just got armed, set bs as the current token */
if (must_wait) {
tg->tokens[is_write] = bs;
tg->any_timer_armed[is_write] = true;
}
return must_wait;
}
/* Look for the next pending I/O request and schedule it.
*
* This assumes that tg->lock is held.
*
* @bs: the current BlockDriverState
* @is_write: the type of operation (read/write)
*/
static void schedule_next_request(BlockDriverState *bs, bool is_write)
{
ThrottleGroup *tg = container_of(bs->throttle_state, ThrottleGroup, ts);
bool must_wait;
BlockDriverState *token;
/* Check if there's any pending request to schedule next */
token = next_throttle_token(bs, is_write);
if (!token->pending_reqs[is_write]) {
return;
}
/* Set a timer for the request if it needs to be throttled */
must_wait = throttle_group_schedule_timer(token, is_write);
/* If it doesn't have to wait, queue it for immediate execution */
if (!must_wait) {
/* Give preference to requests from the current bs */
if (qemu_in_coroutine() &&
qemu_co_queue_next(&bs->throttled_reqs[is_write])) {
token = bs;
} else {
ThrottleTimers *tt = &token->throttle_timers;
int64_t now = qemu_clock_get_ns(tt->clock_type);
timer_mod(tt->timers[is_write], now + 1);
tg->any_timer_armed[is_write] = true;
}
tg->tokens[is_write] = token;
}
}
/* Check if an I/O request needs to be throttled, wait and set a timer
* if necessary, and schedule the next request using a round robin
* algorithm.
*
* @bs: the current BlockDriverState
* @bytes: the number of bytes for this I/O
* @is_write: the type of operation (read/write)
*/
void coroutine_fn throttle_group_co_io_limits_intercept(BlockDriverState *bs,
unsigned int bytes,
bool is_write)
{
bool must_wait;
BlockDriverState *token;
ThrottleGroup *tg = container_of(bs->throttle_state, ThrottleGroup, ts);
qemu_mutex_lock(&tg->lock);
/* First we check if this I/O has to be throttled. */
token = next_throttle_token(bs, is_write);
must_wait = throttle_group_schedule_timer(token, is_write);
/* Wait if there's a timer set or queued requests of this type */
if (must_wait || bs->pending_reqs[is_write]) {
bs->pending_reqs[is_write]++;
qemu_mutex_unlock(&tg->lock);
qemu_co_queue_wait(&bs->throttled_reqs[is_write]);
qemu_mutex_lock(&tg->lock);
bs->pending_reqs[is_write]--;
}
/* The I/O will be executed, so do the accounting */
throttle_account(bs->throttle_state, is_write, bytes);
/* Schedule the next request */
schedule_next_request(bs, is_write);
qemu_mutex_unlock(&tg->lock);
}
/* Update the throttle configuration for a particular group. Similar
* to throttle_config(), but guarantees atomicity within the
* throttling group.
*
* @bs: a BlockDriverState that is member of the group
* @cfg: the configuration to set
*/
void throttle_group_config(BlockDriverState *bs, ThrottleConfig *cfg)
{
ThrottleTimers *tt = &bs->throttle_timers;
ThrottleState *ts = bs->throttle_state;
ThrottleGroup *tg = container_of(ts, ThrottleGroup, ts);
qemu_mutex_lock(&tg->lock);
/* throttle_config() cancels the timers */
if (timer_pending(tt->timers[0])) {
tg->any_timer_armed[0] = false;
}
if (timer_pending(tt->timers[1])) {
tg->any_timer_armed[1] = false;
}
throttle_config(ts, tt, cfg);
qemu_mutex_unlock(&tg->lock);
}
/* Get the throttle configuration from a particular group. Similar to
* throttle_get_config(), but guarantees atomicity within the
* throttling group.
*
* @bs: a BlockDriverState that is member of the group
* @cfg: the configuration will be written here
*/
void throttle_group_get_config(BlockDriverState *bs, ThrottleConfig *cfg)
{
ThrottleState *ts = bs->throttle_state;
ThrottleGroup *tg = container_of(ts, ThrottleGroup, ts);
qemu_mutex_lock(&tg->lock);
throttle_get_config(ts, cfg);
qemu_mutex_unlock(&tg->lock);
}
/* ThrottleTimers callback. This wakes up a request that was waiting
* because it had been throttled.
*
* @bs: the BlockDriverState whose request had been throttled
* @is_write: the type of operation (read/write)
*/
static void timer_cb(BlockDriverState *bs, bool is_write)
{
ThrottleState *ts = bs->throttle_state;
ThrottleGroup *tg = container_of(ts, ThrottleGroup, ts);
bool empty_queue;
/* The timer has just been fired, so we can update the flag */
qemu_mutex_lock(&tg->lock);
tg->any_timer_armed[is_write] = false;
qemu_mutex_unlock(&tg->lock);
/* Run the request that was waiting for this timer */
empty_queue = !qemu_co_enter_next(&bs->throttled_reqs[is_write]);
/* If the request queue was empty then we have to take care of
* scheduling the next one */
if (empty_queue) {
qemu_mutex_lock(&tg->lock);
schedule_next_request(bs, is_write);
qemu_mutex_unlock(&tg->lock);
}
}
static void read_timer_cb(void *opaque)
{
timer_cb(opaque, false);
}
static void write_timer_cb(void *opaque)
{
timer_cb(opaque, true);
}
/* Register a BlockDriverState in the throttling group, also
* initializing its timers and updating its throttle_state pointer to
* point to it. If a throttling group with that name does not exist
* yet, it will be created.
*
* @bs: the BlockDriverState to insert
* @groupname: the name of the group
*/
void throttle_group_register_bs(BlockDriverState *bs, const char *groupname)
{
int i;
ThrottleGroup *tg = throttle_group_incref(groupname);
int clock_type = QEMU_CLOCK_REALTIME;
if (qtest_enabled()) {
/* For testing block IO throttling only */
clock_type = QEMU_CLOCK_VIRTUAL;
}
bs->throttle_state = &tg->ts;
qemu_mutex_lock(&tg->lock);
/* If the ThrottleGroup is new, set this BlockDriverState as the token */
for (i = 0; i < 2; i++) {
if (!tg->tokens[i]) {
tg->tokens[i] = bs;
}
}
QLIST_INSERT_HEAD(&tg->head, bs, round_robin);
throttle_timers_init(&bs->throttle_timers,
bdrv_get_aio_context(bs),
clock_type,
read_timer_cb,
write_timer_cb,
bs);
qemu_mutex_unlock(&tg->lock);
}
/* Unregister a BlockDriverState from its group, removing it from the
* list, destroying the timers and setting the throttle_state pointer
* to NULL.
*
* The group will be destroyed if it's empty after this operation.
*
* @bs: the BlockDriverState to remove
*/
void throttle_group_unregister_bs(BlockDriverState *bs)
{
ThrottleGroup *tg = container_of(bs->throttle_state, ThrottleGroup, ts);
int i;
qemu_mutex_lock(&tg->lock);
for (i = 0; i < 2; i++) {
if (tg->tokens[i] == bs) {
BlockDriverState *token = throttle_group_next_bs(bs);
/* Take care of the case where this is the last bs in the group */
if (token == bs) {
token = NULL;
}
tg->tokens[i] = token;
}
}
/* remove the current bs from the list */
QLIST_REMOVE(bs, round_robin);
throttle_timers_destroy(&bs->throttle_timers);
qemu_mutex_unlock(&tg->lock);
throttle_group_unref(tg);
bs->throttle_state = NULL;
}
/* Acquire the lock of this throttling group.
*
* You won't normally need to use this. None of the functions from the
* ThrottleGroup API require you to acquire the lock since all of them
* deal with it internally.
*
* This should only be used in exceptional cases when you want to
* access the protected fields of a BlockDriverState directly
* (e.g. bdrv_swap()).
*
* @bs: a BlockDriverState that is member of the group
*/
void throttle_group_lock(BlockDriverState *bs)
{
ThrottleGroup *tg = container_of(bs->throttle_state, ThrottleGroup, ts);
qemu_mutex_lock(&tg->lock);
}
/* Release the lock of this throttling group.
*
* See the comments in throttle_group_lock().
*/
void throttle_group_unlock(BlockDriverState *bs)
{
ThrottleGroup *tg = container_of(bs->throttle_state, ThrottleGroup, ts);
qemu_mutex_unlock(&tg->lock);
}
static void throttle_groups_init(void)
{
qemu_mutex_init(&throttle_groups_lock);
}
block_init(throttle_groups_init);
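
block/throttle-groups.c above appears on only one side of this comparison (the hunk header is -1,501 +0,0). Its public entry points are the register/unregister calls, the atomic config accessors and the coroutine intercept. A hypothetical caller-side sketch, using only the functions defined in the listing; the group name, the supplied bs/cfg pointers and the surrounding driver code are invented:

  #include "block/throttle-groups.h"   /* as included by the listing above */

  static void join_group(BlockDriverState *bs, ThrottleConfig *cfg)
  {
      /* Creates the group on first use and arms the per-BDS timers */
      throttle_group_register_bs(bs, "shared-limits");
      /* Applied atomically for every member of the group */
      throttle_group_config(bs, cfg);
  }

  static void coroutine_fn throttled_write(BlockDriverState *bs,
                                           unsigned int bytes)
  {
      /* Yields until the group's limits allow this request through */
      throttle_group_co_io_limits_intercept(bs, bytes, true /* is_write */);
      /* ... issue the actual write here ... */
  }

  static void leave_group(BlockDriverState *bs)
  {
      throttle_group_unregister_bs(bs);   /* group freed when empty */
  }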

View File

@@ -502,9 +502,9 @@ static int vdi_open(BlockDriverState *bs, QDict *options, int flags,
}
/* Disable migration when vdi images are used */
error_setg(&s->migration_blocker, "The vdi format used by node '%s' "
"does not support live migration",
bdrv_get_device_or_node_name(bs));
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"vdi", bdrv_get_device_name(bs), "live migration");
migrate_add_blocker(s->migration_blocker);
qemu_co_mutex_init(&s->write_lock);

View File

@@ -19,7 +19,6 @@
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/error-report.h"
#include "qemu/module.h"
#include "block/vhdx.h"

View File

@@ -1002,9 +1002,9 @@ static int vhdx_open(BlockDriverState *bs, QDict *options, int flags,
/* TODO: differencing files */
/* Disable migration when VHDX images are used */
error_setg(&s->migration_blocker, "The vhdx format used by node '%s' "
"does not support live migration",
bdrv_get_device_or_node_name(bs));
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"vhdx", bdrv_get_device_name(bs), "live migration");
migrate_add_blocker(s->migration_blocker);
return 0;
@@ -1269,7 +1269,7 @@ static coroutine_fn int vhdx_co_writev(BlockDriverState *bs, int64_t sector_num,
iov1.iov_base = qemu_blockalign(bs, iov1.iov_len);
memset(iov1.iov_base, 0, iov1.iov_len);
qemu_iovec_concat_iov(&hd_qiov, &iov1, 1, 0,
iov1.iov_len);
sinfo.block_offset);
sectors_to_write += iov1.iov_len >> BDRV_SECTOR_BITS;
}
@@ -1285,7 +1285,7 @@ static coroutine_fn int vhdx_co_writev(BlockDriverState *bs, int64_t sector_num,
iov2.iov_base = qemu_blockalign(bs, iov2.iov_len);
memset(iov2.iov_base, 0, iov2.iov_len);
qemu_iovec_concat_iov(&hd_qiov, &iov2, 1, 0,
iov2.iov_len);
sinfo.block_offset);
sectors_to_write += iov2.iov_len >> BDRV_SECTOR_BITS;
}
}

View File

@@ -25,8 +25,6 @@
#include "qemu-common.h"
#include "block/block_int.h"
#include "qapi/qmp/qerror.h"
#include "qemu/error-report.h"
#include "qemu/module.h"
#include "migration/migration.h"
#include <zlib.h>
@@ -323,13 +321,37 @@ static int vmdk_is_cid_valid(BlockDriverState *bs)
return 1;
}
/* We have nothing to do for VMDK reopen, stubs just return success */
/* Queue extents, if any, for reopen() */
static int vmdk_reopen_prepare(BDRVReopenState *state,
BlockReopenQueue *queue, Error **errp)
{
BDRVVmdkState *s;
int ret = -1;
int i;
VmdkExtent *e;
assert(state != NULL);
assert(state->bs != NULL);
return 0;
if (queue == NULL) {
error_setg(errp, "No reopen queue for VMDK extents");
goto exit;
}
s = state->bs->opaque;
assert(s != NULL);
for (i = 0; i < s->num_extents; i++) {
e = &s->extents[i];
if (e->file != state->bs->file) {
bdrv_reopen_queue(queue, e->file, state->flags);
}
}
ret = 0;
exit:
return ret;
}
static int vmdk_parent_open(BlockDriverState *bs)
@@ -502,7 +524,7 @@ static int vmdk_open_vmfs_sparse(BlockDriverState *bs,
}
ret = vmdk_add_extent(bs, file, false,
le32_to_cpu(header.disk_sectors),
(int64_t)le32_to_cpu(header.l1dir_offset) << 9,
le32_to_cpu(header.l1dir_offset) << 9,
0,
le32_to_cpu(header.l1dir_size),
4096,
@@ -521,7 +543,7 @@ static int vmdk_open_vmfs_sparse(BlockDriverState *bs,
}
static int vmdk_open_desc_file(BlockDriverState *bs, int flags, char *buf,
QDict *options, Error **errp);
Error **errp);
static char *vmdk_read_desc(BlockDriverState *file, uint64_t desc_offset,
Error **errp)
@@ -560,7 +582,7 @@ static char *vmdk_read_desc(BlockDriverState *file, uint64_t desc_offset,
static int vmdk_open_vmdk4(BlockDriverState *bs,
BlockDriverState *file,
int flags, QDict *options, Error **errp)
int flags, Error **errp)
{
int ret;
uint32_t magic;
@@ -584,7 +606,7 @@ static int vmdk_open_vmdk4(BlockDriverState *bs,
if (!buf) {
return -EINVAL;
}
ret = vmdk_open_desc_file(bs, flags, buf, options, errp);
ret = vmdk_open_desc_file(bs, flags, buf, errp);
g_free(buf);
return ret;
}
@@ -647,8 +669,8 @@ static int vmdk_open_vmdk4(BlockDriverState *bs,
char buf[64];
snprintf(buf, sizeof(buf), "VMDK version %" PRId32,
le32_to_cpu(header.version));
error_setg(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bdrv_get_device_or_node_name(bs), "vmdk", buf);
error_set(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bdrv_get_device_name(bs), "vmdk", buf);
return -ENOTSUP;
} else if (le32_to_cpu(header.version) == 3 && (flags & BDRV_O_RDWR)) {
/* VMware KB 2064959 explains that version 3 added support for
@@ -741,7 +763,7 @@ static int vmdk_parse_description(const char *desc, const char *opt_name,
/* Open an extent file and append to bs array */
static int vmdk_open_sparse(BlockDriverState *bs,
BlockDriverState *file, int flags,
char *buf, QDict *options, Error **errp)
char *buf, Error **errp)
{
uint32_t magic;
@@ -751,7 +773,7 @@ static int vmdk_open_sparse(BlockDriverState *bs,
return vmdk_open_vmfs_sparse(bs, file, flags, errp);
break;
case VMDK4_MAGIC:
return vmdk_open_vmdk4(bs, file, flags, options, errp);
return vmdk_open_vmdk4(bs, file, flags, errp);
break;
default:
error_setg(errp, "Image not in VMDK format");
@@ -761,8 +783,7 @@ static int vmdk_open_sparse(BlockDriverState *bs,
}
static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
const char *desc_file_path, QDict *options,
Error **errp)
const char *desc_file_path, Error **errp)
{
int ret;
int matches;
@@ -776,7 +797,6 @@ static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
BlockDriverState *extent_file;
BDRVVmdkState *s = bs->opaque;
VmdkExtent *extent;
char extent_opt_prefix[32];
while (*p) {
/* parse extent line in one of below formats:
@@ -826,12 +846,8 @@ static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
extent_path = g_malloc0(PATH_MAX);
path_combine(extent_path, PATH_MAX, desc_file_path, fname);
extent_file = NULL;
ret = snprintf(extent_opt_prefix, 32, "extents.%d", s->num_extents);
assert(ret < 32);
ret = bdrv_open_image(&extent_file, extent_path, options,
extent_opt_prefix, bs, &child_file, false, errp);
ret = bdrv_open(&extent_file, extent_path, NULL, NULL,
bs->open_flags | BDRV_O_PROTOCOL, NULL, errp);
g_free(extent_path);
if (ret) {
return ret;
@@ -854,8 +870,7 @@ static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
if (!buf) {
ret = -EINVAL;
} else {
ret = vmdk_open_sparse(bs, extent_file, bs->open_flags, buf,
options, errp);
ret = vmdk_open_sparse(bs, extent_file, bs->open_flags, buf, errp);
}
g_free(buf);
if (ret) {
@@ -883,7 +898,7 @@ next_line:
}
static int vmdk_open_desc_file(BlockDriverState *bs, int flags, char *buf,
QDict *options, Error **errp)
Error **errp)
{
int ret;
char ct[128];
@@ -905,7 +920,7 @@ static int vmdk_open_desc_file(BlockDriverState *bs, int flags, char *buf,
}
s->create_type = g_strdup(ct);
s->desc_offset = 0;
ret = vmdk_parse_extents(buf, bs, bs->file->exact_filename, options, errp);
ret = vmdk_parse_extents(buf, bs, bs->file->exact_filename, errp);
exit:
return ret;
}
@@ -927,11 +942,11 @@ static int vmdk_open(BlockDriverState *bs, QDict *options, int flags,
switch (magic) {
case VMDK3_MAGIC:
case VMDK4_MAGIC:
ret = vmdk_open_sparse(bs, bs->file, flags, buf, options, errp);
ret = vmdk_open_sparse(bs, bs->file, flags, buf, errp);
s->desc_offset = 0x200;
break;
default:
ret = vmdk_open_desc_file(bs, flags, buf, options, errp);
ret = vmdk_open_desc_file(bs, flags, buf, errp);
break;
}
if (ret) {
@@ -948,9 +963,9 @@ static int vmdk_open(BlockDriverState *bs, QDict *options, int flags,
qemu_co_mutex_init(&s->lock);
/* Disable migration when VMDK images are used */
error_setg(&s->migration_blocker, "The vmdk format used by node '%s' "
"does not support live migration",
bdrv_get_device_or_node_name(bs));
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"vmdk", bdrv_get_device_name(bs), "live migration");
migrate_add_blocker(s->migration_blocker);
g_free(buf);
return 0;
@@ -1690,12 +1705,12 @@ static int vmdk_create_extent(const char *filename, int64_t filesize,
/* write all the data */
ret = bdrv_pwrite(bs, 0, &magic, sizeof(magic));
if (ret < 0) {
error_setg(errp, QERR_IO_ERROR);
error_set(errp, QERR_IO_ERROR);
goto exit;
}
ret = bdrv_pwrite(bs, sizeof(magic), &header, sizeof(header));
if (ret < 0) {
error_setg(errp, QERR_IO_ERROR);
error_set(errp, QERR_IO_ERROR);
goto exit;
}
@@ -1715,7 +1730,7 @@ static int vmdk_create_extent(const char *filename, int64_t filesize,
ret = bdrv_pwrite(bs, le64_to_cpu(header.rgd_offset) * BDRV_SECTOR_SIZE,
gd_buf, gd_buf_size);
if (ret < 0) {
error_setg(errp, QERR_IO_ERROR);
error_set(errp, QERR_IO_ERROR);
goto exit;
}
@@ -1727,7 +1742,7 @@ static int vmdk_create_extent(const char *filename, int64_t filesize,
ret = bdrv_pwrite(bs, le64_to_cpu(header.gd_offset) * BDRV_SECTOR_SIZE,
gd_buf, gd_buf_size);
if (ret < 0) {
error_setg(errp, QERR_IO_ERROR);
error_set(errp, QERR_IO_ERROR);
goto exit;
}
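
One vmdk hunk above differs only in a cast: (int64_t)le32_to_cpu(header.l1dir_offset) << 9 versus the uncast form. Shifting a 32-bit value left by 9 wraps once the offset exceeds 2^23, so the widening has to happen before the shift. A standalone illustration with an invented offset:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint32_t l1dir_offset = UINT32_C(0x01000000);   /* 2^24 sectors */

      /* Shift performed in 32 bits, then widened: high bits are lost. */
      int64_t wrong = l1dir_offset << 9;
      /* Widen first, then shift: the full result is preserved. */
      int64_t right = (int64_t)l1dir_offset << 9;

      /* prints "0 vs 8589934592" */
      printf("%lld vs %lld\n", (long long)wrong, (long long)right);
      return 0;
  }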

View File

@@ -328,9 +328,9 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
qemu_co_mutex_init(&s->lock);
/* Disable migration when VHD images are used */
error_setg(&s->migration_blocker, "The vpc format used by node '%s' "
"does not support live migration",
bdrv_get_device_or_node_name(bs));
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"vpc", bdrv_get_device_name(bs), "live migration");
migrate_add_blocker(s->migration_blocker);
return 0;

View File

@@ -30,7 +30,6 @@
#include "migration/migration.h"
#include "qapi/qmp/qint.h"
#include "qapi/qmp/qbool.h"
#include "qapi/qmp/qstring.h"
#ifndef S_IWGRP
#define S_IWGRP 0
@@ -323,7 +322,6 @@ typedef struct BDRVVVFATState {
int fat_type; /* 16 or 32 */
array_t fat,directory,mapping;
char volume_label[11];
unsigned int cluster_size;
unsigned int sectors_per_cluster;
@@ -861,7 +859,7 @@ static int init_directories(BDRVVVFATState* s,
{
direntry_t* entry=array_get_next(&(s->directory));
entry->attributes=0x28; /* archive | volume label */
memcpy(entry->name, s->volume_label, sizeof(entry->name));
memcpy(entry->name, "QEMU VVFAT ", sizeof(entry->name));
}
/* Now build FAT, and write back information into directory */
@@ -970,8 +968,7 @@ static int init_directories(BDRVVVFATState* s,
bootsector->u.fat16.signature=0x29;
bootsector->u.fat16.id=cpu_to_le32(0xfabe1afd);
memcpy(bootsector->u.fat16.volume_label, s->volume_label,
sizeof(bootsector->u.fat16.volume_label));
memcpy(bootsector->u.fat16.volume_label,"QEMU VVFAT ",11);
memcpy(bootsector->fat_type,(s->fat_type==12?"FAT12 ":s->fat_type==16?"FAT16 ":"FAT32 "),8);
bootsector->magic[0]=0x55; bootsector->magic[1]=0xaa;
@@ -1010,11 +1007,6 @@ static QemuOptsList runtime_opts = {
.type = QEMU_OPT_BOOL,
.help = "Create a floppy rather than a hard disk image",
},
{
.name = "label",
.type = QEMU_OPT_STRING,
.help = "Use a volume label other than QEMU VVFAT",
},
{
.name = "rw",
.type = QEMU_OPT_BOOL,
@@ -1067,8 +1059,8 @@ static void vvfat_parse_filename(const char *filename, QDict *options,
/* Fill in the options QDict */
qdict_put(options, "dir", qstring_from_str(filename));
qdict_put(options, "fat-type", qint_from_int(fat_type));
qdict_put(options, "floppy", qbool_from_bool(floppy));
qdict_put(options, "rw", qbool_from_bool(rw));
qdict_put(options, "floppy", qbool_from_int(floppy));
qdict_put(options, "rw", qbool_from_int(rw));
}
static int vvfat_open(BlockDriverState *bs, QDict *options, int flags,
@@ -1077,7 +1069,7 @@ static int vvfat_open(BlockDriverState *bs, QDict *options, int flags,
BDRVVVFATState *s = bs->opaque;
int cyls, heads, secs;
bool floppy;
const char *dirname, *label;
const char *dirname;
QemuOpts *opts;
Error *local_err = NULL;
int ret;
@@ -1104,18 +1096,6 @@ static int vvfat_open(BlockDriverState *bs, QDict *options, int flags,
s->fat_type = qemu_opt_get_number(opts, "fat-type", 0);
floppy = qemu_opt_get_bool(opts, "floppy", false);
memset(s->volume_label, ' ', sizeof(s->volume_label));
label = qemu_opt_get(opts, "label");
if (label) {
size_t label_length = strlen(label);
if (label_length > 11) {
error_setg(errp, "vvfat label cannot be longer than 11 bytes");
ret = -EINVAL;
goto fail;
}
memcpy(s->volume_label, label, label_length);
}
if (floppy) {
/* 1.44MB or 2.88MB floppy. 2.88MB can be FAT12 (default) or FAT16. */
if (!s->fat_type) {
@@ -1200,10 +1180,9 @@ static int vvfat_open(BlockDriverState *bs, QDict *options, int flags,
/* Disable migration when vvfat is used rw */
if (s->qcow) {
error_setg(&s->migration_blocker,
"The vvfat (rw) format used by node '%s' "
"does not support live migration",
bdrv_get_device_or_node_name(bs));
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"vvfat (rw)", bdrv_get_device_name(bs), "live migration");
migrate_add_blocker(s->migration_blocker);
}
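
The vvfat hunks above add (or, read the other way, drop) a "label" runtime option: an 11-byte, space-padded FAT volume label in place of the hard-coded "QEMU VVFAT ". A hedged sketch of the same pad-and-validate logic in isolation:

  #include <stdio.h>
  #include <string.h>

  /* Returns 0 on success, -1 if the label is longer than FAT allows. */
  static int set_volume_label(char out[11], const char *label)
  {
      size_t len = strlen(label);
      if (len > 11) {
          return -1;                     /* mirrors the -EINVAL path */
      }
      memset(out, ' ', 11);              /* FAT labels are space padded */
      memcpy(out, label, len);           /* not NUL terminated */
      return 0;
  }

  int main(void)
  {
      char vol[11];
      set_volume_label(vol, "MYDISK");
      printf("[%.11s]\n", vol);          /* prints [MYDISK     ] */
      return 0;
  }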

View File

@@ -12,6 +12,7 @@
#include "sysemu/blockdev.h"
#include "sysemu/block-backend.h"
#include "hw/block/block.h"
#include "monitor/monitor.h"
#include "qapi/qmp/qerror.h"
#include "sysemu/sysemu.h"
#include "qmp-commands.h"
@@ -42,7 +43,7 @@ void qmp_nbd_server_start(SocketAddress *addr, Error **errp)
server_fd = socket_listen(addr, errp);
if (server_fd != -1) {
qemu_set_fd_handler(server_fd, nbd_accept, NULL, NULL);
qemu_set_fd_handler2(server_fd, NULL, nbd_accept, NULL, NULL);
}
}
@@ -90,12 +91,11 @@ void qmp_nbd_server_add(const char *device, bool has_writable, bool writable,
blk = blk_by_name(device);
if (!blk) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", device);
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
}
if (!blk_is_inserted(blk)) {
error_setg(errp, QERR_DEVICE_HAS_NO_MEDIUM, device);
error_set(errp, QERR_DEVICE_HAS_NO_MEDIUM, device);
return;
}
@@ -129,7 +129,7 @@ void qmp_nbd_server_stop(Error **errp)
}
if (server_fd != -1) {
qemu_set_fd_handler(server_fd, NULL, NULL, NULL);
qemu_set_fd_handler2(server_fd, NULL, NULL, NULL, NULL);
close(server_fd);
server_fd = -1;
}
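
The blockdev-nbd hunks only switch between qemu_set_fd_handler() and the five-argument qemu_set_fd_handler2(), which takes an extra read-poll callback. A sketch of the register/deregister pairing the diff shows; listen_fd and on_accept() are assumptions, while the call shapes are copied from the hunks above:

  #include <unistd.h>

  static int listen_fd = -1;

  static void on_accept(void *opaque)
  {
      /* accept(listen_fd, ...) and hand the connection off */
  }

  static void start_listening(void)
  {
      qemu_set_fd_handler(listen_fd, on_accept, NULL, NULL);
  }

  static void stop_listening(void)
  {
      qemu_set_fd_handler(listen_fd, NULL, NULL, NULL);   /* deregister */
      close(listen_fd);
      listen_fd = -1;
  }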

View File

@@ -34,14 +34,11 @@
#include "sysemu/blockdev.h"
#include "hw/block/block.h"
#include "block/blockjob.h"
#include "block/throttle-groups.h"
#include "monitor/monitor.h"
#include "qemu/error-report.h"
#include "qemu/option.h"
#include "qemu/config-file.h"
#include "qapi/qmp/types.h"
#include "qapi-visit.h"
#include "qapi/qmp/qerror.h"
#include "qapi/qmp-output-visitor.h"
#include "qapi/util.h"
#include "sysemu/sysemu.h"
@@ -176,7 +173,7 @@ static int drive_index_to_unit_id(BlockInterfaceType type, int index)
QemuOpts *drive_def(const char *optstr)
{
return qemu_opts_parse_noisily(qemu_find_opts("drive"), optstr, false);
return qemu_opts_parse(qemu_find_opts("drive"), optstr, 0);
}
QemuOpts *drive_add(BlockInterfaceType type, int index, const char *file,
@@ -360,7 +357,6 @@ static BlockBackend *blockdev_init(const char *file, QDict *bs_opts,
const char *id;
bool has_driver_specific_opts;
BlockdevDetectZeroesOptions detect_zeroes;
const char *throttling_group;
/* Check common options by copying from bs_opts to opts, all other options
* stay in bs_opts for processing by bdrv_open(). */
@@ -395,13 +391,13 @@ static BlockBackend *blockdev_init(const char *file, QDict *bs_opts,
}
}
if (qemu_opt_get_bool(opts, BDRV_OPT_CACHE_WB, true)) {
if (qemu_opt_get_bool(opts, "cache.writeback", true)) {
bdrv_flags |= BDRV_O_CACHE_WB;
}
if (qemu_opt_get_bool(opts, BDRV_OPT_CACHE_DIRECT, false)) {
if (qemu_opt_get_bool(opts, "cache.direct", false)) {
bdrv_flags |= BDRV_O_NOCACHE;
}
if (qemu_opt_get_bool(opts, BDRV_OPT_CACHE_NO_FLUSH, false)) {
if (qemu_opt_get_bool(opts, "cache.no-flush", false)) {
bdrv_flags |= BDRV_O_NO_FLUSH;
}
@@ -463,8 +459,6 @@ static BlockBackend *blockdev_init(const char *file, QDict *bs_opts,
cfg.op_size = qemu_opt_get_number(opts, "throttling.iops-size", 0);
throttling_group = qemu_opt_get(opts, "throttling.group");
if (!check_throttle_config(&cfg, &error)) {
error_propagate(errp, error);
goto early_err;
@@ -553,10 +547,7 @@ static BlockBackend *blockdev_init(const char *file, QDict *bs_opts,
/* disk I/O throttling */
if (throttle_enabled(&cfg)) {
if (!throttling_group) {
throttling_group = blk_name(blk);
}
bdrv_io_limits_enable(bs, throttling_group);
bdrv_io_limits_enable(bs);
bdrv_set_io_limits(bs, &cfg);
}
@@ -720,8 +711,6 @@ DriveInfo *drive_new(QemuOpts *all_opts, BlockInterfaceType block_default_type)
{ "iops_size", "throttling.iops-size" },
{ "group", "throttling.group" },
{ "readonly", "read-only" },
};
@@ -744,16 +733,16 @@ DriveInfo *drive_new(QemuOpts *all_opts, BlockInterfaceType block_default_type)
}
/* Specific options take precedence */
if (!qemu_opt_get(all_opts, BDRV_OPT_CACHE_WB)) {
qemu_opt_set_bool(all_opts, BDRV_OPT_CACHE_WB,
if (!qemu_opt_get(all_opts, "cache.writeback")) {
qemu_opt_set_bool(all_opts, "cache.writeback",
!!(flags & BDRV_O_CACHE_WB), &error_abort);
}
if (!qemu_opt_get(all_opts, BDRV_OPT_CACHE_DIRECT)) {
qemu_opt_set_bool(all_opts, BDRV_OPT_CACHE_DIRECT,
if (!qemu_opt_get(all_opts, "cache.direct")) {
qemu_opt_set_bool(all_opts, "cache.direct",
!!(flags & BDRV_O_NOCACHE), &error_abort);
}
if (!qemu_opt_get(all_opts, BDRV_OPT_CACHE_NO_FLUSH)) {
qemu_opt_set_bool(all_opts, BDRV_OPT_CACHE_NO_FLUSH,
if (!qemu_opt_get(all_opts, "cache.no-flush")) {
qemu_opt_set_bool(all_opts, "cache.no-flush",
!!(flags & BDRV_O_NO_FLUSH), &error_abort);
}
qemu_opt_unset(all_opts, "cache");
@@ -944,7 +933,7 @@ DriveInfo *drive_new(QemuOpts *all_opts, BlockInterfaceType block_default_type)
devopts = qemu_opts_create(qemu_find_opts("device"), NULL, 0,
&error_abort);
if (arch_type == QEMU_ARCH_S390X) {
qemu_opt_set(devopts, "driver", "virtio-blk-ccw", &error_abort);
qemu_opt_set(devopts, "driver", "virtio-blk-s390", &error_abort);
} else {
qemu_opt_set(devopts, "driver", "virtio-blk-pci", &error_abort);
}
@@ -1113,8 +1102,7 @@ SnapshotInfo *qmp_blockdev_snapshot_delete_internal_sync(const char *device,
blk = blk_by_name(device);
if (!blk) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", device);
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return NULL;
}
bs = blk_bs(blk);
@@ -1176,68 +1164,6 @@ out_aio_context:
return NULL;
}
/**
* block_dirty_bitmap_lookup:
* Return a dirty bitmap (if present), after validating
* the node reference and bitmap names.
*
* @node: The name of the BDS node to search for bitmaps
* @name: The name of the bitmap to search for
* @pbs: Output pointer for BDS lookup, if desired. Can be NULL.
* @paio: Output pointer for aio_context acquisition, if desired. Can be NULL.
* @errp: Output pointer for error information. Can be NULL.
*
* @return: A bitmap object on success, or NULL on failure.
*/
static BdrvDirtyBitmap *block_dirty_bitmap_lookup(const char *node,
const char *name,
BlockDriverState **pbs,
AioContext **paio,
Error **errp)
{
BlockDriverState *bs;
BdrvDirtyBitmap *bitmap;
AioContext *aio_context;
if (!node) {
error_setg(errp, "Node cannot be NULL");
return NULL;
}
if (!name) {
error_setg(errp, "Bitmap name cannot be NULL");
return NULL;
}
bs = bdrv_lookup_bs(node, node, NULL);
if (!bs) {
error_setg(errp, "Node '%s' not found", node);
return NULL;
}
aio_context = bdrv_get_aio_context(bs);
aio_context_acquire(aio_context);
bitmap = bdrv_find_dirty_bitmap(bs, name);
if (!bitmap) {
error_setg(errp, "Dirty bitmap '%s' not found", name);
goto fail;
}
if (pbs) {
*pbs = bs;
}
if (paio) {
*paio = aio_context;
} else {
aio_context_release(aio_context);
}
return bitmap;
fail:
aio_context_release(aio_context);
return NULL;
}
/* New and old BlockDriverState structs for atomic group operations */
typedef struct BlkTransactionState BlkTransactionState;
@@ -1303,8 +1229,7 @@ static void internal_snapshot_prepare(BlkTransactionState *common,
/* 2. check for validation */
blk = blk_by_name(device);
if (!blk) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", device);
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
}
bs = blk_bs(blk);
@@ -1314,7 +1239,7 @@ static void internal_snapshot_prepare(BlkTransactionState *common,
aio_context_acquire(state->aio_context);
if (!bdrv_is_inserted(bs)) {
error_setg(errp, QERR_DEVICE_HAS_NO_MEDIUM, device);
error_set(errp, QERR_DEVICE_HAS_NO_MEDIUM, device);
return;
}
@@ -1323,14 +1248,13 @@ static void internal_snapshot_prepare(BlkTransactionState *common,
}
if (bdrv_is_read_only(bs)) {
error_setg(errp, "Device '%s' is read only", device);
error_set(errp, QERR_DEVICE_IS_READ_ONLY, device);
return;
}
if (!bdrv_can_snapshot(bs)) {
error_setg(errp, "Block format '%s' used by device '%s' "
"does not support internal snapshots",
bs->drv->format_name, device);
error_set(errp, QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
bs->drv->format_name, device, "internal snapshot");
return;
}
@@ -1455,7 +1379,7 @@ static void external_snapshot_prepare(BlkTransactionState *common,
/* start processing */
drv = bdrv_find_format(format);
if (!drv) {
error_setg(errp, QERR_INVALID_BLOCK_FORMAT, format);
error_set(errp, QERR_INVALID_BLOCK_FORMAT, format);
return;
}
@@ -1482,7 +1406,7 @@ static void external_snapshot_prepare(BlkTransactionState *common,
aio_context_acquire(state->aio_context);
if (!bdrv_is_inserted(state->old_bs)) {
error_setg(errp, QERR_DEVICE_HAS_NO_MEDIUM, device);
error_set(errp, QERR_DEVICE_HAS_NO_MEDIUM, device);
return;
}
@@ -1493,13 +1417,13 @@ static void external_snapshot_prepare(BlkTransactionState *common,
if (!bdrv_is_read_only(state->old_bs)) {
if (bdrv_flush(state->old_bs)) {
error_setg(errp, QERR_IO_ERROR);
error_set(errp, QERR_IO_ERROR);
return;
}
}
if (!bdrv_is_first_non_filter(state->old_bs)) {
error_setg(errp, QERR_FEATURE_DISABLED, "snapshot");
error_set(errp, QERR_FEATURE_DISABLED, "snapshot");
return;
}
@@ -1584,8 +1508,7 @@ static void drive_backup_prepare(BlkTransactionState *common, Error **errp)
blk = blk_by_name(backup->device);
if (!blk) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", backup->device);
error_set(errp, QERR_DEVICE_NOT_FOUND, backup->device);
return;
}
bs = blk_bs(blk);
@@ -1599,7 +1522,6 @@ static void drive_backup_prepare(BlkTransactionState *common, Error **errp)
backup->sync,
backup->has_mode, backup->mode,
backup->has_speed, backup->speed,
backup->has_bitmap, backup->bitmap,
backup->has_on_source_error, backup->on_source_error,
backup->has_on_target_error, backup->on_target_error,
&local_err);
@@ -1855,8 +1777,7 @@ void qmp_eject(const char *device, bool has_force, bool force, Error **errp)
blk = blk_by_name(device);
if (!blk) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", device);
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
}
@@ -1916,8 +1837,7 @@ void qmp_change_blockdev(const char *device, const char *filename,
blk = blk_by_name(device);
if (!blk) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", device);
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
}
bs = blk_bs(blk);
@@ -1928,7 +1848,7 @@ void qmp_change_blockdev(const char *device, const char *filename,
if (format) {
drv = bdrv_find_whitelisted_format(format, bs->read_only);
if (!drv) {
error_setg(errp, QERR_INVALID_BLOCK_FORMAT, format);
error_set(errp, QERR_INVALID_BLOCK_FORMAT, format);
goto out;
}
}
@@ -1967,9 +1887,7 @@ void qmp_block_set_io_throttle(const char *device, int64_t bps, int64_t bps_rd,
bool has_iops_wr_max,
int64_t iops_wr_max,
bool has_iops_size,
int64_t iops_size,
bool has_group,
const char *group, Error **errp)
int64_t iops_size, Error **errp)
{
ThrottleConfig cfg;
BlockDriverState *bs;
@@ -1978,8 +1896,7 @@ void qmp_block_set_io_throttle(const char *device, int64_t bps, int64_t bps_rd,
blk = blk_by_name(device);
if (!blk) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", device);
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
}
bs = blk_bs(blk);
@@ -2023,121 +1940,20 @@ void qmp_block_set_io_throttle(const char *device, int64_t bps, int64_t bps_rd,
aio_context = bdrv_get_aio_context(bs);
aio_context_acquire(aio_context);
if (throttle_enabled(&cfg)) {
/* Enable I/O limits if they're not enabled yet, otherwise
* just update the throttling group. */
if (!bs->io_limits_enabled) {
bdrv_io_limits_enable(bs, has_group ? group : device);
} else if (has_group) {
bdrv_io_limits_update_group(bs, group);
}
/* Set the new throttling configuration */
bdrv_set_io_limits(bs, &cfg);
} else if (bs->io_limits_enabled) {
/* If all throttling settings are set to 0, disable I/O limits */
if (!bs->io_limits_enabled && throttle_enabled(&cfg)) {
bdrv_io_limits_enable(bs);
} else if (bs->io_limits_enabled && !throttle_enabled(&cfg)) {
bdrv_io_limits_disable(bs);
}
if (bs->io_limits_enabled) {
bdrv_set_io_limits(bs, &cfg);
}
aio_context_release(aio_context);
}
void qmp_block_dirty_bitmap_add(const char *node, const char *name,
bool has_granularity, uint32_t granularity,
Error **errp)
{
AioContext *aio_context;
BlockDriverState *bs;
if (!name || name[0] == '\0') {
error_setg(errp, "Bitmap name cannot be empty");
return;
}
bs = bdrv_lookup_bs(node, node, errp);
if (!bs) {
return;
}
aio_context = bdrv_get_aio_context(bs);
aio_context_acquire(aio_context);
if (has_granularity) {
if (granularity < 512 || !is_power_of_2(granularity)) {
error_setg(errp, "Granularity must be power of 2 "
"and at least 512");
goto out;
}
} else {
/* Default to cluster size, if available: */
granularity = bdrv_get_default_bitmap_granularity(bs);
}
bdrv_create_dirty_bitmap(bs, granularity, name, errp);
out:
aio_context_release(aio_context);
}
void qmp_block_dirty_bitmap_remove(const char *node, const char *name,
Error **errp)
{
AioContext *aio_context;
BlockDriverState *bs;
BdrvDirtyBitmap *bitmap;
bitmap = block_dirty_bitmap_lookup(node, name, &bs, &aio_context, errp);
if (!bitmap || !bs) {
return;
}
if (bdrv_dirty_bitmap_frozen(bitmap)) {
error_setg(errp,
"Bitmap '%s' is currently frozen and cannot be removed",
name);
goto out;
}
bdrv_dirty_bitmap_make_anon(bitmap);
bdrv_release_dirty_bitmap(bs, bitmap);
out:
aio_context_release(aio_context);
}
/**
* Completely clear a bitmap, for the purposes of synchronizing a bitmap
* immediately after a full backup operation.
*/
void qmp_block_dirty_bitmap_clear(const char *node, const char *name,
Error **errp)
{
AioContext *aio_context;
BdrvDirtyBitmap *bitmap;
BlockDriverState *bs;
bitmap = block_dirty_bitmap_lookup(node, name, &bs, &aio_context, errp);
if (!bitmap || !bs) {
return;
}
if (bdrv_dirty_bitmap_frozen(bitmap)) {
error_setg(errp,
"Bitmap '%s' is currently frozen and cannot be modified",
name);
goto out;
} else if (!bdrv_dirty_bitmap_enabled(bitmap)) {
error_setg(errp,
"Bitmap '%s' is currently disabled and cannot be cleared",
name);
goto out;
}
bdrv_clear_dirty_bitmap(bitmap);
out:
aio_context_release(aio_context);
}
void hmp_drive_del(Monitor *mon, const QDict *qdict)
int hmp_drive_del(Monitor *mon, const QDict *qdict, QObject **ret_data)
{
const char *id = qdict_get_str(qdict, "id");
BlockBackend *blk;
@@ -2148,14 +1964,14 @@ void hmp_drive_del(Monitor *mon, const QDict *qdict)
blk = blk_by_name(id);
if (!blk) {
error_report("Device '%s' not found", id);
return;
return -1;
}
bs = blk_bs(blk);
if (!blk_legacy_dinfo(blk)) {
error_report("Deleting device added with blockdev-add"
" is not supported");
return;
return -1;
}
aio_context = bdrv_get_aio_context(bs);
@@ -2164,9 +1980,12 @@ void hmp_drive_del(Monitor *mon, const QDict *qdict)
if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_DRIVE_DEL, &local_err)) {
error_report_err(local_err);
aio_context_release(aio_context);
return;
return -1;
}
/* quiesce block driver; prevent further io */
bdrv_drain_all();
bdrv_flush(bs);
bdrv_close(bs);
/* if we have a device attached to this BlockDriverState
@@ -2184,6 +2003,7 @@ void hmp_drive_del(Monitor *mon, const QDict *qdict)
}
aio_context_release(aio_context);
return 0;
}
void qmp_block_resize(bool has_device, const char *device,
@@ -2207,17 +2027,17 @@ void qmp_block_resize(bool has_device, const char *device,
aio_context_acquire(aio_context);
if (!bdrv_is_first_non_filter(bs)) {
error_setg(errp, QERR_FEATURE_DISABLED, "resize");
error_set(errp, QERR_FEATURE_DISABLED, "resize");
goto out;
}
if (size < 0) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "size", "a >0 size");
error_set(errp, QERR_INVALID_PARAMETER_VALUE, "size", "a >0 size");
goto out;
}
if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_RESIZE, NULL)) {
error_setg(errp, QERR_DEVICE_IN_USE, device);
error_set(errp, QERR_DEVICE_IN_USE, device);
goto out;
}
@@ -2229,16 +2049,16 @@ void qmp_block_resize(bool has_device, const char *device,
case 0:
break;
case -ENOMEDIUM:
error_setg(errp, QERR_DEVICE_HAS_NO_MEDIUM, device);
error_set(errp, QERR_DEVICE_HAS_NO_MEDIUM, device);
break;
case -ENOTSUP:
error_setg(errp, QERR_UNSUPPORTED);
error_set(errp, QERR_UNSUPPORTED);
break;
case -EACCES:
error_setg(errp, "Device '%s' is read only", device);
error_set(errp, QERR_DEVICE_IS_READ_ONLY, device);
break;
case -EBUSY:
error_setg(errp, QERR_DEVICE_IN_USE, device);
error_set(errp, QERR_DEVICE_IN_USE, device);
break;
default:
error_setg_errno(errp, -ret, "Could not resize");
@@ -2296,8 +2116,7 @@ void qmp_block_stream(const char *device,
blk = blk_by_name(device);
if (!blk) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", device);
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
}
bs = blk_bs(blk);
@@ -2312,7 +2131,7 @@ void qmp_block_stream(const char *device,
if (has_base) {
base_bs = bdrv_find_backing_image(bs, base);
if (base_bs == NULL) {
error_setg(errp, QERR_BASE_NOT_FOUND, base);
error_set(errp, QERR_BASE_NOT_FOUND, base);
goto out;
}
assert(bdrv_get_aio_context(base_bs) == aio_context);
@@ -2371,8 +2190,7 @@ void qmp_block_commit(const char *device,
* scenario in which all optional arguments are omitted. */
blk = blk_by_name(device);
if (!blk) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", device);
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
}
bs = blk_bs(blk);
@@ -2380,6 +2198,9 @@ void qmp_block_commit(const char *device,
aio_context = bdrv_get_aio_context(bs);
aio_context_acquire(aio_context);
/* drain all i/o before commits */
bdrv_drain_all();
if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_COMMIT_SOURCE, errp)) {
goto out;
}
@@ -2407,7 +2228,7 @@ void qmp_block_commit(const char *device,
}
if (base_bs == NULL) {
error_setg(errp, QERR_BASE_NOT_FOUND, base ? base : "NULL");
error_set(errp, QERR_BASE_NOT_FOUND, base ? base : "NULL");
goto out;
}
@@ -2449,7 +2270,6 @@ void qmp_drive_backup(const char *device, const char *target,
enum MirrorSyncMode sync,
bool has_mode, enum NewImageMode mode,
bool has_speed, int64_t speed,
bool has_bitmap, const char *bitmap,
bool has_on_source_error, BlockdevOnError on_source_error,
bool has_on_target_error, BlockdevOnError on_target_error,
Error **errp)
@@ -2458,7 +2278,6 @@ void qmp_drive_backup(const char *device, const char *target,
BlockDriverState *bs;
BlockDriverState *target_bs;
BlockDriverState *source = NULL;
BdrvDirtyBitmap *bmap = NULL;
AioContext *aio_context;
BlockDriver *drv = NULL;
Error *local_err = NULL;
@@ -2481,8 +2300,7 @@ void qmp_drive_backup(const char *device, const char *target,
blk = blk_by_name(device);
if (!blk) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", device);
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
}
bs = blk_bs(blk);
@@ -2493,7 +2311,7 @@ void qmp_drive_backup(const char *device, const char *target,
/* Although backup_run has this check too, we need to use bs->drv below, so
* do an early check redundantly. */
if (!bdrv_is_inserted(bs)) {
error_setg(errp, QERR_DEVICE_HAS_NO_MEDIUM, device);
error_set(errp, QERR_DEVICE_HAS_NO_MEDIUM, device);
goto out;
}
@@ -2503,7 +2321,7 @@ void qmp_drive_backup(const char *device, const char *target,
if (format) {
drv = bdrv_find_format(format);
if (!drv) {
error_setg(errp, QERR_INVALID_BLOCK_FORMAT, format);
error_set(errp, QERR_INVALID_BLOCK_FORMAT, format);
goto out;
}
}
@@ -2559,16 +2377,7 @@ void qmp_drive_backup(const char *device, const char *target,
bdrv_set_aio_context(target_bs, aio_context);
if (has_bitmap) {
bmap = bdrv_find_dirty_bitmap(bs, bitmap);
if (!bmap) {
error_setg(errp, "Bitmap '%s' could not be found", bitmap);
goto out;
}
}
backup_start(bs, target_bs, speed, sync, bmap,
on_source_error, on_target_error,
backup_start(bs, target_bs, speed, sync, on_source_error, on_target_error,
block_job_cb, bs, &local_err);
if (local_err != NULL) {
bdrv_unref(target_bs);
@@ -2582,7 +2391,7 @@ out:
BlockDeviceInfoList *qmp_query_named_block_nodes(Error **errp)
{
return bdrv_named_nodes_list(errp);
return bdrv_named_nodes_list();
}
void qmp_blockdev_backup(const char *device, const char *target,
@@ -2629,8 +2438,8 @@ void qmp_blockdev_backup(const char *device, const char *target,
bdrv_ref(target_bs);
bdrv_set_aio_context(target_bs, aio_context);
backup_start(bs, target_bs, speed, sync, NULL, on_source_error,
on_target_error, block_job_cb, bs, &local_err);
backup_start(bs, target_bs, speed, sync, on_source_error, on_target_error,
block_job_cb, bs, &local_err);
if (local_err != NULL) {
bdrv_unref(target_bs);
error_propagate(errp, local_err);
@@ -2639,6 +2448,8 @@ out:
aio_context_release(aio_context);
}
#define DEFAULT_MIRROR_BUF_SIZE (10 << 20)
void qmp_drive_mirror(const char *device, const char *target,
bool has_format, const char *format,
bool has_node_name, const char *node_name,
@@ -2680,27 +2491,25 @@ void qmp_drive_mirror(const char *device, const char *target,
granularity = 0;
}
if (!has_buf_size) {
buf_size = 0;
buf_size = DEFAULT_MIRROR_BUF_SIZE;
}
if (!has_unmap) {
unmap = true;
}
if (granularity != 0 && (granularity < 512 || granularity > 1048576 * 64)) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "granularity",
"a value in range [512B, 64MB]");
error_set(errp, QERR_INVALID_PARAMETER_VALUE, "granularity",
"a value in range [512B, 64MB]");
return;
}
if (granularity & (granularity - 1)) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "granularity",
"power of 2");
error_set(errp, QERR_INVALID_PARAMETER_VALUE, "granularity", "power of 2");
return;
}
blk = blk_by_name(device);
if (!blk) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", device);
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
}
bs = blk_bs(blk);
@@ -2709,7 +2518,7 @@ void qmp_drive_mirror(const char *device, const char *target,
aio_context_acquire(aio_context);
if (!bdrv_is_inserted(bs)) {
error_setg(errp, QERR_DEVICE_HAS_NO_MEDIUM, device);
error_set(errp, QERR_DEVICE_HAS_NO_MEDIUM, device);
goto out;
}
@@ -2719,7 +2528,7 @@ void qmp_drive_mirror(const char *device, const char *target,
if (format) {
drv = bdrv_find_format(format);
if (!drv) {
error_setg(errp, QERR_INVALID_BLOCK_FORMAT, format);
error_set(errp, QERR_INVALID_BLOCK_FORMAT, format);
goto out;
}
}
@@ -2895,7 +2704,7 @@ void qmp_block_job_cancel(const char *device,
force = false;
}
if (job->user_paused && !force) {
if (job->paused && !force) {
error_setg(errp, "The block job for device '%s' is currently paused",
device);
goto out;
@@ -2912,11 +2721,10 @@ void qmp_block_job_pause(const char *device, Error **errp)
AioContext *aio_context;
BlockJob *job = find_block_job(device, &aio_context, errp);
if (!job || job->user_paused) {
if (!job) {
return;
}
job->user_paused = true;
trace_qmp_block_job_pause(job);
block_job_pause(job);
aio_context_release(aio_context);
@@ -2927,11 +2735,10 @@ void qmp_block_job_resume(const char *device, Error **errp)
AioContext *aio_context;
BlockJob *job = find_block_job(device, &aio_context, errp);
if (!job || !job->user_paused) {
if (!job) {
return;
}
job->user_paused = false;
trace_qmp_block_job_resume(job);
block_job_resume(job);
aio_context_release(aio_context);
@@ -2967,8 +2774,7 @@ void qmp_change_backing_file(const char *device,
blk = blk_by_name(device);
if (!blk) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", device);
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
}
bs = blk_bs(blk);
@@ -3132,15 +2938,15 @@ QemuOptsList qemu_common_drive_opts = {
.type = QEMU_OPT_STRING,
.help = "discard operation (ignore/off, unmap/on)",
},{
.name = BDRV_OPT_CACHE_WB,
.name = "cache.writeback",
.type = QEMU_OPT_BOOL,
.help = "enables writeback mode for any caches",
},{
.name = BDRV_OPT_CACHE_DIRECT,
.name = "cache.direct",
.type = QEMU_OPT_BOOL,
.help = "enables use of O_DIRECT (bypass the host page cache)",
},{
.name = BDRV_OPT_CACHE_NO_FLUSH,
.name = "cache.no-flush",
.type = QEMU_OPT_BOOL,
.help = "ignore any flush requests for the device",
},{
@@ -3215,10 +3021,6 @@ QemuOptsList qemu_common_drive_opts = {
.name = "throttling.iops-size",
.type = QEMU_OPT_NUMBER,
.help = "when limiting by iops max size of an I/O in bytes",
},{
.name = "throttling.group",
.type = QEMU_OPT_STRING,
.help = "name of the block throttling group",
},{
.name = "copy-on-read",
.type = QEMU_OPT_BOOL,


@@ -29,7 +29,6 @@
#include "block/block.h"
#include "block/blockjob.h"
#include "block/block_int.h"
#include "qapi/qmp/qerror.h"
#include "qapi/qmp/qjson.h"
#include "block/coroutine.h"
#include "qmp-commands.h"
@@ -43,7 +42,7 @@ void *block_job_create(const BlockJobDriver *driver, BlockDriverState *bs,
BlockJob *job;
if (bs->job) {
error_setg(errp, QERR_DEVICE_IN_USE, bdrv_get_device_name(bs));
error_set(errp, QERR_DEVICE_IN_USE, bdrv_get_device_name(bs));
return NULL;
}
bdrv_ref(bs);
@@ -66,7 +65,10 @@ void *block_job_create(const BlockJobDriver *driver, BlockDriverState *bs,
block_job_set_speed(job, speed, &local_err);
if (local_err) {
block_job_release(bs);
bs->job = NULL;
bdrv_op_unblock_all(bs, job->blocker);
error_free(job->blocker);
g_free(job);
error_propagate(errp, local_err);
return NULL;
}
@@ -74,23 +76,16 @@ void *block_job_create(const BlockJobDriver *driver, BlockDriverState *bs,
return job;
}
void block_job_release(BlockDriverState *bs)
{
BlockJob *job = bs->job;
bs->job = NULL;
bdrv_op_unblock_all(bs, job->blocker);
error_free(job->blocker);
g_free(job);
}
void block_job_completed(BlockJob *job, int ret)
{
BlockDriverState *bs = job->bs;
assert(bs->job == job);
job->cb(job->opaque, ret);
block_job_release(bs);
bs->job = NULL;
bdrv_op_unblock_all(bs, job->blocker);
error_free(job->blocker);
g_free(job);
}
void block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
@@ -98,7 +93,7 @@ void block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
Error *local_err = NULL;
if (!job->driver->set_speed) {
error_setg(errp, QERR_UNSUPPORTED);
error_set(errp, QERR_UNSUPPORTED);
return;
}
job->driver->set_speed(job, speed, &local_err);
@@ -112,9 +107,9 @@ void block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
void block_job_complete(BlockJob *job, Error **errp)
{
if (job->pause_count || job->cancelled || !job->driver->complete) {
error_setg(errp, QERR_BLOCK_JOB_NOT_READY,
bdrv_get_device_name(job->bs));
if (job->paused || job->cancelled || !job->driver->complete) {
error_set(errp, QERR_BLOCK_JOB_NOT_READY,
bdrv_get_device_name(job->bs));
return;
}
@@ -123,26 +118,17 @@ void block_job_complete(BlockJob *job, Error **errp)
void block_job_pause(BlockJob *job)
{
job->pause_count++;
job->paused = true;
}
bool block_job_is_paused(BlockJob *job)
{
return job->pause_count > 0;
return job->paused;
}
void block_job_resume(BlockJob *job)
{
assert(job->pause_count > 0);
job->pause_count--;
if (job->pause_count) {
return;
}
block_job_enter(job);
}
void block_job_enter(BlockJob *job)
{
job->paused = false;
block_job_iostatus_reset(job);
if (job->co && !job->busy) {
qemu_coroutine_enter(job->co, NULL);
@@ -152,7 +138,7 @@ void block_job_enter(BlockJob *job)
void block_job_cancel(BlockJob *job)
{
job->cancelled = true;
block_job_enter(job);
block_job_resume(job);
}
bool block_job_is_cancelled(BlockJob *job)
@@ -272,7 +258,7 @@ BlockJobInfo *block_job_query(BlockJob *job)
info->device = g_strdup(bdrv_get_device_name(job->bs));
info->len = job->len;
info->busy = job->busy;
info->paused = job->pause_count > 0;
info->paused = job->paused;
info->offset = job->offset;
info->speed = job->speed;
info->io_status = job->iostatus;
@@ -349,8 +335,6 @@ BlockErrorAction block_job_error_action(BlockJob *job, BlockDriverState *bs,
IO_OPERATION_TYPE_WRITE,
action, &error_abort);
if (action == BLOCK_ERROR_ACTION_STOP) {
/* make the pause user visible, which will be resumed from QMP. */
job->user_paused = true;
block_job_pause(job);
block_job_iostatus_set_err(job, error);
if (bs != job->bs) {


@@ -92,7 +92,7 @@ void fork_start(void)
void fork_end(int child)
{
if (child) {
gdbserver_fork(thread_cpu);
gdbserver_fork((CPUArchState *)thread_cpu->env_ptr);
}
}
@@ -108,6 +108,10 @@ void cpu_list_unlock(void)
/***********************************************************/
/* CPUX86 core interface */
void cpu_smm_update(CPUX86State *env)
{
}
uint64_t cpu_get_tsc(CPUX86State *env)
{
return cpu_get_real_ticks();
@@ -166,14 +170,12 @@ static void set_idt(int n, unsigned int dpl)
void cpu_loop(CPUX86State *env)
{
X86CPU *cpu = x86_env_get_cpu(env);
CPUState *cs = CPU(cpu);
int trapnr;
abi_ulong pc;
//target_siginfo_t info;
for(;;) {
trapnr = cpu_x86_exec(cs);
trapnr = cpu_x86_exec(env);
switch(trapnr) {
case 0x80:
/* syscall from int $0x80 */
@@ -514,7 +516,7 @@ void cpu_loop(CPUSPARCState *env)
//target_siginfo_t info;
while (1) {
trapnr = cpu_sparc_exec(cs);
trapnr = cpu_sparc_exec (env);
switch (trapnr) {
#ifndef TARGET_SPARC64
@@ -903,6 +905,7 @@ int main(int argc, char **argv)
#endif
}
tcg_exec_init(0);
cpu_exec_init_all();
/* NOTE: we need to init the CPU at this stage to get
qemu_host_page_size */
cpu = cpu_init(cpu_model);

configure (vendored): diff suppressed because it is too large


@@ -27,7 +27,6 @@
#include "exec/address-spaces.h"
#include "exec/memory-internal.h"
#include "qemu/rcu.h"
#include "exec/tb-hash.h"
/* -icount align implementation. */
@@ -227,9 +226,10 @@ static inline tcg_target_ulong cpu_tb_exec(CPUState *cpu, uint8_t *tb_ptr)
/* Execute the code without caching the generated code. An interpreter
could be used if available. */
static void cpu_exec_nocache(CPUState *cpu, int max_cycles,
static void cpu_exec_nocache(CPUArchState *env, int max_cycles,
TranslationBlock *orig_tb)
{
CPUState *cpu = ENV_GET_CPU(env);
TranslationBlock *tb;
target_ulong pc = orig_tb->pc;
target_ulong cs_base = orig_tb->cs_base;
@@ -253,12 +253,12 @@ static void cpu_exec_nocache(CPUState *cpu, int max_cycles,
tb_free(tb);
}
static TranslationBlock *tb_find_slow(CPUState *cpu,
static TranslationBlock *tb_find_slow(CPUArchState *env,
target_ulong pc,
target_ulong cs_base,
uint64_t flags)
{
CPUArchState *env = (CPUArchState *)cpu->env_ptr;
CPUState *cpu = ENV_GET_CPU(env);
TranslationBlock *tb, **ptb1;
unsigned int h;
tb_page_addr_t phys_pc, phys_page1;
@@ -310,9 +310,9 @@ static TranslationBlock *tb_find_slow(CPUState *cpu,
return tb;
}
static inline TranslationBlock *tb_find_fast(CPUState *cpu)
static inline TranslationBlock *tb_find_fast(CPUArchState *env)
{
CPUArchState *env = (CPUArchState *)cpu->env_ptr;
CPUState *cpu = ENV_GET_CPU(env);
TranslationBlock *tb;
target_ulong cs_base, pc;
int flags;
@@ -324,13 +324,14 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu)
tb = cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)];
if (unlikely(!tb || tb->pc != pc || tb->cs_base != cs_base ||
tb->flags != flags)) {
tb = tb_find_slow(cpu, pc, cs_base, flags);
tb = tb_find_slow(env, pc, cs_base, flags);
}
return tb;
}
static void cpu_handle_debug_exception(CPUState *cpu)
static void cpu_handle_debug_exception(CPUArchState *env)
{
CPUState *cpu = ENV_GET_CPU(env);
CPUClass *cc = CPU_GET_CLASS(cpu);
CPUWatchpoint *wp;
@@ -347,12 +348,12 @@ static void cpu_handle_debug_exception(CPUState *cpu)
volatile sig_atomic_t exit_request;
int cpu_exec(CPUState *cpu)
int cpu_exec(CPUArchState *env)
{
CPUState *cpu = ENV_GET_CPU(env);
CPUClass *cc = CPU_GET_CLASS(cpu);
#ifdef TARGET_I386
X86CPU *x86_cpu = X86_CPU(cpu);
CPUArchState *env = &x86_cpu->env;
#endif
int ret, interrupt_request;
TranslationBlock *tb;
@@ -405,7 +406,7 @@ int cpu_exec(CPUState *cpu)
/* exit request from the cpu execution loop */
ret = cpu->exception_index;
if (ret == EXCP_DEBUG) {
cpu_handle_debug_exception(cpu);
cpu_handle_debug_exception(env);
}
cpu->exception_index = -1;
break;
@@ -481,7 +482,7 @@ int cpu_exec(CPUState *cpu)
}
spin_lock(&tcg_ctx.tb_ctx.tb_lock);
have_tb_lock = true;
tb = tb_find_fast(cpu);
tb = tb_find_fast(env);
/* Note: we do it here to avoid a gcc bug on Mac OS X when
doing it in tb_find_slow */
if (tcg_ctx.tb_ctx.tb_invalidated_flag) {
@@ -541,7 +542,7 @@ int cpu_exec(CPUState *cpu)
if (insns_left > 0) {
/* Execute remaining instructions. */
tb = (TranslationBlock *)(next_tb & ~TB_EXIT_MASK);
cpu_exec_nocache(cpu, insns_left, tb);
cpu_exec_nocache(env, insns_left, tb);
align_clocks(&sc, cpu);
}
cpu->exception_index = EXCP_INTERRUPT;
@@ -565,11 +566,11 @@ int cpu_exec(CPUState *cpu)
/* Reload env after longjmp - the compiler may have smashed all
* local variables as longjmp is marked 'noreturn'. */
cpu = current_cpu;
env = cpu->env_ptr;
cc = CPU_GET_CLASS(cpu);
cpu->can_do_io = 1;
#ifdef TARGET_I386
x86_cpu = X86_CPU(cpu);
env = &x86_cpu->env;
#endif
if (have_tb_lock) {
spin_unlock(&tcg_ctx.tb_ctx.tb_lock);

cpus.c

@@ -27,7 +27,6 @@
#include "monitor/monitor.h"
#include "qapi/qmp/qerror.h"
#include "qemu/error-report.h"
#include "sysemu/sysemu.h"
#include "exec/gdbstub.h"
#include "sysemu/dma.h"
@@ -106,7 +105,6 @@ static bool all_cpu_threads_idle(void)
/* Protected by TimersState seqlock */
static bool icount_sleep = true;
static int64_t vm_clock_warp_start = -1;
/* Conversion factor from emulated instructions to virtual clock ticks. */
static int icount_time_shift;
@@ -395,18 +393,15 @@ void qemu_clock_warp(QEMUClockType type)
return;
}
if (icount_sleep) {
/*
* If the CPUs have been sleeping, advance QEMU_CLOCK_VIRTUAL timer now.
* This ensures that the deadline for the timer is computed correctly
* below.
* This also makes sure that the insn counter is synchronized before
* the CPU starts running, in case the CPU is woken by an event other
* than the earliest QEMU_CLOCK_VIRTUAL timer.
*/
icount_warp_rt(NULL);
timer_del(icount_warp_timer);
}
/*
* If the CPUs have been sleeping, advance QEMU_CLOCK_VIRTUAL timer now.
* This ensures that the deadline for the timer is computed correctly below.
* This also makes sure that the insn counter is synchronized before the
* CPU starts running, in case the CPU is woken by an event other than
* the earliest QEMU_CLOCK_VIRTUAL timer.
*/
icount_warp_rt(NULL);
timer_del(icount_warp_timer);
if (!all_cpu_threads_idle()) {
return;
}
@@ -420,11 +415,6 @@ void qemu_clock_warp(QEMUClockType type)
clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT);
deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL);
if (deadline < 0) {
static bool notified;
if (!icount_sleep && !notified) {
error_report("WARNING: icount sleep disabled and no active timers");
notified = true;
}
return;
}
@@ -435,35 +425,23 @@ void qemu_clock_warp(QEMUClockType type)
* interrupt to wake it up, but the interrupt never comes because
* the vCPU isn't running any insns and thus doesn't advance the
* QEMU_CLOCK_VIRTUAL.
*
* An extreme solution for this problem would be to never let VCPUs
* sleep in icount mode if there is a pending QEMU_CLOCK_VIRTUAL
* timer; rather time could just advance to the next QEMU_CLOCK_VIRTUAL
* event. Instead, we do stop VCPUs and only advance QEMU_CLOCK_VIRTUAL
* after some "real" time, (related to the time left until the next
* event) has passed. The QEMU_CLOCK_VIRTUAL_RT clock will do this.
* This avoids that the warps are visible externally; for example,
* you will not be sending network packets continuously instead of
* every 100ms.
*/
if (!icount_sleep) {
/*
* We never let VCPUs sleep in no sleep icount mode.
* If there is a pending QEMU_CLOCK_VIRTUAL timer we just advance
* to the next QEMU_CLOCK_VIRTUAL event and notify it.
* It is useful when we want a deterministic execution time,
* isolated from host latencies.
*/
seqlock_write_lock(&timers_state.vm_clock_seqlock);
timers_state.qemu_icount_bias += deadline;
seqlock_write_unlock(&timers_state.vm_clock_seqlock);
qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
} else {
/*
* We do stop VCPUs and only advance QEMU_CLOCK_VIRTUAL after some
* "real" time, (related to the time left until the next event) has
* passed. The QEMU_CLOCK_VIRTUAL_RT clock will do this.
* This avoids that the warps are visible externally; for example,
* you will not be sending network packets continuously instead of
* every 100ms.
*/
seqlock_write_lock(&timers_state.vm_clock_seqlock);
if (vm_clock_warp_start == -1 || vm_clock_warp_start > clock) {
vm_clock_warp_start = clock;
}
seqlock_write_unlock(&timers_state.vm_clock_seqlock);
timer_mod_anticipate(icount_warp_timer, clock + deadline);
seqlock_write_lock(&timers_state.vm_clock_seqlock);
if (vm_clock_warp_start == -1 || vm_clock_warp_start > clock) {
vm_clock_warp_start = clock;
}
seqlock_write_unlock(&timers_state.vm_clock_seqlock);
timer_mod_anticipate(icount_warp_timer, clock + deadline);
} else if (deadline == 0) {
qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
}
@@ -481,7 +459,6 @@ static const VMStateDescription icount_vmstate_timers = {
.name = "timer/icount",
.version_id = 1,
.minimum_version_id = 1,
.needed = icount_state_needed,
.fields = (VMStateField[]) {
VMSTATE_INT64(qemu_icount_bias, TimersState),
VMSTATE_INT64(qemu_icount, TimersState),
@@ -499,9 +476,13 @@ static const VMStateDescription vmstate_timers = {
VMSTATE_INT64_V(cpu_clock_offset, TimersState, 2),
VMSTATE_END_OF_LIST()
},
.subsections = (const VMStateDescription*[]) {
&icount_vmstate_timers,
NULL
.subsections = (VMStateSubsection[]) {
{
.vmsd = &icount_vmstate_timers,
.needed = icount_state_needed,
}, {
/* empty */
}
}
};
@@ -523,18 +504,9 @@ void configure_icount(QemuOpts *opts, Error **errp)
}
return;
}
icount_sleep = qemu_opt_get_bool(opts, "sleep", true);
if (icount_sleep) {
icount_warp_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL_RT,
icount_warp_rt, NULL);
}
icount_align_option = qemu_opt_get_bool(opts, "align", false);
if (icount_align_option && !icount_sleep) {
error_setg(errp, "align=on and sleep=no are incompatible");
}
icount_warp_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL_RT,
icount_warp_rt, NULL);
if (strcmp(option, "auto") != 0) {
errno = 0;
icount_time_shift = strtol(option, &rem_str, 0);
@@ -545,8 +517,6 @@ void configure_icount(QemuOpts *opts, Error **errp)
return;
} else if (icount_align_option) {
error_setg(errp, "shift=auto and align=on are incompatible");
} else if (!icount_sleep) {
error_setg(errp, "shift=auto and sleep=no are incompatible");
}
use_icount = 2;
@@ -954,9 +924,7 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
CPUState *cpu = arg;
int r;
rcu_register_thread();
qemu_mutex_lock_iothread();
qemu_mutex_lock(&qemu_global_mutex);
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->can_do_io = 1;
@@ -997,8 +965,6 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
sigset_t waitset;
int r;
rcu_register_thread();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
@@ -1038,12 +1004,10 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
{
CPUState *cpu = arg;
rcu_register_thread();
qemu_mutex_lock_iothread();
qemu_tcg_init_cpu_signals();
qemu_thread_get_self(cpu->thread);
qemu_mutex_lock(&qemu_global_mutex);
CPU_FOREACH(cpu) {
cpu->thread_id = qemu_get_thread_id();
cpu->created = true;
@@ -1052,7 +1016,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
qemu_cond_signal(&qemu_cpu_cond);
/* wait for initial kick-off after machine start */
while (first_cpu->stopped) {
while (QTAILQ_FIRST(&cpus)->stopped) {
qemu_cond_wait(tcg_halt_cond, &qemu_global_mutex);
/* process any pending work */
@@ -1152,21 +1116,10 @@ bool qemu_in_vcpu_thread(void)
return current_cpu && qemu_cpu_is_self(current_cpu);
}
static __thread bool iothread_locked = false;
bool qemu_mutex_iothread_locked(void)
{
return iothread_locked;
}
void qemu_mutex_lock_iothread(void)
{
atomic_inc(&iothread_requesting_mutex);
/* In the simple case there is no need to bump the VCPU thread out of
* TCG code execution.
*/
if (!tcg_enabled() || qemu_in_vcpu_thread() ||
!first_cpu || !first_cpu->thread) {
if (!tcg_enabled() || !first_cpu || !first_cpu->thread) {
qemu_mutex_lock(&qemu_global_mutex);
atomic_dec(&iothread_requesting_mutex);
} else {
@@ -1177,12 +1130,10 @@ void qemu_mutex_lock_iothread(void)
atomic_dec(&iothread_requesting_mutex);
qemu_cond_broadcast(&qemu_io_proceeded_cond);
}
iothread_locked = true;
}
void qemu_mutex_unlock_iothread(void)
{
iothread_locked = false;
qemu_mutex_unlock(&qemu_global_mutex);
}
@@ -1363,8 +1314,9 @@ int vm_stop_force_state(RunState state)
}
}
static int tcg_cpu_exec(CPUState *cpu)
static int tcg_cpu_exec(CPUArchState *env)
{
CPUState *cpu = ENV_GET_CPU(env);
int ret;
#ifdef CONFIG_PROFILER
int64_t ti;
@@ -1399,7 +1351,7 @@ static int tcg_cpu_exec(CPUState *cpu)
cpu->icount_decr.u16.low = decr;
cpu->icount_extra = count;
}
ret = cpu_exec(cpu);
ret = cpu_exec(env);
#ifdef CONFIG_PROFILER
tcg_time += profile_getclock() - ti;
#endif
@@ -1426,12 +1378,13 @@ static void tcg_exec_all(void)
}
for (; next_cpu != NULL && !exit_request; next_cpu = CPU_NEXT(next_cpu)) {
CPUState *cpu = next_cpu;
CPUArchState *env = cpu->env_ptr;
qemu_clock_enable(QEMU_CLOCK_VIRTUAL,
(cpu->singlestep_enabled & SSTEP_NOTIMER) == 0);
if (cpu_can_run(cpu)) {
r = tcg_cpu_exec(cpu);
r = tcg_cpu_exec(env);
if (r == EXCP_DEBUG) {
cpu_handle_guest_debug(cpu);
break;
@@ -1482,7 +1435,6 @@ CpuInfoList *qmp_query_cpus(Error **errp)
info->value->CPU = cpu->cpu_index;
info->value->current = (cpu == first_cpu);
info->value->halted = cpu->halted;
info->value->qom_path = object_get_canonical_path(OBJECT(cpu));
info->value->thread_id = cpu->thread_id;
#if defined(TARGET_I386)
info->value->has_pc = true;
@@ -1530,8 +1482,8 @@ void qmp_memsave(int64_t addr, int64_t size, const char *filename,
cpu = qemu_get_cpu(cpu_index);
if (cpu == NULL) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "cpu-index",
"a CPU number");
error_set(errp, QERR_INVALID_PARAMETER_VALUE, "cpu-index",
"a CPU number");
return;
}
@@ -1551,7 +1503,7 @@ void qmp_memsave(int64_t addr, int64_t size, const char *filename,
goto exit;
}
if (fwrite(buf, 1, l, f) != l) {
error_setg(errp, QERR_IO_ERROR);
error_set(errp, QERR_IO_ERROR);
goto exit;
}
addr += l;
@@ -1581,7 +1533,7 @@ void qmp_pmemsave(int64_t addr, int64_t size, const char *filename,
l = size;
cpu_physical_memory_read(addr, buf, l);
if (fwrite(buf, 1, l, f) != l) {
error_setg(errp, QERR_IO_ERROR);
error_set(errp, QERR_IO_ERROR);
goto exit;
}
addr += l;


@@ -125,13 +125,14 @@ void tlb_flush_page(CPUState *cpu, target_ulong addr)
can be detected */
void tlb_protect_code(ram_addr_t ram_addr)
{
cpu_physical_memory_test_and_clear_dirty(ram_addr, TARGET_PAGE_SIZE,
DIRTY_MEMORY_CODE);
cpu_physical_memory_reset_dirty(ram_addr, TARGET_PAGE_SIZE,
DIRTY_MEMORY_CODE);
}
/* update the TLB so that writes in physical page 'phys_addr' are no longer
tested for self modifying code */
void tlb_unprotect_code(ram_addr_t ram_addr)
void tlb_unprotect_code_phys(CPUState *cpu, ram_addr_t ram_addr,
target_ulong vaddr)
{
cpu_physical_memory_set_dirty_flag(ram_addr, DIRTY_MEMORY_CODE);
}
@@ -248,9 +249,9 @@ static void tlb_add_large_page(CPUArchState *env, target_ulong vaddr,
* Called from TCG-generated code, which is under an RCU read-side
* critical section.
*/
void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
hwaddr paddr, MemTxAttrs attrs, int prot,
int mmu_idx, target_ulong size)
void tlb_set_page(CPUState *cpu, target_ulong vaddr,
hwaddr paddr, int prot,
int mmu_idx, target_ulong size)
{
CPUArchState *env = cpu->env_ptr;
MemoryRegionSection *section;
@@ -300,8 +301,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
env->iotlb_v[mmu_idx][vidx] = env->iotlb[mmu_idx][index];
/* refill the tlb */
env->iotlb[mmu_idx][index].addr = iotlb - vaddr;
env->iotlb[mmu_idx][index].attrs = attrs;
env->iotlb[mmu_idx][index] = iotlb - vaddr;
te->addend = addend - vaddr;
if (prot & PAGE_READ) {
te->addr_read = address;
@@ -331,17 +331,6 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
}
}
/* Add a new TLB entry, but without specifying the memory
* transaction attributes to be used.
*/
void tlb_set_page(CPUState *cpu, target_ulong vaddr,
hwaddr paddr, int prot,
int mmu_idx, target_ulong size)
{
tlb_set_page_with_attrs(cpu, vaddr, paddr, MEMTXATTRS_UNSPECIFIED,
prot, mmu_idx, size);
}
/* NOTE: this function can trigger an exception */
/* NOTE2: the returned address is not exactly the physical address: it
* is actually a ram_addr_t (in system mode; the user mode emulation
@@ -360,7 +349,7 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env1, target_ulong addr)
(addr & TARGET_PAGE_MASK))) {
cpu_ldub_code(env1, addr);
}
pd = env1->iotlb[mmu_idx][page_index].addr & ~TARGET_PAGE_MASK;
pd = env1->iotlb[mmu_idx][page_index] & ~TARGET_PAGE_MASK;
mr = iotlb_to_region(cpu, pd);
if (memory_region_is_unassigned(mr)) {
CPUClass *cc = CPU_GET_CLASS(cpu);


@@ -1,5 +0,0 @@
util-obj-y += init.o
util-obj-y += hash.o
util-obj-y += aes.o
util-obj-y += desrfb.o
util-obj-y += cipher.o


@@ -1,400 +0,0 @@
/*
* QEMU Crypto cipher built-in algorithms
*
* Copyright (c) 2015 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "crypto/aes.h"
#include "crypto/desrfb.h"
typedef struct QCryptoCipherBuiltinAES QCryptoCipherBuiltinAES;
struct QCryptoCipherBuiltinAES {
AES_KEY encrypt_key;
AES_KEY decrypt_key;
uint8_t *iv;
size_t niv;
};
typedef struct QCryptoCipherBuiltinDESRFB QCryptoCipherBuiltinDESRFB;
struct QCryptoCipherBuiltinDESRFB {
uint8_t *key;
size_t nkey;
};
typedef struct QCryptoCipherBuiltin QCryptoCipherBuiltin;
struct QCryptoCipherBuiltin {
union {
QCryptoCipherBuiltinAES aes;
QCryptoCipherBuiltinDESRFB desrfb;
} state;
void (*free)(QCryptoCipher *cipher);
int (*setiv)(QCryptoCipher *cipher,
const uint8_t *iv, size_t niv,
Error **errp);
int (*encrypt)(QCryptoCipher *cipher,
const void *in,
void *out,
size_t len,
Error **errp);
int (*decrypt)(QCryptoCipher *cipher,
const void *in,
void *out,
size_t len,
Error **errp);
};
static void qcrypto_cipher_free_aes(QCryptoCipher *cipher)
{
QCryptoCipherBuiltin *ctxt = cipher->opaque;
g_free(ctxt->state.aes.iv);
g_free(ctxt);
cipher->opaque = NULL;
}
static int qcrypto_cipher_encrypt_aes(QCryptoCipher *cipher,
const void *in,
void *out,
size_t len,
Error **errp)
{
QCryptoCipherBuiltin *ctxt = cipher->opaque;
if (cipher->mode == QCRYPTO_CIPHER_MODE_ECB) {
const uint8_t *inptr = in;
uint8_t *outptr = out;
while (len) {
if (len > AES_BLOCK_SIZE) {
AES_encrypt(inptr, outptr, &ctxt->state.aes.encrypt_key);
inptr += AES_BLOCK_SIZE;
outptr += AES_BLOCK_SIZE;
len -= AES_BLOCK_SIZE;
} else {
uint8_t tmp1[AES_BLOCK_SIZE], tmp2[AES_BLOCK_SIZE];
memcpy(tmp1, inptr, len);
/* Fill with 0 to avoid valgrind uninitialized reads */
memset(tmp1 + len, 0, sizeof(tmp1) - len);
AES_encrypt(tmp1, tmp2, &ctxt->state.aes.encrypt_key);
memcpy(outptr, tmp2, len);
len = 0;
}
}
} else {
AES_cbc_encrypt(in, out, len,
&ctxt->state.aes.encrypt_key,
ctxt->state.aes.iv, 1);
}
return 0;
}
static int qcrypto_cipher_decrypt_aes(QCryptoCipher *cipher,
const void *in,
void *out,
size_t len,
Error **errp)
{
QCryptoCipherBuiltin *ctxt = cipher->opaque;
if (cipher->mode == QCRYPTO_CIPHER_MODE_ECB) {
const uint8_t *inptr = in;
uint8_t *outptr = out;
while (len) {
if (len > AES_BLOCK_SIZE) {
AES_decrypt(inptr, outptr, &ctxt->state.aes.decrypt_key);
inptr += AES_BLOCK_SIZE;
outptr += AES_BLOCK_SIZE;
len -= AES_BLOCK_SIZE;
} else {
uint8_t tmp1[AES_BLOCK_SIZE], tmp2[AES_BLOCK_SIZE];
memcpy(tmp1, inptr, len);
/* Fill with 0 to avoid valgrind uninitialized reads */
memset(tmp1 + len, 0, sizeof(tmp1) - len);
AES_decrypt(tmp1, tmp2, &ctxt->state.aes.decrypt_key);
memcpy(outptr, tmp2, len);
len = 0;
}
}
} else {
AES_cbc_encrypt(in, out, len,
&ctxt->state.aes.decrypt_key,
ctxt->state.aes.iv, 0);
}
return 0;
}
static int qcrypto_cipher_setiv_aes(QCryptoCipher *cipher,
const uint8_t *iv, size_t niv,
Error **errp)
{
QCryptoCipherBuiltin *ctxt = cipher->opaque;
if (niv != 16) {
error_setg(errp, "IV must be 16 bytes not %zu", niv);
return -1;
}
g_free(ctxt->state.aes.iv);
ctxt->state.aes.iv = g_new0(uint8_t, niv);
memcpy(ctxt->state.aes.iv, iv, niv);
ctxt->state.aes.niv = niv;
return 0;
}
static int qcrypto_cipher_init_aes(QCryptoCipher *cipher,
const uint8_t *key, size_t nkey,
Error **errp)
{
QCryptoCipherBuiltin *ctxt;
if (cipher->mode != QCRYPTO_CIPHER_MODE_CBC &&
cipher->mode != QCRYPTO_CIPHER_MODE_ECB) {
error_setg(errp, "Unsupported cipher mode %d", cipher->mode);
return -1;
}
ctxt = g_new0(QCryptoCipherBuiltin, 1);
if (AES_set_encrypt_key(key, nkey * 8, &ctxt->state.aes.encrypt_key) != 0) {
error_setg(errp, "Failed to set encryption key");
goto error;
}
if (AES_set_decrypt_key(key, nkey * 8, &ctxt->state.aes.decrypt_key) != 0) {
error_setg(errp, "Failed to set decryption key");
goto error;
}
ctxt->free = qcrypto_cipher_free_aes;
ctxt->setiv = qcrypto_cipher_setiv_aes;
ctxt->encrypt = qcrypto_cipher_encrypt_aes;
ctxt->decrypt = qcrypto_cipher_decrypt_aes;
cipher->opaque = ctxt;
return 0;
error:
g_free(ctxt);
return -1;
}
static void qcrypto_cipher_free_des_rfb(QCryptoCipher *cipher)
{
QCryptoCipherBuiltin *ctxt = cipher->opaque;
g_free(ctxt->state.desrfb.key);
g_free(ctxt);
cipher->opaque = NULL;
}
static int qcrypto_cipher_encrypt_des_rfb(QCryptoCipher *cipher,
const void *in,
void *out,
size_t len,
Error **errp)
{
QCryptoCipherBuiltin *ctxt = cipher->opaque;
size_t i;
if (len % 8) {
error_setg(errp, "Buffer size must be multiple of 8 not %zu",
len);
return -1;
}
deskey(ctxt->state.desrfb.key, EN0);
for (i = 0; i < len; i += 8) {
des((void *)in + i, out + i);
}
return 0;
}
static int qcrypto_cipher_decrypt_des_rfb(QCryptoCipher *cipher,
const void *in,
void *out,
size_t len,
Error **errp)
{
QCryptoCipherBuiltin *ctxt = cipher->opaque;
size_t i;
if (len % 8) {
error_setg(errp, "Buffer size must be multiple of 8 not %zu",
len);
return -1;
}
deskey(ctxt->state.desrfb.key, DE1);
for (i = 0; i < len; i += 8) {
des((void *)in + i, out + i);
}
return 0;
}
static int qcrypto_cipher_setiv_des_rfb(QCryptoCipher *cipher,
const uint8_t *iv, size_t niv,
Error **errp)
{
error_setg(errp, "Setting IV is not supported");
return -1;
}
static int qcrypto_cipher_init_des_rfb(QCryptoCipher *cipher,
const uint8_t *key, size_t nkey,
Error **errp)
{
QCryptoCipherBuiltin *ctxt;
if (cipher->mode != QCRYPTO_CIPHER_MODE_ECB) {
error_setg(errp, "Unsupported cipher mode %d", cipher->mode);
return -1;
}
ctxt = g_new0(QCryptoCipherBuiltin, 1);
ctxt->state.desrfb.key = g_new0(uint8_t, nkey);
memcpy(ctxt->state.desrfb.key, key, nkey);
ctxt->state.desrfb.nkey = nkey;
ctxt->free = qcrypto_cipher_free_des_rfb;
ctxt->setiv = qcrypto_cipher_setiv_des_rfb;
ctxt->encrypt = qcrypto_cipher_encrypt_des_rfb;
ctxt->decrypt = qcrypto_cipher_decrypt_des_rfb;
cipher->opaque = ctxt;
return 0;
}
bool qcrypto_cipher_supports(QCryptoCipherAlgorithm alg)
{
switch (alg) {
case QCRYPTO_CIPHER_ALG_DES_RFB:
case QCRYPTO_CIPHER_ALG_AES_128:
case QCRYPTO_CIPHER_ALG_AES_192:
case QCRYPTO_CIPHER_ALG_AES_256:
return true;
default:
return false;
}
}
QCryptoCipher *qcrypto_cipher_new(QCryptoCipherAlgorithm alg,
QCryptoCipherMode mode,
const uint8_t *key, size_t nkey,
Error **errp)
{
QCryptoCipher *cipher;
cipher = g_new0(QCryptoCipher, 1);
cipher->alg = alg;
cipher->mode = mode;
if (!qcrypto_cipher_validate_key_length(alg, nkey, errp)) {
goto error;
}
switch (cipher->alg) {
case QCRYPTO_CIPHER_ALG_DES_RFB:
if (qcrypto_cipher_init_des_rfb(cipher, key, nkey, errp) < 0) {
goto error;
}
break;
case QCRYPTO_CIPHER_ALG_AES_128:
case QCRYPTO_CIPHER_ALG_AES_192:
case QCRYPTO_CIPHER_ALG_AES_256:
if (qcrypto_cipher_init_aes(cipher, key, nkey, errp) < 0) {
goto error;
}
break;
default:
error_setg(errp,
"Unsupported cipher algorithm %d", cipher->alg);
goto error;
}
return cipher;
error:
g_free(cipher);
return NULL;
}
void qcrypto_cipher_free(QCryptoCipher *cipher)
{
QCryptoCipherBuiltin *ctxt;
if (!cipher) {
return;
}
ctxt = cipher->opaque;
ctxt->free(cipher);
g_free(cipher);
}
int qcrypto_cipher_encrypt(QCryptoCipher *cipher,
const void *in,
void *out,
size_t len,
Error **errp)
{
QCryptoCipherBuiltin *ctxt = cipher->opaque;
return ctxt->encrypt(cipher, in, out, len, errp);
}
int qcrypto_cipher_decrypt(QCryptoCipher *cipher,
const void *in,
void *out,
size_t len,
Error **errp)
{
QCryptoCipherBuiltin *ctxt = cipher->opaque;
return ctxt->decrypt(cipher, in, out, len, errp);
}
int qcrypto_cipher_setiv(QCryptoCipher *cipher,
const uint8_t *iv, size_t niv,
Error **errp)
{
QCryptoCipherBuiltin *ctxt = cipher->opaque;
return ctxt->setiv(cipher, iv, niv, errp);
}


@@ -1,195 +0,0 @@
/*
* QEMU Crypto cipher libgcrypt algorithms
*
* Copyright (c) 2015 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include <gcrypt.h>
bool qcrypto_cipher_supports(QCryptoCipherAlgorithm alg)
{
switch (alg) {
case QCRYPTO_CIPHER_ALG_DES_RFB:
case QCRYPTO_CIPHER_ALG_AES_128:
case QCRYPTO_CIPHER_ALG_AES_192:
case QCRYPTO_CIPHER_ALG_AES_256:
return true;
default:
return false;
}
}
QCryptoCipher *qcrypto_cipher_new(QCryptoCipherAlgorithm alg,
QCryptoCipherMode mode,
const uint8_t *key, size_t nkey,
Error **errp)
{
QCryptoCipher *cipher;
gcry_cipher_hd_t handle;
gcry_error_t err;
int gcryalg, gcrymode;
switch (mode) {
case QCRYPTO_CIPHER_MODE_ECB:
gcrymode = GCRY_CIPHER_MODE_ECB;
break;
case QCRYPTO_CIPHER_MODE_CBC:
gcrymode = GCRY_CIPHER_MODE_CBC;
break;
default:
error_setg(errp, "Unsupported cipher mode %d", mode);
return NULL;
}
if (!qcrypto_cipher_validate_key_length(alg, nkey, errp)) {
return NULL;
}
switch (alg) {
case QCRYPTO_CIPHER_ALG_DES_RFB:
gcryalg = GCRY_CIPHER_DES;
break;
case QCRYPTO_CIPHER_ALG_AES_128:
gcryalg = GCRY_CIPHER_AES128;
break;
case QCRYPTO_CIPHER_ALG_AES_192:
gcryalg = GCRY_CIPHER_AES192;
break;
case QCRYPTO_CIPHER_ALG_AES_256:
gcryalg = GCRY_CIPHER_AES256;
break;
default:
error_setg(errp, "Unsupported cipher algorithm %d", alg);
return NULL;
}
cipher = g_new0(QCryptoCipher, 1);
cipher->alg = alg;
cipher->mode = mode;
err = gcry_cipher_open(&handle, gcryalg, gcrymode, 0);
if (err != 0) {
error_setg(errp, "Cannot initialize cipher: %s",
gcry_strerror(err));
goto error;
}
if (cipher->alg == QCRYPTO_CIPHER_ALG_DES_RFB) {
/* We're using standard DES cipher from gcrypt, so we need
* to munge the key so that the results are the same as the
* bizarre RFB variant of DES :-)
*/
uint8_t *rfbkey = qcrypto_cipher_munge_des_rfb_key(key, nkey);
err = gcry_cipher_setkey(handle, rfbkey, nkey);
g_free(rfbkey);
} else {
err = gcry_cipher_setkey(handle, key, nkey);
}
if (err != 0) {
error_setg(errp, "Cannot set key: %s",
gcry_strerror(err));
goto error;
}
cipher->opaque = handle;
return cipher;
error:
gcry_cipher_close(handle);
g_free(cipher);
return NULL;
}
void qcrypto_cipher_free(QCryptoCipher *cipher)
{
gcry_cipher_hd_t handle;
if (!cipher) {
return;
}
handle = cipher->opaque;
gcry_cipher_close(handle);
g_free(cipher);
}
int qcrypto_cipher_encrypt(QCryptoCipher *cipher,
const void *in,
void *out,
size_t len,
Error **errp)
{
gcry_cipher_hd_t handle = cipher->opaque;
gcry_error_t err;
err = gcry_cipher_encrypt(handle,
out, len,
in, len);
if (err != 0) {
error_setg(errp, "Cannot encrypt data: %s",
gcry_strerror(err));
return -1;
}
return 0;
}
int qcrypto_cipher_decrypt(QCryptoCipher *cipher,
const void *in,
void *out,
size_t len,
Error **errp)
{
gcry_cipher_hd_t handle = cipher->opaque;
gcry_error_t err;
err = gcry_cipher_decrypt(handle,
out, len,
in, len);
if (err != 0) {
error_setg(errp, "Cannot decrypt data: %s",
gcry_strerror(err));
return -1;
}
return 0;
}
int qcrypto_cipher_setiv(QCryptoCipher *cipher,
const uint8_t *iv, size_t niv,
Error **errp)
{
gcry_cipher_hd_t handle = cipher->opaque;
gcry_error_t err;
gcry_cipher_reset(handle);
err = gcry_cipher_setiv(handle, iv, niv);
if (err != 0) {
error_setg(errp, "Cannot set IV: %s",
gcry_strerror(err));
return -1;
}
return 0;
}


@@ -1,245 +0,0 @@
/*
* QEMU Crypto cipher nettle algorithms
*
* Copyright (c) 2015 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include <nettle/nettle-types.h>
#include <nettle/aes.h>
#include <nettle/des.h>
#include <nettle/cbc.h>
#if CONFIG_NETTLE_VERSION_MAJOR < 3
typedef nettle_crypt_func nettle_cipher_func;
typedef void * cipher_ctx_t;
typedef unsigned cipher_length_t;
#else
typedef const void * cipher_ctx_t;
typedef size_t cipher_length_t;
#endif
static nettle_cipher_func aes_encrypt_wrapper;
static nettle_cipher_func aes_decrypt_wrapper;
static nettle_cipher_func des_encrypt_wrapper;
static nettle_cipher_func des_decrypt_wrapper;
static void aes_encrypt_wrapper(cipher_ctx_t ctx, cipher_length_t length,
uint8_t *dst, const uint8_t *src)
{
aes_encrypt(ctx, length, dst, src);
}
static void aes_decrypt_wrapper(cipher_ctx_t ctx, cipher_length_t length,
uint8_t *dst, const uint8_t *src)
{
aes_decrypt(ctx, length, dst, src);
}
static void des_encrypt_wrapper(cipher_ctx_t ctx, cipher_length_t length,
uint8_t *dst, const uint8_t *src)
{
des_encrypt(ctx, length, dst, src);
}
static void des_decrypt_wrapper(cipher_ctx_t ctx, cipher_length_t length,
uint8_t *dst, const uint8_t *src)
{
des_decrypt(ctx, length, dst, src);
}
typedef struct QCryptoCipherNettle QCryptoCipherNettle;
struct QCryptoCipherNettle {
void *ctx_encrypt;
void *ctx_decrypt;
nettle_cipher_func *alg_encrypt;
nettle_cipher_func *alg_decrypt;
uint8_t *iv;
size_t niv;
};
bool qcrypto_cipher_supports(QCryptoCipherAlgorithm alg)
{
switch (alg) {
case QCRYPTO_CIPHER_ALG_DES_RFB:
case QCRYPTO_CIPHER_ALG_AES_128:
case QCRYPTO_CIPHER_ALG_AES_192:
case QCRYPTO_CIPHER_ALG_AES_256:
return true;
default:
return false;
}
}
QCryptoCipher *qcrypto_cipher_new(QCryptoCipherAlgorithm alg,
QCryptoCipherMode mode,
const uint8_t *key, size_t nkey,
Error **errp)
{
QCryptoCipher *cipher;
QCryptoCipherNettle *ctx;
uint8_t *rfbkey;
switch (mode) {
case QCRYPTO_CIPHER_MODE_ECB:
case QCRYPTO_CIPHER_MODE_CBC:
break;
default:
error_setg(errp, "Unsupported cipher mode %d", mode);
return NULL;
}
if (!qcrypto_cipher_validate_key_length(alg, nkey, errp)) {
return NULL;
}
cipher = g_new0(QCryptoCipher, 1);
cipher->alg = alg;
cipher->mode = mode;
ctx = g_new0(QCryptoCipherNettle, 1);
switch (alg) {
case QCRYPTO_CIPHER_ALG_DES_RFB:
ctx->ctx_encrypt = g_new0(struct des_ctx, 1);
ctx->ctx_decrypt = NULL; /* 1 ctx can do both */
rfbkey = qcrypto_cipher_munge_des_rfb_key(key, nkey);
des_set_key(ctx->ctx_encrypt, rfbkey);
g_free(rfbkey);
ctx->alg_encrypt = des_encrypt_wrapper;
ctx->alg_decrypt = des_decrypt_wrapper;
ctx->niv = DES_BLOCK_SIZE;
break;
case QCRYPTO_CIPHER_ALG_AES_128:
case QCRYPTO_CIPHER_ALG_AES_192:
case QCRYPTO_CIPHER_ALG_AES_256:
ctx->ctx_encrypt = g_new0(struct aes_ctx, 1);
ctx->ctx_decrypt = g_new0(struct aes_ctx, 1);
aes_set_encrypt_key(ctx->ctx_encrypt, nkey, key);
aes_set_decrypt_key(ctx->ctx_decrypt, nkey, key);
ctx->alg_encrypt = aes_encrypt_wrapper;
ctx->alg_decrypt = aes_decrypt_wrapper;
ctx->niv = AES_BLOCK_SIZE;
break;
default:
error_setg(errp, "Unsupported cipher algorithm %d", alg);
goto error;
}
ctx->iv = g_new0(uint8_t, ctx->niv);
cipher->opaque = ctx;
return cipher;
error:
g_free(cipher);
g_free(ctx);
return NULL;
}
void qcrypto_cipher_free(QCryptoCipher *cipher)
{
QCryptoCipherNettle *ctx;
if (!cipher) {
return;
}
ctx = cipher->opaque;
g_free(ctx->iv);
g_free(ctx->ctx_encrypt);
g_free(ctx->ctx_decrypt);
g_free(ctx);
g_free(cipher);
}
int qcrypto_cipher_encrypt(QCryptoCipher *cipher,
const void *in,
void *out,
size_t len,
Error **errp)
{
QCryptoCipherNettle *ctx = cipher->opaque;
switch (cipher->mode) {
case QCRYPTO_CIPHER_MODE_ECB:
ctx->alg_encrypt(ctx->ctx_encrypt, len, out, in);
break;
case QCRYPTO_CIPHER_MODE_CBC:
cbc_encrypt(ctx->ctx_encrypt, ctx->alg_encrypt,
ctx->niv, ctx->iv,
len, out, in);
break;
default:
error_setg(errp, "Unsupported cipher algorithm %d",
cipher->alg);
return -1;
}
return 0;
}
int qcrypto_cipher_decrypt(QCryptoCipher *cipher,
const void *in,
void *out,
size_t len,
Error **errp)
{
QCryptoCipherNettle *ctx = cipher->opaque;
switch (cipher->mode) {
case QCRYPTO_CIPHER_MODE_ECB:
ctx->alg_decrypt(ctx->ctx_decrypt ? ctx->ctx_decrypt : ctx->ctx_encrypt,
len, out, in);
break;
case QCRYPTO_CIPHER_MODE_CBC:
cbc_decrypt(ctx->ctx_decrypt ? ctx->ctx_decrypt : ctx->ctx_encrypt,
ctx->alg_decrypt, ctx->niv, ctx->iv,
len, out, in);
break;
default:
error_setg(errp, "Unsupported cipher algorithm %d",
cipher->alg);
return -1;
}
return 0;
}
int qcrypto_cipher_setiv(QCryptoCipher *cipher,
const uint8_t *iv, size_t niv,
Error **errp)
{
QCryptoCipherNettle *ctx = cipher->opaque;
if (niv != ctx->niv) {
error_setg(errp, "Expected IV size %zu not %zu",
ctx->niv, niv);
return -1;
}
memcpy(ctx->iv, iv, niv);
return 0;
}


@@ -1,74 +0,0 @@
/*
* QEMU Crypto cipher algorithms
*
* Copyright (c) 2015 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "crypto/cipher.h"
static size_t alg_key_len[QCRYPTO_CIPHER_ALG_LAST] = {
[QCRYPTO_CIPHER_ALG_AES_128] = 16,
[QCRYPTO_CIPHER_ALG_AES_192] = 24,
[QCRYPTO_CIPHER_ALG_AES_256] = 32,
[QCRYPTO_CIPHER_ALG_DES_RFB] = 8,
};
static bool
qcrypto_cipher_validate_key_length(QCryptoCipherAlgorithm alg,
size_t nkey,
Error **errp)
{
if ((unsigned)alg >= QCRYPTO_CIPHER_ALG_LAST) {
error_setg(errp, "Cipher algorithm %d out of range",
alg);
return false;
}
if (alg_key_len[alg] != nkey) {
error_setg(errp, "Cipher key length %zu should be %zu",
alg_key_len[alg], nkey);
return false;
}
return true;
}
#if defined(CONFIG_GNUTLS_GCRYPT) || defined(CONFIG_GNUTLS_NETTLE)
static uint8_t *
qcrypto_cipher_munge_des_rfb_key(const uint8_t *key,
size_t nkey)
{
uint8_t *ret = g_new0(uint8_t, nkey);
size_t i;
for (i = 0; i < nkey; i++) {
uint8_t r = key[i];
r = (r & 0xf0) >> 4 | (r & 0x0f) << 4;
r = (r & 0xcc) >> 2 | (r & 0x33) << 2;
r = (r & 0xaa) >> 1 | (r & 0x55) << 1;
ret[i] = r;
}
return ret;
}
#endif /* CONFIG_GNUTLS_GCRYPT || CONFIG_GNUTLS_NETTLE */
#ifdef CONFIG_GNUTLS_GCRYPT
#include "crypto/cipher-gcrypt.c"
#elif defined CONFIG_GNUTLS_NETTLE
#include "crypto/cipher-nettle.c"
#else
#include "crypto/cipher-builtin.c"
#endif


@@ -1,200 +0,0 @@
/*
* QEMU Crypto hash algorithms
*
* Copyright (c) 2015 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "crypto/hash.h"
#ifdef CONFIG_GNUTLS_HASH
#include <gnutls/gnutls.h>
#include <gnutls/crypto.h>
static int qcrypto_hash_alg_map[QCRYPTO_HASH_ALG_LAST] = {
[QCRYPTO_HASH_ALG_MD5] = GNUTLS_DIG_MD5,
[QCRYPTO_HASH_ALG_SHA1] = GNUTLS_DIG_SHA1,
[QCRYPTO_HASH_ALG_SHA256] = GNUTLS_DIG_SHA256,
};
gboolean qcrypto_hash_supports(QCryptoHashAlgorithm alg)
{
if (alg < G_N_ELEMENTS(qcrypto_hash_alg_map)) {
return true;
}
return false;
}
int qcrypto_hash_bytesv(QCryptoHashAlgorithm alg,
const struct iovec *iov,
size_t niov,
uint8_t **result,
size_t *resultlen,
Error **errp)
{
int i, ret;
gnutls_hash_hd_t dig;
if (alg >= G_N_ELEMENTS(qcrypto_hash_alg_map)) {
error_setg(errp,
"Unknown hash algorithm %d",
alg);
return -1;
}
ret = gnutls_hash_init(&dig, qcrypto_hash_alg_map[alg]);
if (ret < 0) {
error_setg(errp,
"Unable to initialize hash algorithm: %s",
gnutls_strerror(ret));
return -1;
}
for (i = 0; i < niov; i++) {
ret = gnutls_hash(dig, iov[i].iov_base, iov[i].iov_len);
if (ret < 0) {
error_setg(errp,
"Unable process hash data: %s",
gnutls_strerror(ret));
goto error;
}
}
ret = gnutls_hash_get_len(qcrypto_hash_alg_map[alg]);
if (ret <= 0) {
error_setg(errp,
"Unable to get hash length: %s",
gnutls_strerror(ret));
goto error;
}
if (*resultlen == 0) {
*resultlen = ret;
*result = g_new0(uint8_t, *resultlen);
} else if (*resultlen != ret) {
error_setg(errp,
"Result buffer size %zu is smaller than hash %d",
*resultlen, ret);
goto error;
}
gnutls_hash_deinit(dig, *result);
return 0;
error:
gnutls_hash_deinit(dig, NULL);
return -1;
}
#else /* ! CONFIG_GNUTLS_HASH */
gboolean qcrypto_hash_supports(QCryptoHashAlgorithm alg G_GNUC_UNUSED)
{
return false;
}
int qcrypto_hash_bytesv(QCryptoHashAlgorithm alg,
const struct iovec *iov G_GNUC_UNUSED,
size_t niov G_GNUC_UNUSED,
uint8_t **result G_GNUC_UNUSED,
size_t *resultlen G_GNUC_UNUSED,
Error **errp)
{
error_setg(errp,
"Hash algorithm %d not supported without GNUTLS",
alg);
return -1;
}
#endif /* ! CONFIG_GNUTLS_HASH */
int qcrypto_hash_bytes(QCryptoHashAlgorithm alg,
const char *buf,
size_t len,
uint8_t **result,
size_t *resultlen,
Error **errp)
{
struct iovec iov = { .iov_base = (char *)buf,
.iov_len = len };
return qcrypto_hash_bytesv(alg, &iov, 1, result, resultlen, errp);
}
static const char hex[] = "0123456789abcdef";
int qcrypto_hash_digestv(QCryptoHashAlgorithm alg,
const struct iovec *iov,
size_t niov,
char **digest,
Error **errp)
{
uint8_t *result = NULL;
size_t resultlen = 0;
size_t i;
if (qcrypto_hash_bytesv(alg, iov, niov, &result, &resultlen, errp) < 0) {
return -1;
}
*digest = g_new0(char, (resultlen * 2) + 1);
for (i = 0 ; i < resultlen ; i++) {
(*digest)[(i * 2)] = hex[(result[i] >> 4) & 0xf];
(*digest)[(i * 2) + 1] = hex[result[i] & 0xf];
}
(*digest)[resultlen * 2] = '\0';
g_free(result);
return 0;
}
int qcrypto_hash_digest(QCryptoHashAlgorithm alg,
const char *buf,
size_t len,
char **digest,
Error **errp)
{
struct iovec iov = { .iov_base = (char *)buf, .iov_len = len };
return qcrypto_hash_digestv(alg, &iov, 1, digest, errp);
}
int qcrypto_hash_base64v(QCryptoHashAlgorithm alg,
const struct iovec *iov,
size_t niov,
char **base64,
Error **errp)
{
uint8_t *result = NULL;
size_t resultlen = 0;
if (qcrypto_hash_bytesv(alg, iov, niov, &result, &resultlen, errp) < 0) {
return -1;
}
*base64 = g_base64_encode(result, resultlen);
g_free(result);
return 0;
}
int qcrypto_hash_base64(QCryptoHashAlgorithm alg,
const char *buf,
size_t len,
char **base64,
Error **errp)
{
struct iovec iov = { .iov_base = (char *)buf, .iov_len = len };
return qcrypto_hash_base64v(alg, &iov, 1, base64, errp);
}


@@ -1,150 +0,0 @@
/*
* QEMU Crypto initialization
*
* Copyright (c) 2015 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "crypto/init.h"
#include "qemu/thread.h"
#ifdef CONFIG_GNUTLS
#include <gnutls/gnutls.h>
#include <gnutls/crypto.h>
#ifdef CONFIG_GNUTLS_GCRYPT
#include <gcrypt.h>
#endif
/* #define DEBUG_GNUTLS */
/*
* If GNUTLS is built against GCrypt then
*
* - When GNUTLS >= 2.12, we must not initialize gcrypt threading
* because GNUTLS will do that itself
* - When GNUTLS < 2.12 we must always initialize gcrypt threading
*
* But....
*
* When gcrypt >= 1.6.0 we must not initialize gcrypt threading
* because gcrypt will do that itself.
*
* So we need to init gcrypt threading if
*
* - gcrypt < 1.6.0
* AND
* - gnutls < 2.12
*
*/
#if (defined(CONFIG_GNUTLS_GCRYPT) && \
(!defined(GNUTLS_VERSION_NUMBER) || \
(GNUTLS_VERSION_NUMBER < 0x020c00)) && \
(!defined(GCRYPT_VERSION_NUMBER) || \
(GCRYPT_VERSION_NUMBER < 0x010600)))
#define QCRYPTO_INIT_GCRYPT_THREADS
#else
#undef QCRYPTO_INIT_GCRYPT_THREADS
#endif
#ifdef DEBUG_GNUTLS
static void qcrypto_gnutls_log(int level, const char *str)
{
fprintf(stderr, "%d: %s", level, str);
}
#endif
#ifdef QCRYPTO_INIT_GCRYPT_THREADS
static int qcrypto_gcrypt_mutex_init(void **priv)
{ \
QemuMutex *lock = NULL;
lock = g_new0(QemuMutex, 1);
qemu_mutex_init(lock);
*priv = lock;
return 0;
}
static int qcrypto_gcrypt_mutex_destroy(void **priv)
{
QemuMutex *lock = *priv;
qemu_mutex_destroy(lock);
g_free(lock);
return 0;
}
static int qcrypto_gcrypt_mutex_lock(void **priv)
{
QemuMutex *lock = *priv;
qemu_mutex_lock(lock);
return 0;
}
static int qcrypto_gcrypt_mutex_unlock(void **priv)
{
QemuMutex *lock = *priv;
qemu_mutex_unlock(lock);
return 0;
}
static struct gcry_thread_cbs qcrypto_gcrypt_thread_impl = {
(GCRY_THREAD_OPTION_PTHREAD | (GCRY_THREAD_OPTION_VERSION << 8)),
NULL,
qcrypto_gcrypt_mutex_init,
qcrypto_gcrypt_mutex_destroy,
qcrypto_gcrypt_mutex_lock,
qcrypto_gcrypt_mutex_unlock,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL
};
#endif /* QCRYPTO_INIT_GCRYPT */
int qcrypto_init(Error **errp)
{
int ret;
ret = gnutls_global_init();
if (ret < 0) {
error_setg(errp,
"Unable to initialize GNUTLS library: %s",
gnutls_strerror(ret));
return -1;
}
#ifdef DEBUG_GNUTLS
gnutls_global_set_log_level(10);
gnutls_global_set_log_function(qcrypto_gnutls_log);
#endif
#ifdef CONFIG_GNUTLS_GCRYPT
if (!gcry_check_version(GCRYPT_VERSION)) {
error_setg(errp, "Unable to initialize gcrypt");
return -1;
}
#ifdef QCRYPTO_INIT_GCRYPT_THREADS
gcry_control(GCRYCTL_SET_THREAD_CBS, &qcrypto_gcrypt_thread_impl);
#endif /* QCRYPTO_INIT_GCRYPT_THREADS */
gcry_control(GCRYCTL_INITIALIZATION_FINISHED, 0);
#endif
return 0;
}
#else /* ! CONFIG_GNUTLS */
int qcrypto_init(Error **errp G_GNUC_UNUSED)
{
return 0;
}
#endif /* ! CONFIG_GNUTLS */


@@ -3,4 +3,4 @@
# We support all the 32 bit boards so need all their config
include arm-softmmu.mak
CONFIG_XLNX_ZYNQMP=y
# Currently no 64-bit specific config requirements


@@ -101,4 +101,3 @@ CONFIG_ALLWINNER_A10=y
CONFIG_XIO3130=y
CONFIG_IOH3420=y
CONFIG_I82801B11=y
CONFIG_ACPI=y


@@ -15,10 +15,6 @@ CONFIG_PCSPK=y
CONFIG_PCKBD=y
CONFIG_FDC=y
CONFIG_ACPI=y
CONFIG_ACPI_X86=y
CONFIG_ACPI_X86_ICH=y
CONFIG_ACPI_MEMORY_HOTPLUG=y
CONFIG_ACPI_CPU_HOTPLUG=y
CONFIG_APM=y
CONFIG_I8257=y
CONFIG_IDE_ISA=y


@@ -1,3 +1,11 @@
# Default configuration for microblazeel-softmmu
include microblaze-softmmu.mak
CONFIG_PTIMER=y
CONFIG_PFLASH_CFI01=y
CONFIG_SERIAL=y
CONFIG_XILINX=y
CONFIG_XILINX_AXI=y
CONFIG_XILINX_SPI=y
CONFIG_XILINX_ETHLITE=y
CONFIG_SSI=y
CONFIG_SSI_M25P80=y


@@ -15,18 +15,20 @@ CONFIG_PCSPK=y
CONFIG_PCKBD=y
CONFIG_FDC=y
CONFIG_ACPI=y
CONFIG_ACPI_X86=y
CONFIG_ACPI_MEMORY_HOTPLUG=y
CONFIG_ACPI_CPU_HOTPLUG=y
CONFIG_APM=y
CONFIG_I8257=y
CONFIG_PIIX4=y
CONFIG_IDE_ISA=y
CONFIG_IDE_PIIX=y
CONFIG_NE2000_ISA=y
CONFIG_RC4030=y
CONFIG_DP8393X=y
CONFIG_DS1225Y=y
CONFIG_MIPSNET=y
CONFIG_PFLASH_CFI01=y
CONFIG_G364FB=y
CONFIG_I8259=y
CONFIG_JAZZ_LED=y
CONFIG_MC146818RTC=y
CONFIG_ISA_TESTDEV=y
CONFIG_EMPTY_SLOT=y


@@ -15,9 +15,6 @@ CONFIG_PCSPK=y
CONFIG_PCKBD=y
CONFIG_FDC=y
CONFIG_ACPI=y
CONFIG_ACPI_X86=y
CONFIG_ACPI_MEMORY_HOTPLUG=y
CONFIG_ACPI_CPU_HOTPLUG=y
CONFIG_APM=y
CONFIG_I8257=y
CONFIG_PIIX4=y
@@ -29,7 +26,6 @@ CONFIG_DP8393X=y
CONFIG_DS1225Y=y
CONFIG_MIPSNET=y
CONFIG_PFLASH_CFI01=y
CONFIG_JAZZ=y
CONFIG_G364FB=y
CONFIG_I8259=y
CONFIG_JAZZ_LED=y


@@ -15,9 +15,6 @@ CONFIG_PCSPK=y
CONFIG_PCKBD=y
CONFIG_FDC=y
CONFIG_ACPI=y
CONFIG_ACPI_X86=y
CONFIG_ACPI_MEMORY_HOTPLUG=y
CONFIG_ACPI_CPU_HOTPLUG=y
CONFIG_APM=y
CONFIG_I8257=y
CONFIG_PIIX4=y
@@ -31,7 +28,6 @@ CONFIG_DS1225Y=y
CONFIG_MIPSNET=y
CONFIG_PFLASH_CFI01=y
CONFIG_FULONG=y
CONFIG_JAZZ=y
CONFIG_G364FB=y
CONFIG_I8259=y
CONFIG_JAZZ_LED=y


@@ -15,18 +15,20 @@ CONFIG_PCSPK=y
CONFIG_PCKBD=y
CONFIG_FDC=y
CONFIG_ACPI=y
CONFIG_ACPI_X86=y
CONFIG_ACPI_MEMORY_HOTPLUG=y
CONFIG_ACPI_CPU_HOTPLUG=y
CONFIG_APM=y
CONFIG_I8257=y
CONFIG_PIIX4=y
CONFIG_IDE_ISA=y
CONFIG_IDE_PIIX=y
CONFIG_NE2000_ISA=y
CONFIG_RC4030=y
CONFIG_DP8393X=y
CONFIG_DS1225Y=y
CONFIG_MIPSNET=y
CONFIG_PFLASH_CFI01=y
CONFIG_G364FB=y
CONFIG_I8259=y
CONFIG_JAZZ_LED=y
CONFIG_MC146818RTC=y
CONFIG_ISA_TESTDEV=y
CONFIG_EMPTY_SLOT=y


@@ -36,4 +36,3 @@ CONFIG_EDU=y
CONFIG_VGA=y
CONFIG_VGA_PCI=y
CONFIG_IVSHMEM=$(CONFIG_KVM)
CONFIG_ROCKER=y


@@ -4,4 +4,3 @@ CONFIG_VIRTIO=y
CONFIG_SCLPCONSOLE=y
CONFIG_S390_FLIC=y
CONFIG_S390_FLIC_KVM=$(CONFIG_KVM)
CONFIG_WDT_DIAG288=y


@@ -7,7 +7,6 @@ CONFIG_QXL=$(CONFIG_SPICE)
CONFIG_VGA_ISA=y
CONFIG_VGA_CIRRUS=y
CONFIG_VMWARE_VGA=y
CONFIG_VIRTIO_VGA=y
CONFIG_VMMOUSE=y
CONFIG_SERIAL=y
CONFIG_PARALLEL=y
@@ -16,10 +15,6 @@ CONFIG_PCSPK=y
CONFIG_PCKBD=y
CONFIG_FDC=y
CONFIG_ACPI=y
CONFIG_ACPI_X86=y
CONFIG_ACPI_X86_ICH=y
CONFIG_ACPI_MEMORY_HOTPLUG=y
CONFIG_ACPI_CPU_HOTPLUG=y
CONFIG_APM=y
CONFIG_I8257=y
CONFIG_IDE_ISA=y


@@ -18,6 +18,7 @@
#include <unistd.h>
#include <stdlib.h>
#include "config.h"
#include "qemu-common.h"
#include "qemu/error-report.h"
#include "sysemu/device_tree.h"
@@ -241,7 +242,7 @@ uint32_t qemu_fdt_alloc_phandle(void *fdt)
/*
* We need to find out if the user gave us special instruction at
* which phandle id to start allocating phandles.
* which phandle id to start allocting phandles.
*/
if (!phandle) {
phandle = machine_phandle_start(current_machine);

Some files were not shown because too many files have changed in this diff Show More