Compare commits

..

70 Commits

Author SHA1 Message Date
Michael S. Tsirkin
f609ef91bc virtio: allow mapping up to max queue size
It's a loop from i < num_sg, and the array has VIRTQUEUE_MAX_SIZE
elements, so it's OK if the value read is VIRTQUEUE_MAX_SIZE.

Not a big problem in practice as people don't use
such big queues, but it's inelegant.

Reported-by: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 9372514080)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:27 +02:00
Michael S. Tsirkin
5d2ec830b4 virtio: validate config_len on load
Malformed input can make config_len in the migration stream exceed the
array size allocated on the destination; the result is a heap overflow.

To fix, check that config_len matches on both sides.

CVE-2014-0182
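
For illustration, a minimal sketch of such a load-side check (the exact
field names and error handling here are assumptions, not the literal patch):

    uint32_t config_len = qemu_get_be32(f);
    if (config_len != vdev->config_len) {
        /* sketch: refuse state whose config size differs from ours */
        error_report("Unsupported virtio config len: 0x%x", config_len);
        return -1;
    }
    qemu_get_buffer(f, vdev->config, vdev->config_len);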

Reported-by: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

--

v2: use %ix and %zx to print config_len values
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit a890a2f913)
[AF: BNC#874788]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:27 +02:00
Michael S. Tsirkin
2eae80d0ad virtio-net: out-of-bounds buffer write on load
CVE-2013-4149 QEMU 1.3.0 out-of-bounds buffer write in
virtio_net_load()@hw/net/virtio-net.c

>         } else if (n->mac_table.in_use) {
>             uint8_t *buf = g_malloc0(n->mac_table.in_use);

We are allocating a buffer of size n->mac_table.in_use

>             qemu_get_buffer(f, buf, n->mac_table.in_use * ETH_ALEN);

and then reading n->mac_table.in_use * ETH_ALEN bytes into that
n->mac_table.in_use-byte buffer, corrupting memory.

If an adversary controls the migration state, the memory written there
is controlled by the adversary.
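
A hedged sketch of one way to make the sizes agree, sizing the scratch
buffer to the amount actually read (names follow the snippet above):

        } else if (n->mac_table.in_use) {
            /* sketch: allocate what qemu_get_buffer() will actually fill */
            uint8_t *buf = g_malloc0(n->mac_table.in_use * ETH_ALEN);
            qemu_get_buffer(f, buf, n->mac_table.in_use * ETH_ALEN);
            g_free(buf);
        }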

Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 98f93ddd84)
[AF: BNC#864649]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:27 +02:00
Michael Roth
e70b977473 openpic: avoid buffer overrun on incoming migration
CVE-2013-4534

opp->nb_cpus is read from the wire and used to determine how many
IRQDest elements to read into opp->dst[]. If the value exceeds the
length of opp->dst[], MAX_CPU, opp->dst[] can be overrun with arbitrary
data from the wire.

Fix this by failing migration if the value read from the wire exceeds
MAX_CPU.
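
As a rough sketch, the load-side check could look like this (variable and
error handling details are assumptions):

    uint32_t nb_cpus = qemu_get_be32(f);
    if (nb_cpus > MAX_CPU) {
        /* sketch: state does not fit opp->dst[], fail the migration */
        return -EINVAL;
    }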

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 73d963c0a7)
[AF: BNC#864811]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:27 +02:00
Michael S. Tsirkin
9ec43fe486 ssi-sd: fix buffer overrun on invalid state load
CVE-2013-4537

s->arglen is taken from the wire and used as an index
in ssi_sd_transfer().

Validate it before access.
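
A hedged sketch of the kind of post-load validation meant here; the field
checked against (cmddata) is an assumption, not the device's real name:

    if (s->arglen < 0 || s->arglen > sizeof(s->cmddata)) {
        /* sketch: reject a wire-provided index before it is used */
        return -EINVAL;
    }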

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit a9c380db3b)
[AF: BNC#864391]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:27 +02:00
Peter Maydell
9ad9afb2ff savevm: Ignore minimum_version_id_old if there is no load_state_old
At the moment we require vmstate definitions to set minimum_version_id_old
to the same value as minimum_version_id if they do not provide a
load_state_old handler. Since the load_state_old functionality is
required only for a handful of devices that need to retain migration
compatibility with a pre-vmstate implementation, this means the bulk
of devices have pointless boilerplate. Relax the definition so that
minimum_version_id_old is ignored if there is no load_state_old handler.

Note that under the old scheme we would segfault if the vmstate
specified a minimum_version_id_old that was less than minimum_version_id
but did not provide a load_state_old function, and the incoming state
specified a version number between minimum_version_id_old and
minimum_version_id. Under the new scheme this will just result in
our failing the migration.
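
A paraphrased sketch of the relaxed check in vmstate_load_state() (not the
literal patch):

    if (version_id > vmsd->version_id) {
        return -EINVAL;
    }
    if (version_id < vmsd->minimum_version_id) {
        /* only honour minimum_version_id_old if a handler exists */
        if (vmsd->load_state_old &&
            version_id >= vmsd->minimum_version_id_old) {
            return vmsd->load_state_old(f, opaque, version_id);
        }
        return -EINVAL;
    }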

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 767adce2d9)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:27 +02:00
Michael S. Tsirkin
b94f504fbb usb: sanity check setup_index+setup_len in post_load
CVE-2013-4541

s->setup_len and s->setup_index are fed into usb_packet_copy as
size/offset into s->data_buf, it's possible for invalid state to exploit
this to load arbitrary data.

setup_len and setup_index should be checked to make sure
they are not negative.
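
A hedged sketch of the post_load check described above (bounds taken from
the description, not copied from the patch):

    if (s->setup_index < 0 || s->setup_len < 0 ||
        s->setup_index > sizeof(s->data_buf) ||
        s->setup_len > sizeof(s->data_buf) - s->setup_index) {
        return -EINVAL;   /* refuse state that would overrun data_buf */
    }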

Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 9f8e9895c5)
[AF: BNC#864802]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:26 +02:00
Michael S. Tsirkin
075764d38e vmstate: s/VMSTATE_INT32_LE/VMSTATE_INT32_POSITIVE_LE/
As the macro verifies the value is positive, rename it
to make its purpose clearer.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 3476436a44)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:26 +02:00
Michael S. Tsirkin
2f55ce6ce2 virtio-scsi: fix buffer overrun on invalid state load
CVE-2013-4542

hw/scsi/scsi-bus.c invokes load_request.

 virtio_scsi_load_request does:
    qemu_get_buffer(f, (unsigned char *)&req->elem, sizeof(req->elem));

this can make elem invalid, for example by making
in_num or out_num huge; then:

    virtio_scsi_parse_req(s, vs->cmd_vqs[n], req);

will do:

    if (req->elem.out_num > 1) {
        qemu_sgl_init_external(req, &req->elem.out_sg[1],
                               &req->elem.out_addr[1],
                               req->elem.out_num - 1);
    } else {
        qemu_sgl_init_external(req, &req->elem.in_sg[1],
                               &req->elem.in_addr[1],
                               req->elem.in_num - 1);
    }

and this will access out of array bounds.

Note: this adds security checks within assert calls since
SCSIBusInfo's load_request cannot fail.
For now simply disable builds with NDEBUG - there seems
to be little value in supporting these.

Cc: Andreas Färber <afaerber@suse.de>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 3c3ce98142)
[AF: BNC#864804]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:26 +02:00
Michael S. Tsirkin
a075e63d02 zaurus: fix buffer overrun on invalid state load
CVE-2013-4540

Within scoop_gpio_handler_update, if prev_level has a high bit set, then
we get bit > 16 and that causes a buffer overrun.

Since prev_level comes from wire indirectly, this can
happen on invalid state load.

Similarly for gpio_level and gpio_dir.

To fix, limit the values to 16 bits.
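
As an illustration, one way to bound them in post_load (the actual patch may
instead narrow the field types; this is only a sketch):

    s->prev_level &= 0xffff;
    s->gpio_level &= 0xffff;
    s->gpio_dir   &= 0xffff;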

Reported-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 52f91c3723)
[AF: BNC#864801]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:26 +02:00
Michael S. Tsirkin
9258d36c43 tsc210x: fix buffer overrun on invalid state load
CVE-2013-4539

s->precision, nextprecision, function and nextfunction
come from the wire and are used
as indices into resolution[] in TSC_CUT_RESOLUTION.

Validate them after load to avoid a buffer overrun.
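
A hedged sketch of such a post-load check, with resolution[] standing in for
the table mentioned above:

    if (s->precision >= ARRAY_SIZE(resolution) ||
        s->nextprecision >= ARRAY_SIZE(resolution)) {
        return -EINVAL;   /* sketch: index would overrun resolution[] */
    }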

Cc: Andreas Färber <afaerber@suse.de>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 5193be3be3)
[AF: BNC#864805]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:26 +02:00
Michael S. Tsirkin
fb4795c347 ssd0323: fix buffer overrun on invalid state load
CVE-2013-4538

s->cmd_len is used as an index in ssd0323_transfer() to store a 32-bit field.
This field might then be supplied by the guest to overwrite a
return address somewhere. The same goes for the row/col fields, which are
indices into the framebuffer array.

To fix, validate them after load.

Additionally, validate that the row/col_start/end are within bounds;
otherwise the guest can provoke an overrun by either setting the _end
field so large that the row++ increments just walk off the end of the
array, or by setting the _start value to something bogus and then
letting the "we hit end of row" logic reset row to row_start.

For completeness, validate mode as well.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit ead7a57df3)
[AF: BNC#864769]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:26 +02:00
Michael S. Tsirkin
f1cebceb57 pxa2xx: avoid buffer overrun on incoming migration
CVE-2013-4533

s->rx_level is read from the wire and used to determine how many bytes
to subsequently read into s->rx_fifo[]. If s->rx_level exceeds the
length of s->rx_fifo[] the buffer can be overrun with arbitrary data
from the wire.

Fix this by validating rx_level against the size of s->rx_fifo.
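
A minimal sketch of that validation (names follow the description above):

    if (s->rx_level > ARRAY_SIZE(s->rx_fifo)) {
        /* sketch: FIFO level larger than the FIFO itself, refuse it */
        return -EINVAL;
    }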

Cc: Don Koch <dkoch@verizon.com>
Reported-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Don Koch <dkoch@verizon.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit caa881abe0)
[AF: BNC#864655]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:26 +02:00
Michael S. Tsirkin
c5b839d16e virtio: validate num_sg when mapping
CVE-2013-4535
CVE-2013-4536

Both virtio-block and virtio-serial are affected:
VirtQueueElements are read in as buffers and passed to
virtqueue_map_sg(), where num_sg is taken from the wire and can force
writes to indices beyond VIRTQUEUE_MAX_SIZE.

To fix, validate num_sg.
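
A hedged sketch of the shape of that validation inside virtqueue_map_sg()
(the exact reporting is an assumption):

    if (num_sg > VIRTQUEUE_MAX_SIZE) {
        error_report("virtio: map attempt out of bounds: %u sg entries",
                     num_sg);
        exit(1);   /* sketch: no graceful failure path at this point */
    }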

Reported-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Cc: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 36cf2a3713)
[AF: BNC#864665]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:26 +02:00
Michael Roth
49af37a1df virtio: avoid buffer overrun on incoming migration
CVE-2013-6399

vdev->queue_sel is read from the wire, and later used in the
emulation code as an index into vdev->vq[]. If the value of
vdev->queue_sel exceeds the length of vdev->vq[], currently
allocated to be VIRTIO_PCI_QUEUE_MAX elements, subsequent PIO
operations such as VIRTIO_PCI_QUEUE_PFN can be used to overrun
the buffer with arbitrary data originating from the source.

Fix this by failing migration if the value from the wire exceeds
VIRTIO_PCI_QUEUE_MAX.
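
A minimal sketch of the load-side check (not the literal patch):

    if (vdev->queue_sel >= VIRTIO_PCI_QUEUE_MAX) {
        /* sketch: selector would index past vdev->vq[], fail migration */
        return -1;
    }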

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 4b53c2c72c)
[AF: BNC#864814]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:26 +02:00
Michael S. Tsirkin
5c94e6582a vmstate: fix buffer overflow in target-arm/machine.c
CVE-2013-4531

cpreg_vmstate_indexes is a VARRAY_INT32. A negative value for
cpreg_vmstate_array_len will cause a buffer overflow.

VMSTATE_INT32_LE was supposed to protect against this
but doesn't because it doesn't validate that input is
non-negative.

Fix this macro to validate the value appropriately.

The only other user of VMSTATE_INT32_LE doesn't
ever use negative numbers so it doesn't care.

Reported-by: Anthony Liguori <anthony@codemonkey.ws>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit d2ef4b61fe)
[AF: BNC#864796]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:26 +02:00
Michael S. Tsirkin
d4e6359ea7 pl022: fix buffer overrun on invalid state load
CVE-2013-4530

pl022.c did not bounds check tx_fifo_head and
rx_fifo_head after loading them from the file and
before using them to index into the FIFO arrays.
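
A hedged sketch of the missing bounds check (using ARRAY_SIZE on the FIFO
fields is an assumption):

    if (s->tx_fifo_head >= ARRAY_SIZE(s->tx_fifo) ||
        s->rx_fifo_head >= ARRAY_SIZE(s->rx_fifo)) {
        return -EINVAL;   /* sketch: heads would index past the FIFOs */
    }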

Reported-by: Michael S. Tsirkin <mst@redhat.com>
Reported-by: Anthony Liguori <anthony@codemonkey.ws>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit d8d0a0bc7e)
[AF: BNC#864682]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:26 +02:00
Michael S. Tsirkin
3f2e8a7a3a hw/pci/pcie_aer.c: fix buffer overruns on invalid state load
4) CVE-2013-4529
hw/pci/pcie_aer.c    pcie aer log can overrun the buffer if log_num is
                     too large

There are two issues in this file:
1. log_max from the remote side can be larger than on the local side;
   the buffer will then overrun with data coming from the state file.
2. log_num can be too large; then we get data corruption,
   again with an overflow, but not adversary controlled.

Fix both issues.

Reported-by: Anthony Liguori <anthony@codemonkey.ws>
Reported-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 5f691ff91d)
[AF: BNC#864678]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:25 +02:00
Michael S. Tsirkin
085771b0f8 hpet: fix buffer overrun on invalid state load
CVE-2013-4527 hw/timer/hpet.c buffer overrun

hpet timers form a VARRAY with a uint8 size, but the static array has 32 entries.

To fix, make sure num_timers is valid using VMSTATE_VALID hook.

Reported-by: Anthony Liguori <anthony@codemonkey.ws>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 3f1c49e213)
[AF: BNC#864673]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:25 +02:00
Michael S. Tsirkin
b591a65b23 ahci: fix buffer overrun on invalid state load
CVE-2013-4526

Within hw/ide/ahci.c, VARRAY refers to ports which is also loaded.  So
we use the old version of ports to read the array but then allow any
value for ports.  This can cause the code to overflow.

There's no reason to migrate ports - it never changes.
So just make sure it matches.

Reported-by: Anthony Liguori <anthony@codemonkey.ws>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit ae2158ad6c)
[AF: BNC#864671]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:25 +02:00
Michael S. Tsirkin
a76b760980 virtio: out-of-bounds buffer write on invalid state load
CVE-2013-4151 QEMU 1.0 out-of-bounds buffer write in
virtio_load@hw/virtio/virtio.c

So we have this code since way back when:

    num = qemu_get_be32(f);

    for (i = 0; i < num; i++) {
        vdev->vq[i].vring.num = qemu_get_be32(f);

array of vqs has size VIRTIO_PCI_QUEUE_MAX, so
on invalid input this will write beyond end of buffer.
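
A hedged sketch of the kind of bound that prevents this (the error text is
an assumption):

    num = qemu_get_be32(f);
    if (num > VIRTIO_PCI_QUEUE_MAX) {
        error_report("Invalid number of virtqueues: 0x%x", num);
        return -1;
    }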

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit cc45995294)
[AF: BNC#864653]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:25 +02:00
Michael S. Tsirkin
0c0a6b53c5 virtio-net: out-of-bounds buffer write on invalid state load
CVE-2013-4150 QEMU 1.5.0 out-of-bounds buffer write in
virtio_net_load()@hw/net/virtio-net.c

This code is in hw/net/virtio-net.c:

    if (n->max_queues > 1) {
        if (n->max_queues != qemu_get_be16(f)) {
            error_report("virtio-net: different max_queues ");
            return -1;
        }

        n->curr_queues = qemu_get_be16(f);
        for (i = 1; i < n->curr_queues; i++) {
            n->vqs[i].tx_waiting = qemu_get_be32(f);
        }
    }

The number of vqs is max_queues, so if we get invalid input here,
for example max_queues = 2 and curr_queues = 3, we
write beyond the end of the buffer, with data that comes from
the wire.

This might be used to corrupt qemu memory in hard to predict ways.
Since we have lots of function pointers around, RCE might be possible.
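
A hedged sketch of the check implied above, keeping the names from the
quoted code:

        n->curr_queues = qemu_get_be16(f);
        if (n->curr_queues > n->max_queues) {
            /* sketch: curr_queues indexes n->vqs[], which has max_queues
             * entries, so anything larger must be rejected */
            return -1;
        }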

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit eea750a562)
[AF: BNC#864650]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:25 +02:00
Michael S. Tsirkin
e3f320a759 virtio-net: fix buffer overflow on invalid state load
CVE-2013-4148 QEMU 1.0 integer conversion in
virtio_net_load()@hw/net/virtio-net.c

Deals with loading a corrupted savevm image.

>         n->mac_table.in_use = qemu_get_be32(f);

in_use is an int, so it can become negative when assigned a 32-bit unsigned value.

>         /* MAC_TABLE_ENTRIES may be different from the saved image */
>         if (n->mac_table.in_use <= MAC_TABLE_ENTRIES) {

passing this check ^^^

>             qemu_get_buffer(f, n->mac_table.macs,
>                             n->mac_table.in_use * ETH_ALEN);

Even with an in_use value that passes the check, "n->mac_table.in_use * ETH_ALEN"
can become positive and bigger than mac_table.macs. For example 0x81000000
satisfies this condition when ETH_ALEN is 6.

Fix it by making the value unsigned.
For consistency, change first_multi as well.

Note: all call sites were audited to confirm that
making them unsigned didn't cause any issues:
it turns out we actually never do math on them,
so it's easy to validate because both values are
always <= MAC_TABLE_ENTRIES.

Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 71f7fe48e1)
[AF: BNC#864812]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:25 +02:00
Michael S. Tsirkin
e258560116 vmstate: add VMSTATE_VALIDATE
Validate state using VMS_ARRAY with num = 0 and VMS_MUST_EXIST

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 4082f0889b)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:25 +02:00
Michael S. Tsirkin
52c324d64c vmstate: add VMS_MUST_EXIST
Can be used to verify a required field exists or validate
state in some other way.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 5bf81c8d63)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-23 17:06:25 +02:00
Ulrich Weigand
243f0e345c tcg-ppc64: Support the ELFv2 ABI
The new ELFv2 ABI, used by default on powerpc64le-linux hosts,
introduced some changes that are incompatible with code currently
generated by the ppc64 TCG target.  In particular, we no longer
use function descriptors.

This patch adds support for the ELFv2 ABI in the ppc64 TCG
function call and function prologue sequences.

Signed-off-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-05-13 10:15:11 +02:00
Alex Bennée
de439482d4 target-arm: A64: fix unallocated test of scalar SQXTUN
The test for the U bit was incorrectly inverted in the scalar case of SQXTUN.
This doesn't affect the vector case as the U bit is used to select XTN(2).

Reported-by: Hao Liu <hao.liu@arm.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit e44a90c596)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-04-18 19:53:11 +02:00
Peter Crosthwaite
0fb8a7de8e arm: translate.c: Fix smlald Instruction
The smlald (and probably smlsld) instruction was doing incorrect sign
extension of the operands during the 64-bit result calculation. The
instruction pseudo-code is:

 operand2 = if m_swap then ROR(R[m],16) else R[m];
 product1 = SInt(R[n]<15:0>) * SInt(operand2<15:0>);
 product2 = SInt(R[n]<31:16>) * SInt(operand2<31:16>);
 result = product1 + product2 + SInt(R[dHi]:R[dLo]);
 R[dHi] = result<63:32>;
 R[dLo] = result<31:0>;

The result calculation should be done in 64 bit arithmetic, and hence
product1 and product2 should be sign extended to 64b before calculation.

The current implementation was adding product1 and product2 together
then sign-extending the intermediate result leading to false negatives.

E.g. if product1 = product2 = 0x40000000, their sum = 0x80000000, which
will be incorrectly interpreted as negative on sign extension.

We fix by doing the 64b extensions on both product1 and product2 before
any addition/subtraction happens.

We also fix where we were possibly incorrectly setting the Q saturation
flag for SMLSLD, which the ARM ARM specifically says is not set.
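
A plain-C illustration of the corrected arithmetic (the real fix operates on
TCG values during translation, not C integers; rn, operand2, r_dhi and r_dlo
stand for R[n], operand2, R[dHi] and R[dLo] from the pseudo-code, assumed
uint32_t):

    /* sign-extend each 16x16 product to 64 bits *before* accumulating */
    int64_t product1 = (int64_t)(int16_t)rn * (int16_t)operand2;
    int64_t product2 = (int64_t)(int16_t)(rn >> 16) * (int16_t)(operand2 >> 16);
    int64_t acc      = (int64_t)(((uint64_t)r_dhi << 32) | r_dlo);
    int64_t result   = product1 + product2 + acc;   /* no 32-bit intermediate */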

Reported-by: Christina Smith <christina.smith@xilinx.com>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 2cddb6f5a15be4ab8d2160f3499d128ae93d304d.1397704570.git.peter.crosthwaite@xilinx.com
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 33bbd75a7c)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-04-18 19:52:57 +02:00
Andreas Färber
9938d82cc9 qtest: Be paranoid about accept() addrlen argument
If EINTR occurs, re-initialize our argument.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-04-18 00:38:10 +02:00
Andreas Färber
1126af0e66 qtest: Increase socket timeout
Change from 5 to 15 seconds.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-04-18 00:38:10 +02:00
Andreas Färber
19fed6c601 qtest: Add error reporting to socket_accept()
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-04-18 00:38:04 +02:00
Andreas Färber
c58810a9fe qtest: Assure that init_socket()'s listen() does not fail
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-04-17 18:43:24 +02:00
Andreas Färber
857545e61d tests: Don't run qom-test twice
Commit 3687d5325 accidentally resulted in running qom-test twice
for x86_64, once directly via the wildcard, and once because x86_64
includes all the i386 qtests (which includes qom-test).

Filter out x86_64 as well as microblazeel and xtensaeb to fix this.

Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-04-17 17:37:12 +02:00
Olaf Hering
1798372872 xen_disk: add discard support
Implement discard support for xen_disk. It makes use of the existing
discard code in qemu.

The discard support is enabled unconditionally. The tool stack may provide a
property "discard-enable" in the backend node to optionally disable discard
support.  This is helpful in case the backing file was intentionally created
non-sparse to avoid fragmentation.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-04-17 17:37:12 +02:00
Dinar Valeev
c5ce0620bf configure: Enable PIE for ppc and ppc64 hosts
Signed-off-by: Dinar Valeev <dvaleev@suse.com>
[AF: Rebased for v1.7]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-04-17 17:37:12 +02:00
Bruce Rogers
52b3782f6e virtfs-proxy-helper: Provide __u64 for broken sys/capability.h
Fixes the build on SLE 11 SP2.

[AF: Extend to ppc64]
2014-04-17 17:37:12 +02:00
Alexander Graf
f222ce0d5a linux-user: lseek: explicitly cast non-set offsets to signed
When doing lseek, SEEK_SET indicates that the offset is an unsigned value.
Other seek types have offsets that can be negative.

When converting from 32-bit to 64-bit parameters, we need to take this into
account and allow SEEK_END and SEEK_CUR offsets to be negative, while SEEK_SET
stays an absolute position, which we need to keep unsigned.
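
A hedged sketch of the conversion described above (variable names are
hypothetical):

    int64_t host_off;
    if (whence == SEEK_SET) {
        host_off = (uint32_t)guest_off;   /* absolute, keep unsigned */
    } else {
        host_off = (int32_t)guest_off;    /* SEEK_CUR/SEEK_END may be negative */
    }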

Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-17 17:37:12 +02:00
Alexander Graf
e30f0e39ab Make char muxer more robust wrt small FIFOs
Virtio-Console can only process one character at a time. Using it on S390
gave me strange "lags" where I only received the character I had pressed
before when pressing the next one. So I typed in "abc" and only received "a",
then pressed "d" but the guest received "b" and so on.

While the stdio driver calls a poll function that keeps processing its
queue in case virtio-console can't take multiple characters at once, the
muxer does not have such callbacks, so it can't empty its queue.

To work around that limitation, I introduced a new timer that only gets
active when the guest can not receive any more characters. In that case
it polls again after a while to check if the guest is now receiving input.

This patch fixes input when using -nographic on s390 for me.
2014-04-17 17:37:11 +02:00
Alexander Graf
1f6cee2319 console: add question-mark escape operator
Some termcaps (found using SLES11SP1) use [? sequences. According to man
console_codes (http://linux.die.net/man/4/console_codes) the question mark
is a nop and should simply be ignored.

This patch does exactly that, rendering screen output readable when
outputting guest serial consoles to the graphical console emulator.

Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-17 17:37:11 +02:00
Alexander Graf
e828d54e5b Legacy Patch kvm-qemu-preXX-report-default-mac-used.patch 2014-04-17 17:37:11 +02:00
Alexander Graf
e26cff5986 Legacy Patch kvm-qemu-preXX-dictzip3.patch 2014-04-17 17:37:11 +02:00
Alexander Graf
9faf6837f5 block: Add tar container format
Tar is a very widely used format to store data in. Sometimes people even put
virtual machine images in there.

So it makes sense for qemu to be able to read from tar files. I implemented a
from-scratch reader that also knows about the GNU sparse format, which
is what pigz creates.

This version checks for filenames that end on well-known extensions. The logic
could be changed to search for filenames given on the command line, but that
would require changes to more parts of qemu.

The tar reader in conjunction with dzip gives us the chance to download
tar'ed up virtual machine images (even via http) and instantly make use of
them.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Bruce Rogers <brogers@novell.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
[TH: Use bdrv_open options instead of filename]
Signed-off-by: Tim Hardeck <thardeck@suse.de>
[AF: bdrv_file_open got an Error **errp argument, bdrv_delete -> brd_unref]
[AF: qemu_opts_create_nofail() -> qemu_opts_create(),
     bdrv_file_open() -> bdrv_open(), based on work by brogers]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-04-17 17:37:11 +02:00
Alexander Graf
8b201b80c7 block: Add support for DictZip enabled gzip files
DictZip is an extension to the gzip format that allows random seeks in gzip
compressed files by cutting the file into pieces and storing the piece offsets
in the "extra" header of the gzip format.

Thanks to that extension, we can use gzip compressed files as block backend,
though only in read mode.

This makes a lot of sense when stacked with tar files that can then be shipped
to VM users. If a VM image is inside a tar file that is inside a DictZip
enabled gzip file, the user can run the tar.gz file as is without having to
extract the image first.

Tar patch follows.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Bruce Rogers <brogers@novell.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
[TH: Use bdrv_open options instead of filename]
Signed-off-by: Tim Hardeck <thardeck@suse.de>
[AF: Error **errp added for bdrv_file_open, bdrv_delete -> bdrv_unref]
[AF: qemu_opts_create_nofail() -> qemu_opts_create(),
     bdrv_file_open() -> bdrv_open(), based on work by brogers]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-04-17 17:37:11 +02:00
Alexander Graf
9f8f18dc79 linux-user: use target_ulong
Linux syscalls pass pointers, data lengths and other information of that sort
to the kernel. This is all stuff you don't want to have sign-extended.
Otherwise a 32-bit size argument widened into a host 64-bit variable can be
sign-extended into a negative number, breaking lseek for example.

Pass syscall arguments as ulong always.

Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-17 17:37:11 +02:00
Alexander Graf
6b62214c4b linux-user: add more blk ioctls
Implement a few more ioctls that operate on block devices.

Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-17 17:37:11 +02:00
Andreas Färber
955ef0968a vnc: password-file= and incoming-connections=
TBD (from SUSE Studio team)
2014-04-17 17:37:11 +02:00
Andreas Färber
871d3d13b5 slirp: -nooutgoing
TBD (from SUSE Studio team)
2014-04-17 17:37:11 +02:00
Alexander Graf
fac2c74e75 linux-user: XXX disable fiemap
agraf: fiemap breaks in libarchive. Disable it for now.
2014-04-17 17:37:11 +02:00
Alexander Graf
8c00316b29 linux-user: implement FS_IOC_SETFLAGS ioctl
Signed-off-by: Alexander Graf <agraf@suse.de>

---

v1 -> v2

  - use TYPE_LONG instead of TYPE_INT
2014-04-17 17:37:11 +02:00
Alexander Graf
36fc0fea8b linux-user: implement FS_IOC_GETFLAGS ioctl
Signed-off-by: Alexander Graf <agraf@suse.de>

---

v1 -> v2:

  - use TYPE_LONG instead of TYPE_INT
2014-04-17 17:37:11 +02:00
Alexander Graf
761b115c27 linux-user: Fake /proc/cpuinfo
Fedora 17 for ARM reads /proc/cpuinfo and fails if it doesn't contain
ARM related contents. This patch implements a quick hack to expose real
/proc/cpuinfo data taken from a real world machine.

The real fix would be to generate at least the flags automatically based
on the selected CPU. Please do not submit this patch upstream until this
has happened.

Signed-off-by: Alexander Graf <agraf@suse.de>
[AF: Rebased for v1.6 and v1.7]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-04-17 17:37:11 +02:00
Alexander Graf
cba80a9dc1 linux-user: lock tb flushing too
Signed-off-by: Alexander Graf <agraf@suse.de>
[AF: Rebased onto exec.c/translate-all.c split for 1.4]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-04-17 17:37:11 +02:00
Alexander Graf
ca45f1d446 linux-user: Run multi-threaded code on a single core
Running multi-threaded code can easily expose some of the fundamental
breakages in QEMU's design. It's just not a well supported scenario.

So if we pin the whole process to a single host CPU, we guarantee that
we will never have concurrent memory access actually happen. We can still
get scheduled away at any time, so it's no complete guarantee, but apparently
it reduces the odds well enough to get my test cases to pass.

This gets Java 1.7 working for me again on my test box.

Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-17 17:37:11 +02:00
Alexander Graf
c06014909f linux-user: lock tcg
The tcg code generator is not thread safe. Lock its generation between
different threads.

Signed-off-by: Alexander Graf <agraf@suse.de>
[AF: Rebased onto exec.c/translate-all.c split for 1.4]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-04-17 17:37:10 +02:00
Alexander Graf
9414e435ed linux-user: Ignore broken loop ioctl
During invocations of losetup, we run into an ioctl that doesn't
exist. However, because of that we output an error, which then
screws up the kiwi logic around that call.

So let's silently ignore that bogus ioctl.

Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-17 17:37:10 +02:00
Alexander Graf
bc949bb060 linux-user: arm: no tb_flush on reset
When running automoc4 as linux-user guest program, it segfaults right after
it creates a thread. Bisecting pointed to commit a84fac1426 which introduces
tb_flush on reset.

So something in our thread creation is broken. But for now, let's revert the
change to at least get a working build again.
2014-04-17 17:37:10 +02:00
Alexander Graf
ae7b4452a2 linux-user: binfmt: support host binaries
When we have a working host binary equivalent for the guest binary we're
trying to run, let's just use that instead as it will be a lot faster.

Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-17 17:37:10 +02:00
Alexander Graf
c8bac440ee linux-user: fix segfault deadlock
When entering the guest we take a lock to ensure that nobody else messes
with our TB chaining while we're doing it. If we get a segfault inside that
code, we carry on working, but will never unlock the lock.

This patch forces unlocking of that lock in the segv handler. I'm not sure
this is the right approach though. Maybe we should rather make sure we don't
segfault in the code? I would greatly appreciate someone more intelligible
than me to look at this :).

Example code to trigger this is at: http://csgraf.de/tmp/conftest.c

Reported-by: Fabio Erculiani <lxnay@sabayon.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-04-17 17:37:10 +02:00
Alexander Graf
9ceca2f2c2 PPC: KVM: Disable mmu notifier check
When using hugetlbfs (which is required for HV mode KVM on 970), we
check for MMU notifiers that on 970 can not be implemented properly.

So disable the check for mmu notifiers on PowerPC guests, making
KVM guests work there, even if possibly racy in some odd circumstances.
2014-04-17 17:37:10 +02:00
Alexander Graf
99a5283091 linux-user: add binfmt wrapper for argv[0] handling
When using qemu's linux-user binaries through binfmt, argv[0] gets lost
along the execution because qemu only gets passed in the full file name
to the executable while argv[0] can be something completely different.

This breaks in some subtile situations, such as the grep and make test
suites.

This patch adds a wrapper binary called qemu-$TARGET-binfmt that can be
used with binfmt's P flag which passes the full path _and_ argv[0] to
the binfmt handler.

The binary would be smart enough to be versatile and only exist in the
system once, creating the qemu binary path names from its own argv[0].
However, this seemed like it didn't fit the make system too well, so
we're currently creating a new binary for each target architecture.

CC: Reinhard Max <max@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
[AF: Rebased onto new Makefile infrastructure, twice]
[AF: Updated for aarch64 for v2.0.0-rc1]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-04-17 17:37:10 +02:00
Ulrich Hecht
a96062c693 block/vmdk: Support creation of SCSI VMDK images in qemu-img
Signed-off-by: Ulrich Hecht <uli@suse.de>
[AF: Changed BLOCK_FLAG_SCSI from 8 to 16 for v1.2]
[AF: Rebased onto upstream VMDK SCSI support]
[AF: Rebased onto skipping of image creation in v1.7]
[AF: Simplified in preparation for v1.7.1/v2.0]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-04-17 17:37:10 +02:00
Alexander Graf
a535f471a3 qemu-cvs-ioctl_nodirection
The direction given in the ioctl should be correct, so we can assume the
communication is uni-directional. The alsa developers did not like this
concept though and declared ioctls IOC_R and IOC_W even though they were
IOC_RW.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Ulrich Hecht <uli@suse.de>
2014-04-17 17:37:10 +02:00
Alexander Graf
9671d1a0e8 qemu-cvs-ioctl_debug
Extends unsupported ioctl debug output.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Ulrich Hecht <uli@suse.de>
2014-04-17 17:37:10 +02:00
Ulrich Hecht
2a9ed81b68 qemu-cvs-gettimeofday
No clue what this is for.
2014-04-17 17:37:10 +02:00
Alexander Graf
b1f9433704 qemu-cvs-alsa_mmap
Hack to prevent ALSA from using mmap() interface to simplify emulation.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Ulrich Hecht <uli@suse.de>
2014-04-17 17:37:10 +02:00
Alexander Graf
4820daf43d qemu-cvs-alsa_ioctl
Implements ALSA ioctls on PPC hosts.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Ulrich Hecht <uli@suse.de>
2014-04-17 17:37:10 +02:00
Alexander Graf
08da583bd0 qemu-cvs-alsa_bitfield
Implements TYPE_INTBITFIELD partially. (required for ALSA support)

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Ulrich Hecht <uli@suse.de>
2014-04-17 17:37:10 +02:00
Ulrich Hecht
b34c0c408d qemu-0.9.0.cvs-binfmt
Fixes binfmt_misc setup script:
- x86_64 is i386-compatible
- m68k signature fixed
- path to QEMU

Signed-off-by: Ulrich Hecht <uli@suse.de>
[AF: Update path for qemu-aarch64 for v2.0.0-rc1]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-04-17 17:37:10 +02:00
Alexander Graf
e9ce5f5933 XXX work around SA_RESTART race with boehm-gc (ARM only)
[AF: CPUState -> CPUArchState, adapt to reindentation]
[AF: CPUArchState::opaque -> CPUState::opaque]
2014-04-17 17:37:10 +02:00
Alexander Graf
afd1df16c2 XXX dont dump core on sigabort 2014-04-17 17:37:09 +02:00
3972 changed files with 188875 additions and 539317 deletions


@@ -1,2 +0,0 @@
((c-mode . ((c-file-style . "stroustrup")
(indent-tabs-mode . nil))))

.gitignore

@@ -4,46 +4,41 @@
/config-host.*
/config-target.*
/config.status
/config-temp
/trace-events-all
/trace/generated-tracers.h
/trace/generated-tracers.c
/trace/generated-tracers-dtrace.h
/trace/generated-tracers.dtrace
/trace/generated-events.h
/trace/generated-events.c
/trace/generated-helpers-wrappers.h
/trace/generated-helpers.h
/trace/generated-helpers.c
/trace/generated-tcg-tracers.h
/trace/generated-ust-provider.h
/trace/generated-ust.c
/ui/shader/texture-blit-frag.h
/ui/shader/texture-blit-vert.h
/libcacard/trace/generated-tracers.c
*-timestamp
/*-softmmu
/*-darwin-user
/*-linux-user
/*-bsd-user
/ivshmem-client
/ivshmem-server
/libdis*
/libuser
libdis*
libuser
/linux-headers/asm
/qga/qapi-generated
/qapi-generated
/qapi-types.[ch]
/qapi-visit.[ch]
/qapi-event.[ch]
/qmp-commands.h
/qmp-introspect.[ch]
/qmp-marshal.c
/qemu-doc.html
/qemu-tech.html
/qemu-doc.info
/qemu-tech.info
/qemu.1
/qemu.pod
/qemu-img.1
/qemu-img.pod
/qemu-img
/qemu-nbd
/qemu-nbd.8
/qemu-nbd.pod
/qemu-options.def
/qemu-options.texi
/qemu-img-cmds.texi
@@ -52,18 +47,26 @@
/qemu-ga
/qemu-bridge-helper
/qemu-monitor.texi
/qemu-monitor-info.texi
/qemu-version.h
/qemu-version.h.tmp
/qmp-commands.txt
/vscclient
/test-bitops
/test-coroutine
/test-int128
/test-opts-visitor
/test-qmp-input-visitor
/test-qmp-output-visitor
/test-string-input-visitor
/test-string-output-visitor
/test-visitor-serialization
/fsdev/virtfs-proxy-helper
*.[1-9]
/fsdev/virtfs-proxy-helper.1
/fsdev/virtfs-proxy-helper.pod
/.gdbinit
*.a
*.aux
*.cp
*.dvi
*.exe
*.msi
*.dll
*.so
*.mo
@@ -71,7 +74,6 @@
*.ky
*.log
*.pdf
*.pod
*.cps
*.fns
*.kys
@@ -88,18 +90,18 @@
*.pc
.libs
.sdk
*.swp
*.orig
.pc
*.gcda
*.gcno
patches
/pc-bios/bios-pq/status
/pc-bios/vgabios-pq/status
/pc-bios/optionrom/linuxboot.asm
/pc-bios/optionrom/linuxboot.bin
/pc-bios/optionrom/linuxboot.raw
/pc-bios/optionrom/linuxboot.img
/pc-bios/optionrom/linuxboot_dma.asm
/pc-bios/optionrom/linuxboot_dma.bin
/pc-bios/optionrom/linuxboot_dma.raw
/pc-bios/optionrom/linuxboot_dma.img
/pc-bios/optionrom/multiboot.asm
/pc-bios/optionrom/multiboot.bin
/pc-bios/optionrom/multiboot.raw
@@ -114,5 +116,4 @@
cscope.*
tags
TAGS
docker-src.*
*~

.gitmodules

@@ -28,6 +28,3 @@
[submodule "dtc"]
path = dtc
url = git://git.qemu-project.org/dtc.git
[submodule "roms/u-boot"]
path = roms/u-boot
url = git://git.qemu-project.org/u-boot.git


@@ -1,101 +1,81 @@
sudo: false
language: c
python:
- "2.4"
compiler:
- gcc
- clang
cache: ccache
addons:
apt:
packages:
- libaio-dev
- libattr1-dev
- libbrlapi-dev
- libcap-ng-dev
- libgnutls-dev
- libgtk-3-dev
- libiscsi-dev
- liblttng-ust-dev
- libnfs-dev
- libncurses5-dev
- libnss3-dev
- libpixman-1-dev
- libpng12-dev
- librados-dev
- libsdl1.2-dev
- libseccomp-dev
- libspice-protocol-dev
- libspice-server-dev
- libssh2-1-dev
- liburcu-dev
- libusb-1.0-0-dev
- libvte-2.90-dev
- sparse
- uuid-dev
# The channel name "irc.oftc.net#qemu" is encrypted against qemu/qemu
# to prevent IRC notifications from forks. This was created using:
# $ travis encrypt -r "qemu/qemu" "irc.oftc.net#qemu"
notifications:
irc:
channels:
- secure: "F7GDRgjuOo5IUyRLqSkmDL7kvdU4UcH3Lm/W2db2JnDHTGCqgEdaYEYKciyCLZ57vOTsTsOgesN8iUT7hNHBd1KWKjZe9KDTZWppWRYVwAwQMzVeSOsbbU4tRoJ6Pp+3qhH1Z0eGYR9ZgKYAoTumDFgSAYRp4IscKS8jkoedOqM="
- "irc.oftc.net#qemu"
on_success: change
on_failure: always
env:
global:
- TEST_CMD="make check"
- EXTRA_CONFIG=""
# Development packages, EXTRA_PKGS saved for additional builds
- CORE_PKGS="libusb-1.0-0-dev libiscsi-dev librados-dev libncurses5-dev"
- NET_PKGS="libseccomp-dev libgnutls-dev libssh2-1-dev libspice-server-dev libspice-protocol-dev libnss3-dev"
- GUI_PKGS="libgtk-3-dev libvte-2.90-dev libsdl1.2-dev libpng12-dev libpixman-1-dev"
- EXTRA_PKGS=""
matrix:
- CONFIG=""
- CONFIG="--enable-debug --enable-debug-tcg --enable-trace-backends=log"
- CONFIG="--disable-linux-aio --disable-cap-ng --disable-attr --disable-brlapi --disable-uuid --disable-libusb"
- CONFIG="--enable-modules"
- CONFIG="--with-coroutine=ucontext"
- CONFIG="--with-coroutine=sigaltstack"
git:
# we want to do this ourselves
submodules: false
- TARGETS=alpha-softmmu,alpha-linux-user
- TARGETS=arm-softmmu,arm-linux-user
- TARGETS=aarch64-softmmu,aarch64-linux-user
- TARGETS=cris-softmmu
- TARGETS=i386-softmmu,x86_64-softmmu
- TARGETS=lm32-softmmu
- TARGETS=m68k-softmmu
- TARGETS=microblaze-softmmu,microblazeel-softmmu
- TARGETS=mips-softmmu,mips64-softmmu,mips64el-softmmu,mipsel-softmmu
- TARGETS=moxie-softmmu
- TARGETS=or32-softmmu,
- TARGETS=ppc-softmmu,ppc64-softmmu,ppcemb-softmmu
- TARGETS=s390x-softmmu
- TARGETS=sh4-softmmu,sh4eb-softmmu
- TARGETS=sparc-softmmu,sparc64-softmmu
- TARGETS=unicore32-softmmu
- TARGETS=xtensa-softmmu,xtensaeb-softmmu
before_install:
- if [ "$TRAVIS_OS_NAME" == "osx" ]; then brew update ; fi
- if [ "$TRAVIS_OS_NAME" == "osx" ]; then brew install libffi gettext glib pixman ; fi
- wget -O - http://people.linaro.org/~alex.bennee/qemu-submodule-git-seed.tar.xz | tar -xvJ
- git submodule update --init --recursive
before_script:
- ./configure ${CONFIG}
script:
- make -j3 && ${TEST_CMD}
- sudo apt-get update -qq
- sudo apt-get install -qq ${CORE_PKGS} ${NET_PKGS} ${GUI_PKGS} ${EXTRA_PKGS}
script: "./configure --target-list=${TARGETS} ${EXTRA_CONFIG} && make && ${TEST_CMD}"
matrix:
# We manually include a number of additional build for non-standard bits
include:
# gprof/gcov are GCC features
- env: CONFIG="--enable-gprof --enable-gcov --disable-pie"
# Debug related options
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-debug"
compiler: gcc
# We manually include builds which we disable "make check" for
- env: CONFIG="--enable-debug --enable-tcg-interpreter"
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-debug --enable-tcg-interpreter"
compiler: gcc
# All the extra -dev packages
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="libaio-dev libcap-ng-dev libattr1-dev libbrlapi-dev uuid-dev libusb-1.0.0-dev"
compiler: gcc
# Currently configure doesn't force --disable-pie
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-gprof --enable-gcov --disable-pie"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="sparse"
EXTRA_CONFIG="--enable-sparse"
compiler: gcc
# All the trace backends (apart from dtrace)
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-trace-backend=stderr"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-trace-backend=simple"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-trace-backend=ftrace"
TEST_CMD=""
compiler: gcc
- env: CONFIG="--enable-trace-backends=simple"
TEST_CMD=""
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="liblttng-ust-dev liburcu-dev"
EXTRA_CONFIG="--enable-trace-backend=ust"
compiler: gcc
- env: CONFIG="--enable-trace-backends=ftrace"
TEST_CMD=""
compiler: gcc
- env: CONFIG="--enable-trace-backends=ust"
TEST_CMD=""
compiler: gcc
- env: CONFIG="--with-coroutine=gthread"
TEST_CMD=""
compiler: gcc
- env: CONFIG=""
os: osx
compiler: clang
- env: CONFIG=""
sudo: required
addons:
dist: trusty
compiler: gcc
before_install:
- sudo apt-get update -qq
- sudo apt-get build-dep -qq qemu
- wget -O - http://people.linaro.org/~alex.bennee/qemu-submodule-git-seed.tar.xz | tar -xvJ
- git submodule update --init --recursive


@@ -31,11 +31,7 @@ Do not leave whitespace dangling off the ends of lines.
2. Line width
Lines should be 80 characters; try not to make them longer.
Sometimes it is hard to do, especially when dealing with QEMU subsystems
that use long function or symbol names. Even in that case, do not make
lines much longer than 80 characters.
Lines are 80 characters; not longer.
Rationale:
- Some people like to tile their 24" screens with a 6x4 matrix of 80x24
@@ -43,8 +39,6 @@ Rationale:
let them keep doing it.
- Code and especially patches is much more readable if limited to a sane
line length. Eighty is traditional.
- The four-space indentation makes the most common excuse ("But look
at all that white space on the left!") moot.
- It is the QEMU coding style.
3. Naming
@@ -93,26 +87,7 @@ Furthermore, it is the QEMU coding style.
5. Declarations
Mixed declarations (interleaving statements and declarations within
blocks) are generally not allowed; declarations should be at the beginning
of blocks.
Every now and then, an exception is made for declarations inside a
#ifdef or #ifndef block: if the code looks nicer, such declarations can
be placed at the top of the block even if there are statements above.
On the other hand, however, it's often best to move that #ifdef/#ifndef
block to a separate function altogether.
6. Conditional statements
When comparing a variable for (in)equality with a constant, list the
constant on the right, as in:
if (a == 1) {
/* Reads like: "If a equals 1" */
do_something();
}
Rationale: Yoda conditions (as in 'if (1 == a)') are awkward to read.
Besides, good compilers already warn users when '==' is mis-typed as '=',
even when the constant is on the right.
Mixed declarations (interleaving statements and declarations within blocks)
are not allowed; declarations should be at the beginning of blocks. In other
words, the code should not generate warnings if using GCC's
-Wdeclaration-after-statement option.

HACKING

@@ -157,62 +157,3 @@ painful. These are:
* you may assume that integers are 2s complement representation
* you may assume that right shift of a signed integer duplicates
the sign bit (ie it is an arithmetic shift, not a logical shift)
In addition, QEMU assumes that the compiler does not use the latitude
given in C99 and C11 to treat aspects of signed '<<' as undefined, as
documented in the GNU Compiler Collection manual starting at version 4.0.
7. Error handling and reporting
7.1 Reporting errors to the human user
Do not use printf(), fprintf() or monitor_printf(). Instead, use
error_report() or error_vreport() from error-report.h. This ensures the
error is reported in the right place (current monitor or stderr), and in
a uniform format.
Use error_printf() & friends to print additional information.
error_report() prints the current location. In certain common cases
like command line parsing, the current location is tracked
automatically. To manipulate it manually, use the loc_*() from
error-report.h.
7.2 Propagating errors
An error can't always be reported to the user right where it's detected,
but often needs to be propagated up the call chain to a place that can
handle it. This can be done in various ways.
The most flexible one is Error objects. See error.h for usage
information.
Use the simplest suitable method to communicate success / failure to
callers. Stick to common methods: non-negative on success / -1 on
error, non-negative / -errno, non-null / null, or Error objects.
Example: when a function returns a non-null pointer on success, and it
can fail only in one way (as far as the caller is concerned), returning
null on failure is just fine, and certainly simpler and a lot easier on
the eyes than propagating an Error object through an Error ** parameter.
Example: when a function's callers need to report details on failure
only the function really knows, use Error **, and set suitable errors.
Do not report an error to the user when you're also returning an error
for somebody else to handle. Leave the reporting to the place that
consumes the error returned.
7.3 Handling errors
Calling exit() is fine when handling configuration errors during
startup. It's problematic during normal operation. In particular,
monitor commands should never exit().
Do not call exit() or abort() to handle an error that can be triggered
by the guest (e.g., some unimplemented corner case in guest code
translation or device emulation). Guests should not be able to
terminate QEMU.
Note that &error_fatal is just another way to exit(1), and &error_abort
is just another way to abort().


@@ -11,7 +11,7 @@ option) any later version.
As of July 2013, contributions under version 2 of the GNU General Public
License (and no later version) are only accepted for the following files
or directories: bsd-user/, linux-user/, hw/vfio/, hw/xen/xen_pt*.
or directories: bsd-user/, linux-user/, hw/misc/vfio.c, hw/xen/xen_pt*.
3) The Tiny Code Generator (TCG) is released under the BSD license
(see license headers in files).

File diff suppressed because it is too large

Makefile

@@ -3,11 +3,6 @@
# Always point to the root of the build tree (needs GNU make).
BUILD_DIR=$(CURDIR)
# Before including a proper config-host.mak, assume we are in the source tree
SRC_PATH=.
UNCHECKED_GOALS := %clean TAGS cscope ctags docker docker-%
# All following code might depend on configuration variables
ifneq ($(wildcard config-host.mak),)
# Put the all: rule here so that config-host.mak can contain dependencies.
@@ -30,7 +25,8 @@ CONFIG_ALL=y
-include config-all-devices.mak
-include config-all-disas.mak
config-host.mak: $(SRC_PATH)/configure $(SRC_PATH)/pc-bios
include $(SRC_PATH)/rules.mak
config-host.mak: $(SRC_PATH)/configure
@echo $@ is out-of-date, running configure
@# TODO: The next lines include code which supports a smooth
@# transition from old configurations without config.status.
@@ -42,49 +38,37 @@ config-host.mak: $(SRC_PATH)/configure $(SRC_PATH)/pc-bios
fi
else
config-host.mak:
ifneq ($(filter-out $(UNCHECKED_GOALS),$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
ifneq ($(filter-out %clean,$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
@echo "Please call configure before running make!"
@exit 1
endif
endif
include $(SRC_PATH)/rules.mak
GENERATED_HEADERS = qemu-version.h config-host.h qemu-options.def
GENERATED_HEADERS += qmp-commands.h qapi-types.h qapi-visit.h qapi-event.h
GENERATED_SOURCES += qmp-marshal.c qapi-types.c qapi-visit.c qapi-event.c
GENERATED_HEADERS += qmp-introspect.h
GENERATED_SOURCES += qmp-introspect.c
GENERATED_HEADERS = config-host.h qemu-options.def
GENERATED_HEADERS += qmp-commands.h qapi-types.h qapi-visit.h
GENERATED_SOURCES += qmp-marshal.c qapi-types.c qapi-visit.c
GENERATED_HEADERS += trace/generated-events.h
GENERATED_SOURCES += trace/generated-events.c
GENERATED_HEADERS += trace/generated-tracers.h
ifeq ($(findstring dtrace,$(TRACE_BACKENDS)),dtrace)
ifeq ($(TRACE_BACKEND),dtrace)
GENERATED_HEADERS += trace/generated-tracers-dtrace.h
endif
GENERATED_SOURCES += trace/generated-tracers.c
GENERATED_HEADERS += trace/generated-tcg-tracers.h
GENERATED_HEADERS += trace/generated-helpers-wrappers.h
GENERATED_HEADERS += trace/generated-helpers.h
GENERATED_SOURCES += trace/generated-helpers.c
ifeq ($(findstring ust,$(TRACE_BACKENDS)),ust)
ifeq ($(TRACE_BACKEND),ust)
GENERATED_HEADERS += trace/generated-ust-provider.h
GENERATED_SOURCES += trace/generated-ust.c
endif
GENERATED_HEADERS += module_block.h
# Don't try to regenerate Makefile or configure
# We don't generate any of them
Makefile: ;
configure: ;
.PHONY: all clean cscope distclean dvi html info install install-doc \
pdf recurse-all speed test dist msi FORCE
pdf recurse-all speed test dist
$(call set-vpath, $(SRC_PATH))
@@ -93,7 +77,7 @@ LIBS+=-lz $(LIBS_TOOLS)
HELPERS-$(CONFIG_LINUX) = qemu-bridge-helper$(EXESUF)
ifdef BUILD_DOCS
DOCS=qemu-doc.html qemu-tech.html qemu.1 qemu-img.1 qemu-nbd.8 qemu-ga.8
DOCS=qemu-doc.html qemu-tech.html qemu.1 qemu-img.1 qemu-nbd.8 qmp-commands.txt
ifdef CONFIG_VIRTFS
DOCS+=fsdev/virtfs-proxy-helper.1
endif
@@ -118,10 +102,9 @@ endif
-include $(SUBDIR_DEVICES_MAK_DEP)
%/config-devices.mak: default-configs/%.mak $(SRC_PATH)/scripts/make_device_config.sh
$(call quiet-command, \
$(SHELL) $(SRC_PATH)/scripts/make_device_config.sh $< $*-config-devices.mak.d $@ > $@.tmp, " GEN $@.tmp")
$(call quiet-command, if test -f $@; then \
%/config-devices.mak: default-configs/%.mak
$(call quiet-command,$(SHELL) $(SRC_PATH)/scripts/make_device_config.sh $@ $<, " GEN $@")
@if test -f $@; then \
if cmp -s $@.old $@; then \
mv $@.tmp $@; \
cp -p $@ $@.old; \
@@ -137,7 +120,7 @@ endif
else \
mv $@.tmp $@; \
cp -p $@ $@.old; \
fi, " GEN $@");
fi
defconfig:
rm -f config-all-devices.mak $(SUBDIR_DEVICES_MAK)
@@ -150,55 +133,34 @@ dummy := $(call unnest-vars,, \
stub-obj-y \
util-obj-y \
qga-obj-y \
ivshmem-client-obj-y \
ivshmem-server-obj-y \
qga-vss-dll-obj-y \
block-obj-y \
block-obj-m \
crypto-obj-y \
crypto-aes-obj-y \
qom-obj-y \
io-obj-y \
common-obj-y \
common-obj-m)
ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/tests/Makefile.include
include $(SRC_PATH)/tests/Makefile
endif
ifeq ($(CONFIG_SMARTCARD_NSS),y)
include $(SRC_PATH)/libcacard/Makefile
endif
all: $(DOCS) $(TOOLS) $(HELPERS-y) recurse-all modules
qemu-version.h: FORCE
$(call quiet-command, \
(cd $(SRC_PATH); \
printf '#define QEMU_PKGVERSION '; \
if test -n "$(PKGVERSION)"; then \
printf '"$(PKGVERSION)"\n'; \
else \
if test -d .git; then \
printf '" ('; \
git describe --match 'v*' 2>/dev/null | tr -d '\n'; \
if ! git diff-index --quiet HEAD &>/dev/null; then \
printf -- '-dirty'; \
fi; \
printf ')"\n'; \
else \
printf '""\n'; \
fi; \
fi) > $@.tmp)
$(call quiet-command, cmp -s $@ $@.tmp || mv $@.tmp $@)
vl.o: QEMU_CFLAGS+=$(GPROF_CFLAGS)
vl.o: QEMU_CFLAGS+=$(SDL_CFLAGS)
config-host.h: config-host.h-timestamp
config-host.h-timestamp: config-host.mak
qemu-options.def: $(SRC_PATH)/qemu-options.hx $(SRC_PATH)/scripts/hxtool
qemu-options.def: $(SRC_PATH)/qemu-options.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $@")
SUBDIR_RULES=$(patsubst %,subdir-%, $(TARGET_DIRS))
SOFTMMU_SUBDIR_RULES=$(filter %-softmmu,$(SUBDIR_RULES))
$(SOFTMMU_SUBDIR_RULES): $(block-obj-y)
$(SOFTMMU_SUBDIR_RULES): $(crypto-obj-y)
$(SOFTMMU_SUBDIR_RULES): $(io-obj-y)
$(SOFTMMU_SUBDIR_RULES): config-all-devices.mak
subdir-%:
@@ -223,20 +185,21 @@ subdir-dtc:dtc/libfdt dtc/tests
dtc/%:
mkdir -p $@
$(SUBDIR_RULES): libqemuutil.a libqemustub.a $(common-obj-y) $(qom-obj-y) $(crypto-aes-obj-$(CONFIG_USER_ONLY))
$(SUBDIR_RULES): libqemuutil.a libqemustub.a $(common-obj-y)
ROMSUBDIR_RULES=$(patsubst %,romsubdir-%, $(ROMS))
# Only keep -O and -g cflags
romsubdir-%:
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C pc-bios/$* V="$(V)" TARGET_DIR="$*/" CFLAGS="$(filter -O% -g%,$(CFLAGS))",)
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C pc-bios/$* V="$(V)" TARGET_DIR="$*/",)
ALL_SUBDIRS=$(TARGET_DIRS) $(patsubst %,pc-bios/%, $(ROMS))
recurse-all: $(SUBDIR_RULES) $(ROMSUBDIR_RULES)
$(BUILD_DIR)/version.o: $(SRC_PATH)/version.rc config-host.h | $(BUILD_DIR)/version.lo
bt-host.o: QEMU_CFLAGS += $(BLUEZ_CFLAGS)
$(BUILD_DIR)/version.o: $(SRC_PATH)/version.rc $(BUILD_DIR)/config-host.h | $(BUILD_DIR)/version.lo
$(call quiet-command,$(WINDRES) -I$(BUILD_DIR) -o $@ $<," RC version.o")
$(BUILD_DIR)/version.lo: $(SRC_PATH)/version.rc config-host.h
$(BUILD_DIR)/version.lo: $(SRC_PATH)/version.rc $(BUILD_DIR)/config-host.h
$(call quiet-command,$(WINDRES) -I$(BUILD_DIR) -o $@ $<," RC version.lo")
Makefile: $(version-obj-y) $(version-lobj-y)
@@ -245,22 +208,25 @@ Makefile: $(version-obj-y) $(version-lobj-y)
# Build libraries
libqemustub.a: $(stub-obj-y)
libqemuutil.a: $(util-obj-y)
libqemuutil.a: $(util-obj-y) qapi-types.o qapi-visit.o
block-modules = $(foreach o,$(block-obj-m),"$(basename $(subst /,-,$o))",) NULL
util/module.o-cflags = -D'CONFIG_BLOCK_MODULES=$(block-modules)'
######################################################################
qemu-img.o: qemu-img-cmds.h
qemu-img$(EXESUF): qemu-img.o $(block-obj-y) $(crypto-obj-y) $(io-obj-y) $(qom-obj-y) libqemuutil.a libqemustub.a
qemu-nbd$(EXESUF): qemu-nbd.o $(block-obj-y) $(crypto-obj-y) $(io-obj-y) $(qom-obj-y) libqemuutil.a libqemustub.a
qemu-io$(EXESUF): qemu-io.o $(block-obj-y) $(crypto-obj-y) $(io-obj-y) $(qom-obj-y) libqemuutil.a libqemustub.a
qemu-img$(EXESUF): qemu-img.o $(block-obj-y) libqemuutil.a libqemustub.a
qemu-nbd$(EXESUF): qemu-nbd.o $(block-obj-y) libqemuutil.a libqemustub.a
qemu-io$(EXESUF): qemu-io.o $(block-obj-y) libqemuutil.a libqemustub.a
qemu-bridge-helper$(EXESUF): qemu-bridge-helper.o libqemuutil.a libqemustub.a
qemu-bridge-helper$(EXESUF): qemu-bridge-helper.o
fsdev/virtfs-proxy-helper$(EXESUF): fsdev/virtfs-proxy-helper.o fsdev/9p-marshal.o fsdev/9p-iov-marshal.o libqemuutil.a libqemustub.a
fsdev/virtfs-proxy-helper$(EXESUF): fsdev/virtfs-proxy-helper.o fsdev/virtio-9p-marshal.o libqemuutil.a libqemustub.a
fsdev/virtfs-proxy-helper$(EXESUF): LIBS += -lcap
qemu-img-cmds.h: $(SRC_PATH)/qemu-img-cmds.hx $(SRC_PATH)/scripts/hxtool
qemu-img-cmds.h: $(SRC_PATH)/qemu-img-cmds.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $@")
qemu-ga$(EXESUF): LIBS = $(LIBS_QGA)
@@ -272,51 +238,23 @@ qapi-py = $(SRC_PATH)/scripts/qapi.py $(SRC_PATH)/scripts/ordereddict.py
qga/qapi-generated/qga-qapi-types.c qga/qapi-generated/qga-qapi-types.h :\
$(SRC_PATH)/qga/qapi-schema.json $(SRC_PATH)/scripts/qapi-types.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py \
$(gen-out-type) -o qga/qapi-generated -p "qga-" $<, \
" GEN $@")
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py $(gen-out-type) -o qga/qapi-generated -p "qga-" < $<, " GEN $@")
qga/qapi-generated/qga-qapi-visit.c qga/qapi-generated/qga-qapi-visit.h :\
$(SRC_PATH)/qga/qapi-schema.json $(SRC_PATH)/scripts/qapi-visit.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py \
$(gen-out-type) -o qga/qapi-generated -p "qga-" $<, \
" GEN $@")
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py $(gen-out-type) -o qga/qapi-generated -p "qga-" < $<, " GEN $@")
qga/qapi-generated/qga-qmp-commands.h qga/qapi-generated/qga-qmp-marshal.c :\
$(SRC_PATH)/qga/qapi-schema.json $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py \
$(gen-out-type) -o qga/qapi-generated -p "qga-" $<, \
" GEN $@")
qapi-modules = $(SRC_PATH)/qapi-schema.json $(SRC_PATH)/qapi/common.json \
$(SRC_PATH)/qapi/block.json $(SRC_PATH)/qapi/block-core.json \
$(SRC_PATH)/qapi/event.json $(SRC_PATH)/qapi/introspect.json \
$(SRC_PATH)/qapi/crypto.json $(SRC_PATH)/qapi/rocker.json \
$(SRC_PATH)/qapi/trace.json
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py $(gen-out-type) -o qga/qapi-generated -p "qga-" < $<, " GEN $@")
qapi-types.c qapi-types.h :\
$(qapi-modules) $(SRC_PATH)/scripts/qapi-types.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py \
$(gen-out-type) -o "." -b $<, \
" GEN $@")
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-types.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py $(gen-out-type) -o "." -b < $<, " GEN $@")
qapi-visit.c qapi-visit.h :\
$(qapi-modules) $(SRC_PATH)/scripts/qapi-visit.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py \
$(gen-out-type) -o "." -b $<, \
" GEN $@")
qapi-event.c qapi-event.h :\
$(qapi-modules) $(SRC_PATH)/scripts/qapi-event.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-event.py \
$(gen-out-type) -o "." $<, \
" GEN $@")
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-visit.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py $(gen-out-type) -o "." -b < $<, " GEN $@")
qmp-commands.h qmp-marshal.c :\
$(qapi-modules) $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py \
$(gen-out-type) -o "." $<, \
" GEN $@")
qmp-introspect.h qmp-introspect.c :\
$(qapi-modules) $(SRC_PATH)/scripts/qapi-introspect.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-introspect.py \
$(gen-out-type) -o "." $<, \
" GEN $@")
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py $(gen-out-type) -m -o "." < $<, " GEN $@")
QGALIB_GEN=$(addprefix qga/qapi-generated/, qga-qapi-types.h qga-qapi-visit.h qga-qmp-commands.h)
$(qga-obj-y) qemu-ga.o: $(QGALIB_GEN)
@@ -324,49 +262,15 @@ $(qga-obj-y) qemu-ga.o: $(QGALIB_GEN)
qemu-ga$(EXESUF): $(qga-obj-y) libqemuutil.a libqemustub.a
$(call LINK, $^)
ifdef QEMU_GA_MSI_ENABLED
QEMU_GA_MSI=qemu-ga-$(ARCH).msi
msi: $(QEMU_GA_MSI)
$(QEMU_GA_MSI): qemu-ga.exe $(QGA_VSS_PROVIDER)
$(QEMU_GA_MSI): config-host.mak
$(QEMU_GA_MSI): $(SRC_PATH)/qga/installer/qemu-ga.wxs
$(call quiet-command,QEMU_GA_VERSION="$(QEMU_GA_VERSION)" QEMU_GA_MANUFACTURER="$(QEMU_GA_MANUFACTURER)" QEMU_GA_DISTRO="$(QEMU_GA_DISTRO)" BUILD_DIR="$(BUILD_DIR)" \
wixl -o $@ $(QEMU_GA_MSI_ARCH) $(QEMU_GA_MSI_WITH_VSS) $(QEMU_GA_MSI_MINGW_DLL_PATH) $<, " WIXL $@")
else
msi:
@echo "MSI build not configured or dependency resolution failed (reconfigure with --enable-guest-agent-msi option)"
endif
ifneq ($(EXESUF),)
.PHONY: qemu-ga
qemu-ga: qemu-ga$(EXESUF) $(QGA_VSS_PROVIDER) $(QEMU_GA_MSI)
endif
ivshmem-client$(EXESUF): $(ivshmem-client-obj-y) libqemuutil.a libqemustub.a
$(call LINK, $^)
ivshmem-server$(EXESUF): $(ivshmem-server-obj-y) libqemuutil.a libqemustub.a
$(call LINK, $^)
module_block.h: $(SRC_PATH)/scripts/modules/module_block.py config-host.mak
$(call quiet-command,$(PYTHON) $< $@ \
$(addprefix $(SRC_PATH)/,$(patsubst %.mo,%.c,$(block-obj-m))), \
" GEN $@")
clean:
# avoid old build problems by removing potentially incorrect old files
rm -f config.mak op-i386.h opc-i386.h gen-op-i386.h op-arm.h opc-arm.h gen-op-arm.h
rm -f qemu-options.def
rm -f *.msi
find . \( -name '*.l[oa]' -o -name '*.so' -o -name '*.dll' -o -name '*.mo' -o -name '*.[oda]' \) -type f -exec rm {} +
rm -f $(filter-out %.tlb,$(TOOLS)) $(HELPERS-y) qemu-ga TAGS cscope.* *.pod *~ */*~
rm -f fsdev/*.pod
rm -rf .libs */.libs
rm -f qemu-img-cmds.h
rm -f ui/shader/*-vert.h ui/shader/*-frag.h
@# May not be present in GENERATED_HEADERS
rm -f trace/generated-tracers-dtrace.dtrace*
rm -f trace/generated-tracers-dtrace.h*
@@ -378,7 +282,6 @@ clean:
if test -d $$d; then $(MAKE) -C $$d $@ || exit 1; fi; \
rm -f $$d/qemu-options.def; \
done
rm -f $(SUBDIR_DEVICES_MAK) config-all-devices.mak
VERSION ?= $(shell cat VERSION)
@@ -388,9 +291,9 @@ qemu-%.tar.bz2:
$(SRC_PATH)/scripts/make-release "$(SRC_PATH)" "$(patsubst qemu-%.tar.bz2,%,$@)"
distclean: clean
rm -f config-host.mak config-host.h* config-host.ld $(DOCS) qemu-options.texi qemu-img-cmds.texi qemu-monitor.texi qemu-monitor-info.texi
rm -f config-all-devices.mak config-all-disas.mak config.status
rm -f po/*.mo tests/qemu-iotests/common.env
rm -f config-host.mak config-host.h* config-host.ld $(DOCS) qemu-options.texi qemu-img-cmds.texi qemu-monitor.texi
rm -f config-all-devices.mak config-all-disas.mak
rm -f po/*.mo
rm -f roms/seabios/config.mak roms/vgabios/config.mak
rm -f qemu-doc.info qemu-doc.aux qemu-doc.cp qemu-doc.cps qemu-doc.dvi
rm -f qemu-doc.fn qemu-doc.fns qemu-doc.info qemu-doc.ky qemu-doc.kys
@@ -403,8 +306,8 @@ distclean: clean
rm -rf $$d || exit 1 ; \
done
rm -Rf .sdk
if test -f pixman/config.log; then $(MAKE) -C pixman distclean; fi
if test -f dtc/version_gen.h; then $(MAKE) $(DTC_MAKE_ARGS) clean; fi
if test -f pixman/config.log; then make -C pixman distclean; fi
if test -f dtc/version_gen.h; then make $(DTC_MAKE_ARGS) clean; fi
KEYMAPS=da en-gb et fr fr-ch is lt modifiers no pt-br sv \
ar de en-us fi fr-be hr it lv nl pl ru th \
@@ -413,21 +316,20 @@ bepo cz
ifdef INSTALL_BLOBS
BLOBS=bios.bin bios-256k.bin sgabios.bin vgabios.bin vgabios-cirrus.bin \
vgabios-stdvga.bin vgabios-vmware.bin vgabios-qxl.bin vgabios-virtio.bin \
acpi-dsdt.aml \
vgabios-stdvga.bin vgabios-vmware.bin vgabios-qxl.bin \
acpi-dsdt.aml q35-acpi-dsdt.aml \
ppc_rom.bin openbios-sparc32 openbios-sparc64 openbios-ppc QEMU,tcx.bin QEMU,cgthree.bin \
pxe-e1000.rom pxe-eepro100.rom pxe-ne2k_pci.rom \
pxe-pcnet.rom pxe-rtl8139.rom pxe-virtio.rom \
efi-e1000.rom efi-eepro100.rom efi-ne2k_pci.rom \
efi-pcnet.rom efi-rtl8139.rom efi-virtio.rom \
efi-e1000e.rom efi-vmxnet3.rom \
qemu-icon.bmp qemu_logo_no_text.svg \
bamboo.dtb petalogix-s3adsp1800.dtb petalogix-ml605.dtb \
multiboot.bin linuxboot.bin linuxboot_dma.bin kvmvapic.bin \
multiboot.bin linuxboot.bin kvmvapic.bin \
s390-zipl.rom \
s390-ccw.img \
spapr-rtas.bin slof.bin \
palcode-clipper \
u-boot.e500
palcode-clipper
else
BLOBS=
endif
@@ -435,7 +337,7 @@ endif
install-doc: $(DOCS)
$(INSTALL_DIR) "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DATA) qemu-doc.html qemu-tech.html "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DATA) $(SRC_PATH)/docs/qmp-commands.txt "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DATA) qmp-commands.txt "$(DESTDIR)$(qemu_docdir)"
ifdef CONFIG_POSIX
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man1"
$(INSTALL_DATA) qemu.1 "$(DESTDIR)$(mandir)/man1"
@@ -444,9 +346,6 @@ ifneq ($(TOOLS),)
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man8"
$(INSTALL_DATA) qemu-nbd.8 "$(DESTDIR)$(mandir)/man8"
endif
ifneq (,$(findstring qemu-ga,$(TOOLS)))
$(INSTALL_DATA) qemu-ga.8 "$(DESTDIR)$(mandir)/man8"
endif
endif
ifdef CONFIG_VIRTFS
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man1"
@@ -463,22 +362,27 @@ ifneq (,$(findstring qemu-ga,$(TOOLS)))
endif
endif
install-confdir:
$(INSTALL_DIR) "$(DESTDIR)$(qemu_confdir)"
install: all $(if $(BUILD_DOCS),install-doc) \
install-sysconfig: install-datadir install-confdir
$(INSTALL_DATA) $(SRC_PATH)/sysconfigs/target/target-x86_64.conf "$(DESTDIR)$(qemu_confdir)"
install: all $(if $(BUILD_DOCS),install-doc) install-sysconfig \
install-datadir install-localstatedir
$(INSTALL_DIR) "$(DESTDIR)$(bindir)"
ifneq ($(TOOLS),)
$(call install-prog,$(subst qemu-ga,qemu-ga$(EXESUF),$(TOOLS)),$(DESTDIR)$(bindir))
$(INSTALL_PROG) $(STRIP_OPT) $(TOOLS) "$(DESTDIR)$(bindir)"
endif
ifneq ($(CONFIG_MODULES),)
$(INSTALL_DIR) "$(DESTDIR)$(qemu_moddir)"
for s in $(modules-m:.mo=$(DSOSUF)); do \
t="$(DESTDIR)$(qemu_moddir)/$$(echo $$s | tr / -)"; \
$(INSTALL_LIB) $$s "$$t"; \
test -z "$(STRIP)" || $(STRIP) "$$t"; \
for s in $(patsubst %.mo,%$(DSOSUF),$(modules-m)); do \
$(INSTALL_PROG) $(STRIP_OPT) $$s "$(DESTDIR)$(qemu_moddir)/$$(echo $$s | tr / -)"; \
done
endif
ifneq ($(HELPERS-y),)
$(call install-prog,$(HELPERS-y),$(DESTDIR)$(libexecdir))
$(INSTALL_DIR) "$(DESTDIR)$(libexecdir)"
$(INSTALL_PROG) $(STRIP_OPT) $(HELPERS-y) "$(DESTDIR)$(libexecdir)"
endif
ifneq ($(BLOBS),)
set -e; for x in $(BLOBS); do \
@@ -492,7 +396,6 @@ endif
set -e; for x in $(KEYMAPS); do \
$(INSTALL_DATA) $(SRC_PATH)/pc-bios/keymaps/$$x "$(DESTDIR)$(qemu_datadir)/keymaps"; \
done
$(INSTALL_DATA) $(BUILD_DIR)/trace-events-all "$(DESTDIR)$(qemu_datadir)/trace-events-all"
for d in $(TARGET_DIRS); do \
$(MAKE) $(SUBDIR_MAKEFLAGS) TARGET_DIR=$$d/ -C $$d $@ || exit 1 ; \
done
@@ -501,36 +404,15 @@ endif
test speed: all
$(MAKE) -C tests/tcg $@
.PHONY: ctags
ctags:
rm -f tags
find "$(SRC_PATH)" -name '*.[hc]' -exec ctags --append {} +
.PHONY: TAGS
TAGS:
rm -f TAGS
rm -f $@
find "$(SRC_PATH)" -name '*.[hc]' -exec etags --append {} +
cscope:
rm -f "$(SRC_PATH)"/cscope.*
find "$(SRC_PATH)/" -name "*.[chsS]" -print | sed 's,^\./,,' > "$(SRC_PATH)/cscope.files"
cscope -b -i"$(SRC_PATH)/cscope.files"
# opengl shader programs
ui/shader/%-vert.h: $(SRC_PATH)/ui/shader/%.vert $(SRC_PATH)/scripts/shaderinclude.pl
@mkdir -p $(dir $@)
$(call quiet-command,\
perl $(SRC_PATH)/scripts/shaderinclude.pl $< > $@,\
" VERT $@")
ui/shader/%-frag.h: $(SRC_PATH)/ui/shader/%.frag $(SRC_PATH)/scripts/shaderinclude.pl
@mkdir -p $(dir $@)
$(call quiet-command,\
perl $(SRC_PATH)/scripts/shaderinclude.pl $< > $@,\
" FRAG $@")
ui/console-gl.o: $(SRC_PATH)/ui/console-gl.c \
ui/shader/texture-blit-vert.h ui/shader/texture-blit-frag.h
rm -f ./cscope.*
find "$(SRC_PATH)" -name "*.[chsS]" -print | sed 's,^\./,,' > ./cscope.files
cscope -b
# documentation
MAKEINFO=makeinfo
@@ -549,26 +431,25 @@ TEXIFLAG=$(if $(V),,--quiet)
%.pdf: %.texi
$(call quiet-command,texi2pdf $(TEXIFLAG) -I . $<," GEN $@")
qemu-options.texi: $(SRC_PATH)/qemu-options.hx $(SRC_PATH)/scripts/hxtool
qemu-options.texi: $(SRC_PATH)/qemu-options.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -t < $< > $@," GEN $@")
qemu-monitor.texi: $(SRC_PATH)/hmp-commands.hx $(SRC_PATH)/scripts/hxtool
qemu-monitor.texi: $(SRC_PATH)/hmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -t < $< > $@," GEN $@")
qemu-monitor-info.texi: $(SRC_PATH)/hmp-commands-info.hx $(SRC_PATH)/scripts/hxtool
qmp-commands.txt: $(SRC_PATH)/qmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -q < $< > $@," GEN $@")
qemu-img-cmds.texi: $(SRC_PATH)/qemu-img-cmds.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -t < $< > $@," GEN $@")
qemu-img-cmds.texi: $(SRC_PATH)/qemu-img-cmds.hx $(SRC_PATH)/scripts/hxtool
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -t < $< > $@," GEN $@")
qemu.1: qemu-doc.texi qemu-options.texi qemu-monitor.texi qemu-monitor-info.texi
qemu.1: qemu-doc.texi qemu-options.texi qemu-monitor.texi
$(call quiet-command, \
perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< qemu.pod && \
$(POD2MAN) --section=1 --center=" " --release=" " qemu.pod > $@, \
" GEN $@")
qemu.1: qemu-option-trace.texi
qemu-img.1: qemu-img.texi qemu-option-trace.texi qemu-img-cmds.texi
qemu-img.1: qemu-img.texi qemu-img-cmds.texi
$(call quiet-command, \
perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< qemu-img.pod && \
$(POD2MAN) --section=1 --center=" " --release=" " qemu-img.pod > $@, \
@@ -580,27 +461,20 @@ fsdev/virtfs-proxy-helper.1: fsdev/virtfs-proxy-helper.texi
$(POD2MAN) --section=1 --center=" " --release=" " fsdev/virtfs-proxy-helper.pod > $@, \
" GEN $@")
qemu-nbd.8: qemu-nbd.texi qemu-option-trace.texi
qemu-nbd.8: qemu-nbd.texi
$(call quiet-command, \
perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< qemu-nbd.pod && \
$(POD2MAN) --section=8 --center=" " --release=" " qemu-nbd.pod > $@, \
" GEN $@")
qemu-ga.8: qemu-ga.texi
$(call quiet-command, \
perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< qemu-ga.pod && \
$(POD2MAN) --section=8 --center=" " --release=" " qemu-ga.pod > $@, \
" GEN $@")
dvi: qemu-doc.dvi qemu-tech.dvi
html: qemu-doc.html qemu-tech.html
info: qemu-doc.info qemu-tech.info
pdf: qemu-doc.pdf qemu-tech.pdf
qemu-doc.dvi qemu-doc.html qemu-doc.info qemu-doc.pdf: \
qemu-img.texi qemu-nbd.texi qemu-options.texi qemu-option-trace.texi \
qemu-monitor.texi qemu-img-cmds.texi qemu-ga.texi \
qemu-monitor-info.texi
qemu-img.texi qemu-nbd.texi qemu-options.texi \
qemu-monitor.texi qemu-img-cmds.texi
ifdef CONFIG_WIN32
@@ -625,7 +499,7 @@ installer: $(INSTALLER)
INSTDIR=/tmp/qemu-nsis
$(INSTALLER): $(SRC_PATH)/qemu.nsi
$(MAKE) install prefix=${INSTDIR}
make install prefix=${INSTDIR}
ifdef SIGNCODE
(cd ${INSTDIR}; \
for i in *.exe; do \
@@ -650,7 +524,6 @@ endif # SIGNCODE
$(if $(DLL_PATH),-DDLLDIR="$(DLL_PATH)") \
-DSRCDIR="$(SRC_PATH)" \
-DOUTFILE="$(INSTALLER)" \
-DDISPLAYVERSION="$(VERSION)" \
$(SRC_PATH)/qemu.nsi
rm -r ${INSTDIR}
ifdef SIGNCODE
@@ -660,49 +533,10 @@ endif # CONFIG_WIN
# Add a dependency on the generated files, so that they are always
# rebuilt before other object files
ifneq ($(filter-out $(UNCHECKED_GOALS),$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
ifneq ($(filter-out %clean,$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
Makefile: $(GENERATED_HEADERS)
endif
# Include automatically generated dependency files
# Dependencies in Makefile.objs files come from our recursive subdir rules
-include $(wildcard *.d tests/*.d)
include $(SRC_PATH)/tests/docker/Makefile.include
.PHONY: help
help:
@echo 'Generic targets:'
@echo ' all - Build all'
@echo ' dir/file.o - Build specified target only'
@echo ' install - Install QEMU, documentation and tools'
@echo ' ctags/TAGS - Generate tags file for editors'
@echo ' cscope - Generate cscope index'
@echo ''
@$(if $(TARGET_DIRS), \
echo 'Architecture specific targets:'; \
$(foreach t, $(TARGET_DIRS), \
printf " %-30s - Build for %s\\n" $(patsubst %,subdir-%,$(t)) $(t);) \
echo '')
@echo 'Cleaning targets:'
@echo ' clean - Remove most generated files but keep the config'
@echo ' distclean - Remove all generated files'
@echo ' dist - Build a distributable tarball'
@echo ''
@echo 'Test targets:'
@echo ' check - Run all tests (check-help for details)'
@echo ' docker - Help about targets running tests inside Docker containers'
@echo ''
@echo 'Documentation targets:'
@echo ' dvi html info pdf'
@echo ' - Build documentation in specified format'
@echo ''
ifdef CONFIG_WIN32
@echo 'Windows targets:'
@echo ' installer - Build NSIS-based installer for qemu-ga'
ifdef QEMU_GA_MSI_ENABLED
@echo ' msi - Build MSI-based installer for qemu-ga'
endif
@echo ''
endif
@echo ' make V=0|1 [targets] 0 => quiet build (default), 1 => verbose build'


@@ -1,39 +1,36 @@
#######################################################################
# Common libraries for tools and emulators
stub-obj-y = stubs/ crypto/
util-obj-y = util/ qobject/ qapi/
util-obj-y += qmp-introspect.o qapi-types.o qapi-visit.o qapi-event.o
stub-obj-y = stubs/
util-obj-y = util/ qobject/ qapi/ trace/
#######################################################################
# block-obj-y is code used by both qemu system emulation and qemu-img
block-obj-y = async.o thread-pool.o
block-obj-y += nbd/
block-obj-y += block.o blockjob.o
block-obj-y += nbd.o block.o blockjob.o
block-obj-y += main-loop.o iohandler.o qemu-timer.o
block-obj-$(CONFIG_POSIX) += aio-posix.o
block-obj-$(CONFIG_WIN32) += aio-win32.o
block-obj-y += block/
block-obj-y += qapi-types.o qapi-visit.o
block-obj-y += qemu-io-cmds.o
block-obj-$(CONFIG_REPLICATION) += replication.o
block-obj-y += qemu-coroutine.o qemu-coroutine-lock.o qemu-coroutine-io.o
block-obj-y += qemu-coroutine-sleep.o
block-obj-y += coroutine-$(CONFIG_COROUTINE_BACKEND).o
block-obj-m = block/
#######################################################################
# crypto-obj-y is code used by both qemu system emulation and qemu-img
crypto-obj-y = crypto/
crypto-aes-obj-y = crypto/
######################################################################
# smartcard
#######################################################################
# qom-obj-y is code used by both qemu system emulation and qemu-img
qom-obj-y = qom/
#######################################################################
# io-obj-y is code used by both qemu system emulation and qemu-img
io-obj-y = io/
libcacard-y += libcacard/cac.o libcacard/event.o
libcacard-y += libcacard/vcard.o libcacard/vreader.o
libcacard-y += libcacard/vcard_emul_nss.o
libcacard-y += libcacard/vcard_emul_type.o
libcacard-y += libcacard/card_7816.o
libcacard-y += libcacard/vcardt.o
######################################################################
# Target independent part of system emulation. The long term path is to
@@ -50,25 +47,26 @@ common-obj-$(CONFIG_POSIX) += os-posix.o
common-obj-$(CONFIG_LINUX) += fsdev/
common-obj-y += migration/
common-obj-y += migration.o migration-tcp.o
common-obj-y += vmstate.o
common-obj-y += qemu-file.o
common-obj-$(CONFIG_RDMA) += migration-rdma.o
common-obj-y += qemu-char.o #aio.o
common-obj-y += page_cache.o
common-obj-y += block-migration.o
common-obj-y += page_cache.o xbzrle.o
common-obj-$(CONFIG_POSIX) += migration-exec.o migration-unix.o migration-fd.o
common-obj-$(CONFIG_SPICE) += spice-qemu-char.o
common-obj-y += audio/
common-obj-y += hw/
common-obj-y += accel.o
common-obj-y += replay/
common-obj-y += ui/
common-obj-y += bt-host.o bt-vhci.o
bt-host.o-cflags := $(BLUEZ_CFLAGS)
common-obj-y += dma-helpers.o
common-obj-y += vl.o
vl.o-cflags := $(GPROF_CFLAGS) $(SDL_CFLAGS)
common-obj-y += tpm.o
common-obj-$(CONFIG_SLIRP) += slirp/
@@ -77,18 +75,23 @@ common-obj-y += backends/
common-obj-$(CONFIG_SECCOMP) += qemu-seccomp.o
common-obj-$(CONFIG_FDT) += device_tree.o
common-obj-$(CONFIG_SMARTCARD_NSS) += $(libcacard-y)
######################################################################
# qapi
common-obj-y += qmp-marshal.o
common-obj-y += qmp-introspect.o
common-obj-y += qmp.o hmp.o
endif
######################################################################
# some qapi visitors are used by both system and user emulation:
common-obj-y += qapi-visit.o qapi-types.o
#######################################################################
# Target-independent parts used in system and user emulation
common-obj-y += qemu-log.o
common-obj-y += tcg-runtime.o
common-obj-y += hw/
common-obj-y += qom/
@@ -99,64 +102,10 @@ common-obj-y += disas/
version-obj-$(CONFIG_WIN32) += $(BUILD_DIR)/version.o
version-lobj-$(CONFIG_WIN32) += $(BUILD_DIR)/version.lo
######################################################################
# tracing
util-obj-y += trace/
target-obj-y += trace/
######################################################################
# guest agent
# FIXME: a few definitions from qapi-types.o/qapi-visit.o are needed
# by libqemuutil.a. These should be moved to a separate .json schema.
qga-obj-y = qga/
qga-obj-y = qga/ qapi-types.o qapi-visit.o
qga-vss-dll-obj-y = qga/
######################################################################
# contrib
ivshmem-client-obj-y = contrib/ivshmem-client/
ivshmem-server-obj-y = contrib/ivshmem-server/
######################################################################
trace-events-y = trace-events
trace-events-y += util/trace-events
trace-events-y += crypto/trace-events
trace-events-y += io/trace-events
trace-events-y += migration/trace-events
trace-events-y += block/trace-events
trace-events-y += hw/block/trace-events
trace-events-y += hw/char/trace-events
trace-events-y += hw/intc/trace-events
trace-events-y += hw/net/trace-events
trace-events-y += hw/virtio/trace-events
trace-events-y += hw/audio/trace-events
trace-events-y += hw/misc/trace-events
trace-events-y += hw/usb/trace-events
trace-events-y += hw/scsi/trace-events
trace-events-y += hw/nvram/trace-events
trace-events-y += hw/display/trace-events
trace-events-y += hw/input/trace-events
trace-events-y += hw/timer/trace-events
trace-events-y += hw/dma/trace-events
trace-events-y += hw/sparc/trace-events
trace-events-y += hw/sd/trace-events
trace-events-y += hw/isa/trace-events
trace-events-y += hw/i386/trace-events
trace-events-y += hw/9pfs/trace-events
trace-events-y += hw/ppc/trace-events
trace-events-y += hw/pci/trace-events
trace-events-y += hw/s390x/trace-events
trace-events-y += hw/vfio/trace-events
trace-events-y += hw/acpi/trace-events
trace-events-y += hw/arm/trace-events
trace-events-y += hw/alpha/trace-events
trace-events-y += ui/trace-events
trace-events-y += audio/trace-events
trace-events-y += net/trace-events
trace-events-y += target-i386/trace-events
trace-events-y += target-sparc/trace-events
trace-events-y += target-s390x/trace-events
trace-events-y += target-ppc/trace-events
trace-events-y += qom/trace-events
trace-events-y += linux-user/trace-events


@@ -1,13 +1,11 @@
# -*- Mode: makefile -*-
BUILD_DIR?=$(CURDIR)/..
include ../config-host.mak
include config-target.mak
include config-devices.mak
include $(SRC_PATH)/rules.mak
$(call set-vpath, $(SRC_PATH):$(BUILD_DIR))
$(call set-vpath, $(SRC_PATH))
ifdef CONFIG_LINUX
QEMU_CFLAGS += -I../linux-headers
endif
@@ -18,29 +16,30 @@ QEMU_CFLAGS+=-I$(SRC_PATH)/include
ifdef CONFIG_USER_ONLY
# user emulator name
QEMU_PROG=qemu-$(TARGET_NAME)
QEMU_PROG_BUILD = $(QEMU_PROG)
else
# system emulator name
QEMU_PROG=qemu-system-$(TARGET_NAME)$(EXESUF)
ifneq (,$(findstring -mwindows,$(libs_softmmu)))
# Terminate program name with a 'w' because the linker builds a windows executable.
QEMU_PROGW=qemu-system-$(TARGET_NAME)w$(EXESUF)
$(QEMU_PROG): $(QEMU_PROGW)
$(call quiet-command,$(OBJCOPY) --subsystem console $(QEMU_PROGW) $(QEMU_PROG)," GEN $(TARGET_DIR)$(QEMU_PROG)")
QEMU_PROG_BUILD = $(QEMU_PROGW)
else
QEMU_PROG_BUILD = $(QEMU_PROG)
endif
endif # windows executable
QEMU_PROG=qemu-system-$(TARGET_NAME)$(EXESUF)
endif
PROGS=$(QEMU_PROG) $(QEMU_PROGW)
PROGS=$(QEMU_PROG)
ifdef QEMU_PROGW
PROGS+=$(QEMU_PROGW)
endif
STPFILES=
ifdef CONFIG_LINUX_USER
PROGS+=$(QEMU_PROG)-binfmt
endif
config-target.h: config-target.h-timestamp
config-target.h-timestamp: config-target.mak
ifdef CONFIG_TRACE_SYSTEMTAP
stap: $(QEMU_PROG).stp-installed $(QEMU_PROG).stp $(QEMU_PROG)-simpletrace.stp
stap: $(QEMU_PROG).stp-installed $(QEMU_PROG).stp
ifdef CONFIG_USER_ONLY
TARGET_TYPE=user
@@ -48,31 +47,24 @@ else
TARGET_TYPE=system
endif
$(QEMU_PROG).stp-installed: $(BUILD_DIR)/trace-events-all
$(QEMU_PROG).stp-installed: $(SRC_PATH)/trace-events
$(call quiet-command,$(TRACETOOL) \
--format=stap \
--backends=$(TRACE_BACKENDS) \
--backend=$(TRACE_BACKEND) \
--binary=$(bindir)/$(QEMU_PROG) \
--target-name=$(TARGET_NAME) \
--target-type=$(TARGET_TYPE) \
< $< > $@," GEN $(TARGET_DIR)$(QEMU_PROG).stp-installed")
$(QEMU_PROG).stp: $(BUILD_DIR)/trace-events-all
$(QEMU_PROG).stp: $(SRC_PATH)/trace-events
$(call quiet-command,$(TRACETOOL) \
--format=stap \
--backends=$(TRACE_BACKENDS) \
--backend=$(TRACE_BACKEND) \
--binary=$(realpath .)/$(QEMU_PROG) \
--target-name=$(TARGET_NAME) \
--target-type=$(TARGET_TYPE) \
< $< > $@," GEN $(TARGET_DIR)$(QEMU_PROG).stp")
$(QEMU_PROG)-simpletrace.stp: $(BUILD_DIR)/trace-events-all
$(call quiet-command,$(TRACETOOL) \
--format=simpletrace-stap \
--backends=$(TRACE_BACKENDS) \
--probe-prefix=qemu.$(TARGET_TYPE).$(TARGET_NAME) \
< $< > $@," GEN $(TARGET_DIR)$(QEMU_PROG)-simpletrace.stp")
else
stap:
endif
@@ -85,11 +77,8 @@ all: $(PROGS) stap
#########################################################
# cpu emulator library
obj-y = exec.o translate-all.o cpu-exec.o
obj-y += translate-common.o
obj-y += cpu-exec-common.o
obj-y += tcg/tcg.o tcg/tcg-op.o tcg/optimize.o
obj-y += tcg/tcg.o tcg/optimize.o
obj-$(CONFIG_TCG_INTERPRETER) += tci.o
obj-y += tcg/tcg-common.o
obj-$(CONFIG_TCG_INTERPRETER) += disas/tci.o
obj-y += fpu/softfloat.o
obj-y += target-$(TARGET_BASE_ARCH)/
@@ -97,24 +86,18 @@ obj-y += disas.o
obj-$(call notempty,$(TARGET_XML_FILES)) += gdbstub-xml.o
obj-$(call lnot,$(CONFIG_KVM)) += kvm-stub.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/decContext.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/decNumber.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/dpd/decimal32.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/dpd/decimal64.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/dpd/decimal128.o
#########################################################
# Linux user emulator target
ifdef CONFIG_LINUX_USER
QEMU_CFLAGS+=-I$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR) \
-I$(SRC_PATH)/linux-user/host/$(ARCH) \
-I$(SRC_PATH)/linux-user
QEMU_CFLAGS+=-I$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR) -I$(SRC_PATH)/linux-user
obj-y += linux-user/
obj-y += gdbstub.o thunk.o user-exec.o
obj-binfmt-y += linux-user/
endif #CONFIG_LINUX_USER
#########################################################
@@ -122,8 +105,7 @@ endif #CONFIG_LINUX_USER
ifdef CONFIG_BSD_USER
QEMU_CFLAGS+=-I$(SRC_PATH)/bsd-user -I$(SRC_PATH)/bsd-user/$(TARGET_ABI_DIR) \
-I$(SRC_PATH)/bsd-user/$(HOST_VARIANT_DIR)
QEMU_CFLAGS+=-I$(SRC_PATH)/bsd-user -I$(SRC_PATH)/bsd-user/$(TARGET_ABI_DIR)
obj-y += bsd-user/
obj-y += gdbstub.o user-exec.o
@@ -133,21 +115,19 @@ endif #CONFIG_BSD_USER
#########################################################
# System emulator target
ifdef CONFIG_SOFTMMU
obj-y += arch_init.o cpus.o monitor.o gdbstub.o balloon.o ioport.o numa.o
obj-y += qtest.o bootdevice.o
obj-y += arch_init.o cpus.o monitor.o gdbstub.o balloon.o ioport.o
obj-y += qtest.o
obj-y += hw/
obj-$(CONFIG_FDT) += device_tree.o
obj-$(CONFIG_KVM) += kvm-all.o
obj-y += memory.o cputlb.o
obj-y += memory.o savevm.o cputlb.o
obj-y += memory_mapping.o
obj-y += dump.o
obj-y += migration/ram.o migration/savevm.o
LIBS := $(libs_softmmu) $(LIBS)
LIBS+=$(libs_softmmu)
# xen support
obj-$(CONFIG_XEN) += xen-common.o
obj-$(CONFIG_XEN_I386) += xen-hvm.o xen-mapcache.o
obj-$(call lnot,$(CONFIG_XEN)) += xen-common-stub.o
obj-$(call lnot,$(CONFIG_XEN_I386)) += xen-hvm-stub.o
obj-$(CONFIG_XEN) += xen-all.o xen-mapcache.o
obj-$(call lnot,$(CONFIG_XEN)) += xen-stub.o
# Hardware support
ifeq ($(TARGET_NAME), sparc64)
@@ -156,75 +136,83 @@ else
obj-y += hw/$(TARGET_BASE_ARCH)/
endif
GENERATED_HEADERS += hmp-commands.h hmp-commands-info.h
GENERATED_HEADERS += hmp-commands.h qmp-commands-old.h
endif # CONFIG_SOFTMMU
# Workaround for http://gcc.gnu.org/PR55489, see configure.
%/translate.o: QEMU_CFLAGS += $(TRANSLATE_OPT_CFLAGS)
ifdef CONFIG_LINUX_USER
dummy := $(call unnest-vars,,obj-y obj-binfmt-y)
else
dummy := $(call unnest-vars,,obj-y)
all-obj-y := $(obj-y)
endif
# we are making another call to unnest-vars with different vars, protect obj-y,
# it can be overridden in subdir Makefile.objs
obj-y-save := $(obj-y)
target-obj-y :=
block-obj-y :=
common-obj-y :=
include $(SRC_PATH)/Makefile.objs
dummy := $(call unnest-vars,,target-obj-y)
target-obj-y-save := $(target-obj-y)
dummy := $(call unnest-vars,.., \
block-obj-y \
block-obj-m \
crypto-obj-y \
crypto-aes-obj-y \
qom-obj-y \
io-obj-y \
common-obj-y \
common-obj-m)
target-obj-y := $(target-obj-y-save)
all-obj-y += $(common-obj-y)
all-obj-y += $(target-obj-y)
all-obj-y += $(qom-obj-y)
# Now restore obj-y
obj-y := $(obj-y-save)
all-obj-y = $(obj-y) $(common-obj-y)
all-obj-$(CONFIG_SOFTMMU) += $(block-obj-y)
all-obj-$(CONFIG_USER_ONLY) += $(crypto-aes-obj-y)
all-obj-$(CONFIG_SOFTMMU) += $(crypto-obj-y)
all-obj-$(CONFIG_SOFTMMU) += $(io-obj-y)
$(QEMU_PROG_BUILD): config-devices.mak
# build either PROG or PROGW
$(QEMU_PROG_BUILD): $(all-obj-y) ../libqemuutil.a ../libqemustub.a
$(call LINK, $(filter-out %.mak, $^))
ifdef CONFIG_DARWIN
$(call quiet-command,Rez -append $(SRC_PATH)/pc-bios/qemu.rsrc -o $@," REZ $(TARGET_DIR)$@")
$(call quiet-command,SetFile -a C $@," SETFILE $(TARGET_DIR)$@")
ifndef CONFIG_HAIKU
LIBS+=-lm
endif
ifdef QEMU_PROGW
# The linker builds a windows executable. Make also a console executable.
$(QEMU_PROGW): $(all-obj-y) ../libqemuutil.a ../libqemustub.a
$(call LINK,$^)
$(QEMU_PROG): $(QEMU_PROGW)
$(call quiet-command,$(OBJCOPY) --subsystem console $(QEMU_PROGW) $(QEMU_PROG)," GEN $(TARGET_DIR)$(QEMU_PROG)")
else
$(QEMU_PROG): $(all-obj-y) ../libqemuutil.a ../libqemustub.a
$(call LINK,$^)
endif
$(QEMU_PROG)-binfmt: $(obj-binfmt-y)
$(call LINK,$^)
gdbstub-xml.c: $(TARGET_XML_FILES) $(SRC_PATH)/scripts/feature_to_c.sh
$(call quiet-command,rm -f $@ && $(SHELL) $(SRC_PATH)/scripts/feature_to_c.sh $@ $(TARGET_XML_FILES)," GEN $(TARGET_DIR)$@")
hmp-commands.h: $(SRC_PATH)/hmp-commands.hx $(SRC_PATH)/scripts/hxtool
hmp-commands.h: $(SRC_PATH)/hmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")
hmp-commands-info.h: $(SRC_PATH)/hmp-commands-info.hx $(SRC_PATH)/scripts/hxtool
qmp-commands-old.h: $(SRC_PATH)/qmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")
clean: clean-target
clean:
rm -f *.a *~ $(PROGS)
rm -f $(shell find . -name '*.[od]')
rm -f hmp-commands.h gdbstub-xml.c
rm -f hmp-commands.h qmp-commands-old.h gdbstub-xml.c
ifdef CONFIG_TRACE_SYSTEMTAP
rm -f *.stp
endif
install: all
ifneq ($(PROGS),)
$(call install-prog,$(PROGS),$(DESTDIR)$(bindir))
$(INSTALL) -m 755 $(PROGS) "$(DESTDIR)$(bindir)"
ifneq ($(STRIP),)
$(STRIP) $(patsubst %,"$(DESTDIR)$(bindir)/%",$(PROGS))
endif
endif
ifdef CONFIG_TRACE_SYSTEMTAP
$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset"
$(INSTALL_DATA) $(QEMU_PROG).stp-installed "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset/$(QEMU_PROG).stp"
$(INSTALL_DATA) $(QEMU_PROG)-simpletrace.stp "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset/$(QEMU_PROG)-simpletrace.stp"
endif
GENERATED_HEADERS += config-target.h

README

@@ -1,107 +1,3 @@
QEMU README
===========
Read the documentation in qemu-doc.html or on http://wiki.qemu-project.org
QEMU is a generic and open source machine & userspace emulator and
virtualizer.
QEMU is capable of emulating a complete machine in software without any
need for hardware virtualization support. By using dynamic translation,
it achieves very good performance. QEMU can also integrate with the Xen
and KVM hypervisors to provide emulated hardware while allowing the
hypervisor to manage the CPU. With hypervisor support, QEMU can achieve
near native performance for CPUs. When QEMU emulates CPUs directly it is
capable of running operating systems made for one machine (e.g. an ARMv7
board) on a different machine (e.g. an x86_64 PC board).
QEMU is also capable of providing userspace API virtualization for Linux
and BSD kernel interfaces. This allows binaries compiled against one
architecture ABI (e.g. the Linux PPC64 ABI) to be run on a host using a
different architecture ABI (e.g. the Linux x86_64 ABI). This does not
involve any hardware emulation, simply CPU and syscall emulation.
QEMU aims to fit into a variety of use cases. It can be invoked directly
by users wishing to have full control over its behaviour and settings.
It also aims to facilitate integration into higher level management
layers, by providing a stable command line interface and monitor API.
It is commonly invoked indirectly via the libvirt library when using
open source applications such as oVirt, OpenStack and virt-manager.
QEMU as a whole is released under the GNU General Public License,
version 2. For full licensing details, consult the LICENSE file.
Building
========
QEMU is multi-platform software intended to be buildable on all modern
Linux platforms, OS-X, Win32 (via the Mingw64 toolchain) and a variety
of other UNIX targets. The simple steps to build QEMU are:
mkdir build
cd build
../configure
make
Complete details of the process for building and configuring QEMU for
all supported host platforms can be found in the qemu-tech.html file.
Additional information can also be found online via the QEMU website:
http://qemu-project.org/Hosts/Linux
http://qemu-project.org/Hosts/W32
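As a purely illustrative example (the available options depend on the host and are listed by '../configure --help'), a build limited to a single system emulator target could be configured as:

  mkdir build
  cd build
  ../configure --target-list=x86_64-softmmu
  make

Omitting --target-list builds every supported target, as in the generic steps above.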
Submitting patches
==================
The QEMU source code is maintained under the GIT version control system.
git clone git://git.qemu-project.org/qemu.git
When submitting patches, the preferred approach is to use 'git
format-patch' and/or 'git send-email' to format & send the mail to the
qemu-devel@nongnu.org mailing list. All patches submitted must contain
a 'Signed-off-by' line from the author. Patches should follow the
guidelines set out in the HACKING and CODING_STYLE files.
Additional information on submitting patches can be found online via
the QEMU website
http://qemu-project.org/Contribute/SubmitAPatch
http://qemu-project.org/Contribute/TrivialPatches
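As a minimal sketch of that workflow (the list address is taken from above, everything else is illustrative), a single commit could be sent with:

  git format-patch -1
  git send-email --to=qemu-devel@nongnu.org 0001-*.patch

Note that 'git send-email' must first be configured with a working SMTP setup; see its manual page.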
Bug reporting
=============
The QEMU project uses Launchpad as its primary upstream bug tracker. Bugs
found when running code built from QEMU git or upstream released sources
should be reported via:
https://bugs.launchpad.net/qemu/
If using QEMU via an operating system vendor pre-built binary package, it
is preferable to report bugs to the vendor's own bug tracker first. If
the bug is also known to affect latest upstream code, it can also be
reported via launchpad.
For additional information on bug reporting consult:
http://qemu-project.org/Contribute/ReportABug
Contact
=======
The QEMU community can be contacted in a number of ways, with the two
main methods being email and IRC
- qemu-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/qemu-devel
- #qemu on irc.oftc.net
Information on additional methods of contacting the community can be
found online via the QEMU website:
http://qemu-project.org/Contribute/StartHere
-- End
- QEMU team


@@ -1 +1 @@
2.7.50
2.0.0

accel.c

@@ -1,156 +0,0 @@
/*
* QEMU System Emulator, accelerator interfaces
*
* Copyright (c) 2003-2008 Fabrice Bellard
* Copyright (c) 2014 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "sysemu/accel.h"
#include "hw/boards.h"
#include "qemu-common.h"
#include "sysemu/arch_init.h"
#include "sysemu/sysemu.h"
#include "sysemu/kvm.h"
#include "sysemu/qtest.h"
#include "hw/xen/xen.h"
#include "qom/object.h"
#include "hw/boards.h"
int tcg_tb_size;
static bool tcg_allowed = true;
static int tcg_init(MachineState *ms)
{
tcg_exec_init(tcg_tb_size * 1024 * 1024);
return 0;
}
static const TypeInfo accel_type = {
.name = TYPE_ACCEL,
.parent = TYPE_OBJECT,
.class_size = sizeof(AccelClass),
.instance_size = sizeof(AccelState),
};
/* Lookup AccelClass from opt_name. Returns NULL if not found */
static AccelClass *accel_find(const char *opt_name)
{
char *class_name = g_strdup_printf(ACCEL_CLASS_NAME("%s"), opt_name);
AccelClass *ac = ACCEL_CLASS(object_class_by_name(class_name));
g_free(class_name);
return ac;
}
static int accel_init_machine(AccelClass *acc, MachineState *ms)
{
ObjectClass *oc = OBJECT_CLASS(acc);
const char *cname = object_class_get_name(oc);
AccelState *accel = ACCEL(object_new(cname));
int ret;
ms->accelerator = accel;
*(acc->allowed) = true;
ret = acc->init_machine(ms);
if (ret < 0) {
ms->accelerator = NULL;
*(acc->allowed) = false;
object_unref(OBJECT(accel));
}
return ret;
}
void configure_accelerator(MachineState *ms)
{
const char *p;
char buf[10];
int ret;
bool accel_initialised = false;
bool init_failed = false;
AccelClass *acc = NULL;
p = qemu_opt_get(qemu_get_machine_opts(), "accel");
if (p == NULL) {
/* Use the default "accelerator", tcg */
p = "tcg";
}
while (!accel_initialised && *p != '\0') {
if (*p == ':') {
p++;
}
p = get_opt_name(buf, sizeof(buf), p, ':');
acc = accel_find(buf);
if (!acc) {
fprintf(stderr, "\"%s\" accelerator not found.\n", buf);
continue;
}
if (acc->available && !acc->available()) {
printf("%s not supported for this target\n",
acc->name);
continue;
}
ret = accel_init_machine(acc, ms);
if (ret < 0) {
init_failed = true;
fprintf(stderr, "failed to initialize %s: %s\n",
acc->name,
strerror(-ret));
} else {
accel_initialised = true;
}
}
if (!accel_initialised) {
if (!init_failed) {
fprintf(stderr, "No accelerator found!\n");
}
exit(1);
}
if (init_failed) {
fprintf(stderr, "Back to %s accelerator.\n", acc->name);
}
}
static void tcg_accel_class_init(ObjectClass *oc, void *data)
{
AccelClass *ac = ACCEL_CLASS(oc);
ac->name = "tcg";
ac->init_machine = tcg_init;
ac->allowed = &tcg_allowed;
}
#define TYPE_TCG_ACCEL ACCEL_CLASS_NAME("tcg")
static const TypeInfo tcg_accel_type = {
.name = TYPE_TCG_ACCEL,
.parent = TYPE_ACCEL,
.class_init = tcg_accel_class_init,
};
static void register_accel_types(void)
{
type_register_static(&accel_type);
type_register_static(&tcg_accel_type);
}
type_init(register_accel_types);
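For context, the configure_accelerator() loop above walks the colon-separated value of the -machine accel= option, trying each accelerator class in turn and moving on to the next entry when initialisation fails; when no option is given it defaults to "tcg". An illustrative invocation that exercises this fallback (all other guest options omitted) would be:

  qemu-system-x86_64 -machine accel=kvm:tcg

If KVM cannot be initialised here, the loop reports the failure and continues with TCG, matching the "Back to %s accelerator." message in the code.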


@@ -13,14 +13,10 @@
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "block/block.h"
#include "qemu/queue.h"
#include "qemu/sockets.h"
#ifdef CONFIG_EPOLL_CREATE1
#include <sys/epoll.h>
#endif
struct AioHandler
{
@@ -28,167 +24,11 @@ struct AioHandler
IOHandler *io_read;
IOHandler *io_write;
int deleted;
int pollfds_idx;
void *opaque;
bool is_external;
QLIST_ENTRY(AioHandler) node;
};
#ifdef CONFIG_EPOLL_CREATE1
/* The fd number threshold to switch to epoll */
#define EPOLL_ENABLE_THRESHOLD 64
static void aio_epoll_disable(AioContext *ctx)
{
ctx->epoll_available = false;
if (!ctx->epoll_enabled) {
return;
}
ctx->epoll_enabled = false;
close(ctx->epollfd);
}
static inline int epoll_events_from_pfd(int pfd_events)
{
return (pfd_events & G_IO_IN ? EPOLLIN : 0) |
(pfd_events & G_IO_OUT ? EPOLLOUT : 0) |
(pfd_events & G_IO_HUP ? EPOLLHUP : 0) |
(pfd_events & G_IO_ERR ? EPOLLERR : 0);
}
static bool aio_epoll_try_enable(AioContext *ctx)
{
AioHandler *node;
struct epoll_event event;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
int r;
if (node->deleted || !node->pfd.events) {
continue;
}
event.events = epoll_events_from_pfd(node->pfd.events);
event.data.ptr = node;
r = epoll_ctl(ctx->epollfd, EPOLL_CTL_ADD, node->pfd.fd, &event);
if (r) {
return false;
}
}
ctx->epoll_enabled = true;
return true;
}
static void aio_epoll_update(AioContext *ctx, AioHandler *node, bool is_new)
{
struct epoll_event event;
int r;
if (!ctx->epoll_enabled) {
return;
}
if (!node->pfd.events) {
r = epoll_ctl(ctx->epollfd, EPOLL_CTL_DEL, node->pfd.fd, &event);
if (r) {
aio_epoll_disable(ctx);
}
} else {
event.data.ptr = node;
event.events = epoll_events_from_pfd(node->pfd.events);
if (is_new) {
r = epoll_ctl(ctx->epollfd, EPOLL_CTL_ADD, node->pfd.fd, &event);
if (r) {
aio_epoll_disable(ctx);
}
} else {
r = epoll_ctl(ctx->epollfd, EPOLL_CTL_MOD, node->pfd.fd, &event);
if (r) {
aio_epoll_disable(ctx);
}
}
}
}
static int aio_epoll(AioContext *ctx, GPollFD *pfds,
unsigned npfd, int64_t timeout)
{
AioHandler *node;
int i, ret = 0;
struct epoll_event events[128];
assert(npfd == 1);
assert(pfds[0].fd == ctx->epollfd);
if (timeout > 0) {
ret = qemu_poll_ns(pfds, npfd, timeout);
}
if (timeout <= 0 || ret > 0) {
ret = epoll_wait(ctx->epollfd, events,
sizeof(events) / sizeof(events[0]),
timeout);
if (ret <= 0) {
goto out;
}
for (i = 0; i < ret; i++) {
int ev = events[i].events;
node = events[i].data.ptr;
node->pfd.revents = (ev & EPOLLIN ? G_IO_IN : 0) |
(ev & EPOLLOUT ? G_IO_OUT : 0) |
(ev & EPOLLHUP ? G_IO_HUP : 0) |
(ev & EPOLLERR ? G_IO_ERR : 0);
}
}
out:
return ret;
}
static bool aio_epoll_enabled(AioContext *ctx)
{
/* Fall back to ppoll when external clients are disabled. */
return !aio_external_disabled(ctx) && ctx->epoll_enabled;
}
static bool aio_epoll_check_poll(AioContext *ctx, GPollFD *pfds,
unsigned npfd, int64_t timeout)
{
if (!ctx->epoll_available) {
return false;
}
if (aio_epoll_enabled(ctx)) {
return true;
}
if (npfd >= EPOLL_ENABLE_THRESHOLD) {
if (aio_epoll_try_enable(ctx)) {
return true;
} else {
aio_epoll_disable(ctx);
}
}
return false;
}
#else
static void aio_epoll_update(AioContext *ctx, AioHandler *node, bool is_new)
{
}
static int aio_epoll(AioContext *ctx, GPollFD *pfds,
unsigned npfd, int64_t timeout)
{
assert(false);
}
static bool aio_epoll_enabled(AioContext *ctx)
{
return false;
}
static bool aio_epoll_check_poll(AioContext *ctx, GPollFD *pfds,
unsigned npfd, int64_t timeout)
{
return false;
}
#endif
static AioHandler *find_aio_handler(AioContext *ctx, int fd)
{
AioHandler *node;
@@ -204,14 +44,11 @@ static AioHandler *find_aio_handler(AioContext *ctx, int fd)
void aio_set_fd_handler(AioContext *ctx,
int fd,
bool is_external,
IOHandler *io_read,
IOHandler *io_write,
void *opaque)
{
AioHandler *node;
bool is_new = false;
bool deleted = false;
node = find_aio_handler(ctx, fd);
@@ -230,48 +67,37 @@ void aio_set_fd_handler(AioContext *ctx,
* releasing the walking_handlers lock.
*/
QLIST_REMOVE(node, node);
deleted = true;
g_free(node);
}
}
} else {
if (node == NULL) {
/* Alloc and insert if it's not already there */
node = g_new0(AioHandler, 1);
node = g_malloc0(sizeof(AioHandler));
node->pfd.fd = fd;
QLIST_INSERT_HEAD(&ctx->aio_handlers, node, node);
g_source_add_poll(&ctx->source, &node->pfd);
is_new = true;
}
/* Update handler with latest information */
node->io_read = io_read;
node->io_write = io_write;
node->opaque = opaque;
node->is_external = is_external;
node->pollfds_idx = -1;
node->pfd.events = (io_read ? G_IO_IN | G_IO_HUP | G_IO_ERR : 0);
node->pfd.events |= (io_write ? G_IO_OUT | G_IO_ERR : 0);
}
aio_epoll_update(ctx, node, is_new);
aio_notify(ctx);
if (deleted) {
g_free(node);
}
}
void aio_set_event_notifier(AioContext *ctx,
EventNotifier *notifier,
bool is_external,
EventNotifierHandler *io_read)
{
aio_set_fd_handler(ctx, event_notifier_get_fd(notifier),
is_external, (IOHandler *)io_read, NULL, notifier);
}
bool aio_prepare(AioContext *ctx)
{
return false;
(IOHandler *)io_read, NULL, notifier);
}
bool aio_pending(AioContext *ctx)
@@ -282,12 +108,10 @@ bool aio_pending(AioContext *ctx)
int revents;
revents = node->pfd.revents & node->pfd.events;
if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read &&
aio_node_check(ctx, node->is_external)) {
if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read) {
return true;
}
if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write &&
aio_node_check(ctx, node->is_external)) {
if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write) {
return true;
}
}
@@ -295,22 +119,13 @@ bool aio_pending(AioContext *ctx)
return false;
}
bool aio_dispatch(AioContext *ctx)
static bool aio_dispatch(AioContext *ctx)
{
AioHandler *node;
bool progress = false;
/*
* If there are callbacks left that have been queued, we need to call them.
* Do not call select in this case, because it is possible that the caller
* does not need a complete flush (as is the case for aio_poll loops).
*/
if (aio_bh_poll(ctx)) {
progress = true;
}
/*
* We have to walk very carefully in case aio_set_fd_handler is
* We have to walk very carefully in case qemu_aio_set_fd_handler is
* called while we're walking.
*/
node = QLIST_FIRST(&ctx->aio_handlers);
@@ -325,7 +140,6 @@ bool aio_dispatch(AioContext *ctx)
if (!node->deleted &&
(revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
aio_node_check(ctx, node->is_external) &&
node->io_read) {
node->io_read(node->opaque);
@@ -336,7 +150,6 @@ bool aio_dispatch(AioContext *ctx)
}
if (!node->deleted &&
(revents & (G_IO_OUT | G_IO_ERR)) &&
aio_node_check(ctx, node->is_external) &&
node->io_write) {
node->io_write(node->opaque);
progress = true;
@@ -359,142 +172,71 @@ bool aio_dispatch(AioContext *ctx)
return progress;
}
/* These thread-local variables are used only in a small part of aio_poll
* around the call to the poll() system call. In particular they are not
* used while aio_poll is performing callbacks, which makes it much easier
* to think about reentrancy!
*
* Stack-allocated arrays would be perfect but they have size limitations;
* heap allocation is expensive enough that we want to reuse arrays across
* calls to aio_poll(). And because poll() has to be called without holding
* any lock, the arrays cannot be stored in AioContext. Thread-local data
* has none of the disadvantages of these three options.
*/
static __thread GPollFD *pollfds;
static __thread AioHandler **nodes;
static __thread unsigned npfd, nalloc;
static __thread Notifier pollfds_cleanup_notifier;
static void pollfds_cleanup(Notifier *n, void *unused)
{
g_assert(npfd == 0);
g_free(pollfds);
g_free(nodes);
nalloc = 0;
}
static void add_pollfd(AioHandler *node)
{
if (npfd == nalloc) {
if (nalloc == 0) {
pollfds_cleanup_notifier.notify = pollfds_cleanup;
qemu_thread_atexit_add(&pollfds_cleanup_notifier);
nalloc = 8;
} else {
g_assert(nalloc <= INT_MAX);
nalloc *= 2;
}
pollfds = g_renew(GPollFD, pollfds, nalloc);
nodes = g_renew(AioHandler *, nodes, nalloc);
}
nodes[npfd] = node;
pollfds[npfd] = (GPollFD) {
.fd = node->pfd.fd,
.events = node->pfd.events,
};
npfd++;
}
bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
int i, ret;
int ret;
bool progress;
int64_t timeout;
aio_context_acquire(ctx);
progress = false;
/* aio_notify can avoid the expensive event_notifier_set if
* everything (file descriptors, bottom halves, timers) will
* be re-evaluated before the next blocking poll(). This is
* already true when aio_poll is called with blocking == false;
* if blocking == true, it is only true after poll() returns,
* so disable the optimization now.
/*
* If there are callbacks left that have been queued, we need to call them.
* Do not call select in this case, because it is possible that the caller
* does not need a complete flush (as is the case for qemu_aio_wait loops).
*/
if (blocking) {
atomic_add(&ctx->notify_me, 2);
if (aio_bh_poll(ctx)) {
blocking = false;
progress = true;
}
if (aio_dispatch(ctx)) {
progress = true;
}
if (progress && !blocking) {
return true;
}
ctx->walking_handlers++;
assert(npfd == 0);
g_array_set_size(ctx->pollfds, 0);
/* fill pollfds */
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (!node->deleted && node->pfd.events
&& !aio_epoll_enabled(ctx)
&& aio_node_check(ctx, node->is_external)) {
add_pollfd(node);
node->pollfds_idx = -1;
if (!node->deleted && node->pfd.events) {
GPollFD pfd = {
.fd = node->pfd.fd,
.events = node->pfd.events,
};
node->pollfds_idx = ctx->pollfds->len;
g_array_append_val(ctx->pollfds, pfd);
}
}
timeout = blocking ? aio_compute_timeout(ctx) : 0;
ctx->walking_handlers--;
/* wait until next event */
if (timeout) {
aio_context_release(ctx);
}
if (aio_epoll_check_poll(ctx, pollfds, npfd, timeout)) {
AioHandler epoll_handler;
epoll_handler.pfd.fd = ctx->epollfd;
epoll_handler.pfd.events = G_IO_IN | G_IO_OUT | G_IO_HUP | G_IO_ERR;
npfd = 0;
add_pollfd(&epoll_handler);
ret = aio_epoll(ctx, pollfds, npfd, timeout);
} else {
ret = qemu_poll_ns(pollfds, npfd, timeout);
}
if (blocking) {
atomic_sub(&ctx->notify_me, 2);
}
if (timeout) {
aio_context_acquire(ctx);
}
aio_notify_accept(ctx);
ret = qemu_poll_ns((GPollFD *)ctx->pollfds->data,
ctx->pollfds->len,
blocking ? timerlistgroup_deadline_ns(&ctx->tlg) : 0);
/* if we have any readable fds, dispatch event */
if (ret > 0) {
for (i = 0; i < npfd; i++) {
nodes[i]->pfd.revents = pollfds[i].revents;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->pollfds_idx != -1) {
GPollFD *pfd = &g_array_index(ctx->pollfds, GPollFD,
node->pollfds_idx);
node->pfd.revents = pfd->revents;
}
}
}
npfd = 0;
ctx->walking_handlers--;
/* Run dispatch even if there were no readable fds to run timers */
if (aio_dispatch(ctx)) {
progress = true;
}
aio_context_release(ctx);
return progress;
}
void aio_context_setup(AioContext *ctx)
{
#ifdef CONFIG_EPOLL_CREATE1
assert(!ctx->epollfd);
ctx->epollfd = epoll_create1(EPOLL_CLOEXEC);
if (ctx->epollfd == -1) {
fprintf(stderr, "Failed to create epoll instance: %s", strerror(errno));
ctx->epoll_available = false;
} else {
ctx->epoll_available = true;
}
#endif
}


@@ -15,7 +15,6 @@
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "block/block.h"
#include "qemu/queue.h"
@@ -23,86 +22,14 @@
struct AioHandler {
EventNotifier *e;
IOHandler *io_read;
IOHandler *io_write;
EventNotifierHandler *io_notify;
GPollFD pfd;
int deleted;
void *opaque;
bool is_external;
QLIST_ENTRY(AioHandler) node;
};
void aio_set_fd_handler(AioContext *ctx,
int fd,
bool is_external,
IOHandler *io_read,
IOHandler *io_write,
void *opaque)
{
/* fd is a SOCKET in our case */
AioHandler *node;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->pfd.fd == fd && !node->deleted) {
break;
}
}
/* Are we deleting the fd handler? */
if (!io_read && !io_write) {
if (node) {
/* If the lock is held, just mark the node as deleted */
if (ctx->walking_handlers) {
node->deleted = 1;
node->pfd.revents = 0;
} else {
/* Otherwise, delete it for real. We can't just mark it as
* deleted because deleted nodes are only cleaned up after
* releasing the walking_handlers lock.
*/
QLIST_REMOVE(node, node);
g_free(node);
}
}
} else {
HANDLE event;
if (node == NULL) {
/* Alloc and insert if it's not already there */
node = g_new0(AioHandler, 1);
node->pfd.fd = fd;
QLIST_INSERT_HEAD(&ctx->aio_handlers, node, node);
}
node->pfd.events = 0;
if (node->io_read) {
node->pfd.events |= G_IO_IN;
}
if (node->io_write) {
node->pfd.events |= G_IO_OUT;
}
node->e = &ctx->notifier;
/* Update handler with latest information */
node->opaque = opaque;
node->io_read = io_read;
node->io_write = io_write;
node->is_external = is_external;
event = event_notifier_get_handle(&ctx->notifier);
WSAEventSelect(node->pfd.fd, event,
FD_READ | FD_ACCEPT | FD_CLOSE |
FD_CONNECT | FD_WRITE | FD_OOB);
}
aio_notify(ctx);
}
void aio_set_event_notifier(AioContext *ctx,
EventNotifier *e,
bool is_external,
EventNotifierHandler *io_notify)
{
AioHandler *node;
@@ -134,11 +61,10 @@ void aio_set_event_notifier(AioContext *ctx,
} else {
if (node == NULL) {
/* Alloc and insert if it's not already there */
node = g_new0(AioHandler, 1);
node = g_malloc0(sizeof(AioHandler));
node->e = e;
node->pfd.fd = (uintptr_t)event_notifier_get_handle(e);
node->pfd.events = G_IO_IN;
node->is_external = is_external;
QLIST_INSERT_HEAD(&ctx->aio_handlers, node, node);
g_source_add_poll(&ctx->source, &node->pfd);
@@ -150,43 +76,6 @@ void aio_set_event_notifier(AioContext *ctx,
aio_notify(ctx);
}
bool aio_prepare(AioContext *ctx)
{
static struct timeval tv0;
AioHandler *node;
bool have_select_revents = false;
fd_set rfds, wfds;
/* fill fd sets */
FD_ZERO(&rfds);
FD_ZERO(&wfds);
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->io_read) {
FD_SET ((SOCKET)node->pfd.fd, &rfds);
}
if (node->io_write) {
FD_SET ((SOCKET)node->pfd.fd, &wfds);
}
}
if (select(0, &rfds, &wfds, NULL, &tv0) > 0) {
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
node->pfd.revents = 0;
if (FD_ISSET(node->pfd.fd, &rfds)) {
node->pfd.revents |= G_IO_IN;
have_select_revents = true;
}
if (FD_ISSET(node->pfd.fd, &wfds)) {
node->pfd.revents |= G_IO_OUT;
have_select_revents = true;
}
}
}
return have_select_revents;
}
bool aio_pending(AioContext *ctx)
{
AioHandler *node;
@@ -195,37 +84,47 @@ bool aio_pending(AioContext *ctx)
if (node->pfd.revents && node->io_notify) {
return true;
}
if ((node->pfd.revents & G_IO_IN) && node->io_read) {
return true;
}
if ((node->pfd.revents & G_IO_OUT) && node->io_write) {
return true;
}
}
return false;
}
static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
bool progress = false;
HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
bool progress;
int count;
int timeout;
progress = false;
/*
* We have to walk very carefully in case aio_set_fd_handler is
* If there are callbacks left that have been queued, we need to call them.
* Do not call select in this case, because it is possible that the caller
* does not need a complete flush (as is the case for qemu_aio_wait loops).
*/
if (aio_bh_poll(ctx)) {
blocking = false;
progress = true;
}
/* Run timers */
progress |= timerlistgroup_run_timers(&ctx->tlg);
/*
* Then dispatch any pending callbacks from the GSource.
*
* We have to walk very carefully in case qemu_aio_set_fd_handler is
* called while we're walking.
*/
node = QLIST_FIRST(&ctx->aio_handlers);
while (node) {
AioHandler *tmp;
int revents = node->pfd.revents;
ctx->walking_handlers++;
if (!node->deleted &&
(revents || event_notifier_get_handle(node->e) == event) &&
node->io_notify) {
if (node->pfd.revents && node->io_notify) {
node->pfd.revents = 0;
node->io_notify(node->e);
@@ -235,28 +134,6 @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
}
}
if (!node->deleted &&
(node->io_read || node->io_write)) {
node->pfd.revents = 0;
if ((revents & G_IO_IN) && node->io_read) {
node->io_read(node->opaque);
progress = true;
}
if ((revents & G_IO_OUT) && node->io_write) {
node->io_write(node->opaque);
progress = true;
}
/* if the next select() will return an event, we have progressed */
if (event == event_notifier_get_handle(&ctx->notifier)) {
WSANETWORKEVENTS ev;
WSAEnumNetworkEvents(node->pfd.fd, event, &ev);
if (ev.lNetworkEvents) {
progress = true;
}
}
}
tmp = node;
node = QLIST_NEXT(node, node);
@@ -268,109 +145,79 @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
}
}
return progress;
}
bool aio_dispatch(AioContext *ctx)
{
bool progress;
progress = aio_bh_poll(ctx);
progress |= aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
progress |= timerlistgroup_run_timers(&ctx->tlg);
return progress;
}
bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
bool progress, have_select_revents, first;
int count;
int timeout;
aio_context_acquire(ctx);
progress = false;
/* aio_notify can avoid the expensive event_notifier_set if
* everything (file descriptors, bottom halves, timers) will
* be re-evaluated before the next blocking poll(). This is
* already true when aio_poll is called with blocking == false;
* if blocking == true, it is only true after poll() returns,
* so disable the optimization now.
*/
if (blocking) {
atomic_add(&ctx->notify_me, 2);
if (progress && !blocking) {
return true;
}
have_select_revents = aio_prepare(ctx);
ctx->walking_handlers++;
/* fill fd sets */
count = 0;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (!node->deleted && node->io_notify
&& aio_node_check(ctx, node->is_external)) {
if (!node->deleted && node->io_notify) {
events[count++] = event_notifier_get_handle(node->e);
}
}
ctx->walking_handlers--;
first = true;
/* ctx->notifier is always registered. */
assert(count > 0);
/* Multiple iterations, all of them non-blocking except the first,
* may be necessary to process all pending events. After the first
* WaitForMultipleObjects call ctx->notify_me will be decremented.
*/
do {
HANDLE event;
/* wait until next event */
while (count > 0) {
int ret;
timeout = blocking && !have_select_revents
? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
if (timeout) {
aio_context_release(ctx);
}
timeout = blocking ?
qemu_timeout_ns_to_ms(timerlistgroup_deadline_ns(&ctx->tlg)) : 0;
ret = WaitForMultipleObjects(count, events, FALSE, timeout);
if (blocking) {
assert(first);
atomic_sub(&ctx->notify_me, 2);
}
if (timeout) {
aio_context_acquire(ctx);
}
if (first) {
aio_notify_accept(ctx);
progress |= aio_bh_poll(ctx);
first = false;
}
/* if we have any signaled events, dispatch event */
event = NULL;
if ((DWORD) (ret - WAIT_OBJECT_0) < count) {
event = events[ret - WAIT_OBJECT_0];
events[ret - WAIT_OBJECT_0] = events[--count];
} else if (!have_select_revents) {
if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
break;
}
have_select_revents = false;
blocking = false;
progress |= aio_dispatch_handlers(ctx, event);
} while (count > 0);
/* we have to walk very carefully in case
* qemu_aio_set_fd_handler is called while we're walking */
node = QLIST_FIRST(&ctx->aio_handlers);
while (node) {
AioHandler *tmp;
progress |= timerlistgroup_run_timers(&ctx->tlg);
ctx->walking_handlers++;
if (!node->deleted &&
event_notifier_get_handle(node->e) == events[ret - WAIT_OBJECT_0] &&
node->io_notify) {
node->io_notify(node->e);
/* aio_notify() does not count as progress */
if (node->e != &ctx->notifier) {
progress = true;
}
}
tmp = node;
node = QLIST_NEXT(node, node);
ctx->walking_handlers--;
if (!ctx->walking_handlers && tmp->deleted) {
QLIST_REMOVE(tmp, node);
g_free(tmp);
}
}
/* Try again, but only call each handler once. */
events[ret - WAIT_OBJECT_0] = events[--count];
}
if (blocking) {
/* Run the timers a second time. We do this because otherwise aio_wait
* will not note progress - and will stop a drain early - if we have
* a timer that was not ready to run entering g_poll but is ready
* after g_poll. This will only do anything if a timer has expired.
*/
progress |= timerlistgroup_run_timers(&ctx->tlg);
}
aio_context_release(ctx);
return progress;
}
void aio_context_setup(AioContext *ctx)
{
}
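The rewritten aio_poll() above waits on an array of event handles, dispatches the one that fired, swaps it out of the array, and then re-polls with a zero timeout so each handler runs at most once per pass. A minimal standalone sketch of that dispatch pattern, assuming plain Win32 auto-reset events instead of QEMU's EventNotifier wrappers:

/* Wait on a set of events, dispatch the signaled one, compact the array,
 * and keep polling without blocking until nothing else fires. */
#include <windows.h>
#include <stdio.h>

static void dispatch(int id)
{
    printf("handler for event %d ran\n", id);
}

int main(void)
{
    HANDLE created[2], events[2];
    int ids[2] = { 0, 1 };
    DWORD count = 2;
    DWORD timeout = INFINITE;   /* only the first wait may block */
    DWORD i, ret;

    for (i = 0; i < count; i++) {
        created[i] = events[i] = CreateEvent(NULL, FALSE, FALSE, NULL);
        SetEvent(events[i]);    /* pretend both sources are already ready */
    }

    while (count > 0) {
        ret = WaitForMultipleObjects(count, events, FALSE, timeout);
        timeout = 0;            /* later passes must not block */
        if ((DWORD)(ret - WAIT_OBJECT_0) >= count) {
            break;              /* timeout or error: nothing left to dispatch */
        }
        dispatch(ids[ret - WAIT_OBJECT_0]);
        /* Swap the signaled handle out so it cannot be dispatched twice. */
        events[ret - WAIT_OBJECT_0] = events[count - 1];
        ids[ret - WAIT_OBJECT_0] = ids[count - 1];
        count--;
    }

    for (i = 0; i < 2; i++) {
        CloseHandle(created[i]);
    }
    return 0;
}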

File diff suppressed because it is too large.

async.c

@@ -22,14 +22,10 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "block/aio.h"
#include "block/thread-pool.h"
#include "qemu/main-loop.h"
#include "qemu/atomic.h"
#include "block/raw-aio.h"
/***********************************************************/
/* bottom halves (can be seen as timers which expire ASAP) */
@@ -47,12 +43,10 @@ struct QEMUBH {
QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
{
QEMUBH *bh;
bh = g_new(QEMUBH, 1);
*bh = (QEMUBH){
.ctx = ctx,
.cb = cb,
.opaque = opaque,
};
bh = g_malloc0(sizeof(QEMUBH));
bh->ctx = ctx;
bh->cb = cb;
bh->opaque = opaque;
qemu_mutex_lock(&ctx->bh_lock);
bh->next = ctx->first_bh;
/* Make sure that the members are ready before putting bh into list */
@@ -62,11 +56,6 @@ QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
return bh;
}
void aio_bh_call(QEMUBH *bh)
{
bh->cb(bh->opaque);
}
/* Multiple occurrences of aio_bh_poll cannot be called concurrently */
int aio_bh_poll(AioContext *ctx)
{
@@ -80,19 +69,16 @@ int aio_bh_poll(AioContext *ctx)
/* Make sure that fetching bh happens before accessing its members */
smp_read_barrier_depends();
next = bh->next;
/* The atomic_xchg is paired with the one in qemu_bh_schedule. The
* implicit memory barrier ensures that the callback sees all writes
* done by the scheduling thread. It also ensures that the scheduling
* thread sees the zero before bh->cb has run, and thus will call
* aio_notify again if necessary.
*/
if (!bh->deleted && atomic_xchg(&bh->scheduled, 0)) {
/* Idle BHs and the notify BH don't count as progress */
if (!bh->idle && bh != ctx->notify_dummy_bh) {
if (!bh->deleted && bh->scheduled) {
bh->scheduled = 0;
/* Paired with write barrier in bh schedule to ensure reading for
* idle & callbacks coming after bh's scheduling.
*/
smp_rmb();
if (!bh->idle)
ret = 1;
}
bh->idle = 0;
aio_bh_call(bh);
bh->cb(bh->opaque);
}
}
@@ -119,28 +105,27 @@ int aio_bh_poll(AioContext *ctx)
void qemu_bh_schedule_idle(QEMUBH *bh)
{
if (bh->scheduled)
return;
bh->idle = 1;
/* Make sure that idle & any writes needed by the callback are done
* before the locations are read in the aio_bh_poll.
*/
atomic_mb_set(&bh->scheduled, 1);
smp_wmb();
bh->scheduled = 1;
}
void qemu_bh_schedule(QEMUBH *bh)
{
AioContext *ctx;
ctx = bh->ctx;
if (bh->scheduled)
return;
bh->idle = 0;
/* The memory barrier implicit in atomic_xchg makes sure that:
* 1. idle & any writes needed by the callback are done before the
* locations are read in the aio_bh_poll.
* 2. ctx is loaded before scheduled is set and the callback has a chance
* to execute.
/* Make sure that idle & any writes needed by the callback are done
* before the locations are read in the aio_bh_poll.
*/
if (atomic_xchg(&bh->scheduled, 1) == 0) {
aio_notify(ctx);
}
smp_wmb();
bh->scheduled = 1;
aio_notify(bh->ctx);
}
@@ -160,50 +145,39 @@ void qemu_bh_delete(QEMUBH *bh)
bh->deleted = 1;
}
int64_t
aio_compute_timeout(AioContext *ctx)
static gboolean
aio_ctx_prepare(GSource *source, gint *timeout)
{
int64_t deadline;
int timeout = -1;
AioContext *ctx = (AioContext *) source;
QEMUBH *bh;
int deadline;
/* We assume there is no timeout already supplied */
*timeout = -1;
for (bh = ctx->first_bh; bh; bh = bh->next) {
if (!bh->deleted && bh->scheduled) {
if (bh->idle) {
/* idle bottom halves will be polled at least
* every 10ms */
timeout = 10000000;
*timeout = 10;
} else {
/* non-idle bottom halves will be executed
* immediately */
return 0;
*timeout = 0;
return true;
}
}
}
deadline = timerlistgroup_deadline_ns(&ctx->tlg);
deadline = qemu_timeout_ns_to_ms(timerlistgroup_deadline_ns(&ctx->tlg));
if (deadline == 0) {
return 0;
} else {
return qemu_soonest_timeout(timeout, deadline);
}
}
static gboolean
aio_ctx_prepare(GSource *source, gint *timeout)
{
AioContext *ctx = (AioContext *) source;
atomic_or(&ctx->notify_me, 1);
/* We assume there is no timeout already supplied */
*timeout = qemu_timeout_ns_to_ms(aio_compute_timeout(ctx));
if (aio_prepare(ctx)) {
*timeout = 0;
return true;
} else {
*timeout = qemu_soonest_timeout(*timeout, deadline);
}
return *timeout == 0;
return false;
}
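The prepare path above folds three timeout sources into one value for the poll: a pending non-idle bottom half forces an immediate return, an idle bottom half caps the wait at roughly 10 ms, and otherwise the nearest timer deadline wins, with -1 meaning "block until notified". A self-contained sketch of that negotiation, using plain ints and a truncating ns-to-ms conversion instead of QEMU's qemu_timeout_ns_to_ms()/qemu_soonest_timeout() helpers:

#include <stdio.h>

/* -1 means "no timeout"; otherwise pick the nearer of the two. */
static int soonest(int a, int b)
{
    if (a < 0) return b;
    if (b < 0) return a;
    return a < b ? a : b;
}

static int compute_timeout_ms(int have_bh, int bh_idle, long long deadline_ns)
{
    int timeout = -1;

    if (have_bh) {
        if (!bh_idle) {
            return 0;           /* run the callback as soon as possible */
        }
        timeout = 10;           /* idle BHs are polled every ~10 ms */
    }
    if (deadline_ns >= 0) {
        /* Truncation is a simplification; the real helper rounds up. */
        timeout = soonest(timeout, (int)(deadline_ns / 1000000));
    }
    return timeout;
}

int main(void)
{
    printf("%d\n", compute_timeout_ms(1, 1, 25000000LL)); /* idle BH vs 25 ms timer -> 10 */
    printf("%d\n", compute_timeout_ms(0, 0, 3000000LL));  /* only a 3 ms timer -> 3 */
    printf("%d\n", compute_timeout_ms(0, 0, -1));         /* nothing pending -> -1 */
    return 0;
}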
static gboolean
@@ -212,13 +186,10 @@ aio_ctx_check(GSource *source)
AioContext *ctx = (AioContext *) source;
QEMUBH *bh;
atomic_and(&ctx->notify_me, ~1);
aio_notify_accept(ctx);
for (bh = ctx->first_bh; bh; bh = bh->next) {
if (!bh->deleted && bh->scheduled) {
return true;
}
}
}
return aio_pending(ctx) || (timerlistgroup_deadline_ns(&ctx->tlg) == 0);
}
@@ -231,7 +202,7 @@ aio_ctx_dispatch(GSource *source,
AioContext *ctx = (AioContext *) source;
assert(callback == NULL);
aio_dispatch(ctx);
aio_poll(ctx, false);
return true;
}
@@ -240,33 +211,12 @@ aio_ctx_finalize(GSource *source)
{
AioContext *ctx = (AioContext *) source;
qemu_bh_delete(ctx->notify_dummy_bh);
thread_pool_free(ctx->thread_pool);
#ifdef CONFIG_LINUX_AIO
if (ctx->linux_aio) {
laio_detach_aio_context(ctx->linux_aio, ctx);
laio_cleanup(ctx->linux_aio);
ctx->linux_aio = NULL;
}
#endif
qemu_mutex_lock(&ctx->bh_lock);
while (ctx->first_bh) {
QEMUBH *next = ctx->first_bh->next;
/* qemu_bh_delete() must have been called on BHs in this AioContext */
assert(ctx->first_bh->deleted);
g_free(ctx->first_bh);
ctx->first_bh = next;
}
qemu_mutex_unlock(&ctx->bh_lock);
aio_set_event_notifier(ctx, &ctx->notifier, false, NULL);
aio_set_event_notifier(ctx, &ctx->notifier, NULL);
event_notifier_cleanup(&ctx->notifier);
rfifolock_destroy(&ctx->lock);
qemu_mutex_destroy(&ctx->bh_lock);
g_array_free(ctx->pollfds, TRUE);
timerlistgroup_deinit(&ctx->tlg);
}
@@ -291,34 +241,9 @@ ThreadPool *aio_get_thread_pool(AioContext *ctx)
return ctx->thread_pool;
}
#ifdef CONFIG_LINUX_AIO
LinuxAioState *aio_get_linux_aio(AioContext *ctx)
{
if (!ctx->linux_aio) {
ctx->linux_aio = laio_init();
laio_attach_aio_context(ctx->linux_aio, ctx);
}
return ctx->linux_aio;
}
#endif
void aio_notify(AioContext *ctx)
{
/* Write e.g. bh->scheduled before reading ctx->notify_me. Pairs
* with atomic_or in aio_ctx_prepare or atomic_add in aio_poll.
*/
smp_mb();
if (ctx->notify_me) {
event_notifier_set(&ctx->notifier);
atomic_mb_set(&ctx->notified, true);
}
}
void aio_notify_accept(AioContext *ctx)
{
if (atomic_xchg(&ctx->notified, false)) {
event_notifier_test_and_clear(&ctx->notifier);
}
event_notifier_set(&ctx->notifier);
}
static void aio_timerlist_notify(void *opaque)
@@ -328,53 +253,25 @@ static void aio_timerlist_notify(void *opaque)
static void aio_rfifolock_cb(void *opaque)
{
AioContext *ctx = opaque;
/* Kick owner thread in case they are blocked in aio_poll() */
qemu_bh_schedule(ctx->notify_dummy_bh);
aio_notify(opaque);
}
static void notify_dummy_bh(void *opaque)
AioContext *aio_context_new(void)
{
/* Do nothing, we were invoked just to force the event loop to iterate */
}
static void event_notifier_dummy_cb(EventNotifier *e)
{
}
AioContext *aio_context_new(Error **errp)
{
int ret;
AioContext *ctx;
ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
aio_context_setup(ctx);
ret = event_notifier_init(&ctx->notifier, false);
if (ret < 0) {
error_setg_errno(errp, -ret, "Failed to initialize event notifier");
goto fail;
}
g_source_set_can_recurse(&ctx->source, true);
aio_set_event_notifier(ctx, &ctx->notifier,
false,
(EventNotifierHandler *)
event_notifier_dummy_cb);
#ifdef CONFIG_LINUX_AIO
ctx->linux_aio = NULL;
#endif
ctx->pollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
ctx->thread_pool = NULL;
qemu_mutex_init(&ctx->bh_lock);
rfifolock_init(&ctx->lock, aio_rfifolock_cb, ctx);
event_notifier_init(&ctx->notifier, false);
aio_set_event_notifier(ctx, &ctx->notifier,
(EventNotifierHandler *)
event_notifier_test_and_clear);
timerlistgroup_init(&ctx->tlg, aio_timerlist_notify, ctx);
ctx->notify_dummy_bh = aio_bh_new(ctx, notify_dummy_bh, NULL);
return ctx;
fail:
g_source_destroy(&ctx->source);
return NULL;
}
void aio_context_ref(AioContext *ctx)
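The barrier comments above explain the contract between qemu_bh_schedule() and aio_bh_poll(): the scheduler kicks the event loop only when the flag actually goes from 0 to 1, and the poller clears the flag before running the callback so a concurrent re-schedule is never lost. A standalone sketch of that handshake, with C11 atomics standing in for atomic_xchg() and a print standing in for aio_notify() (the names below are illustrative, not QEMU's):

#include <stdatomic.h>
#include <stdio.h>

typedef struct {
    atomic_int scheduled;
    void (*cb)(void *);
    void *opaque;
} BH;

static void notify(void)            /* stand-in for aio_notify() */
{
    printf("event loop kicked\n");
}

static void bh_schedule(BH *bh)
{
    /* The exchange is a full barrier: everything written before scheduling
     * is visible to the callback, and the loop is kicked only on 0 -> 1. */
    if (atomic_exchange(&bh->scheduled, 1) == 0) {
        notify();
    }
}

static int bh_poll(BH *bh)
{
    /* Clear the flag before running cb so a re-schedule from another
     * thread sees 0 again and notifies once more. */
    if (atomic_exchange(&bh->scheduled, 0)) {
        bh->cb(bh->opaque);
        return 1;
    }
    return 0;
}

static void hello(void *opaque)
{
    printf("bh ran: %s\n", (char *)opaque);
}

int main(void)
{
    BH bh = { .scheduled = 0, .cb = hello, .opaque = "demo" };
    bh_schedule(&bh);
    bh_schedule(&bh);               /* sees 1, does not notify again */
    return bh_poll(&bh) ? 0 : 1;
}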


@@ -5,9 +5,13 @@ common-obj-$(CONFIG_SPICE) += spiceaudio.o
common-obj-$(CONFIG_COREAUDIO) += coreaudio.o
common-obj-$(CONFIG_ALSA) += alsaaudio.o
common-obj-$(CONFIG_DSOUND) += dsoundaudio.o
common-obj-$(CONFIG_FMOD) += fmodaudio.o
common-obj-$(CONFIG_ESD) += esdaudio.o
common-obj-$(CONFIG_PA) += paaudio.o
common-obj-$(CONFIG_WINWAVE) += winwaveaudio.o
common-obj-$(CONFIG_AUDIO_PT_INT) += audio_pt_int.o
common-obj-$(CONFIG_AUDIO_WIN_INT) += audio_win_int.o
common-obj-y += wavcapture.o
sdlaudio.o-cflags := $(SDL_CFLAGS)
$(obj)/audio.o $(obj)/fmodaudio.o: QEMU_CFLAGS += $(FMOD_CFLAGS)
$(obj)/sdlaudio.o: QEMU_CFLAGS += $(SDL_CFLAGS)


@@ -21,12 +21,10 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include <alsa/asoundlib.h>
#include "qemu-common.h"
#include "qemu/main-loop.h"
#include "audio.h"
#include "trace.h"
#if QEMU_GNUC_PREREQ(4, 3)
#pragma GCC diagnostic ignored "-Waddress"
@@ -35,28 +33,9 @@
#define AUDIO_CAP "alsa"
#include "audio_int.h"
typedef struct ALSAConf {
int size_in_usec_in;
int size_in_usec_out;
const char *pcm_name_in;
const char *pcm_name_out;
unsigned int buffer_size_in;
unsigned int period_size_in;
unsigned int buffer_size_out;
unsigned int period_size_out;
unsigned int threshold;
int buffer_size_in_overridden;
int period_size_in_overridden;
int buffer_size_out_overridden;
int period_size_out_overridden;
} ALSAConf;
struct pollhlp {
snd_pcm_t *handle;
struct pollfd *pfds;
ALSAConf *conf;
int count;
int mask;
};
@@ -77,6 +56,30 @@ typedef struct ALSAVoiceIn {
struct pollhlp pollhlp;
} ALSAVoiceIn;
static struct {
int size_in_usec_in;
int size_in_usec_out;
const char *pcm_name_in;
const char *pcm_name_out;
unsigned int buffer_size_in;
unsigned int period_size_in;
unsigned int buffer_size_out;
unsigned int period_size_out;
unsigned int threshold;
int buffer_size_in_overridden;
int period_size_in_overridden;
int buffer_size_out_overridden;
int period_size_out_overridden;
int verbose;
} conf = {
.buffer_size_out = 4096,
.period_size_out = 1024,
.pcm_name_out = "default",
.pcm_name_in = "default",
};
struct alsa_params_req {
int freq;
snd_pcm_format_t fmt;
@@ -202,7 +205,9 @@ static void alsa_poll_handler (void *opaque)
}
if (!(revents & hlp->mask)) {
trace_alsa_revents(revents);
if (conf.verbose) {
dolog ("revents = %d\n", revents);
}
return;
}
@@ -261,14 +266,31 @@ static int alsa_poll_helper (snd_pcm_t *handle, struct pollhlp *hlp, int mask)
for (i = 0; i < count; ++i) {
if (pfds[i].events & POLLIN) {
qemu_set_fd_handler (pfds[i].fd, alsa_poll_handler, NULL, hlp);
err = qemu_set_fd_handler (pfds[i].fd, alsa_poll_handler,
NULL, hlp);
}
if (pfds[i].events & POLLOUT) {
trace_alsa_pollout(i, pfds[i].fd);
qemu_set_fd_handler (pfds[i].fd, NULL, alsa_poll_handler, hlp);
if (conf.verbose) {
dolog ("POLLOUT %d %d\n", i, pfds[i].fd);
}
err = qemu_set_fd_handler (pfds[i].fd, NULL,
alsa_poll_handler, hlp);
}
if (conf.verbose) {
dolog ("Set handler events=%#x index=%d fd=%d err=%d\n",
pfds[i].events, i, pfds[i].fd, err);
}
trace_alsa_set_handler(pfds[i].events, i, pfds[i].fd, err);
if (err) {
dolog ("Failed to set handler events=%#x index=%d fd=%d err=%d\n",
pfds[i].events, i, pfds[i].fd, err);
while (i--) {
qemu_set_fd_handler (pfds[i].fd, NULL, NULL, NULL);
}
g_free (pfds);
return -1;
}
}
hlp->pfds = pfds;
hlp->count = count;
@@ -454,15 +476,14 @@ static void alsa_set_threshold (snd_pcm_t *handle, snd_pcm_uframes_t threshold)
}
static int alsa_open (int in, struct alsa_params_req *req,
struct alsa_params_obt *obt, snd_pcm_t **handlep,
ALSAConf *conf)
struct alsa_params_obt *obt, snd_pcm_t **handlep)
{
snd_pcm_t *handle;
snd_pcm_hw_params_t *hw_params;
int err;
int size_in_usec;
unsigned int freq, nchannels;
const char *pcm_name = in ? conf->pcm_name_in : conf->pcm_name_out;
const char *pcm_name = in ? conf.pcm_name_in : conf.pcm_name_out;
snd_pcm_uframes_t obt_buffer_size;
const char *typ = in ? "ADC" : "DAC";
snd_pcm_format_t obtfmt;
@@ -501,7 +522,7 @@ static int alsa_open (int in, struct alsa_params_req *req,
}
err = snd_pcm_hw_params_set_format (handle, hw_params, req->fmt);
if (err < 0) {
if (err < 0 && conf.verbose) {
alsa_logerr2 (err, typ, "Failed to set format %d\n", req->fmt);
}
@@ -633,7 +654,7 @@ static int alsa_open (int in, struct alsa_params_req *req,
goto err;
}
if (!in && conf->threshold) {
if (!in && conf.threshold) {
snd_pcm_uframes_t threshold;
int bytes_per_sec;
@@ -655,7 +676,7 @@ static int alsa_open (int in, struct alsa_params_req *req,
break;
}
threshold = (conf->threshold * bytes_per_sec) / 1000;
threshold = (conf.threshold * bytes_per_sec) / 1000;
alsa_set_threshold (handle, threshold);
}
@@ -665,9 +686,10 @@ static int alsa_open (int in, struct alsa_params_req *req,
*handlep = handle;
if (obtfmt != req->fmt ||
if (conf.verbose &&
(obtfmt != req->fmt ||
obt->nchannels != req->nchannels ||
obt->freq != req->freq) {
obt->freq != req->freq)) {
dolog ("Audio parameters for %s\n", typ);
alsa_dump_info (req, obt, obtfmt);
}
@@ -721,7 +743,9 @@ static void alsa_write_pending (ALSAVoiceOut *alsa)
if (written <= 0) {
switch (written) {
case 0:
trace_alsa_wrote_zero(len);
if (conf.verbose) {
dolog ("Failed to write %d frames (wrote zero)\n", len);
}
return;
case -EPIPE:
@@ -730,7 +754,9 @@ static void alsa_write_pending (ALSAVoiceOut *alsa)
len);
return;
}
trace_alsa_xrun_out();
if (conf.verbose) {
dolog ("Recovering from playback xrun\n");
}
continue;
case -ESTRPIPE:
@@ -741,7 +767,9 @@ static void alsa_write_pending (ALSAVoiceOut *alsa)
len);
return;
}
trace_alsa_resume_out();
if (conf.verbose) {
dolog ("Resuming suspended output stream\n");
}
continue;
case -EAGAIN:
@@ -787,31 +815,31 @@ static void alsa_fini_out (HWVoiceOut *hw)
ldebug ("alsa_fini\n");
alsa_anal_close (&alsa->handle, &alsa->pollhlp);
g_free(alsa->pcm_buf);
alsa->pcm_buf = NULL;
if (alsa->pcm_buf) {
g_free (alsa->pcm_buf);
alsa->pcm_buf = NULL;
}
}
static int alsa_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int alsa_init_out (HWVoiceOut *hw, struct audsettings *as)
{
ALSAVoiceOut *alsa = (ALSAVoiceOut *) hw;
struct alsa_params_req req;
struct alsa_params_obt obt;
snd_pcm_t *handle;
struct audsettings obt_as;
ALSAConf *conf = drv_opaque;
req.fmt = aud_to_alsafmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.period_size = conf->period_size_out;
req.buffer_size = conf->buffer_size_out;
req.size_in_usec = conf->size_in_usec_out;
req.period_size = conf.period_size_out;
req.buffer_size = conf.buffer_size_out;
req.size_in_usec = conf.size_in_usec_out;
req.override_mask =
(conf->period_size_out_overridden ? 1 : 0) |
(conf->buffer_size_out_overridden ? 2 : 0);
(conf.period_size_out_overridden ? 1 : 0) |
(conf.buffer_size_out_overridden ? 2 : 0);
if (alsa_open (0, &req, &obt, &handle, conf)) {
if (alsa_open (0, &req, &obt, &handle)) {
return -1;
}
@@ -832,7 +860,6 @@ static int alsa_init_out(HWVoiceOut *hw, struct audsettings *as,
}
alsa->handle = handle;
alsa->pollhlp.conf = conf;
return 0;
}
@@ -903,26 +930,25 @@ static int alsa_ctl_out (HWVoiceOut *hw, int cmd, ...)
return -1;
}
static int alsa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int alsa_init_in (HWVoiceIn *hw, struct audsettings *as)
{
ALSAVoiceIn *alsa = (ALSAVoiceIn *) hw;
struct alsa_params_req req;
struct alsa_params_obt obt;
snd_pcm_t *handle;
struct audsettings obt_as;
ALSAConf *conf = drv_opaque;
req.fmt = aud_to_alsafmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.period_size = conf->period_size_in;
req.buffer_size = conf->buffer_size_in;
req.size_in_usec = conf->size_in_usec_in;
req.period_size = conf.period_size_in;
req.buffer_size = conf.buffer_size_in;
req.size_in_usec = conf.size_in_usec_in;
req.override_mask =
(conf->period_size_in_overridden ? 1 : 0) |
(conf->buffer_size_in_overridden ? 2 : 0);
(conf.period_size_in_overridden ? 1 : 0) |
(conf.buffer_size_in_overridden ? 2 : 0);
if (alsa_open (1, &req, &obt, &handle, conf)) {
if (alsa_open (1, &req, &obt, &handle)) {
return -1;
}
@@ -943,7 +969,6 @@ static int alsa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
}
alsa->handle = handle;
alsa->pollhlp.conf = conf;
return 0;
}
@@ -953,8 +978,10 @@ static void alsa_fini_in (HWVoiceIn *hw)
alsa_anal_close (&alsa->handle, &alsa->pollhlp);
g_free(alsa->pcm_buf);
alsa->pcm_buf = NULL;
if (alsa->pcm_buf) {
g_free (alsa->pcm_buf);
alsa->pcm_buf = NULL;
}
}
static int alsa_run_in (HWVoiceIn *hw)
@@ -999,10 +1026,14 @@ static int alsa_run_in (HWVoiceIn *hw)
dolog ("Failed to resume suspended input stream\n");
return 0;
}
trace_alsa_resume_in();
if (conf.verbose) {
dolog ("Resuming suspended input stream\n");
}
break;
default:
trace_alsa_no_frames(state);
if (conf.verbose) {
dolog ("No frames available and ALSA state is %d\n", state);
}
return 0;
}
}
@@ -1037,7 +1068,9 @@ static int alsa_run_in (HWVoiceIn *hw)
if (nread <= 0) {
switch (nread) {
case 0:
trace_alsa_read_zero(len);
if (conf.verbose) {
dolog ("Failed to read %ld frames (read zero)\n", len);
}
goto exit;
case -EPIPE:
@@ -1045,7 +1078,9 @@ static int alsa_run_in (HWVoiceIn *hw)
alsa_logerr (nread, "Failed to read %ld frames\n", len);
goto exit;
}
trace_alsa_xrun_in();
if (conf.verbose) {
dolog ("Recovering from capture xrun\n");
}
continue;
case -EAGAIN:
@@ -1117,85 +1152,82 @@ static int alsa_ctl_in (HWVoiceIn *hw, int cmd, ...)
return -1;
}
static ALSAConf glob_conf = {
.buffer_size_out = 4096,
.period_size_out = 1024,
.pcm_name_out = "default",
.pcm_name_in = "default",
};
static void *alsa_audio_init (void)
{
ALSAConf *conf = g_malloc(sizeof(ALSAConf));
*conf = glob_conf;
return conf;
return &conf;
}
static void alsa_audio_fini (void *opaque)
{
g_free(opaque);
(void) opaque;
}
static struct audio_option alsa_options[] = {
{
.name = "DAC_SIZE_IN_USEC",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.size_in_usec_out,
.valp = &conf.size_in_usec_out,
.descr = "DAC period/buffer size in microseconds (otherwise in frames)"
},
{
.name = "DAC_PERIOD_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.period_size_out,
.valp = &conf.period_size_out,
.descr = "DAC period size (0 to go with system default)",
.overriddenp = &glob_conf.period_size_out_overridden
.overriddenp = &conf.period_size_out_overridden
},
{
.name = "DAC_BUFFER_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.buffer_size_out,
.valp = &conf.buffer_size_out,
.descr = "DAC buffer size (0 to go with system default)",
.overriddenp = &glob_conf.buffer_size_out_overridden
.overriddenp = &conf.buffer_size_out_overridden
},
{
.name = "ADC_SIZE_IN_USEC",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.size_in_usec_in,
.valp = &conf.size_in_usec_in,
.descr =
"ADC period/buffer size in microseconds (otherwise in frames)"
},
{
.name = "ADC_PERIOD_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.period_size_in,
.valp = &conf.period_size_in,
.descr = "ADC period size (0 to go with system default)",
.overriddenp = &glob_conf.period_size_in_overridden
.overriddenp = &conf.period_size_in_overridden
},
{
.name = "ADC_BUFFER_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.buffer_size_in,
.valp = &conf.buffer_size_in,
.descr = "ADC buffer size (0 to go with system default)",
.overriddenp = &glob_conf.buffer_size_in_overridden
.overriddenp = &conf.buffer_size_in_overridden
},
{
.name = "THRESHOLD",
.tag = AUD_OPT_INT,
.valp = &glob_conf.threshold,
.valp = &conf.threshold,
.descr = "(undocumented)"
},
{
.name = "DAC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.pcm_name_out,
.valp = &conf.pcm_name_out,
.descr = "DAC device name (for instance dmix)"
},
{
.name = "ADC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.pcm_name_in,
.valp = &conf.pcm_name_in,
.descr = "ADC device name"
},
{
.name = "VERBOSE",
.tag = AUD_OPT_BOOL,
.valp = &conf.verbose,
.descr = "Behave in a more verbose way"
},
{ /* End of list */ }
};


@@ -21,17 +21,16 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "hw/hw.h"
#include "audio.h"
#include "monitor/monitor.h"
#include "qemu/timer.h"
#include "sysemu/sysemu.h"
#include "qemu/cutils.h"
#define AUDIO_CAP "audio"
#include "audio_int.h"
/* #define DEBUG_PLIVE */
/* #define DEBUG_LIVE */
/* #define DEBUG_OUT */
/* #define DEBUG_CAPTURE */
@@ -67,6 +66,8 @@ static struct {
int hertz;
int64_t ticks;
} period;
int plive;
int log_to_monitor;
int try_poll_in;
int try_poll_out;
} conf = {
@@ -95,6 +96,8 @@ static struct {
},
.period = { .hertz = 100 },
.plive = 0,
.log_to_monitor = 0,
.try_poll_in = 1,
.try_poll_out = 1,
};
@@ -328,11 +331,20 @@ static const char *audio_get_conf_str (const char *key,
void AUD_vlog (const char *cap, const char *fmt, va_list ap)
{
if (cap) {
fprintf(stderr, "%s: ", cap);
}
if (conf.log_to_monitor) {
if (cap) {
monitor_printf(default_mon, "%s: ", cap);
}
vfprintf(stderr, fmt, ap);
monitor_vprintf(default_mon, fmt, ap);
}
else {
if (cap) {
fprintf (stderr, "%s: ", cap);
}
vfprintf (stderr, fmt, ap);
}
}
void AUD_log (const char *cap, const char *fmt, ...)
@@ -1131,6 +1143,8 @@ static void audio_timer (void *opaque)
*/
int AUD_write (SWVoiceOut *sw, void *buf, int size)
{
int bytes;
if (!sw) {
/* XXX: Consider options */
return size;
@@ -1141,11 +1155,14 @@ int AUD_write (SWVoiceOut *sw, void *buf, int size)
return 0;
}
return sw->hw->pcm_ops->write(sw, buf, size);
bytes = sw->hw->pcm_ops->write (sw, buf, size);
return bytes;
}
int AUD_read (SWVoiceIn *sw, void *buf, int size)
{
int bytes;
if (!sw) {
/* XXX: Consider options */
return size;
@@ -1156,7 +1173,8 @@ int AUD_read (SWVoiceIn *sw, void *buf, int size)
return 0;
}
return sw->hw->pcm_ops->read(sw, buf, size);
bytes = sw->hw->pcm_ops->read (sw, buf, size);
return bytes;
}
int AUD_get_buffer_size_out (SWVoiceOut *sw)
@@ -1436,6 +1454,9 @@ static void audio_run_out (AudioState *s)
while (sw) {
sw1 = sw->entries.le_next;
if (!sw->active && !sw->callback.fn) {
#ifdef DEBUG_PLIVE
dolog ("Finishing with old voice\n");
#endif
audio_close_out (sw);
}
sw = sw1;
@@ -1627,6 +1648,18 @@ static struct audio_option audio_options[] = {
.valp = &conf.period.hertz,
.descr = "Timer period in HZ (0 - use lowest possible)"
},
{
.name = "PLIVE",
.tag = AUD_OPT_BOOL,
.valp = &conf.plive,
.descr = "(undocumented)"
},
{
.name = "LOG_TO_MONITOR",
.tag = AUD_OPT_BOOL,
.valp = &conf.log_to_monitor,
.descr = "Print logging messages to monitor instead of stderr"
},
{ /* End of list */ }
};
@@ -1739,21 +1772,13 @@ static void audio_vm_change_state_handler (void *opaque, int running,
audio_reset_timer (s);
}
static bool is_cleaning_up;
bool audio_is_cleaning_up(void)
{
return is_cleaning_up;
}
void audio_cleanup(void)
static void audio_atexit (void)
{
AudioState *s = &glob_audio_state;
HWVoiceOut *hwo, *hwon;
HWVoiceIn *hwi, *hwin;
HWVoiceOut *hwo = NULL;
HWVoiceIn *hwi = NULL;
is_cleaning_up = true;
QLIST_FOREACH_SAFE(hwo, &glob_audio_state.hw_head_out, entries, hwon) {
while ((hwo = audio_pcm_hw_find_any_out (hwo))) {
SWVoiceCap *sc;
if (hwo->enabled) {
@@ -1769,20 +1794,17 @@ void audio_cleanup(void)
cb->ops.destroy (cb->opaque);
}
}
QLIST_REMOVE(hwo, entries);
}
QLIST_FOREACH_SAFE(hwi, &glob_audio_state.hw_head_in, entries, hwin) {
while ((hwi = audio_pcm_hw_find_any_in (hwi))) {
if (hwi->enabled) {
hwi->pcm_ops->ctl_in (hwi, VOICE_DISABLE);
}
hwi->pcm_ops->fini_in (hwi);
QLIST_REMOVE(hwi, entries);
}
if (s->drv) {
s->drv->fini (s->drv_opaque);
s->drv = NULL;
}
}
@@ -1790,7 +1812,8 @@ static const VMStateDescription vmstate_audio = {
.name = "audio",
.version_id = 1,
.minimum_version_id = 1,
.fields = (VMStateField[]) {
.minimum_version_id_old = 1,
.fields = (VMStateField []) {
VMSTATE_END_OF_LIST()
}
};
@@ -1810,9 +1833,12 @@ static void audio_init (void)
QLIST_INIT (&s->hw_head_out);
QLIST_INIT (&s->hw_head_in);
QLIST_INIT (&s->cap_head);
atexit(audio_cleanup);
atexit (audio_atexit);
s->ts = timer_new_ns(QEMU_CLOCK_VIRTUAL, audio_timer, s);
if (!s->ts) {
hw_error("Could not create audio timer\n");
}
audio_process_options ("AUDIO", audio_options);
@@ -1863,8 +1889,12 @@ static void audio_init (void)
if (!done) {
done = !audio_driver_init (s, &no_audio_driver);
assert(done);
dolog("warning: Using timer based audio emulation\n");
if (!done) {
hw_error("Could not initialize audio subsystem\n");
}
else {
dolog ("warning: Using timer based audio emulation\n");
}
}
if (conf.period.hertz <= 0) {
@@ -1875,7 +1905,8 @@ static void audio_init (void)
}
conf.period.ticks = 1;
} else {
conf.period.ticks = NANOSECONDS_PER_SECOND / conf.period.hertz;
conf.period.ticks =
muldiv64 (1, get_ticks_per_sec (), conf.period.hertz);
}
e = qemu_add_vm_change_state_handler (audio_vm_change_state_handler, s);
@@ -1977,7 +2008,8 @@ CaptureVoiceOut *AUD_add_capture (
QLIST_INSERT_HEAD (&s->cap_head, cap, entries);
QLIST_INSERT_HEAD (&cap->cb_head, cb, entries);
QLIST_FOREACH(hw, &glob_audio_state.hw_head_out, entries) {
hw = NULL;
while ((hw = audio_pcm_hw_find_any_out (hw))) {
audio_attach_capture (hw);
}
return cap;


@@ -21,10 +21,10 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#ifndef QEMU_AUDIO_H
#define QEMU_AUDIO_H
#include "config-host.h"
#include "qemu/queue.h"
typedef void (*audio_callback_fn) (void *opaque, int avail);
@@ -163,7 +163,4 @@ static inline void *advance (void *p, int incr)
int wav_start_capture (CaptureState *s, const char *path, int freq,
int bits, int nchannels);
bool audio_is_cleaning_up(void);
void audio_cleanup(void);
#endif /* QEMU_AUDIO_H */
#endif /* audio.h */


@@ -21,7 +21,6 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#ifndef QEMU_AUDIO_INT_H
#define QEMU_AUDIO_INT_H
@@ -157,13 +156,13 @@ struct audio_driver {
};
struct audio_pcm_ops {
int (*init_out)(HWVoiceOut *hw, struct audsettings *as, void *drv_opaque);
int (*init_out)(HWVoiceOut *hw, struct audsettings *as);
void (*fini_out)(HWVoiceOut *hw);
int (*run_out) (HWVoiceOut *hw, int live);
int (*write) (SWVoiceOut *sw, void *buf, int size);
int (*ctl_out) (HWVoiceOut *hw, int cmd, ...);
int (*init_in) (HWVoiceIn *hw, struct audsettings *as, void *drv_opaque);
int (*init_in) (HWVoiceIn *hw, struct audsettings *as);
void (*fini_in) (HWVoiceIn *hw);
int (*run_in) (HWVoiceIn *hw);
int (*read) (SWVoiceIn *sw, void *buf, int size);
@@ -207,11 +206,14 @@ extern struct audio_driver no_audio_driver;
extern struct audio_driver oss_audio_driver;
extern struct audio_driver sdl_audio_driver;
extern struct audio_driver wav_audio_driver;
extern struct audio_driver fmod_audio_driver;
extern struct audio_driver alsa_audio_driver;
extern struct audio_driver coreaudio_audio_driver;
extern struct audio_driver dsound_audio_driver;
extern struct audio_driver esd_audio_driver;
extern struct audio_driver pa_audio_driver;
extern struct audio_driver spice_audio_driver;
extern struct audio_driver winwave_audio_driver;
extern const struct mixeng_volume nominal_volume;
void audio_pcm_init_info (struct audio_pcm_info *info, struct audsettings *as);
@@ -258,4 +260,4 @@ static inline int audio_ring_dist (int dst, int src, int len)
#define AUDIO_FUNC __FILE__ ":" AUDIO_STRINGIFY (__LINE__)
#endif
#endif /* QEMU_AUDIO_INT_H */
#endif /* audio_int.h */


@@ -1,4 +1,3 @@
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "audio.h"


@@ -19,4 +19,4 @@ int audio_pt_wait (struct audio_pt *, const char *);
int audio_pt_unlock_and_signal (struct audio_pt *, const char *);
int audio_pt_join (struct audio_pt *, void **, const char *);
#endif /* QEMU_AUDIO_PT_INT_H */
#endif /* audio_pt_int.h */


@@ -71,7 +71,10 @@ static void glue (audio_init_nb_voices_, TYPE) (struct audio_driver *drv)
static void glue (audio_pcm_hw_free_resources_, TYPE) (HW *hw)
{
g_free (HWBUF);
if (HWBUF) {
g_free (HWBUF);
}
HWBUF = NULL;
}
@@ -89,7 +92,9 @@ static int glue (audio_pcm_hw_alloc_resources_, TYPE) (HW *hw)
static void glue (audio_pcm_sw_free_resources_, TYPE) (SW *sw)
{
g_free (sw->buf);
if (sw->buf) {
g_free (sw->buf);
}
if (sw->rate) {
st_rate_stop (sw->rate);
@@ -167,8 +172,10 @@ static int glue (audio_pcm_sw_init_, TYPE) (
static void glue (audio_pcm_sw_fini_, TYPE) (SW *sw)
{
glue (audio_pcm_sw_free_resources_, TYPE) (sw);
g_free (sw->name);
sw->name = NULL;
if (sw->name) {
g_free (sw->name);
sw->name = NULL;
}
}
static void glue (audio_pcm_hw_add_sw_, TYPE) (HW *hw, SW *sw)
@@ -191,9 +198,9 @@ static void glue (audio_pcm_hw_gc_, TYPE) (HW **hwp)
audio_detach_capture (hw);
#endif
QLIST_REMOVE (hw, entries);
glue (hw->pcm_ops->fini_, TYPE) (hw);
glue (s->nb_hw_voices_, TYPE) += 1;
glue (audio_pcm_hw_free_resources_ ,TYPE) (hw);
glue (hw->pcm_ops->fini_, TYPE) (hw);
g_free (hw);
*hwp = NULL;
}
@@ -262,7 +269,7 @@ static HW *glue (audio_pcm_hw_add_new_, TYPE) (struct audsettings *as)
#ifdef DAC
QLIST_INIT (&hw->cap_head);
#endif
if (glue (hw->pcm_ops->init_, TYPE) (hw, as, s->drv_opaque)) {
if (glue (hw->pcm_ops->init_, TYPE) (hw, as)) {
goto err0;
}
@@ -398,6 +405,10 @@ SW *glue (AUD_open_, TYPE) (
)
{
AudioState *s = &glob_audio_state;
#ifdef DAC
int live = 0;
SW *old_sw = NULL;
#endif
if (audio_bug (AUDIO_FUNC, !card || !name || !callback_fn || !as)) {
dolog ("card=%p name=%p callback_fn=%p as=%p\n",
@@ -422,6 +433,29 @@ SW *glue (AUD_open_, TYPE) (
return sw;
}
#ifdef DAC
if (conf.plive && sw && (!sw->active && !sw->empty)) {
live = sw->total_hw_samples_mixed;
#ifdef DEBUG_PLIVE
dolog ("Replacing voice %s with %d live samples\n", SW_NAME (sw), live);
dolog ("Old %s freq %d, bits %d, channels %d\n",
SW_NAME (sw), sw->info.freq, sw->info.bits, sw->info.nchannels);
dolog ("New %s freq %d, bits %d, channels %d\n",
name,
as->freq,
(as->fmt == AUD_FMT_S16 || as->fmt == AUD_FMT_U16) ? 16 : 8,
as->nchannels);
#endif
if (live) {
old_sw = sw;
old_sw->callback.fn = NULL;
sw = NULL;
}
}
#endif
if (!glue (conf.fixed_, TYPE).enabled && sw) {
glue (AUD_close_, TYPE) (card, sw);
sw = NULL;
@@ -454,6 +488,20 @@ SW *glue (AUD_open_, TYPE) (
sw->callback.fn = callback_fn;
sw->callback.opaque = callback_opaque;
#ifdef DAC
if (live) {
int mixed =
(live << old_sw->info.shift)
* old_sw->info.bytes_per_second
/ sw->info.bytes_per_second;
#ifdef DEBUG_PLIVE
dolog ("Silence will be mixed %d\n", mixed);
#endif
sw->total_hw_samples_mixed += mixed;
}
#endif
#ifdef DEBUG_AUDIO
dolog ("%s\n", name);
audio_pcm_print_info ("hw", &sw->hw->info);


@@ -1,6 +1,5 @@
/* public domain */
#include "qemu/osdep.h"
#include "qemu-common.h"
#define AUDIO_CAP "win-int"


@@ -22,8 +22,8 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include <CoreAudio/CoreAudio.h>
#include <string.h> /* strerror */
#include <pthread.h> /* pthread_X */
#include "qemu-common.h"
@@ -32,248 +32,28 @@
#define AUDIO_CAP "coreaudio"
#include "audio_int.h"
#ifndef MAC_OS_X_VERSION_10_6
#define MAC_OS_X_VERSION_10_6 1060
#endif
typedef struct {
struct {
int buffer_frames;
int nbuffers;
} CoreaudioConf;
int isAtexit;
} conf = {
.buffer_frames = 512,
.nbuffers = 4,
.isAtexit = 0
};
typedef struct coreaudioVoiceOut {
HWVoiceOut hw;
pthread_mutex_t mutex;
int isAtexit;
AudioDeviceID outputDeviceID;
UInt32 audioDevicePropertyBufferFrameSize;
AudioStreamBasicDescription outputStreamBasicDescription;
AudioDeviceIOProcID ioprocid;
int live;
int decr;
int rpos;
} coreaudioVoiceOut;
#if MAC_OS_X_VERSION_MAX_ALLOWED >= MAC_OS_X_VERSION_10_6
/* The APIs used here only become available from 10.6 */
static OSStatus coreaudio_get_voice(AudioDeviceID *id)
{
UInt32 size = sizeof(*id);
AudioObjectPropertyAddress addr = {
kAudioHardwarePropertyDefaultOutputDevice,
kAudioObjectPropertyScopeGlobal,
kAudioObjectPropertyElementMaster
};
return AudioObjectGetPropertyData(kAudioObjectSystemObject,
&addr,
0,
NULL,
&size,
id);
}
static OSStatus coreaudio_get_framesizerange(AudioDeviceID id,
AudioValueRange *framerange)
{
UInt32 size = sizeof(*framerange);
AudioObjectPropertyAddress addr = {
kAudioDevicePropertyBufferFrameSizeRange,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
return AudioObjectGetPropertyData(id,
&addr,
0,
NULL,
&size,
framerange);
}
static OSStatus coreaudio_get_framesize(AudioDeviceID id, UInt32 *framesize)
{
UInt32 size = sizeof(*framesize);
AudioObjectPropertyAddress addr = {
kAudioDevicePropertyBufferFrameSize,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
return AudioObjectGetPropertyData(id,
&addr,
0,
NULL,
&size,
framesize);
}
static OSStatus coreaudio_set_framesize(AudioDeviceID id, UInt32 *framesize)
{
UInt32 size = sizeof(*framesize);
AudioObjectPropertyAddress addr = {
kAudioDevicePropertyBufferFrameSize,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
return AudioObjectSetPropertyData(id,
&addr,
0,
NULL,
size,
framesize);
}
static OSStatus coreaudio_get_streamformat(AudioDeviceID id,
AudioStreamBasicDescription *d)
{
UInt32 size = sizeof(*d);
AudioObjectPropertyAddress addr = {
kAudioDevicePropertyStreamFormat,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
return AudioObjectGetPropertyData(id,
&addr,
0,
NULL,
&size,
d);
}
static OSStatus coreaudio_set_streamformat(AudioDeviceID id,
AudioStreamBasicDescription *d)
{
UInt32 size = sizeof(*d);
AudioObjectPropertyAddress addr = {
kAudioDevicePropertyStreamFormat,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
return AudioObjectSetPropertyData(id,
&addr,
0,
NULL,
size,
d);
}
static OSStatus coreaudio_get_isrunning(AudioDeviceID id, UInt32 *result)
{
UInt32 size = sizeof(*result);
AudioObjectPropertyAddress addr = {
kAudioDevicePropertyDeviceIsRunning,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
return AudioObjectGetPropertyData(id,
&addr,
0,
NULL,
&size,
result);
}
#else
/* Legacy versions of functions using deprecated APIs */
static OSStatus coreaudio_get_voice(AudioDeviceID *id)
{
UInt32 size = sizeof(*id);
return AudioHardwareGetProperty(
kAudioHardwarePropertyDefaultOutputDevice,
&size,
id);
}
static OSStatus coreaudio_get_framesizerange(AudioDeviceID id,
AudioValueRange *framerange)
{
UInt32 size = sizeof(*framerange);
return AudioDeviceGetProperty(
id,
0,
0,
kAudioDevicePropertyBufferFrameSizeRange,
&size,
framerange);
}
static OSStatus coreaudio_get_framesize(AudioDeviceID id, UInt32 *framesize)
{
UInt32 size = sizeof(*framesize);
return AudioDeviceGetProperty(
id,
0,
false,
kAudioDevicePropertyBufferFrameSize,
&size,
framesize);
}
static OSStatus coreaudio_set_framesize(AudioDeviceID id, UInt32 *framesize)
{
UInt32 size = sizeof(*framesize);
return AudioDeviceSetProperty(
id,
NULL,
0,
false,
kAudioDevicePropertyBufferFrameSize,
size,
framesize);
}
static OSStatus coreaudio_get_streamformat(AudioDeviceID id,
AudioStreamBasicDescription *d)
{
UInt32 size = sizeof(*d);
return AudioDeviceGetProperty(
id,
0,
false,
kAudioDevicePropertyStreamFormat,
&size,
d);
}
static OSStatus coreaudio_set_streamformat(AudioDeviceID id,
AudioStreamBasicDescription *d)
{
UInt32 size = sizeof(*d);
return AudioDeviceSetProperty(
id,
0,
0,
0,
kAudioDevicePropertyStreamFormat,
size,
d);
}
static OSStatus coreaudio_get_isrunning(AudioDeviceID id, UInt32 *result)
{
UInt32 size = sizeof(*result);
return AudioDeviceGetProperty(
id,
0,
0,
kAudioDevicePropertyDeviceIsRunning,
&size,
result);
}
#endif
static void coreaudio_logstatus (OSStatus status)
{
const char *str = "BUG";
@@ -368,7 +148,10 @@ static inline UInt32 isPlaying (AudioDeviceID outputDeviceID)
{
OSStatus status;
UInt32 result = 0;
status = coreaudio_get_isrunning(outputDeviceID, &result);
UInt32 propertySize = sizeof(outputDeviceID);
status = AudioDeviceGetProperty(
outputDeviceID, 0, 0,
kAudioDevicePropertyDeviceIsRunning, &propertySize, &result);
if (status != kAudioHardwareNoError) {
coreaudio_logerr(status,
"Could not determine whether Device is playing\n");
@@ -376,6 +159,11 @@ static inline UInt32 isPlaying (AudioDeviceID outputDeviceID)
return result;
}
static void coreaudio_atexit (void)
{
conf.isAtexit = 1;
}
static int coreaudio_lock (coreaudioVoiceOut *core, const char *fn_name)
{
int err;
@@ -499,15 +287,14 @@ static int coreaudio_write (SWVoiceOut *sw, void *buf, int len)
return audio_pcm_sw_write (sw, buf, len);
}
static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int coreaudio_init_out (HWVoiceOut *hw, struct audsettings *as)
{
OSStatus status;
coreaudioVoiceOut *core = (coreaudioVoiceOut *) hw;
UInt32 propertySize;
int err;
const char *typ = "playback";
AudioValueRange frameRange;
CoreaudioConf *conf = drv_opaque;
/* create mutex */
err = pthread_mutex_init(&core->mutex, NULL);
@@ -518,7 +305,12 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
audio_pcm_init_info (&hw->info, as);
status = coreaudio_get_voice(&core->outputDeviceID);
/* open default output device */
propertySize = sizeof(core->outputDeviceID);
status = AudioHardwareGetProperty(
kAudioHardwarePropertyDefaultOutputDevice,
&propertySize,
&core->outputDeviceID);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ,
"Could not get default output Device\n");
@@ -530,29 +322,42 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
}
/* get minimum and maximum buffer frame sizes */
status = coreaudio_get_framesizerange(core->outputDeviceID,
&frameRange);
propertySize = sizeof(frameRange);
status = AudioDeviceGetProperty(
core->outputDeviceID,
0,
0,
kAudioDevicePropertyBufferFrameSizeRange,
&propertySize,
&frameRange);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ,
"Could not get device buffer frame range\n");
return -1;
}
if (frameRange.mMinimum > conf->buffer_frames) {
if (frameRange.mMinimum > conf.buffer_frames) {
core->audioDevicePropertyBufferFrameSize = (UInt32) frameRange.mMinimum;
dolog ("warning: Upsizing Buffer Frames to %f\n", frameRange.mMinimum);
}
else if (frameRange.mMaximum < conf->buffer_frames) {
else if (frameRange.mMaximum < conf.buffer_frames) {
core->audioDevicePropertyBufferFrameSize = (UInt32) frameRange.mMaximum;
dolog ("warning: Downsizing Buffer Frames to %f\n", frameRange.mMaximum);
}
else {
core->audioDevicePropertyBufferFrameSize = conf->buffer_frames;
core->audioDevicePropertyBufferFrameSize = conf.buffer_frames;
}
/* set Buffer Frame Size */
status = coreaudio_set_framesize(core->outputDeviceID,
&core->audioDevicePropertyBufferFrameSize);
propertySize = sizeof(core->audioDevicePropertyBufferFrameSize);
status = AudioDeviceSetProperty(
core->outputDeviceID,
NULL,
0,
false,
kAudioDevicePropertyBufferFrameSize,
propertySize,
&core->audioDevicePropertyBufferFrameSize);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ,
"Could not set device buffer frame size %" PRIu32 "\n",
@@ -561,18 +366,30 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
}
/* get Buffer Frame Size */
status = coreaudio_get_framesize(core->outputDeviceID,
&core->audioDevicePropertyBufferFrameSize);
propertySize = sizeof(core->audioDevicePropertyBufferFrameSize);
status = AudioDeviceGetProperty(
core->outputDeviceID,
0,
false,
kAudioDevicePropertyBufferFrameSize,
&propertySize,
&core->audioDevicePropertyBufferFrameSize);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ,
"Could not get device buffer frame size\n");
return -1;
}
hw->samples = conf->nbuffers * core->audioDevicePropertyBufferFrameSize;
hw->samples = conf.nbuffers * core->audioDevicePropertyBufferFrameSize;
/* get StreamFormat */
status = coreaudio_get_streamformat(core->outputDeviceID,
&core->outputStreamBasicDescription);
propertySize = sizeof(core->outputStreamBasicDescription);
status = AudioDeviceGetProperty(
core->outputDeviceID,
0,
false,
kAudioDevicePropertyStreamFormat,
&propertySize,
&core->outputStreamBasicDescription);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ,
"Could not get Device Stream properties\n");
@@ -582,8 +399,15 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
/* set Samplerate */
core->outputStreamBasicDescription.mSampleRate = (Float64) as->freq;
status = coreaudio_set_streamformat(core->outputDeviceID,
&core->outputStreamBasicDescription);
propertySize = sizeof(core->outputStreamBasicDescription);
status = AudioDeviceSetProperty(
core->outputDeviceID,
0,
0,
0,
kAudioDevicePropertyStreamFormat,
propertySize,
&core->outputStreamBasicDescription);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ, "Could not set samplerate %d\n",
as->freq);
@@ -592,12 +416,8 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
}
/* set Callback */
core->ioprocid = NULL;
status = AudioDeviceCreateIOProcID(core->outputDeviceID,
audioDeviceIOProc,
hw,
&core->ioprocid);
if (status != kAudioHardwareNoError || core->ioprocid == NULL) {
status = AudioDeviceAddIOProc(core->outputDeviceID, audioDeviceIOProc, hw);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ, "Could not set IOProc\n");
core->outputDeviceID = kAudioDeviceUnknown;
return -1;
@@ -605,10 +425,10 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
/* start Playback */
if (!isPlaying(core->outputDeviceID)) {
status = AudioDeviceStart(core->outputDeviceID, core->ioprocid);
status = AudioDeviceStart(core->outputDeviceID, audioDeviceIOProc);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ, "Could not start playback\n");
AudioDeviceDestroyIOProcID(core->outputDeviceID, core->ioprocid);
AudioDeviceRemoveIOProc(core->outputDeviceID, audioDeviceIOProc);
core->outputDeviceID = kAudioDeviceUnknown;
return -1;
}
@@ -623,18 +443,18 @@ static void coreaudio_fini_out (HWVoiceOut *hw)
int err;
coreaudioVoiceOut *core = (coreaudioVoiceOut *) hw;
if (!audio_is_cleaning_up()) {
if (!conf.isAtexit) {
/* stop playback */
if (isPlaying(core->outputDeviceID)) {
status = AudioDeviceStop(core->outputDeviceID, core->ioprocid);
status = AudioDeviceStop(core->outputDeviceID, audioDeviceIOProc);
if (status != kAudioHardwareNoError) {
coreaudio_logerr (status, "Could not stop playback\n");
}
}
/* remove callback */
status = AudioDeviceDestroyIOProcID(core->outputDeviceID,
core->ioprocid);
status = AudioDeviceRemoveIOProc(core->outputDeviceID,
audioDeviceIOProc);
if (status != kAudioHardwareNoError) {
coreaudio_logerr (status, "Could not remove IOProc\n");
}
@@ -657,7 +477,7 @@ static int coreaudio_ctl_out (HWVoiceOut *hw, int cmd, ...)
case VOICE_ENABLE:
/* start playback */
if (!isPlaying(core->outputDeviceID)) {
status = AudioDeviceStart(core->outputDeviceID, core->ioprocid);
status = AudioDeviceStart(core->outputDeviceID, audioDeviceIOProc);
if (status != kAudioHardwareNoError) {
coreaudio_logerr (status, "Could not resume playback\n");
}
@@ -666,10 +486,9 @@ static int coreaudio_ctl_out (HWVoiceOut *hw, int cmd, ...)
case VOICE_DISABLE:
/* stop playback */
if (!audio_is_cleaning_up()) {
if (!conf.isAtexit) {
if (isPlaying(core->outputDeviceID)) {
status = AudioDeviceStop(core->outputDeviceID,
core->ioprocid);
status = AudioDeviceStop(core->outputDeviceID, audioDeviceIOProc);
if (status != kAudioHardwareNoError) {
coreaudio_logerr (status, "Could not pause playback\n");
}
@@ -680,35 +499,28 @@ static int coreaudio_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static CoreaudioConf glob_conf = {
.buffer_frames = 512,
.nbuffers = 4,
};
static void *coreaudio_audio_init (void)
{
CoreaudioConf *conf = g_malloc(sizeof(CoreaudioConf));
*conf = glob_conf;
return conf;
atexit(coreaudio_atexit);
return &coreaudio_audio_init;
}
static void coreaudio_audio_fini (void *opaque)
{
g_free(opaque);
(void) opaque;
}
static struct audio_option coreaudio_options[] = {
{
.name = "BUFFER_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.buffer_frames,
.valp = &conf.buffer_frames,
.descr = "Size of the buffer in frames"
},
{
.name = "BUFFER_COUNT",
.tag = AUD_OPT_INT,
.valp = &glob_conf.nbuffers,
.valp = &conf.nbuffers,
.descr = "Number of buffers"
},
{ /* End of list */ }


@@ -67,11 +67,11 @@ static int glue (dsound_lock_, TYPE) (
LPVOID *p2p,
DWORD *blen1p,
DWORD *blen2p,
int entire,
dsound *s
int entire
)
{
HRESULT hr;
int i;
LPVOID p1 = NULL, p2 = NULL;
DWORD blen1 = 0, blen2 = 0;
DWORD flag;
@@ -81,18 +81,37 @@ static int glue (dsound_lock_, TYPE) (
#else
flag = entire ? DSBLOCK_ENTIREBUFFER : 0;
#endif
hr = glue(IFACE, _Lock)(buf, pos, len, &p1, &blen1, &p2, &blen2, flag);
for (i = 0; i < conf.lock_retries; ++i) {
hr = glue (IFACE, _Lock) (
buf,
pos,
len,
&p1,
&blen1,
&p2,
&blen2,
flag
);
if (FAILED (hr)) {
if (FAILED (hr)) {
#ifndef DSBTYPE_IN
if (hr == DSERR_BUFFERLOST) {
if (glue (dsound_restore_, TYPE) (buf, s)) {
dsound_logerr (hr, "Could not lock " NAME "\n");
if (hr == DSERR_BUFFERLOST) {
if (glue (dsound_restore_, TYPE) (buf)) {
dsound_logerr (hr, "Could not lock " NAME "\n");
goto fail;
}
continue;
}
#endif
dsound_logerr (hr, "Could not lock " NAME "\n");
goto fail;
}
#endif
dsound_logerr (hr, "Could not lock " NAME "\n");
break;
}
if (i == conf.lock_retries) {
dolog ("%d attempts to lock " NAME " failed\n", i);
goto fail;
}
@@ -155,19 +174,16 @@ static void dsound_fini_out (HWVoiceOut *hw)
}
#ifdef DSBTYPE_IN
static int dsound_init_in(HWVoiceIn *hw, struct audsettings *as,
void *drv_opaque)
static int dsound_init_in (HWVoiceIn *hw, struct audsettings *as)
#else
static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int dsound_init_out (HWVoiceOut *hw, struct audsettings *as)
#endif
{
int err;
HRESULT hr;
dsound *s = drv_opaque;
dsound *s = &glob_dsound;
WAVEFORMATEX wfx;
struct audsettings obt_as;
DSoundConf *conf = &s->conf;
#ifdef DSBTYPE_IN
const char *typ = "ADC";
DSoundVoiceIn *ds = (DSoundVoiceIn *) hw;
@@ -194,7 +210,7 @@ static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
bd.dwSize = sizeof (bd);
bd.lpwfxFormat = &wfx;
#ifdef DSBTYPE_IN
bd.dwBufferBytes = conf->bufsize_in;
bd.dwBufferBytes = conf.bufsize_in;
hr = IDirectSoundCapture_CreateCaptureBuffer (
s->dsound_capture,
&bd,
@@ -203,7 +219,7 @@ static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
);
#else
bd.dwFlags = DSBCAPS_STICKYFOCUS | DSBCAPS_GETCURRENTPOSITION2;
bd.dwBufferBytes = conf->bufsize_out;
bd.dwBufferBytes = conf.bufsize_out;
hr = IDirectSound_CreateSoundBuffer (
s->dsound,
&bd,
@@ -253,7 +269,6 @@ static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
);
}
hw->samples = bc.dwBufferBytes >> hw->info.shift;
ds->s = s;
#ifdef DEBUG_DSOUND
dolog ("caps %ld, desc %ld\n",


@@ -26,7 +26,6 @@
* SEAL 1.07 by Carlos 'pel' Hasan was used as documentation
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "audio.h"
@@ -42,25 +41,42 @@
/* #define DEBUG_DSOUND */
typedef struct {
static struct {
int lock_retries;
int restore_retries;
int getstatus_retries;
int set_primary;
int bufsize_in;
int bufsize_out;
struct audsettings settings;
int latency_millis;
} DSoundConf;
} conf = {
.lock_retries = 1,
.restore_retries = 1,
.getstatus_retries = 1,
.set_primary = 0,
.bufsize_in = 16384,
.bufsize_out = 16384,
.settings.freq = 44100,
.settings.nchannels = 2,
.settings.fmt = AUD_FMT_S16,
.latency_millis = 10
};
typedef struct {
LPDIRECTSOUND dsound;
LPDIRECTSOUNDCAPTURE dsound_capture;
LPDIRECTSOUNDBUFFER dsound_primary_buffer;
struct audsettings settings;
DSoundConf conf;
} dsound;
static dsound glob_dsound;
typedef struct {
HWVoiceOut hw;
LPDIRECTSOUNDBUFFER dsound_buffer;
DWORD old_pos;
int first_time;
dsound *s;
#ifdef DEBUG_DSOUND
DWORD old_ppos;
DWORD played;
@@ -72,7 +88,6 @@ typedef struct {
HWVoiceIn hw;
int first_time;
LPDIRECTSOUNDCAPTUREBUFFER dsound_capture_buffer;
dsound *s;
} DSoundVoiceIn;
static void dsound_log_hresult (HRESULT hr)
@@ -266,17 +281,29 @@ static void print_wave_format (WAVEFORMATEX *wfx)
}
#endif
static int dsound_restore_out (LPDIRECTSOUNDBUFFER dsb, dsound *s)
static int dsound_restore_out (LPDIRECTSOUNDBUFFER dsb)
{
HRESULT hr;
int i;
hr = IDirectSoundBuffer_Restore (dsb);
for (i = 0; i < conf.restore_retries; ++i) {
hr = IDirectSoundBuffer_Restore (dsb);
if (hr != DS_OK) {
dsound_logerr (hr, "Could not restore playback buffer\n");
return -1;
switch (hr) {
case DS_OK:
return 0;
case DSERR_BUFFERLOST:
continue;
default:
dsound_logerr (hr, "Could not restore playback buffer\n");
return -1;
}
}
return 0;
dolog ("%d attempts to restore playback buffer failed\n", i);
return -1;
}
#include "dsound_template.h"
@@ -284,20 +311,25 @@ static int dsound_restore_out (LPDIRECTSOUNDBUFFER dsb, dsound *s)
#include "dsound_template.h"
#undef DSBTYPE_IN
static int dsound_get_status_out (LPDIRECTSOUNDBUFFER dsb, DWORD *statusp,
dsound *s)
static int dsound_get_status_out (LPDIRECTSOUNDBUFFER dsb, DWORD *statusp)
{
HRESULT hr;
int i;
hr = IDirectSoundBuffer_GetStatus (dsb, statusp);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not get playback buffer status\n");
return -1;
}
for (i = 0; i < conf.getstatus_retries; ++i) {
hr = IDirectSoundBuffer_GetStatus (dsb, statusp);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not get playback buffer status\n");
return -1;
}
if (*statusp & DSERR_BUFFERLOST) {
dsound_restore_out(dsb, s);
return -1;
if (*statusp & DSERR_BUFFERLOST) {
if (dsound_restore_out (dsb)) {
return -1;
}
continue;
}
break;
}
return 0;
@@ -344,8 +376,7 @@ static void dsound_write_sample (HWVoiceOut *hw, uint8_t *dst, int dst_len)
hw->rpos = pos % hw->samples;
}
static void dsound_clear_sample (HWVoiceOut *hw, LPDIRECTSOUNDBUFFER dsb,
dsound *s)
static void dsound_clear_sample (HWVoiceOut *hw, LPDIRECTSOUNDBUFFER dsb)
{
int err;
LPVOID p1, p2;
@@ -358,8 +389,7 @@ static void dsound_clear_sample (HWVoiceOut *hw, LPDIRECTSOUNDBUFFER dsb,
hw->samples << hw->info.shift,
&p1, &p2,
&blen1, &blen2,
1,
s
1
);
if (err) {
return;
@@ -385,9 +415,25 @@ static void dsound_clear_sample (HWVoiceOut *hw, LPDIRECTSOUNDBUFFER dsb,
dsound_unlock_out (dsb, p1, p2, blen1, blen2);
}
static int dsound_open (dsound *s)
static void dsound_close (dsound *s)
{
HRESULT hr;
if (s->dsound_primary_buffer) {
hr = IDirectSoundBuffer_Release (s->dsound_primary_buffer);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not release primary buffer\n");
}
s->dsound_primary_buffer = NULL;
}
}
static int dsound_open (dsound *s)
{
int err;
HRESULT hr;
WAVEFORMATEX wfx;
DSBUFFERDESC dsbd;
HWND hwnd;
hwnd = GetForegroundWindow ();
@@ -403,7 +449,63 @@ static int dsound_open (dsound *s)
return -1;
}
if (!conf.set_primary) {
return 0;
}
err = waveformat_from_audio_settings (&wfx, &conf.settings);
if (err) {
return -1;
}
memset (&dsbd, 0, sizeof (dsbd));
dsbd.dwSize = sizeof (dsbd);
dsbd.dwFlags = DSBCAPS_PRIMARYBUFFER;
dsbd.dwBufferBytes = 0;
dsbd.lpwfxFormat = NULL;
hr = IDirectSound_CreateSoundBuffer (
s->dsound,
&dsbd,
&s->dsound_primary_buffer,
NULL
);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not create primary playback buffer\n");
return -1;
}
hr = IDirectSoundBuffer_SetFormat (s->dsound_primary_buffer, &wfx);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not set primary playback buffer format\n");
}
hr = IDirectSoundBuffer_GetFormat (
s->dsound_primary_buffer,
&wfx,
sizeof (wfx),
NULL
);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not get primary playback buffer format\n");
goto fail0;
}
#ifdef DEBUG_DSOUND
dolog ("Primary\n");
print_wave_format (&wfx);
#endif
err = waveformat_to_audio_settings (&wfx, &s->settings);
if (err) {
goto fail0;
}
return 0;
fail0:
dsound_close (s);
return -1;
}
static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
@@ -412,7 +514,6 @@ static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
DWORD status;
DSoundVoiceOut *ds = (DSoundVoiceOut *) hw;
LPDIRECTSOUNDBUFFER dsb = ds->dsound_buffer;
dsound *s = ds->s;
if (!dsb) {
dolog ("Attempt to control voice without a buffer\n");
@@ -421,7 +522,7 @@ static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
switch (cmd) {
case VOICE_ENABLE:
if (dsound_get_status_out (dsb, &status, s)) {
if (dsound_get_status_out (dsb, &status)) {
return -1;
}
@@ -430,7 +531,7 @@ static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
dsound_clear_sample (hw, dsb, s);
dsound_clear_sample (hw, dsb);
hr = IDirectSoundBuffer_Play (dsb, 0, 0, DSBPLAY_LOOPING);
if (FAILED (hr)) {
@@ -440,7 +541,7 @@ static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
break;
case VOICE_DISABLE:
if (dsound_get_status_out (dsb, &status, s)) {
if (dsound_get_status_out (dsb, &status)) {
return -1;
}
@@ -477,8 +578,6 @@ static int dsound_run_out (HWVoiceOut *hw, int live)
DWORD wpos, ppos, old_pos;
LPVOID p1, p2;
int bufsize;
dsound *s = ds->s;
DSoundConf *conf = &s->conf;
if (!dsb) {
dolog ("Attempt to run empty with playback buffer\n");
@@ -501,14 +600,14 @@ static int dsound_run_out (HWVoiceOut *hw, int live)
len = live << hwshift;
if (ds->first_time) {
if (conf->latency_millis) {
if (conf.latency_millis) {
DWORD cur_blat;
cur_blat = audio_ring_dist (wpos, ppos, bufsize);
ds->first_time = 0;
old_pos = wpos;
old_pos +=
millis_to_bytes (&hw->info, conf->latency_millis) - cur_blat;
millis_to_bytes (&hw->info, conf.latency_millis) - cur_blat;
old_pos %= bufsize;
old_pos &= ~hw->info.align;
}
@@ -564,8 +663,7 @@ static int dsound_run_out (HWVoiceOut *hw, int live)
len,
&p1, &p2,
&blen1, &blen2,
0,
s
0
);
if (err) {
return 0;
@@ -668,7 +766,6 @@ static int dsound_run_in (HWVoiceIn *hw)
DWORD cpos, rpos;
LPVOID p1, p2;
int hwshift;
dsound *s = ds->s;
if (!dscb) {
dolog ("Attempt to run without capture buffer\n");
@@ -723,8 +820,7 @@ static int dsound_run_in (HWVoiceIn *hw)
&p2,
&blen1,
&blen2,
0,
s
0
);
if (err) {
return 0;
@@ -747,19 +843,12 @@ static int dsound_run_in (HWVoiceIn *hw)
return decr;
}
static DSoundConf glob_conf = {
.bufsize_in = 16384,
.bufsize_out = 16384,
.latency_millis = 10
};
static void dsound_audio_fini (void *opaque)
{
HRESULT hr;
dsound *s = opaque;
if (!s->dsound) {
g_free(s);
return;
}
@@ -770,7 +859,6 @@ static void dsound_audio_fini (void *opaque)
s->dsound = NULL;
if (!s->dsound_capture) {
g_free(s);
return;
}
@@ -779,21 +867,17 @@ static void dsound_audio_fini (void *opaque)
dsound_logerr (hr, "Could not release DirectSoundCapture\n");
}
s->dsound_capture = NULL;
g_free(s);
}
static void *dsound_audio_init (void)
{
int err;
HRESULT hr;
dsound *s = g_malloc0(sizeof(dsound));
dsound *s = &glob_dsound;
s->conf = glob_conf;
hr = CoInitialize (NULL);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not initialize COM\n");
g_free(s);
return NULL;
}
@@ -806,7 +890,6 @@ static void *dsound_audio_init (void)
);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not create DirectSound instance\n");
g_free(s);
return NULL;
}
@@ -818,7 +901,7 @@ static void *dsound_audio_init (void)
if (FAILED (hr)) {
dsound_logerr (hr, "Could not release DirectSound\n");
}
g_free(s);
s->dsound = NULL;
return NULL;
}
@@ -855,22 +938,64 @@ static void *dsound_audio_init (void)
}
static struct audio_option dsound_options[] = {
{
.name = "LOCK_RETRIES",
.tag = AUD_OPT_INT,
.valp = &conf.lock_retries,
.descr = "Number of times to attempt locking the buffer"
},
{
.name = "RESTOURE_RETRIES",
.tag = AUD_OPT_INT,
.valp = &conf.restore_retries,
.descr = "Number of times to attempt restoring the buffer"
},
{
.name = "GETSTATUS_RETRIES",
.tag = AUD_OPT_INT,
.valp = &conf.getstatus_retries,
.descr = "Number of times to attempt getting status of the buffer"
},
{
.name = "SET_PRIMARY",
.tag = AUD_OPT_BOOL,
.valp = &conf.set_primary,
.descr = "Set the parameters of primary buffer"
},
{
.name = "LATENCY_MILLIS",
.tag = AUD_OPT_INT,
.valp = &glob_conf.latency_millis,
.valp = &conf.latency_millis,
.descr = "(undocumented)"
},
{
.name = "PRIMARY_FREQ",
.tag = AUD_OPT_INT,
.valp = &conf.settings.freq,
.descr = "Primary buffer frequency"
},
{
.name = "PRIMARY_CHANNELS",
.tag = AUD_OPT_INT,
.valp = &conf.settings.nchannels,
.descr = "Primary buffer number of channels (1 - mono, 2 - stereo)"
},
{
.name = "PRIMARY_FMT",
.tag = AUD_OPT_FMT,
.valp = &conf.settings.fmt,
.descr = "Primary buffer format"
},
{
.name = "BUFSIZE_OUT",
.tag = AUD_OPT_INT,
.valp = &glob_conf.bufsize_out,
.valp = &conf.bufsize_out,
.descr = "(undocumented)"
},
{
.name = "BUFSIZE_IN",
.tag = AUD_OPT_INT,
.valp = &glob_conf.bufsize_in,
.valp = &conf.bufsize_in,
.descr = "(undocumented)"
},
{ /* End of list */ }

audio/esdaudio.c
View File

@@ -0,0 +1,557 @@
/*
* QEMU ESD audio driver
*
* Copyright (c) 2006 Frederick Reeve (brushed up by malc)
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <esd.h>
#include "qemu-common.h"
#include "audio.h"
#define AUDIO_CAP "esd"
#include "audio_int.h"
#include "audio_pt_int.h"
typedef struct {
HWVoiceOut hw;
int done;
int live;
int decr;
int rpos;
void *pcm_buf;
int fd;
struct audio_pt pt;
} ESDVoiceOut;
typedef struct {
HWVoiceIn hw;
int done;
int dead;
int incr;
int wpos;
void *pcm_buf;
int fd;
struct audio_pt pt;
} ESDVoiceIn;
static struct {
int samples;
int divisor;
char *dac_host;
char *adc_host;
} conf = {
.samples = 1024,
.divisor = 2,
};
static void GCC_FMT_ATTR (2, 3) qesd_logerr (int err, const char *fmt, ...)
{
va_list ap;
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
AUD_log (AUDIO_CAP, "Reason: %s\n", strerror (err));
}
/* playback */
static void *qesd_thread_out (void *arg)
{
ESDVoiceOut *esd = arg;
HWVoiceOut *hw = &esd->hw;
int threshold;
threshold = conf.divisor ? hw->samples / conf.divisor : 0;
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
for (;;) {
int decr, to_mix, rpos;
for (;;) {
if (esd->done) {
goto exit;
}
if (esd->live > threshold) {
break;
}
if (audio_pt_wait (&esd->pt, AUDIO_FUNC)) {
goto exit;
}
}
decr = to_mix = esd->live;
rpos = hw->rpos;
if (audio_pt_unlock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
while (to_mix) {
ssize_t written;
int chunk = audio_MIN (to_mix, hw->samples - rpos);
struct st_sample *src = hw->mix_buf + rpos;
hw->clip (esd->pcm_buf, src, chunk);
again:
written = write (esd->fd, esd->pcm_buf, chunk << hw->info.shift);
if (written == -1) {
if (errno == EINTR || errno == EAGAIN) {
goto again;
}
qesd_logerr (errno, "write failed\n");
return NULL;
}
if (written != chunk << hw->info.shift) {
int wsamples = written >> hw->info.shift;
int wbytes = wsamples << hw->info.shift;
if (wbytes != written) {
dolog ("warning: Misaligned write %d (requested %zd), "
"alignment %d\n",
wbytes, written, hw->info.align + 1);
}
to_mix -= wsamples;
rpos = (rpos + wsamples) % hw->samples;
break;
}
rpos = (rpos + chunk) % hw->samples;
to_mix -= chunk;
}
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
esd->rpos = rpos;
esd->live -= decr;
esd->decr += decr;
}
exit:
audio_pt_unlock (&esd->pt, AUDIO_FUNC);
return NULL;
}
static int qesd_run_out (HWVoiceOut *hw, int live)
{
int decr;
ESDVoiceOut *esd = (ESDVoiceOut *) hw;
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return 0;
}
decr = audio_MIN (live, esd->decr);
esd->decr -= decr;
esd->live = live - decr;
hw->rpos = esd->rpos;
if (esd->live > 0) {
audio_pt_unlock_and_signal (&esd->pt, AUDIO_FUNC);
}
else {
audio_pt_unlock (&esd->pt, AUDIO_FUNC);
}
return decr;
}
static int qesd_write (SWVoiceOut *sw, void *buf, int len)
{
return audio_pcm_sw_write (sw, buf, len);
}
static int qesd_init_out (HWVoiceOut *hw, struct audsettings *as)
{
ESDVoiceOut *esd = (ESDVoiceOut *) hw;
struct audsettings obt_as = *as;
int esdfmt = ESD_STREAM | ESD_PLAY;
esdfmt |= (as->nchannels == 2) ? ESD_STEREO : ESD_MONO;
switch (as->fmt) {
case AUD_FMT_S8:
case AUD_FMT_U8:
esdfmt |= ESD_BITS8;
obt_as.fmt = AUD_FMT_U8;
break;
case AUD_FMT_S32:
case AUD_FMT_U32:
dolog ("Will use 16 instead of 32 bit samples\n");
/* fall through */
case AUD_FMT_S16:
case AUD_FMT_U16:
deffmt:
esdfmt |= ESD_BITS16;
obt_as.fmt = AUD_FMT_S16;
break;
default:
dolog ("Internal logic error: Bad audio format %d\n", as->fmt);
goto deffmt;
}
obt_as.endianness = AUDIO_HOST_ENDIANNESS;
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = conf.samples;
esd->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
if (!esd->pcm_buf) {
dolog ("Could not allocate buffer (%d bytes)\n",
hw->samples << hw->info.shift);
return -1;
}
esd->fd = esd_play_stream (esdfmt, as->freq, conf.dac_host, NULL);
if (esd->fd < 0) {
qesd_logerr (errno, "esd_play_stream failed\n");
goto fail1;
}
if (audio_pt_init (&esd->pt, qesd_thread_out, esd, AUDIO_CAP, AUDIO_FUNC)) {
goto fail2;
}
return 0;
fail2:
if (close (esd->fd)) {
qesd_logerr (errno, "%s: close on esd socket(%d) failed\n",
AUDIO_FUNC, esd->fd);
}
esd->fd = -1;
fail1:
g_free (esd->pcm_buf);
esd->pcm_buf = NULL;
return -1;
}
static void qesd_fini_out (HWVoiceOut *hw)
{
void *ret;
ESDVoiceOut *esd = (ESDVoiceOut *) hw;
audio_pt_lock (&esd->pt, AUDIO_FUNC);
esd->done = 1;
audio_pt_unlock_and_signal (&esd->pt, AUDIO_FUNC);
audio_pt_join (&esd->pt, &ret, AUDIO_FUNC);
if (esd->fd >= 0) {
if (close (esd->fd)) {
qesd_logerr (errno, "failed to close esd socket\n");
}
esd->fd = -1;
}
audio_pt_fini (&esd->pt, AUDIO_FUNC);
g_free (esd->pcm_buf);
esd->pcm_buf = NULL;
}
static int qesd_ctl_out (HWVoiceOut *hw, int cmd, ...)
{
(void) hw;
(void) cmd;
return 0;
}
/* capture */
static void *qesd_thread_in (void *arg)
{
ESDVoiceIn *esd = arg;
HWVoiceIn *hw = &esd->hw;
int threshold;
threshold = conf.divisor ? hw->samples / conf.divisor : 0;
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
for (;;) {
int incr, to_grab, wpos;
for (;;) {
if (esd->done) {
goto exit;
}
if (esd->dead > threshold) {
break;
}
if (audio_pt_wait (&esd->pt, AUDIO_FUNC)) {
goto exit;
}
}
incr = to_grab = esd->dead;
wpos = hw->wpos;
if (audio_pt_unlock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
while (to_grab) {
ssize_t nread;
int chunk = audio_MIN (to_grab, hw->samples - wpos);
void *buf = advance (esd->pcm_buf, wpos);
again:
nread = read (esd->fd, buf, chunk << hw->info.shift);
if (nread == -1) {
if (errno == EINTR || errno == EAGAIN) {
goto again;
}
qesd_logerr (errno, "read failed\n");
return NULL;
}
if (nread != chunk << hw->info.shift) {
int rsamples = nread >> hw->info.shift;
int rbytes = rsamples << hw->info.shift;
if (rbytes != nread) {
dolog ("warning: Misaligned write %d (requested %zd), "
"alignment %d\n",
rbytes, nread, hw->info.align + 1);
}
to_grab -= rsamples;
wpos = (wpos + rsamples) % hw->samples;
break;
}
hw->conv (hw->conv_buf + wpos, buf, nread >> hw->info.shift);
wpos = (wpos + chunk) % hw->samples;
to_grab -= chunk;
}
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
esd->wpos = wpos;
esd->dead -= incr;
esd->incr += incr;
}
exit:
audio_pt_unlock (&esd->pt, AUDIO_FUNC);
return NULL;
}
static int qesd_run_in (HWVoiceIn *hw)
{
int live, incr, dead;
ESDVoiceIn *esd = (ESDVoiceIn *) hw;
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return 0;
}
live = audio_pcm_hw_get_live_in (hw);
dead = hw->samples - live;
incr = audio_MIN (dead, esd->incr);
esd->incr -= incr;
esd->dead = dead - incr;
hw->wpos = esd->wpos;
if (esd->dead > 0) {
audio_pt_unlock_and_signal (&esd->pt, AUDIO_FUNC);
}
else {
audio_pt_unlock (&esd->pt, AUDIO_FUNC);
}
return incr;
}
static int qesd_read (SWVoiceIn *sw, void *buf, int len)
{
return audio_pcm_sw_read (sw, buf, len);
}
static int qesd_init_in (HWVoiceIn *hw, struct audsettings *as)
{
ESDVoiceIn *esd = (ESDVoiceIn *) hw;
struct audsettings obt_as = *as;
int esdfmt = ESD_STREAM | ESD_RECORD;
esdfmt |= (as->nchannels == 2) ? ESD_STEREO : ESD_MONO;
switch (as->fmt) {
case AUD_FMT_S8:
case AUD_FMT_U8:
esdfmt |= ESD_BITS8;
obt_as.fmt = AUD_FMT_U8;
break;
case AUD_FMT_S16:
case AUD_FMT_U16:
esdfmt |= ESD_BITS16;
obt_as.fmt = AUD_FMT_S16;
break;
case AUD_FMT_S32:
case AUD_FMT_U32:
dolog ("Will use 16 instead of 32 bit samples\n");
esdfmt |= ESD_BITS16;
obt_as.fmt = AUD_FMT_S16;
break;
}
obt_as.endianness = AUDIO_HOST_ENDIANNESS;
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = conf.samples;
esd->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
if (!esd->pcm_buf) {
dolog ("Could not allocate buffer (%d bytes)\n",
hw->samples << hw->info.shift);
return -1;
}
esd->fd = esd_record_stream (esdfmt, as->freq, conf.adc_host, NULL);
if (esd->fd < 0) {
qesd_logerr (errno, "esd_record_stream failed\n");
goto fail1;
}
if (audio_pt_init (&esd->pt, qesd_thread_in, esd, AUDIO_CAP, AUDIO_FUNC)) {
goto fail2;
}
return 0;
fail2:
if (close (esd->fd)) {
qesd_logerr (errno, "%s: close on esd socket(%d) failed\n",
AUDIO_FUNC, esd->fd);
}
esd->fd = -1;
fail1:
g_free (esd->pcm_buf);
esd->pcm_buf = NULL;
return -1;
}
static void qesd_fini_in (HWVoiceIn *hw)
{
void *ret;
ESDVoiceIn *esd = (ESDVoiceIn *) hw;
audio_pt_lock (&esd->pt, AUDIO_FUNC);
esd->done = 1;
audio_pt_unlock_and_signal (&esd->pt, AUDIO_FUNC);
audio_pt_join (&esd->pt, &ret, AUDIO_FUNC);
if (esd->fd >= 0) {
if (close (esd->fd)) {
qesd_logerr (errno, "failed to close esd socket\n");
}
esd->fd = -1;
}
audio_pt_fini (&esd->pt, AUDIO_FUNC);
g_free (esd->pcm_buf);
esd->pcm_buf = NULL;
}
static int qesd_ctl_in (HWVoiceIn *hw, int cmd, ...)
{
(void) hw;
(void) cmd;
return 0;
}
/* common */
static void *qesd_audio_init (void)
{
return &conf;
}
static void qesd_audio_fini (void *opaque)
{
(void) opaque;
ldebug ("esd_fini");
}
struct audio_option qesd_options[] = {
{
.name = "SAMPLES",
.tag = AUD_OPT_INT,
.valp = &conf.samples,
.descr = "buffer size in samples"
},
{
.name = "DIVISOR",
.tag = AUD_OPT_INT,
.valp = &conf.divisor,
.descr = "threshold divisor"
},
{
.name = "DAC_HOST",
.tag = AUD_OPT_STR,
.valp = &conf.dac_host,
.descr = "playback host"
},
{
.name = "ADC_HOST",
.tag = AUD_OPT_STR,
.valp = &conf.adc_host,
.descr = "capture host"
},
{ /* End of list */ }
};
static struct audio_pcm_ops qesd_pcm_ops = {
.init_out = qesd_init_out,
.fini_out = qesd_fini_out,
.run_out = qesd_run_out,
.write = qesd_write,
.ctl_out = qesd_ctl_out,
.init_in = qesd_init_in,
.fini_in = qesd_fini_in,
.run_in = qesd_run_in,
.read = qesd_read,
.ctl_in = qesd_ctl_in,
};
struct audio_driver esd_audio_driver = {
.name = "esd",
.descr = "http://en.wikipedia.org/wiki/Esound",
.options = qesd_options,
.init = qesd_audio_init,
.fini = qesd_audio_fini,
.pcm_ops = &qesd_pcm_ops,
.can_be_default = 0,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (ESDVoiceOut),
.voice_size_in = sizeof (ESDVoiceIn)
};

audio/fmodaudio.c
View File

@@ -0,0 +1,685 @@
/*
* QEMU FMOD audio driver
*
* Copyright (c) 2004-2005 Vassili Karpov (malc)
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <fmod.h>
#include <fmod_errors.h>
#include "qemu-common.h"
#include "audio.h"
#define AUDIO_CAP "fmod"
#include "audio_int.h"
typedef struct FMODVoiceOut {
HWVoiceOut hw;
unsigned int old_pos;
FSOUND_SAMPLE *fmod_sample;
int channel;
} FMODVoiceOut;
typedef struct FMODVoiceIn {
HWVoiceIn hw;
FSOUND_SAMPLE *fmod_sample;
} FMODVoiceIn;
static struct {
const char *drvname;
int nb_samples;
int freq;
int nb_channels;
int bufsize;
int broken_adc;
} conf = {
.nb_samples = 2048 * 2,
.freq = 44100,
.nb_channels = 2,
};
static void GCC_FMT_ATTR (1, 2) fmod_logerr (const char *fmt, ...)
{
va_list ap;
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
AUD_log (AUDIO_CAP, "Reason: %s\n",
FMOD_ErrorString (FSOUND_GetError ()));
}
static void GCC_FMT_ATTR (2, 3) fmod_logerr2 (
const char *typ,
const char *fmt,
...
)
{
va_list ap;
AUD_log (AUDIO_CAP, "Could not initialize %s\n", typ);
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
AUD_log (AUDIO_CAP, "Reason: %s\n",
FMOD_ErrorString (FSOUND_GetError ()));
}
static int fmod_write (SWVoiceOut *sw, void *buf, int len)
{
return audio_pcm_sw_write (sw, buf, len);
}
static void fmod_clear_sample (FMODVoiceOut *fmd)
{
HWVoiceOut *hw = &fmd->hw;
int status;
void *p1 = 0, *p2 = 0;
unsigned int len1 = 0, len2 = 0;
status = FSOUND_Sample_Lock (
fmd->fmod_sample,
0,
hw->samples << hw->info.shift,
&p1,
&p2,
&len1,
&len2
);
if (!status) {
fmod_logerr ("Failed to lock sample\n");
return;
}
if ((len1 & hw->info.align) || (len2 & hw->info.align)) {
dolog ("Lock returned misaligned length %d, %d, alignment %d\n",
len1, len2, hw->info.align + 1);
goto fail;
}
if ((len1 + len2) - (hw->samples << hw->info.shift)) {
dolog ("Lock returned incomplete length %d, %d\n",
len1 + len2, hw->samples << hw->info.shift);
goto fail;
}
audio_pcm_info_clear_buf (&hw->info, p1, hw->samples);
fail:
status = FSOUND_Sample_Unlock (fmd->fmod_sample, p1, p2, len1, len2);
if (!status) {
fmod_logerr ("Failed to unlock sample\n");
}
}
static void fmod_write_sample (HWVoiceOut *hw, uint8_t *dst, int dst_len)
{
int src_len1 = dst_len;
int src_len2 = 0;
int pos = hw->rpos + dst_len;
struct st_sample *src1 = hw->mix_buf + hw->rpos;
struct st_sample *src2 = NULL;
if (pos > hw->samples) {
src_len1 = hw->samples - hw->rpos;
src2 = hw->mix_buf;
src_len2 = dst_len - src_len1;
pos = src_len2;
}
if (src_len1) {
hw->clip (dst, src1, src_len1);
}
if (src_len2) {
dst = advance (dst, src_len1 << hw->info.shift);
hw->clip (dst, src2, src_len2);
}
hw->rpos = pos % hw->samples;
}
static int fmod_unlock_sample (FSOUND_SAMPLE *sample, void *p1, void *p2,
unsigned int blen1, unsigned int blen2)
{
int status = FSOUND_Sample_Unlock (sample, p1, p2, blen1, blen2);
if (!status) {
fmod_logerr ("Failed to unlock sample\n");
return -1;
}
return 0;
}
static int fmod_lock_sample (
FSOUND_SAMPLE *sample,
struct audio_pcm_info *info,
int pos,
int len,
void **p1,
void **p2,
unsigned int *blen1,
unsigned int *blen2
)
{
int status;
status = FSOUND_Sample_Lock (
sample,
pos << info->shift,
len << info->shift,
p1,
p2,
blen1,
blen2
);
if (!status) {
fmod_logerr ("Failed to lock sample\n");
return -1;
}
if ((*blen1 & info->align) || (*blen2 & info->align)) {
dolog ("Lock returned misaligned length %d, %d, alignment %d\n",
*blen1, *blen2, info->align + 1);
fmod_unlock_sample (sample, *p1, *p2, *blen1, *blen2);
*p1 = NULL - 1;
*p2 = NULL - 1;
*blen1 = ~0U;
*blen2 = ~0U;
return -1;
}
if (!*p1 && *blen1) {
dolog ("warning: !p1 && blen1=%d\n", *blen1);
*blen1 = 0;
}
if (!p2 && *blen2) {
dolog ("warning: !p2 && blen2=%d\n", *blen2);
*blen2 = 0;
}
return 0;
}
static int fmod_run_out (HWVoiceOut *hw, int live)
{
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
int decr;
void *p1 = 0, *p2 = 0;
unsigned int blen1 = 0, blen2 = 0;
unsigned int len1 = 0, len2 = 0;
if (!hw->pending_disable) {
return 0;
}
decr = live;
if (fmd->channel >= 0) {
int len = decr;
int old_pos = fmd->old_pos;
int ppos = FSOUND_GetCurrentPosition (fmd->channel);
if (ppos == old_pos || !ppos) {
return 0;
}
if ((old_pos < ppos) && ((old_pos + len) > ppos)) {
len = ppos - old_pos;
}
else {
if ((old_pos > ppos) && ((old_pos + len) > (ppos + hw->samples))) {
len = hw->samples - old_pos + ppos;
}
}
decr = len;
if (audio_bug (AUDIO_FUNC, decr < 0)) {
dolog ("decr=%d live=%d ppos=%d old_pos=%d len=%d\n",
decr, live, ppos, old_pos, len);
return 0;
}
}
if (!decr) {
return 0;
}
if (fmod_lock_sample (fmd->fmod_sample, &fmd->hw.info,
fmd->old_pos, decr,
&p1, &p2,
&blen1, &blen2)) {
return 0;
}
len1 = blen1 >> hw->info.shift;
len2 = blen2 >> hw->info.shift;
ldebug ("%p %p %d %d %d %d\n", p1, p2, len1, len2, blen1, blen2);
decr = len1 + len2;
if (p1 && len1) {
fmod_write_sample (hw, p1, len1);
}
if (p2 && len2) {
fmod_write_sample (hw, p2, len2);
}
fmod_unlock_sample (fmd->fmod_sample, p1, p2, blen1, blen2);
fmd->old_pos = (fmd->old_pos + decr) % hw->samples;
return decr;
}
static int aud_to_fmodfmt (audfmt_e fmt, int stereo)
{
int mode = FSOUND_LOOP_NORMAL;
switch (fmt) {
case AUD_FMT_S8:
mode |= FSOUND_SIGNED | FSOUND_8BITS;
break;
case AUD_FMT_U8:
mode |= FSOUND_UNSIGNED | FSOUND_8BITS;
break;
case AUD_FMT_S16:
mode |= FSOUND_SIGNED | FSOUND_16BITS;
break;
case AUD_FMT_U16:
mode |= FSOUND_UNSIGNED | FSOUND_16BITS;
break;
default:
dolog ("Internal logic error: Bad audio format %d\n", fmt);
#ifdef DEBUG_FMOD
abort ();
#endif
mode |= FSOUND_8BITS;
}
mode |= stereo ? FSOUND_STEREO : FSOUND_MONO;
return mode;
}
static void fmod_fini_out (HWVoiceOut *hw)
{
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
if (fmd->fmod_sample) {
FSOUND_Sample_Free (fmd->fmod_sample);
fmd->fmod_sample = 0;
if (fmd->channel >= 0) {
FSOUND_StopSound (fmd->channel);
}
}
}
static int fmod_init_out (HWVoiceOut *hw, struct audsettings *as)
{
int mode, channel;
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
struct audsettings obt_as = *as;
mode = aud_to_fmodfmt (as->fmt, as->nchannels == 2 ? 1 : 0);
fmd->fmod_sample = FSOUND_Sample_Alloc (
FSOUND_FREE, /* index */
conf.nb_samples, /* length */
mode, /* mode */
as->freq, /* freq */
255, /* volume */
128, /* pan */
255 /* priority */
);
if (!fmd->fmod_sample) {
fmod_logerr2 ("DAC", "Failed to allocate FMOD sample\n");
return -1;
}
channel = FSOUND_PlaySoundEx (FSOUND_FREE, fmd->fmod_sample, 0, 1);
if (channel < 0) {
fmod_logerr2 ("DAC", "Failed to start playing sound\n");
FSOUND_Sample_Free (fmd->fmod_sample);
return -1;
}
fmd->channel = channel;
/* FMOD always operates on little endian frames? */
obt_as.endianness = 0;
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = conf.nb_samples;
return 0;
}
static int fmod_ctl_out (HWVoiceOut *hw, int cmd, ...)
{
int status;
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
switch (cmd) {
case VOICE_ENABLE:
fmod_clear_sample (fmd);
status = FSOUND_SetPaused (fmd->channel, 0);
if (!status) {
fmod_logerr ("Failed to resume channel %d\n", fmd->channel);
}
break;
case VOICE_DISABLE:
status = FSOUND_SetPaused (fmd->channel, 1);
if (!status) {
fmod_logerr ("Failed to pause channel %d\n", fmd->channel);
}
break;
}
return 0;
}
static int fmod_init_in (HWVoiceIn *hw, struct audsettings *as)
{
int mode;
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
struct audsettings obt_as = *as;
if (conf.broken_adc) {
return -1;
}
mode = aud_to_fmodfmt (as->fmt, as->nchannels == 2 ? 1 : 0);
fmd->fmod_sample = FSOUND_Sample_Alloc (
FSOUND_FREE, /* index */
conf.nb_samples, /* length */
mode, /* mode */
as->freq, /* freq */
255, /* volume */
128, /* pan */
255 /* priority */
);
if (!fmd->fmod_sample) {
fmod_logerr2 ("ADC", "Failed to allocate FMOD sample\n");
return -1;
}
/* FMOD always operates on little endian frames? */
obt_as.endianness = 0;
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = conf.nb_samples;
return 0;
}
static void fmod_fini_in (HWVoiceIn *hw)
{
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
if (fmd->fmod_sample) {
FSOUND_Record_Stop ();
FSOUND_Sample_Free (fmd->fmod_sample);
fmd->fmod_sample = 0;
}
}
static int fmod_run_in (HWVoiceIn *hw)
{
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
int hwshift = hw->info.shift;
int live, dead, new_pos, len;
unsigned int blen1 = 0, blen2 = 0;
unsigned int len1, len2;
unsigned int decr;
void *p1, *p2;
live = audio_pcm_hw_get_live_in (hw);
dead = hw->samples - live;
if (!dead) {
return 0;
}
new_pos = FSOUND_Record_GetPosition ();
if (new_pos < 0) {
fmod_logerr ("Could not get recording position\n");
return 0;
}
len = audio_ring_dist (new_pos, hw->wpos, hw->samples);
if (!len) {
return 0;
}
len = audio_MIN (len, dead);
if (fmod_lock_sample (fmd->fmod_sample, &fmd->hw.info,
hw->wpos, len,
&p1, &p2,
&blen1, &blen2)) {
return 0;
}
len1 = blen1 >> hwshift;
len2 = blen2 >> hwshift;
decr = len1 + len2;
if (p1 && blen1) {
hw->conv (hw->conv_buf + hw->wpos, p1, len1);
}
if (p2 && len2) {
hw->conv (hw->conv_buf, p2, len2);
}
fmod_unlock_sample (fmd->fmod_sample, p1, p2, blen1, blen2);
hw->wpos = (hw->wpos + decr) % hw->samples;
return decr;
}
static struct {
const char *name;
int type;
} drvtab[] = {
{ .name = "none", .type = FSOUND_OUTPUT_NOSOUND },
#ifdef _WIN32
{ .name = "winmm", .type = FSOUND_OUTPUT_WINMM },
{ .name = "dsound", .type = FSOUND_OUTPUT_DSOUND },
{ .name = "a3d", .type = FSOUND_OUTPUT_A3D },
{ .name = "asio", .type = FSOUND_OUTPUT_ASIO },
#endif
#ifdef __linux__
{ .name = "oss", .type = FSOUND_OUTPUT_OSS },
{ .name = "alsa", .type = FSOUND_OUTPUT_ALSA },
{ .name = "esd", .type = FSOUND_OUTPUT_ESD },
#endif
#ifdef __APPLE__
{ .name = "mac", .type = FSOUND_OUTPUT_MAC },
#endif
#if 0
{ .name = "xbox", .type = FSOUND_OUTPUT_XBOX },
{ .name = "ps2", .type = FSOUND_OUTPUT_PS2 },
{ .name = "gcube", .type = FSOUND_OUTPUT_GC },
#endif
{ .name = "none-realtime", .type = FSOUND_OUTPUT_NOSOUND_NONREALTIME }
};
static void *fmod_audio_init (void)
{
size_t i;
double ver;
int status;
int output_type = -1;
const char *drv = conf.drvname;
ver = FSOUND_GetVersion ();
if (ver < FMOD_VERSION) {
dolog ("Wrong FMOD version %f, need at least %f\n", ver, FMOD_VERSION);
return NULL;
}
#ifdef __linux__
if (ver < 3.75) {
dolog ("FMOD before 3.75 has bug preventing ADC from working\n"
"ADC will be disabled.\n");
conf.broken_adc = 1;
}
#endif
if (drv) {
int found = 0;
for (i = 0; i < ARRAY_SIZE (drvtab); i++) {
if (!strcmp (drv, drvtab[i].name)) {
output_type = drvtab[i].type;
found = 1;
break;
}
}
if (!found) {
dolog ("Unknown FMOD driver `%s'\n", drv);
dolog ("Valid drivers:\n");
for (i = 0; i < ARRAY_SIZE (drvtab); i++) {
dolog (" %s\n", drvtab[i].name);
}
}
}
if (output_type != -1) {
status = FSOUND_SetOutput (output_type);
if (!status) {
fmod_logerr ("FSOUND_SetOutput(%d) failed\n", output_type);
return NULL;
}
}
if (conf.bufsize) {
status = FSOUND_SetBufferSize (conf.bufsize);
if (!status) {
fmod_logerr ("FSOUND_SetBufferSize (%d) failed\n", conf.bufsize);
}
}
status = FSOUND_Init (conf.freq, conf.nb_channels, 0);
if (!status) {
fmod_logerr ("FSOUND_Init failed\n");
return NULL;
}
return &conf;
}
static int fmod_read (SWVoiceIn *sw, void *buf, int size)
{
return audio_pcm_sw_read (sw, buf, size);
}
static int fmod_ctl_in (HWVoiceIn *hw, int cmd, ...)
{
int status;
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
switch (cmd) {
case VOICE_ENABLE:
status = FSOUND_Record_StartSample (fmd->fmod_sample, 1);
if (!status) {
fmod_logerr ("Failed to start recording\n");
}
break;
case VOICE_DISABLE:
status = FSOUND_Record_Stop ();
if (!status) {
fmod_logerr ("Failed to stop recording\n");
}
break;
}
return 0;
}
static void fmod_audio_fini (void *opaque)
{
(void) opaque;
FSOUND_Close ();
}
static struct audio_option fmod_options[] = {
{
.name = "DRV",
.tag = AUD_OPT_STR,
.valp = &conf.drvname,
.descr = "FMOD driver"
},
{
.name = "FREQ",
.tag = AUD_OPT_INT,
.valp = &conf.freq,
.descr = "Default frequency"
},
{
.name = "SAMPLES",
.tag = AUD_OPT_INT,
.valp = &conf.nb_samples,
.descr = "Buffer size in samples"
},
{
.name = "CHANNELS",
.tag = AUD_OPT_INT,
.valp = &conf.nb_channels,
.descr = "Number of default channels (1 - mono, 2 - stereo)"
},
{
.name = "BUFSIZE",
.tag = AUD_OPT_INT,
.valp = &conf.bufsize,
.descr = "(undocumented)"
},
{ /* End of list */ }
};
static struct audio_pcm_ops fmod_pcm_ops = {
.init_out = fmod_init_out,
.fini_out = fmod_fini_out,
.run_out = fmod_run_out,
.write = fmod_write,
.ctl_out = fmod_ctl_out,
.init_in = fmod_init_in,
.fini_in = fmod_fini_in,
.run_in = fmod_run_in,
.read = fmod_read,
.ctl_in = fmod_ctl_in
};
struct audio_driver fmod_audio_driver = {
.name = "fmod",
.descr = "FMOD 3.xx http://www.fmod.org",
.options = fmod_options,
.init = fmod_audio_init,
.fini = fmod_audio_fini,
.pcm_ops = &fmod_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (FMODVoiceOut),
.voice_size_in = sizeof (FMODVoiceIn)
};

audio/mixeng.c
View File

@@ -22,9 +22,7 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "qemu/bswap.h"
#include "audio.h"
#define AUDIO_CAP "mixeng"
@@ -271,7 +269,7 @@ f_sample *mixeng_clip[2][2][2][3] = {
* August 21, 1998
* Copyright 1998 Fabrice Bellard.
*
* [Rewrote completely the code of Lance Norskog And Sundry
* [Rewrote completly the code of Lance Norskog And Sundry
* Contributors with a more efficient algorithm.]
*
* This source code is freely redistributable and may be used for

audio/mixeng.h
View File

@@ -21,7 +21,6 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#ifndef QEMU_MIXENG_H
#define QEMU_MIXENG_H
@@ -49,4 +48,4 @@ void st_rate_stop (void *opaque);
void mixeng_clear (struct st_sample *buf, int len);
void mixeng_volume (struct st_sample *buf, int len, struct mixeng_volume *vol);
#endif /* QEMU_MIXENG_H */
#endif /* mixeng.h */

audio/noaudio.c
View File

@@ -21,9 +21,7 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "qemu/host-utils.h"
#include "audio.h"
#include "qemu/timer.h"
@@ -50,8 +48,8 @@ static int no_run_out (HWVoiceOut *hw, int live)
now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
ticks = now - no->old_ticks;
bytes = muldiv64(ticks, hw->info.bytes_per_second, NANOSECONDS_PER_SECOND);
bytes = audio_MIN(bytes, INT_MAX);
bytes = muldiv64 (ticks, hw->info.bytes_per_second, get_ticks_per_sec ());
bytes = audio_MIN (bytes, INT_MAX);
samples = bytes >> hw->info.shift;
no->old_ticks = now;
@@ -62,10 +60,10 @@ static int no_run_out (HWVoiceOut *hw, int live)
static int no_write (SWVoiceOut *sw, void *buf, int len)
{
return audio_pcm_sw_write(sw, buf, len);
return audio_pcm_sw_write (sw, buf, len);
}
static int no_init_out(HWVoiceOut *hw, struct audsettings *as, void *drv_opaque)
static int no_init_out (HWVoiceOut *hw, struct audsettings *as)
{
audio_pcm_init_info (&hw->info, as);
hw->samples = 1024;
@@ -84,7 +82,7 @@ static int no_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static int no_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int no_init_in (HWVoiceIn *hw, struct audsettings *as)
{
audio_pcm_init_info (&hw->info, as);
hw->samples = 1024;
@@ -107,7 +105,7 @@ static int no_run_in (HWVoiceIn *hw)
int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
int64_t ticks = now - no->old_ticks;
int64_t bytes =
muldiv64(ticks, hw->info.bytes_per_second, NANOSECONDS_PER_SECOND);
muldiv64 (ticks, hw->info.bytes_per_second, get_ticks_per_sec ());
no->old_ticks = now;
bytes = audio_MIN (bytes, INT_MAX);

audio/ossaudio.c
View File

@@ -21,14 +21,15 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>
#include "qemu-common.h"
#include "qemu/main-loop.h"
#include "qemu/host-utils.h"
#include "audio.h"
#include "trace.h"
#define AUDIO_CAP "oss"
#include "audio_int.h"
@@ -37,16 +38,6 @@
#define USE_DSP_POLICY
#endif
typedef struct OSSConf {
int try_mmap;
int nfrags;
int fragsize;
const char *devpath_out;
const char *devpath_in;
int exclusive;
int policy;
} OSSConf;
typedef struct OSSVoiceOut {
HWVoiceOut hw;
void *pcm_buf;
@@ -56,7 +47,6 @@ typedef struct OSSVoiceOut {
int fragsize;
int mmapped;
int pending;
OSSConf *conf;
} OSSVoiceOut;
typedef struct OSSVoiceIn {
@@ -65,9 +55,28 @@ typedef struct OSSVoiceIn {
int fd;
int nfrags;
int fragsize;
OSSConf *conf;
} OSSVoiceIn;
static struct {
int try_mmap;
int nfrags;
int fragsize;
const char *devpath_out;
const char *devpath_in;
int debug;
int exclusive;
int policy;
} conf = {
.try_mmap = 0,
.nfrags = 4,
.fragsize = 4096,
.devpath_out = "/dev/dsp",
.devpath_in = "/dev/dsp",
.debug = 0,
.exclusive = 0,
.policy = 5
};
struct oss_params {
int freq;
audfmt_e fmt;
@@ -129,18 +138,18 @@ static void oss_helper_poll_in (void *opaque)
audio_run ("oss_poll_in");
}
static void oss_poll_out (HWVoiceOut *hw)
static int oss_poll_out (HWVoiceOut *hw)
{
OSSVoiceOut *oss = (OSSVoiceOut *) hw;
qemu_set_fd_handler (oss->fd, NULL, oss_helper_poll_out, NULL);
return qemu_set_fd_handler (oss->fd, NULL, oss_helper_poll_out, NULL);
}
static void oss_poll_in (HWVoiceIn *hw)
static int oss_poll_in (HWVoiceIn *hw)
{
OSSVoiceIn *oss = (OSSVoiceIn *) hw;
qemu_set_fd_handler (oss->fd, oss_helper_poll_in, NULL, NULL);
return qemu_set_fd_handler (oss->fd, oss_helper_poll_in, NULL, NULL);
}
static int oss_write (SWVoiceOut *sw, void *buf, int len)
@@ -263,18 +272,18 @@ static int oss_get_version (int fd, int *version, const char *typ)
#endif
static int oss_open (int in, struct oss_params *req,
struct oss_params *obt, int *pfd, OSSConf* conf)
struct oss_params *obt, int *pfd)
{
int fd;
int oflags = conf->exclusive ? O_EXCL : 0;
int oflags = conf.exclusive ? O_EXCL : 0;
audio_buf_info abinfo;
int fmt, freq, nchannels;
int setfragment = 1;
const char *dspname = in ? conf->devpath_in : conf->devpath_out;
const char *dspname = in ? conf.devpath_in : conf.devpath_out;
const char *typ = in ? "ADC" : "DAC";
/* Kludge needed to have working mmap on Linux */
oflags |= conf->try_mmap ? O_RDWR : (in ? O_RDONLY : O_WRONLY);
oflags |= conf.try_mmap ? O_RDWR : (in ? O_RDONLY : O_WRONLY);
fd = open (dspname, oflags | O_NONBLOCK);
if (-1 == fd) {
@@ -308,18 +317,20 @@ static int oss_open (int in, struct oss_params *req,
}
#ifdef USE_DSP_POLICY
if (conf->policy >= 0) {
if (conf.policy >= 0) {
int version;
if (!oss_get_version (fd, &version, typ)) {
trace_oss_version(version);
if (conf.debug) {
dolog ("OSS version = %#x\n", version);
}
if (version >= 0x040000) {
int policy = conf->policy;
int policy = conf.policy;
if (ioctl (fd, SNDCTL_DSP_POLICY, &policy)) {
oss_logerr2 (errno, typ,
"Failed to set timing policy to %d\n",
conf->policy);
conf.policy);
goto err;
}
setfragment = 0;
@@ -447,12 +458,19 @@ static int oss_run_out (HWVoiceOut *hw, int live)
}
if (abinfo.bytes > bufsize) {
trace_oss_invalid_available_size(abinfo.bytes, bufsize);
if (conf.debug) {
dolog ("warning: Invalid available size, size=%d bufsize=%d\n"
"please report your OS/audio hw to av1474@comtv.ru\n",
abinfo.bytes, bufsize);
}
abinfo.bytes = bufsize;
}
if (abinfo.bytes < 0) {
trace_oss_invalid_available_size(abinfo.bytes, bufsize);
if (conf.debug) {
dolog ("warning: Invalid available size, size=%d bufsize=%d\n",
abinfo.bytes, bufsize);
}
return 0;
}
@@ -492,8 +510,7 @@ static void oss_fini_out (HWVoiceOut *hw)
}
}
static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int oss_init_out (HWVoiceOut *hw, struct audsettings *as)
{
OSSVoiceOut *oss = (OSSVoiceOut *) hw;
struct oss_params req, obt;
@@ -502,17 +519,16 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
int fd;
audfmt_e effective_fmt;
struct audsettings obt_as;
OSSConf *conf = drv_opaque;
oss->fd = -1;
req.fmt = aud_to_ossfmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.fragsize = conf->fragsize;
req.nfrags = conf->nfrags;
req.fragsize = conf.fragsize;
req.nfrags = conf.nfrags;
if (oss_open (0, &req, &obt, &fd, conf)) {
if (oss_open (0, &req, &obt, &fd)) {
return -1;
}
@@ -539,7 +555,7 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
hw->samples = (obt.nfrags * obt.fragsize) >> hw->info.shift;
oss->mmapped = 0;
if (conf->try_mmap) {
if (conf.try_mmap) {
oss->pcm_buf = mmap (
NULL,
hw->samples << hw->info.shift,
@@ -599,7 +615,6 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
}
oss->fd = fd;
oss->conf = conf;
return 0;
}
@@ -619,8 +634,7 @@ static int oss_ctl_out (HWVoiceOut *hw, int cmd, ...)
va_end (ap);
ldebug ("enabling voice\n");
if (poll_mode) {
oss_poll_out (hw);
if (poll_mode && oss_poll_out (hw)) {
poll_mode = 0;
}
hw->poll_mode = poll_mode;
@@ -662,7 +676,7 @@ static int oss_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static int oss_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int oss_init_in (HWVoiceIn *hw, struct audsettings *as)
{
OSSVoiceIn *oss = (OSSVoiceIn *) hw;
struct oss_params req, obt;
@@ -671,16 +685,15 @@ static int oss_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
int fd;
audfmt_e effective_fmt;
struct audsettings obt_as;
OSSConf *conf = drv_opaque;
oss->fd = -1;
req.fmt = aud_to_ossfmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.fragsize = conf->fragsize;
req.nfrags = conf->nfrags;
if (oss_open (1, &req, &obt, &fd, conf)) {
req.fragsize = conf.fragsize;
req.nfrags = conf.nfrags;
if (oss_open (1, &req, &obt, &fd)) {
return -1;
}
@@ -714,7 +727,6 @@ static int oss_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
}
oss->fd = fd;
oss->conf = conf;
return 0;
}
@@ -724,8 +736,10 @@ static void oss_fini_in (HWVoiceIn *hw)
oss_anal_close (&oss->fd);
g_free(oss->pcm_buf);
oss->pcm_buf = NULL;
if (oss->pcm_buf) {
g_free (oss->pcm_buf);
oss->pcm_buf = NULL;
}
}
static int oss_run_in (HWVoiceIn *hw)
@@ -816,8 +830,7 @@ static int oss_ctl_in (HWVoiceIn *hw, int cmd, ...)
poll_mode = va_arg (ap, int);
va_end (ap);
if (poll_mode) {
oss_poll_in (hw);
if (poll_mode && oss_poll_in (hw)) {
poll_mode = 0;
}
hw->poll_mode = poll_mode;
@@ -834,79 +847,71 @@ static int oss_ctl_in (HWVoiceIn *hw, int cmd, ...)
return 0;
}
static OSSConf glob_conf = {
.try_mmap = 0,
.nfrags = 4,
.fragsize = 4096,
.devpath_out = "/dev/dsp",
.devpath_in = "/dev/dsp",
.exclusive = 0,
.policy = 5
};
static void *oss_audio_init (void)
{
OSSConf *conf = g_malloc(sizeof(OSSConf));
*conf = glob_conf;
if (access(conf->devpath_in, R_OK | W_OK) < 0 ||
access(conf->devpath_out, R_OK | W_OK) < 0) {
g_free(conf);
if (access(conf.devpath_in, R_OK | W_OK) < 0 ||
access(conf.devpath_out, R_OK | W_OK) < 0) {
return NULL;
}
return conf;
return &conf;
}
static void oss_audio_fini (void *opaque)
{
g_free(opaque);
(void) opaque;
}
static struct audio_option oss_options[] = {
{
.name = "FRAGSIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.fragsize,
.valp = &conf.fragsize,
.descr = "Fragment size in bytes"
},
{
.name = "NFRAGS",
.tag = AUD_OPT_INT,
.valp = &glob_conf.nfrags,
.valp = &conf.nfrags,
.descr = "Number of fragments"
},
{
.name = "MMAP",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.try_mmap,
.valp = &conf.try_mmap,
.descr = "Try using memory mapped access"
},
{
.name = "DAC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.devpath_out,
.valp = &conf.devpath_out,
.descr = "Path to DAC device"
},
{
.name = "ADC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.devpath_in,
.valp = &conf.devpath_in,
.descr = "Path to ADC device"
},
{
.name = "EXCLUSIVE",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.exclusive,
.descr = "Open device in exclusive mode (vmix won't work)"
.valp = &conf.exclusive,
.descr = "Open device in exclusive mode (vmix wont work)"
},
#ifdef USE_DSP_POLICY
{
.name = "POLICY",
.tag = AUD_OPT_INT,
.valp = &glob_conf.policy,
.valp = &conf.policy,
.descr = "Set the timing policy of the device, -1 to use fragment mode",
},
#endif
{
.name = "DEBUG",
.tag = AUD_OPT_BOOL,
.valp = &conf.debug,
.descr = "Turn on some debugging messages"
},
{ /* End of list */ }
};

audio/paaudio.c
View File

@@ -1,5 +1,4 @@
/* public domain */
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "audio.h"
@@ -9,19 +8,6 @@
#include "audio_int.h"
#include "audio_pt_int.h"
typedef struct {
int samples;
char *server;
char *sink;
char *source;
} PAConf;
typedef struct {
PAConf conf;
pa_threaded_mainloop *mainloop;
pa_context *context;
} paaudio;
typedef struct {
HWVoiceOut hw;
int done;
@@ -31,7 +17,6 @@ typedef struct {
pa_stream *stream;
void *pcm_buf;
struct audio_pt pt;
paaudio *g;
} PAVoiceOut;
typedef struct {
@@ -45,10 +30,20 @@ typedef struct {
struct audio_pt pt;
const void *read_data;
size_t read_index, read_length;
paaudio *g;
} PAVoiceIn;
static void qpa_audio_fini(void *opaque);
typedef struct {
int samples;
char *server;
char *sink;
char *source;
pa_threaded_mainloop *mainloop;
pa_context *context;
} paaudio;
static paaudio glob_paaudio = {
.samples = 4096,
};
static void GCC_FMT_ATTR (2, 3) qpa_logerr (int err, const char *fmt, ...)
{
@@ -111,7 +106,7 @@ static inline int PA_STREAM_IS_GOOD(pa_stream_state_t x)
static int qpa_simple_read (PAVoiceIn *p, void *data, size_t length, int *rerror)
{
paaudio *g = p->g;
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_lock (g->mainloop);
@@ -165,7 +160,7 @@ unlock_and_fail:
static int qpa_simple_write (PAVoiceOut *p, const void *data, size_t length, int *rerror)
{
paaudio *g = p->g;
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_lock (g->mainloop);
@@ -227,7 +222,7 @@ static void *qpa_thread_out (void *arg)
}
}
decr = to_mix = audio_MIN (pa->live, pa->g->conf.samples >> 2);
decr = to_mix = audio_MIN (pa->live, glob_paaudio.samples >> 2);
rpos = pa->rpos;
if (audio_pt_unlock (&pa->pt, AUDIO_FUNC)) {
@@ -319,7 +314,7 @@ static void *qpa_thread_in (void *arg)
}
}
incr = to_grab = audio_MIN (pa->dead, pa->g->conf.samples >> 2);
incr = to_grab = audio_MIN (pa->dead, glob_paaudio.samples >> 2);
wpos = pa->wpos;
if (audio_pt_unlock (&pa->pt, AUDIO_FUNC)) {
@@ -435,7 +430,7 @@ static audfmt_e pa_to_audfmt (pa_sample_format_t fmt, int *endianness)
static void context_state_cb (pa_context *c, void *userdata)
{
paaudio *g = userdata;
paaudio *g = &glob_paaudio;
switch (pa_context_get_state(c)) {
case PA_CONTEXT_READY:
@@ -454,7 +449,7 @@ static void context_state_cb (pa_context *c, void *userdata)
static void stream_state_cb (pa_stream *s, void * userdata)
{
paaudio *g = userdata;
paaudio *g = &glob_paaudio;
switch (pa_stream_get_state (s)) {
@@ -472,21 +467,23 @@ static void stream_state_cb (pa_stream *s, void * userdata)
static void stream_request_cb (pa_stream *s, size_t length, void *userdata)
{
paaudio *g = userdata;
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_signal (g->mainloop, 0);
}
static pa_stream *qpa_simple_new (
paaudio *g,
const char *server,
const char *name,
pa_stream_direction_t dir,
const char *dev,
const char *stream_name,
const pa_sample_spec *ss,
const pa_channel_map *map,
const pa_buffer_attr *attr,
int *rerror)
{
paaudio *g = &glob_paaudio;
int r;
pa_stream *stream;
@@ -537,15 +534,13 @@ fail:
return NULL;
}
static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int qpa_init_out (HWVoiceOut *hw, struct audsettings *as)
{
int error;
pa_sample_spec ss;
pa_buffer_attr ba;
static pa_sample_spec ss;
static pa_buffer_attr ba;
struct audsettings obt_as = *as;
PAVoiceOut *pa = (PAVoiceOut *) hw;
paaudio *g = pa->g = drv_opaque;
ss.format = audfmt_to_pa (as->fmt, as->endianness);
ss.channels = as->nchannels;
@@ -563,10 +558,11 @@ static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
obt_as.fmt = pa_to_audfmt (ss.format, &obt_as.endianness);
pa->stream = qpa_simple_new (
g,
glob_paaudio.server,
"qemu",
PA_STREAM_PLAYBACK,
g->conf.sink,
glob_paaudio.sink,
"pcm.playback",
&ss,
NULL, /* channel map */
&ba, /* buffering attributes */
@@ -578,7 +574,7 @@ static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
}
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = g->conf.samples;
hw->samples = glob_paaudio.samples;
pa->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
pa->rpos = hw->rpos;
if (!pa->pcm_buf) {
@@ -605,13 +601,12 @@ static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
return -1;
}
static int qpa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int qpa_init_in (HWVoiceIn *hw, struct audsettings *as)
{
int error;
pa_sample_spec ss;
static pa_sample_spec ss;
struct audsettings obt_as = *as;
PAVoiceIn *pa = (PAVoiceIn *) hw;
paaudio *g = pa->g = drv_opaque;
ss.format = audfmt_to_pa (as->fmt, as->endianness);
ss.channels = as->nchannels;
@@ -620,10 +615,11 @@ static int qpa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
obt_as.fmt = pa_to_audfmt (ss.format, &obt_as.endianness);
pa->stream = qpa_simple_new (
g,
glob_paaudio.server,
"qemu",
PA_STREAM_RECORD,
g->conf.source,
glob_paaudio.source,
"pcm.capture",
&ss,
NULL, /* channel map */
NULL, /* buffering attributes */
@@ -635,7 +631,7 @@ static int qpa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
}
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = g->conf.samples;
hw->samples = glob_paaudio.samples;
pa->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
pa->wpos = hw->wpos;
if (!pa->pcm_buf) {
@@ -707,7 +703,7 @@ static int qpa_ctl_out (HWVoiceOut *hw, int cmd, ...)
PAVoiceOut *pa = (PAVoiceOut *) hw;
pa_operation *op;
pa_cvolume v;
paaudio *g = pa->g;
paaudio *g = &glob_paaudio;
#ifdef PA_CHECK_VERSION /* macro is present in 0.9.16+ */
pa_cvolume_init (&v); /* function is present in 0.9.13+ */
@@ -759,7 +755,7 @@ static int qpa_ctl_in (HWVoiceIn *hw, int cmd, ...)
PAVoiceIn *pa = (PAVoiceIn *) hw;
pa_operation *op;
pa_cvolume v;
paaudio *g = pa->g;
paaudio *g = &glob_paaudio;
#ifdef PA_CHECK_VERSION
pa_cvolume_init (&v);
@@ -781,22 +777,23 @@ static int qpa_ctl_in (HWVoiceIn *hw, int cmd, ...)
pa_threaded_mainloop_lock (g->mainloop);
op = pa_context_set_source_output_volume (g->context,
pa_stream_get_index (pa->stream),
/* FIXME: use the upcoming "set_source_output_{volume,mute}" */
op = pa_context_set_source_volume_by_index (g->context,
pa_stream_get_device_index (pa->stream),
&v, NULL, NULL);
if (!op) {
qpa_logerr (pa_context_errno (g->context),
"set_source_output_volume() failed\n");
"set_source_volume() failed\n");
} else {
pa_operation_unref(op);
}
op = pa_context_set_source_output_mute (g->context,
op = pa_context_set_source_mute_by_index (g->context,
pa_stream_get_index (pa->stream),
sw->vol.mute, NULL, NULL);
if (!op) {
qpa_logerr (pa_context_errno (g->context),
"set_source_output_mute() failed\n");
"set_source_mute() failed\n");
} else {
pa_operation_unref (op);
}
@@ -808,31 +805,23 @@ static int qpa_ctl_in (HWVoiceIn *hw, int cmd, ...)
}
/* common */
static PAConf glob_conf = {
.samples = 4096,
};
static void *qpa_audio_init (void)
{
paaudio *g = g_malloc(sizeof(paaudio));
g->conf = glob_conf;
g->mainloop = NULL;
g->context = NULL;
paaudio *g = &glob_paaudio;
g->mainloop = pa_threaded_mainloop_new ();
if (!g->mainloop) {
goto fail;
}
g->context = pa_context_new (pa_threaded_mainloop_get_api (g->mainloop),
g->conf.server);
g->context = pa_context_new (pa_threaded_mainloop_get_api (g->mainloop), glob_paaudio.server);
if (!g->context) {
goto fail;
}
pa_context_set_state_callback (g->context, context_state_cb, g);
if (pa_context_connect (g->context, g->conf.server, 0, NULL) < 0) {
if (pa_context_connect (g->context, glob_paaudio.server, 0, NULL) < 0) {
qpa_logerr (pa_context_errno (g->context),
"pa_context_connect() failed\n");
goto fail;
@@ -865,13 +854,12 @@ static void *qpa_audio_init (void)
pa_threaded_mainloop_unlock (g->mainloop);
return g;
return &glob_paaudio;
unlock_and_fail:
pa_threaded_mainloop_unlock (g->mainloop);
fail:
AUD_log (AUDIO_CAP, "Failed to initialize PA context");
qpa_audio_fini(g);
return NULL;
}
@@ -886,38 +874,39 @@ static void qpa_audio_fini (void *opaque)
if (g->context) {
pa_context_disconnect (g->context);
pa_context_unref (g->context);
g->context = NULL;
}
if (g->mainloop) {
pa_threaded_mainloop_free (g->mainloop);
}
g_free(g);
g->mainloop = NULL;
}
struct audio_option qpa_options[] = {
{
.name = "SAMPLES",
.tag = AUD_OPT_INT,
.valp = &glob_conf.samples,
.valp = &glob_paaudio.samples,
.descr = "buffer size in samples"
},
{
.name = "SERVER",
.tag = AUD_OPT_STR,
.valp = &glob_conf.server,
.valp = &glob_paaudio.server,
.descr = "server address"
},
{
.name = "SINK",
.tag = AUD_OPT_STR,
.valp = &glob_conf.sink,
.valp = &glob_paaudio.sink,
.descr = "sink device name"
},
{
.name = "SOURCE",
.tag = AUD_OPT_STR,
.valp = &glob_conf.source,
.valp = &glob_paaudio.source,
.descr = "source device name"
},
{ /* End of list */ }

audio/sdlaudio.c
View File

@@ -21,7 +21,6 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include <SDL.h>
#include <SDL_thread.h>
#include "qemu-common.h"
@@ -56,7 +55,6 @@ static struct SDLAudioState {
SDL_mutex *mutex;
SDL_sem *sem;
int initialized;
bool driver_created;
} glob_sdl;
typedef struct SDLAudioState SDLAudioState;
@@ -334,8 +332,7 @@ static void sdl_fini_out (HWVoiceOut *hw)
sdl_close (&glob_sdl);
}
static int sdl_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int sdl_init_out (HWVoiceOut *hw, struct audsettings *as)
{
SDLVoiceOut *sdl = (SDLVoiceOut *) hw;
SDLAudioState *s = &glob_sdl;
@@ -395,10 +392,6 @@ static int sdl_ctl_out (HWVoiceOut *hw, int cmd, ...)
static void *sdl_audio_init (void)
{
SDLAudioState *s = &glob_sdl;
if (s->driver_created) {
sdl_logerr("Can't create multiple sdl backends\n");
return NULL;
}
if (SDL_InitSubSystem (SDL_INIT_AUDIO)) {
sdl_logerr ("SDL failed to initialize audio subsystem\n");
@@ -420,7 +413,6 @@ static void *sdl_audio_init (void)
return NULL;
}
s->driver_created = true;
return s;
}
@@ -431,7 +423,6 @@ static void sdl_audio_fini (void *opaque)
SDL_DestroySemaphore (s->sem);
SDL_DestroyMutex (s->mutex);
SDL_QuitSubSystem (SDL_INIT_AUDIO);
s->driver_created = false;
}
static struct audio_option sdl_options[] = {

audio/spiceaudio.c
View File

@@ -17,10 +17,7 @@
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "hw/hw.h"
#include "qemu/host-utils.h"
#include "qemu/error-report.h"
#include "qemu/timer.h"
#include "ui/qemu-spice.h"
@@ -105,11 +102,11 @@ static int rate_get_samples (struct audio_pcm_info *info, SpiceRateCtl *rate)
now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
ticks = now - rate->start_ticks;
bytes = muldiv64(ticks, info->bytes_per_second, NANOSECONDS_PER_SECOND);
bytes = muldiv64 (ticks, info->bytes_per_second, get_ticks_per_sec ());
samples = (bytes - rate->bytes_sent) >> info->shift;
if (samples < 0 || samples > 65536) {
error_report("Resetting rate control (%" PRId64 " samples)", samples);
rate_start(rate);
fprintf (stderr, "Resetting rate control (%" PRId64 " samples)\n", samples);
rate_start (rate);
samples = 0;
}
rate->bytes_sent += samples << info->shift;
@@ -118,8 +115,7 @@ static int rate_get_samples (struct audio_pcm_info *info, SpiceRateCtl *rate)
/* playback */
static int line_out_init(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int line_out_init (HWVoiceOut *hw, struct audsettings *as)
{
SpiceVoiceOut *out = container_of (hw, SpiceVoiceOut, hw);
struct audsettings settings;
@@ -247,7 +243,7 @@ static int line_out_ctl (HWVoiceOut *hw, int cmd, ...)
/* record */
static int line_in_init(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int line_in_init (HWVoiceIn *hw, struct audsettings *as)
{
SpiceVoiceIn *in = container_of (hw, SpiceVoiceIn, hw);
struct audsettings settings;

audio/trace-events
View File

@@ -1,17 +0,0 @@
# See docs/tracing.txt for syntax documentation.
# audio/alsaaudio.c
alsa_revents(int revents) "revents = %d"
alsa_pollout(int i, int fd) "i = %d fd = %d"
alsa_set_handler(int events, int index, int fd, int err) "events=%#x index=%d fd=%d err=%d"
alsa_wrote_zero(int len) "Failed to write %d frames (wrote zero)"
alsa_read_zero(long len) "Failed to read %ld frames (read zero)"
alsa_xrun_out(void) "Recovering from playback xrun"
alsa_xrun_in(void) "Recovering from capture xrun"
alsa_resume_out(void) "Resuming suspended output stream"
alsa_resume_in(void) "Resuming suspended input stream"
alsa_no_frames(int state) "No frames available and ALSA state is %d"
# audio/ossaudio.c
oss_version(int version) "OSS version = %#x"
oss_invalid_available_size(int size, int bufsize) "Invalid available size, size=%d bufsize=%d"

audio/wavaudio.c
View File

@@ -21,8 +21,7 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu/host-utils.h"
#include "hw/hw.h"
#include "qemu/timer.h"
#include "audio.h"
@@ -37,10 +36,15 @@ typedef struct WAVVoiceOut {
int total_samples;
} WAVVoiceOut;
typedef struct {
static struct {
struct audsettings settings;
const char *wav_path;
} WAVConf;
} conf = {
.settings.freq = 44100,
.settings.nchannels = 2,
.settings.fmt = AUD_FMT_S16,
.wav_path = "qemu.wav"
};
static int wav_run_out (HWVoiceOut *hw, int live)
{
@@ -51,7 +55,7 @@ static int wav_run_out (HWVoiceOut *hw, int live)
int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
int64_t ticks = now - wav->old_ticks;
int64_t bytes =
muldiv64(ticks, hw->info.bytes_per_second, NANOSECONDS_PER_SECOND);
muldiv64 (ticks, hw->info.bytes_per_second, get_ticks_per_sec ());
if (bytes > INT_MAX) {
samples = INT_MAX >> hw->info.shift;
@@ -101,8 +105,7 @@ static void le_store (uint8_t *buf, uint32_t val, int len)
}
}
static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int wav_init_out (HWVoiceOut *hw, struct audsettings *as)
{
WAVVoiceOut *wav = (WAVVoiceOut *) hw;
int bits16 = 0, stereo = 0;
@@ -112,8 +115,9 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
0x02, 0x00, 0x44, 0xac, 0x00, 0x00, 0x10, 0xb1, 0x02, 0x00, 0x04,
0x00, 0x10, 0x00, 0x64, 0x61, 0x74, 0x61, 0x00, 0x00, 0x00, 0x00
};
WAVConf *conf = drv_opaque;
struct audsettings wav_as = conf->settings;
struct audsettings wav_as = conf.settings;
(void) as;
stereo = wav_as.nchannels == 2;
switch (wav_as.fmt) {
@@ -151,10 +155,10 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
le_store (hdr + 28, hw->info.freq << (bits16 + stereo), 4);
le_store (hdr + 32, 1 << (bits16 + stereo), 2);
wav->f = fopen (conf->wav_path, "wb");
wav->f = fopen (conf.wav_path, "wb");
if (!wav->f) {
dolog ("Failed to open wave file `%s'\nReason: %s\n",
conf->wav_path, strerror (errno));
conf.wav_path, strerror (errno));
g_free (wav->pcm_buf);
wav->pcm_buf = NULL;
return -1;
@@ -222,49 +226,40 @@ static int wav_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static WAVConf glob_conf = {
.settings.freq = 44100,
.settings.nchannels = 2,
.settings.fmt = AUD_FMT_S16,
.wav_path = "qemu.wav"
};
static void *wav_audio_init (void)
{
WAVConf *conf = g_malloc(sizeof(WAVConf));
*conf = glob_conf;
return conf;
return &conf;
}
static void wav_audio_fini (void *opaque)
{
(void) opaque;
ldebug ("wav_fini");
g_free(opaque);
}
static struct audio_option wav_options[] = {
{
.name = "FREQUENCY",
.tag = AUD_OPT_INT,
.valp = &glob_conf.settings.freq,
.valp = &conf.settings.freq,
.descr = "Frequency"
},
{
.name = "FORMAT",
.tag = AUD_OPT_FMT,
.valp = &glob_conf.settings.fmt,
.valp = &conf.settings.fmt,
.descr = "Format"
},
{
.name = "DAC_FIXED_CHANNELS",
.tag = AUD_OPT_INT,
.valp = &glob_conf.settings.nchannels,
.valp = &conf.settings.nchannels,
.descr = "Number of channels (1 - mono, 2 - stereo)"
},
{
.name = "PATH",
.tag = AUD_OPT_STR,
.valp = &glob_conf.wav_path,
.valp = &conf.wav_path,
.descr = "Path to wave file"
},
{ /* End of list */ }

audio/wavcapture.c
View File

@@ -1,7 +1,5 @@
#include "qemu/osdep.h"
#include "hw/hw.h"
#include "monitor/monitor.h"
#include "qemu/error-report.h"
#include "audio.h"
typedef struct {
@@ -65,7 +63,8 @@ static void wav_destroy (void *opaque)
}
doclose:
if (fclose (wav->f)) {
error_report("wav_destroy: fclose failed: %s", strerror(errno));
fprintf (stderr, "wav_destroy: fclose failed: %s",
strerror (errno));
}
}

audio/winwaveaudio.c
View File

@@ -0,0 +1,717 @@
/* public domain */
#include "qemu-common.h"
#include "sysemu/sysemu.h"
#include "audio.h"
#define AUDIO_CAP "winwave"
#include "audio_int.h"
#include <windows.h>
#include <mmsystem.h>
#include "audio_win_int.h"
static struct {
int dac_headers;
int dac_samples;
int adc_headers;
int adc_samples;
} conf = {
.dac_headers = 4,
.dac_samples = 1024,
.adc_headers = 4,
.adc_samples = 1024
};
typedef struct {
HWVoiceOut hw;
HWAVEOUT hwo;
WAVEHDR *hdrs;
HANDLE event;
void *pcm_buf;
int avail;
int pending;
int curhdr;
int paused;
CRITICAL_SECTION crit_sect;
} WaveVoiceOut;
typedef struct {
HWVoiceIn hw;
HWAVEIN hwi;
WAVEHDR *hdrs;
HANDLE event;
void *pcm_buf;
int curhdr;
int paused;
int rpos;
int avail;
CRITICAL_SECTION crit_sect;
} WaveVoiceIn;
static void winwave_log_mmresult (MMRESULT mr)
{
const char *str = "BUG";
switch (mr) {
case MMSYSERR_NOERROR:
str = "Success";
break;
case MMSYSERR_INVALHANDLE:
str = "Specified device handle is invalid";
break;
case MMSYSERR_BADDEVICEID:
str = "Specified device id is out of range";
break;
case MMSYSERR_NODRIVER:
str = "No device driver is present";
break;
case MMSYSERR_NOMEM:
str = "Unable to allocate or lock memory";
break;
case WAVERR_SYNC:
str = "Device is synchronous but waveOutOpen was called "
"without using the WINWAVE_ALLOWSYNC flag";
break;
case WAVERR_UNPREPARED:
str = "The data block pointed to by the pwh parameter "
"hasn't been prepared";
break;
case WAVERR_STILLPLAYING:
str = "There are still buffers in the queue";
break;
default:
dolog ("Reason: Unknown (MMRESULT %#x)\n", mr);
return;
}
dolog ("Reason: %s\n", str);
}
static void GCC_FMT_ATTR (2, 3) winwave_logerr (
MMRESULT mr,
const char *fmt,
...
)
{
va_list ap;
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
AUD_log (NULL, " failed\n");
winwave_log_mmresult (mr);
}
static void winwave_anal_close_out (WaveVoiceOut *wave)
{
MMRESULT mr;
mr = waveOutClose (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutClose");
}
wave->hwo = NULL;
}
static void CALLBACK winwave_callback_out (
HWAVEOUT hwo,
UINT msg,
DWORD_PTR dwInstance,
DWORD_PTR dwParam1,
DWORD_PTR dwParam2
)
{
WaveVoiceOut *wave = (WaveVoiceOut *) dwInstance;
switch (msg) {
case WOM_DONE:
{
WAVEHDR *h = (WAVEHDR *) dwParam1;
if (!h->dwUser) {
h->dwUser = 1;
EnterCriticalSection (&wave->crit_sect);
{
wave->avail += conf.dac_samples;
}
LeaveCriticalSection (&wave->crit_sect);
if (wave->hw.poll_mode) {
if (!SetEvent (wave->event)) {
dolog ("DAC SetEvent failed %lx\n", GetLastError ());
}
}
}
}
break;
case WOM_CLOSE:
case WOM_OPEN:
break;
default:
dolog ("unknown wave out callback msg %x\n", msg);
}
}
static int winwave_init_out (HWVoiceOut *hw, struct audsettings *as)
{
int i;
int err;
MMRESULT mr;
WAVEFORMATEX wfx;
WaveVoiceOut *wave;
wave = (WaveVoiceOut *) hw;
InitializeCriticalSection (&wave->crit_sect);
err = waveformat_from_audio_settings (&wfx, as);
if (err) {
goto err0;
}
mr = waveOutOpen (&wave->hwo, WAVE_MAPPER, &wfx,
(DWORD_PTR) winwave_callback_out,
(DWORD_PTR) wave, CALLBACK_FUNCTION);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutOpen");
goto err1;
}
wave->hdrs = audio_calloc (AUDIO_FUNC, conf.dac_headers,
sizeof (*wave->hdrs));
if (!wave->hdrs) {
goto err2;
}
audio_pcm_init_info (&hw->info, as);
hw->samples = conf.dac_samples * conf.dac_headers;
wave->avail = hw->samples;
wave->pcm_buf = audio_calloc (AUDIO_FUNC, conf.dac_samples,
conf.dac_headers << hw->info.shift);
if (!wave->pcm_buf) {
goto err3;
}
for (i = 0; i < conf.dac_headers; ++i) {
WAVEHDR *h = &wave->hdrs[i];
h->dwUser = 0;
h->dwBufferLength = conf.dac_samples << hw->info.shift;
h->lpData = advance (wave->pcm_buf, i * h->dwBufferLength);
h->dwFlags = 0;
mr = waveOutPrepareHeader (wave->hwo, h, sizeof (*h));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutPrepareHeader(%d)", i);
goto err4;
}
}
return 0;
err4:
g_free (wave->pcm_buf);
err3:
g_free (wave->hdrs);
err2:
winwave_anal_close_out (wave);
err1:
err0:
return -1;
}
static int winwave_write (SWVoiceOut *sw, void *buf, int len)
{
return audio_pcm_sw_write (sw, buf, len);
}
static int winwave_run_out (HWVoiceOut *hw, int live)
{
WaveVoiceOut *wave = (WaveVoiceOut *) hw;
int decr;
int doreset;
EnterCriticalSection (&wave->crit_sect);
{
decr = audio_MIN (live, wave->avail);
decr = audio_pcm_hw_clip_out (hw, wave->pcm_buf, decr, wave->pending);
wave->pending += decr;
wave->avail -= decr;
}
LeaveCriticalSection (&wave->crit_sect);
doreset = hw->poll_mode && (wave->pending >= conf.dac_samples);
if (doreset && !ResetEvent (wave->event)) {
dolog ("DAC ResetEvent failed %lx\n", GetLastError ());
}
while (wave->pending >= conf.dac_samples) {
MMRESULT mr;
WAVEHDR *h = &wave->hdrs[wave->curhdr];
h->dwUser = 0;
mr = waveOutWrite (wave->hwo, h, sizeof (*h));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutWrite(%d)", wave->curhdr);
break;
}
wave->pending -= conf.dac_samples;
wave->curhdr = (wave->curhdr + 1) % conf.dac_headers;
}
return decr;
}
static void winwave_poll (void *opaque)
{
(void) opaque;
audio_run ("winwave_poll");
}
static void winwave_fini_out (HWVoiceOut *hw)
{
int i;
MMRESULT mr;
WaveVoiceOut *wave = (WaveVoiceOut *) hw;
mr = waveOutReset (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutReset");
}
for (i = 0; i < conf.dac_headers; ++i) {
mr = waveOutUnprepareHeader (wave->hwo, &wave->hdrs[i],
sizeof (wave->hdrs[i]));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutUnprepareHeader(%d)", i);
}
}
winwave_anal_close_out (wave);
if (wave->event) {
qemu_del_wait_object (wave->event, winwave_poll, wave);
if (!CloseHandle (wave->event)) {
dolog ("DAC CloseHandle failed %lx\n", GetLastError ());
}
wave->event = NULL;
}
g_free (wave->pcm_buf);
wave->pcm_buf = NULL;
g_free (wave->hdrs);
wave->hdrs = NULL;
}
static int winwave_ctl_out (HWVoiceOut *hw, int cmd, ...)
{
MMRESULT mr;
WaveVoiceOut *wave = (WaveVoiceOut *) hw;
switch (cmd) {
case VOICE_ENABLE:
{
va_list ap;
int poll_mode;
va_start (ap, cmd);
poll_mode = va_arg (ap, int);
va_end (ap);
if (poll_mode && !wave->event) {
wave->event = CreateEvent (NULL, TRUE, TRUE, NULL);
if (!wave->event) {
dolog ("DAC CreateEvent: %lx, poll mode will be disabled\n",
GetLastError ());
}
}
if (wave->event) {
int ret;
ret = qemu_add_wait_object (wave->event, winwave_poll, wave);
hw->poll_mode = (ret == 0);
}
else {
hw->poll_mode = 0;
}
wave->paused = 0;
}
return 0;
case VOICE_DISABLE:
if (!wave->paused) {
mr = waveOutReset (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutReset");
}
else {
wave->paused = 1;
}
}
if (wave->event) {
qemu_del_wait_object (wave->event, winwave_poll, wave);
}
return 0;
}
return -1;
}
static void winwave_anal_close_in (WaveVoiceIn *wave)
{
MMRESULT mr;
mr = waveInClose (wave->hwi);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInClose");
}
wave->hwi = NULL;
}
static void CALLBACK winwave_callback_in (
HWAVEIN *hwi,
UINT msg,
DWORD_PTR dwInstance,
DWORD_PTR dwParam1,
DWORD_PTR dwParam2
)
{
WaveVoiceIn *wave = (WaveVoiceIn *) dwInstance;
switch (msg) {
case WIM_DATA:
{
WAVEHDR *h = (WAVEHDR *) dwParam1;
if (!h->dwUser) {
h->dwUser = 1;
EnterCriticalSection (&wave->crit_sect);
{
wave->avail += conf.adc_samples;
}
LeaveCriticalSection (&wave->crit_sect);
if (wave->hw.poll_mode) {
if (!SetEvent (wave->event)) {
dolog ("ADC SetEvent failed %lx\n", GetLastError ());
}
}
}
}
break;
case WIM_CLOSE:
case WIM_OPEN:
break;
default:
dolog ("unknown wave in callback msg %x\n", msg);
}
}
static void winwave_add_buffers (WaveVoiceIn *wave, int samples)
{
int doreset;
doreset = wave->hw.poll_mode && (samples >= conf.adc_samples);
if (doreset && !ResetEvent (wave->event)) {
dolog ("ADC ResetEvent failed %lx\n", GetLastError ());
}
while (samples >= conf.adc_samples) {
MMRESULT mr;
WAVEHDR *h = &wave->hdrs[wave->curhdr];
h->dwUser = 0;
mr = waveInAddBuffer (wave->hwi, h, sizeof (*h));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInAddBuffer(%d)", wave->curhdr);
}
wave->curhdr = (wave->curhdr + 1) % conf.adc_headers;
samples -= conf.adc_samples;
}
}
static int winwave_init_in (HWVoiceIn *hw, struct audsettings *as)
{
int i;
int err;
MMRESULT mr;
WAVEFORMATEX wfx;
WaveVoiceIn *wave;
wave = (WaveVoiceIn *) hw;
InitializeCriticalSection (&wave->crit_sect);
err = waveformat_from_audio_settings (&wfx, as);
if (err) {
goto err0;
}
mr = waveInOpen (&wave->hwi, WAVE_MAPPER, &wfx,
(DWORD_PTR) winwave_callback_in,
(DWORD_PTR) wave, CALLBACK_FUNCTION);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInOpen");
goto err1;
}
wave->hdrs = audio_calloc (AUDIO_FUNC, conf.dac_headers,
sizeof (*wave->hdrs));
if (!wave->hdrs) {
goto err2;
}
audio_pcm_init_info (&hw->info, as);
hw->samples = conf.adc_samples * conf.adc_headers;
wave->avail = 0;
wave->pcm_buf = audio_calloc (AUDIO_FUNC, conf.adc_samples,
conf.adc_headers << hw->info.shift);
if (!wave->pcm_buf) {
goto err3;
}
for (i = 0; i < conf.adc_headers; ++i) {
WAVEHDR *h = &wave->hdrs[i];
h->dwUser = 0;
h->dwBufferLength = conf.adc_samples << hw->info.shift;
h->lpData = advance (wave->pcm_buf, i * h->dwBufferLength);
h->dwFlags = 0;
mr = waveInPrepareHeader (wave->hwi, h, sizeof (*h));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInPrepareHeader(%d)", i);
goto err4;
}
}
wave->paused = 1;
winwave_add_buffers (wave, hw->samples);
return 0;
err4:
g_free (wave->pcm_buf);
err3:
g_free (wave->hdrs);
err2:
winwave_anal_close_in (wave);
err1:
err0:
return -1;
}
static void winwave_fini_in (HWVoiceIn *hw)
{
int i;
MMRESULT mr;
WaveVoiceIn *wave = (WaveVoiceIn *) hw;
mr = waveInReset (wave->hwi);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInReset");
}
for (i = 0; i < conf.adc_headers; ++i) {
mr = waveInUnprepareHeader (wave->hwi, &wave->hdrs[i],
sizeof (wave->hdrs[i]));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInUnprepareHeader(%d)", i);
}
}
winwave_anal_close_in (wave);
if (wave->event) {
qemu_del_wait_object (wave->event, winwave_poll, wave);
if (!CloseHandle (wave->event)) {
dolog ("ADC CloseHandle failed %lx\n", GetLastError ());
}
wave->event = NULL;
}
g_free (wave->pcm_buf);
wave->pcm_buf = NULL;
g_free (wave->hdrs);
wave->hdrs = NULL;
}
static int winwave_run_in (HWVoiceIn *hw)
{
WaveVoiceIn *wave = (WaveVoiceIn *) hw;
int live = audio_pcm_hw_get_live_in (hw);
int dead = hw->samples - live;
int decr, ret;
if (!dead) {
return 0;
}
EnterCriticalSection (&wave->crit_sect);
{
decr = audio_MIN (dead, wave->avail);
wave->avail -= decr;
}
LeaveCriticalSection (&wave->crit_sect);
ret = decr;
while (decr) {
int left = hw->samples - hw->wpos;
int conv = audio_MIN (left, decr);
hw->conv (hw->conv_buf + hw->wpos,
advance (wave->pcm_buf, wave->rpos << hw->info.shift),
conv);
wave->rpos = (wave->rpos + conv) % hw->samples;
hw->wpos = (hw->wpos + conv) % hw->samples;
decr -= conv;
}
winwave_add_buffers (wave, ret);
return ret;
}
static int winwave_read (SWVoiceIn *sw, void *buf, int size)
{
return audio_pcm_sw_read (sw, buf, size);
}
static int winwave_ctl_in (HWVoiceIn *hw, int cmd, ...)
{
MMRESULT mr;
WaveVoiceIn *wave = (WaveVoiceIn *) hw;
switch (cmd) {
case VOICE_ENABLE:
{
va_list ap;
int poll_mode;
va_start (ap, cmd);
poll_mode = va_arg (ap, int);
va_end (ap);
if (poll_mode && !wave->event) {
wave->event = CreateEvent (NULL, TRUE, TRUE, NULL);
if (!wave->event) {
dolog ("ADC CreateEvent: %lx, poll mode will be disabled\n",
GetLastError ());
}
}
if (wave->event) {
int ret;
ret = qemu_add_wait_object (wave->event, winwave_poll, wave);
hw->poll_mode = (ret == 0);
}
else {
hw->poll_mode = 0;
}
if (wave->paused) {
mr = waveInStart (wave->hwi);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInStart");
}
wave->paused = 0;
}
}
return 0;
case VOICE_DISABLE:
if (!wave->paused) {
mr = waveInStop (wave->hwi);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInStop");
}
else {
wave->paused = 1;
}
}
if (wave->event) {
qemu_del_wait_object (wave->event, winwave_poll, wave);
}
return 0;
}
return 0;
}
static void *winwave_audio_init (void)
{
return &conf;
}
static void winwave_audio_fini (void *opaque)
{
(void) opaque;
}
static struct audio_option winwave_options[] = {
{
.name = "DAC_HEADERS",
.tag = AUD_OPT_INT,
.valp = &conf.dac_headers,
.descr = "DAC number of headers",
},
{
.name = "DAC_SAMPLES",
.tag = AUD_OPT_INT,
.valp = &conf.dac_samples,
.descr = "DAC number of samples per header",
},
{
.name = "ADC_HEADERS",
.tag = AUD_OPT_INT,
.valp = &conf.adc_headers,
.descr = "ADC number of headers",
},
{
.name = "ADC_SAMPLES",
.tag = AUD_OPT_INT,
.valp = &conf.adc_samples,
.descr = "ADC number of samples per header",
},
{ /* End of list */ }
};
static struct audio_pcm_ops winwave_pcm_ops = {
.init_out = winwave_init_out,
.fini_out = winwave_fini_out,
.run_out = winwave_run_out,
.write = winwave_write,
.ctl_out = winwave_ctl_out,
.init_in = winwave_init_in,
.fini_in = winwave_fini_in,
.run_in = winwave_run_in,
.read = winwave_read,
.ctl_in = winwave_ctl_in
};
struct audio_driver winwave_audio_driver = {
.name = "winwave",
.descr = "Windows Waveform Audio http://msdn.microsoft.com",
.options = winwave_options,
.init = winwave_audio_init,
.fini = winwave_audio_fini,
.pcm_ops = &winwave_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (WaveVoiceOut),
.voice_size_in = sizeof (WaveVoiceIn)
};
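The driver above tracks buffer space with an avail/pending pair guarded by a critical section: the WOM_DONE callback returns a whole header's worth of samples to avail, while winwave_run_out claims min(live, avail) and only hands full header-sized chunks to waveOutWrite. Below is a minimal, portable sketch of that accounting pattern with the Windows primitives swapped for a pthread mutex and plain counters (an assumption made only so the sketch compiles anywhere; it is not part of the patch).

#include <pthread.h>
#include <stdio.h>

#define SAMPLES_PER_HDR 1024
#define HDRS            4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int avail = SAMPLES_PER_HDR * HDRS;   /* free samples in the ring */
static int pending;                          /* mixed but not yet submitted */

/* Completion callback: the device finished one header, hand it back. */
static void on_buffer_done(void)
{
    pthread_mutex_lock(&lock);
    avail += SAMPLES_PER_HDR;
    pthread_mutex_unlock(&lock);
}

/* Mixer side: claim what fits, then submit only full header-sized chunks. */
static int run_out(int live)
{
    int decr;

    pthread_mutex_lock(&lock);
    decr = live < avail ? live : avail;
    pending += decr;
    avail -= decr;
    pthread_mutex_unlock(&lock);

    while (pending >= SAMPLES_PER_HDR) {
        /* waveOutWrite() of the current header would go here */
        pending -= SAMPLES_PER_HDR;
    }
    return decr;
}

int main(void)
{
    printf("consumed %d samples\n", run_out(1500));
    on_buffer_done();                        /* device returned one header */
    printf("consumed %d samples\n", run_out(1500));
    return 0;
}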


@@ -1,11 +1,8 @@
common-obj-y += rng.o rng-egd.o
common-obj-$(CONFIG_POSIX) += rng-random.o
common-obj-y += msmouse.o testdev.o
common-obj-y += msmouse.o
common-obj-$(CONFIG_BRLAPI) += baum.o
baum.o-cflags := $(SDL_CFLAGS)
$(obj)/baum.o: QEMU_CFLAGS += $(SDL_CFLAGS)
common-obj-$(CONFIG_TPM) += tpm.o
common-obj-y += hostmem.o hostmem-ram.o
common-obj-$(CONFIG_LINUX) += hostmem-file.o


@@ -21,8 +21,6 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "sysemu/char.h"
#include "qemu/timer.h"
@@ -305,7 +303,7 @@ static int baum_eat_packet(BaumDriverState *baum, const uint8_t *buf, int len)
return 0;
cur++;
}
DPRINTF("Dropped %td bytes!\n", cur - buf);
DPRINTF("Dropped %d bytes!\n", cur - buf);
}
#define EAT(c) do {\
@@ -337,7 +335,7 @@ static int baum_eat_packet(BaumDriverState *baum, const uint8_t *buf, int len)
/* Allow 100ms to complete the DisplayData packet */
timer_mod(baum->cellCount_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
NANOSECONDS_PER_SECOND / 10);
get_ticks_per_sec() / 10);
for (i = 0; i < baum->x * baum->y ; i++) {
EAT(c);
cells[i] = c;
@@ -563,12 +561,8 @@ static void baum_close(struct CharDriverState *chr)
g_free(baum);
}
static CharDriverState *chr_baum_init(const char *id,
ChardevBackend *backend,
ChardevReturn *ret,
Error **errp)
CharDriverState *chr_baum_init(void)
{
ChardevCommon *common = backend->u.braille.data;
BaumDriverState *baum;
CharDriverState *chr;
brlapi_handle_t *handle;
@@ -579,12 +573,8 @@ static CharDriverState *chr_baum_init(const char *id,
#endif
int tty;
chr = qemu_chr_alloc(common, errp);
if (!chr) {
return NULL;
}
baum = g_malloc0(sizeof(BaumDriverState));
baum->chr = chr;
baum->chr = chr = g_malloc0(sizeof(CharDriverState));
chr->opaque = baum;
chr->chr_write = baum_write;
@@ -596,16 +586,14 @@ static CharDriverState *chr_baum_init(const char *id,
baum->brlapi_fd = brlapi__openConnection(handle, NULL, NULL);
if (baum->brlapi_fd == -1) {
error_setg(errp, "brlapi__openConnection: %s",
brlapi_strerror(brlapi_error_location()));
brlapi_perror("baum_init: brlapi_openConnection");
goto fail_handle;
}
baum->cellCount_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, baum_cellCount_timer_cb, baum);
if (brlapi__getDisplaySize(handle, &baum->x, &baum->y) == -1) {
error_setg(errp, "brlapi__getDisplaySize: %s",
brlapi_strerror(brlapi_error_location()));
brlapi_perror("baum_init: brlapi_getDisplaySize");
goto fail;
}
@@ -621,8 +609,7 @@ static CharDriverState *chr_baum_init(const char *id,
tty = BRLAPI_TTY_DEFAULT;
if (brlapi__enterTtyMode(handle, tty, NULL) == -1) {
error_setg(errp, "brlapi__enterTtyMode: %s",
brlapi_strerror(brlapi_error_location()));
brlapi_perror("baum_init: brlapi_enterTtyMode");
goto fail;
}
@@ -642,8 +629,7 @@ fail_handle:
static void register_types(void)
{
register_char_driver("braille", CHARDEV_BACKEND_KIND_BRAILLE, NULL,
chr_baum_init);
register_char_driver_qapi("braille", CHARDEV_BACKEND_KIND_BRAILLE, NULL);
}
type_init(register_types);


@@ -1,145 +0,0 @@
/*
* QEMU Host Memory Backend for hugetlbfs
*
* Copyright (C) 2013-2014 Red Hat Inc
*
* Authors:
* Paolo Bonzini <pbonzini@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "sysemu/hostmem.h"
#include "sysemu/sysemu.h"
#include "qom/object_interfaces.h"
/* hostmem-file.c */
/**
* @TYPE_MEMORY_BACKEND_FILE:
* name of backend that uses mmap on a file descriptor
*/
#define TYPE_MEMORY_BACKEND_FILE "memory-backend-file"
#define MEMORY_BACKEND_FILE(obj) \
OBJECT_CHECK(HostMemoryBackendFile, (obj), TYPE_MEMORY_BACKEND_FILE)
typedef struct HostMemoryBackendFile HostMemoryBackendFile;
struct HostMemoryBackendFile {
HostMemoryBackend parent_obj;
bool share;
char *mem_path;
};
static void
file_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
{
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(backend);
if (!backend->size) {
error_setg(errp, "can't create backend with size 0");
return;
}
if (!fb->mem_path) {
error_setg(errp, "mem-path property not set");
return;
}
#ifndef CONFIG_LINUX
error_setg(errp, "-mem-path not supported on this host");
#else
if (!memory_region_size(&backend->mr)) {
gchar *path;
backend->force_prealloc = mem_prealloc;
path = object_get_canonical_path(OBJECT(backend));
memory_region_init_ram_from_file(&backend->mr, OBJECT(backend),
path,
backend->size, fb->share,
fb->mem_path, errp);
g_free(path);
}
#endif
}
static void
file_backend_class_init(ObjectClass *oc, void *data)
{
HostMemoryBackendClass *bc = MEMORY_BACKEND_CLASS(oc);
bc->alloc = file_backend_memory_alloc;
}
static char *get_mem_path(Object *o, Error **errp)
{
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
return g_strdup(fb->mem_path);
}
static void set_mem_path(Object *o, const char *str, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(o);
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
if (memory_region_size(&backend->mr)) {
error_setg(errp, "cannot change property value");
return;
}
g_free(fb->mem_path);
fb->mem_path = g_strdup(str);
}
static bool file_memory_backend_get_share(Object *o, Error **errp)
{
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
return fb->share;
}
static void file_memory_backend_set_share(Object *o, bool value, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(o);
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
if (memory_region_size(&backend->mr)) {
error_setg(errp, "cannot change property value");
return;
}
fb->share = value;
}
static void
file_backend_instance_init(Object *o)
{
object_property_add_bool(o, "share",
file_memory_backend_get_share,
file_memory_backend_set_share, NULL);
object_property_add_str(o, "mem-path", get_mem_path,
set_mem_path, NULL);
}
static void file_backend_instance_finalize(Object *o)
{
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
g_free(fb->mem_path);
}
static const TypeInfo file_backend_info = {
.name = TYPE_MEMORY_BACKEND_FILE,
.parent = TYPE_MEMORY_BACKEND,
.class_init = file_backend_class_init,
.instance_init = file_backend_instance_init,
.instance_finalize = file_backend_instance_finalize,
.instance_size = sizeof(HostMemoryBackendFile),
};
static void register_types(void)
{
type_register_static(&file_backend_info);
}
type_init(register_types);


@@ -1,55 +0,0 @@
/*
* QEMU Host Memory Backend
*
* Copyright (C) 2013-2014 Red Hat Inc
*
* Authors:
* Igor Mammedov <imammedo@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/hostmem.h"
#include "qapi/error.h"
#include "qom/object_interfaces.h"
#define TYPE_MEMORY_BACKEND_RAM "memory-backend-ram"
static void
ram_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
{
char *path;
if (!backend->size) {
error_setg(errp, "can't create backend with size 0");
return;
}
path = object_get_canonical_path_component(OBJECT(backend));
memory_region_init_ram(&backend->mr, OBJECT(backend), path,
backend->size, errp);
g_free(path);
}
static void
ram_backend_class_init(ObjectClass *oc, void *data)
{
HostMemoryBackendClass *bc = MEMORY_BACKEND_CLASS(oc);
bc->alloc = ram_backend_memory_alloc;
}
static const TypeInfo ram_backend_info = {
.name = TYPE_MEMORY_BACKEND_RAM,
.parent = TYPE_MEMORY_BACKEND,
.class_init = ram_backend_class_init,
};
static void register_types(void)
{
type_register_static(&ram_backend_info);
}
type_init(register_types);


@@ -1,399 +0,0 @@
/*
* QEMU Host Memory Backend
*
* Copyright (C) 2013-2014 Red Hat Inc
*
* Authors:
* Igor Mammedov <imammedo@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/hostmem.h"
#include "hw/boards.h"
#include "qapi/error.h"
#include "qapi/visitor.h"
#include "qapi-types.h"
#include "qapi-visit.h"
#include "qemu/config-file.h"
#include "qom/object_interfaces.h"
#ifdef CONFIG_NUMA
#include <numaif.h>
QEMU_BUILD_BUG_ON(HOST_MEM_POLICY_DEFAULT != MPOL_DEFAULT);
QEMU_BUILD_BUG_ON(HOST_MEM_POLICY_PREFERRED != MPOL_PREFERRED);
QEMU_BUILD_BUG_ON(HOST_MEM_POLICY_BIND != MPOL_BIND);
QEMU_BUILD_BUG_ON(HOST_MEM_POLICY_INTERLEAVE != MPOL_INTERLEAVE);
#endif
static void
host_memory_backend_get_size(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
uint64_t value = backend->size;
visit_type_size(v, name, &value, errp);
}
static void
host_memory_backend_set_size(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
Error *local_err = NULL;
uint64_t value;
if (memory_region_size(&backend->mr)) {
error_setg(&local_err, "cannot change property value");
goto out;
}
visit_type_size(v, name, &value, &local_err);
if (local_err) {
goto out;
}
if (!value) {
error_setg(&local_err, "Property '%s.%s' doesn't take value '%"
PRIu64 "'", object_get_typename(obj), name, value);
goto out;
}
backend->size = value;
out:
error_propagate(errp, local_err);
}
static uint16List **host_memory_append_node(uint16List **node,
unsigned long value)
{
*node = g_malloc0(sizeof(**node));
(*node)->value = value;
return &(*node)->next;
}
static void
host_memory_backend_get_host_nodes(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
uint16List *host_nodes = NULL;
uint16List **node = &host_nodes;
unsigned long value;
value = find_first_bit(backend->host_nodes, MAX_NODES);
node = host_memory_append_node(node, value);
if (value == MAX_NODES) {
goto out;
}
do {
value = find_next_bit(backend->host_nodes, MAX_NODES, value + 1);
if (value == MAX_NODES) {
break;
}
node = host_memory_append_node(node, value);
} while (true);
out:
visit_type_uint16List(v, name, &host_nodes, errp);
}
static void
host_memory_backend_set_host_nodes(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
#ifdef CONFIG_NUMA
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
uint16List *l = NULL;
visit_type_uint16List(v, name, &l, errp);
while (l) {
bitmap_set(backend->host_nodes, l->value, 1);
l = l->next;
}
#else
error_setg(errp, "NUMA node binding are not supported by this QEMU");
#endif
}
static int
host_memory_backend_get_policy(Object *obj, Error **errp G_GNUC_UNUSED)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
return backend->policy;
}
static void
host_memory_backend_set_policy(Object *obj, int policy, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
backend->policy = policy;
#ifndef CONFIG_NUMA
if (policy != HOST_MEM_POLICY_DEFAULT) {
error_setg(errp, "NUMA policies are not supported by this QEMU");
}
#endif
}
static bool host_memory_backend_get_merge(Object *obj, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
return backend->merge;
}
static void host_memory_backend_set_merge(Object *obj, bool value, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
if (!memory_region_size(&backend->mr)) {
backend->merge = value;
return;
}
if (value != backend->merge) {
void *ptr = memory_region_get_ram_ptr(&backend->mr);
uint64_t sz = memory_region_size(&backend->mr);
qemu_madvise(ptr, sz,
value ? QEMU_MADV_MERGEABLE : QEMU_MADV_UNMERGEABLE);
backend->merge = value;
}
}
static bool host_memory_backend_get_dump(Object *obj, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
return backend->dump;
}
static void host_memory_backend_set_dump(Object *obj, bool value, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
if (!memory_region_size(&backend->mr)) {
backend->dump = value;
return;
}
if (value != backend->dump) {
void *ptr = memory_region_get_ram_ptr(&backend->mr);
uint64_t sz = memory_region_size(&backend->mr);
qemu_madvise(ptr, sz,
value ? QEMU_MADV_DODUMP : QEMU_MADV_DONTDUMP);
backend->dump = value;
}
}
static bool host_memory_backend_get_prealloc(Object *obj, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
return backend->prealloc || backend->force_prealloc;
}
static void host_memory_backend_set_prealloc(Object *obj, bool value,
Error **errp)
{
Error *local_err = NULL;
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
if (backend->force_prealloc) {
if (value) {
error_setg(errp,
"remove -mem-prealloc to use the prealloc property");
return;
}
}
if (!memory_region_size(&backend->mr)) {
backend->prealloc = value;
return;
}
if (value && !backend->prealloc) {
int fd = memory_region_get_fd(&backend->mr);
void *ptr = memory_region_get_ram_ptr(&backend->mr);
uint64_t sz = memory_region_size(&backend->mr);
os_mem_prealloc(fd, ptr, sz, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
backend->prealloc = true;
}
}
static void host_memory_backend_init(Object *obj)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
MachineState *machine = MACHINE(qdev_get_machine());
backend->merge = machine_mem_merge(machine);
backend->dump = machine_dump_guest_core(machine);
backend->prealloc = mem_prealloc;
object_property_add_bool(obj, "merge",
host_memory_backend_get_merge,
host_memory_backend_set_merge, NULL);
object_property_add_bool(obj, "dump",
host_memory_backend_get_dump,
host_memory_backend_set_dump, NULL);
object_property_add_bool(obj, "prealloc",
host_memory_backend_get_prealloc,
host_memory_backend_set_prealloc, NULL);
object_property_add(obj, "size", "int",
host_memory_backend_get_size,
host_memory_backend_set_size, NULL, NULL, NULL);
object_property_add(obj, "host-nodes", "int",
host_memory_backend_get_host_nodes,
host_memory_backend_set_host_nodes, NULL, NULL, NULL);
object_property_add_enum(obj, "policy", "HostMemPolicy",
HostMemPolicy_lookup,
host_memory_backend_get_policy,
host_memory_backend_set_policy, NULL);
}
MemoryRegion *
host_memory_backend_get_memory(HostMemoryBackend *backend, Error **errp)
{
return memory_region_size(&backend->mr) ? &backend->mr : NULL;
}
void host_memory_backend_set_mapped(HostMemoryBackend *backend, bool mapped)
{
backend->is_mapped = mapped;
}
bool host_memory_backend_is_mapped(HostMemoryBackend *backend)
{
return backend->is_mapped;
}
static void
host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(uc);
HostMemoryBackendClass *bc = MEMORY_BACKEND_GET_CLASS(uc);
Error *local_err = NULL;
void *ptr;
uint64_t sz;
if (bc->alloc) {
bc->alloc(backend, &local_err);
if (local_err) {
goto out;
}
ptr = memory_region_get_ram_ptr(&backend->mr);
sz = memory_region_size(&backend->mr);
if (backend->merge) {
qemu_madvise(ptr, sz, QEMU_MADV_MERGEABLE);
}
if (!backend->dump) {
qemu_madvise(ptr, sz, QEMU_MADV_DONTDUMP);
}
#ifdef CONFIG_NUMA
unsigned long lastbit = find_last_bit(backend->host_nodes, MAX_NODES);
/* lastbit == MAX_NODES means maxnode = 0 */
unsigned long maxnode = (lastbit + 1) % (MAX_NODES + 1);
/* ensure policy won't be ignored in case memory is preallocated
* before mbind(). note: MPOL_MF_STRICT is ignored on hugepages so
* this doesn't catch hugepage case. */
unsigned flags = MPOL_MF_STRICT | MPOL_MF_MOVE;
/* check for invalid host-nodes and policies and give more verbose
* error messages than mbind(). */
if (maxnode && backend->policy == MPOL_DEFAULT) {
error_setg(errp, "host-nodes must be empty for policy default,"
" or you should explicitly specify a policy other"
" than default");
return;
} else if (maxnode == 0 && backend->policy != MPOL_DEFAULT) {
error_setg(errp, "host-nodes must be set for policy %s",
HostMemPolicy_lookup[backend->policy]);
return;
}
/* We can have up to MAX_NODES nodes, but we need to pass maxnode+1
* as argument to mbind() due to an old Linux bug (feature?) which
* cuts off the last specified node. This means backend->host_nodes
* must have MAX_NODES+1 bits available.
*/
assert(sizeof(backend->host_nodes) >=
BITS_TO_LONGS(MAX_NODES + 1) * sizeof(unsigned long));
assert(maxnode <= MAX_NODES);
if (mbind(ptr, sz, backend->policy,
maxnode ? backend->host_nodes : NULL, maxnode + 1, flags)) {
if (backend->policy != MPOL_DEFAULT || errno != ENOSYS) {
error_setg_errno(errp, errno,
"cannot bind memory to host NUMA nodes");
return;
}
}
#endif
/* Preallocate memory after the NUMA policy has been instantiated.
* This is necessary to guarantee memory is allocated with
* specified NUMA policy in place.
*/
if (backend->prealloc) {
os_mem_prealloc(memory_region_get_fd(&backend->mr), ptr, sz,
&local_err);
if (local_err) {
goto out;
}
}
}
out:
error_propagate(errp, local_err);
}
static bool
host_memory_backend_can_be_deleted(UserCreatable *uc, Error **errp)
{
if (host_memory_backend_is_mapped(MEMORY_BACKEND(uc))) {
return false;
} else {
return true;
}
}
static void
host_memory_backend_class_init(ObjectClass *oc, void *data)
{
UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
ucc->complete = host_memory_backend_memory_complete;
ucc->can_be_deleted = host_memory_backend_can_be_deleted;
}
static const TypeInfo host_memory_backend_info = {
.name = TYPE_MEMORY_BACKEND,
.parent = TYPE_OBJECT,
.abstract = true,
.class_size = sizeof(HostMemoryBackendClass),
.class_init = host_memory_backend_class_init,
.instance_size = sizeof(HostMemoryBackend),
.instance_init = host_memory_backend_init,
.interfaces = (InterfaceInfo[]) {
{ TYPE_USER_CREATABLE },
{ }
}
};
static void register_types(void)
{
type_register_static(&host_memory_backend_info);
}
type_init(register_types);
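The comment in host_memory_backend_memory_complete above points out that mbind() has to be passed maxnode+1, i.e. one more than the number of the highest node in use, because old kernels cut off the last specified node. Here is a minimal standalone sketch of that call, assuming a Linux host with libnuma's <numaif.h> installed (link with -lnuma); the 2 MiB size and the choice of node 0 are arbitrary examples.

#include <numaif.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t sz = 2 * 1024 * 1024;
    unsigned long nodemask = 1UL << 0;       /* bind to host node 0 */
    void *ptr = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (ptr == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* Highest node id is 0, so "maxnode" is 1; pass 2 because of the
     * cut-off-the-last-node quirk described above. */
    if (mbind(ptr, sz, MPOL_BIND, &nodemask, 2,
              MPOL_MF_STRICT | MPOL_MF_MOVE)) {
        perror("mbind");
        return 1;
    }
    memset(ptr, 0, sz);                      /* touch so pages get placed */
    return 0;
}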


@@ -21,55 +21,20 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include <stdlib.h>
#include "qemu-common.h"
#include "sysemu/char.h"
#include "ui/console.h"
#include "ui/input.h"
#define MSMOUSE_LO6(n) ((n) & 0x3f)
#define MSMOUSE_HI2(n) (((n) & 0xc0) >> 6)
typedef struct {
CharDriverState *chr;
QemuInputHandlerState *hs;
int axis[INPUT_AXIS__MAX];
bool btns[INPUT_BUTTON__MAX];
bool btnc[INPUT_BUTTON__MAX];
uint8_t outbuf[32];
int outlen;
} MouseState;
static void msmouse_chr_accept_input(CharDriverState *chr)
static void msmouse_event(void *opaque,
int dx, int dy, int dz, int buttons_state)
{
MouseState *mouse = chr->opaque;
int len;
CharDriverState *chr = (CharDriverState *)opaque;
len = qemu_chr_be_can_write(chr);
if (len > mouse->outlen) {
len = mouse->outlen;
}
if (!len) {
return;
}
qemu_chr_be_write(chr, mouse->outbuf, len);
mouse->outlen -= len;
if (mouse->outlen) {
memmove(mouse->outbuf, mouse->outbuf + len, mouse->outlen);
}
}
static void msmouse_queue_event(MouseState *mouse)
{
unsigned char bytes[4] = { 0x40, 0x00, 0x00, 0x00 };
int dx, dy, count = 3;
dx = mouse->axis[INPUT_AXIS_X];
mouse->axis[INPUT_AXIS_X] = 0;
dy = mouse->axis[INPUT_AXIS_Y];
mouse->axis[INPUT_AXIS_Y] = 0;
/* Movement deltas */
bytes[0] |= (MSMOUSE_HI2(dy) << 2) | MSMOUSE_HI2(dx);
@@ -77,54 +42,14 @@ static void msmouse_queue_event(MouseState *mouse)
bytes[2] |= MSMOUSE_LO6(dy);
/* Buttons */
bytes[0] |= (mouse->btns[INPUT_BUTTON_LEFT] ? 0x20 : 0x00);
bytes[0] |= (mouse->btns[INPUT_BUTTON_RIGHT] ? 0x10 : 0x00);
if (mouse->btns[INPUT_BUTTON_MIDDLE] ||
mouse->btnc[INPUT_BUTTON_MIDDLE]) {
bytes[3] |= (mouse->btns[INPUT_BUTTON_MIDDLE] ? 0x20 : 0x00);
mouse->btnc[INPUT_BUTTON_MIDDLE] = false;
count = 4;
}
bytes[0] |= (buttons_state & 0x01 ? 0x20 : 0x00);
bytes[0] |= (buttons_state & 0x02 ? 0x10 : 0x00);
bytes[3] |= (buttons_state & 0x04 ? 0x20 : 0x00);
if (mouse->outlen <= sizeof(mouse->outbuf) - count) {
memcpy(mouse->outbuf + mouse->outlen, bytes, count);
mouse->outlen += count;
} else {
/* queue full -> drop event */
}
}
static void msmouse_input_event(DeviceState *dev, QemuConsole *src,
InputEvent *evt)
{
MouseState *mouse = (MouseState *)dev;
InputMoveEvent *move;
InputBtnEvent *btn;
switch (evt->type) {
case INPUT_EVENT_KIND_REL:
move = evt->u.rel.data;
mouse->axis[move->axis] += move->value;
break;
case INPUT_EVENT_KIND_BTN:
btn = evt->u.btn.data;
mouse->btns[btn->button] = btn->down;
mouse->btnc[btn->button] = true;
break;
default:
/* keep gcc happy */
break;
}
}
static void msmouse_input_sync(DeviceState *dev)
{
MouseState *mouse = (MouseState *)dev;
msmouse_queue_event(mouse);
msmouse_chr_accept_input(mouse->chr);
/* We always send the full packet, so that we do not have to keep track
of previous state of the middle button. This can potentially confuse
some very old drivers for two button mice though. */
qemu_chr_be_write(chr, bytes, 4);
}
static int msmouse_chr_write (struct CharDriverState *s, const uint8_t *buf, int len)
@@ -135,51 +60,26 @@ static int msmouse_chr_write (struct CharDriverState *s, const uint8_t *buf, int
static void msmouse_chr_close (struct CharDriverState *chr)
{
MouseState *mouse = chr->opaque;
qemu_input_handler_unregister(mouse->hs);
g_free(mouse);
g_free (chr);
}
static QemuInputHandler msmouse_handler = {
.name = "QEMU Microsoft Mouse",
.mask = INPUT_EVENT_MASK_BTN | INPUT_EVENT_MASK_REL,
.event = msmouse_input_event,
.sync = msmouse_input_sync,
};
static CharDriverState *qemu_chr_open_msmouse(const char *id,
ChardevBackend *backend,
ChardevReturn *ret,
Error **errp)
CharDriverState *qemu_chr_open_msmouse(void)
{
ChardevCommon *common = backend->u.msmouse.data;
MouseState *mouse;
CharDriverState *chr;
chr = qemu_chr_alloc(common, errp);
if (!chr) {
return NULL;
}
chr = g_malloc0(sizeof(CharDriverState));
chr->chr_write = msmouse_chr_write;
chr->chr_close = msmouse_chr_close;
chr->chr_accept_input = msmouse_chr_accept_input;
chr->explicit_be_open = true;
mouse = g_new0(MouseState, 1);
mouse->hs = qemu_input_handler_register((DeviceState *)mouse,
&msmouse_handler);
mouse->chr = chr;
chr->opaque = mouse;
qemu_add_mouse_event_handler(msmouse_event, chr, 0, "QEMU Microsoft Mouse");
return chr;
}
static void register_types(void)
{
register_char_driver("msmouse", CHARDEV_BACKEND_KIND_MSMOUSE, NULL,
qemu_chr_open_msmouse);
register_char_driver_qapi("msmouse", CHARDEV_BACKEND_KIND_MSMOUSE, NULL);
}
type_init(register_types);
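For reference, what the code above assembles is the classic three-byte Microsoft serial mouse report, with an optional fourth byte for the middle button: byte 0 carries the sync bit 0x40, the left and right buttons, and the top two bits of each delta; bytes 1 and 2 carry the low six bits of dx and dy. The sketch below encodes one report with the same bit layout as the MSMOUSE_LO6/MSMOUSE_HI2 macros; it mirrors the code above rather than an independently verified spec.

#include <stdint.h>
#include <stdio.h>

#define LO6(n) ((n) & 0x3f)
#define HI2(n) (((n) & 0xc0) >> 6)

/* Build one mouse report; returns 3, or 4 when the middle button is set. */
static int encode(uint8_t out[4], int dx, int dy, int left, int right, int mid)
{
    out[0] = 0x40 | (left ? 0x20 : 0) | (right ? 0x10 : 0)
                  | (HI2(dy) << 2) | HI2(dx);
    out[1] = LO6(dx);
    out[2] = LO6(dy);
    out[3] = mid ? 0x20 : 0;
    return mid ? 4 : 3;
}

int main(void)
{
    uint8_t pkt[4];
    int n = encode(pkt, 5, -3, 1, 0, 0);     /* dx=5, dy=-3, left pressed */

    for (int i = 0; i < n; i++) {
        printf("%02x ", pkt[i]);
    }
    printf("\n");
    return 0;
}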


@@ -10,10 +10,8 @@
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/rng.h"
#include "sysemu/char.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "hw/qdev.h" /* just for DEFINE_PROP_CHR */
@@ -26,12 +24,33 @@ typedef struct RngEgd
CharDriverState *chr;
char *chr_name;
GSList *requests;
} RngEgd;
static void rng_egd_request_entropy(RngBackend *b, RngRequest *req)
typedef struct RngRequest
{
EntropyReceiveFunc *receive_entropy;
uint8_t *data;
void *opaque;
size_t offset;
size_t size;
} RngRequest;
static void rng_egd_request_entropy(RngBackend *b, size_t size,
EntropyReceiveFunc *receive_entropy,
void *opaque)
{
RngEgd *s = RNG_EGD(b);
size_t size = req->size;
RngRequest *req;
req = g_malloc(sizeof(*req));
req->offset = 0;
req->size = size;
req->receive_entropy = receive_entropy;
req->opaque = opaque;
req->data = g_malloc(req->size);
while (size > 0) {
uint8_t header[2];
@@ -41,21 +60,28 @@ static void rng_egd_request_entropy(RngBackend *b, RngRequest *req)
header[0] = 0x02;
header[1] = len;
/* XXX this blocks the entire thread. Rewrite to use
* qemu_chr_fe_write and background I/O callbacks */
qemu_chr_fe_write_all(s->chr, header, sizeof(header));
qemu_chr_fe_write(s->chr, header, sizeof(header));
size -= len;
}
s->requests = g_slist_append(s->requests, req);
}
static void rng_egd_free_request(RngRequest *req)
{
g_free(req->data);
g_free(req);
}
static int rng_egd_chr_can_read(void *opaque)
{
RngEgd *s = RNG_EGD(opaque);
RngRequest *req;
GSList *i;
int size = 0;
QSIMPLEQ_FOREACH(req, &s->parent.requests, next) {
for (i = s->requests; i; i = i->next) {
RngRequest *req = i->data;
size += req->size - req->offset;
}
@@ -67,8 +93,8 @@ static void rng_egd_chr_read(void *opaque, const uint8_t *buf, int size)
RngEgd *s = RNG_EGD(opaque);
size_t buf_offset = 0;
while (size > 0 && !QSIMPLEQ_EMPTY(&s->parent.requests)) {
RngRequest *req = QSIMPLEQ_FIRST(&s->parent.requests);
while (size > 0 && s->requests) {
RngRequest *req = s->requests->data;
int len = MIN(size, req->size - req->offset);
memcpy(req->data + req->offset, buf + buf_offset, len);
@@ -77,32 +103,56 @@ static void rng_egd_chr_read(void *opaque, const uint8_t *buf, int size)
size -= len;
if (req->offset == req->size) {
s->requests = g_slist_remove_link(s->requests, s->requests);
req->receive_entropy(req->opaque, req->data, req->size);
rng_backend_finalize_request(&s->parent, req);
rng_egd_free_request(req);
}
}
}
static void rng_egd_free_requests(RngEgd *s)
{
GSList *i;
for (i = s->requests; i; i = i->next) {
rng_egd_free_request(i->data);
}
g_slist_free(s->requests);
s->requests = NULL;
}
static void rng_egd_cancel_requests(RngBackend *b)
{
RngEgd *s = RNG_EGD(b);
/* We simply delete the list of pending requests. If there is data in the
* queue waiting to be read, this is okay, because there will always be
* more data than we requested originally
*/
rng_egd_free_requests(s);
}
static void rng_egd_opened(RngBackend *b, Error **errp)
{
RngEgd *s = RNG_EGD(b);
if (s->chr_name == NULL) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
"chardev", "a valid character device");
error_set(errp, QERR_INVALID_PARAMETER_VALUE,
"chardev", "a valid character device");
return;
}
s->chr = qemu_chr_find(s->chr_name);
if (s->chr == NULL) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", s->chr_name);
error_set(errp, QERR_DEVICE_NOT_FOUND, s->chr_name);
return;
}
if (qemu_chr_fe_claim(s->chr) != 0) {
error_setg(errp, QERR_DEVICE_IN_USE, s->chr_name);
error_set(errp, QERR_DEVICE_IN_USE, s->chr_name);
return;
}
@@ -117,9 +167,8 @@ static void rng_egd_set_chardev(Object *obj, const char *value, Error **errp)
RngEgd *s = RNG_EGD(b);
if (b->opened) {
error_setg(errp, QERR_PERMISSION_DENIED);
error_set(errp, QERR_PERMISSION_DENIED);
} else {
g_free(s->chr_name);
s->chr_name = g_strdup(value);
}
}
@@ -152,6 +201,8 @@ static void rng_egd_finalize(Object *obj)
}
g_free(s->chr_name);
rng_egd_free_requests(s);
}
static void rng_egd_class_init(ObjectClass *klass, void *data)
@@ -159,6 +210,7 @@ static void rng_egd_class_init(ObjectClass *klass, void *data)
RngBackendClass *rbc = RNG_BACKEND_CLASS(klass);
rbc->request_entropy = rng_egd_request_entropy;
rbc->cancel_requests = rng_egd_cancel_requests;
rbc->opened = rng_egd_opened;
}
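The rng-egd backend above speaks the EGD protocol over a character device: it writes a 0x02 command byte followed by a one-byte length, then reads that many bytes of entropy back as they arrive. Below is a hypothetical standalone client doing the same exchange over a UNIX socket; the /tmp/egd.sock path is an assumption, and short reads are not retried to keep the sketch brief.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    uint8_t req[2] = { 0x02, 16 };           /* blocking read of 16 bytes */
    uint8_t buf[16];
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    strncpy(addr.sun_path, "/tmp/egd.sock", sizeof(addr.sun_path) - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }
    if (write(fd, req, sizeof(req)) != sizeof(req) ||
        read(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
        perror("egd i/o");
        return 1;
    }
    close(fd);
    return 0;
}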


@@ -10,19 +10,21 @@
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/rng-random.h"
#include "sysemu/rng.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "qemu/main-loop.h"
struct RngRandom
struct RndRandom
{
RngBackend parent;
int fd;
char *filename;
EntropyReceiveFunc *receive_func;
void *opaque;
size_t size;
};
/**
@@ -34,45 +36,46 @@ struct RngRandom
static void entropy_available(void *opaque)
{
RngRandom *s = RNG_RANDOM(opaque);
RndRandom *s = RNG_RANDOM(opaque);
uint8_t buffer[s->size];
ssize_t len;
while (!QSIMPLEQ_EMPTY(&s->parent.requests)) {
RngRequest *req = QSIMPLEQ_FIRST(&s->parent.requests);
ssize_t len;
len = read(s->fd, req->data, req->size);
if (len < 0 && errno == EAGAIN) {
return;
}
g_assert(len != -1);
req->receive_entropy(req->opaque, req->data, len);
rng_backend_finalize_request(&s->parent, req);
len = read(s->fd, buffer, s->size);
if (len < 0 && errno == EAGAIN) {
return;
}
g_assert(len != -1);
s->receive_func(s->opaque, buffer, len);
s->receive_func = NULL;
/* We've drained all requests, the fd handler can be reset. */
qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
}
static void rng_random_request_entropy(RngBackend *b, RngRequest *req)
static void rng_random_request_entropy(RngBackend *b, size_t size,
EntropyReceiveFunc *receive_entropy,
void *opaque)
{
RngRandom *s = RNG_RANDOM(b);
RndRandom *s = RNG_RANDOM(b);
if (QSIMPLEQ_EMPTY(&s->parent.requests)) {
/* If there are no pending requests yet, we need to
* install our fd handler. */
qemu_set_fd_handler(s->fd, entropy_available, NULL, s);
if (s->receive_func) {
s->receive_func(s->opaque, NULL, 0);
}
s->receive_func = receive_entropy;
s->opaque = opaque;
s->size = size;
qemu_set_fd_handler(s->fd, entropy_available, NULL, s);
}
static void rng_random_opened(RngBackend *b, Error **errp)
{
RngRandom *s = RNG_RANDOM(b);
RndRandom *s = RNG_RANDOM(b);
if (s->filename == NULL) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
"filename", "a valid filename");
error_set(errp, QERR_INVALID_PARAMETER_VALUE,
"filename", "a valid filename");
} else {
s->fd = qemu_open(s->filename, O_RDONLY | O_NONBLOCK);
if (s->fd == -1) {
@@ -83,29 +86,36 @@ static void rng_random_opened(RngBackend *b, Error **errp)
static char *rng_random_get_filename(Object *obj, Error **errp)
{
RngRandom *s = RNG_RANDOM(obj);
RndRandom *s = RNG_RANDOM(obj);
return g_strdup(s->filename);
if (s->filename) {
return g_strdup(s->filename);
}
return NULL;
}
static void rng_random_set_filename(Object *obj, const char *filename,
Error **errp)
{
RngBackend *b = RNG_BACKEND(obj);
RngRandom *s = RNG_RANDOM(obj);
RndRandom *s = RNG_RANDOM(obj);
if (b->opened) {
error_setg(errp, QERR_PERMISSION_DENIED);
error_set(errp, QERR_PERMISSION_DENIED);
return;
}
g_free(s->filename);
if (s->filename) {
g_free(s->filename);
}
s->filename = g_strdup(filename);
}
static void rng_random_init(Object *obj)
{
RngRandom *s = RNG_RANDOM(obj);
RndRandom *s = RNG_RANDOM(obj);
object_property_add_str(obj, "filename",
rng_random_get_filename,
@@ -118,7 +128,7 @@ static void rng_random_init(Object *obj)
static void rng_random_finalize(Object *obj)
{
RngRandom *s = RNG_RANDOM(obj);
RndRandom *s = RNG_RANDOM(obj);
if (s->fd != -1) {
qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
@@ -139,7 +149,7 @@ static void rng_random_class_init(ObjectClass *klass, void *data)
static const TypeInfo rng_random_info = {
.name = TYPE_RNG_RANDOM,
.parent = TYPE_RNG_BACKEND,
.instance_size = sizeof(RngRandom),
.instance_size = sizeof(RndRandom),
.class_init = rng_random_class_init,
.instance_init = rng_random_init,
.instance_finalize = rng_random_finalize,


@@ -10,9 +10,7 @@
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/rng.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "qom/object_interfaces.h"
@@ -21,20 +19,18 @@ void rng_backend_request_entropy(RngBackend *s, size_t size,
void *opaque)
{
RngBackendClass *k = RNG_BACKEND_GET_CLASS(s);
RngRequest *req;
if (k->request_entropy) {
req = g_malloc(sizeof(*req));
k->request_entropy(s, size, receive_entropy, opaque);
}
}
req->offset = 0;
req->size = size;
req->receive_entropy = receive_entropy;
req->opaque = opaque;
req->data = g_malloc(req->size);
void rng_backend_cancel_requests(RngBackend *s)
{
RngBackendClass *k = RNG_BACKEND_GET_CLASS(s);
k->request_entropy(s, req);
QSIMPLEQ_INSERT_TAIL(&s->requests, req, next);
if (k->cancel_requests) {
k->cancel_requests(s);
}
}
@@ -54,70 +50,33 @@ static void rng_backend_prop_set_opened(Object *obj, bool value, Error **errp)
{
RngBackend *s = RNG_BACKEND(obj);
RngBackendClass *k = RNG_BACKEND_GET_CLASS(s);
Error *local_err = NULL;
if (value == s->opened) {
return;
}
if (!value && s->opened) {
error_setg(errp, QERR_PERMISSION_DENIED);
error_set(errp, QERR_PERMISSION_DENIED);
return;
}
if (k->opened) {
k->opened(s, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
k->opened(s, errp);
}
s->opened = true;
}
static void rng_backend_free_request(RngRequest *req)
{
g_free(req->data);
g_free(req);
}
static void rng_backend_free_requests(RngBackend *s)
{
RngRequest *req, *next;
QSIMPLEQ_FOREACH_SAFE(req, &s->requests, next, next) {
rng_backend_free_request(req);
if (!error_is_set(errp)) {
s->opened = value;
}
QSIMPLEQ_INIT(&s->requests);
}
void rng_backend_finalize_request(RngBackend *s, RngRequest *req)
{
QSIMPLEQ_REMOVE(&s->requests, req, RngRequest, next);
rng_backend_free_request(req);
}
static void rng_backend_init(Object *obj)
{
RngBackend *s = RNG_BACKEND(obj);
QSIMPLEQ_INIT(&s->requests);
object_property_add_bool(obj, "opened",
rng_backend_prop_get_opened,
rng_backend_prop_set_opened,
NULL);
}
static void rng_backend_finalize(Object *obj)
{
RngBackend *s = RNG_BACKEND(obj);
rng_backend_free_requests(s);
}
static void rng_backend_class_init(ObjectClass *oc, void *data)
{
UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
@@ -130,7 +89,6 @@ static const TypeInfo rng_backend_info = {
.parent = TYPE_OBJECT,
.instance_size = sizeof(RngBackend),
.instance_init = rng_backend_init,
.instance_finalize = rng_backend_finalize,
.class_size = sizeof(RngBackendClass),
.class_init = rng_backend_class_init,
.abstract = true,


@@ -1,136 +0,0 @@
/*
* QEMU Char Device for testsuite control
*
* Copyright (c) 2014 Red Hat, Inc.
*
* Author: Paolo Bonzini <pbonzini@redhat.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "sysemu/char.h"
#define BUF_SIZE 32
typedef struct {
CharDriverState *chr;
uint8_t in_buf[32];
int in_buf_used;
} TestdevCharState;
/* Try to interpret a whole incoming packet */
static int testdev_eat_packet(TestdevCharState *testdev)
{
const uint8_t *cur = testdev->in_buf;
int len = testdev->in_buf_used;
uint8_t c;
int arg;
#define EAT(c) do { \
if (!len--) { \
return 0; \
} \
c = *cur++; \
} while (0)
EAT(c);
while (isspace(c)) {
EAT(c);
}
arg = 0;
while (isdigit(c)) {
arg = arg * 10 + c - '0';
EAT(c);
}
while (isspace(c)) {
EAT(c);
}
switch (c) {
case 'q':
exit((arg << 1) | 1);
break;
default:
break;
}
return cur - testdev->in_buf;
}
/* The other end is writing some data. Store it and try to interpret */
static int testdev_write(CharDriverState *chr, const uint8_t *buf, int len)
{
TestdevCharState *testdev = chr->opaque;
int tocopy, eaten, orig_len = len;
while (len) {
/* Complete our buffer as much as possible */
tocopy = MIN(len, BUF_SIZE - testdev->in_buf_used);
memcpy(testdev->in_buf + testdev->in_buf_used, buf, tocopy);
testdev->in_buf_used += tocopy;
buf += tocopy;
len -= tocopy;
/* Interpret it as much as possible */
while (testdev->in_buf_used > 0 &&
(eaten = testdev_eat_packet(testdev)) > 0) {
memmove(testdev->in_buf, testdev->in_buf + eaten,
testdev->in_buf_used - eaten);
testdev->in_buf_used -= eaten;
}
}
return orig_len;
}
static void testdev_close(struct CharDriverState *chr)
{
TestdevCharState *testdev = chr->opaque;
g_free(testdev);
}
static CharDriverState *chr_testdev_init(const char *id,
ChardevBackend *backend,
ChardevReturn *ret,
Error **errp)
{
TestdevCharState *testdev;
CharDriverState *chr;
testdev = g_new0(TestdevCharState, 1);
testdev->chr = chr = g_new0(CharDriverState, 1);
chr->opaque = testdev;
chr->chr_write = testdev_write;
chr->chr_close = testdev_close;
return chr;
}
static void register_types(void)
{
register_char_driver("testdev", CHARDEV_BACKEND_KIND_TESTDEV, NULL,
chr_testdev_init);
}
type_init(register_types);


@@ -12,9 +12,7 @@
* Based on backends/rng.c by Anthony Liguori
*/
#include "qemu/osdep.h"
#include "sysemu/tpm_backend.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "sysemu/tpm.h"
#include "qemu/thread.h"
@@ -38,7 +36,7 @@ void tpm_backend_destroy(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
k->ops->destroy(s);
return k->ops->destroy(s);
}
int tpm_backend_init(TPMBackend *s, TPMState *state,
@@ -98,20 +96,6 @@ bool tpm_backend_get_tpm_established_flag(TPMBackend *s)
return k->ops->get_tpm_established_flag(s);
}
int tpm_backend_reset_tpm_established_flag(TPMBackend *s, uint8_t locty)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->reset_tpm_established_flag(s, locty);
}
TPMVersion tpm_backend_get_tpm_version(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->get_tpm_version(s);
}
static bool tpm_backend_prop_get_opened(Object *obj, Error **errp)
{
TPMBackend *s = TPM_BACKEND(obj);
@@ -128,26 +112,23 @@ static void tpm_backend_prop_set_opened(Object *obj, bool value, Error **errp)
{
TPMBackend *s = TPM_BACKEND(obj);
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
Error *local_err = NULL;
if (value == s->opened) {
return;
}
if (!value && s->opened) {
error_setg(errp, QERR_PERMISSION_DENIED);
error_set(errp, QERR_PERMISSION_DENIED);
return;
}
if (k->opened) {
k->opened(s, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
k->opened(s, errp);
}
s->opened = true;
if (!error_is_set(errp)) {
s->opened = value;
}
}
static void tpm_backend_instance_init(Object *obj)
@@ -181,6 +162,17 @@ void tpm_backend_thread_end(TPMBackendThread *tbt)
}
}
void tpm_backend_thread_tpm_reset(TPMBackendThread *tbt,
GFunc func, gpointer user_data)
{
if (!tbt->pool) {
tpm_backend_thread_create(tbt, func, user_data);
} else {
g_thread_pool_push(tbt->pool, (gpointer)TPM_BACKEND_CMD_TPM_RESET,
NULL);
}
}
static const TypeInfo tpm_backend_info = {
.name = TYPE_TPM_BACKEND,
.parent = TYPE_OBJECT,


@@ -24,45 +24,17 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "monitor/monitor.h"
#include "exec/cpu-common.h"
#include "sysemu/kvm.h"
#include "sysemu/balloon.h"
#include "trace.h"
#include "qmp-commands.h"
#include "qapi/qmp/qerror.h"
#include "qapi/qmp/qjson.h"
static QEMUBalloonEvent *balloon_event_fn;
static QEMUBalloonStatus *balloon_stat_fn;
static void *balloon_opaque;
static bool balloon_inhibited;
bool qemu_balloon_is_inhibited(void)
{
return balloon_inhibited;
}
void qemu_balloon_inhibit(bool state)
{
balloon_inhibited = state;
}
static bool have_balloon(Error **errp)
{
if (kvm_enabled() && !kvm_has_sync_mmu()) {
error_set(errp, ERROR_CLASS_KVM_MISSING_CAP,
"Using KVM without synchronous MMU, balloon unavailable");
return false;
}
if (!balloon_event_fn) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_ACTIVE,
"No balloon device has been activated");
return false;
}
return true;
}
int qemu_add_balloon_handler(QEMUBalloonEvent *event_func,
QEMUBalloonStatus *stat_func, void *opaque)
@@ -71,6 +43,7 @@ int qemu_add_balloon_handler(QEMUBalloonEvent *event_func,
/* We've already registered one balloon handler. How many can
* a guest really have?
*/
error_report("Another balloon device already registered");
return -1;
}
balloon_event_fn = event_func;
@@ -89,30 +62,71 @@ void qemu_remove_balloon_handler(void *opaque)
balloon_opaque = NULL;
}
static int qemu_balloon(ram_addr_t target)
{
if (!balloon_event_fn) {
return 0;
}
trace_balloon_event(balloon_opaque, target);
balloon_event_fn(balloon_opaque, target);
return 1;
}
static int qemu_balloon_status(BalloonInfo *info)
{
if (!balloon_stat_fn) {
return 0;
}
balloon_stat_fn(balloon_opaque, info);
return 1;
}
void qemu_balloon_changed(int64_t actual)
{
QObject *data;
data = qobject_from_jsonf("{ 'actual': %" PRId64 " }",
actual);
monitor_protocol_event(QEVENT_BALLOON_CHANGE, data);
qobject_decref(data);
}
BalloonInfo *qmp_query_balloon(Error **errp)
{
BalloonInfo *info;
if (!have_balloon(errp)) {
if (kvm_enabled() && !kvm_has_sync_mmu()) {
error_set(errp, QERR_KVM_MISSING_CAP, "synchronous MMU", "balloon");
return NULL;
}
info = g_malloc0(sizeof(*info));
balloon_stat_fn(balloon_opaque, info);
if (qemu_balloon_status(info) == 0) {
error_set(errp, QERR_DEVICE_NOT_ACTIVE, "balloon");
qapi_free_BalloonInfo(info);
return NULL;
}
return info;
}
void qmp_balloon(int64_t target, Error **errp)
void qmp_balloon(int64_t value, Error **errp)
{
if (!have_balloon(errp)) {
if (kvm_enabled() && !kvm_has_sync_mmu()) {
error_set(errp, QERR_KVM_MISSING_CAP, "synchronous MMU", "balloon");
return;
}
if (target <= 0) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "target", "a size");
if (value <= 0) {
error_set(errp, QERR_INVALID_PARAMETER_VALUE, "target", "a size");
return;
}
trace_balloon_event(balloon_opaque, target);
balloon_event_fn(balloon_opaque, target);
if (qemu_balloon(value) == 0) {
error_set(errp, QERR_DEVICE_NOT_ACTIVE, "balloon");
}
}


@@ -13,20 +13,15 @@
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "block/block.h"
#include "qemu/error-report.h"
#include "qemu/main-loop.h"
#include "block/block_int.h"
#include "hw/hw.h"
#include "qemu/cutils.h"
#include "qemu/queue.h"
#include "qemu/timer.h"
#include "migration/block.h"
#include "migration/migration.h"
#include "sysemu/blockdev.h"
#include "sysemu/block-backend.h"
#include <assert.h>
#define BLOCK_SIZE (1 << 20)
#define BDRV_SECTORS_PER_DIRTY_CHUNK (BLOCK_SIZE >> BDRV_SECTOR_BITS)
@@ -38,8 +33,6 @@
#define MAX_IS_ALLOCATED_SEARCH 65536
#define MAX_INFLIGHT_IO 512
//#define DEBUG_BLK_MIGRATION
#ifdef DEBUG_BLK_MIGRATION
@@ -52,29 +45,19 @@
typedef struct BlkMigDevState {
/* Written during setup phase. Can be read without a lock. */
BlockBackend *blk;
char *blk_name;
BlockDriverState *bs;
int shared_base;
int64_t total_sectors;
QSIMPLEQ_ENTRY(BlkMigDevState) entry;
Error *blocker;
/* Only used by migration thread. Does not need a lock. */
int bulk_completed;
int64_t cur_sector;
int64_t cur_dirty;
/* Data in the aio_bitmap is protected by block migration lock.
* Allocation and free happen during setup and cleanup respectively.
*/
unsigned long *aio_bitmap;
/* Protected by block migration lock. */
unsigned long *aio_bitmap;
int64_t completed_sectors;
/* During migration this is protected by iothread lock / AioContext.
* Allocation and free happen during setup and cleanup respectively.
*/
BdrvDirtyBitmap *dirty_bitmap;
} BlkMigDevState;
@@ -86,7 +69,7 @@ typedef struct BlkMigBlock {
int nr_sectors;
struct iovec iov;
QEMUIOVector qiov;
BlockAIOCB *aiocb;
BlockDriverAIOCB *aiocb;
/* Protected by block migration lock. */
int ret;
@@ -111,7 +94,7 @@ typedef struct BlkMigState {
int prev_progress;
int bulk_completed;
/* Lock must be taken _inside_ the iothread lock and any AioContexts. */
/* Lock must be taken _inside_ the iothread lock. */
QemuMutex lock;
} BlkMigState;
@@ -146,9 +129,9 @@ static void blk_send(QEMUFile *f, BlkMigBlock * blk)
| flags);
/* device name */
len = strlen(blk->bmds->blk_name);
len = strlen(blk->bmds->bs->device_name);
qemu_put_byte(f, len);
qemu_put_buffer(f, (uint8_t *) blk->bmds->blk_name, len);
qemu_put_buffer(f, (uint8_t *)blk->bmds->bs->device_name, len);
/* if a block is zero we need to flush here since the network
* bandwidth is now a lot higher than the storage device bandwidth.
@@ -202,7 +185,7 @@ static int bmds_aio_inflight(BlkMigDevState *bmds, int64_t sector)
{
int64_t chunk = sector / (int64_t)BDRV_SECTORS_PER_DIRTY_CHUNK;
if (sector < blk_nb_sectors(bmds->blk)) {
if ((sector << BDRV_SECTOR_BITS) < bdrv_getlength(bmds->bs)) {
return !!(bmds->aio_bitmap[chunk / (sizeof(unsigned long) * 8)] &
(1UL << (chunk % (sizeof(unsigned long) * 8))));
} else {
@@ -236,10 +219,11 @@ static void bmds_set_aio_inflight(BlkMigDevState *bmds, int64_t sector_num,
static void alloc_aio_bitmap(BlkMigDevState *bmds)
{
BlockBackend *bb = bmds->blk;
BlockDriverState *bs = bmds->bs;
int64_t bitmap_size;
bitmap_size = blk_nb_sectors(bb) + BDRV_SECTORS_PER_DIRTY_CHUNK * 8 - 1;
bitmap_size = (bdrv_getlength(bs) >> BDRV_SECTOR_BITS) +
BDRV_SECTORS_PER_DIRTY_CHUNK * 8 - 1;
bitmap_size /= BDRV_SECTORS_PER_DIRTY_CHUNK * 8;
bmds->aio_bitmap = g_malloc0(bitmap_size);
@@ -269,19 +253,17 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
{
int64_t total_sectors = bmds->total_sectors;
int64_t cur_sector = bmds->cur_sector;
BlockBackend *bb = bmds->blk;
BlockDriverState *bs = bmds->bs;
BlkMigBlock *blk;
int nr_sectors;
if (bmds->shared_base) {
qemu_mutex_lock_iothread();
aio_context_acquire(blk_get_aio_context(bb));
while (cur_sector < total_sectors &&
!bdrv_is_allocated(blk_bs(bb), cur_sector,
MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
!bdrv_is_allocated(bs, cur_sector, MAX_IS_ALLOCATED_SEARCH,
&nr_sectors)) {
cur_sector += nr_sectors;
}
aio_context_release(blk_get_aio_context(bb));
qemu_mutex_unlock_iothread();
}
@@ -301,7 +283,7 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
nr_sectors = total_sectors - cur_sector;
}
blk = g_new(BlkMigBlock, 1);
blk = g_malloc(sizeof(BlkMigBlock));
blk->buf = g_malloc(BLOCK_SIZE);
blk->bmds = bmds;
blk->sector = cur_sector;
@@ -315,21 +297,11 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
block_mig_state.submitted++;
blk_mig_unlock();
/* We do not know if bs is under the main thread (and thus does
* not acquire the AioContext when doing AIO) or rather under
* dataplane. Thus acquire both the iothread mutex and the
* AioContext.
*
* This is ugly and will disappear when we make bdrv_* thread-safe,
* without the need to acquire the AioContext.
*/
qemu_mutex_lock_iothread();
aio_context_acquire(blk_get_aio_context(bmds->blk));
blk->aiocb = blk_aio_preadv(bb, cur_sector * BDRV_SECTOR_SIZE, &blk->qiov,
0, blk_mig_read_cb, blk);
blk->aiocb = bdrv_aio_readv(bs, cur_sector, &blk->qiov,
nr_sectors, blk_mig_read_cb, blk);
bdrv_reset_dirty_bitmap(bmds->dirty_bitmap, cur_sector, nr_sectors);
aio_context_release(blk_get_aio_context(bmds->blk));
bdrv_reset_dirty(bs, cur_sector, nr_sectors);
qemu_mutex_unlock_iothread();
bmds->cur_sector = cur_sector + nr_sectors;
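The hunk above follows a strict lock order when submitting the read: the big QEMU (iothread) lock is taken first, the BlockBackend's AioContext second, and both are released in reverse order. A minimal sketch of that pattern in isolation; the do_locked_aio() name is invented for illustration, while the lock helpers are the real ones used above:

static void do_locked_aio(BlockBackend *blk)
{
    AioContext *ctx = blk_get_aio_context(blk);

    qemu_mutex_lock_iothread();    /* outer: big QEMU lock            */
    aio_context_acquire(ctx);      /* inner: dataplane AioContext     */

    /* ... submit blk_aio_preadv()/blk_aio_pwritev() and touch any
     *     state the completion callback may race with ... */

    aio_context_release(ctx);      /* release the inner lock first    */
    qemu_mutex_unlock_iothread();  /* then the outer one              */
}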
@@ -338,59 +310,60 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
/* Called with iothread lock taken. */
static int set_dirty_tracking(void)
static void set_dirty_tracking(void)
{
BlkMigDevState *bmds;
int ret;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
aio_context_acquire(blk_get_aio_context(bmds->blk));
bmds->dirty_bitmap = bdrv_create_dirty_bitmap(blk_bs(bmds->blk),
BLOCK_SIZE, NULL, NULL);
aio_context_release(blk_get_aio_context(bmds->blk));
if (!bmds->dirty_bitmap) {
ret = -errno;
goto fail;
}
bmds->dirty_bitmap = bdrv_create_dirty_bitmap(bmds->bs, BLOCK_SIZE);
}
return 0;
fail:
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
if (bmds->dirty_bitmap) {
aio_context_acquire(blk_get_aio_context(bmds->blk));
bdrv_release_dirty_bitmap(blk_bs(bmds->blk), bmds->dirty_bitmap);
aio_context_release(blk_get_aio_context(bmds->blk));
}
}
return ret;
}
/* Called with iothread lock taken. */
static void unset_dirty_tracking(void)
{
BlkMigDevState *bmds;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
aio_context_acquire(blk_get_aio_context(bmds->blk));
bdrv_release_dirty_bitmap(blk_bs(bmds->blk), bmds->dirty_bitmap);
aio_context_release(blk_get_aio_context(bmds->blk));
bdrv_release_dirty_bitmap(bmds->bs, bmds->dirty_bitmap);
}
}
static void init_blk_migration_it(void *opaque, BlockDriverState *bs)
{
BlkMigDevState *bmds;
int64_t sectors;
if (!bdrv_is_read_only(bs)) {
sectors = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
if (sectors <= 0) {
return;
}
bmds = g_malloc0(sizeof(BlkMigDevState));
bmds->bs = bs;
bmds->bulk_completed = 0;
bmds->total_sectors = sectors;
bmds->completed_sectors = 0;
bmds->shared_base = block_mig_state.shared_base;
alloc_aio_bitmap(bmds);
bdrv_set_in_use(bs, 1);
bdrv_ref(bs);
block_mig_state.total_sector_sum += sectors;
if (bmds->shared_base) {
DPRINTF("Start migration for %s with shared base image\n",
bs->device_name);
} else {
DPRINTF("Start full migration for %s\n", bs->device_name);
}
QSIMPLEQ_INSERT_TAIL(&block_mig_state.bmds_list, bmds, entry);
}
}
static void init_blk_migration(QEMUFile *f)
{
BlockDriverState *bs;
BlkMigDevState *bmds;
int64_t sectors;
BdrvNextIterator it;
int i, num_bs = 0;
struct {
BlkMigDevState *bmds;
BlockDriverState *bs;
} *bmds_bs;
block_mig_state.submitted = 0;
block_mig_state.read_done = 0;
block_mig_state.transferred = 0;
@@ -399,62 +372,7 @@ static void init_blk_migration(QEMUFile *f)
block_mig_state.bulk_completed = 0;
block_mig_state.zero_blocks = migrate_zero_blocks();
for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
num_bs++;
}
bmds_bs = g_malloc0(num_bs * sizeof(*bmds_bs));
for (i = 0, bs = bdrv_first(&it); bs; bs = bdrv_next(&it), i++) {
if (bdrv_is_read_only(bs)) {
continue;
}
sectors = bdrv_nb_sectors(bs);
if (sectors <= 0) {
goto out;
}
bmds = g_new0(BlkMigDevState, 1);
bmds->blk = blk_new();
bmds->blk_name = g_strdup(bdrv_get_device_name(bs));
bmds->bulk_completed = 0;
bmds->total_sectors = sectors;
bmds->completed_sectors = 0;
bmds->shared_base = block_mig_state.shared_base;
assert(i < num_bs);
bmds_bs[i].bmds = bmds;
bmds_bs[i].bs = bs;
block_mig_state.total_sector_sum += sectors;
if (bmds->shared_base) {
DPRINTF("Start migration for %s with shared base image\n",
bdrv_get_device_name(bs));
} else {
DPRINTF("Start full migration for %s\n", bdrv_get_device_name(bs));
}
QSIMPLEQ_INSERT_TAIL(&block_mig_state.bmds_list, bmds, entry);
}
/* Can only insert new BDSes now because doing so while iterating block
* devices may end up in a deadlock (iterating the new BDSes, too). */
for (i = 0; i < num_bs; i++) {
BlkMigDevState *bmds = bmds_bs[i].bmds;
BlockDriverState *bs = bmds_bs[i].bs;
if (bmds) {
blk_insert_bs(bmds->blk, bs);
alloc_aio_bitmap(bmds);
error_setg(&bmds->blocker, "block device is in use by migration");
bdrv_op_block_all(bs, bmds->blocker);
}
}
out:
g_free(bmds_bs);
bdrv_iterate(init_blk_migration_it, NULL);
}
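The new code above first counts the nodes, records bmds/bs pairs while walking the graph, and only calls blk_insert_bs() once the walk is over, because inserting nodes while bdrv_first()/bdrv_next() is still running can make the iterator visit the freshly created nodes and deadlock. A stripped-down sketch of the safe iteration shape (error handling omitted):

BdrvNextIterator it;
BlockDriverState *bs;

for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
    /* Inspect bs and remember which nodes need a BlockBackend, but do
     * not add or remove graph nodes inside this loop. */
}
/* Only now, with the iteration finished, attach the new BlockBackends
 * with blk_insert_bs(). */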
/* Called with no lock taken. */
@@ -505,13 +423,12 @@ static void blk_mig_reset_dirty_cursor(void)
}
}
/* Called with iothread lock and AioContext taken. */
/* Called with iothread lock taken. */
static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
int is_async)
{
BlkMigBlock *blk;
BlockDriverState *bs = blk_bs(bmds->blk);
int64_t total_sectors = bmds->total_sectors;
int64_t sector;
int nr_sectors;
@@ -521,18 +438,18 @@ static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
blk_mig_lock();
if (bmds_aio_inflight(bmds, sector)) {
blk_mig_unlock();
blk_drain(bmds->blk);
bdrv_drain_all();
} else {
blk_mig_unlock();
}
if (bdrv_get_dirty(bs, bmds->dirty_bitmap, sector)) {
if (bdrv_get_dirty(bmds->bs, bmds->dirty_bitmap, sector)) {
if (total_sectors - sector < BDRV_SECTORS_PER_DIRTY_CHUNK) {
nr_sectors = total_sectors - sector;
} else {
nr_sectors = BDRV_SECTORS_PER_DIRTY_CHUNK;
}
blk = g_new(BlkMigBlock, 1);
blk = g_malloc(sizeof(BlkMigBlock));
blk->buf = g_malloc(BLOCK_SIZE);
blk->bmds = bmds;
blk->sector = sector;
@@ -543,18 +460,15 @@ static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
blk->iov.iov_len = nr_sectors * BDRV_SECTOR_SIZE;
qemu_iovec_init_external(&blk->qiov, &blk->iov, 1);
blk->aiocb = blk_aio_preadv(bmds->blk,
sector * BDRV_SECTOR_SIZE,
&blk->qiov, 0, blk_mig_read_cb,
blk);
blk->aiocb = bdrv_aio_readv(bmds->bs, sector, &blk->qiov,
nr_sectors, blk_mig_read_cb, blk);
blk_mig_lock();
block_mig_state.submitted++;
bmds_set_aio_inflight(bmds, sector, nr_sectors, 1);
blk_mig_unlock();
} else {
ret = blk_pread(bmds->blk, sector * BDRV_SECTOR_SIZE, blk->buf,
nr_sectors * BDRV_SECTOR_SIZE);
ret = bdrv_read(bmds->bs, sector, blk->buf, nr_sectors);
if (ret < 0) {
goto error;
}
@@ -564,7 +478,7 @@ static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
g_free(blk);
}
bdrv_reset_dirty_bitmap(bmds->dirty_bitmap, sector, nr_sectors);
bdrv_reset_dirty(bmds->bs, sector, nr_sectors);
break;
}
sector += BDRV_SECTORS_PER_DIRTY_CHUNK;
@@ -592,9 +506,7 @@ static int blk_mig_save_dirty_block(QEMUFile *f, int is_async)
int ret = 1;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
aio_context_acquire(blk_get_aio_context(bmds->blk));
ret = mig_save_device_dirty(f, bmds, is_async);
aio_context_release(blk_get_aio_context(bmds->blk));
if (ret <= 0) {
break;
}
@@ -652,9 +564,7 @@ static int64_t get_remaining_dirty(void)
int64_t dirty = 0;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
aio_context_acquire(blk_get_aio_context(bmds->blk));
dirty += bdrv_get_dirty_count(bmds->dirty_bitmap);
aio_context_release(blk_get_aio_context(bmds->blk));
dirty += bdrv_get_dirty_count(bmds->bs, bmds->dirty_bitmap);
}
return dirty << BDRV_SECTOR_BITS;
@@ -662,33 +572,24 @@ static int64_t get_remaining_dirty(void)
/* Called with iothread lock taken. */
static void block_migration_cleanup(void *opaque)
static void blk_mig_cleanup(void)
{
BlkMigDevState *bmds;
BlkMigBlock *blk;
AioContext *ctx;
bdrv_drain_all();
unset_dirty_tracking();
blk_mig_lock();
while ((bmds = QSIMPLEQ_FIRST(&block_mig_state.bmds_list)) != NULL) {
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.bmds_list, entry);
bdrv_op_unblock_all(blk_bs(bmds->blk), bmds->blocker);
error_free(bmds->blocker);
/* Save ctx, because bmds->blk can disappear during blk_unref. */
ctx = blk_get_aio_context(bmds->blk);
aio_context_acquire(ctx);
blk_unref(bmds->blk);
aio_context_release(ctx);
g_free(bmds->blk_name);
bdrv_set_in_use(bmds->bs, 0);
bdrv_unref(bmds->bs);
g_free(bmds->aio_bitmap);
g_free(bmds);
}
blk_mig_lock();
while ((blk = QSIMPLEQ_FIRST(&block_mig_state.blk_list)) != NULL) {
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.blk_list, entry);
g_free(blk->buf);
@@ -697,6 +598,11 @@ static void block_migration_cleanup(void *opaque)
blk_mig_unlock();
}
static void block_migration_cancel(void *opaque)
{
blk_mig_cleanup();
}
static int block_save_setup(QEMUFile *f, void *opaque)
{
int ret;
@@ -708,14 +614,9 @@ static int block_save_setup(QEMUFile *f, void *opaque)
init_blk_migration(f);
/* start track dirty blocks */
ret = set_dirty_tracking();
set_dirty_tracking();
qemu_mutex_unlock_iothread();
if (ret) {
return ret;
}
ret = flush_blks(f);
blk_mig_reset_dirty_cursor();
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
@@ -727,7 +628,6 @@ static int block_save_iterate(QEMUFile *f, void *opaque)
{
int ret;
int64_t last_ftell = qemu_ftell(f);
int64_t delta_ftell;
DPRINTF("Enter save live iterate submitted %d transferred %d\n",
block_mig_state.submitted, block_mig_state.transferred);
@@ -743,10 +643,7 @@ static int block_save_iterate(QEMUFile *f, void *opaque)
blk_mig_lock();
while ((block_mig_state.submitted +
block_mig_state.read_done) * BLOCK_SIZE <
qemu_file_get_rate_limit(f) &&
(block_mig_state.submitted +
block_mig_state.read_done) <
MAX_INFLIGHT_IO) {
qemu_file_get_rate_limit(f)) {
blk_mig_unlock();
if (block_mig_state.bulk_completed == 0) {
/* first finish the bulk phase */
@@ -780,14 +677,7 @@ static int block_save_iterate(QEMUFile *f, void *opaque)
}
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
delta_ftell = qemu_ftell(f) - last_ftell;
if (delta_ftell > 0) {
return 1;
} else if (delta_ftell < 0) {
return -1;
} else {
return 0;
}
return qemu_ftell(f) - last_ftell;
}
/* Called with iothread lock taken. */
@@ -826,33 +716,30 @@ static int block_save_complete(QEMUFile *f, void *opaque)
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
blk_mig_cleanup();
return 0;
}
static void block_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
uint64_t *non_postcopiable_pending,
uint64_t *postcopiable_pending)
static uint64_t block_save_pending(QEMUFile *f, void *opaque, uint64_t max_size)
{
/* Estimate pending number of bytes to send */
uint64_t pending;
qemu_mutex_lock_iothread();
pending = get_remaining_dirty();
qemu_mutex_unlock_iothread();
blk_mig_lock();
pending += block_mig_state.submitted * BLOCK_SIZE +
block_mig_state.read_done * BLOCK_SIZE;
blk_mig_unlock();
pending = get_remaining_dirty() +
block_mig_state.submitted * BLOCK_SIZE +
block_mig_state.read_done * BLOCK_SIZE;
/* Report at least one block pending during bulk phase */
if (pending <= max_size && !block_mig_state.bulk_completed) {
pending = max_size + BLOCK_SIZE;
if (pending == 0 && !block_mig_state.bulk_completed) {
pending = BLOCK_SIZE;
}
blk_mig_unlock();
qemu_mutex_unlock_iothread();
DPRINTF("Enter save live pending %" PRIu64 "\n", pending);
/* We don't do postcopy */
*non_postcopiable_pending += pending;
return pending;
}
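The estimate above is additive: bytes still marked dirty, plus one BLOCK_SIZE buffer for every request that is in flight or already read but not yet written to the stream, bumped upward during the bulk phase so the migration loop keeps being called. A worked example with invented numbers (BLOCK_SIZE is 1 MiB in this file):

/* Illustration only:
 *   get_remaining_dirty()           = 10 MiB of dirty sectors
 *   block_mig_state.submitted  = 3  -> 3 * 1 MiB still in flight
 *   block_mig_state.read_done  = 2  -> 2 * 1 MiB buffered, unsent
 *   pending = 10 + 3 + 2 = 15 MiB
 * If pending were 0 while the bulk phase is still running, it would be
 * bumped to at least one BLOCK_SIZE so iteration does not stop early.
 */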
static int block_load(QEMUFile *f, void *opaque, int version_id)
@@ -861,8 +748,7 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
int len, flags;
char device_name[256];
int64_t addr;
BlockBackend *blk, *blk_prev = NULL;;
Error *local_err = NULL;
BlockDriverState *bs, *bs_prev = NULL;
uint8_t *buf;
int64_t total_sectors = 0;
int nr_sectors;
@@ -880,27 +766,21 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
qemu_get_buffer(f, (uint8_t *)device_name, len);
device_name[len] = '\0';
blk = blk_by_name(device_name);
if (!blk) {
bs = bdrv_find(device_name);
if (!bs) {
fprintf(stderr, "Error unknown block device %s\n",
device_name);
return -EINVAL;
}
if (blk != blk_prev) {
blk_prev = blk;
total_sectors = blk_nb_sectors(blk);
if (bs != bs_prev) {
bs_prev = bs;
total_sectors = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
if (total_sectors <= 0) {
error_report("Error getting length of block device %s",
device_name);
return -EINVAL;
}
blk_invalidate_cache(blk, &local_err);
if (local_err) {
error_report_err(local_err);
return -EINVAL;
}
}
if (total_sectors - addr < BDRV_SECTORS_PER_DIRTY_CHUNK) {
@@ -910,14 +790,12 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
}
if (flags & BLK_MIG_FLAG_ZERO_BLOCK) {
ret = blk_pwrite_zeroes(blk, addr * BDRV_SECTOR_SIZE,
nr_sectors * BDRV_SECTOR_SIZE,
ret = bdrv_write_zeroes(bs, addr, nr_sectors,
BDRV_REQ_MAY_UNMAP);
} else {
buf = g_malloc(BLOCK_SIZE);
qemu_get_buffer(f, buf, BLOCK_SIZE);
ret = blk_pwrite(blk, addr * BDRV_SECTOR_SIZE, buf,
nr_sectors * BDRV_SECTOR_SIZE, 0);
ret = bdrv_write(bs, addr, buf, nr_sectors);
g_free(buf);
}
@@ -959,14 +837,14 @@ static bool block_is_active(void *opaque)
return block_mig_state.blk_enable == 1;
}
static SaveVMHandlers savevm_block_handlers = {
SaveVMHandlers savevm_block_handlers = {
.set_params = block_set_params,
.save_live_setup = block_save_setup,
.save_live_iterate = block_save_iterate,
.save_live_complete_precopy = block_save_complete,
.save_live_complete = block_save_complete,
.save_live_pending = block_save_pending,
.load_state = block_load,
.cleanup = block_migration_cleanup,
.cancel = block_migration_cancel,
.is_active = block_is_active,
};
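For context, these handlers only take effect once they are registered with the migration core; in this code base that happens from blk_mig_init(), roughly as sketched below. This is reconstructed from memory of same-era source, so treat the exact arguments as an approximation rather than part of this diff:

void blk_mig_init(void)
{
    QSIMPLEQ_INIT(&block_mig_state.bmds_list);
    QSIMPLEQ_INIT(&block_mig_state.blk_list);
    qemu_mutex_init(&block_mig_state.lock);

    register_savevm_live(NULL, "block", 0, 1,
                         &savevm_block_handlers, &block_mig_state);
}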

block.c: diff suppressed because it is too large (5589 changed lines not shown).

@@ -1,35 +1,32 @@
block-obj-y += raw_bsd.o qcow.o vdi.o vmdk.o cloop.o bochs.o vpc.o vvfat.o dmg.o
block-obj-y += raw_bsd.o cow.o qcow.o vdi.o vmdk.o cloop.o dmg.o bochs.o vpc.o vvfat.o
block-obj-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-cache.o
block-obj-y += qed.o qed-gencb.o qed-l2-cache.o qed-table.o qed-cluster.o
block-obj-y += qed-check.o
block-obj-y += vhdx.o vhdx-endian.o vhdx-log.o
block-obj-y += quorum.o
block-obj-y += parallels.o blkdebug.o blkverify.o blkreplay.o
block-obj-y += block-backend.o snapshot.o qapi.o
block-obj-$(CONFIG_VHDX) += vhdx.o vhdx-endian.o vhdx-log.o
block-obj-$(CONFIG_QUORUM) += quorum.o
block-obj-y += parallels.o blkdebug.o blkverify.o
block-obj-y += snapshot.o qapi.o
block-obj-$(CONFIG_WIN32) += raw-win32.o win32-aio.o
block-obj-$(CONFIG_POSIX) += raw-posix.o
block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
block-obj-y += null.o mirror.o commit.o io.o
block-obj-y += throttle-groups.o
ifeq ($(CONFIG_POSIX),y)
block-obj-y += nbd.o nbd-client.o sheepdog.o
block-obj-$(CONFIG_LIBISCSI) += iscsi.o
block-obj-$(CONFIG_LIBNFS) += nfs.o
block-obj-$(CONFIG_CURL) += curl.o
block-obj-$(CONFIG_RBD) += rbd.o
block-obj-$(CONFIG_GLUSTERFS) += gluster.o
block-obj-$(CONFIG_ARCHIPELAGO) += archipelago.o
block-obj-$(CONFIG_LIBSSH2) += ssh.o
block-obj-y += accounting.o dirty-bitmap.o
block-obj-y += write-threshold.o
block-obj-y += backup.o
block-obj-$(CONFIG_REPLICATION) += replication.o
block-obj-y += crypto.o
endif
common-obj-y += stream.o
common-obj-y += commit.o
common-obj-y += mirror.o
common-obj-y += backup.o
common-obj-y += dictzip.o
common-obj-y += tar.o
nfs.o-libs := $(LIBNFS_LIBS)
iscsi.o-cflags := $(LIBISCSI_CFLAGS)
iscsi.o-libs := $(LIBISCSI_LIBS)
curl.o-cflags := $(CURL_CFLAGS)
@@ -40,7 +37,5 @@ gluster.o-cflags := $(GLUSTERFS_CFLAGS)
gluster.o-libs := $(GLUSTERFS_LIBS)
ssh.o-cflags := $(LIBSSH2_CFLAGS)
ssh.o-libs := $(LIBSSH2_LIBS)
archipelago.o-libs := $(ARCHIPELAGO_LIBS)
dmg.o-libs := $(BZIP2_LIBS)
qcow.o-libs := -lz
linux-aio.o-libs := -laio


@@ -1,173 +0,0 @@
/*
* QEMU System Emulator block accounting
*
* Copyright (c) 2011 Christoph Hellwig
* Copyright (c) 2015 Igalia, S.L.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "block/accounting.h"
#include "block/block_int.h"
#include "qemu/timer.h"
#include "sysemu/qtest.h"
static QEMUClockType clock_type = QEMU_CLOCK_REALTIME;
static const int qtest_latency_ns = NANOSECONDS_PER_SECOND / 1000;
void block_acct_init(BlockAcctStats *stats, bool account_invalid,
bool account_failed)
{
stats->account_invalid = account_invalid;
stats->account_failed = account_failed;
if (qtest_enabled()) {
clock_type = QEMU_CLOCK_VIRTUAL;
}
}
void block_acct_cleanup(BlockAcctStats *stats)
{
BlockAcctTimedStats *s, *next;
QSLIST_FOREACH_SAFE(s, &stats->intervals, entries, next) {
g_free(s);
}
}
void block_acct_add_interval(BlockAcctStats *stats, unsigned interval_length)
{
BlockAcctTimedStats *s;
unsigned i;
s = g_new0(BlockAcctTimedStats, 1);
s->interval_length = interval_length;
QSLIST_INSERT_HEAD(&stats->intervals, s, entries);
for (i = 0; i < BLOCK_MAX_IOTYPE; i++) {
timed_average_init(&s->latency[i], clock_type,
(uint64_t) interval_length * NANOSECONDS_PER_SECOND);
}
}
BlockAcctTimedStats *block_acct_interval_next(BlockAcctStats *stats,
BlockAcctTimedStats *s)
{
if (s == NULL) {
return QSLIST_FIRST(&stats->intervals);
} else {
return QSLIST_NEXT(s, entries);
}
}
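Callers walk the interval list by seeding this helper with NULL and feeding the previous element back in. A minimal usage sketch; stats is assumed to be an already-initialized BlockAcctStats, and the read type is just an example:

BlockAcctTimedStats *ts = NULL;

while ((ts = block_acct_interval_next(stats, ts)) != NULL) {
    double depth = block_acct_queue_depth(ts, BLOCK_ACCT_READ);
    /* report 'depth' for this interval length, e.g. in block stats */
}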
void block_acct_start(BlockAcctStats *stats, BlockAcctCookie *cookie,
int64_t bytes, enum BlockAcctType type)
{
assert(type < BLOCK_MAX_IOTYPE);
cookie->bytes = bytes;
cookie->start_time_ns = qemu_clock_get_ns(clock_type);
cookie->type = type;
}
void block_acct_done(BlockAcctStats *stats, BlockAcctCookie *cookie)
{
BlockAcctTimedStats *s;
int64_t time_ns = qemu_clock_get_ns(clock_type);
int64_t latency_ns = time_ns - cookie->start_time_ns;
if (qtest_enabled()) {
latency_ns = qtest_latency_ns;
}
assert(cookie->type < BLOCK_MAX_IOTYPE);
stats->nr_bytes[cookie->type] += cookie->bytes;
stats->nr_ops[cookie->type]++;
stats->total_time_ns[cookie->type] += latency_ns;
stats->last_access_time_ns = time_ns;
QSLIST_FOREACH(s, &stats->intervals, entries) {
timed_average_account(&s->latency[cookie->type], latency_ns);
}
}
void block_acct_failed(BlockAcctStats *stats, BlockAcctCookie *cookie)
{
assert(cookie->type < BLOCK_MAX_IOTYPE);
stats->failed_ops[cookie->type]++;
if (stats->account_failed) {
BlockAcctTimedStats *s;
int64_t time_ns = qemu_clock_get_ns(clock_type);
int64_t latency_ns = time_ns - cookie->start_time_ns;
if (qtest_enabled()) {
latency_ns = qtest_latency_ns;
}
stats->total_time_ns[cookie->type] += latency_ns;
stats->last_access_time_ns = time_ns;
QSLIST_FOREACH(s, &stats->intervals, entries) {
timed_average_account(&s->latency[cookie->type], latency_ns);
}
}
}
void block_acct_invalid(BlockAcctStats *stats, enum BlockAcctType type)
{
assert(type < BLOCK_MAX_IOTYPE);
/* block_acct_done() and block_acct_failed() update
* total_time_ns[], but this one does not. The reason is that
* invalid requests are accounted during their submission,
* therefore there's no actual I/O involved. */
stats->invalid_ops[type]++;
if (stats->account_invalid) {
stats->last_access_time_ns = qemu_clock_get_ns(clock_type);
}
}
void block_acct_merge_done(BlockAcctStats *stats, enum BlockAcctType type,
int num_requests)
{
assert(type < BLOCK_MAX_IOTYPE);
stats->merged[type] += num_requests;
}
int64_t block_acct_idle_time_ns(BlockAcctStats *stats)
{
return qemu_clock_get_ns(clock_type) - stats->last_access_time_ns;
}
double block_acct_queue_depth(BlockAcctTimedStats *stats,
enum BlockAcctType type)
{
uint64_t sum, elapsed;
assert(type < BLOCK_MAX_IOTYPE);
sum = timed_average_sum(&stats->latency[type], &elapsed);
return (double) sum / elapsed;
}
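The division above is effectively Little's law over the sampling interval: summed in-flight request time divided by elapsed wall-clock time. A worked example with made-up numbers:

/* Illustration only: over a 60 s interval the per-request latencies of
 * all reads sum to 120 s.  Average read queue depth:
 *
 *     queue_depth = sum / elapsed = 120 s / 60 s = 2.0
 *
 * i.e. on average two read requests were in flight at any moment. */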

File diff suppressed because it is too large.

@@ -11,47 +11,42 @@
*
*/
#include "qemu/osdep.h"
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include "trace.h"
#include "block/block.h"
#include "block/block_int.h"
#include "block/blockjob.h"
#include "block/block_backup.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "qemu/ratelimit.h"
#include "qemu/cutils.h"
#include "sysemu/block-backend.h"
#include "qemu/bitmap.h"
#define BACKUP_CLUSTER_SIZE_DEFAULT (1 << 16)
#define BACKUP_CLUSTER_BITS 16
#define BACKUP_CLUSTER_SIZE (1 << BACKUP_CLUSTER_BITS)
#define BACKUP_SECTORS_PER_CLUSTER (BACKUP_CLUSTER_SIZE / BDRV_SECTOR_SIZE)
#define SLICE_TIME 100000000ULL /* ns */
typedef struct CowRequest {
int64_t start;
int64_t end;
QLIST_ENTRY(CowRequest) list;
CoQueue wait_queue; /* coroutines blocked on this request */
} CowRequest;
typedef struct BackupBlockJob {
BlockJob common;
BlockBackend *target;
/* bitmap for sync=incremental */
BdrvDirtyBitmap *sync_bitmap;
BlockDriverState *target;
MirrorSyncMode sync_mode;
RateLimit limit;
BlockdevOnError on_source_error;
BlockdevOnError on_target_error;
CoRwlock flush_rwlock;
uint64_t sectors_read;
unsigned long *done_bitmap;
int64_t cluster_size;
bool compress;
NotifierWithReturn before_write;
HBitmap *bitmap;
QLIST_HEAD(, CowRequest) inflight_reqs;
} BackupBlockJob;
/* Size of a cluster in sectors, instead of bytes. */
static inline int64_t cluster_size_sectors(BackupBlockJob *job)
{
return job->cluster_size / BDRV_SECTOR_SIZE;
}
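A quick example of what this helper returns, assuming the 64 KiB BACKUP_CLUSTER_SIZE_DEFAULT defined above; targets can report a larger cluster size through bdrv_get_info(), in which case the figure scales accordingly:

/* cluster_size = 65536 bytes, BDRV_SECTOR_SIZE = 512 bytes
 * -> cluster_size_sectors(job) = 65536 / 512 = 128 sectors per cluster */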
/* See if in-flight requests overlap and wait for them to complete */
static void coroutine_fn wait_for_overlapping_requests(BackupBlockJob *job,
int64_t start,
@@ -89,25 +84,23 @@ static void cow_request_end(CowRequest *req)
qemu_co_queue_restart_all(&req->wait_queue);
}
static int coroutine_fn backup_do_cow(BackupBlockJob *job,
static int coroutine_fn backup_do_cow(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
bool *error_is_read,
bool is_write_notifier)
bool *error_is_read)
{
BlockBackend *blk = job->common.blk;
BackupBlockJob *job = (BackupBlockJob *)bs->job;
CowRequest cow_request;
struct iovec iov;
QEMUIOVector bounce_qiov;
void *bounce_buffer = NULL;
int ret = 0;
int64_t sectors_per_cluster = cluster_size_sectors(job);
int64_t start, end;
int n;
qemu_co_rwlock_rdlock(&job->flush_rwlock);
start = sector_num / sectors_per_cluster;
end = DIV_ROUND_UP(sector_num + nb_sectors, sectors_per_cluster);
start = sector_num / BACKUP_SECTORS_PER_CLUSTER;
end = DIV_ROUND_UP(sector_num + nb_sectors, BACKUP_SECTORS_PER_CLUSTER);
trace_backup_do_cow_enter(job, start, sector_num, nb_sectors);
@@ -115,27 +108,26 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
cow_request_begin(&cow_request, job, start, end);
for (; start < end; start++) {
if (test_bit(start, job->done_bitmap)) {
if (hbitmap_get(job->bitmap, start)) {
trace_backup_do_cow_skip(job, start);
continue; /* already copied */
}
trace_backup_do_cow_process(job, start);
n = MIN(sectors_per_cluster,
n = MIN(BACKUP_SECTORS_PER_CLUSTER,
job->common.len / BDRV_SECTOR_SIZE -
start * sectors_per_cluster);
start * BACKUP_SECTORS_PER_CLUSTER);
if (!bounce_buffer) {
bounce_buffer = blk_blockalign(blk, job->cluster_size);
bounce_buffer = qemu_blockalign(bs, BACKUP_CLUSTER_SIZE);
}
iov.iov_base = bounce_buffer;
iov.iov_len = n * BDRV_SECTOR_SIZE;
qemu_iovec_init_external(&bounce_qiov, &iov, 1);
ret = blk_co_preadv(blk, start * job->cluster_size,
bounce_qiov.size, &bounce_qiov,
is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0);
ret = bdrv_co_readv(bs, start * BACKUP_SECTORS_PER_CLUSTER, n,
&bounce_qiov);
if (ret < 0) {
trace_backup_do_cow_read_fail(job, start, ret);
if (error_is_read) {
@@ -145,12 +137,13 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
}
if (buffer_is_zero(iov.iov_base, iov.iov_len)) {
ret = blk_co_pwrite_zeroes(job->target, start * job->cluster_size,
bounce_qiov.size, BDRV_REQ_MAY_UNMAP);
ret = bdrv_co_write_zeroes(job->target,
start * BACKUP_SECTORS_PER_CLUSTER,
n, BDRV_REQ_MAY_UNMAP);
} else {
ret = blk_co_pwritev(job->target, start * job->cluster_size,
bounce_qiov.size, &bounce_qiov,
job->compress ? BDRV_REQ_WRITE_COMPRESSED : 0);
ret = bdrv_co_writev(job->target,
start * BACKUP_SECTORS_PER_CLUSTER, n,
&bounce_qiov);
}
if (ret < 0) {
trace_backup_do_cow_write_fail(job, start, ret);
@@ -160,7 +153,7 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
goto out;
}
set_bit(start, job->done_bitmap);
hbitmap_set(job->bitmap, start, 1);
/* Publish progress, guest I/O counts as progress too. Note that the
* offset field is an opaque progress value, it is not a disk offset.
@@ -187,16 +180,14 @@ static int coroutine_fn backup_before_write_notify(
NotifierWithReturn *notifier,
void *opaque)
{
BackupBlockJob *job = container_of(notifier, BackupBlockJob, before_write);
BdrvTrackedRequest *req = opaque;
int64_t sector_num = req->offset >> BDRV_SECTOR_BITS;
int nb_sectors = req->bytes >> BDRV_SECTOR_BITS;
assert(req->bs == blk_bs(job->common.blk));
assert((req->offset & (BDRV_SECTOR_SIZE - 1)) == 0);
assert((req->bytes & (BDRV_SECTOR_SIZE - 1)) == 0);
return backup_do_cow(job, sector_num, nb_sectors, NULL, true);
return backup_do_cow(req->bs, sector_num, nb_sectors, NULL);
}
static void backup_set_speed(BlockJob *job, int64_t speed, Error **errp)
@@ -204,258 +195,95 @@ static void backup_set_speed(BlockJob *job, int64_t speed, Error **errp)
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
if (speed < 0) {
error_setg(errp, QERR_INVALID_PARAMETER, "speed");
error_set(errp, QERR_INVALID_PARAMETER, "speed");
return;
}
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
}
static void backup_cleanup_sync_bitmap(BackupBlockJob *job, int ret)
{
BdrvDirtyBitmap *bm;
BlockDriverState *bs = blk_bs(job->common.blk);
if (ret < 0 || block_job_is_cancelled(&job->common)) {
/* Merge the successor back into the parent, delete nothing. */
bm = bdrv_reclaim_dirty_bitmap(bs, job->sync_bitmap, NULL);
assert(bm);
} else {
/* Everything is fine, delete this bitmap and install the backup. */
bm = bdrv_dirty_bitmap_abdicate(bs, job->sync_bitmap, NULL);
assert(bm);
}
}
static void backup_commit(BlockJob *job)
{
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
if (s->sync_bitmap) {
backup_cleanup_sync_bitmap(s, 0);
}
}
static void backup_abort(BlockJob *job)
{
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
if (s->sync_bitmap) {
backup_cleanup_sync_bitmap(s, -1);
}
}
static void backup_attached_aio_context(BlockJob *job, AioContext *aio_context)
static void backup_iostatus_reset(BlockJob *job)
{
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
blk_set_aio_context(s->target, aio_context);
}
void backup_do_checkpoint(BlockJob *job, Error **errp)
{
BackupBlockJob *backup_job = container_of(job, BackupBlockJob, common);
int64_t len;
assert(job->driver->job_type == BLOCK_JOB_TYPE_BACKUP);
if (backup_job->sync_mode != MIRROR_SYNC_MODE_NONE) {
error_setg(errp, "The backup job only supports block checkpoint in"
" sync=none mode");
return;
}
len = DIV_ROUND_UP(backup_job->common.len, backup_job->cluster_size);
bitmap_zero(backup_job->done_bitmap, len);
}
void backup_wait_for_overlapping_requests(BlockJob *job, int64_t sector_num,
int nb_sectors)
{
BackupBlockJob *backup_job = container_of(job, BackupBlockJob, common);
int64_t sectors_per_cluster = cluster_size_sectors(backup_job);
int64_t start, end;
assert(job->driver->job_type == BLOCK_JOB_TYPE_BACKUP);
start = sector_num / sectors_per_cluster;
end = DIV_ROUND_UP(sector_num + nb_sectors, sectors_per_cluster);
wait_for_overlapping_requests(backup_job, start, end);
}
void backup_cow_request_begin(CowRequest *req, BlockJob *job,
int64_t sector_num,
int nb_sectors)
{
BackupBlockJob *backup_job = container_of(job, BackupBlockJob, common);
int64_t sectors_per_cluster = cluster_size_sectors(backup_job);
int64_t start, end;
assert(job->driver->job_type == BLOCK_JOB_TYPE_BACKUP);
start = sector_num / sectors_per_cluster;
end = DIV_ROUND_UP(sector_num + nb_sectors, sectors_per_cluster);
cow_request_begin(req, backup_job, start, end);
}
void backup_cow_request_end(CowRequest *req)
{
cow_request_end(req);
bdrv_iostatus_reset(s->target);
}
static const BlockJobDriver backup_job_driver = {
.instance_size = sizeof(BackupBlockJob),
.job_type = BLOCK_JOB_TYPE_BACKUP,
.set_speed = backup_set_speed,
.commit = backup_commit,
.abort = backup_abort,
.attached_aio_context = backup_attached_aio_context,
.instance_size = sizeof(BackupBlockJob),
.job_type = BLOCK_JOB_TYPE_BACKUP,
.set_speed = backup_set_speed,
.iostatus_reset = backup_iostatus_reset,
};
static BlockErrorAction backup_error_action(BackupBlockJob *job,
bool read, int error)
{
if (read) {
return block_job_error_action(&job->common, job->on_source_error,
true, error);
return block_job_error_action(&job->common, job->common.bs,
job->on_source_error, true, error);
} else {
return block_job_error_action(&job->common, job->on_target_error,
false, error);
return block_job_error_action(&job->common, job->target,
job->on_target_error, false, error);
}
}
typedef struct {
int ret;
} BackupCompleteData;
static void backup_complete(BlockJob *job, void *opaque)
{
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
BackupCompleteData *data = opaque;
blk_unref(s->target);
block_job_completed(job, data->ret);
g_free(data);
}
static bool coroutine_fn yield_and_check(BackupBlockJob *job)
{
if (block_job_is_cancelled(&job->common)) {
return true;
}
/* we need to yield so that bdrv_drain_all() returns.
* (without, VM does not reboot)
*/
if (job->common.speed) {
uint64_t delay_ns = ratelimit_calculate_delay(&job->limit,
job->sectors_read);
job->sectors_read = 0;
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, delay_ns);
} else {
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, 0);
}
if (block_job_is_cancelled(&job->common)) {
return true;
}
return false;
}
static int coroutine_fn backup_run_incremental(BackupBlockJob *job)
{
bool error_is_read;
int ret = 0;
int clusters_per_iter;
uint32_t granularity;
int64_t sector;
int64_t cluster;
int64_t end;
int64_t last_cluster = -1;
int64_t sectors_per_cluster = cluster_size_sectors(job);
HBitmapIter hbi;
granularity = bdrv_dirty_bitmap_granularity(job->sync_bitmap);
clusters_per_iter = MAX((granularity / job->cluster_size), 1);
bdrv_dirty_iter_init(job->sync_bitmap, &hbi);
/* Find the next dirty sector(s) */
while ((sector = hbitmap_iter_next(&hbi)) != -1) {
cluster = sector / sectors_per_cluster;
/* Fake progress updates for any clusters we skipped */
if (cluster != last_cluster + 1) {
job->common.offset += ((cluster - last_cluster - 1) *
job->cluster_size);
}
for (end = cluster + clusters_per_iter; cluster < end; cluster++) {
do {
if (yield_and_check(job)) {
return ret;
}
ret = backup_do_cow(job, cluster * sectors_per_cluster,
sectors_per_cluster, &error_is_read,
false);
if ((ret < 0) &&
backup_error_action(job, error_is_read, -ret) ==
BLOCK_ERROR_ACTION_REPORT) {
return ret;
}
} while (ret < 0);
}
/* If the bitmap granularity is smaller than the backup granularity,
* we need to advance the iterator pointer to the next cluster. */
if (granularity < job->cluster_size) {
bdrv_set_dirty_iter(&hbi, cluster * sectors_per_cluster);
}
last_cluster = cluster - 1;
}
/* Play some final catchup with the progress meter */
end = DIV_ROUND_UP(job->common.len, job->cluster_size);
if (last_cluster + 1 < end) {
job->common.offset += ((end - last_cluster - 1) * job->cluster_size);
}
return ret;
}
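The conversions in backup_run_incremental() are easy to lose track of: dirty-bitmap bits cover "granularity" bytes, copies happen in job->cluster_size units, and clusters_per_iter = MAX(granularity / cluster_size, 1) says how many clusters one dirty bit expands into. A worked example with assumed sizes:

/* Assumed sizes for illustration:
 *   bitmap granularity = 1 MiB, job->cluster_size = 64 KiB
 *   clusters_per_iter  = MAX(1 MiB / 64 KiB, 1) = 16
 * -> one dirty bit expands to backup_do_cow() on 16 consecutive 64 KiB
 *    clusters.  In the opposite case (granularity smaller than the
 *    cluster size) clusters_per_iter is 1 and the iterator is advanced
 *    past the cluster just copied, so finer-grained bits inside it are
 *    not processed twice.
 */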
static void coroutine_fn backup_run(void *opaque)
{
BackupBlockJob *job = opaque;
BackupCompleteData *data;
BlockDriverState *bs = blk_bs(job->common.blk);
BlockBackend *target = job->target;
BlockDriverState *bs = job->common.bs;
BlockDriverState *target = job->target;
BlockdevOnError on_target_error = job->on_target_error;
NotifierWithReturn before_write = {
.notify = backup_before_write_notify,
};
int64_t start, end;
int64_t sectors_per_cluster = cluster_size_sectors(job);
int ret = 0;
QLIST_INIT(&job->inflight_reqs);
qemu_co_rwlock_init(&job->flush_rwlock);
start = 0;
end = DIV_ROUND_UP(job->common.len, job->cluster_size);
end = DIV_ROUND_UP(job->common.len / BDRV_SECTOR_SIZE,
BACKUP_SECTORS_PER_CLUSTER);
job->done_bitmap = bitmap_new(end);
job->bitmap = hbitmap_alloc(end, 0);
job->before_write.notify = backup_before_write_notify;
bdrv_add_before_write_notifier(bs, &job->before_write);
bdrv_set_enable_write_cache(target, true);
bdrv_set_on_error(target, on_target_error, on_target_error);
bdrv_iostatus_enable(target);
bdrv_add_before_write_notifier(bs, &before_write);
if (job->sync_mode == MIRROR_SYNC_MODE_NONE) {
while (!block_job_is_cancelled(&job->common)) {
/* Yield until the job is cancelled. We just let our before_write
* notify callback service CoW requests. */
block_job_yield(&job->common);
job->common.busy = false;
qemu_coroutine_yield();
job->common.busy = true;
}
} else if (job->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
ret = backup_run_incremental(job);
} else {
/* Both FULL and TOP SYNC_MODE's require copying.. */
for (; start < end; start++) {
bool error_is_read;
if (yield_and_check(job)) {
if (block_job_is_cancelled(&job->common)) {
break;
}
/* we need to yield so that qemu_aio_flush() returns.
* (without, VM does not reboot)
*/
if (job->common.speed) {
uint64_t delay_ns = ratelimit_calculate_delay(
&job->limit, job->sectors_read);
job->sectors_read = 0;
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, delay_ns);
} else {
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, 0);
}
if (block_job_is_cancelled(&job->common)) {
break;
}
@@ -466,7 +294,7 @@ static void coroutine_fn backup_run(void *opaque)
/* Check to see if these blocks are already in the
* backing file. */
for (i = 0; i < sectors_per_cluster;) {
for (i = 0; i < BACKUP_SECTORS_PER_CLUSTER;) {
/* bdrv_is_allocated() only returns true/false based
* on the first set of sectors it comes across that
* are all in the same state.
@@ -475,11 +303,11 @@ static void coroutine_fn backup_run(void *opaque)
* needed but at some point that is always the case. */
alloced =
bdrv_is_allocated(bs,
start * sectors_per_cluster + i,
sectors_per_cluster - i, &n);
start * BACKUP_SECTORS_PER_CLUSTER + i,
BACKUP_SECTORS_PER_CLUSTER - i, &n);
i += n;
if (alloced == 1 || n == 0) {
if (alloced == 1) {
break;
}
}
@@ -491,13 +319,13 @@ static void coroutine_fn backup_run(void *opaque)
}
}
/* FULL sync mode we copy the whole drive. */
ret = backup_do_cow(job, start * sectors_per_cluster,
sectors_per_cluster, &error_is_read, false);
ret = backup_do_cow(bs, start * BACKUP_SECTORS_PER_CLUSTER,
BACKUP_SECTORS_PER_CLUSTER, &error_is_read);
if (ret < 0) {
/* Depending on error action, fail now or retry cluster */
BlockErrorAction action =
backup_error_action(job, error_is_read, -ret);
if (action == BLOCK_ERROR_ACTION_REPORT) {
if (action == BDRV_ACTION_REPORT) {
break;
} else {
start--;
@@ -507,84 +335,37 @@ static void coroutine_fn backup_run(void *opaque)
}
}
notifier_with_return_remove(&job->before_write);
notifier_with_return_remove(&before_write);
/* wait until pending backup_do_cow() calls have completed */
qemu_co_rwlock_wrlock(&job->flush_rwlock);
qemu_co_rwlock_unlock(&job->flush_rwlock);
g_free(job->done_bitmap);
bdrv_op_unblock_all(blk_bs(target), job->common.blocker);
hbitmap_free(job->bitmap);
data = g_malloc(sizeof(*data));
data->ret = ret;
block_job_defer_to_main_loop(&job->common, backup_complete, data);
bdrv_iostatus_disable(target);
bdrv_unref(target);
block_job_completed(&job->common, ret);
}
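The flush_rwlock used above acts as a completion barrier rather than a data lock: every backup_do_cow() call holds it for reading, so taking it for writing blocks until all in-flight copy-on-write requests have drained. The shape of the pattern, in isolation:

/* In each copy request (reader side): */
qemu_co_rwlock_rdlock(&job->flush_rwlock);
/* ... do the bounce-buffer read and write ... */
qemu_co_rwlock_unlock(&job->flush_rwlock);

/* At the end of the job (writer side): succeeds only once no reader
 * holds the lock, i.e. once all pending backup_do_cow() calls are done. */
qemu_co_rwlock_wrlock(&job->flush_rwlock);
qemu_co_rwlock_unlock(&job->flush_rwlock);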
void backup_start(const char *job_id, BlockDriverState *bs,
BlockDriverState *target, int64_t speed,
MirrorSyncMode sync_mode, BdrvDirtyBitmap *sync_bitmap,
bool compress,
void backup_start(BlockDriverState *bs, BlockDriverState *target,
int64_t speed, MirrorSyncMode sync_mode,
BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
BlockCompletionFunc *cb, void *opaque,
BlockJobTxn *txn, Error **errp)
BlockDriverCompletionFunc *cb, void *opaque,
Error **errp)
{
int64_t len;
BlockDriverInfo bdi;
BackupBlockJob *job = NULL;
int ret;
assert(bs);
assert(target);
assert(cb);
if (bs == target) {
error_setg(errp, "Source and target cannot be the same");
return;
}
if (!bdrv_is_inserted(bs)) {
error_setg(errp, "Device is not inserted: %s",
bdrv_get_device_name(bs));
return;
}
if (!bdrv_is_inserted(target)) {
error_setg(errp, "Device is not inserted: %s",
bdrv_get_device_name(target));
return;
}
if (compress && target->drv->bdrv_co_pwritev_compressed == NULL) {
error_setg(errp, "Compression is not supported for this drive %s",
bdrv_get_device_name(target));
return;
}
if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) {
return;
}
if (bdrv_op_is_blocked(target, BLOCK_OP_TYPE_BACKUP_TARGET, errp)) {
return;
}
if (sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
if (!sync_bitmap) {
error_setg(errp, "must provide a valid bitmap name for "
"\"incremental\" sync mode");
return;
}
/* Create a new bitmap, and freeze/disable this one. */
if (bdrv_dirty_bitmap_create_successor(bs, sync_bitmap, errp) < 0) {
return;
}
} else if (sync_bitmap) {
error_setg(errp,
"a sync_bitmap was provided to backup_run, "
"but received an incompatible sync_mode (%s)",
MirrorSyncMode_lookup[sync_mode]);
if ((on_source_error == BLOCKDEV_ON_ERROR_STOP ||
on_source_error == BLOCKDEV_ON_ERROR_ENOSPC) &&
!bdrv_iostatus_is_enabled(bs)) {
error_set(errp, QERR_INVALID_PARAMETER, "on-source-error");
return;
}
@@ -592,56 +373,20 @@ void backup_start(const char *job_id, BlockDriverState *bs,
if (len < 0) {
error_setg_errno(errp, -len, "unable to get length for '%s'",
bdrv_get_device_name(bs));
goto error;
return;
}
job = block_job_create(job_id, &backup_job_driver, bs, speed,
cb, opaque, errp);
BackupBlockJob *job = block_job_create(&backup_job_driver, bs, speed,
cb, opaque, errp);
if (!job) {
goto error;
return;
}
job->target = blk_new();
blk_insert_bs(job->target, target);
job->on_source_error = on_source_error;
job->on_target_error = on_target_error;
job->target = target;
job->sync_mode = sync_mode;
job->sync_bitmap = sync_mode == MIRROR_SYNC_MODE_INCREMENTAL ?
sync_bitmap : NULL;
job->compress = compress;
/* If there is no backing file on the target, we cannot rely on COW if our
* backup cluster size is smaller than the target cluster size. Even for
* targets with a backing file, try to avoid COW if possible. */
ret = bdrv_get_info(target, &bdi);
if (ret < 0 && !target->backing) {
error_setg_errno(errp, -ret,
"Couldn't determine the cluster size of the target image, "
"which has no backing file");
error_append_hint(errp,
"Aborting, since this may create an unusable destination image\n");
goto error;
} else if (ret < 0 && target->backing) {
/* Not fatal; just trudge on ahead. */
job->cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
} else {
job->cluster_size = MAX(BACKUP_CLUSTER_SIZE_DEFAULT, bdi.cluster_size);
}
bdrv_op_block_all(target, job->common.blocker);
job->common.len = len;
job->common.co = qemu_coroutine_create(backup_run, job);
block_job_txn_add_job(txn, &job->common);
qemu_coroutine_enter(job->common.co);
return;
error:
if (sync_bitmap) {
bdrv_reclaim_dirty_bitmap(bs, sync_bitmap, NULL);
}
if (job) {
blk_unref(job->target);
block_job_unref(&job->common);
}
job->common.co = qemu_coroutine_create(backup_run);
qemu_coroutine_enter(job->common.co, job);
}


@@ -22,33 +22,22 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu/cutils.h"
#include "qemu-common.h"
#include "qemu/config-file.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "qapi/qmp/qbool.h"
#include "qapi/qmp/qdict.h"
#include "qapi/qmp/qint.h"
#include "qapi/qmp/qstring.h"
#include "sysemu/qtest.h"
typedef struct BDRVBlkdebugState {
int state;
int new_state;
int align;
/* For blkdebug_refresh_filename() */
char *config_file;
QLIST_HEAD(, BlkdebugRule) rules[BLKDBG__MAX];
QLIST_HEAD(, BlkdebugRule) rules[BLKDBG_EVENT_MAX];
QSIMPLEQ_HEAD(, BlkdebugRule) active_rules;
QLIST_HEAD(, BlkdebugSuspendedReq) suspended_reqs;
} BDRVBlkdebugState;
typedef struct BlkdebugAIOCB {
BlockAIOCB common;
BlockDriverAIOCB common;
QEMUBH *bh;
int ret;
} BlkdebugAIOCB;
@@ -59,8 +48,11 @@ typedef struct BlkdebugSuspendedReq {
QLIST_ENTRY(BlkdebugSuspendedReq) next;
} BlkdebugSuspendedReq;
static void blkdebug_aio_cancel(BlockDriverAIOCB *blockacb);
static const AIOCBInfo blkdebug_aiocb_info = {
.aiocb_size = sizeof(BlkdebugAIOCB),
.aiocb_size = sizeof(BlkdebugAIOCB),
.cancel = blkdebug_aio_cancel,
};
enum {
@@ -70,7 +62,7 @@ enum {
};
typedef struct BlkdebugRule {
BlkdebugEvent event;
BlkDebugEvent event;
int action;
int state;
union {
@@ -149,12 +141,67 @@ static QemuOptsList *config_groups[] = {
NULL
};
static int get_event_by_name(const char *name, BlkdebugEvent *event)
static const char *event_names[BLKDBG_EVENT_MAX] = {
[BLKDBG_L1_UPDATE] = "l1_update",
[BLKDBG_L1_GROW_ALLOC_TABLE] = "l1_grow.alloc_table",
[BLKDBG_L1_GROW_WRITE_TABLE] = "l1_grow.write_table",
[BLKDBG_L1_GROW_ACTIVATE_TABLE] = "l1_grow.activate_table",
[BLKDBG_L2_LOAD] = "l2_load",
[BLKDBG_L2_UPDATE] = "l2_update",
[BLKDBG_L2_UPDATE_COMPRESSED] = "l2_update_compressed",
[BLKDBG_L2_ALLOC_COW_READ] = "l2_alloc.cow_read",
[BLKDBG_L2_ALLOC_WRITE] = "l2_alloc.write",
[BLKDBG_READ_AIO] = "read_aio",
[BLKDBG_READ_BACKING_AIO] = "read_backing_aio",
[BLKDBG_READ_COMPRESSED] = "read_compressed",
[BLKDBG_WRITE_AIO] = "write_aio",
[BLKDBG_WRITE_COMPRESSED] = "write_compressed",
[BLKDBG_VMSTATE_LOAD] = "vmstate_load",
[BLKDBG_VMSTATE_SAVE] = "vmstate_save",
[BLKDBG_COW_READ] = "cow_read",
[BLKDBG_COW_WRITE] = "cow_write",
[BLKDBG_REFTABLE_LOAD] = "reftable_load",
[BLKDBG_REFTABLE_GROW] = "reftable_grow",
[BLKDBG_REFTABLE_UPDATE] = "reftable_update",
[BLKDBG_REFBLOCK_LOAD] = "refblock_load",
[BLKDBG_REFBLOCK_UPDATE] = "refblock_update",
[BLKDBG_REFBLOCK_UPDATE_PART] = "refblock_update_part",
[BLKDBG_REFBLOCK_ALLOC] = "refblock_alloc",
[BLKDBG_REFBLOCK_ALLOC_HOOKUP] = "refblock_alloc.hookup",
[BLKDBG_REFBLOCK_ALLOC_WRITE] = "refblock_alloc.write",
[BLKDBG_REFBLOCK_ALLOC_WRITE_BLOCKS] = "refblock_alloc.write_blocks",
[BLKDBG_REFBLOCK_ALLOC_WRITE_TABLE] = "refblock_alloc.write_table",
[BLKDBG_REFBLOCK_ALLOC_SWITCH_TABLE] = "refblock_alloc.switch_table",
[BLKDBG_CLUSTER_ALLOC] = "cluster_alloc",
[BLKDBG_CLUSTER_ALLOC_BYTES] = "cluster_alloc_bytes",
[BLKDBG_CLUSTER_FREE] = "cluster_free",
[BLKDBG_FLUSH_TO_OS] = "flush_to_os",
[BLKDBG_FLUSH_TO_DISK] = "flush_to_disk",
[BLKDBG_PWRITEV_RMW_HEAD] = "pwritev_rmw.head",
[BLKDBG_PWRITEV_RMW_AFTER_HEAD] = "pwritev_rmw.after_head",
[BLKDBG_PWRITEV_RMW_TAIL] = "pwritev_rmw.tail",
[BLKDBG_PWRITEV_RMW_AFTER_TAIL] = "pwritev_rmw.after_tail",
[BLKDBG_PWRITEV] = "pwritev",
[BLKDBG_PWRITEV_ZERO] = "pwritev_zero",
[BLKDBG_PWRITEV_DONE] = "pwritev_done",
};
static int get_event_by_name(const char *name, BlkDebugEvent *event)
{
int i;
for (i = 0; i < BLKDBG__MAX; i++) {
if (!strcmp(BlkdebugEvent_lookup[i], name)) {
for (i = 0; i < BLKDBG_EVENT_MAX; i++) {
if (!strcmp(event_names[i], name)) {
*event = i;
return 0;
}
@@ -168,21 +215,17 @@ struct add_rule_data {
int action;
};
static int add_rule(void *opaque, QemuOpts *opts, Error **errp)
static int add_rule(QemuOpts *opts, void *opaque)
{
struct add_rule_data *d = opaque;
BDRVBlkdebugState *s = d->s;
const char* event_name;
BlkdebugEvent event;
BlkDebugEvent event;
struct BlkdebugRule *rule;
/* Find the right event for the rule */
event_name = qemu_opt_get(opts, "event");
if (!event_name) {
error_setg(errp, "Missing event name for rule");
return -1;
} else if (get_event_by_name(event_name, &event) < 0) {
error_setg(errp, "Invalid event name \"%s\"", event_name);
if (!event_name || get_event_by_name(event_name, &event) < 0) {
return -1;
}
@@ -268,20 +311,10 @@ static int read_config(BDRVBlkdebugState *s, const char *filename,
d.s = s;
d.action = ACTION_INJECT_ERROR;
qemu_opts_foreach(&inject_error_opts, add_rule, &d, &local_err);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto fail;
}
qemu_opts_foreach(&inject_error_opts, add_rule, &d, 0);
d.action = ACTION_SET_STATE;
qemu_opts_foreach(&set_state_opts, add_rule, &d, &local_err);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto fail;
}
qemu_opts_foreach(&set_state_opts, add_rule, &d, 0);
ret = 0;
fail:
@@ -354,6 +387,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
BDRVBlkdebugState *s = bs->opaque;
QemuOpts *opts;
Error *local_err = NULL;
const char *config;
uint64_t align;
int ret;
@@ -366,8 +400,8 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
}
/* Read rules from config file or command line options */
s->config_file = g_strdup(qemu_opt_get(opts, "config"));
ret = read_config(s, s->config_file, options, errp);
config = qemu_opt_get(opts, "config");
ret = read_config(s, config, options, errp);
if (ret) {
goto out;
}
@@ -375,20 +409,20 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
/* Set initial state */
s->state = 1;
/* Open the image file */
bs->file = bdrv_open_child(qemu_opt_get(opts, "x-image"), options, "image",
bs, &child_file, false, &local_err);
if (local_err) {
ret = -EINVAL;
/* Open the backing file */
assert(bs->file == NULL);
ret = bdrv_open_image(&bs->file, qemu_opt_get(opts, "x-image"), options, "image",
flags | BDRV_O_PROTOCOL, false, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
goto out;
}
/* Set request alignment */
align = qemu_opt_get_size(opts, "align", 0);
if (align < INT_MAX && is_power_of_2(align)) {
s->align = align;
} else if (align) {
align = qemu_opt_get_size(opts, "align", bs->request_alignment);
if (align > 0 && align < INT_MAX && !(align & (align - 1))) {
bs->request_alignment = align;
} else {
error_setg(errp, "Invalid alignment");
ret = -EINVAL;
goto fail_unref;
@@ -398,11 +432,8 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
goto out;
fail_unref:
bdrv_unref_child(bs, bs->file);
bdrv_unref(bs->file);
out:
if (ret < 0) {
g_free(s->config_file);
}
qemu_opts_del(opts);
return ret;
}
@@ -412,40 +443,44 @@ static void error_callback_bh(void *opaque)
struct BlkdebugAIOCB *acb = opaque;
qemu_bh_delete(acb->bh);
acb->common.cb(acb->common.opaque, acb->ret);
qemu_aio_unref(acb);
qemu_aio_release(acb);
}
static BlockAIOCB *inject_error(BlockDriverState *bs,
BlockCompletionFunc *cb, void *opaque, BlkdebugRule *rule)
static void blkdebug_aio_cancel(BlockDriverAIOCB *blockacb)
{
BlkdebugAIOCB *acb = container_of(blockacb, BlkdebugAIOCB, common);
qemu_aio_release(acb);
}
static BlockDriverAIOCB *inject_error(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque, BlkdebugRule *rule)
{
BDRVBlkdebugState *s = bs->opaque;
int error = rule->options.inject.error;
struct BlkdebugAIOCB *acb;
QEMUBH *bh;
bool immediately = rule->options.inject.immediately;
if (rule->options.inject.once) {
QSIMPLEQ_REMOVE(&s->active_rules, rule, BlkdebugRule, active_next);
remove_rule(rule);
QSIMPLEQ_INIT(&s->active_rules);
}
if (immediately) {
if (rule->options.inject.immediately) {
return NULL;
}
acb = qemu_aio_get(&blkdebug_aiocb_info, bs, cb, opaque);
acb->ret = -error;
bh = aio_bh_new(bdrv_get_aio_context(bs), error_callback_bh, acb);
bh = qemu_bh_new(error_callback_bh, acb);
acb->bh = bh;
qemu_bh_schedule(bh);
return &acb->common;
}
static BlockAIOCB *blkdebug_aio_readv(BlockDriverState *bs,
static BlockDriverAIOCB *blkdebug_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockCompletionFunc *cb, void *opaque)
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugRule *rule = NULL;
@@ -462,13 +497,12 @@ static BlockAIOCB *blkdebug_aio_readv(BlockDriverState *bs,
return inject_error(bs, cb, opaque, rule);
}
return bdrv_aio_readv(bs->file, sector_num, qiov, nb_sectors,
cb, opaque);
return bdrv_aio_readv(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
}
static BlockAIOCB *blkdebug_aio_writev(BlockDriverState *bs,
static BlockDriverAIOCB *blkdebug_aio_writev(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockCompletionFunc *cb, void *opaque)
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugRule *rule = NULL;
@@ -485,27 +519,7 @@ static BlockAIOCB *blkdebug_aio_writev(BlockDriverState *bs,
return inject_error(bs, cb, opaque, rule);
}
return bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors,
cb, opaque);
}
static BlockAIOCB *blkdebug_aio_flush(BlockDriverState *bs,
BlockCompletionFunc *cb, void *opaque)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugRule *rule = NULL;
QSIMPLEQ_FOREACH(rule, &s->active_rules, active_next) {
if (rule->options.inject.sector == -1) {
break;
}
}
if (rule && rule->options.inject.error) {
return inject_error(bs, cb, opaque, rule);
}
return bdrv_aio_flush(bs->file->bs, cb, opaque);
return bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
}
@@ -515,13 +529,11 @@ static void blkdebug_close(BlockDriverState *bs)
BlkdebugRule *rule, *next;
int i;
for (i = 0; i < BLKDBG__MAX; i++) {
for (i = 0; i < BLKDBG_EVENT_MAX; i++) {
QLIST_FOREACH_SAFE(rule, &s->rules[i], next, next) {
remove_rule(rule);
}
}
g_free(s->config_file);
}
static void suspend_request(BlockDriverState *bs, BlkdebugRule *rule)
@@ -537,13 +549,9 @@ static void suspend_request(BlockDriverState *bs, BlkdebugRule *rule)
remove_rule(rule);
QLIST_INSERT_HEAD(&s->suspended_reqs, &r, next);
if (!qtest_enabled()) {
printf("blkdebug: Suspended request '%s'\n", r.tag);
}
printf("blkdebug: Suspended request '%s'\n", r.tag);
qemu_coroutine_yield();
if (!qtest_enabled()) {
printf("blkdebug: Resuming request '%s'\n", r.tag);
}
printf("blkdebug: Resuming request '%s'\n", r.tag);
QLIST_REMOVE(&r, next);
g_free(r.tag);
@@ -580,13 +588,13 @@ static bool process_rule(BlockDriverState *bs, struct BlkdebugRule *rule,
return injected;
}
static void blkdebug_debug_event(BlockDriverState *bs, BlkdebugEvent event)
static void blkdebug_debug_event(BlockDriverState *bs, BlkDebugEvent event)
{
BDRVBlkdebugState *s = bs->opaque;
struct BlkdebugRule *rule, *next;
bool injected;
assert((int)event >= 0 && event < BLKDBG__MAX);
assert((int)event >= 0 && event < BLKDBG_EVENT_MAX);
injected = false;
s->new_state = s->state;
@@ -601,7 +609,7 @@ static int blkdebug_debug_breakpoint(BlockDriverState *bs, const char *event,
{
BDRVBlkdebugState *s = bs->opaque;
struct BlkdebugRule *rule;
BlkdebugEvent blkdebug_event;
BlkDebugEvent blkdebug_event;
if (get_event_by_name(event, &blkdebug_event) < 0) {
return -ENOENT;
@@ -628,7 +636,7 @@ static int blkdebug_debug_resume(BlockDriverState *bs, const char *tag)
QLIST_FOREACH_SAFE(r, &s->suspended_reqs, next, next) {
if (!strcmp(r->tag, tag)) {
qemu_coroutine_enter(r->co);
qemu_coroutine_enter(r->co, NULL);
return 0;
}
}
@@ -643,7 +651,7 @@ static int blkdebug_debug_remove_breakpoint(BlockDriverState *bs,
BlkdebugRule *rule, *next;
int i, ret = -ENOENT;
for (i = 0; i < BLKDBG__MAX; i++) {
for (i = 0; i < BLKDBG_EVENT_MAX; i++) {
QLIST_FOREACH_SAFE(rule, &s->rules[i], next, next) {
if (rule->action == ACTION_SUSPEND &&
!strcmp(rule->options.suspend.tag, tag)) {
@@ -654,7 +662,7 @@ static int blkdebug_debug_remove_breakpoint(BlockDriverState *bs,
}
QLIST_FOREACH_SAFE(r, &s->suspended_reqs, next, r_next) {
if (!strcmp(r->tag, tag)) {
qemu_coroutine_enter(r->co);
qemu_coroutine_enter(r->co, NULL);
ret = 0;
}
}
@@ -676,71 +684,7 @@ static bool blkdebug_debug_is_suspended(BlockDriverState *bs, const char *tag)
static int64_t blkdebug_getlength(BlockDriverState *bs)
{
return bdrv_getlength(bs->file->bs);
}
static int blkdebug_truncate(BlockDriverState *bs, int64_t offset)
{
return bdrv_truncate(bs->file->bs, offset);
}
static void blkdebug_refresh_filename(BlockDriverState *bs, QDict *options)
{
BDRVBlkdebugState *s = bs->opaque;
QDict *opts;
const QDictEntry *e;
bool force_json = false;
for (e = qdict_first(options); e; e = qdict_next(options, e)) {
if (strcmp(qdict_entry_key(e), "config") &&
strcmp(qdict_entry_key(e), "x-image"))
{
force_json = true;
break;
}
}
if (force_json && !bs->file->bs->full_open_options) {
/* The config file cannot be recreated, so creating a plain filename
* is impossible */
return;
}
if (!force_json && bs->file->bs->exact_filename[0]) {
snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"blkdebug:%s:%s", s->config_file ?: "",
bs->file->bs->exact_filename);
}
opts = qdict_new();
qdict_put_obj(opts, "driver", QOBJECT(qstring_from_str("blkdebug")));
QINCREF(bs->file->bs->full_open_options);
qdict_put_obj(opts, "image", QOBJECT(bs->file->bs->full_open_options));
for (e = qdict_first(options); e; e = qdict_next(options, e)) {
if (strcmp(qdict_entry_key(e), "x-image")) {
qobject_incref(qdict_entry_value(e));
qdict_put_obj(opts, qdict_entry_key(e), qdict_entry_value(e));
}
}
bs->full_open_options = opts;
}
static void blkdebug_refresh_limits(BlockDriverState *bs, Error **errp)
{
BDRVBlkdebugState *s = bs->opaque;
if (s->align) {
bs->bl.request_alignment = s->align;
}
}
static int blkdebug_reopen_prepare(BDRVReopenState *reopen_state,
BlockReopenQueue *queue, Error **errp)
{
return 0;
return bdrv_getlength(bs->file);
}
static BlockDriver bdrv_blkdebug = {
@@ -751,15 +695,10 @@ static BlockDriver bdrv_blkdebug = {
.bdrv_parse_filename = blkdebug_parse_filename,
.bdrv_file_open = blkdebug_open,
.bdrv_close = blkdebug_close,
.bdrv_reopen_prepare = blkdebug_reopen_prepare,
.bdrv_getlength = blkdebug_getlength,
.bdrv_truncate = blkdebug_truncate,
.bdrv_refresh_filename = blkdebug_refresh_filename,
.bdrv_refresh_limits = blkdebug_refresh_limits,
.bdrv_aio_readv = blkdebug_aio_readv,
.bdrv_aio_writev = blkdebug_aio_writev,
.bdrv_aio_flush = blkdebug_aio_flush,
.bdrv_debug_event = blkdebug_debug_event,
.bdrv_debug_breakpoint = blkdebug_debug_breakpoint,


@@ -1,160 +0,0 @@
/*
* Block protocol for record/replay
*
* Copyright (c) 2010-2016 Institute for System Programming
* of the Russian Academy of Sciences.
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "block/block_int.h"
#include "sysemu/replay.h"
#include "qapi/error.h"
typedef struct Request {
Coroutine *co;
QEMUBH *bh;
} Request;
/* Next request id.
This counter is global, because requests from different
block devices should not get overlapping ids. */
static uint64_t request_id;
static int blkreplay_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
Error *local_err = NULL;
int ret;
/* Open the image file */
bs->file = bdrv_open_child(NULL, options, "image",
bs, &child_file, false, &local_err);
if (local_err) {
ret = -EINVAL;
error_propagate(errp, local_err);
goto fail;
}
ret = 0;
fail:
if (ret < 0) {
bdrv_unref_child(bs, bs->file);
}
return ret;
}
static void blkreplay_close(BlockDriverState *bs)
{
}
static int64_t blkreplay_getlength(BlockDriverState *bs)
{
return bdrv_getlength(bs->file->bs);
}
/* This bh is used for synchronization of return from coroutines.
It continues yielded coroutine which then finishes its execution.
BH is called adjusted to some replay checkpoint, therefore
record and replay will always finish coroutines deterministically.
*/
static void blkreplay_bh_cb(void *opaque)
{
Request *req = opaque;
qemu_coroutine_enter(req->co);
qemu_bh_delete(req->bh);
g_free(req);
}
static void block_request_create(uint64_t reqid, BlockDriverState *bs,
Coroutine *co)
{
Request *req = g_new(Request, 1);
*req = (Request) {
.co = co,
.bh = aio_bh_new(bdrv_get_aio_context(bs), blkreplay_bh_cb, req),
};
replay_block_event(req->bh, reqid);
}
static int coroutine_fn blkreplay_co_preadv(BlockDriverState *bs,
uint64_t offset, uint64_t bytes, QEMUIOVector *qiov, int flags)
{
uint64_t reqid = request_id++;
int ret = bdrv_co_preadv(bs->file, offset, bytes, qiov, flags);
block_request_create(reqid, bs, qemu_coroutine_self());
qemu_coroutine_yield();
return ret;
}
static int coroutine_fn blkreplay_co_pwritev(BlockDriverState *bs,
uint64_t offset, uint64_t bytes, QEMUIOVector *qiov, int flags)
{
uint64_t reqid = request_id++;
int ret = bdrv_co_pwritev(bs->file, offset, bytes, qiov, flags);
block_request_create(reqid, bs, qemu_coroutine_self());
qemu_coroutine_yield();
return ret;
}
static int coroutine_fn blkreplay_co_pwrite_zeroes(BlockDriverState *bs,
int64_t offset, int count, BdrvRequestFlags flags)
{
uint64_t reqid = request_id++;
int ret = bdrv_co_pwrite_zeroes(bs->file, offset, count, flags);
block_request_create(reqid, bs, qemu_coroutine_self());
qemu_coroutine_yield();
return ret;
}
static int coroutine_fn blkreplay_co_pdiscard(BlockDriverState *bs,
int64_t offset, int count)
{
uint64_t reqid = request_id++;
int ret = bdrv_co_pdiscard(bs->file->bs, offset, count);
block_request_create(reqid, bs, qemu_coroutine_self());
qemu_coroutine_yield();
return ret;
}
static int coroutine_fn blkreplay_co_flush(BlockDriverState *bs)
{
uint64_t reqid = request_id++;
int ret = bdrv_co_flush(bs->file->bs);
block_request_create(reqid, bs, qemu_coroutine_self());
qemu_coroutine_yield();
return ret;
}
static BlockDriver bdrv_blkreplay = {
.format_name = "blkreplay",
.protocol_name = "blkreplay",
.instance_size = 0,
.bdrv_file_open = blkreplay_open,
.bdrv_close = blkreplay_close,
.bdrv_getlength = blkreplay_getlength,
.bdrv_co_preadv = blkreplay_co_preadv,
.bdrv_co_pwritev = blkreplay_co_pwritev,
.bdrv_co_pwrite_zeroes = blkreplay_co_pwrite_zeroes,
.bdrv_co_pdiscard = blkreplay_co_pdiscard,
.bdrv_co_flush = blkreplay_co_flush,
};
static void bdrv_blkreplay_init(void)
{
bdrv_register(&bdrv_blkreplay);
}
block_init(bdrv_blkreplay_init);

block/blkverify.c

@@ -7,21 +7,17 @@
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include <stdarg.h>
#include "qemu/sockets.h" /* for EINPROGRESS on Windows */
#include "block/block_int.h"
#include "qapi/qmp/qdict.h"
#include "qapi/qmp/qstring.h"
#include "qemu/cutils.h"
typedef struct {
BdrvChild *test_file;
BlockDriverState *test_file;
} BDRVBlkverifyState;
typedef struct BlkverifyAIOCB BlkverifyAIOCB;
struct BlkverifyAIOCB {
BlockAIOCB common;
BlockDriverAIOCB common;
QEMUBH *bh;
/* Request metadata */
@@ -31,6 +27,7 @@ struct BlkverifyAIOCB {
int ret; /* first completed request's result */
unsigned int done; /* completion counter */
bool *finished; /* completion signal for cancel */
QEMUIOVector *qiov; /* user I/O vector */
QEMUIOVector raw_qiov; /* cloned I/O vector for raw file */
@@ -39,8 +36,21 @@ struct BlkverifyAIOCB {
void (*verify)(BlkverifyAIOCB *acb);
};
static void blkverify_aio_cancel(BlockDriverAIOCB *blockacb)
{
BlkverifyAIOCB *acb = (BlkverifyAIOCB *)blockacb;
bool finished = false;
/* Wait until request completes, invokes its callback, and frees itself */
acb->finished = &finished;
while (!finished) {
qemu_aio_wait();
}
}
static const AIOCBInfo blkverify_aiocb_info = {
.aiocb_size = sizeof(BlkverifyAIOCB),
.cancel = blkverify_aio_cancel,
};
static void GCC_FMT_ATTR(2, 3) blkverify_err(BlkverifyAIOCB *acb,
@@ -125,30 +135,26 @@ static int blkverify_open(BlockDriverState *bs, QDict *options, int flags,
}
/* Open the raw file */
bs->file = bdrv_open_child(qemu_opt_get(opts, "x-raw"), options, "raw",
bs, &child_file, false, &local_err);
if (local_err) {
ret = -EINVAL;
assert(bs->file == NULL);
ret = bdrv_open_image(&bs->file, qemu_opt_get(opts, "x-raw"), options,
"raw", flags | BDRV_O_PROTOCOL, false, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
goto fail;
}
/* Open the test file */
s->test_file = bdrv_open_child(qemu_opt_get(opts, "x-image"), options,
"test", bs, &child_format, false,
&local_err);
if (local_err) {
ret = -EINVAL;
assert(s->test_file == NULL);
ret = bdrv_open_image(&s->test_file, qemu_opt_get(opts, "x-image"), options,
"test", flags, false, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
s->test_file = NULL;
goto fail;
}
ret = 0;
fail:
if (ret < 0) {
bdrv_unref_child(bs, bs->file);
}
qemu_opts_del(opts);
return ret;
}
@@ -156,7 +162,7 @@ static void blkverify_close(BlockDriverState *bs)
{
BDRVBlkverifyState *s = bs->opaque;
bdrv_unref_child(bs, s->test_file);
bdrv_unref(s->test_file);
s->test_file = NULL;
}
@@ -164,13 +170,13 @@ static int64_t blkverify_getlength(BlockDriverState *bs)
{
BDRVBlkverifyState *s = bs->opaque;
return bdrv_getlength(s->test_file->bs);
return bdrv_getlength(s->test_file);
}
static BlkverifyAIOCB *blkverify_aio_get(BlockDriverState *bs, bool is_write,
int64_t sector_num, QEMUIOVector *qiov,
int nb_sectors,
BlockCompletionFunc *cb,
BlockDriverCompletionFunc *cb,
void *opaque)
{
BlkverifyAIOCB *acb = qemu_aio_get(&blkverify_aiocb_info, bs, cb, opaque);
@@ -184,6 +190,7 @@ static BlkverifyAIOCB *blkverify_aio_get(BlockDriverState *bs, bool is_write,
acb->qiov = qiov;
acb->buf = NULL;
acb->verify = NULL;
acb->finished = NULL;
return acb;
}
@@ -197,7 +204,10 @@ static void blkverify_aio_bh(void *opaque)
qemu_vfree(acb->buf);
}
acb->common.cb(acb->common.opaque, acb->ret);
qemu_aio_unref(acb);
if (acb->finished) {
*acb->finished = true;
}
qemu_aio_release(acb);
}
static void blkverify_aio_cb(void *opaque, int ret)
@@ -218,8 +228,7 @@ static void blkverify_aio_cb(void *opaque, int ret)
acb->verify(acb);
}
acb->bh = aio_bh_new(bdrv_get_aio_context(acb->common.bs),
blkverify_aio_bh, acb);
acb->bh = qemu_bh_new(blkverify_aio_bh, acb);
qemu_bh_schedule(acb->bh);
break;
}
@@ -234,16 +243,16 @@ static void blkverify_verify_readv(BlkverifyAIOCB *acb)
}
}
static BlockAIOCB *blkverify_aio_readv(BlockDriverState *bs,
static BlockDriverAIOCB *blkverify_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockCompletionFunc *cb, void *opaque)
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVBlkverifyState *s = bs->opaque;
BlkverifyAIOCB *acb = blkverify_aio_get(bs, false, sector_num, qiov,
nb_sectors, cb, opaque);
acb->verify = blkverify_verify_readv;
acb->buf = qemu_blockalign(bs->file->bs, qiov->size);
acb->buf = qemu_blockalign(bs->file, qiov->size);
qemu_iovec_init(&acb->raw_qiov, acb->qiov->niov);
qemu_iovec_clone(&acb->raw_qiov, qiov, acb->buf);
@@ -254,9 +263,9 @@ static BlockAIOCB *blkverify_aio_readv(BlockDriverState *bs,
return &acb->common;
}
static BlockAIOCB *blkverify_aio_writev(BlockDriverState *bs,
static BlockDriverAIOCB *blkverify_aio_writev(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockCompletionFunc *cb, void *opaque)
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVBlkverifyState *s = bs->opaque;
BlkverifyAIOCB *acb = blkverify_aio_get(bs, true, sector_num, qiov,
@@ -269,14 +278,14 @@ static BlockAIOCB *blkverify_aio_writev(BlockDriverState *bs,
return &acb->common;
}
static BlockAIOCB *blkverify_aio_flush(BlockDriverState *bs,
BlockCompletionFunc *cb,
void *opaque)
static BlockDriverAIOCB *blkverify_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb,
void *opaque)
{
BDRVBlkverifyState *s = bs->opaque;
/* Only flush test file, the raw file is not important */
return bdrv_aio_flush(s->test_file->bs, cb, opaque);
return bdrv_aio_flush(s->test_file, cb, opaque);
}
static bool blkverify_recurse_is_first_non_filter(BlockDriverState *bs,
@@ -284,63 +293,30 @@ static bool blkverify_recurse_is_first_non_filter(BlockDriverState *bs,
{
BDRVBlkverifyState *s = bs->opaque;
bool perm = bdrv_recurse_is_first_non_filter(bs->file->bs, candidate);
bool perm = bdrv_recurse_is_first_non_filter(bs->file, candidate);
if (perm) {
return true;
}
return bdrv_recurse_is_first_non_filter(s->test_file->bs, candidate);
}
static void blkverify_refresh_filename(BlockDriverState *bs, QDict *options)
{
BDRVBlkverifyState *s = bs->opaque;
/* bs->file->bs has already been refreshed */
bdrv_refresh_filename(s->test_file->bs);
if (bs->file->bs->full_open_options
&& s->test_file->bs->full_open_options)
{
QDict *opts = qdict_new();
qdict_put_obj(opts, "driver", QOBJECT(qstring_from_str("blkverify")));
QINCREF(bs->file->bs->full_open_options);
qdict_put_obj(opts, "raw", QOBJECT(bs->file->bs->full_open_options));
QINCREF(s->test_file->bs->full_open_options);
qdict_put_obj(opts, "test",
QOBJECT(s->test_file->bs->full_open_options));
bs->full_open_options = opts;
}
if (bs->file->bs->exact_filename[0]
&& s->test_file->bs->exact_filename[0])
{
snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"blkverify:%s:%s",
bs->file->bs->exact_filename,
s->test_file->bs->exact_filename);
}
return bdrv_recurse_is_first_non_filter(s->test_file, candidate);
}
static BlockDriver bdrv_blkverify = {
.format_name = "blkverify",
.protocol_name = "blkverify",
.instance_size = sizeof(BDRVBlkverifyState),
.format_name = "blkverify",
.protocol_name = "blkverify",
.instance_size = sizeof(BDRVBlkverifyState),
.bdrv_parse_filename = blkverify_parse_filename,
.bdrv_file_open = blkverify_open,
.bdrv_close = blkverify_close,
.bdrv_getlength = blkverify_getlength,
.bdrv_refresh_filename = blkverify_refresh_filename,
.bdrv_parse_filename = blkverify_parse_filename,
.bdrv_file_open = blkverify_open,
.bdrv_close = blkverify_close,
.bdrv_getlength = blkverify_getlength,
.bdrv_aio_readv = blkverify_aio_readv,
.bdrv_aio_writev = blkverify_aio_writev,
.bdrv_aio_flush = blkverify_aio_flush,
.bdrv_aio_readv = blkverify_aio_readv,
.bdrv_aio_writev = blkverify_aio_writev,
.bdrv_aio_flush = blkverify_aio_flush,
.is_filter = true,
.is_filter = true,
.bdrv_recurse_is_first_non_filter = blkverify_recurse_is_first_non_filter,
};
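For orientation, the exact-filename form built by the snprintf() in blkverify_refresh_filename() above is a colon-separated pair, raw file first and test file second, e.g. the hypothetical blkverify:/tmp/raw.img:/tmp/test.qcow2.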

File diff suppressed because it is too large.

block/bochs.c

@@ -22,12 +22,9 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "qemu/bswap.h"
/**************************************************************/
@@ -104,7 +101,7 @@ static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
struct bochs_header bochs;
int ret;
bs->read_only = true; /* no write support yet */
bs->read_only = 1; // no write support yet
ret = bdrv_pread(bs->file, 0, &bochs, sizeof(bochs));
if (ret < 0) {
@@ -134,11 +131,7 @@ static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
return -EFBIG;
}
s->catalog_bitmap = g_try_new(uint32_t, s->catalog_size);
if (s->catalog_size && s->catalog_bitmap == NULL) {
error_setg(errp, "Could not allocate memory for catalog");
return -ENOMEM;
}
s->catalog_bitmap = g_malloc(s->catalog_size * 4);
ret = bdrv_pread(bs->file, le32_to_cpu(bochs.header), s->catalog_bitmap,
s->catalog_size * 4);
@@ -188,25 +181,19 @@ fail:
return ret;
}
static void bochs_refresh_limits(BlockDriverState *bs, Error **errp)
{
bs->bl.request_alignment = BDRV_SECTOR_SIZE; /* No sub-sector I/O */
}
static int64_t seek_to_sector(BlockDriverState *bs, int64_t sector_num)
{
BDRVBochsState *s = bs->opaque;
uint64_t offset = sector_num * 512;
uint64_t extent_index, extent_offset, bitmap_offset;
char bitmap_entry;
int ret;
// seek to sector
extent_index = offset / s->extent_size;
extent_offset = (offset % s->extent_size) / 512;
if (s->catalog_bitmap[extent_index] == 0xffffffff) {
return 0; /* not allocated */
return -1; /* not allocated */
}
bitmap_offset = s->data_offset +
@@ -214,65 +201,47 @@ static int64_t seek_to_sector(BlockDriverState *bs, int64_t sector_num)
(s->extent_blocks + s->bitmap_blocks));
/* read in bitmap for current extent */
ret = bdrv_pread(bs->file, bitmap_offset + (extent_offset / 8),
&bitmap_entry, 1);
if (ret < 0) {
return ret;
if (bdrv_pread(bs->file, bitmap_offset + (extent_offset / 8),
&bitmap_entry, 1) != 1) {
return -1;
}
if (!((bitmap_entry >> (extent_offset % 8)) & 1)) {
return 0; /* not allocated */
return -1; /* not allocated */
}
return bitmap_offset + (512 * (s->bitmap_blocks + extent_offset));
}
static int coroutine_fn
bochs_co_preadv(BlockDriverState *bs, uint64_t offset, uint64_t bytes,
QEMUIOVector *qiov, int flags)
static int bochs_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
BDRVBochsState *s = bs->opaque;
uint64_t sector_num = offset >> BDRV_SECTOR_BITS;
int nb_sectors = bytes >> BDRV_SECTOR_BITS;
uint64_t bytes_done = 0;
QEMUIOVector local_qiov;
int ret;
assert((offset & (BDRV_SECTOR_SIZE - 1)) == 0);
assert((bytes & (BDRV_SECTOR_SIZE - 1)) == 0);
qemu_iovec_init(&local_qiov, qiov->niov);
qemu_co_mutex_lock(&s->lock);
while (nb_sectors > 0) {
int64_t block_offset = seek_to_sector(bs, sector_num);
if (block_offset < 0) {
ret = block_offset;
goto fail;
}
qemu_iovec_reset(&local_qiov);
qemu_iovec_concat(&local_qiov, qiov, bytes_done, 512);
if (block_offset > 0) {
ret = bdrv_co_preadv(bs->file, block_offset, 512,
&local_qiov, 0);
if (ret < 0) {
goto fail;
if (block_offset >= 0) {
ret = bdrv_pread(bs->file, block_offset, buf, 512);
if (ret != 512) {
return -1;
}
} else {
qemu_iovec_memset(&local_qiov, 0, 0, 512);
}
} else
memset(buf, 0, 512);
nb_sectors--;
sector_num++;
bytes_done += 512;
buf += 512;
}
return 0;
}
ret = 0;
fail:
static coroutine_fn int bochs_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVBochsState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = bochs_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
qemu_iovec_destroy(&local_qiov);
return ret;
}
@@ -287,8 +256,7 @@ static BlockDriver bdrv_bochs = {
.instance_size = sizeof(BDRVBochsState),
.bdrv_probe = bochs_probe,
.bdrv_open = bochs_open,
.bdrv_refresh_limits = bochs_refresh_limits,
.bdrv_co_preadv = bochs_co_preadv,
.bdrv_read = bochs_co_read,
.bdrv_close = bochs_close,
};

block/cloop.c

@@ -21,12 +21,9 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "qemu/bswap.h"
#include <zlib.h>
/* Maximum compressed block size */
@@ -66,7 +63,7 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
uint32_t offsets_size, max_compressed_block_size = 1, i;
int ret;
bs->read_only = true;
bs->read_only = 1;
/* read header */
ret = bdrv_pread(bs->file, 128, &s->block_size, 4);
@@ -75,7 +72,7 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
}
s->block_size = be32_to_cpu(s->block_size);
if (s->block_size % 512) {
error_setg(errp, "block_size %" PRIu32 " must be a multiple of 512",
error_setg(errp, "block_size %u must be a multiple of 512",
s->block_size);
return -EINVAL;
}
@@ -89,7 +86,7 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
* need a buffer this big.
*/
if (s->block_size > MAX_BLOCK_SIZE) {
error_setg(errp, "block_size %" PRIu32 " must be %u MB or less",
error_setg(errp, "block_size %u must be %u MB or less",
s->block_size,
MAX_BLOCK_SIZE / (1024 * 1024));
return -EINVAL;
@@ -104,7 +101,7 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
/* read offsets */
if (s->n_blocks > (UINT32_MAX - 1) / sizeof(uint64_t)) {
/* Prevent integer overflow */
error_setg(errp, "n_blocks %" PRIu32 " must be %zu or less",
error_setg(errp, "n_blocks %u must be %zu or less",
s->n_blocks,
(UINT32_MAX - 1) / sizeof(uint64_t));
return -EINVAL;
@@ -119,12 +116,7 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
"try increasing block size");
return -EINVAL;
}
s->offsets = g_try_malloc(offsets_size);
if (s->offsets == NULL) {
error_setg(errp, "Could not allocate offsets table");
return -ENOMEM;
}
s->offsets = g_malloc(offsets_size);
ret = bdrv_pread(bs->file, 128 + 4 + 4, s->offsets, offsets_size);
if (ret < 0) {
@@ -141,7 +133,7 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
if (s->offsets[i] < s->offsets[i - 1]) {
error_setg(errp, "offsets not monotonically increasing at "
"index %" PRIu32 ", image file is corrupt", i);
"index %u, image file is corrupt", i);
ret = -EINVAL;
goto fail;
}
@@ -154,8 +146,8 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
* ridiculous s->compressed_block allocation.
*/
if (size > 2 * MAX_BLOCK_SIZE) {
error_setg(errp, "invalid compressed block size at index %" PRIu32
", image file is corrupt", i);
error_setg(errp, "invalid compressed block size at index %u, "
"image file is corrupt", i);
ret = -EINVAL;
goto fail;
}
@@ -166,20 +158,8 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
}
/* initialize zlib engine */
s->compressed_block = g_try_malloc(max_compressed_block_size + 1);
if (s->compressed_block == NULL) {
error_setg(errp, "Could not allocate compressed_block");
ret = -ENOMEM;
goto fail;
}
s->uncompressed_block = g_try_malloc(s->block_size);
if (s->uncompressed_block == NULL) {
error_setg(errp, "Could not allocate uncompressed_block");
ret = -ENOMEM;
goto fail;
}
s->compressed_block = g_malloc(max_compressed_block_size + 1);
s->uncompressed_block = g_malloc(s->block_size);
if (inflateInit(&s->zstream) != Z_OK) {
ret = -EINVAL;
goto fail;
@@ -198,11 +178,6 @@ fail:
return ret;
}
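A rough illustration of why the n_blocks bound above matters. It assumes the offsets table is sized as (n_blocks + 1) * sizeof(uint64_t), which is what the (UINT32_MAX - 1) / sizeof(uint64_t) check suggests, and that offsets_size is the uint32_t declared in cloop_open; the n_blocks value below is made up. Without the check, a crafted header makes the computed size wrap when stored, leaving a far-too-small allocation for the reads that follow.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t n_blocks = 0x20000000;                              /* hypothetical, attacker-chosen */
    uint32_t offsets_size = (n_blocks + 1) * sizeof(uint64_t);   /* wraps when stored in 32 bits  */
    printf("offsets_size = %u\n", offsets_size);                 /* prints 8, not ~4 GiB          */
    return 0;
}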
static void cloop_refresh_limits(BlockDriverState *bs, Error **errp)
{
bs->bl.request_alignment = BDRV_SECTOR_SIZE; /* No sub-sector I/O */
}
static inline int cloop_read_block(BlockDriverState *bs, int block_num)
{
BDRVCloopState *s = bs->opaque;
@@ -211,8 +186,8 @@ static inline int cloop_read_block(BlockDriverState *bs, int block_num)
int ret;
uint32_t bytes = s->offsets[block_num + 1] - s->offsets[block_num];
ret = bdrv_pread(bs->file, s->offsets[block_num],
s->compressed_block, bytes);
ret = bdrv_pread(bs->file, s->offsets[block_num], s->compressed_block,
bytes);
if (ret != bytes) {
return -1;
}
@@ -235,38 +210,33 @@ static inline int cloop_read_block(BlockDriverState *bs, int block_num)
return 0;
}
static int coroutine_fn
cloop_co_preadv(BlockDriverState *bs, uint64_t offset, uint64_t bytes,
QEMUIOVector *qiov, int flags)
static int cloop_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
BDRVCloopState *s = bs->opaque;
uint64_t sector_num = offset >> BDRV_SECTOR_BITS;
int nb_sectors = bytes >> BDRV_SECTOR_BITS;
int ret, i;
assert((offset & (BDRV_SECTOR_SIZE - 1)) == 0);
assert((bytes & (BDRV_SECTOR_SIZE - 1)) == 0);
qemu_co_mutex_lock(&s->lock);
int i;
for (i = 0; i < nb_sectors; i++) {
void *data;
uint32_t sector_offset_in_block =
((sector_num + i) % s->sectors_per_block),
block_num = (sector_num + i) / s->sectors_per_block;
if (cloop_read_block(bs, block_num) != 0) {
ret = -EIO;
goto fail;
return -1;
}
data = s->uncompressed_block + sector_offset_in_block * 512;
qemu_iovec_from_buf(qiov, i * 512, data, 512);
memcpy(buf + i * 512,
s->uncompressed_block + sector_offset_in_block * 512, 512);
}
return 0;
}
ret = 0;
fail:
static coroutine_fn int cloop_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVCloopState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = cloop_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
@@ -284,8 +254,7 @@ static BlockDriver bdrv_cloop = {
.instance_size = sizeof(BDRVCloopState),
.bdrv_probe = cloop_probe,
.bdrv_open = cloop_open,
.bdrv_refresh_limits = cloop_refresh_limits,
.bdrv_co_preadv = cloop_co_preadv,
.bdrv_read = cloop_co_read,
.bdrv_close = cloop_close,
};

block/commit.c

@@ -12,14 +12,10 @@
*
*/
#include "qemu/osdep.h"
#include "trace.h"
#include "block/block_int.h"
#include "block/blockjob.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "qemu/ratelimit.h"
#include "sysemu/block-backend.h"
enum {
/*
@@ -36,114 +32,74 @@ typedef struct CommitBlockJob {
BlockJob common;
RateLimit limit;
BlockDriverState *active;
BlockBackend *top;
BlockBackend *base;
BlockDriverState *top;
BlockDriverState *base;
BlockdevOnError on_error;
int base_flags;
int orig_overlay_flags;
char *backing_file_str;
} CommitBlockJob;
static int coroutine_fn commit_populate(BlockBackend *bs, BlockBackend *base,
static int coroutine_fn commit_populate(BlockDriverState *bs,
BlockDriverState *base,
int64_t sector_num, int nb_sectors,
void *buf)
{
int ret = 0;
QEMUIOVector qiov;
struct iovec iov = {
.iov_base = buf,
.iov_len = nb_sectors * BDRV_SECTOR_SIZE,
};
qemu_iovec_init_external(&qiov, &iov, 1);
ret = blk_co_preadv(bs, sector_num * BDRV_SECTOR_SIZE,
qiov.size, &qiov, 0);
if (ret < 0) {
ret = bdrv_read(bs, sector_num, buf, nb_sectors);
if (ret) {
return ret;
}
ret = blk_co_pwritev(base, sector_num * BDRV_SECTOR_SIZE,
qiov.size, &qiov, 0);
if (ret < 0) {
ret = bdrv_write(base, sector_num, buf, nb_sectors);
if (ret) {
return ret;
}
return 0;
}
typedef struct {
int ret;
} CommitCompleteData;
static void commit_complete(BlockJob *job, void *opaque)
{
CommitBlockJob *s = container_of(job, CommitBlockJob, common);
CommitCompleteData *data = opaque;
BlockDriverState *active = s->active;
BlockDriverState *top = blk_bs(s->top);
BlockDriverState *base = blk_bs(s->base);
BlockDriverState *overlay_bs = bdrv_find_overlay(active, top);
int ret = data->ret;
if (!block_job_is_cancelled(&s->common) && ret == 0) {
/* success */
ret = bdrv_drop_intermediate(active, top, base, s->backing_file_str);
}
/* restore base open flags here if appropriate (e.g., change the base back
* to r/o). These reopens do not need to be atomic, since we won't abort
* even on failure here */
if (s->base_flags != bdrv_get_flags(base)) {
bdrv_reopen(base, s->base_flags, NULL);
}
if (overlay_bs && s->orig_overlay_flags != bdrv_get_flags(overlay_bs)) {
bdrv_reopen(overlay_bs, s->orig_overlay_flags, NULL);
}
g_free(s->backing_file_str);
blk_unref(s->top);
blk_unref(s->base);
block_job_completed(&s->common, ret);
g_free(data);
}
static void coroutine_fn commit_run(void *opaque)
{
CommitBlockJob *s = opaque;
CommitCompleteData *data;
BlockDriverState *active = s->active;
BlockDriverState *top = s->top;
BlockDriverState *base = s->base;
BlockDriverState *overlay_bs;
int64_t sector_num, end;
uint64_t delay_ns = 0;
int ret = 0;
int n = 0;
void *buf = NULL;
void *buf;
int bytes_written = 0;
int64_t base_len;
ret = s->common.len = blk_getlength(s->top);
ret = s->common.len = bdrv_getlength(top);
if (s->common.len < 0) {
goto out;
goto exit_restore_reopen;
}
ret = base_len = blk_getlength(s->base);
ret = base_len = bdrv_getlength(base);
if (base_len < 0) {
goto out;
goto exit_restore_reopen;
}
if (base_len < s->common.len) {
ret = blk_truncate(s->base, s->common.len);
ret = bdrv_truncate(base, s->common.len);
if (ret) {
goto out;
goto exit_restore_reopen;
}
}
end = s->common.len >> BDRV_SECTOR_BITS;
buf = blk_blockalign(s->top, COMMIT_BUFFER_SIZE);
buf = qemu_blockalign(top, COMMIT_BUFFER_SIZE);
for (sector_num = 0; sector_num < end; sector_num += n) {
uint64_t delay_ns = 0;
bool copy;
wait:
/* Note that even when no rate limit is applied we need to yield
* with no pending I/O here so that bdrv_drain_all() returns.
*/
@@ -152,21 +108,26 @@ static void coroutine_fn commit_run(void *opaque)
break;
}
/* Copy if allocated above the base */
ret = bdrv_is_allocated_above(blk_bs(s->top), blk_bs(s->base),
sector_num,
ret = bdrv_is_allocated_above(top, base, sector_num,
COMMIT_BUFFER_SIZE / BDRV_SECTOR_SIZE,
&n);
copy = (ret == 1);
trace_commit_one_iteration(s, sector_num, n, ret);
if (copy) {
ret = commit_populate(s->top, s->base, sector_num, n, buf);
if (s->common.speed) {
delay_ns = ratelimit_calculate_delay(&s->limit, n);
if (delay_ns > 0) {
goto wait;
}
}
ret = commit_populate(top, base, sector_num, n, buf);
bytes_written += n * BDRV_SECTOR_SIZE;
}
if (ret < 0) {
BlockErrorAction action =
block_job_error_action(&s->common, false, s->on_error, -ret);
if (action == BLOCK_ERROR_ACTION_REPORT) {
goto out;
if (s->on_error == BLOCKDEV_ON_ERROR_STOP ||
s->on_error == BLOCKDEV_ON_ERROR_REPORT||
(s->on_error == BLOCKDEV_ON_ERROR_ENOSPC && ret == -ENOSPC)) {
goto exit_free_buf;
} else {
n = 0;
continue;
@@ -174,20 +135,31 @@ static void coroutine_fn commit_run(void *opaque)
}
/* Publish progress */
s->common.offset += n * BDRV_SECTOR_SIZE;
if (copy && s->common.speed) {
delay_ns = ratelimit_calculate_delay(&s->limit, n);
}
}
ret = 0;
out:
if (!block_job_is_cancelled(&s->common) && sector_num == end) {
/* success */
ret = bdrv_drop_intermediate(active, top, base);
}
exit_free_buf:
qemu_vfree(buf);
data = g_malloc(sizeof(*data));
data->ret = ret;
block_job_defer_to_main_loop(&s->common, commit_complete, data);
exit_restore_reopen:
/* restore base open flags here if appropriate (e.g., change the base back
* to r/o). These reopens do not need to be atomic, since we won't abort
* even on failure here */
if (s->base_flags != bdrv_get_flags(base)) {
bdrv_reopen(base, s->base_flags, NULL);
}
overlay_bs = bdrv_find_overlay(active, top);
if (overlay_bs && s->orig_overlay_flags != bdrv_get_flags(overlay_bs)) {
bdrv_reopen(overlay_bs, s->orig_overlay_flags, NULL);
}
block_job_completed(&s->common, ret);
}
static void commit_set_speed(BlockJob *job, int64_t speed, Error **errp)
@@ -195,7 +167,7 @@ static void commit_set_speed(BlockJob *job, int64_t speed, Error **errp)
CommitBlockJob *s = container_of(job, CommitBlockJob, common);
if (speed < 0) {
error_setg(errp, QERR_INVALID_PARAMETER, "speed");
error_set(errp, QERR_INVALID_PARAMETER, "speed");
return;
}
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
@@ -207,10 +179,10 @@ static const BlockJobDriver commit_job_driver = {
.set_speed = commit_set_speed,
};
void commit_start(const char *job_id, BlockDriverState *bs,
BlockDriverState *base, BlockDriverState *top, int64_t speed,
BlockdevOnError on_error, BlockCompletionFunc *cb,
void *opaque, const char *backing_file_str, Error **errp)
void commit_start(BlockDriverState *bs, BlockDriverState *base,
BlockDriverState *top, int64_t speed,
BlockdevOnError on_error, BlockDriverCompletionFunc *cb,
void *opaque, Error **errp)
{
CommitBlockJob *s;
BlockReopenQueue *reopen_queue = NULL;
@@ -219,6 +191,13 @@ void commit_start(const char *job_id, BlockDriverState *bs,
BlockDriverState *overlay_bs;
Error *local_err = NULL;
if ((on_error == BLOCKDEV_ON_ERROR_STOP ||
on_error == BLOCKDEV_ON_ERROR_ENOSPC) &&
!bdrv_iostatus_is_enabled(bs)) {
error_set(errp, QERR_INVALID_PARAMETER_COMBINATION);
return;
}
assert(top != bs);
if (top == base) {
error_setg(errp, "Invalid files for merge: top and base are the same");
@@ -232,171 +211,42 @@ void commit_start(const char *job_id, BlockDriverState *bs,
return;
}
s = block_job_create(job_id, &commit_job_driver, bs, speed,
cb, opaque, errp);
if (!s) {
return;
}
orig_base_flags = bdrv_get_flags(base);
orig_overlay_flags = bdrv_get_flags(overlay_bs);
/* convert base & overlay_bs to r/w, if necessary */
if (!(orig_base_flags & BDRV_O_RDWR)) {
reopen_queue = bdrv_reopen_queue(reopen_queue, base, NULL,
reopen_queue = bdrv_reopen_queue(reopen_queue, base,
orig_base_flags | BDRV_O_RDWR);
}
if (!(orig_overlay_flags & BDRV_O_RDWR)) {
reopen_queue = bdrv_reopen_queue(reopen_queue, overlay_bs, NULL,
reopen_queue = bdrv_reopen_queue(reopen_queue, overlay_bs,
orig_overlay_flags | BDRV_O_RDWR);
}
if (reopen_queue) {
bdrv_reopen_multiple(reopen_queue, &local_err);
if (local_err != NULL) {
error_propagate(errp, local_err);
block_job_unref(&s->common);
return;
}
}
s->base = blk_new();
blk_insert_bs(s->base, base);
s->top = blk_new();
blk_insert_bs(s->top, top);
s = block_job_create(&commit_job_driver, bs, speed, cb, opaque, errp);
if (!s) {
return;
}
s->base = base;
s->top = top;
s->active = bs;
s->base_flags = orig_base_flags;
s->orig_overlay_flags = orig_overlay_flags;
s->backing_file_str = g_strdup(backing_file_str);
s->on_error = on_error;
s->common.co = qemu_coroutine_create(commit_run, s);
s->common.co = qemu_coroutine_create(commit_run);
trace_commit_start(bs, base, top, s, s->common.co, opaque);
qemu_coroutine_enter(s->common.co);
}
#define COMMIT_BUF_SECTORS 2048
/* commit COW file into the raw image */
int bdrv_commit(BlockDriverState *bs)
{
BlockBackend *src, *backing;
BlockDriver *drv = bs->drv;
int64_t sector, total_sectors, length, backing_length;
int n, ro, open_flags;
int ret = 0;
uint8_t *buf = NULL;
if (!drv)
return -ENOMEDIUM;
if (!bs->backing) {
return -ENOTSUP;
}
if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_COMMIT_SOURCE, NULL) ||
bdrv_op_is_blocked(bs->backing->bs, BLOCK_OP_TYPE_COMMIT_TARGET, NULL)) {
return -EBUSY;
}
ro = bs->backing->bs->read_only;
open_flags = bs->backing->bs->open_flags;
if (ro) {
if (bdrv_reopen(bs->backing->bs, open_flags | BDRV_O_RDWR, NULL)) {
return -EACCES;
}
}
src = blk_new();
blk_insert_bs(src, bs);
backing = blk_new();
blk_insert_bs(backing, bs->backing->bs);
length = blk_getlength(src);
if (length < 0) {
ret = length;
goto ro_cleanup;
}
backing_length = blk_getlength(backing);
if (backing_length < 0) {
ret = backing_length;
goto ro_cleanup;
}
/* If our top snapshot is larger than the backing file image,
* grow the backing file image if possible. If not possible,
* we must return an error */
if (length > backing_length) {
ret = blk_truncate(backing, length);
if (ret < 0) {
goto ro_cleanup;
}
}
total_sectors = length >> BDRV_SECTOR_BITS;
/* blk_try_blockalign() for src will choose an alignment that works for
* backing as well, so no need to compare the alignment manually. */
buf = blk_try_blockalign(src, COMMIT_BUF_SECTORS * BDRV_SECTOR_SIZE);
if (buf == NULL) {
ret = -ENOMEM;
goto ro_cleanup;
}
for (sector = 0; sector < total_sectors; sector += n) {
ret = bdrv_is_allocated(bs, sector, COMMIT_BUF_SECTORS, &n);
if (ret < 0) {
goto ro_cleanup;
}
if (ret) {
ret = blk_pread(src, sector * BDRV_SECTOR_SIZE, buf,
n * BDRV_SECTOR_SIZE);
if (ret < 0) {
goto ro_cleanup;
}
ret = blk_pwrite(backing, sector * BDRV_SECTOR_SIZE, buf,
n * BDRV_SECTOR_SIZE, 0);
if (ret < 0) {
goto ro_cleanup;
}
}
}
if (drv->bdrv_make_empty) {
ret = drv->bdrv_make_empty(bs);
if (ret < 0) {
goto ro_cleanup;
}
blk_flush(src);
}
/*
* Make sure all data we wrote to the backing device is actually
* stable on disk.
*/
blk_flush(backing);
ret = 0;
ro_cleanup:
qemu_vfree(buf);
blk_unref(src);
blk_unref(backing);
if (ro) {
/* ignoring error return here */
bdrv_reopen(bs->backing->bs, open_flags & ~BDRV_O_RDWR, NULL);
}
return ret;
qemu_coroutine_enter(s->common.co, s);
}

block/cow.c (new file, 432 lines)

@@ -0,0 +1,432 @@
/*
* Block driver for the COW format
*
* Copyright (c) 2004 Fabrice Bellard
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/module.h"
/**************************************************************/
/* COW block driver using file system holes */
/* user mode linux compatible COW file */
#define COW_MAGIC 0x4f4f4f4d /* MOOO */
#define COW_VERSION 2
struct cow_header_v2 {
uint32_t magic;
uint32_t version;
char backing_file[1024];
int32_t mtime;
uint64_t size;
uint32_t sectorsize;
};
typedef struct BDRVCowState {
CoMutex lock;
int64_t cow_sectors_offset;
} BDRVCowState;
static int cow_probe(const uint8_t *buf, int buf_size, const char *filename)
{
const struct cow_header_v2 *cow_header = (const void *)buf;
if (buf_size >= sizeof(struct cow_header_v2) &&
be32_to_cpu(cow_header->magic) == COW_MAGIC &&
be32_to_cpu(cow_header->version) == COW_VERSION)
return 100;
else
return 0;
}
static int cow_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVCowState *s = bs->opaque;
struct cow_header_v2 cow_header;
int bitmap_size;
int64_t size;
int ret;
/* see if it is a cow image */
ret = bdrv_pread(bs->file, 0, &cow_header, sizeof(cow_header));
if (ret < 0) {
goto fail;
}
if (be32_to_cpu(cow_header.magic) != COW_MAGIC) {
error_setg(errp, "Image not in COW format");
ret = -EINVAL;
goto fail;
}
if (be32_to_cpu(cow_header.version) != COW_VERSION) {
char version[64];
snprintf(version, sizeof(version),
"COW version %d", cow_header.version);
error_set(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "cow", version);
ret = -ENOTSUP;
goto fail;
}
/* cow image found */
size = be64_to_cpu(cow_header.size);
bs->total_sectors = size / 512;
pstrcpy(bs->backing_file, sizeof(bs->backing_file),
cow_header.backing_file);
bitmap_size = ((bs->total_sectors + 7) >> 3) + sizeof(cow_header);
s->cow_sectors_offset = (bitmap_size + 511) & ~511;
qemu_co_mutex_init(&s->lock);
return 0;
fail:
return ret;
}
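To make the offset arithmetic in cow_open() concrete, here is a small stand-alone sketch assuming the cow_header_v2 layout shown above and a hypothetical 1 GiB image; the exact byte value depends on host struct padding, the point is that the allocation bitmap follows the header and guest data starts at the next 512-byte boundary.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct cow_header_v2 {
    uint32_t magic;
    uint32_t version;
    char backing_file[1024];
    int32_t mtime;
    uint64_t size;
    uint32_t sectorsize;
};

int main(void)
{
    int64_t total_sectors = (1024LL * 1024 * 1024) / 512;      /* 1 GiB image          */
    int64_t bitmap_size = ((total_sectors + 7) >> 3) + sizeof(struct cow_header_v2);
    int64_t cow_sectors_offset = (bitmap_size + 511) & ~511;   /* round up to a sector */
    printf("guest data starts at byte %" PRId64 "\n", cow_sectors_offset);
    return 0;
}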
static inline void cow_set_bits(uint8_t *bitmap, int start, int64_t nb_sectors)
{
int64_t bitnum = start, last = start + nb_sectors;
while (bitnum < last) {
if ((bitnum & 7) == 0 && bitnum + 8 <= last) {
bitmap[bitnum / 8] = 0xFF;
bitnum += 8;
continue;
}
bitmap[bitnum/8] |= (1 << (bitnum % 8));
bitnum++;
}
}
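A quick worked example of the helper above, meant to be read against cow_set_bits() and using made-up values: marking sectors 3 through 12 in a zeroed two-byte bitmap sets the high five bits of the first byte and the low five bits of the second.

uint8_t bitmap[2] = { 0, 0 };
cow_set_bits(bitmap, 3, 10);   /* sectors 3..12 */
/* bitmap[0] == 0xf8 (bits 3-7), bitmap[1] == 0x1f (bits 8-12) */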
#define BITS_PER_BITMAP_SECTOR (512 * 8)
/* Cannot use bitmap.c on big-endian machines. */
static int cow_test_bit(int64_t bitnum, const uint8_t *bitmap)
{
return (bitmap[bitnum / 8] & (1 << (bitnum & 7))) != 0;
}
static int cow_find_streak(const uint8_t *bitmap, int value, int start, int nb_sectors)
{
int streak_value = value ? 0xFF : 0;
int last = MIN(start + nb_sectors, BITS_PER_BITMAP_SECTOR);
int bitnum = start;
while (bitnum < last) {
if ((bitnum & 7) == 0 && bitmap[bitnum / 8] == streak_value) {
bitnum += 8;
continue;
}
if (cow_test_bit(bitnum, bitmap) == value) {
bitnum++;
continue;
}
break;
}
return MIN(bitnum, last) - start;
}
/* Return true if the first block has been changed (i.e. the current version
* is in the COW file). Set the number of contiguous blocks for which that
* is true. */
static int coroutine_fn cow_co_is_allocated(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *num_same)
{
int64_t bitnum = sector_num + sizeof(struct cow_header_v2) * 8;
uint64_t offset = (bitnum / 8) & -BDRV_SECTOR_SIZE;
bool first = true;
int changed = 0, same = 0;
do {
int ret;
uint8_t bitmap[BDRV_SECTOR_SIZE];
bitnum &= BITS_PER_BITMAP_SECTOR - 1;
int sector_bits = MIN(nb_sectors, BITS_PER_BITMAP_SECTOR - bitnum);
ret = bdrv_pread(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
if (first) {
changed = cow_test_bit(bitnum, bitmap);
first = false;
}
same += cow_find_streak(bitmap, changed, bitnum, nb_sectors);
bitnum += sector_bits;
nb_sectors -= sector_bits;
offset += BDRV_SECTOR_SIZE;
} while (nb_sectors);
*num_same = same;
return changed;
}
static int64_t coroutine_fn cow_co_get_block_status(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *num_same)
{
BDRVCowState *s = bs->opaque;
int ret = cow_co_is_allocated(bs, sector_num, nb_sectors, num_same);
int64_t offset = s->cow_sectors_offset + (sector_num << BDRV_SECTOR_BITS);
if (ret < 0) {
return ret;
}
return (ret ? BDRV_BLOCK_DATA : 0) | offset | BDRV_BLOCK_OFFSET_VALID;
}
static int cow_update_bitmap(BlockDriverState *bs, int64_t sector_num,
int nb_sectors)
{
int64_t bitnum = sector_num + sizeof(struct cow_header_v2) * 8;
uint64_t offset = (bitnum / 8) & -BDRV_SECTOR_SIZE;
bool first = true;
int sector_bits;
for ( ; nb_sectors;
bitnum += sector_bits,
nb_sectors -= sector_bits,
offset += BDRV_SECTOR_SIZE) {
int ret, set;
uint8_t bitmap[BDRV_SECTOR_SIZE];
bitnum &= BITS_PER_BITMAP_SECTOR - 1;
sector_bits = MIN(nb_sectors, BITS_PER_BITMAP_SECTOR - bitnum);
ret = bdrv_pread(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
/* Skip over any already set bits */
set = cow_find_streak(bitmap, 1, bitnum, sector_bits);
bitnum += set;
sector_bits -= set;
nb_sectors -= set;
if (!sector_bits) {
continue;
}
if (first) {
ret = bdrv_flush(bs->file);
if (ret < 0) {
return ret;
}
first = false;
}
cow_set_bits(bitmap, bitnum, sector_bits);
ret = bdrv_pwrite(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
}
return 0;
}
static int coroutine_fn cow_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
BDRVCowState *s = bs->opaque;
int ret, n;
while (nb_sectors > 0) {
ret = cow_co_is_allocated(bs, sector_num, nb_sectors, &n);
if (ret < 0) {
return ret;
}
if (ret) {
ret = bdrv_pread(bs->file,
s->cow_sectors_offset + sector_num * 512,
buf, n * 512);
if (ret < 0) {
return ret;
}
} else {
if (bs->backing_hd) {
/* read from the base image */
ret = bdrv_read(bs->backing_hd, sector_num, buf, n);
if (ret < 0) {
return ret;
}
} else {
memset(buf, 0, n * 512);
}
}
nb_sectors -= n;
sector_num += n;
buf += n * 512;
}
return 0;
}
static coroutine_fn int cow_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVCowState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = cow_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static int cow_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
BDRVCowState *s = bs->opaque;
int ret;
ret = bdrv_pwrite(bs->file, s->cow_sectors_offset + sector_num * 512,
buf, nb_sectors * 512);
if (ret < 0) {
return ret;
}
return cow_update_bitmap(bs, sector_num, nb_sectors);
}
static coroutine_fn int cow_co_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
int ret;
BDRVCowState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = cow_write(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static void cow_close(BlockDriverState *bs)
{
}
static int cow_create(const char *filename, QEMUOptionParameter *options,
Error **errp)
{
struct cow_header_v2 cow_header;
struct stat st;
int64_t image_sectors = 0;
const char *image_filename = NULL;
Error *local_err = NULL;
int ret;
BlockDriverState *cow_bs;
/* Read out options */
while (options && options->name) {
if (!strcmp(options->name, BLOCK_OPT_SIZE)) {
image_sectors = options->value.n / 512;
} else if (!strcmp(options->name, BLOCK_OPT_BACKING_FILE)) {
image_filename = options->value.s;
}
options++;
}
ret = bdrv_create_file(filename, options, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
return ret;
}
cow_bs = NULL;
ret = bdrv_open(&cow_bs, filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, NULL, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
return ret;
}
memset(&cow_header, 0, sizeof(cow_header));
cow_header.magic = cpu_to_be32(COW_MAGIC);
cow_header.version = cpu_to_be32(COW_VERSION);
if (image_filename) {
/* Note: if no file, we put a dummy mtime */
cow_header.mtime = cpu_to_be32(0);
if (stat(image_filename, &st) != 0) {
goto mtime_fail;
}
cow_header.mtime = cpu_to_be32(st.st_mtime);
mtime_fail:
pstrcpy(cow_header.backing_file, sizeof(cow_header.backing_file),
image_filename);
}
cow_header.sectorsize = cpu_to_be32(512);
cow_header.size = cpu_to_be64(image_sectors * 512);
ret = bdrv_pwrite(cow_bs, 0, &cow_header, sizeof(cow_header));
if (ret < 0) {
goto exit;
}
/* resize to include at least all the bitmap */
ret = bdrv_truncate(cow_bs,
sizeof(cow_header) + ((image_sectors + 7) >> 3));
if (ret < 0) {
goto exit;
}
exit:
bdrv_unref(cow_bs);
return ret;
}
static QEMUOptionParameter cow_create_options[] = {
{
.name = BLOCK_OPT_SIZE,
.type = OPT_SIZE,
.help = "Virtual disk size"
},
{
.name = BLOCK_OPT_BACKING_FILE,
.type = OPT_STRING,
.help = "File name of a base image"
},
{ NULL }
};
static BlockDriver bdrv_cow = {
.format_name = "cow",
.instance_size = sizeof(BDRVCowState),
.bdrv_probe = cow_probe,
.bdrv_open = cow_open,
.bdrv_close = cow_close,
.bdrv_create = cow_create,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_read = cow_co_read,
.bdrv_write = cow_co_write,
.bdrv_co_get_block_status = cow_co_get_block_status,
.create_options = cow_create_options,
};
static void bdrv_cow_init(void)
{
bdrv_register(&bdrv_cow);
}
block_init(bdrv_cow_init);

block/crypto.c

@@ -1,641 +0,0 @@
/*
* QEMU block full disk encryption
*
* Copyright (c) 2015-2016 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "qemu/osdep.h"
#include "block/block_int.h"
#include "sysemu/block-backend.h"
#include "crypto/block.h"
#include "qapi/opts-visitor.h"
#include "qapi-visit.h"
#include "qapi/error.h"
#define BLOCK_CRYPTO_OPT_LUKS_KEY_SECRET "key-secret"
#define BLOCK_CRYPTO_OPT_LUKS_CIPHER_ALG "cipher-alg"
#define BLOCK_CRYPTO_OPT_LUKS_CIPHER_MODE "cipher-mode"
#define BLOCK_CRYPTO_OPT_LUKS_IVGEN_ALG "ivgen-alg"
#define BLOCK_CRYPTO_OPT_LUKS_IVGEN_HASH_ALG "ivgen-hash-alg"
#define BLOCK_CRYPTO_OPT_LUKS_HASH_ALG "hash-alg"
#define BLOCK_CRYPTO_OPT_LUKS_ITER_TIME "iter-time"
typedef struct BlockCrypto BlockCrypto;
struct BlockCrypto {
QCryptoBlock *block;
};
static int block_crypto_probe_generic(QCryptoBlockFormat format,
const uint8_t *buf,
int buf_size,
const char *filename)
{
if (qcrypto_block_has_format(format, buf, buf_size)) {
return 100;
} else {
return 0;
}
}
static ssize_t block_crypto_read_func(QCryptoBlock *block,
size_t offset,
uint8_t *buf,
size_t buflen,
Error **errp,
void *opaque)
{
BlockDriverState *bs = opaque;
ssize_t ret;
ret = bdrv_pread(bs->file, offset, buf, buflen);
if (ret < 0) {
error_setg_errno(errp, -ret, "Could not read encryption header");
return ret;
}
return ret;
}
struct BlockCryptoCreateData {
const char *filename;
QemuOpts *opts;
BlockBackend *blk;
uint64_t size;
};
static ssize_t block_crypto_write_func(QCryptoBlock *block,
size_t offset,
const uint8_t *buf,
size_t buflen,
Error **errp,
void *opaque)
{
struct BlockCryptoCreateData *data = opaque;
ssize_t ret;
ret = blk_pwrite(data->blk, offset, buf, buflen, 0);
if (ret < 0) {
error_setg_errno(errp, -ret, "Could not write encryption header");
return ret;
}
return ret;
}
static ssize_t block_crypto_init_func(QCryptoBlock *block,
size_t headerlen,
Error **errp,
void *opaque)
{
struct BlockCryptoCreateData *data = opaque;
int ret;
/* The user-provided size reflects the amount of space made
* available to the guest, so we must also account for the space
* that will be used by the crypto header.
*/
data->size += headerlen;
qemu_opt_set_number(data->opts, BLOCK_OPT_SIZE, data->size, &error_abort);
ret = bdrv_create_file(data->filename, data->opts, errp);
if (ret < 0) {
return -1;
}
data->blk = blk_new_open(data->filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, errp);
if (!data->blk) {
return -1;
}
return 0;
}
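As a rough numeric sketch of the size adjustment described in the comment above (all figures are made up; the real header length comes back from qcrypto_block_create() and depends on the LUKS parameters): the protocol file is created larger than what the user asked for, and the guest-visible length later subtracts the payload offset again, as block_crypto_getlength() does further down.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t requested = 10ULL * 1024 * 1024 * 1024;  /* size the user asked for   */
    uint64_t headerlen = 2 * 1024 * 1024;             /* hypothetical LUKS header  */
    uint64_t file_size = requested + headerlen;       /* size of the created file  */
    uint64_t guest_len = file_size - headerlen;       /* what the guest sees again */
    printf("file %" PRIu64 " bytes, guest %" PRIu64 " bytes\n", file_size, guest_len);
    return 0;
}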
static QemuOptsList block_crypto_runtime_opts_luks = {
.name = "crypto",
.head = QTAILQ_HEAD_INITIALIZER(block_crypto_runtime_opts_luks.head),
.desc = {
{
.name = BLOCK_CRYPTO_OPT_LUKS_KEY_SECRET,
.type = QEMU_OPT_STRING,
.help = "ID of the secret that provides the encryption key",
},
{ /* end of list */ }
},
};
static QemuOptsList block_crypto_create_opts_luks = {
.name = "crypto",
.head = QTAILQ_HEAD_INITIALIZER(block_crypto_create_opts_luks.head),
.desc = {
{
.name = BLOCK_OPT_SIZE,
.type = QEMU_OPT_SIZE,
.help = "Virtual disk size"
},
{
.name = BLOCK_CRYPTO_OPT_LUKS_KEY_SECRET,
.type = QEMU_OPT_STRING,
.help = "ID of the secret that provides the encryption key",
},
{
.name = BLOCK_CRYPTO_OPT_LUKS_CIPHER_ALG,
.type = QEMU_OPT_STRING,
.help = "Name of encryption cipher algorithm",
},
{
.name = BLOCK_CRYPTO_OPT_LUKS_CIPHER_MODE,
.type = QEMU_OPT_STRING,
.help = "Name of encryption cipher mode",
},
{
.name = BLOCK_CRYPTO_OPT_LUKS_IVGEN_ALG,
.type = QEMU_OPT_STRING,
.help = "Name of IV generator algorithm",
},
{
.name = BLOCK_CRYPTO_OPT_LUKS_IVGEN_HASH_ALG,
.type = QEMU_OPT_STRING,
.help = "Name of IV generator hash algorithm",
},
{
.name = BLOCK_CRYPTO_OPT_LUKS_HASH_ALG,
.type = QEMU_OPT_STRING,
.help = "Name of encryption hash algorithm",
},
{
.name = BLOCK_CRYPTO_OPT_LUKS_ITER_TIME,
.type = QEMU_OPT_NUMBER,
.help = "Time to spend in PBKDF in milliseconds",
},
{ /* end of list */ }
},
};
static QCryptoBlockOpenOptions *
block_crypto_open_opts_init(QCryptoBlockFormat format,
QemuOpts *opts,
Error **errp)
{
Visitor *v;
QCryptoBlockOpenOptions *ret = NULL;
Error *local_err = NULL;
ret = g_new0(QCryptoBlockOpenOptions, 1);
ret->format = format;
v = opts_visitor_new(opts);
visit_start_struct(v, NULL, NULL, 0, &local_err);
if (local_err) {
goto out;
}
switch (format) {
case Q_CRYPTO_BLOCK_FORMAT_LUKS:
visit_type_QCryptoBlockOptionsLUKS_members(
v, &ret->u.luks, &local_err);
break;
default:
error_setg(&local_err, "Unsupported block format %d", format);
break;
}
if (!local_err) {
visit_check_struct(v, &local_err);
}
visit_end_struct(v, NULL);
out:
if (local_err) {
error_propagate(errp, local_err);
qapi_free_QCryptoBlockOpenOptions(ret);
ret = NULL;
}
visit_free(v);
return ret;
}
static QCryptoBlockCreateOptions *
block_crypto_create_opts_init(QCryptoBlockFormat format,
QemuOpts *opts,
Error **errp)
{
Visitor *v;
QCryptoBlockCreateOptions *ret = NULL;
Error *local_err = NULL;
ret = g_new0(QCryptoBlockCreateOptions, 1);
ret->format = format;
v = opts_visitor_new(opts);
visit_start_struct(v, NULL, NULL, 0, &local_err);
if (local_err) {
goto out;
}
switch (format) {
case Q_CRYPTO_BLOCK_FORMAT_LUKS:
visit_type_QCryptoBlockCreateOptionsLUKS_members(
v, &ret->u.luks, &local_err);
break;
default:
error_setg(&local_err, "Unsupported block format %d", format);
break;
}
if (!local_err) {
visit_check_struct(v, &local_err);
}
visit_end_struct(v, NULL);
out:
if (local_err) {
error_propagate(errp, local_err);
qapi_free_QCryptoBlockCreateOptions(ret);
ret = NULL;
}
visit_free(v);
return ret;
}
static int block_crypto_open_generic(QCryptoBlockFormat format,
QemuOptsList *opts_spec,
BlockDriverState *bs,
QDict *options,
int flags,
Error **errp)
{
BlockCrypto *crypto = bs->opaque;
QemuOpts *opts = NULL;
Error *local_err = NULL;
int ret = -EINVAL;
QCryptoBlockOpenOptions *open_opts = NULL;
unsigned int cflags = 0;
opts = qemu_opts_create(opts_spec, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
goto cleanup;
}
open_opts = block_crypto_open_opts_init(format, opts, errp);
if (!open_opts) {
goto cleanup;
}
if (flags & BDRV_O_NO_IO) {
cflags |= QCRYPTO_BLOCK_OPEN_NO_IO;
}
crypto->block = qcrypto_block_open(open_opts,
block_crypto_read_func,
bs,
cflags,
errp);
if (!crypto->block) {
ret = -EIO;
goto cleanup;
}
bs->encrypted = true;
bs->valid_key = true;
ret = 0;
cleanup:
qapi_free_QCryptoBlockOpenOptions(open_opts);
return ret;
}
static int block_crypto_create_generic(QCryptoBlockFormat format,
const char *filename,
QemuOpts *opts,
Error **errp)
{
int ret = -EINVAL;
QCryptoBlockCreateOptions *create_opts = NULL;
QCryptoBlock *crypto = NULL;
struct BlockCryptoCreateData data = {
.size = ROUND_UP(qemu_opt_get_size_del(opts, BLOCK_OPT_SIZE, 0),
BDRV_SECTOR_SIZE),
.opts = opts,
.filename = filename,
};
create_opts = block_crypto_create_opts_init(format, opts, errp);
if (!create_opts) {
return -1;
}
crypto = qcrypto_block_create(create_opts,
block_crypto_init_func,
block_crypto_write_func,
&data,
errp);
if (!crypto) {
ret = -EIO;
goto cleanup;
}
ret = 0;
cleanup:
qcrypto_block_free(crypto);
blk_unref(data.blk);
qapi_free_QCryptoBlockCreateOptions(create_opts);
return ret;
}
static int block_crypto_truncate(BlockDriverState *bs, int64_t offset)
{
BlockCrypto *crypto = bs->opaque;
size_t payload_offset =
qcrypto_block_get_payload_offset(crypto->block);
offset += payload_offset;
return bdrv_truncate(bs->file->bs, offset);
}
static void block_crypto_close(BlockDriverState *bs)
{
BlockCrypto *crypto = bs->opaque;
qcrypto_block_free(crypto->block);
}
#define BLOCK_CRYPTO_MAX_SECTORS 32
static coroutine_fn int
block_crypto_co_readv(BlockDriverState *bs, int64_t sector_num,
int remaining_sectors, QEMUIOVector *qiov)
{
BlockCrypto *crypto = bs->opaque;
int cur_nr_sectors; /* number of sectors in current iteration */
uint64_t bytes_done = 0;
uint8_t *cipher_data = NULL;
QEMUIOVector hd_qiov;
int ret = 0;
size_t payload_offset =
qcrypto_block_get_payload_offset(crypto->block) / 512;
qemu_iovec_init(&hd_qiov, qiov->niov);
/* Bounce buffer so we have a linear mem region for
* entire sector. XXX optimize so we avoid bounce
* buffer in case that qiov->niov == 1
*/
cipher_data =
qemu_try_blockalign(bs->file->bs, MIN(BLOCK_CRYPTO_MAX_SECTORS * 512,
qiov->size));
if (cipher_data == NULL) {
ret = -ENOMEM;
goto cleanup;
}
while (remaining_sectors) {
cur_nr_sectors = remaining_sectors;
if (cur_nr_sectors > BLOCK_CRYPTO_MAX_SECTORS) {
cur_nr_sectors = BLOCK_CRYPTO_MAX_SECTORS;
}
qemu_iovec_reset(&hd_qiov);
qemu_iovec_add(&hd_qiov, cipher_data, cur_nr_sectors * 512);
ret = bdrv_co_readv(bs->file,
payload_offset + sector_num,
cur_nr_sectors, &hd_qiov);
if (ret < 0) {
goto cleanup;
}
if (qcrypto_block_decrypt(crypto->block,
sector_num,
cipher_data, cur_nr_sectors * 512,
NULL) < 0) {
ret = -EIO;
goto cleanup;
}
qemu_iovec_from_buf(qiov, bytes_done,
cipher_data, cur_nr_sectors * 512);
remaining_sectors -= cur_nr_sectors;
sector_num += cur_nr_sectors;
bytes_done += cur_nr_sectors * 512;
}
cleanup:
qemu_iovec_destroy(&hd_qiov);
qemu_vfree(cipher_data);
return ret;
}
static coroutine_fn int
block_crypto_co_writev(BlockDriverState *bs, int64_t sector_num,
int remaining_sectors, QEMUIOVector *qiov)
{
BlockCrypto *crypto = bs->opaque;
int cur_nr_sectors; /* number of sectors in current iteration */
uint64_t bytes_done = 0;
uint8_t *cipher_data = NULL;
QEMUIOVector hd_qiov;
int ret = 0;
size_t payload_offset =
qcrypto_block_get_payload_offset(crypto->block) / 512;
qemu_iovec_init(&hd_qiov, qiov->niov);
/* Bounce buffer so we have a linear mem region for
* entire sector. XXX optimize so we avoid bounce
* buffer in case that qiov->niov == 1
*/
cipher_data =
qemu_try_blockalign(bs->file->bs, MIN(BLOCK_CRYPTO_MAX_SECTORS * 512,
qiov->size));
if (cipher_data == NULL) {
ret = -ENOMEM;
goto cleanup;
}
while (remaining_sectors) {
cur_nr_sectors = remaining_sectors;
if (cur_nr_sectors > BLOCK_CRYPTO_MAX_SECTORS) {
cur_nr_sectors = BLOCK_CRYPTO_MAX_SECTORS;
}
qemu_iovec_to_buf(qiov, bytes_done,
cipher_data, cur_nr_sectors * 512);
if (qcrypto_block_encrypt(crypto->block,
sector_num,
cipher_data, cur_nr_sectors * 512,
NULL) < 0) {
ret = -EIO;
goto cleanup;
}
qemu_iovec_reset(&hd_qiov);
qemu_iovec_add(&hd_qiov, cipher_data, cur_nr_sectors * 512);
ret = bdrv_co_writev(bs->file,
payload_offset + sector_num,
cur_nr_sectors, &hd_qiov);
if (ret < 0) {
goto cleanup;
}
remaining_sectors -= cur_nr_sectors;
sector_num += cur_nr_sectors;
bytes_done += cur_nr_sectors * 512;
}
cleanup:
qemu_iovec_destroy(&hd_qiov);
qemu_vfree(cipher_data);
return ret;
}
static int64_t block_crypto_getlength(BlockDriverState *bs)
{
BlockCrypto *crypto = bs->opaque;
int64_t len = bdrv_getlength(bs->file->bs);
ssize_t offset = qcrypto_block_get_payload_offset(crypto->block);
len -= offset;
return len;
}
static int block_crypto_probe_luks(const uint8_t *buf,
int buf_size,
const char *filename) {
return block_crypto_probe_generic(Q_CRYPTO_BLOCK_FORMAT_LUKS,
buf, buf_size, filename);
}
static int block_crypto_open_luks(BlockDriverState *bs,
QDict *options,
int flags,
Error **errp)
{
return block_crypto_open_generic(Q_CRYPTO_BLOCK_FORMAT_LUKS,
&block_crypto_runtime_opts_luks,
bs, options, flags, errp);
}
static int block_crypto_create_luks(const char *filename,
QemuOpts *opts,
Error **errp)
{
return block_crypto_create_generic(Q_CRYPTO_BLOCK_FORMAT_LUKS,
filename, opts, errp);
}
static int block_crypto_get_info_luks(BlockDriverState *bs,
BlockDriverInfo *bdi)
{
BlockDriverInfo subbdi;
int ret;
ret = bdrv_get_info(bs->file->bs, &subbdi);
if (ret != 0) {
return ret;
}
bdi->unallocated_blocks_are_zero = false;
bdi->can_write_zeroes_with_unmap = false;
bdi->cluster_size = subbdi.cluster_size;
return 0;
}
static ImageInfoSpecific *
block_crypto_get_specific_info_luks(BlockDriverState *bs)
{
BlockCrypto *crypto = bs->opaque;
ImageInfoSpecific *spec_info;
QCryptoBlockInfo *info;
info = qcrypto_block_get_info(crypto->block, NULL);
if (!info) {
return NULL;
}
if (info->format != Q_CRYPTO_BLOCK_FORMAT_LUKS) {
qapi_free_QCryptoBlockInfo(info);
return NULL;
}
spec_info = g_new(ImageInfoSpecific, 1);
spec_info->type = IMAGE_INFO_SPECIFIC_KIND_LUKS;
spec_info->u.luks.data = g_new(QCryptoBlockInfoLUKS, 1);
*spec_info->u.luks.data = info->u.luks;
/* Blank out pointers we've just stolen to avoid double free */
memset(&info->u.luks, 0, sizeof(info->u.luks));
qapi_free_QCryptoBlockInfo(info);
return spec_info;
}
BlockDriver bdrv_crypto_luks = {
.format_name = "luks",
.instance_size = sizeof(BlockCrypto),
.bdrv_probe = block_crypto_probe_luks,
.bdrv_open = block_crypto_open_luks,
.bdrv_close = block_crypto_close,
.bdrv_create = block_crypto_create_luks,
.bdrv_truncate = block_crypto_truncate,
.create_opts = &block_crypto_create_opts_luks,
.bdrv_co_readv = block_crypto_co_readv,
.bdrv_co_writev = block_crypto_co_writev,
.bdrv_getlength = block_crypto_getlength,
.bdrv_get_info = block_crypto_get_info_luks,
.bdrv_get_specific_info = block_crypto_get_specific_info_luks,
};
static void block_crypto_init(void)
{
bdrv_register(&bdrv_crypto_luks);
}
block_init(block_crypto_init);

block/curl.c

@@ -21,50 +21,22 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "qemu/error-report.h"
#include "block/block_int.h"
#include "qapi/qmp/qbool.h"
#include "qapi/qmp/qstring.h"
#include "crypto/secret.h"
#include <curl/curl.h>
#include "qemu/cutils.h"
// #define DEBUG_CURL
// #define DEBUG
// #define DEBUG_VERBOSE
#ifdef DEBUG_CURL
#define DEBUG_CURL_PRINT 1
#define DPRINTF(fmt, ...) do { printf(fmt, ## __VA_ARGS__); } while (0)
#else
#define DEBUG_CURL_PRINT 0
#define DPRINTF(fmt, ...) do { } while (0)
#endif
#define DPRINTF(fmt, ...) \
do { \
if (DEBUG_CURL_PRINT) { \
fprintf(stderr, fmt, ## __VA_ARGS__); \
} \
} while (0)
#if LIBCURL_VERSION_NUM >= 0x071000
/* The multi interface timer callback was introduced in 7.16.0 */
#define NEED_CURL_TIMER_CALLBACK
#define HAVE_SOCKET_ACTION
#endif
#ifndef HAVE_SOCKET_ACTION
/* If curl_multi_socket_action isn't available, define it statically here in
* terms of curl_multi_socket. Note that ev_bitmask will be ignored, which is
* less efficient but still safe. */
static CURLMcode __curl_multi_socket_action(CURLM *multi_handle,
curl_socket_t sockfd,
int ev_bitmask,
int *running_handles)
{
return curl_multi_socket(multi_handle, sockfd, running_handles);
}
#define curl_multi_socket_action __curl_multi_socket_action
#endif
#define PROTOCOLS (CURLPROTO_HTTP | CURLPROTO_HTTPS | \
@@ -74,28 +46,16 @@ static CURLMcode __curl_multi_socket_action(CURLM *multi_handle,
#define CURL_NUM_STATES 8
#define CURL_NUM_ACB 8
#define SECTOR_SIZE 512
#define READ_AHEAD_DEFAULT (256 * 1024)
#define CURL_TIMEOUT_DEFAULT 5
#define CURL_TIMEOUT_MAX 10000
#define READ_AHEAD_SIZE (256 * 1024)
#define FIND_RET_NONE 0
#define FIND_RET_OK 1
#define FIND_RET_WAIT 2
#define CURL_BLOCK_OPT_URL "url"
#define CURL_BLOCK_OPT_READAHEAD "readahead"
#define CURL_BLOCK_OPT_SSLVERIFY "sslverify"
#define CURL_BLOCK_OPT_TIMEOUT "timeout"
#define CURL_BLOCK_OPT_COOKIE "cookie"
#define CURL_BLOCK_OPT_USERNAME "username"
#define CURL_BLOCK_OPT_PASSWORD_SECRET "password-secret"
#define CURL_BLOCK_OPT_PROXY_USERNAME "proxy-username"
#define CURL_BLOCK_OPT_PROXY_PASSWORD_SECRET "proxy-password-secret"
struct BDRVCURLState;
typedef struct CURLAIOCB {
BlockAIOCB common;
BlockDriverAIOCB common;
QEMUBH *bh;
QEMUIOVector *qiov;
@@ -111,7 +71,6 @@ typedef struct CURLState
struct BDRVCURLState *s;
CURLAIOCB *acb[CURL_NUM_ACB];
CURL *curl;
curl_socket_t sock_fd;
char *orig_buf;
size_t buf_start;
size_t buf_off;
@@ -128,20 +87,11 @@ typedef struct BDRVCURLState {
CURLState states[CURL_NUM_STATES];
char *url;
size_t readahead_size;
bool sslverify;
uint64_t timeout;
char *cookie;
bool accept_range;
AioContext *aio_context;
char *username;
char *password;
char *proxyusername;
char *proxypassword;
} BDRVCURLState;
static void curl_clean_state(CURLState *s);
static void curl_multi_do(void *arg);
static void curl_multi_read(void *arg);
#ifdef NEED_CURL_TIMER_CALLBACK
static int curl_timer_cb(CURLM *multi, long timeout_ms, void *opaque)
@@ -161,31 +111,21 @@ static int curl_timer_cb(CURLM *multi, long timeout_ms, void *opaque)
#endif
static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
void *userp, void *sp)
void *s, void *sp)
{
BDRVCURLState *s;
CURLState *state = NULL;
curl_easy_getinfo(curl, CURLINFO_PRIVATE, (char **)&state);
state->sock_fd = fd;
s = state->s;
DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, (int)fd);
DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd);
switch (action) {
case CURL_POLL_IN:
aio_set_fd_handler(s->aio_context, fd, false,
curl_multi_read, NULL, state);
qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, s);
break;
case CURL_POLL_OUT:
aio_set_fd_handler(s->aio_context, fd, false,
NULL, curl_multi_do, state);
qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, s);
break;
case CURL_POLL_INOUT:
aio_set_fd_handler(s->aio_context, fd, false,
curl_multi_read, curl_multi_do, state);
qemu_aio_set_fd_handler(fd, curl_multi_do, curl_multi_do, s);
break;
case CURL_POLL_REMOVE:
aio_set_fd_handler(s->aio_context, fd, false,
NULL, NULL, NULL);
qemu_aio_set_fd_handler(fd, NULL, NULL, NULL);
break;
}
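The action values handled in this switch follow libcurl's socket callback contract: CURL_POLL_IN asks to be woken when the socket becomes readable, CURL_POLL_OUT when it becomes writable, CURL_POLL_INOUT for both, and CURL_POLL_REMOVE signals that curl is finished with the descriptor, so the fd handler is unregistered.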
@@ -215,7 +155,7 @@ static size_t curl_read_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
DPRINTF("CURL: Just reading %zd bytes\n", realsize);
if (!s || !s->orig_buf)
return 0;
goto read_end;
if (s->buf_off >= s->buf_len) {
/* buffer full, read nothing */
@@ -235,11 +175,12 @@ static size_t curl_read_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
qemu_iovec_from_buf(acb->qiov, 0, s->orig_buf + acb->start,
acb->end - acb->start);
acb->common.cb(acb->common.opaque, 0);
qemu_aio_unref(acb);
qemu_aio_release(acb);
s->acb[i] = NULL;
}
}
read_end:
return realsize;
}
@@ -274,8 +215,7 @@ static int curl_find_buf(BDRVCURLState *s, size_t start, size_t len,
}
// Wait for unfinished chunks
if (state->in_use &&
(start >= state->buf_start) &&
if ((start >= state->buf_start) &&
(start <= buf_fend) &&
(end >= state->buf_start) &&
(end <= buf_fend))
@@ -297,81 +237,68 @@ static int curl_find_buf(BDRVCURLState *s, size_t start, size_t len,
return FIND_RET_NONE;
}
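For reference, the three FIND_RET_* values defined earlier summarize what curl_find_buf() did with the request: FIND_RET_OK means the range was already buffered and the ACB has been completed straight from the cache, FIND_RET_WAIT means the range overlaps a transfer that is still in flight (the ACB is parked on that CURLState and finishes later in curl_read_cb), and FIND_RET_NONE tells the caller to start a new download.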
static void curl_multi_check_completion(BDRVCURLState *s)
static void curl_multi_read(BDRVCURLState *s)
{
int msgs_in_queue;
/* Try to find done transfers, so we can free the easy
* handle again. */
for (;;) {
do {
CURLMsg *msg;
msg = curl_multi_info_read(s->multi, &msgs_in_queue);
/* Quit when there are no more completions */
if (!msg)
break;
if (msg->msg == CURLMSG_DONE) {
CURLState *state = NULL;
curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE,
(char **)&state);
/* ACBs for successful messages get completed in curl_read_cb */
if (msg->data.result != CURLE_OK) {
int i;
static int errcount = 100;
/* Don't lose the original error message from curl, since
* it contains extra data.
*/
if (errcount > 0) {
error_report("curl: %s", state->errmsg);
if (--errcount == 0) {
error_report("curl: further errors suppressed");
}
}
for (i = 0; i < CURL_NUM_ACB; i++) {
CURLAIOCB *acb = state->acb[i];
if (acb == NULL) {
continue;
}
acb->common.cb(acb->common.opaque, -EPROTO);
qemu_aio_unref(acb);
state->acb[i] = NULL;
}
}
curl_clean_state(state);
if (msg->msg == CURLMSG_NONE)
break;
switch (msg->msg) {
case CURLMSG_DONE:
{
CURLState *state = NULL;
curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE, (char**)&state);
/* ACBs for successful messages get completed in curl_read_cb */
if (msg->data.result != CURLE_OK) {
int i;
for (i = 0; i < CURL_NUM_ACB; i++) {
CURLAIOCB *acb = state->acb[i];
if (acb == NULL) {
continue;
}
acb->common.cb(acb->common.opaque, -EIO);
qemu_aio_release(acb);
state->acb[i] = NULL;
}
}
curl_clean_state(state);
break;
}
default:
msgs_in_queue = 0;
break;
}
}
} while(msgs_in_queue);
}
static void curl_multi_do(void *arg)
{
CURLState *s = (CURLState *)arg;
BDRVCURLState *s = (BDRVCURLState *)arg;
int running;
int r;
if (!s->s->multi) {
if (!s->multi) {
return;
}
do {
r = curl_multi_socket_action(s->s->multi, s->sock_fd, 0, &running);
r = curl_multi_socket_all(s->multi, &running);
} while(r == CURLM_CALL_MULTI_PERFORM);
}
static void curl_multi_read(void *arg)
{
CURLState *s = (CURLState *)arg;
curl_multi_do(arg);
curl_multi_check_completion(s->s);
curl_multi_read(s);
}
static void curl_multi_timeout_do(void *arg)
@@ -386,13 +313,13 @@ static void curl_multi_timeout_do(void *arg)
curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
curl_multi_check_completion(s);
curl_multi_read(s);
#else
abort();
#endif
}
static CURLState *curl_init_state(BlockDriverState *bs, BDRVCURLState *s)
static CURLState *curl_init_state(BDRVCURLState *s)
{
CURLState *state = NULL;
int i, j;
@@ -410,62 +337,44 @@ static CURLState *curl_init_state(BlockDriverState *bs, BDRVCURLState *s)
break;
}
if (!state) {
aio_poll(bdrv_get_aio_context(bs), true);
g_usleep(100);
curl_multi_do(s);
}
} while(!state);
if (!state->curl) {
state->curl = curl_easy_init();
if (!state->curl) {
return NULL;
}
curl_easy_setopt(state->curl, CURLOPT_URL, s->url);
curl_easy_setopt(state->curl, CURLOPT_SSL_VERIFYPEER,
(long) s->sslverify);
if (s->cookie) {
curl_easy_setopt(state->curl, CURLOPT_COOKIE, s->cookie);
}
curl_easy_setopt(state->curl, CURLOPT_TIMEOUT, (long)s->timeout);
curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION,
(void *)curl_read_cb);
curl_easy_setopt(state->curl, CURLOPT_WRITEDATA, (void *)state);
curl_easy_setopt(state->curl, CURLOPT_PRIVATE, (void *)state);
curl_easy_setopt(state->curl, CURLOPT_AUTOREFERER, 1);
curl_easy_setopt(state->curl, CURLOPT_FOLLOWLOCATION, 1);
curl_easy_setopt(state->curl, CURLOPT_NOSIGNAL, 1);
curl_easy_setopt(state->curl, CURLOPT_ERRORBUFFER, state->errmsg);
curl_easy_setopt(state->curl, CURLOPT_FAILONERROR, 1);
if (state->curl)
goto has_curl;
if (s->username) {
curl_easy_setopt(state->curl, CURLOPT_USERNAME, s->username);
}
if (s->password) {
curl_easy_setopt(state->curl, CURLOPT_PASSWORD, s->password);
}
if (s->proxyusername) {
curl_easy_setopt(state->curl,
CURLOPT_PROXYUSERNAME, s->proxyusername);
}
if (s->proxypassword) {
curl_easy_setopt(state->curl,
CURLOPT_PROXYPASSWORD, s->proxypassword);
}
state->curl = curl_easy_init();
if (!state->curl)
return NULL;
curl_easy_setopt(state->curl, CURLOPT_URL, s->url);
curl_easy_setopt(state->curl, CURLOPT_TIMEOUT, 5);
curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION, (void *)curl_read_cb);
curl_easy_setopt(state->curl, CURLOPT_WRITEDATA, (void *)state);
curl_easy_setopt(state->curl, CURLOPT_PRIVATE, (void *)state);
curl_easy_setopt(state->curl, CURLOPT_AUTOREFERER, 1);
curl_easy_setopt(state->curl, CURLOPT_FOLLOWLOCATION, 1);
curl_easy_setopt(state->curl, CURLOPT_NOSIGNAL, 1);
curl_easy_setopt(state->curl, CURLOPT_ERRORBUFFER, state->errmsg);
curl_easy_setopt(state->curl, CURLOPT_FAILONERROR, 1);
/* Restrict supported protocols to avoid security issues in the more
* obscure protocols. For example, do not allow POP3/SMTP/IMAP; see
* CVE-2013-0249.
*
* Restricting protocols is only supported from 7.19.4 upwards.
*/
/* Restrict supported protocols to avoid security issues in the more
* obscure protocols. For example, do not allow POP3/SMTP/IMAP; see
* CVE-2013-0249.
*
* Restricting protocols is only supported from 7.19.4 upwards.
*/
#if LIBCURL_VERSION_NUM >= 0x071304
curl_easy_setopt(state->curl, CURLOPT_PROTOCOLS, PROTOCOLS);
curl_easy_setopt(state->curl, CURLOPT_REDIR_PROTOCOLS, PROTOCOLS);
curl_easy_setopt(state->curl, CURLOPT_PROTOCOLS, PROTOCOLS);
curl_easy_setopt(state->curl, CURLOPT_REDIR_PROTOCOLS, PROTOCOLS);
#endif
#ifdef DEBUG_VERBOSE
curl_easy_setopt(state->curl, CURLOPT_VERBOSE, 1);
curl_easy_setopt(state->curl, CURLOPT_VERBOSE, 1);
#endif
}
has_curl:
state->s = s;
@@ -482,50 +391,43 @@ static void curl_clean_state(CURLState *s)
static void curl_parse_filename(const char *filename, QDict *options,
Error **errp)
{
qdict_put(options, CURL_BLOCK_OPT_URL, qstring_from_str(filename));
}
static void curl_detach_aio_context(BlockDriverState *bs)
{
BDRVCURLState *s = bs->opaque;
int i;
#define RA_OPTSTR ":readahead="
char *file;
char *ra;
const char *ra_val;
int parse_state = 0;
for (i = 0; i < CURL_NUM_STATES; i++) {
if (s->states[i].in_use) {
curl_clean_state(&s->states[i]);
file = g_strdup(filename);
/* Parse a trailing ":readahead=#:" param, if present. */
ra = file + strlen(file) - 1;
while (ra >= file) {
if (parse_state == 0) {
if (*ra == ':') {
parse_state++;
} else {
break;
}
} else if (parse_state == 1) {
if (*ra > '9' || *ra < '0') {
char *opt_start = ra - strlen(RA_OPTSTR) + 1;
if (opt_start > file &&
strncmp(opt_start, RA_OPTSTR, strlen(RA_OPTSTR)) == 0) {
ra_val = ra + 1;
ra -= strlen(RA_OPTSTR) - 1;
*ra = '\0';
qdict_put(options, "readahead", qstring_from_str(ra_val));
}
break;
}
}
if (s->states[i].curl) {
curl_easy_cleanup(s->states[i].curl);
s->states[i].curl = NULL;
}
g_free(s->states[i].orig_buf);
s->states[i].orig_buf = NULL;
}
if (s->multi) {
curl_multi_cleanup(s->multi);
s->multi = NULL;
ra--;
}
timer_del(&s->timer);
}
qdict_put(options, "url", qstring_from_str(file));
static void curl_attach_aio_context(BlockDriverState *bs,
AioContext *new_context)
{
BDRVCURLState *s = bs->opaque;
aio_timer_init(new_context, &s->timer,
QEMU_CLOCK_REALTIME, SCALE_NS,
curl_multi_timeout_do, s);
assert(!s->multi);
s->multi = curl_multi_init();
s->aio_context = new_context;
curl_multi_setopt(s->multi, CURLMOPT_SOCKETFUNCTION, curl_sock_cb);
#ifdef NEED_CURL_TIMER_CALLBACK
curl_multi_setopt(s->multi, CURLMOPT_TIMERDATA, s);
curl_multi_setopt(s->multi, CURLMOPT_TIMERFUNCTION, curl_timer_cb);
#endif
g_free(file);
}
static QemuOptsList runtime_opts = {
@@ -533,55 +435,19 @@ static QemuOptsList runtime_opts = {
.head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
.desc = {
{
.name = CURL_BLOCK_OPT_URL,
.name = "url",
.type = QEMU_OPT_STRING,
.help = "URL to open",
},
{
.name = CURL_BLOCK_OPT_READAHEAD,
.name = "readahead",
.type = QEMU_OPT_SIZE,
.help = "Readahead size",
},
{
.name = CURL_BLOCK_OPT_SSLVERIFY,
.type = QEMU_OPT_BOOL,
.help = "Verify SSL certificate"
},
{
.name = CURL_BLOCK_OPT_TIMEOUT,
.type = QEMU_OPT_NUMBER,
.help = "Curl timeout"
},
{
.name = CURL_BLOCK_OPT_COOKIE,
.type = QEMU_OPT_STRING,
.help = "Pass the cookie or list of cookies with each request"
},
{
.name = CURL_BLOCK_OPT_USERNAME,
.type = QEMU_OPT_STRING,
.help = "Username for HTTP auth"
},
{
.name = CURL_BLOCK_OPT_PASSWORD_SECRET,
.type = QEMU_OPT_STRING,
.help = "ID of secret used as password for HTTP auth",
},
{
.name = CURL_BLOCK_OPT_PROXY_USERNAME,
.type = QEMU_OPT_STRING,
.help = "Username for HTTP proxy auth"
},
{
.name = CURL_BLOCK_OPT_PROXY_PASSWORD_SECRET,
.type = QEMU_OPT_STRING,
.help = "ID of secret used as password for HTTP proxy auth",
},
{ /* end of list */ }
},
};
static int curl_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
@@ -590,9 +456,7 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags,
QemuOpts *opts;
Error *local_err = NULL;
const char *file;
const char *cookie;
double d;
const char *secretid;
static int inited = 0;
@@ -608,61 +472,27 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags,
goto out_noclean;
}
s->readahead_size = qemu_opt_get_size(opts, CURL_BLOCK_OPT_READAHEAD,
READ_AHEAD_DEFAULT);
s->readahead_size = qemu_opt_get_size(opts, "readahead", READ_AHEAD_SIZE);
if ((s->readahead_size & 0x1ff) != 0) {
error_setg(errp, "HTTP_READAHEAD_SIZE %zd is not a multiple of 512",
s->readahead_size);
goto out_noclean;
}
s->timeout = qemu_opt_get_number(opts, CURL_BLOCK_OPT_TIMEOUT,
CURL_TIMEOUT_DEFAULT);
if (s->timeout > CURL_TIMEOUT_MAX) {
error_setg(errp, "timeout parameter is too large or negative");
goto out_noclean;
}
s->sslverify = qemu_opt_get_bool(opts, CURL_BLOCK_OPT_SSLVERIFY, true);
cookie = qemu_opt_get(opts, CURL_BLOCK_OPT_COOKIE);
s->cookie = g_strdup(cookie);
file = qemu_opt_get(opts, CURL_BLOCK_OPT_URL);
file = qemu_opt_get(opts, "url");
if (file == NULL) {
error_setg(errp, "curl block driver requires an 'url' option");
goto out_noclean;
}
s->username = g_strdup(qemu_opt_get(opts, CURL_BLOCK_OPT_USERNAME));
secretid = qemu_opt_get(opts, CURL_BLOCK_OPT_PASSWORD_SECRET);
if (secretid) {
s->password = qcrypto_secret_lookup_as_utf8(secretid, errp);
if (!s->password) {
goto out_noclean;
}
}
s->proxyusername = g_strdup(
qemu_opt_get(opts, CURL_BLOCK_OPT_PROXY_USERNAME));
secretid = qemu_opt_get(opts, CURL_BLOCK_OPT_PROXY_PASSWORD_SECRET);
if (secretid) {
s->proxypassword = qcrypto_secret_lookup_as_utf8(secretid, errp);
if (!s->proxypassword) {
goto out_noclean;
}
}
if (!inited) {
curl_global_init(CURL_GLOBAL_ALL);
inited = 1;
}
DPRINTF("CURL: Opening %s\n", file);
s->aio_context = bdrv_get_aio_context(bs);
s->url = g_strdup(file);
state = curl_init_state(bs, s);
state = curl_init_state(s);
if (!state)
goto out_noclean;
@@ -675,28 +505,11 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags,
curl_easy_setopt(state->curl, CURLOPT_HEADERDATA, s);
if (curl_easy_perform(state->curl))
goto out;
if (curl_easy_getinfo(state->curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &d)) {
curl_easy_getinfo(state->curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &d);
if (d)
s->len = (size_t)d;
else if(!s->len)
goto out;
}
/* Prior to CURL 7.19.4, a return value of 0 could mean that the file size is
 * not known or that the size is zero. From 7.19.4 CURL returns -1 if the size
 * is not known and zero if it really is a zero-length file. */
#if LIBCURL_VERSION_NUM >= 0x071304
if (d < 0) {
pstrcpy(state->errmsg, CURL_ERROR_SIZE,
"Server didn't report file size.");
goto out;
}
#else
if (d <= 0) {
pstrcpy(state->errmsg, CURL_ERROR_SIZE,
"Unknown file size or zero-length file.");
goto out;
}
#endif
s->len = (size_t)d;
if ((!strncasecmp(s->url, "http://", strlen("http://"))
|| !strncasecmp(s->url, "https://", strlen("https://")))
&& !s->accept_range) {
@@ -710,31 +523,49 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags,
curl_easy_cleanup(state->curl);
state->curl = NULL;
curl_attach_aio_context(bs, bdrv_get_aio_context(bs));
aio_timer_init(bdrv_get_aio_context(bs), &s->timer,
QEMU_CLOCK_REALTIME, SCALE_NS,
curl_multi_timeout_do, s);
// Now we know the file exists and its size, so let's
// initialize the multi interface!
s->multi = curl_multi_init();
curl_multi_setopt(s->multi, CURLMOPT_SOCKETDATA, s);
curl_multi_setopt(s->multi, CURLMOPT_SOCKETFUNCTION, curl_sock_cb);
#ifdef NEED_CURL_TIMER_CALLBACK
curl_multi_setopt(s->multi, CURLMOPT_TIMERDATA, s);
curl_multi_setopt(s->multi, CURLMOPT_TIMERFUNCTION, curl_timer_cb);
#endif
curl_multi_do(s);
qemu_opts_del(opts);
return 0;
out:
error_setg(errp, "CURL: Error opening file: %s", state->errmsg);
fprintf(stderr, "CURL: Error opening file: %s\n", state->errmsg);
curl_easy_cleanup(state->curl);
state->curl = NULL;
out_noclean:
g_free(s->cookie);
g_free(s->url);
qemu_opts_del(opts);
return -EINVAL;
}
static void curl_aio_cancel(BlockDriverAIOCB *blockacb)
{
// Do we have to implement canceling? Seems to work without...
}
static const AIOCBInfo curl_aiocb_info = {
.aiocb_size = sizeof(CURLAIOCB),
.cancel = curl_aio_cancel,
};
static void curl_readv_bh_cb(void *p)
{
CURLState *state;
int running;
CURLAIOCB *acb = p;
BDRVCURLState *s = acb->common.bs->opaque;
@@ -749,7 +580,7 @@ static void curl_readv_bh_cb(void *p)
// we can just call the callback and be done.
switch (curl_find_buf(s, start, acb->nb_sectors * SECTOR_SIZE, acb)) {
case FIND_RET_OK:
qemu_aio_unref(acb);
qemu_aio_release(acb);
// fall through
case FIND_RET_WAIT:
return;
@@ -758,10 +589,10 @@ static void curl_readv_bh_cb(void *p)
}
// No cache found, so let's start a new request
state = curl_init_state(acb->common.bs, s);
state = curl_init_state(s);
if (!state) {
acb->common.cb(acb->common.opaque, -EIO);
qemu_aio_unref(acb);
qemu_aio_release(acb);
return;
}
@@ -769,17 +600,12 @@ static void curl_readv_bh_cb(void *p)
acb->end = (acb->nb_sectors * SECTOR_SIZE);
state->buf_off = 0;
g_free(state->orig_buf);
if (state->orig_buf)
g_free(state->orig_buf);
state->buf_start = start;
state->buf_len = acb->end + s->readahead_size;
end = MIN(start + state->buf_len, s->len) - 1;
state->orig_buf = g_try_malloc(state->buf_len);
if (state->buf_len && state->orig_buf == NULL) {
curl_clean_state(state);
acb->common.cb(acb->common.opaque, -ENOMEM);
qemu_aio_unref(acb);
return;
}
state->orig_buf = g_malloc(state->buf_len);
state->acb[0] = acb;
snprintf(state->range, 127, "%zd-%zd", start, end);
@@ -788,14 +614,13 @@ static void curl_readv_bh_cb(void *p)
curl_easy_setopt(state->curl, CURLOPT_RANGE, state->range);
curl_multi_add_handle(s->multi, state->curl);
curl_multi_do(s);
/* Tell curl it needs to kick things off */
curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
}
static BlockAIOCB *curl_aio_readv(BlockDriverState *bs,
static BlockDriverAIOCB *curl_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockCompletionFunc *cb, void *opaque)
BlockDriverCompletionFunc *cb, void *opaque)
{
CURLAIOCB *acb;
@@ -805,7 +630,7 @@ static BlockAIOCB *curl_aio_readv(BlockDriverState *bs,
acb->sector_num = sector_num;
acb->nb_sectors = nb_sectors;
acb->bh = aio_bh_new(bdrv_get_aio_context(bs), curl_readv_bh_cb, acb);
acb->bh = qemu_bh_new(curl_readv_bh_cb, acb);
qemu_bh_schedule(acb->bh);
return &acb->common;
}
@@ -813,11 +638,26 @@ static BlockAIOCB *curl_aio_readv(BlockDriverState *bs,
static void curl_close(BlockDriverState *bs)
{
BDRVCURLState *s = bs->opaque;
int i;
DPRINTF("CURL: Close\n");
curl_detach_aio_context(bs);
for (i=0; i<CURL_NUM_STATES; i++) {
if (s->states[i].in_use)
curl_clean_state(&s->states[i]);
if (s->states[i].curl) {
curl_easy_cleanup(s->states[i].curl);
s->states[i].curl = NULL;
}
if (s->states[i].orig_buf) {
g_free(s->states[i].orig_buf);
s->states[i].orig_buf = NULL;
}
}
if (s->multi)
curl_multi_cleanup(s->multi);
timer_del(&s->timer);
g_free(s->cookie);
g_free(s->url);
}
@@ -828,83 +668,68 @@ static int64_t curl_getlength(BlockDriverState *bs)
}
static BlockDriver bdrv_http = {
.format_name = "http",
.protocol_name = "http",
.format_name = "http",
.protocol_name = "http",
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.bdrv_aio_readv = curl_aio_readv,
.bdrv_detach_aio_context = curl_detach_aio_context,
.bdrv_attach_aio_context = curl_attach_aio_context,
.bdrv_aio_readv = curl_aio_readv,
};
static BlockDriver bdrv_https = {
.format_name = "https",
.protocol_name = "https",
.format_name = "https",
.protocol_name = "https",
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.bdrv_aio_readv = curl_aio_readv,
.bdrv_detach_aio_context = curl_detach_aio_context,
.bdrv_attach_aio_context = curl_attach_aio_context,
.bdrv_aio_readv = curl_aio_readv,
};
static BlockDriver bdrv_ftp = {
.format_name = "ftp",
.protocol_name = "ftp",
.format_name = "ftp",
.protocol_name = "ftp",
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.bdrv_aio_readv = curl_aio_readv,
.bdrv_detach_aio_context = curl_detach_aio_context,
.bdrv_attach_aio_context = curl_attach_aio_context,
.bdrv_aio_readv = curl_aio_readv,
};
static BlockDriver bdrv_ftps = {
.format_name = "ftps",
.protocol_name = "ftps",
.format_name = "ftps",
.protocol_name = "ftps",
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.bdrv_aio_readv = curl_aio_readv,
.bdrv_detach_aio_context = curl_detach_aio_context,
.bdrv_attach_aio_context = curl_attach_aio_context,
.bdrv_aio_readv = curl_aio_readv,
};
static BlockDriver bdrv_tftp = {
.format_name = "tftp",
.protocol_name = "tftp",
.format_name = "tftp",
.protocol_name = "tftp",
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.bdrv_aio_readv = curl_aio_readv,
.bdrv_detach_aio_context = curl_detach_aio_context,
.bdrv_attach_aio_context = curl_attach_aio_context,
.bdrv_aio_readv = curl_aio_readv,
};
static void curl_block_init(void)

block/dictzip.c

@@ -0,0 +1,596 @@
/*
* DictZip Block driver for dictzip enabled gzip files
*
* Use the "dictzip" tool from the "dictd" package to create gzip files that
* contain the extra DictZip headers.
*
* dictzip(1) is a compression program which creates compressed files in the
* gzip format (see RFC 1952). However, unlike gzip(1), dictzip(1) compresses
* the file in pieces and stores an index to the pieces in the gzip header.
* This allows random access to the file at the granularity of the compressed
* pieces (currently about 64kB) while maintaining good compression ratios
* (within 5% of the expected ratio for dictionary data).
* dictd(8) uses files stored in this format.
*
* For details on DictZip see http://dict.org/.
*
* Copyright (c) 2009 Alexander Graf <agraf@suse.de>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include <zlib.h>
// #define DEBUG
#ifdef DEBUG
#define dprintf(fmt, ...) do { printf("dzip: " fmt, ## __VA_ARGS__); } while (0)
#else
#define dprintf(fmt, ...) do { } while (0)
#endif
#define SECTOR_SIZE 512
#define Z_STREAM_COUNT 4
#define CACHE_COUNT 20
/* magic values */
#define GZ_MAGIC1 0x1f
#define GZ_MAGIC2 0x8b
#define DZ_MAGIC1 'R'
#define DZ_MAGIC2 'A'
#define GZ_FEXTRA 0x04 /* Optional field (random access index) */
#define GZ_FNAME 0x08 /* Original name */
#define GZ_COMMENT 0x10 /* Zero-terminated, human-readable comment */
#define GZ_FHCRC 0x02 /* Header CRC16 */
/* offsets */
#define GZ_ID 0 /* GZ_MAGIC (16bit) */
#define GZ_FLG 3 /* FLaGs (see above) */
#define GZ_XLEN 10 /* eXtra LENgth (16bit) */
#define GZ_SI 12 /* Subfield ID (16bit) */
#define GZ_VERSION 16 /* Version for subfield format */
#define GZ_CHUNKSIZE 18 /* Chunk size (16bit) */
#define GZ_CHUNKCNT 20 /* Number of chunks (16bit) */
#define GZ_RNDDATA 22 /* Random access data (16bit) */
#define GZ_99_CHUNKSIZE 18 /* Chunk size (32bit) */
#define GZ_99_CHUNKCNT 22 /* Number of chunks (32bit) */
#define GZ_99_FILESIZE 26 /* Size of unpacked file (64bit) */
#define GZ_99_RNDDATA 34 /* Random access data (32bit) */
struct BDRVDictZipState;
typedef struct DictZipAIOCB {
BlockDriverAIOCB common;
struct BDRVDictZipState *s;
QEMUIOVector *qiov; /* QIOV of the original request */
QEMUIOVector *qiov_gz; /* QIOV of the gz subrequest */
QEMUBH *bh; /* BH for cache */
z_stream *zStream; /* stream to use for decoding */
int zStream_id; /* stream id of the above pointer */
size_t start; /* offset into the uncompressed file */
size_t len; /* uncompressed bytes to read */
uint8_t *gzipped; /* the gzipped data */
uint8_t *buf; /* cached result */
size_t gz_len; /* amount of gzip data */
size_t gz_start; /* uncompressed starting point of gzip data */
uint64_t offset; /* offset for "start" into the uncompressed chunk */
int chunks_len; /* amount of uncompressed data in all gzip data */
} DictZipAIOCB;
typedef struct dict_cache {
size_t start;
size_t len;
uint8_t *buf;
} DictCache;
typedef struct BDRVDictZipState {
BlockDriverState *hd;
z_stream zStream[Z_STREAM_COUNT];
DictCache cache[CACHE_COUNT];
int cache_index;
uint8_t stream_in_use;
uint64_t chunk_len;
uint32_t chunk_cnt;
uint16_t *chunks;
uint32_t *chunks32;
uint64_t *offsets;
int64_t file_len;
} BDRVDictZipState;
static int dictzip_probe(const uint8_t *buf, int buf_size, const char *filename)
{
if (buf_size < 2)
return 0;
/* We match on every gzip file */
if ((buf[0] == GZ_MAGIC1) && (buf[1] == GZ_MAGIC2))
return 100;
return 0;
}
static int start_zStream(z_stream *zStream)
{
zStream->zalloc = NULL;
zStream->zfree = NULL;
zStream->opaque = NULL;
zStream->next_in = 0;
zStream->avail_in = 0;
zStream->next_out = NULL;
zStream->avail_out = 0;
return inflateInit2( zStream, -15 );
}
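A note on the inflateInit2() call above: a negative windowBits value (here -15) puts zlib into raw-deflate mode with a 32 KiB window, i.e. it expects no zlib or gzip wrapper around the data, which fits the DictZip layout where the gzip header sits once at the start of the file and each chunk is flushed deflate data that can be decoded independently.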
static QemuOptsList runtime_opts = {
.name = "dzip",
.head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
.desc = {
{
.name = "filename",
.type = QEMU_OPT_STRING,
.help = "URL to the dictzip file",
},
{ /* end of list */ }
},
};
static int dictzip_open(BlockDriverState *bs, QDict *options, int flags, Error **errp)
{
BDRVDictZipState *s = bs->opaque;
const char *err = "Unknown (read error?)";
uint8_t magic[2];
char buf[100];
uint8_t header_flags;
uint16_t chunk_len16;
uint16_t chunk_cnt16;
uint16_t header_ver;
uint16_t tmp_short;
uint64_t offset;
int chunks_len;
int headerLength = GZ_XLEN - 1;
int rnd_offs;
int ret;
int i;
QemuOpts *opts;
Error *local_err = NULL;
const char *filename;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto fail;
}
filename = qemu_opt_get(opts, "filename");
if (!strncmp(filename, "dzip://", 7))
filename += 7;
else if (!strncmp(filename, "dzip:", 5))
filename += 5;
ret = bdrv_open(&s->hd, filename, NULL, NULL, flags | BDRV_O_PROTOCOL, NULL, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
qemu_opts_del(opts);
return ret;
}
/* initialize zlib streams */
for (i = 0; i < Z_STREAM_COUNT; i++) {
if (start_zStream( &s->zStream[i] ) != Z_OK) {
err = s->zStream[i].msg;
goto fail;
}
}
/* gzip header */
if (bdrv_pread(s->hd, GZ_ID, &magic, sizeof(magic)) != sizeof(magic))
goto fail;
if (!((magic[0] == GZ_MAGIC1) && (magic[1] == GZ_MAGIC2))) {
err = "No gzip file";
goto fail;
}
/* dzip header */
if (bdrv_pread(s->hd, GZ_FLG, &header_flags, 1) != 1)
goto fail;
if (!(header_flags & GZ_FEXTRA)) {
err = "Not a dictzip file (wrong flags)";
goto fail;
}
/* extra length */
if (bdrv_pread(s->hd, GZ_XLEN, &tmp_short, 2) != 2)
goto fail;
headerLength += le16_to_cpu(tmp_short) + 2;
/* DictZip magic */
if (bdrv_pread(s->hd, GZ_SI, &magic, 2) != 2)
goto fail;
if (magic[0] != DZ_MAGIC1 || magic[1] != DZ_MAGIC2) {
err = "Not a dictzip file (missing extra magic)";
goto fail;
}
/* DictZip version */
if (bdrv_pread(s->hd, GZ_VERSION, &header_ver, 2) != 2)
goto fail;
header_ver = le16_to_cpu(header_ver);
switch (header_ver) {
case 1: /* Normal DictZip */
/* number of chunks */
if (bdrv_pread(s->hd, GZ_CHUNKSIZE, &chunk_len16, 2) != 2)
goto fail;
s->chunk_len = le16_to_cpu(chunk_len16);
/* chunk count */
if (bdrv_pread(s->hd, GZ_CHUNKCNT, &chunk_cnt16, 2) != 2)
goto fail;
s->chunk_cnt = le16_to_cpu(chunk_cnt16);
chunks_len = sizeof(short) * s->chunk_cnt;
rnd_offs = GZ_RNDDATA;
break;
case 99: /* Special Alex pigz version */
/* number of chunks */
if (bdrv_pread(s->hd, GZ_99_CHUNKSIZE, &s->chunk_len, 4) != 4)
goto fail;
dprintf("chunk len [%#x] = %d\n", GZ_99_CHUNKSIZE, s->chunk_len);
s->chunk_len = le32_to_cpu(s->chunk_len);
/* chunk count */
if (bdrv_pread(s->hd, GZ_99_CHUNKCNT, &s->chunk_cnt, 4) != 4)
goto fail;
s->chunk_cnt = le32_to_cpu(s->chunk_cnt);
dprintf("chunk len | count = %d | %d\n", s->chunk_len, s->chunk_cnt);
/* file size */
if (bdrv_pread(s->hd, GZ_99_FILESIZE, &s->file_len, 8) != 8)
goto fail;
s->file_len = le64_to_cpu(s->file_len);
chunks_len = sizeof(int) * s->chunk_cnt;
rnd_offs = GZ_99_RNDDATA;
break;
default:
err = "Invalid DictZip version";
goto fail;
}
/* random access data */
s->chunks = g_malloc(chunks_len);
if (header_ver == 99)
s->chunks32 = (uint32_t *)s->chunks;
if (bdrv_pread(s->hd, rnd_offs, s->chunks, chunks_len) != chunks_len)
goto fail;
/* orig filename */
if (header_flags & GZ_FNAME) {
if (bdrv_pread(s->hd, headerLength + 1, buf, sizeof(buf)) != sizeof(buf))
goto fail;
buf[sizeof(buf) - 1] = '\0';
headerLength += strlen(buf) + 1;
if (strlen(buf) == sizeof(buf))
goto fail;
dprintf("filename: %s\n", buf);
}
/* comment field */
if (header_flags & GZ_COMMENT) {
if (bdrv_pread(s->hd, headerLength, buf, sizeof(buf)) != sizeof(buf))
goto fail;
buf[sizeof(buf) - 1] = '\0';
headerLength += strlen(buf) + 1;
if (strlen(buf) == sizeof(buf))
goto fail;
dprintf("comment: %s\n", buf);
}
if (header_flags & GZ_FHCRC)
headerLength += 2;
/* uncompressed file length*/
if (!s->file_len) {
uint32_t file_len;
if (bdrv_pread(s->hd, bdrv_getlength(s->hd) - 4, &file_len, 4) != 4)
goto fail;
s->file_len = le32_to_cpu(file_len);
}
/* compute offsets */
s->offsets = g_malloc(sizeof( *s->offsets ) * s->chunk_cnt);
for (offset = headerLength + 1, i = 0; i < s->chunk_cnt; i++) {
s->offsets[i] = offset;
switch (header_ver) {
case 1:
offset += s->chunks[i];
break;
case 99:
offset += s->chunks32[i];
break;
}
dprintf("chunk %#x - %#x = offset %#x -> %#x\n", i * s->chunk_len, (i+1) * s->chunk_len, s->offsets[i], offset);
}
qemu_opts_del(opts);
return 0;
fail:
fprintf(stderr, "DictZip: Error opening file: %s\n", err);
bdrv_unref(s->hd);
if (s->chunks)
g_free(s->chunks);
qemu_opts_del(opts);
return -EINVAL;
}
/* This callback gets invoked when we have the result in cache already */
static void dictzip_cache_cb(void *opaque)
{
DictZipAIOCB *acb = (DictZipAIOCB *)opaque;
qemu_iovec_from_buf(acb->qiov, 0, acb->buf, acb->len);
acb->common.cb(acb->common.opaque, 0);
qemu_bh_delete(acb->bh);
qemu_aio_release(acb);
}
/* This callback gets invoked by the underlying block reader when we have
* all compressed data. We uncompress in here. */
static void dictzip_read_cb(void *opaque, int ret)
{
DictZipAIOCB *acb = (DictZipAIOCB *)opaque;
struct BDRVDictZipState *s = acb->s;
uint8_t *buf;
DictCache *cache;
int r;
buf = g_malloc(acb->chunks_len);
/* uncompress the chunk */
acb->zStream->next_in = acb->gzipped;
acb->zStream->avail_in = acb->gz_len;
acb->zStream->next_out = buf;
acb->zStream->avail_out = acb->chunks_len;
r = inflate( acb->zStream, Z_PARTIAL_FLUSH );
if ( (r != Z_OK) && (r != Z_STREAM_END) )
fprintf(stderr, "Error inflating: [%d] %s\n", r, acb->zStream->msg);
if ( r == Z_STREAM_END )
inflateReset(acb->zStream);
dprintf("inflating [%d] left: %d | %d bytes\n", r, acb->zStream->avail_in, acb->zStream->avail_out);
s->stream_in_use &= ~(1 << acb->zStream_id);
/* notify the caller */
qemu_iovec_from_buf(acb->qiov, 0, buf + acb->offset, acb->len);
acb->common.cb(acb->common.opaque, 0);
/* fill the cache */
cache = &s->cache[s->cache_index];
s->cache_index++;
if (s->cache_index == CACHE_COUNT)
s->cache_index = 0;
cache->len = 0;
if (cache->buf)
g_free(cache->buf);
cache->start = acb->gz_start;
cache->buf = buf;
cache->len = acb->chunks_len;
/* free occupied resources */
g_free(acb->qiov_gz);
qemu_aio_release(acb);
}
static void dictzip_aio_cancel(BlockDriverAIOCB *blockacb)
{
}
static const AIOCBInfo dictzip_aiocb_info = {
.aiocb_size = sizeof(DictZipAIOCB),
.cancel = dictzip_aio_cancel,
};
/* This is where we get a request from a caller to read something */
static BlockDriverAIOCB *dictzip_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVDictZipState *s = bs->opaque;
DictZipAIOCB *acb;
QEMUIOVector *qiov_gz;
struct iovec *iov;
uint8_t *buf;
size_t start = sector_num * SECTOR_SIZE;
size_t len = nb_sectors * SECTOR_SIZE;
size_t end = start + len;
size_t gz_start;
size_t gz_len;
int64_t gz_sector_num;
int gz_nb_sectors;
int first_chunk, last_chunk;
int first_offset;
int i;
acb = qemu_aio_get(&dictzip_aiocb_info, bs, cb, opaque);
if (!acb)
return NULL;
/* Search Cache */
for (i = 0; i < CACHE_COUNT; i++) {
if (!s->cache[i].len)
continue;
if ((start >= s->cache[i].start) &&
(end <= (s->cache[i].start + s->cache[i].len))) {
acb->buf = s->cache[i].buf + (start - s->cache[i].start);
acb->len = len;
acb->qiov = qiov;
acb->bh = qemu_bh_new(dictzip_cache_cb, acb);
qemu_bh_schedule(acb->bh);
return &acb->common;
}
}
/* No cache, so let's decode */
do {
for (i = 0; i < Z_STREAM_COUNT; i++) {
if (!(s->stream_in_use & (1 << i))) {
s->stream_in_use |= (1 << i);
acb->zStream_id = i;
acb->zStream = &s->zStream[i];
break;
}
}
} while(!acb->zStream);
/* We need to read these chunks */
first_chunk = start / s->chunk_len;
first_offset = start - first_chunk * s->chunk_len;
last_chunk = end / s->chunk_len;
gz_start = s->offsets[first_chunk];
gz_len = 0;
for (i = first_chunk; i <= last_chunk; i++) {
if (s->chunks32)
gz_len += s->chunks32[i];
else
gz_len += s->chunks[i];
}
gz_sector_num = gz_start / SECTOR_SIZE;
gz_nb_sectors = (gz_len / SECTOR_SIZE);
/* account for tail and heads */
while ((gz_start + gz_len) > ((gz_sector_num + gz_nb_sectors) * SECTOR_SIZE))
gz_nb_sectors++;
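To make the arithmetic above concrete (numbers purely illustrative): with chunk_len = 65536 and a request of start = 200000, len = 4096, we get first_chunk = 200000 / 65536 = 3, first_offset = 200000 - 3 * 65536 = 3392, end = 204096 and last_chunk = 204096 / 65536 = 3. Only compressed chunk 3 is fetched, starting at s->offsets[3] in the gzip file, and the caller's data begins 3392 bytes into the decompressed chunk.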
/* Allocate qiov, iov and buf in one chunk so we only need to free qiov */
qiov_gz = g_malloc0(sizeof(QEMUIOVector) + sizeof(struct iovec) +
(gz_nb_sectors * SECTOR_SIZE));
iov = (struct iovec *)(((char *)qiov_gz) + sizeof(QEMUIOVector));
buf = ((uint8_t *)iov) + sizeof(struct iovec *);
/* Kick off the read by the backing file, so we can start decompressing */
iov->iov_base = (void *)buf;
iov->iov_len = gz_nb_sectors * 512;
qemu_iovec_init_external(qiov_gz, iov, 1);
dprintf("read %d - %d => %d - %d\n", start, end, gz_start, gz_start + gz_len);
acb->s = s;
acb->qiov = qiov;
acb->qiov_gz = qiov_gz;
acb->start = start;
acb->len = len;
acb->gzipped = buf + (gz_start % SECTOR_SIZE);
acb->gz_len = gz_len;
acb->gz_start = first_chunk * s->chunk_len;
acb->offset = first_offset;
acb->chunks_len = (last_chunk - first_chunk + 1) * s->chunk_len;
return bdrv_aio_readv(s->hd, gz_sector_num, qiov_gz, gz_nb_sectors,
dictzip_read_cb, acb);
}
static void dictzip_close(BlockDriverState *bs)
{
BDRVDictZipState *s = bs->opaque;
int i;
for (i = 0; i < CACHE_COUNT; i++) {
if (!s->cache[i].len)
continue;
g_free(s->cache[i].buf);
}
for (i = 0; i < Z_STREAM_COUNT; i++) {
inflateEnd(&s->zStream[i]);
}
if (s->chunks)
g_free(s->chunks);
if (s->offsets)
g_free(s->offsets);
dprintf("Close\n");
}
static int64_t dictzip_getlength(BlockDriverState *bs)
{
BDRVDictZipState *s = bs->opaque;
dprintf("getlength -> %ld\n", s->file_len);
return s->file_len;
}
static BlockDriver bdrv_dictzip = {
.format_name = "dzip",
.protocol_name = "dzip",
.instance_size = sizeof(BDRVDictZipState),
.bdrv_file_open = dictzip_open,
.bdrv_close = dictzip_close,
.bdrv_getlength = dictzip_getlength,
.bdrv_probe = dictzip_probe,
.bdrv_aio_readv = dictzip_aio_readv,
};
static void dictzip_block_init(void)
{
bdrv_register(&bdrv_dictzip);
}
block_init(dictzip_block_init);


@@ -1,387 +0,0 @@
/*
* Block Dirty Bitmap
*
* Copyright (c) 2016 Red Hat. Inc
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "trace.h"
#include "block/block_int.h"
#include "block/blockjob.h"
/**
* A BdrvDirtyBitmap can be in three possible states:
* (1) successor is NULL and disabled is false: full r/w mode
* (2) successor is NULL and disabled is true: read only mode ("disabled")
* (3) successor is set: frozen mode.
* A frozen bitmap cannot be renamed, deleted, anonymized, cleared, set,
* or enabled. A frozen bitmap can only abdicate() or reclaim().
*/
struct BdrvDirtyBitmap {
HBitmap *bitmap; /* Dirty sector bitmap implementation */
BdrvDirtyBitmap *successor; /* Anonymous child; implies frozen status */
char *name; /* Optional non-empty unique ID */
int64_t size; /* Size of the bitmap (Number of sectors) */
bool disabled; /* Bitmap is read-only */
QLIST_ENTRY(BdrvDirtyBitmap) list;
};
BdrvDirtyBitmap *bdrv_find_dirty_bitmap(BlockDriverState *bs, const char *name)
{
BdrvDirtyBitmap *bm;
assert(name);
QLIST_FOREACH(bm, &bs->dirty_bitmaps, list) {
if (bm->name && !strcmp(name, bm->name)) {
return bm;
}
}
return NULL;
}
void bdrv_dirty_bitmap_make_anon(BdrvDirtyBitmap *bitmap)
{
assert(!bdrv_dirty_bitmap_frozen(bitmap));
g_free(bitmap->name);
bitmap->name = NULL;
}
BdrvDirtyBitmap *bdrv_create_dirty_bitmap(BlockDriverState *bs,
uint32_t granularity,
const char *name,
Error **errp)
{
int64_t bitmap_size;
BdrvDirtyBitmap *bitmap;
uint32_t sector_granularity;
assert((granularity & (granularity - 1)) == 0);
if (name && bdrv_find_dirty_bitmap(bs, name)) {
error_setg(errp, "Bitmap already exists: %s", name);
return NULL;
}
sector_granularity = granularity >> BDRV_SECTOR_BITS;
assert(sector_granularity);
bitmap_size = bdrv_nb_sectors(bs);
if (bitmap_size < 0) {
error_setg_errno(errp, -bitmap_size, "could not get length of device");
errno = -bitmap_size;
return NULL;
}
bitmap = g_new0(BdrvDirtyBitmap, 1);
bitmap->bitmap = hbitmap_alloc(bitmap_size, ctz32(sector_granularity));
bitmap->size = bitmap_size;
bitmap->name = g_strdup(name);
bitmap->disabled = false;
QLIST_INSERT_HEAD(&bs->dirty_bitmaps, bitmap, list);
return bitmap;
}
bool bdrv_dirty_bitmap_frozen(BdrvDirtyBitmap *bitmap)
{
return bitmap->successor;
}
bool bdrv_dirty_bitmap_enabled(BdrvDirtyBitmap *bitmap)
{
return !(bitmap->disabled || bitmap->successor);
}
DirtyBitmapStatus bdrv_dirty_bitmap_status(BdrvDirtyBitmap *bitmap)
{
if (bdrv_dirty_bitmap_frozen(bitmap)) {
return DIRTY_BITMAP_STATUS_FROZEN;
} else if (!bdrv_dirty_bitmap_enabled(bitmap)) {
return DIRTY_BITMAP_STATUS_DISABLED;
} else {
return DIRTY_BITMAP_STATUS_ACTIVE;
}
}
/**
* Create a successor bitmap destined to replace this bitmap after an operation.
* Requires that the bitmap is not frozen and has no successor.
*/
int bdrv_dirty_bitmap_create_successor(BlockDriverState *bs,
BdrvDirtyBitmap *bitmap, Error **errp)
{
uint64_t granularity;
BdrvDirtyBitmap *child;
if (bdrv_dirty_bitmap_frozen(bitmap)) {
error_setg(errp, "Cannot create a successor for a bitmap that is "
"currently frozen");
return -1;
}
assert(!bitmap->successor);
/* Create an anonymous successor */
granularity = bdrv_dirty_bitmap_granularity(bitmap);
child = bdrv_create_dirty_bitmap(bs, granularity, NULL, errp);
if (!child) {
return -1;
}
/* Successor will be on or off based on our current state. */
child->disabled = bitmap->disabled;
/* Install the successor and freeze the parent */
bitmap->successor = child;
return 0;
}
/**
* For a bitmap with a successor, yield our name to the successor,
* delete the old bitmap, and return a handle to the new bitmap.
*/
BdrvDirtyBitmap *bdrv_dirty_bitmap_abdicate(BlockDriverState *bs,
BdrvDirtyBitmap *bitmap,
Error **errp)
{
char *name;
BdrvDirtyBitmap *successor = bitmap->successor;
if (successor == NULL) {
error_setg(errp, "Cannot relinquish control if "
"there's no successor present");
return NULL;
}
name = bitmap->name;
bitmap->name = NULL;
successor->name = name;
bitmap->successor = NULL;
bdrv_release_dirty_bitmap(bs, bitmap);
return successor;
}
/**
* In cases of failure where we can no longer safely delete the parent,
* we may wish to re-join the parent and child/successor.
* The merged parent will be un-frozen, but not explicitly re-enabled.
*/
BdrvDirtyBitmap *bdrv_reclaim_dirty_bitmap(BlockDriverState *bs,
BdrvDirtyBitmap *parent,
Error **errp)
{
BdrvDirtyBitmap *successor = parent->successor;
if (!successor) {
error_setg(errp, "Cannot reclaim a successor when none is present");
return NULL;
}
if (!hbitmap_merge(parent->bitmap, successor->bitmap)) {
error_setg(errp, "Merging of parent and successor bitmap failed");
return NULL;
}
bdrv_release_dirty_bitmap(bs, successor);
parent->successor = NULL;
return parent;
}
/**
* Truncates _all_ bitmaps attached to a BDS.
*/
void bdrv_dirty_bitmap_truncate(BlockDriverState *bs)
{
BdrvDirtyBitmap *bitmap;
uint64_t size = bdrv_nb_sectors(bs);
QLIST_FOREACH(bitmap, &bs->dirty_bitmaps, list) {
assert(!bdrv_dirty_bitmap_frozen(bitmap));
hbitmap_truncate(bitmap->bitmap, size);
bitmap->size = size;
}
}
static void bdrv_do_release_matching_dirty_bitmap(BlockDriverState *bs,
BdrvDirtyBitmap *bitmap,
bool only_named)
{
BdrvDirtyBitmap *bm, *next;
QLIST_FOREACH_SAFE(bm, &bs->dirty_bitmaps, list, next) {
if ((!bitmap || bm == bitmap) && (!only_named || bm->name)) {
assert(!bdrv_dirty_bitmap_frozen(bm));
QLIST_REMOVE(bm, list);
hbitmap_free(bm->bitmap);
g_free(bm->name);
g_free(bm);
if (bitmap) {
return;
}
}
}
}
void bdrv_release_dirty_bitmap(BlockDriverState *bs, BdrvDirtyBitmap *bitmap)
{
bdrv_do_release_matching_dirty_bitmap(bs, bitmap, false);
}
/**
* Release all named dirty bitmaps attached to a BDS (for use in bdrv_close()).
* There must not be any frozen bitmaps attached.
*/
void bdrv_release_named_dirty_bitmaps(BlockDriverState *bs)
{
bdrv_do_release_matching_dirty_bitmap(bs, NULL, true);
}
void bdrv_disable_dirty_bitmap(BdrvDirtyBitmap *bitmap)
{
assert(!bdrv_dirty_bitmap_frozen(bitmap));
bitmap->disabled = true;
}
void bdrv_enable_dirty_bitmap(BdrvDirtyBitmap *bitmap)
{
assert(!bdrv_dirty_bitmap_frozen(bitmap));
bitmap->disabled = false;
}
BlockDirtyInfoList *bdrv_query_dirty_bitmaps(BlockDriverState *bs)
{
BdrvDirtyBitmap *bm;
BlockDirtyInfoList *list = NULL;
BlockDirtyInfoList **plist = &list;
QLIST_FOREACH(bm, &bs->dirty_bitmaps, list) {
BlockDirtyInfo *info = g_new0(BlockDirtyInfo, 1);
BlockDirtyInfoList *entry = g_new0(BlockDirtyInfoList, 1);
info->count = bdrv_get_dirty_count(bm);
info->granularity = bdrv_dirty_bitmap_granularity(bm);
info->has_name = !!bm->name;
info->name = g_strdup(bm->name);
info->status = bdrv_dirty_bitmap_status(bm);
entry->value = info;
*plist = entry;
plist = &entry->next;
}
return list;
}
int bdrv_get_dirty(BlockDriverState *bs, BdrvDirtyBitmap *bitmap,
int64_t sector)
{
if (bitmap) {
return hbitmap_get(bitmap->bitmap, sector);
} else {
return 0;
}
}
/**
* Chooses a default granularity based on the existing cluster size,
* but clamped between [4K, 64K]. Defaults to 64K in the case that there
* is no cluster size information available.
*/
uint32_t bdrv_get_default_bitmap_granularity(BlockDriverState *bs)
{
BlockDriverInfo bdi;
uint32_t granularity;
if (bdrv_get_info(bs, &bdi) >= 0 && bdi.cluster_size > 0) {
granularity = MAX(4096, bdi.cluster_size);
granularity = MIN(65536, granularity);
} else {
granularity = 65536;
}
return granularity;
}
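For example, a format reporting 128 KiB clusters ends up with MIN(65536, MAX(4096, 131072)) = 64 KiB granularity, a 1 KiB cluster size is clamped up to 4 KiB, and an image with no cluster size information gets the 64 KiB default.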
uint32_t bdrv_dirty_bitmap_granularity(BdrvDirtyBitmap *bitmap)
{
return BDRV_SECTOR_SIZE << hbitmap_granularity(bitmap->bitmap);
}
void bdrv_dirty_iter_init(BdrvDirtyBitmap *bitmap, HBitmapIter *hbi)
{
hbitmap_iter_init(hbi, bitmap->bitmap, 0);
}
void bdrv_set_dirty_bitmap(BdrvDirtyBitmap *bitmap,
int64_t cur_sector, int64_t nr_sectors)
{
assert(bdrv_dirty_bitmap_enabled(bitmap));
hbitmap_set(bitmap->bitmap, cur_sector, nr_sectors);
}
void bdrv_reset_dirty_bitmap(BdrvDirtyBitmap *bitmap,
int64_t cur_sector, int64_t nr_sectors)
{
assert(bdrv_dirty_bitmap_enabled(bitmap));
hbitmap_reset(bitmap->bitmap, cur_sector, nr_sectors);
}
void bdrv_clear_dirty_bitmap(BdrvDirtyBitmap *bitmap, HBitmap **out)
{
assert(bdrv_dirty_bitmap_enabled(bitmap));
if (!out) {
hbitmap_reset_all(bitmap->bitmap);
} else {
HBitmap *backup = bitmap->bitmap;
bitmap->bitmap = hbitmap_alloc(bitmap->size,
hbitmap_granularity(backup));
*out = backup;
}
}
void bdrv_undo_clear_dirty_bitmap(BdrvDirtyBitmap *bitmap, HBitmap *in)
{
HBitmap *tmp = bitmap->bitmap;
assert(bdrv_dirty_bitmap_enabled(bitmap));
bitmap->bitmap = in;
hbitmap_free(tmp);
}
void bdrv_set_dirty(BlockDriverState *bs, int64_t cur_sector,
int64_t nr_sectors)
{
BdrvDirtyBitmap *bitmap;
QLIST_FOREACH(bitmap, &bs->dirty_bitmaps, list) {
if (!bdrv_dirty_bitmap_enabled(bitmap)) {
continue;
}
hbitmap_set(bitmap->bitmap, cur_sector, nr_sectors);
}
}
/**
* Advance an HBitmapIter to an arbitrary offset.
*/
void bdrv_set_dirty_iter(HBitmapIter *hbi, int64_t offset)
{
assert(hbi->hb);
hbitmap_iter_init(hbi, hbi->hb, offset);
}
int64_t bdrv_get_dirty_count(BdrvDirtyBitmap *bitmap)
{
return hbitmap_count(bitmap->bitmap);
}


@@ -21,17 +21,11 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/bswap.h"
#include "qemu/error-report.h"
#include "qemu/module.h"
#include <zlib.h>
#ifdef CONFIG_BZIP2
#include <bzlib.h>
#endif
enum {
/* Limit chunk sizes to prevent unreasonable amounts of memory being used
@@ -61,9 +55,6 @@ typedef struct BDRVDMGState {
uint8_t *compressed_chunk;
uint8_t *uncompressed_chunk;
z_stream zstream;
#ifdef CONFIG_BZIP2
bz_stream bzstream;
#endif
} BDRVDMGState;
static int dmg_probe(const uint8_t *buf, int buf_size, const char *filename)
@@ -109,16 +100,6 @@ static int read_uint32(BlockDriverState *bs, int64_t offset, uint32_t *result)
return 0;
}
static inline uint64_t buff_read_uint64(const uint8_t *buffer, int64_t offset)
{
return be64_to_cpu(*(uint64_t *)&buffer[offset]);
}
static inline uint32_t buff_read_uint32(const uint8_t *buffer, int64_t offset)
{
return be32_to_cpu(*(uint32_t *)&buffer[offset]);
}
/* Increase max chunk sizes, if necessary. This function is used to calculate
* the buffer sizes needed for compressed/uncompressed chunk I/O.
*/
@@ -131,7 +112,6 @@ static void update_max_chunk_size(BDRVDMGState *s, uint32_t chunk,
switch (s->types[chunk]) {
case 0x80000005: /* zlib compressed */
case 0x80000006: /* bzip2 compressed */
compressed_size = s->lengths[chunk];
uncompressed_sectors = s->sectorcounts[chunk];
break;
@@ -139,9 +119,7 @@ static void update_max_chunk_size(BDRVDMGState *s, uint32_t chunk,
uncompressed_sectors = (s->lengths[chunk] + 511) / 512;
break;
case 2: /* zero */
/* as the all-zeroes block may be large, it is treated specially: the
* sector is not copied from a large buffer, a simple memset is used
* instead. Therefore uncompressed_sectors does not need to be set. */
uncompressed_sectors = s->sectorcounts[chunk];
break;
}
@@ -153,379 +131,161 @@ static void update_max_chunk_size(BDRVDMGState *s, uint32_t chunk,
}
}
static int64_t dmg_find_koly_offset(BdrvChild *file, Error **errp)
{
BlockDriverState *file_bs = file->bs;
int64_t length;
int64_t offset = 0;
uint8_t buffer[515];
int i, ret;
/* bdrv_getlength returns a multiple of block size (512), rounded up. Since
* dmg images can have odd sizes, try to look for the "koly" magic which
* marks the beginning of the UDIF trailer (512 bytes). This magic can be found
* in the last 511 bytes of the second-last sector or the first 4 bytes of
* the last sector (search space: 515 bytes) */
length = bdrv_getlength(file_bs);
if (length < 0) {
error_setg_errno(errp, -length,
"Failed to get file size while reading UDIF trailer");
return length;
} else if (length < 512) {
error_setg(errp, "dmg file must be at least 512 bytes long");
return -EINVAL;
}
if (length > 511 + 512) {
offset = length - 511 - 512;
}
length = length < 515 ? length : 515;
ret = bdrv_pread(file, offset, buffer, length);
if (ret < 0) {
error_setg_errno(errp, -ret, "Failed while reading UDIF trailer");
return ret;
}
for (i = 0; i < length - 3; i++) {
if (buffer[i] == 'k' && buffer[i+1] == 'o' &&
buffer[i+2] == 'l' && buffer[i+3] == 'y') {
return offset + i;
}
}
error_setg(errp, "Could not locate UDIF trailer in dmg file");
return -EINVAL;
}
/* used when building the sector table */
typedef struct DmgHeaderState {
/* used internally by dmg_read_mish_block to remember offsets of blocks
* across calls */
uint64_t data_fork_offset;
/* exported for dmg_open */
uint32_t max_compressed_size;
uint32_t max_sectors_per_chunk;
} DmgHeaderState;
static bool dmg_is_known_block_type(uint32_t entry_type)
{
switch (entry_type) {
case 0x00000001: /* uncompressed */
case 0x00000002: /* zeroes */
case 0x80000005: /* zlib */
#ifdef CONFIG_BZIP2
case 0x80000006: /* bzip2 */
#endif
return true;
default:
return false;
}
}
static int dmg_read_mish_block(BDRVDMGState *s, DmgHeaderState *ds,
uint8_t *buffer, uint32_t count)
{
uint32_t type, i;
int ret;
size_t new_size;
uint32_t chunk_count;
int64_t offset = 0;
uint64_t data_offset;
uint64_t in_offset = ds->data_fork_offset;
uint64_t out_offset;
type = buff_read_uint32(buffer, offset);
/* skip data that is not a valid MISH block (invalid magic or too small) */
if (type != 0x6d697368 || count < 244) {
/* assume success for now */
return 0;
}
/* chunk offsets are relative to this sector number */
out_offset = buff_read_uint64(buffer, offset + 8);
/* location in data fork for (compressed) blob (in bytes) */
data_offset = buff_read_uint64(buffer, offset + 0x18);
in_offset += data_offset;
/* move to the beginning of the chunk entries */
offset += 204;
chunk_count = (count - 204) / 40;
new_size = sizeof(uint64_t) * (s->n_chunks + chunk_count);
s->types = g_realloc(s->types, new_size / 2);
s->offsets = g_realloc(s->offsets, new_size);
s->lengths = g_realloc(s->lengths, new_size);
s->sectors = g_realloc(s->sectors, new_size);
s->sectorcounts = g_realloc(s->sectorcounts, new_size);
for (i = s->n_chunks; i < s->n_chunks + chunk_count; i++) {
s->types[i] = buff_read_uint32(buffer, offset);
if (!dmg_is_known_block_type(s->types[i])) {
chunk_count--;
i--;
offset += 40;
continue;
}
/* sector number */
s->sectors[i] = buff_read_uint64(buffer, offset + 8);
s->sectors[i] += out_offset;
/* sector count */
s->sectorcounts[i] = buff_read_uint64(buffer, offset + 0x10);
/* all-zeroes sector (type 2) does not need to be "uncompressed" and can
* therefore be unbounded. */
if (s->types[i] != 2 && s->sectorcounts[i] > DMG_SECTORCOUNTS_MAX) {
error_report("sector count %" PRIu64 " for chunk %" PRIu32
" is larger than max (%u)",
s->sectorcounts[i], i, DMG_SECTORCOUNTS_MAX);
ret = -EINVAL;
goto fail;
}
/* offset in (compressed) data fork */
s->offsets[i] = buff_read_uint64(buffer, offset + 0x18);
s->offsets[i] += in_offset;
/* length in (compressed) data fork */
s->lengths[i] = buff_read_uint64(buffer, offset + 0x20);
if (s->lengths[i] > DMG_LENGTHS_MAX) {
error_report("length %" PRIu64 " for chunk %" PRIu32
" is larger than max (%u)",
s->lengths[i], i, DMG_LENGTHS_MAX);
ret = -EINVAL;
goto fail;
}
update_max_chunk_size(s, i, &ds->max_compressed_size,
&ds->max_sectors_per_chunk);
offset += 40;
}
s->n_chunks += chunk_count;
return 0;
fail:
return ret;
}
static int dmg_read_resource_fork(BlockDriverState *bs, DmgHeaderState *ds,
uint64_t info_begin, uint64_t info_length)
static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVDMGState *s = bs->opaque;
uint64_t info_begin, info_end, last_in_offset, last_out_offset;
uint32_t count, tmp;
uint32_t max_compressed_size = 1, max_sectors_per_chunk = 1, i;
int64_t offset;
int ret;
uint32_t count, rsrc_data_offset;
uint8_t *buffer = NULL;
uint64_t info_end;
uint64_t offset;
/* read offset from begin of resource fork (info_begin) to resource data */
ret = read_uint32(bs, info_begin, &rsrc_data_offset);
bs->read_only = 1;
s->n_chunks = 0;
s->offsets = s->lengths = s->sectors = s->sectorcounts = NULL;
/* read offset of info blocks */
offset = bdrv_getlength(bs->file);
if (offset < 0) {
ret = offset;
goto fail;
}
offset -= 0x1d8;
ret = read_uint64(bs, offset, &info_begin);
if (ret < 0) {
goto fail;
} else if (rsrc_data_offset > info_length) {
} else if (info_begin == 0) {
ret = -EINVAL;
goto fail;
}
/* read length of resource data */
ret = read_uint32(bs, info_begin + 8, &count);
ret = read_uint32(bs, info_begin, &tmp);
if (ret < 0) {
goto fail;
} else if (count == 0 || rsrc_data_offset + count > info_length) {
} else if (tmp != 0x100) {
ret = -EINVAL;
goto fail;
}
/* beginning of resource data (consisting of one or more resources) */
offset = info_begin + rsrc_data_offset;
ret = read_uint32(bs, info_begin + 4, &count);
if (ret < 0) {
goto fail;
} else if (count == 0) {
ret = -EINVAL;
goto fail;
}
info_end = info_begin + count;
/* end of resource data (there is possibly a following resource map
* which will be ignored). */
info_end = offset + count;
offset = info_begin + 0x100;
/* read offsets (mish blocks) from one or more resources in resource data */
/* read offsets */
last_in_offset = last_out_offset = 0;
while (offset < info_end) {
/* size of following resource */
uint32_t type;
ret = read_uint32(bs, offset, &count);
if (ret < 0) {
goto fail;
} else if (count == 0 || count > info_end - offset) {
} else if (count == 0) {
ret = -EINVAL;
goto fail;
}
offset += 4;
buffer = g_realloc(buffer, count);
ret = bdrv_pread(bs->file, offset, buffer, count);
ret = read_uint32(bs, offset, &type);
if (ret < 0) {
goto fail;
}
ret = dmg_read_mish_block(s, ds, buffer, count);
if (ret < 0) {
goto fail;
if (type == 0x6d697368 && count >= 244) {
size_t new_size;
uint32_t chunk_count;
offset += 4;
offset += 200;
chunk_count = (count - 204) / 40;
new_size = sizeof(uint64_t) * (s->n_chunks + chunk_count);
s->types = g_realloc(s->types, new_size / 2);
s->offsets = g_realloc(s->offsets, new_size);
s->lengths = g_realloc(s->lengths, new_size);
s->sectors = g_realloc(s->sectors, new_size);
s->sectorcounts = g_realloc(s->sectorcounts, new_size);
for (i = s->n_chunks; i < s->n_chunks + chunk_count; i++) {
ret = read_uint32(bs, offset, &s->types[i]);
if (ret < 0) {
goto fail;
}
offset += 4;
if (s->types[i] != 0x80000005 && s->types[i] != 1 &&
s->types[i] != 2) {
if (s->types[i] == 0xffffffff && i > 0) {
last_in_offset = s->offsets[i - 1] + s->lengths[i - 1];
last_out_offset = s->sectors[i - 1] +
s->sectorcounts[i - 1];
}
chunk_count--;
i--;
offset += 36;
continue;
}
offset += 4;
ret = read_uint64(bs, offset, &s->sectors[i]);
if (ret < 0) {
goto fail;
}
s->sectors[i] += last_out_offset;
offset += 8;
ret = read_uint64(bs, offset, &s->sectorcounts[i]);
if (ret < 0) {
goto fail;
}
offset += 8;
if (s->sectorcounts[i] > DMG_SECTORCOUNTS_MAX) {
error_report("sector count %" PRIu64 " for chunk %u is "
"larger than max (%u)",
s->sectorcounts[i], i, DMG_SECTORCOUNTS_MAX);
ret = -EINVAL;
goto fail;
}
ret = read_uint64(bs, offset, &s->offsets[i]);
if (ret < 0) {
goto fail;
}
s->offsets[i] += last_in_offset;
offset += 8;
ret = read_uint64(bs, offset, &s->lengths[i]);
if (ret < 0) {
goto fail;
}
offset += 8;
if (s->lengths[i] > DMG_LENGTHS_MAX) {
error_report("length %" PRIu64 " for chunk %u is larger "
"than max (%u)",
s->lengths[i], i, DMG_LENGTHS_MAX);
ret = -EINVAL;
goto fail;
}
update_max_chunk_size(s, i, &max_compressed_size,
&max_sectors_per_chunk);
}
s->n_chunks += chunk_count;
}
/* advance offset by size of resource */
offset += count;
}
ret = 0;
fail:
g_free(buffer);
return ret;
}
static int dmg_read_plist_xml(BlockDriverState *bs, DmgHeaderState *ds,
uint64_t info_begin, uint64_t info_length)
{
BDRVDMGState *s = bs->opaque;
int ret;
uint8_t *buffer = NULL;
char *data_begin, *data_end;
/* Have at least some length to avoid NULL for g_malloc. Attempt to set a
* safe upper cap on the data length. A test sample had an XML length of
* about 1 MiB. */
if (info_length == 0 || info_length > 16 * 1024 * 1024) {
ret = -EINVAL;
goto fail;
}
buffer = g_malloc(info_length + 1);
buffer[info_length] = '\0';
ret = bdrv_pread(bs->file, info_begin, buffer, info_length);
if (ret != info_length) {
ret = -EINVAL;
goto fail;
}
/* look for <data>...</data>. The data is 284 (0x11c) bytes after base64
* decode. The actual data element has 431 (0x1af) bytes which includes tabs
* and line feeds. */
data_end = (char *)buffer;
while ((data_begin = strstr(data_end, "<data>")) != NULL) {
guchar *mish;
gsize out_len = 0;
data_begin += 6;
data_end = strstr(data_begin, "</data>");
/* malformed XML? */
if (data_end == NULL) {
ret = -EINVAL;
goto fail;
}
*data_end++ = '\0';
mish = g_base64_decode(data_begin, &out_len);
ret = dmg_read_mish_block(s, ds, mish, (uint32_t)out_len);
g_free(mish);
if (ret < 0) {
goto fail;
}
}
ret = 0;
fail:
g_free(buffer);
return ret;
}
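The plist path above relies on GLib's base64 decoder for each <data> element it finds. As a minimal, self-contained illustration of that call sequence (the XML snippet here is invented for the example; a real plist carries whole mish blocks):

#include <glib.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* toy stand-in for one <data> element located by the strstr() loop above */
    char xml[] = "<data>bWlzaA==</data>";
    char *begin = strstr(xml, "<data>") + 6;   /* skip the opening tag */
    char *end = strstr(begin, "</data>");
    gsize out_len = 0;
    guchar *blob;

    *end = '\0';                               /* terminate the base64 text */
    blob = g_base64_decode(begin, &out_len);   /* "bWlzaA==" decodes to "mish" */
    printf("decoded %u bytes: %.4s\n", (unsigned)out_len, (char *)blob);
    g_free(blob);
    return 0;
}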
static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVDMGState *s = bs->opaque;
DmgHeaderState ds;
uint64_t rsrc_fork_offset, rsrc_fork_length;
uint64_t plist_xml_offset, plist_xml_length;
int64_t offset;
int ret;
bs->read_only = true;
s->n_chunks = 0;
s->offsets = s->lengths = s->sectors = s->sectorcounts = NULL;
/* used by dmg_read_mish_block to keep track of the current I/O position */
ds.data_fork_offset = 0;
ds.max_compressed_size = 1;
ds.max_sectors_per_chunk = 1;
/* locate the UDIF trailer */
offset = dmg_find_koly_offset(bs->file, errp);
if (offset < 0) {
ret = offset;
goto fail;
}
/* offset of data fork (DataForkOffset) */
ret = read_uint64(bs, offset + 0x18, &ds.data_fork_offset);
if (ret < 0) {
goto fail;
} else if (ds.data_fork_offset > offset) {
ret = -EINVAL;
goto fail;
}
/* offset of resource fork (RsrcForkOffset) */
ret = read_uint64(bs, offset + 0x28, &rsrc_fork_offset);
if (ret < 0) {
goto fail;
}
ret = read_uint64(bs, offset + 0x30, &rsrc_fork_length);
if (ret < 0) {
goto fail;
}
if (rsrc_fork_offset >= offset ||
rsrc_fork_length > offset - rsrc_fork_offset) {
ret = -EINVAL;
goto fail;
}
/* offset of property list (XMLOffset) */
ret = read_uint64(bs, offset + 0xd8, &plist_xml_offset);
if (ret < 0) {
goto fail;
}
ret = read_uint64(bs, offset + 0xe0, &plist_xml_length);
if (ret < 0) {
goto fail;
}
if (plist_xml_offset >= offset ||
plist_xml_length > offset - plist_xml_offset) {
ret = -EINVAL;
goto fail;
}
ret = read_uint64(bs, offset + 0x1ec, (uint64_t *)&bs->total_sectors);
if (ret < 0) {
goto fail;
}
if (bs->total_sectors < 0) {
ret = -EINVAL;
goto fail;
}
if (rsrc_fork_length != 0) {
ret = dmg_read_resource_fork(bs, &ds,
rsrc_fork_offset, rsrc_fork_length);
if (ret < 0) {
goto fail;
}
} else if (plist_xml_length != 0) {
ret = dmg_read_plist_xml(bs, &ds, plist_xml_offset, plist_xml_length);
if (ret < 0) {
goto fail;
}
} else {
ret = -EINVAL;
goto fail;
}
/* initialize zlib engine */
s->compressed_chunk = qemu_try_blockalign(bs->file->bs,
ds.max_compressed_size + 1);
s->uncompressed_chunk = qemu_try_blockalign(bs->file->bs,
512 * ds.max_sectors_per_chunk);
if (s->compressed_chunk == NULL || s->uncompressed_chunk == NULL) {
ret = -ENOMEM;
goto fail;
}
s->compressed_chunk = g_malloc(max_compressed_size + 1);
s->uncompressed_chunk = g_malloc(512 * max_sectors_per_chunk);
if (inflateInit(&s->zstream) != Z_OK) {
ret = -EINVAL;
goto fail;
@@ -542,16 +302,11 @@ fail:
g_free(s->lengths);
g_free(s->sectors);
g_free(s->sectorcounts);
qemu_vfree(s->compressed_chunk);
qemu_vfree(s->uncompressed_chunk);
g_free(s->compressed_chunk);
g_free(s->uncompressed_chunk);
return ret;
}
static void dmg_refresh_limits(BlockDriverState *bs, Error **errp)
{
bs->bl.request_alignment = BDRV_SECTOR_SIZE; /* No sub-sector I/O */
}
static inline int is_sector_in_chunk(BDRVDMGState* s,
uint32_t chunk_num, uint64_t sector_num)
{
@@ -587,16 +342,13 @@ static inline int dmg_read_chunk(BlockDriverState *bs, uint64_t sector_num)
if (!is_sector_in_chunk(s, s->current_chunk, sector_num)) {
int ret;
uint32_t chunk = search_chunk(s, sector_num);
#ifdef CONFIG_BZIP2
uint64_t total_out;
#endif
if (chunk >= s->n_chunks) {
return -1;
}
s->current_chunk = s->n_chunks;
switch (s->types[chunk]) { /* block entry type */
case 0x80000005: { /* zlib compressed */
/* we need to buffer, because only the chunk as a whole can be
* inflated. */
@@ -620,34 +372,6 @@ static inline int dmg_read_chunk(BlockDriverState *bs, uint64_t sector_num)
return -1;
}
break; }
#ifdef CONFIG_BZIP2
case 0x80000006: /* bzip2 compressed */
/* we need to buffer, because only the chunk as a whole can be
* decompressed. */
ret = bdrv_pread(bs->file, s->offsets[chunk],
s->compressed_chunk, s->lengths[chunk]);
if (ret != s->lengths[chunk]) {
return -1;
}
ret = BZ2_bzDecompressInit(&s->bzstream, 0, 0);
if (ret != BZ_OK) {
return -1;
}
s->bzstream.next_in = (char *)s->compressed_chunk;
s->bzstream.avail_in = (unsigned int) s->lengths[chunk];
s->bzstream.next_out = (char *)s->uncompressed_chunk;
s->bzstream.avail_out = (unsigned int) 512 * s->sectorcounts[chunk];
ret = BZ2_bzDecompress(&s->bzstream);
total_out = ((uint64_t)s->bzstream.total_out_hi32 << 32) +
s->bzstream.total_out_lo32;
BZ2_bzDecompressEnd(&s->bzstream);
if (ret != BZ_STREAM_END ||
total_out != 512 * s->sectorcounts[chunk]) {
return -1;
}
break;
#endif /* CONFIG_BZIP2 */
case 1: /* copy */
ret = bdrv_pread(bs->file, s->offsets[chunk],
s->uncompressed_chunk, s->lengths[chunk]);
@@ -656,8 +380,7 @@ static inline int dmg_read_chunk(BlockDriverState *bs, uint64_t sector_num)
}
break;
case 2: /* zero */
/* see dmg_read, it is treated specially. No buffer needs to be
* pre-filled, the zeroes can be set directly. */
memset(s->uncompressed_chunk, 0, 512 * s->sectorcounts[chunk]);
break;
}
s->current_chunk = chunk;
@@ -665,42 +388,31 @@ static inline int dmg_read_chunk(BlockDriverState *bs, uint64_t sector_num)
return 0;
}
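The bzip2 branch above drives the streaming API by hand (BZ2_bzDecompressInit, one BZ2_bzDecompress call over the whole chunk, then BZ2_bzDecompressEnd) because a chunk can only be decompressed as a whole. For a single in-memory chunk, libbz2's one-shot helper achieves the same effect; the sketch below only illustrates that equivalence and is not code from the driver:

#include <bzlib.h>

/* Decompress one whole bzip2 chunk from src into dst; dst_len must be the
 * exact expected output size (512 * sector count in the driver's terms). */
static int decompress_chunk(char *dst, unsigned dst_len,
                            char *src, unsigned src_len)
{
    unsigned out_len = dst_len;
    int ret = BZ2_bzBuffToBuffDecompress(dst, &out_len, src, src_len, 0, 0);

    if (ret != BZ_OK || out_len != dst_len) {
        return -1;   /* failed or short decompression, as checked above */
    }
    return 0;
}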
static int coroutine_fn
dmg_co_preadv(BlockDriverState *bs, uint64_t offset, uint64_t bytes,
QEMUIOVector *qiov, int flags)
static int dmg_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
BDRVDMGState *s = bs->opaque;
uint64_t sector_num = offset >> BDRV_SECTOR_BITS;
int nb_sectors = bytes >> BDRV_SECTOR_BITS;
int ret, i;
assert((offset & (BDRV_SECTOR_SIZE - 1)) == 0);
assert((bytes & (BDRV_SECTOR_SIZE - 1)) == 0);
qemu_co_mutex_lock(&s->lock);
int i;
for (i = 0; i < nb_sectors; i++) {
uint32_t sector_offset_in_chunk;
void *data;
if (dmg_read_chunk(bs, sector_num + i) != 0) {
ret = -EIO;
goto fail;
}
/* Special case: current chunk is all zeroes. Do not perform a memcpy as
* s->uncompressed_chunk may be too small to cover the large all-zeroes
* section. dmg_read_chunk is called to find s->current_chunk */
if (s->types[s->current_chunk] == 2) { /* all zeroes block entry */
qemu_iovec_memset(qiov, i * 512, 0, 512);
continue;
return -1;
}
sector_offset_in_chunk = sector_num + i - s->sectors[s->current_chunk];
data = s->uncompressed_chunk + sector_offset_in_chunk * 512;
qemu_iovec_from_buf(qiov, i * 512, data, 512);
memcpy(buf + i * 512,
s->uncompressed_chunk + sector_offset_in_chunk * 512, 512);
}
return 0;
}
ret = 0;
fail:
static coroutine_fn int dmg_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVDMGState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = dmg_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
@@ -714,8 +426,8 @@ static void dmg_close(BlockDriverState *bs)
g_free(s->lengths);
g_free(s->sectors);
g_free(s->sectorcounts);
qemu_vfree(s->compressed_chunk);
qemu_vfree(s->uncompressed_chunk);
g_free(s->compressed_chunk);
g_free(s->uncompressed_chunk);
inflateEnd(&s->zstream);
}
@@ -725,8 +437,7 @@ static BlockDriver bdrv_dmg = {
.instance_size = sizeof(BDRVDMGState),
.bdrv_probe = dmg_probe,
.bdrv_open = dmg_open,
.bdrv_refresh_limits = dmg_refresh_limits,
.bdrv_co_preadv = dmg_co_preadv,
.bdrv_read = dmg_co_read,
.bdrv_close = dmg_close,
};

File diff suppressed because it is too large

2718  block/io.c
File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -7,14 +7,11 @@
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "block/aio.h"
#include "qemu/queue.h"
#include "block/block.h"
#include "block/raw-aio.h"
#include "qemu/event_notifier.h"
#include "qemu/coroutine.h"
#include <libaio.h>
@@ -29,42 +26,21 @@
#define MAX_EVENTS 128
struct qemu_laiocb {
BlockAIOCB common;
Coroutine *co;
LinuxAioState *ctx;
BlockDriverAIOCB common;
struct qemu_laio_state *ctx;
struct iocb iocb;
ssize_t ret;
size_t nbytes;
QEMUIOVector *qiov;
bool is_read;
QSIMPLEQ_ENTRY(qemu_laiocb) next;
QLIST_ENTRY(qemu_laiocb) node;
};
typedef struct {
int plugged;
unsigned int in_queue;
unsigned int in_flight;
bool blocked;
QSIMPLEQ_HEAD(, qemu_laiocb) pending;
} LaioQueue;
struct LinuxAioState {
AioContext *aio_context;
struct qemu_laio_state {
io_context_t ctx;
EventNotifier e;
/* io queue for submit at batch */
LaioQueue io_q;
/* I/O completion processing */
QEMUBH *completion_bh;
int event_idx;
int event_max;
};
static void ioq_submit(LinuxAioState *s);
static inline ssize_t io_event_ret(struct io_event *ev)
{
return (ssize_t)(((uint64_t)ev->res2 << 32) | ev->res);
@@ -73,7 +49,8 @@ static inline ssize_t io_event_ret(struct io_event *ev)
/*
* Completes an AIO request (calls the callback and frees the ACB).
*/
static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
static void qemu_laio_process_completion(struct qemu_laio_state *s,
struct qemu_laiocb *laiocb)
{
int ret;
@@ -87,278 +64,94 @@ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
qemu_iovec_memset(laiocb->qiov, ret, 0,
laiocb->qiov->size - ret);
} else {
ret = -ENOSPC;
ret = -EINVAL;
}
}
}
laiocb->ret = ret;
if (laiocb->co) {
/* Jump and continue completion for foreign requests; don't do
* anything for the current request, it will be completed shortly. */
if (laiocb->co != qemu_coroutine_self()) {
qemu_coroutine_enter(laiocb->co);
}
} else {
laiocb->common.cb(laiocb->common.opaque, ret);
qemu_aio_unref(laiocb);
}
}
/**
* aio_ring buffer which is shared between userspace and kernel.
*
* This copied from linux/fs/aio.c, common header does not exist
* but AIO exists for ages so we assume ABI is stable.
*/
struct aio_ring {
unsigned id; /* kernel internal index number */
unsigned nr; /* number of io_events */
unsigned head; /* Written to by userland or by kernel. */
unsigned tail;
unsigned magic;
unsigned compat_features;
unsigned incompat_features;
unsigned header_length; /* size of aio_ring */
struct io_event io_events[0];
};
/**
* io_getevents_peek:
* @ctx: AIO context
* @events: pointer on events array, output value
* Returns the number of completed events and sets a pointer
* on events array. This function does not update the internal
* ring buffer, only reads head and tail. When @events has been
* processed io_getevents_commit() must be called.
*/
static inline unsigned int io_getevents_peek(io_context_t ctx,
struct io_event **events)
{
struct aio_ring *ring = (struct aio_ring *)ctx;
unsigned int head = ring->head, tail = ring->tail;
unsigned int nr;
nr = tail >= head ? tail - head : ring->nr - head;
*events = ring->io_events + head;
/* To avoid speculative loads of s->events[i] before observing tail.
Paired with smp_wmb() inside linux/fs/aio.c: aio_complete(). */
smp_rmb();
return nr;
}
/**
* io_getevents_commit:
* @ctx: AIO context
* @nr: the number of events on which head should be advanced
*
* Advances head of a ring buffer.
*/
static inline void io_getevents_commit(io_context_t ctx, unsigned int nr)
{
struct aio_ring *ring = (struct aio_ring *)ctx;
if (nr) {
ring->head = (ring->head + nr) % ring->nr;
}
}
/**
* io_getevents_advance_and_peek:
* @ctx: AIO context
* @events: pointer on events array, output value
* @nr: the number of events on which head should be advanced
*
* Advances head of a ring buffer and returns number of elements left.
*/
static inline unsigned int
io_getevents_advance_and_peek(io_context_t ctx,
struct io_event **events,
unsigned int nr)
{
io_getevents_commit(ctx, nr);
return io_getevents_peek(ctx, events);
}
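As a worked example of the head/tail arithmetic above: with an 8-entry ring, head == 6 and tail == 2, io_getevents_peek() first reports the two events in slots 6 and 7 (ring->nr - head); after io_getevents_commit(ctx, 2) wraps head around to 0, a second peek reports the remaining two events in slots 0 and 1. That two-pass pattern is exactly what io_getevents_advance_and_peek() packages up for the completion loop below.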
/**
* qemu_laio_process_completions:
* @s: AIO state
*
* Fetches completed I/O requests and invokes their callbacks.
*
* The function is somewhat tricky because it supports nested event loops, for
* example when a request callback invokes aio_poll(). In order to do this,
* indices are kept in LinuxAioState. The function schedules BH completion so it
* can be called again in a nested event loop. When there are no events left
* to complete, the BH is cancelled.
*/
static void qemu_laio_process_completions(LinuxAioState *s)
{
struct io_event *events;
/* Reschedule so nested event loops see currently pending completions */
qemu_bh_schedule(s->completion_bh);
while ((s->event_max = io_getevents_advance_and_peek(s->ctx, &events,
s->event_idx))) {
for (s->event_idx = 0; s->event_idx < s->event_max; ) {
struct iocb *iocb = events[s->event_idx].obj;
struct qemu_laiocb *laiocb =
container_of(iocb, struct qemu_laiocb, iocb);
laiocb->ret = io_event_ret(&events[s->event_idx]);
/* Change counters one-by-one because we can be nested. */
s->io_q.in_flight--;
s->event_idx++;
qemu_laio_process_completion(laiocb);
}
}
qemu_bh_cancel(s->completion_bh);
/* If we are nested, we have to notify the level above that we are done
* by setting event_max to zero; the upper level will then jump out of its
* own `for` loop. If we are the last level, all counters have dropped to zero. */
s->event_max = 0;
s->event_idx = 0;
}
static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
{
qemu_laio_process_completions(s);
if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
ioq_submit(s);
}
}
static void qemu_laio_completion_bh(void *opaque)
{
LinuxAioState *s = opaque;
qemu_laio_process_completions_and_submit(s);
qemu_aio_release(laiocb);
}
static void qemu_laio_completion_cb(EventNotifier *e)
{
LinuxAioState *s = container_of(e, LinuxAioState, e);
struct qemu_laio_state *s = container_of(e, struct qemu_laio_state, e);
if (event_notifier_test_and_clear(&s->e)) {
qemu_laio_process_completions_and_submit(s);
while (event_notifier_test_and_clear(&s->e)) {
struct io_event events[MAX_EVENTS];
struct timespec ts = { 0 };
int nevents, i;
do {
nevents = io_getevents(s->ctx, MAX_EVENTS, MAX_EVENTS, events, &ts);
} while (nevents == -EINTR);
for (i = 0; i < nevents; i++) {
struct iocb *iocb = events[i].obj;
struct qemu_laiocb *laiocb =
container_of(iocb, struct qemu_laiocb, iocb);
laiocb->ret = io_event_ret(&events[i]);
qemu_laio_process_completion(s, laiocb);
}
}
}
static void laio_cancel(BlockAIOCB *blockacb)
static void laio_cancel(BlockDriverAIOCB *blockacb)
{
struct qemu_laiocb *laiocb = (struct qemu_laiocb *)blockacb;
struct io_event event;
int ret;
if (laiocb->ret != -EINPROGRESS) {
if (laiocb->ret != -EINPROGRESS)
return;
}
/*
* Note that as of Linux 2.6.31 neither the block device code nor any
* filesystem implements cancellation of AIO requests.
* Thus the polling loop below is the normal code path.
*/
ret = io_cancel(laiocb->ctx->ctx, &laiocb->iocb, &event);
laiocb->ret = -ECANCELED;
if (ret != 0) {
/* iocb is not cancelled, cb will be called by the event loop later */
if (ret == 0) {
laiocb->ret = -ECANCELED;
return;
}
laiocb->common.cb(laiocb->common.opaque, laiocb->ret);
/*
* We have to wait for the iocb to finish.
*
* The only way to get the iocb status update is by polling the io context.
* We might be able to do this slightly more optimally by removing the
* O_NONBLOCK flag.
*/
while (laiocb->ret == -EINPROGRESS) {
qemu_laio_completion_cb(&laiocb->ctx->e);
}
}
static const AIOCBInfo laio_aiocb_info = {
.aiocb_size = sizeof(struct qemu_laiocb),
.cancel_async = laio_cancel,
.cancel = laio_cancel,
};
static void ioq_init(LaioQueue *io_q)
BlockDriverAIOCB *laio_submit(BlockDriverState *bs, void *aio_ctx, int fd,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque, int type)
{
QSIMPLEQ_INIT(&io_q->pending);
io_q->plugged = 0;
io_q->in_queue = 0;
io_q->in_flight = 0;
io_q->blocked = false;
}
struct qemu_laio_state *s = aio_ctx;
struct qemu_laiocb *laiocb;
struct iocb *iocbs;
off_t offset = sector_num * 512;
static void ioq_submit(LinuxAioState *s)
{
int ret, len;
struct qemu_laiocb *aiocb;
struct iocb *iocbs[MAX_EVENTS];
QSIMPLEQ_HEAD(, qemu_laiocb) completed;
laiocb = qemu_aio_get(&laio_aiocb_info, bs, cb, opaque);
laiocb->nbytes = nb_sectors * 512;
laiocb->ctx = s;
laiocb->ret = -EINPROGRESS;
laiocb->is_read = (type == QEMU_AIO_READ);
laiocb->qiov = qiov;
do {
if (s->io_q.in_flight >= MAX_EVENTS) {
break;
}
len = 0;
QSIMPLEQ_FOREACH(aiocb, &s->io_q.pending, next) {
iocbs[len++] = &aiocb->iocb;
if (s->io_q.in_flight + len >= MAX_EVENTS) {
break;
}
}
ret = io_submit(s->ctx, len, iocbs);
if (ret == -EAGAIN) {
break;
}
if (ret < 0) {
/* Fail the first request, retry the rest */
aiocb = QSIMPLEQ_FIRST(&s->io_q.pending);
QSIMPLEQ_REMOVE_HEAD(&s->io_q.pending, next);
s->io_q.in_queue--;
aiocb->ret = ret;
qemu_laio_process_completion(aiocb);
continue;
}
s->io_q.in_flight += ret;
s->io_q.in_queue -= ret;
aiocb = container_of(iocbs[ret - 1], struct qemu_laiocb, iocb);
QSIMPLEQ_SPLIT_AFTER(&s->io_q.pending, aiocb, next, &completed);
} while (ret == len && !QSIMPLEQ_EMPTY(&s->io_q.pending));
s->io_q.blocked = (s->io_q.in_queue > 0);
if (s->io_q.in_flight) {
/* We can try to complete something right away if there are
* still requests in flight. */
qemu_laio_process_completions(s);
/*
* Even if we have completed everything (in_flight == 0), the queue can
* still have pending requests (in_queue > 0). We do not attempt to
* repeat submission here, to avoid an I/O hang. The reason is simple: s->e is
* still set, the completion callback will be called shortly, and all
* pending requests will be submitted from there.
*/
}
}
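ioq_submit() above is essentially a batched wrapper around the kernel AIO syscall interface. For readers who have not used libaio directly, a minimal, self-contained program looks roughly like this (the file path, buffer size and missing error handling are purely illustrative):

/* build with: gcc aio-demo.c -laio */
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    io_context_t ctx = 0;
    struct iocb cb, *cbs[1] = { &cb };
    struct io_event ev;
    char *buf = malloc(4096);
    int fd = open("/etc/hostname", O_RDONLY);       /* illustrative path */

    if (fd < 0 || io_setup(8, &ctx) < 0) {
        return 1;
    }
    io_prep_pread(&cb, fd, buf, 4096, 0);           /* one read at offset 0 */
    if (io_submit(ctx, 1, cbs) != 1) {              /* what ioq_submit() batches */
        return 1;
    }
    if (io_getevents(ctx, 1, 1, &ev, NULL) == 1) {  /* what the completion path reaps */
        printf("read %ld bytes\n", (long)ev.res);
    }
    io_destroy(ctx);
    close(fd);
    free(buf);
    return 0;
}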
void laio_io_plug(BlockDriverState *bs, LinuxAioState *s)
{
s->io_q.plugged++;
}
void laio_io_unplug(BlockDriverState *bs, LinuxAioState *s)
{
assert(s->io_q.plugged);
if (--s->io_q.plugged == 0 &&
!s->io_q.blocked && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
ioq_submit(s);
}
}
static int laio_do_submit(int fd, struct qemu_laiocb *laiocb, off_t offset,
int type)
{
LinuxAioState *s = laiocb->ctx;
struct iocb *iocbs = &laiocb->iocb;
QEMUIOVector *qiov = laiocb->qiov;
iocbs = &laiocb->iocb;
switch (type) {
case QEMU_AIO_WRITE:
@@ -371,86 +164,22 @@ static int laio_do_submit(int fd, struct qemu_laiocb *laiocb, off_t offset,
default:
fprintf(stderr, "%s: invalid AIO request type 0x%x.\n",
__func__, type);
return -EIO;
goto out_free_aiocb;
}
io_set_eventfd(&laiocb->iocb, event_notifier_get_fd(&s->e));
QSIMPLEQ_INSERT_TAIL(&s->io_q.pending, laiocb, next);
s->io_q.in_queue++;
if (!s->io_q.blocked &&
(!s->io_q.plugged ||
s->io_q.in_flight + s->io_q.in_queue >= MAX_EVENTS)) {
ioq_submit(s);
}
return 0;
}
int coroutine_fn laio_co_submit(BlockDriverState *bs, LinuxAioState *s, int fd,
uint64_t offset, QEMUIOVector *qiov, int type)
{
int ret;
struct qemu_laiocb laiocb = {
.co = qemu_coroutine_self(),
.nbytes = qiov->size,
.ctx = s,
.ret = -EINPROGRESS,
.is_read = (type == QEMU_AIO_READ),
.qiov = qiov,
};
ret = laio_do_submit(fd, &laiocb, offset, type);
if (ret < 0) {
return ret;
}
if (laiocb.ret == -EINPROGRESS) {
qemu_coroutine_yield();
}
return laiocb.ret;
}
BlockAIOCB *laio_submit(BlockDriverState *bs, LinuxAioState *s, int fd,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockCompletionFunc *cb, void *opaque, int type)
{
struct qemu_laiocb *laiocb;
off_t offset = sector_num * BDRV_SECTOR_SIZE;
int ret;
laiocb = qemu_aio_get(&laio_aiocb_info, bs, cb, opaque);
laiocb->nbytes = nb_sectors * BDRV_SECTOR_SIZE;
laiocb->ctx = s;
laiocb->ret = -EINPROGRESS;
laiocb->is_read = (type == QEMU_AIO_READ);
laiocb->qiov = qiov;
ret = laio_do_submit(fd, laiocb, offset, type);
if (ret < 0) {
qemu_aio_unref(laiocb);
return NULL;
}
if (io_submit(s->ctx, 1, &iocbs) < 0)
goto out_free_aiocb;
return &laiocb->common;
out_free_aiocb:
qemu_aio_release(laiocb);
return NULL;
}
void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context)
void *laio_init(void)
{
aio_set_event_notifier(old_context, &s->e, false, NULL);
qemu_bh_delete(s->completion_bh);
}
void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context)
{
s->aio_context = new_context;
s->completion_bh = aio_bh_new(new_context, qemu_laio_completion_bh, s);
aio_set_event_notifier(new_context, &s->e, false,
qemu_laio_completion_cb);
}
LinuxAioState *laio_init(void)
{
LinuxAioState *s;
struct qemu_laio_state *s;
s = g_malloc0(sizeof(*s));
if (event_notifier_init(&s->e, false) < 0) {
@@ -461,7 +190,7 @@ LinuxAioState *laio_init(void)
goto out_close_efd;
}
ioq_init(&s->io_q);
qemu_aio_set_event_notifier(&s->e, qemu_laio_completion_cb);
return s;
@@ -471,14 +200,3 @@ out_free_state:
g_free(s);
return NULL;
}
void laio_cleanup(LinuxAioState *s)
{
event_notifier_cleanup(&s->e);
if (io_destroy(s->ctx) != 0) {
fprintf(stderr, "%s: destroy AIO context %p failed\n",
__func__, &s->ctx);
}
g_free(s);
}

File diff suppressed because it is too large

@@ -26,8 +26,8 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "nbd-client.h"
#include "qemu/sockets.h"
#define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
#define INDEX_TO_HANDLE(bs, index) ((index) ^ ((uint64_t)(intptr_t)bs))
@@ -38,49 +38,34 @@ static void nbd_recv_coroutines_enter_all(NbdClientSession *s)
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i]);
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
}
}
}
static void nbd_teardown_connection(BlockDriverState *bs)
static void nbd_teardown_connection(NbdClientSession *client)
{
NbdClientSession *client = nbd_get_client_session(bs);
if (!client->ioc) { /* Already closed */
return;
}
/* finish any pending coroutines */
qio_channel_shutdown(client->ioc,
QIO_CHANNEL_SHUTDOWN_BOTH,
NULL);
shutdown(client->sock, 2);
nbd_recv_coroutines_enter_all(client);
nbd_client_detach_aio_context(bs);
object_unref(OBJECT(client->sioc));
client->sioc = NULL;
object_unref(OBJECT(client->ioc));
client->ioc = NULL;
qemu_aio_set_fd_handler(client->sock, NULL, NULL, NULL);
closesocket(client->sock);
client->sock = -1;
}
static void nbd_reply_ready(void *opaque)
{
BlockDriverState *bs = opaque;
NbdClientSession *s = nbd_get_client_session(bs);
NbdClientSession *s = opaque;
uint64_t i;
int ret;
if (!s->ioc) { /* Already closed */
return;
}
if (s->reply.handle == 0) {
/* No reply already in flight. Fetch a header. It is possible
* that another thread has done the same thing in parallel, so
* the socket is not readable anymore.
*/
ret = nbd_receive_reply(s->ioc, &s->reply);
ret = nbd_receive_reply(s->sock, &s->reply);
if (ret == -EAGAIN) {
return;
}
@@ -99,77 +84,57 @@ static void nbd_reply_ready(void *opaque)
}
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i]);
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
return;
}
fail:
nbd_teardown_connection(bs);
nbd_teardown_connection(s);
}
static void nbd_restart_write(void *opaque)
{
BlockDriverState *bs = opaque;
NbdClientSession *s = opaque;
qemu_coroutine_enter(nbd_get_client_session(bs)->send_coroutine);
qemu_coroutine_enter(s->send_coroutine, NULL);
}
static int nbd_co_send_request(BlockDriverState *bs,
struct nbd_request *request,
QEMUIOVector *qiov)
static int nbd_co_send_request(NbdClientSession *s,
struct nbd_request *request,
QEMUIOVector *qiov, int offset)
{
NbdClientSession *s = nbd_get_client_session(bs);
AioContext *aio_context;
int rc, ret, i;
int rc, ret;
qemu_co_mutex_lock(&s->send_mutex);
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i] == NULL) {
s->recv_coroutine[i] = qemu_coroutine_self();
break;
}
}
g_assert(qemu_in_coroutine());
assert(i < MAX_NBD_REQUESTS);
request->handle = INDEX_TO_HANDLE(s, i);
if (!s->ioc) {
qemu_co_mutex_unlock(&s->send_mutex);
return -EPIPE;
}
s->send_coroutine = qemu_coroutine_self();
aio_context = bdrv_get_aio_context(bs);
aio_set_fd_handler(aio_context, s->sioc->fd, false,
nbd_reply_ready, nbd_restart_write, bs);
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write, s);
if (qiov) {
qio_channel_set_cork(s->ioc, true);
rc = nbd_send_request(s->ioc, request);
if (!s->is_unix) {
socket_set_cork(s->sock, 1);
}
rc = nbd_send_request(s->sock, request);
if (rc >= 0) {
ret = nbd_wr_syncv(s->ioc, qiov->iov, qiov->niov, request->len,
false);
ret = qemu_co_sendv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
rc = -EIO;
}
}
qio_channel_set_cork(s->ioc, false);
if (!s->is_unix) {
socket_set_cork(s->sock, 0);
}
} else {
rc = nbd_send_request(s->ioc, request);
rc = nbd_send_request(s->sock, request);
}
aio_set_fd_handler(aio_context, s->sioc->fd, false,
nbd_reply_ready, NULL, bs);
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL, s);
s->send_coroutine = NULL;
qemu_co_mutex_unlock(&s->send_mutex);
return rc;
}
static void nbd_co_receive_reply(NbdClientSession *s,
struct nbd_request *request,
struct nbd_reply *reply,
QEMUIOVector *qiov)
struct nbd_request *request, struct nbd_reply *reply,
QEMUIOVector *qiov, int offset)
{
int ret;
@@ -177,13 +142,12 @@ static void nbd_co_receive_reply(NbdClientSession *s,
* peek at the next reply and avoid yielding if it's ours? */
qemu_coroutine_yield();
*reply = s->reply;
if (reply->handle != request->handle ||
!s->ioc) {
if (reply->handle != request->handle) {
reply->error = EIO;
} else {
if (qiov && reply->error == 0) {
ret = nbd_wr_syncv(s->ioc, qiov->iov, qiov->niov, request->len,
true);
ret = qemu_co_recvv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
reply->error = EIO;
}
@@ -197,6 +161,8 @@ static void nbd_co_receive_reply(NbdClientSession *s,
static void nbd_coroutine_start(NbdClientSession *s,
struct nbd_request *request)
{
int i;
/* Poor man's semaphore. The free_sema is locked when no other request
* can be accepted, and unlocked after receiving one reply. */
if (s->in_flight >= MAX_NBD_REQUESTS - 1) {
@@ -205,7 +171,15 @@ static void nbd_coroutine_start(NbdClientSession *s,
}
s->in_flight++;
/* s->recv_coroutine[i] is set as soon as we get the send_lock. */
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i] == NULL) {
s->recv_coroutine[i] = qemu_coroutine_self();
break;
}
}
assert(i < MAX_NBD_REQUESTS);
request->handle = INDEX_TO_HANDLE(s, i);
}
static void nbd_coroutine_end(NbdClientSession *s,
@@ -218,65 +192,98 @@ static void nbd_coroutine_end(NbdClientSession *s,
}
}
int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
uint64_t bytes, QEMUIOVector *qiov, int flags)
static int nbd_co_readv_1(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
NbdClientSession *client = nbd_get_client_session(bs);
struct nbd_request request = {
.type = NBD_CMD_READ,
.from = offset,
.len = bytes,
};
struct nbd_request request = { .type = NBD_CMD_READ };
struct nbd_reply reply;
ssize_t ret;
assert(bytes <= NBD_MAX_BUFFER_SIZE);
assert(!flags);
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(bs, &request, NULL);
ret = nbd_co_send_request(client, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, qiov);
nbd_co_receive_reply(client, &request, &reply, qiov, offset);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
int nbd_client_co_pwritev(BlockDriverState *bs, uint64_t offset,
uint64_t bytes, QEMUIOVector *qiov, int flags)
static int nbd_co_writev_1(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
NbdClientSession *client = nbd_get_client_session(bs);
struct nbd_request request = {
.type = NBD_CMD_WRITE,
.from = offset,
.len = bytes,
};
struct nbd_request request = { .type = NBD_CMD_WRITE };
struct nbd_reply reply;
ssize_t ret;
if (flags & BDRV_REQ_FUA) {
assert(client->nbdflags & NBD_FLAG_SEND_FUA);
if (!bdrv_enable_write_cache(client->bs) &&
(client->nbdflags & NBD_FLAG_SEND_FUA)) {
request.type |= NBD_CMD_FLAG_FUA;
}
assert(bytes <= NBD_MAX_BUFFER_SIZE);
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(bs, &request, qiov);
ret = nbd_co_send_request(client, &request, qiov, offset);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, NULL);
nbd_co_receive_reply(client, &request, &reply, NULL, 0);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
int nbd_client_co_flush(BlockDriverState *bs)
/* qemu-nbd has a limit of slightly less than 1M per request. Try to
* remain aligned to 4K. */
#define NBD_MAX_SECTORS 2040
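The 2040 figure works out as follows: 2040 * 512 = 1,044,480 bytes per request, which stays below the roughly 1 MiB (1,048,576 byte) server limit mentioned above while remaining a multiple of 4096 (1,044,480 / 4096 = 255), i.e. 4K-aligned.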
int nbd_client_session_co_readv(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_readv_1(client, sector_num,
NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_readv_1(client, sector_num, nb_sectors, qiov, offset);
}
int nbd_client_session_co_writev(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_writev_1(client, sector_num,
NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_writev_1(client, sector_num, nb_sectors, qiov, offset);
}
int nbd_client_session_co_flush(NbdClientSession *client)
{
NbdClientSession *client = nbd_get_client_session(bs);
struct nbd_request request = { .type = NBD_CMD_FLUSH };
struct nbd_reply reply;
ssize_t ret;
@@ -285,121 +292,96 @@ int nbd_client_co_flush(BlockDriverState *bs)
return 0;
}
if (client->nbdflags & NBD_FLAG_SEND_FUA) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = 0;
request.len = 0;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(bs, &request, NULL);
ret = nbd_co_send_request(client, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, NULL);
nbd_co_receive_reply(client, &request, &reply, NULL, 0);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
int nbd_client_co_pdiscard(BlockDriverState *bs, int64_t offset, int count)
int nbd_client_session_co_discard(NbdClientSession *client, int64_t sector_num,
int nb_sectors)
{
NbdClientSession *client = nbd_get_client_session(bs);
struct nbd_request request = {
.type = NBD_CMD_TRIM,
.from = offset,
.len = count,
};
struct nbd_request request = { .type = NBD_CMD_TRIM };
struct nbd_reply reply;
ssize_t ret;
if (!(client->nbdflags & NBD_FLAG_SEND_TRIM)) {
return 0;
}
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(bs, &request, NULL);
ret = nbd_co_send_request(client, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, NULL);
nbd_co_receive_reply(client, &request, &reply, NULL, 0);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
void nbd_client_detach_aio_context(BlockDriverState *bs)
void nbd_client_session_close(NbdClientSession *client)
{
aio_set_fd_handler(bdrv_get_aio_context(bs),
nbd_get_client_session(bs)->sioc->fd,
false, NULL, NULL, NULL);
}
void nbd_client_attach_aio_context(BlockDriverState *bs,
AioContext *new_context)
{
aio_set_fd_handler(new_context, nbd_get_client_session(bs)->sioc->fd,
false, nbd_reply_ready, NULL, bs);
}
void nbd_client_close(BlockDriverState *bs)
{
NbdClientSession *client = nbd_get_client_session(bs);
struct nbd_request request = {
.type = NBD_CMD_DISC,
.from = 0,
.len = 0
};
if (client->ioc == NULL) {
if (!client->bs) {
return;
}
if (client->sock == -1) {
return;
}
nbd_send_request(client->ioc, &request);
nbd_send_request(client->sock, &request);
nbd_teardown_connection(bs);
nbd_teardown_connection(client);
client->bs = NULL;
}
int nbd_client_init(BlockDriverState *bs,
QIOChannelSocket *sioc,
const char *export,
QCryptoTLSCreds *tlscreds,
const char *hostname,
Error **errp)
int nbd_client_session_init(NbdClientSession *client, BlockDriverState *bs,
int sock, const char *export)
{
NbdClientSession *client = nbd_get_client_session(bs);
int ret;
/* NBD handshake */
logout("session init %s\n", export);
qio_channel_set_blocking(QIO_CHANNEL(sioc), true, NULL);
ret = nbd_receive_negotiate(QIO_CHANNEL(sioc), export,
&client->nbdflags,
tlscreds, hostname,
&client->ioc,
&client->size, errp);
qemu_set_block(sock);
ret = nbd_receive_negotiate(sock, export,
&client->nbdflags, &client->size,
&client->blocksize);
if (ret < 0) {
logout("Failed to negotiate with the NBD server\n");
closesocket(sock);
return ret;
}
if (client->nbdflags & NBD_FLAG_SEND_FUA) {
bs->supported_write_flags = BDRV_REQ_FUA;
}
qemu_co_mutex_init(&client->send_mutex);
qemu_co_mutex_init(&client->free_sema);
client->sioc = sioc;
object_ref(OBJECT(client->sioc));
if (!client->ioc) {
client->ioc = QIO_CHANNEL(sioc);
object_ref(OBJECT(client->ioc));
}
client->bs = bs;
client->sock = sock;
/* Now that we're connected, set the socket to be non-blocking and
* kick the reply mechanism. */
qio_channel_set_blocking(QIO_CHANNEL(sioc), false, NULL);
nbd_client_attach_aio_context(bs, bdrv_get_aio_context(bs));
qemu_set_nonblock(sock);
qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL, client);
logout("Established connection with NBD server\n");
return 0;


@@ -4,7 +4,6 @@
#include "qemu-common.h"
#include "block/nbd.h"
#include "block/block_int.h"
#include "io/channel-socket.h"
/* #define DEBUG_NBD */
@@ -18,10 +17,10 @@
#define MAX_NBD_REQUESTS 16
typedef struct NbdClientSession {
QIOChannelSocket *sioc; /* The master data channel */
QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */
uint16_t nbdflags;
int sock;
uint32_t nbdflags;
off_t size;
size_t blocksize;
CoMutex send_mutex;
CoMutex free_sema;
@@ -32,27 +31,20 @@ typedef struct NbdClientSession {
struct nbd_reply reply;
bool is_unix;
BlockDriverState *bs;
} NbdClientSession;
NbdClientSession *nbd_get_client_session(BlockDriverState *bs);
int nbd_client_session_init(NbdClientSession *client, BlockDriverState *bs,
int sock, const char *export_name);
void nbd_client_session_close(NbdClientSession *client);
int nbd_client_init(BlockDriverState *bs,
QIOChannelSocket *sock,
const char *export_name,
QCryptoTLSCreds *tlscreds,
const char *hostname,
Error **errp);
void nbd_client_close(BlockDriverState *bs);
int nbd_client_co_pdiscard(BlockDriverState *bs, int64_t offset, int count);
int nbd_client_co_flush(BlockDriverState *bs);
int nbd_client_co_pwritev(BlockDriverState *bs, uint64_t offset,
uint64_t bytes, QEMUIOVector *qiov, int flags);
int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
uint64_t bytes, QEMUIOVector *qiov, int flags);
void nbd_client_detach_aio_context(BlockDriverState *bs);
void nbd_client_attach_aio_context(BlockDriverState *bs,
AioContext *new_context);
int nbd_client_session_co_discard(NbdClientSession *client, int64_t sector_num,
int nb_sectors);
int nbd_client_session_co_flush(NbdClientSession *client);
int nbd_client_session_co_writev(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov);
int nbd_client_session_co_readv(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov);
#endif /* NBD_CLIENT_H */


@@ -26,25 +26,22 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "block/nbd-client.h"
#include "qapi/error.h"
#include "qemu/uri.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "qapi/qmp/qdict.h"
#include "qemu/sockets.h"
#include "qapi/qmp/qjson.h"
#include "qapi/qmp/qint.h"
#include "qapi/qmp/qstring.h"
#include "qemu/cutils.h"
#include <sys/types.h>
#include <unistd.h>
#define EN_OPTSTR ":exportname="
typedef struct BDRVNBDState {
NbdClientSession client;
/* For nbd_refresh_filename() */
char *path, *host, *port, *export, *tlscredsid;
QemuOpts *socket_opts;
} BDRVNBDState;
static int nbd_parse_uri(const char *filename, QDict *options)
@@ -178,7 +175,7 @@ static void nbd_parse_filename(const char *filename, QDict *options,
InetSocketAddress *addr = NULL;
addr = inet_parse(host_spec, errp);
if (!addr) {
if (error_is_set(errp)) {
goto out;
}
@@ -191,232 +188,132 @@ out:
g_free(file);
}
static SocketAddress *nbd_config(BDRVNBDState *s, QemuOpts *opts, Error **errp)
static void nbd_config(BDRVNBDState *s, QDict *options, char **export,
Error **errp)
{
SocketAddress *saddr;
Error *local_err = NULL;
s->path = g_strdup(qemu_opt_get(opts, "path"));
s->host = g_strdup(qemu_opt_get(opts, "host"));
if (!s->path == !s->host) {
if (s->path) {
if (qdict_haskey(options, "path") == qdict_haskey(options, "host")) {
if (qdict_haskey(options, "path")) {
error_setg(errp, "path and host may not be used at the same time.");
} else {
error_setg(errp, "one of path and host must be specified.");
}
return NULL;
return;
}
saddr = g_new0(SocketAddress, 1);
s->client.is_unix = qdict_haskey(options, "path");
s->socket_opts = qemu_opts_create(&socket_optslist, NULL, 0,
&error_abort);
if (s->path) {
UnixSocketAddress *q_unix;
saddr->type = SOCKET_ADDRESS_KIND_UNIX;
q_unix = saddr->u.q_unix.data = g_new0(UnixSocketAddress, 1);
q_unix->path = g_strdup(s->path);
qemu_opts_absorb_qdict(s->socket_opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
if (!qemu_opt_get(s->socket_opts, "port")) {
qemu_opt_set_number(s->socket_opts, "port", NBD_DEFAULT_PORT);
}
*export = g_strdup(qdict_get_try_str(options, "export"));
if (*export) {
qdict_del(options, "export");
}
}
static int nbd_establish_connection(BlockDriverState *bs, Error **errp)
{
BDRVNBDState *s = bs->opaque;
int sock;
if (s->client.is_unix) {
sock = unix_connect_opts(s->socket_opts, errp, NULL, NULL);
} else {
InetSocketAddress *inet;
s->port = g_strdup(qemu_opt_get(opts, "port"));
saddr->type = SOCKET_ADDRESS_KIND_INET;
inet = saddr->u.inet.data = g_new0(InetSocketAddress, 1);
inet->host = g_strdup(s->host);
inet->port = g_strdup(s->port);
if (!inet->port) {
inet->port = g_strdup_printf("%d", NBD_DEFAULT_PORT);
sock = inet_connect_opts(s->socket_opts, errp, NULL, NULL);
if (sock >= 0) {
socket_set_nodelay(sock);
}
}
s->client.is_unix = saddr->type == SOCKET_ADDRESS_KIND_UNIX;
s->export = g_strdup(qemu_opt_get(opts, "export"));
return saddr;
}
NbdClientSession *nbd_get_client_session(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
return &s->client;
}
static QIOChannelSocket *nbd_establish_connection(SocketAddress *saddr,
Error **errp)
{
QIOChannelSocket *sioc;
Error *local_err = NULL;
sioc = qio_channel_socket_new();
qio_channel_socket_connect_sync(sioc,
saddr,
&local_err);
if (local_err) {
error_propagate(errp, local_err);
return NULL;
/* Failed to establish connection */
if (sock < 0) {
logout("Failed to establish connection to NBD server\n");
return -errno;
}
qio_channel_set_delay(QIO_CHANNEL(sioc), false);
return sioc;
return sock;
}
static QCryptoTLSCreds *nbd_get_tls_creds(const char *id, Error **errp)
{
Object *obj;
QCryptoTLSCreds *creds;
obj = object_resolve_path_component(
object_get_objects_root(), id);
if (!obj) {
error_setg(errp, "No TLS credentials with id '%s'",
id);
return NULL;
}
creds = (QCryptoTLSCreds *)
object_dynamic_cast(obj, TYPE_QCRYPTO_TLS_CREDS);
if (!creds) {
error_setg(errp, "Object with id '%s' is not TLS credentials",
id);
return NULL;
}
if (creds->endpoint != QCRYPTO_TLS_CREDS_ENDPOINT_CLIENT) {
error_setg(errp,
"Expecting TLS credentials with a client endpoint");
return NULL;
}
object_ref(obj);
return creds;
}
static QemuOptsList nbd_runtime_opts = {
.name = "nbd",
.head = QTAILQ_HEAD_INITIALIZER(nbd_runtime_opts.head),
.desc = {
{
.name = "host",
.type = QEMU_OPT_STRING,
.help = "TCP host to connect to",
},
{
.name = "port",
.type = QEMU_OPT_STRING,
.help = "TCP port to connect to",
},
{
.name = "path",
.type = QEMU_OPT_STRING,
.help = "Unix socket path to connect to",
},
{
.name = "export",
.type = QEMU_OPT_STRING,
.help = "Name of the NBD export to open",
},
{
.name = "tls-creds",
.type = QEMU_OPT_STRING,
.help = "ID of the TLS credentials to use",
},
},
};
static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVNBDState *s = bs->opaque;
QemuOpts *opts = NULL;
char *export = NULL;
int result, sock;
Error *local_err = NULL;
QIOChannelSocket *sioc = NULL;
SocketAddress *saddr = NULL;
QCryptoTLSCreds *tlscreds = NULL;
const char *hostname = NULL;
int ret = -EINVAL;
opts = qemu_opts_create(&nbd_runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
goto error;
}
/* Pop the config into our state object. Exit if invalid. */
saddr = nbd_config(s, opts, errp);
if (!saddr) {
goto error;
}
s->tlscredsid = g_strdup(qemu_opt_get(opts, "tls-creds"));
if (s->tlscredsid) {
tlscreds = nbd_get_tls_creds(s->tlscredsid, errp);
if (!tlscreds) {
goto error;
}
if (saddr->type != SOCKET_ADDRESS_KIND_INET) {
error_setg(errp, "TLS only supported over IP sockets");
goto error;
}
hostname = saddr->u.inet.data->host;
nbd_config(s, options, &export, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return -EINVAL;
}
/* establish TCP connection, return error if it fails
* TODO: Configurable retry-until-timeout behaviour.
*/
sioc = nbd_establish_connection(saddr, errp);
if (!sioc) {
ret = -ECONNREFUSED;
goto error;
sock = nbd_establish_connection(bs, errp);
if (sock < 0) {
return sock;
}
/* NBD handshake */
ret = nbd_client_init(bs, sioc, s->export,
tlscreds, hostname, errp);
error:
if (sioc) {
object_unref(OBJECT(sioc));
}
if (tlscreds) {
object_unref(OBJECT(tlscreds));
}
if (ret < 0) {
g_free(s->path);
g_free(s->host);
g_free(s->port);
g_free(s->export);
g_free(s->tlscredsid);
}
qapi_free_SocketAddress(saddr);
qemu_opts_del(opts);
return ret;
result = nbd_client_session_init(&s->client, bs, sock, export);
g_free(export);
return result;
}
static int nbd_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
BDRVNBDState *s = bs->opaque;
return nbd_client_session_co_readv(&s->client, sector_num,
nb_sectors, qiov);
}
static int nbd_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
BDRVNBDState *s = bs->opaque;
return nbd_client_session_co_writev(&s->client, sector_num,
nb_sectors, qiov);
}
static int nbd_co_flush(BlockDriverState *bs)
{
return nbd_client_co_flush(bs);
BDRVNBDState *s = bs->opaque;
return nbd_client_session_co_flush(&s->client);
}
static void nbd_refresh_limits(BlockDriverState *bs, Error **errp)
static int nbd_co_discard(BlockDriverState *bs, int64_t sector_num,
int nb_sectors)
{
bs->bl.max_pdiscard = NBD_MAX_BUFFER_SIZE;
bs->bl.max_transfer = NBD_MAX_BUFFER_SIZE;
BDRVNBDState *s = bs->opaque;
return nbd_client_session_co_discard(&s->client, sector_num,
nb_sectors);
}
static void nbd_close(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
nbd_client_close(bs);
g_free(s->path);
g_free(s->host);
g_free(s->port);
g_free(s->export);
g_free(s->tlscredsid);
qemu_opts_del(s->socket_opts);
nbd_client_session_close(&s->client);
}
static int64_t nbd_getlength(BlockDriverState *bs)
@@ -426,115 +323,46 @@ static int64_t nbd_getlength(BlockDriverState *bs)
return s->client.size;
}
static void nbd_detach_aio_context(BlockDriverState *bs)
{
nbd_client_detach_aio_context(bs);
}
static void nbd_attach_aio_context(BlockDriverState *bs,
AioContext *new_context)
{
nbd_client_attach_aio_context(bs, new_context);
}
static void nbd_refresh_filename(BlockDriverState *bs, QDict *options)
{
BDRVNBDState *s = bs->opaque;
QDict *opts = qdict_new();
qdict_put_obj(opts, "driver", QOBJECT(qstring_from_str("nbd")));
if (s->path && s->export) {
snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"nbd+unix:///%s?socket=%s", s->export, s->path);
} else if (s->path && !s->export) {
snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"nbd+unix://?socket=%s", s->path);
} else if (!s->path && s->export && s->port) {
snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"nbd://%s:%s/%s", s->host, s->port, s->export);
} else if (!s->path && s->export && !s->port) {
snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"nbd://%s/%s", s->host, s->export);
} else if (!s->path && !s->export && s->port) {
snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"nbd://%s:%s", s->host, s->port);
} else if (!s->path && !s->export && !s->port) {
snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"nbd://%s", s->host);
}
if (s->path) {
qdict_put_obj(opts, "path", QOBJECT(qstring_from_str(s->path)));
} else if (s->port) {
qdict_put_obj(opts, "host", QOBJECT(qstring_from_str(s->host)));
qdict_put_obj(opts, "port", QOBJECT(qstring_from_str(s->port)));
} else {
qdict_put_obj(opts, "host", QOBJECT(qstring_from_str(s->host)));
}
if (s->export) {
qdict_put_obj(opts, "export", QOBJECT(qstring_from_str(s->export)));
}
if (s->tlscredsid) {
qdict_put_obj(opts, "tls-creds",
QOBJECT(qstring_from_str(s->tlscredsid)));
}
bs->full_open_options = opts;
}
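To make the string-building branches above concrete: a TCP connection with host 10.0.0.1, port 10809 and export disk0 produces the exact_filename "nbd://10.0.0.1:10809/disk0", while a UNIX socket at /tmp/nbd.sock with no export produces "nbd+unix://?socket=/tmp/nbd.sock" (the host, port, socket path and export name are invented for illustration).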
static BlockDriver bdrv_nbd = {
.format_name = "nbd",
.protocol_name = "nbd",
.instance_size = sizeof(BDRVNBDState),
.bdrv_parse_filename = nbd_parse_filename,
.bdrv_file_open = nbd_open,
.bdrv_co_preadv = nbd_client_co_preadv,
.bdrv_co_pwritev = nbd_client_co_pwritev,
.bdrv_close = nbd_close,
.bdrv_co_flush_to_os = nbd_co_flush,
.bdrv_co_pdiscard = nbd_client_co_pdiscard,
.bdrv_refresh_limits = nbd_refresh_limits,
.bdrv_getlength = nbd_getlength,
.bdrv_detach_aio_context = nbd_detach_aio_context,
.bdrv_attach_aio_context = nbd_attach_aio_context,
.bdrv_refresh_filename = nbd_refresh_filename,
.format_name = "nbd",
.protocol_name = "nbd",
.instance_size = sizeof(BDRVNBDState),
.bdrv_parse_filename = nbd_parse_filename,
.bdrv_file_open = nbd_open,
.bdrv_co_readv = nbd_co_readv,
.bdrv_co_writev = nbd_co_writev,
.bdrv_close = nbd_close,
.bdrv_co_flush_to_os = nbd_co_flush,
.bdrv_co_discard = nbd_co_discard,
.bdrv_getlength = nbd_getlength,
};
static BlockDriver bdrv_nbd_tcp = {
.format_name = "nbd",
.protocol_name = "nbd+tcp",
.instance_size = sizeof(BDRVNBDState),
.bdrv_parse_filename = nbd_parse_filename,
.bdrv_file_open = nbd_open,
.bdrv_co_preadv = nbd_client_co_preadv,
.bdrv_co_pwritev = nbd_client_co_pwritev,
.bdrv_close = nbd_close,
.bdrv_co_flush_to_os = nbd_co_flush,
.bdrv_co_pdiscard = nbd_client_co_pdiscard,
.bdrv_refresh_limits = nbd_refresh_limits,
.bdrv_getlength = nbd_getlength,
.bdrv_detach_aio_context = nbd_detach_aio_context,
.bdrv_attach_aio_context = nbd_attach_aio_context,
.bdrv_refresh_filename = nbd_refresh_filename,
.format_name = "nbd",
.protocol_name = "nbd+tcp",
.instance_size = sizeof(BDRVNBDState),
.bdrv_parse_filename = nbd_parse_filename,
.bdrv_file_open = nbd_open,
.bdrv_co_readv = nbd_co_readv,
.bdrv_co_writev = nbd_co_writev,
.bdrv_close = nbd_close,
.bdrv_co_flush_to_os = nbd_co_flush,
.bdrv_co_discard = nbd_co_discard,
.bdrv_getlength = nbd_getlength,
};
static BlockDriver bdrv_nbd_unix = {
.format_name = "nbd",
.protocol_name = "nbd+unix",
.instance_size = sizeof(BDRVNBDState),
.bdrv_parse_filename = nbd_parse_filename,
.bdrv_file_open = nbd_open,
.bdrv_co_preadv = nbd_client_co_preadv,
.bdrv_co_pwritev = nbd_client_co_pwritev,
.bdrv_close = nbd_close,
.bdrv_co_flush_to_os = nbd_co_flush,
.bdrv_co_pdiscard = nbd_client_co_pdiscard,
.bdrv_refresh_limits = nbd_refresh_limits,
.bdrv_getlength = nbd_getlength,
.bdrv_detach_aio_context = nbd_detach_aio_context,
.bdrv_attach_aio_context = nbd_attach_aio_context,
.bdrv_refresh_filename = nbd_refresh_filename,
.format_name = "nbd",
.protocol_name = "nbd+unix",
.instance_size = sizeof(BDRVNBDState),
.bdrv_parse_filename = nbd_parse_filename,
.bdrv_file_open = nbd_open,
.bdrv_co_readv = nbd_co_readv,
.bdrv_co_writev = nbd_co_writev,
.bdrv_close = nbd_close,
.bdrv_co_flush_to_os = nbd_co_flush,
.bdrv_co_discard = nbd_co_discard,
.bdrv_getlength = nbd_getlength,
};
static void bdrv_nbd_init(void)


@@ -1,7 +1,7 @@
/*
* QEMU Block driver for native access to files on NFS shares
*
* Copyright (c) 2014-2016 Peter Lieven <pl@kamp.de>
* Copyright (c) 2014 Peter Lieven <pl@kamp.de>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
@@ -22,33 +22,24 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "config-host.h"
#include <poll.h>
#include "qemu-common.h"
#include "qemu/config-file.h"
#include "qemu/error-report.h"
#include "qapi/error.h"
#include "block/block_int.h"
#include "trace.h"
#include "qemu/iov.h"
#include "qemu/uri.h"
#include "qemu/cutils.h"
#include "sysemu/sysemu.h"
#include <nfsc/libnfs.h>
#define QEMU_NFS_MAX_READAHEAD_SIZE 1048576
#define QEMU_NFS_MAX_PAGECACHE_SIZE (8388608 / NFS_BLKSIZE)
#define QEMU_NFS_MAX_DEBUG_LEVEL 2
typedef struct NFSClient {
struct nfs_context *context;
struct nfsfh *fh;
int events;
bool has_zero_init;
AioContext *aio_context;
blkcnt_t st_blocks;
bool cache_used;
} NFSClient;
typedef struct NFSRPC {
@@ -58,7 +49,6 @@ typedef struct NFSRPC {
struct stat *st;
Coroutine *co;
QEMUBH *bh;
NFSClient *client;
} NFSRPC;
static void nfs_process_read(void *arg);
@@ -68,10 +58,10 @@ static void nfs_set_events(NFSClient *client)
{
int ev = nfs_which_events(client->context);
if (ev != client->events) {
aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
false,
(ev & POLLIN) ? nfs_process_read : NULL,
(ev & POLLOUT) ? nfs_process_write : NULL, client);
qemu_aio_set_fd_handler(nfs_get_fd(client->context),
(ev & POLLIN) ? nfs_process_read : NULL,
(ev & POLLOUT) ? nfs_process_write : NULL,
client);
}
client->events = ev;
@@ -94,17 +84,15 @@ static void nfs_process_write(void *arg)
static void nfs_co_init_task(NFSClient *client, NFSRPC *task)
{
*task = (NFSRPC) {
.co = qemu_coroutine_self(),
.client = client,
.co = qemu_coroutine_self(),
};
}
static void nfs_co_generic_bh_cb(void *opaque)
{
NFSRPC *task = opaque;
task->complete = 1;
qemu_bh_delete(task->bh);
qemu_coroutine_enter(task->co);
qemu_coroutine_enter(task->co, NULL);
}
static void
@@ -112,6 +100,7 @@ nfs_co_generic_cb(int ret, struct nfs_context *nfs, void *data,
void *private_data)
{
NFSRPC *task = private_data;
task->complete = 1;
task->ret = ret;
if (task->ret > 0 && task->iov) {
if (task->ret <= task->iov->size) {
@@ -127,11 +116,8 @@ nfs_co_generic_cb(int ret, struct nfs_context *nfs, void *data,
error_report("NFS Error: %s", nfs_get_error(nfs));
}
if (task->co) {
task->bh = aio_bh_new(task->client->aio_context,
nfs_co_generic_bh_cb, task);
task->bh = qemu_bh_new(nfs_co_generic_bh_cb, task);
qemu_bh_schedule(task->bh);
} else {
task->complete = 1;
}
}
@@ -179,11 +165,7 @@ static int coroutine_fn nfs_co_writev(BlockDriverState *bs,
nfs_co_init_task(client, &task);
buf = g_try_malloc(nb_sectors * BDRV_SECTOR_SIZE);
if (nb_sectors && buf == NULL) {
return -ENOMEM;
}
buf = g_malloc(nb_sectors * BDRV_SECTOR_SIZE);
qemu_iovec_to_buf(iov, 0, buf, nb_sectors * BDRV_SECTOR_SIZE);
if (nfs_pwrite_async(client->context, client->fh,
@@ -242,32 +224,13 @@ static QemuOptsList runtime_opts = {
},
};
static void nfs_detach_aio_context(BlockDriverState *bs)
{
NFSClient *client = bs->opaque;
aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
false, NULL, NULL, NULL);
client->events = 0;
}
static void nfs_attach_aio_context(BlockDriverState *bs,
AioContext *new_context)
{
NFSClient *client = bs->opaque;
client->aio_context = new_context;
nfs_set_events(client);
}
static void nfs_client_close(NFSClient *client)
{
if (client->context) {
if (client->fh) {
nfs_close(client->context, client->fh);
}
aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
false, NULL, NULL, NULL);
qemu_aio_set_fd_handler(nfs_get_fd(client->context), NULL, NULL, NULL);
nfs_destroy_context(client->context);
}
memset(client, 0, sizeof(NFSClient));
@@ -280,7 +243,7 @@ static void nfs_file_close(BlockDriverState *bs)
}
static int64_t nfs_client_open(NFSClient *client, const char *filename,
int flags, Error **errp, int open_flags)
int flags, Error **errp)
{
int ret = -EINVAL, i;
struct stat st;
@@ -293,10 +256,6 @@ static int64_t nfs_client_open(NFSClient *client, const char *filename,
error_setg(errp, "Invalid URL specified");
goto fail;
}
if (!uri->server) {
error_setg(errp, "Invalid URL specified");
goto fail;
}
strp = strrchr(uri->path, '/');
if (strp == NULL) {
error_setg(errp, "Invalid URL specified");
@@ -313,69 +272,17 @@ static int64_t nfs_client_open(NFSClient *client, const char *filename,
qp = query_params_parse(uri->query);
for (i = 0; i < qp->n; i++) {
unsigned long long val;
if (!qp->p[i].value) {
error_setg(errp, "Value for NFS parameter expected: %s",
qp->p[i].name);
goto fail;
}
if (parse_uint_full(qp->p[i].value, &val, 0)) {
error_setg(errp, "Illegal value for NFS parameter: %s",
qp->p[i].name);
goto fail;
}
if (!strcmp(qp->p[i].name, "uid")) {
nfs_set_uid(client->context, val);
} else if (!strcmp(qp->p[i].name, "gid")) {
nfs_set_gid(client->context, val);
} else if (!strcmp(qp->p[i].name, "tcp-syncnt")) {
nfs_set_tcp_syncnt(client->context, val);
#ifdef LIBNFS_FEATURE_READAHEAD
} else if (!strcmp(qp->p[i].name, "readahead")) {
if (open_flags & BDRV_O_NOCACHE) {
error_setg(errp, "Cannot enable NFS readahead "
"if cache.direct = on");
goto fail;
}
if (val > QEMU_NFS_MAX_READAHEAD_SIZE) {
error_report("NFS Warning: Truncating NFS readahead"
" size to %d", QEMU_NFS_MAX_READAHEAD_SIZE);
val = QEMU_NFS_MAX_READAHEAD_SIZE;
}
nfs_set_readahead(client->context, val);
#ifdef LIBNFS_FEATURE_PAGECACHE
nfs_set_pagecache_ttl(client->context, 0);
#endif
client->cache_used = true;
#endif
#ifdef LIBNFS_FEATURE_PAGECACHE
nfs_set_pagecache_ttl(client->context, 0);
} else if (!strcmp(qp->p[i].name, "pagecache")) {
if (open_flags & BDRV_O_NOCACHE) {
error_setg(errp, "Cannot enable NFS pagecache "
"if cache.direct = on");
goto fail;
}
if (val > QEMU_NFS_MAX_PAGECACHE_SIZE) {
error_report("NFS Warning: Truncating NFS pagecache"
" size to %d pages", QEMU_NFS_MAX_PAGECACHE_SIZE);
val = QEMU_NFS_MAX_PAGECACHE_SIZE;
}
nfs_set_pagecache(client->context, val);
nfs_set_pagecache_ttl(client->context, 0);
client->cache_used = true;
#endif
#ifdef LIBNFS_FEATURE_DEBUG
} else if (!strcmp(qp->p[i].name, "debug")) {
/* limit the maximum debug level to avoid potential flooding
* of our log files. */
if (val > QEMU_NFS_MAX_DEBUG_LEVEL) {
error_report("NFS Warning: Limiting NFS debug level"
" to %d", QEMU_NFS_MAX_DEBUG_LEVEL);
val = QEMU_NFS_MAX_DEBUG_LEVEL;
}
nfs_set_debug(client->context, val);
#endif
if (!strncmp(qp->p[i].name, "uid", 3)) {
nfs_set_uid(client->context, atoi(qp->p[i].value));
} else if (!strncmp(qp->p[i].name, "gid", 3)) {
nfs_set_gid(client->context, atoi(qp->p[i].value));
} else if (!strncmp(qp->p[i].name, "tcp-syncnt", 10)) {
nfs_set_tcp_syncnt(client->context, atoi(qp->p[i].value));
} else {
error_setg(errp, "Unknown NFS parameter name: %s",
qp->p[i].name);
@@ -414,7 +321,6 @@ static int64_t nfs_client_open(NFSClient *client, const char *filename,
}
ret = DIV_ROUND_UP(st.st_size, BDRV_SECTOR_SIZE);
client->st_blocks = st.st_blocks;
client->has_zero_init = S_ISREG(st.st_mode);
goto out;
fail:
@@ -435,54 +341,38 @@ static int nfs_file_open(BlockDriverState *bs, QDict *options, int flags,
QemuOpts *opts;
Error *local_err = NULL;
client->aio_context = bdrv_get_aio_context(bs);
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto out;
return -EINVAL;
}
ret = nfs_client_open(client, qemu_opt_get(opts, "filename"),
(flags & BDRV_O_RDWR) ? O_RDWR : O_RDONLY,
errp, bs->open_flags);
errp);
if (ret < 0) {
goto out;
return ret;
}
bs->total_sectors = ret;
ret = 0;
out:
qemu_opts_del(opts);
return ret;
return 0;
}
static QemuOptsList nfs_create_opts = {
.name = "nfs-create-opts",
.head = QTAILQ_HEAD_INITIALIZER(nfs_create_opts.head),
.desc = {
{
.name = BLOCK_OPT_SIZE,
.type = QEMU_OPT_SIZE,
.help = "Virtual disk size"
},
{ /* end of list */ }
}
};
static int nfs_file_create(const char *url, QemuOpts *opts, Error **errp)
static int nfs_file_create(const char *url, QEMUOptionParameter *options,
Error **errp)
{
int ret = 0;
int64_t total_size = 0;
NFSClient *client = g_new0(NFSClient, 1);
client->aio_context = qemu_get_aio_context();
NFSClient *client = g_malloc0(sizeof(NFSClient));
/* Read out options */
total_size = ROUND_UP(qemu_opt_get_size_del(opts, BLOCK_OPT_SIZE, 0),
BDRV_SECTOR_SIZE);
while (options && options->name) {
if (!strcmp(options->name, "size")) {
total_size = options->value.n;
}
options++;
}
ret = nfs_client_open(client, url, O_CREAT, errp, 0);
ret = nfs_client_open(client, url, O_CREAT, errp);
if (ret < 0) {
goto out;
}
@@ -505,11 +395,6 @@ static int64_t nfs_get_allocated_file_size(BlockDriverState *bs)
NFSRPC task = {0};
struct stat st;
if (bdrv_is_read_only(bs) &&
!(bs->open_flags & BDRV_O_NOCACHE)) {
return client->st_blocks * 512;
}
task.st = &st;
if (nfs_fstat_async(client->context, client->fh, nfs_co_generic_cb,
&task) != 0) {
@@ -518,10 +403,10 @@ static int64_t nfs_get_allocated_file_size(BlockDriverState *bs)
while (!task.complete) {
nfs_set_events(client);
aio_poll(client->aio_context, true);
qemu_aio_wait();
}
return (task.ret < 0 ? task.ret : st.st_blocks * 512);
return (task.ret < 0 ? task.ret : st.st_blocks * st.st_blksize);
}
static int nfs_file_truncate(BlockDriverState *bs, int64_t offset)
@@ -530,76 +415,23 @@ static int nfs_file_truncate(BlockDriverState *bs, int64_t offset)
return nfs_ftruncate(client->context, client->fh, offset);
}
/* Note that this will not re-establish a connection with the NFS server
* - it is effectively a NOP. */
static int nfs_reopen_prepare(BDRVReopenState *state,
BlockReopenQueue *queue, Error **errp)
{
NFSClient *client = state->bs->opaque;
struct stat st;
int ret = 0;
if (state->flags & BDRV_O_RDWR && bdrv_is_read_only(state->bs)) {
error_setg(errp, "Cannot open a read-only mount as read-write");
return -EACCES;
}
if ((state->flags & BDRV_O_NOCACHE) && client->cache_used) {
error_setg(errp, "Cannot disable cache if libnfs readahead or"
" pagecache is enabled");
return -EINVAL;
}
/* Update cache for read-only reopens */
if (!(state->flags & BDRV_O_RDWR)) {
ret = nfs_fstat(client->context, client->fh, &st);
if (ret < 0) {
error_setg(errp, "Failed to fstat file: %s",
nfs_get_error(client->context));
return ret;
}
client->st_blocks = st.st_blocks;
}
return 0;
}
#ifdef LIBNFS_FEATURE_PAGECACHE
static void nfs_invalidate_cache(BlockDriverState *bs,
Error **errp)
{
NFSClient *client = bs->opaque;
nfs_pagecache_invalidate(client->context, client->fh);
}
#endif
static BlockDriver bdrv_nfs = {
.format_name = "nfs",
.protocol_name = "nfs",
.format_name = "nfs",
.protocol_name = "nfs",
.instance_size = sizeof(NFSClient),
.bdrv_needs_filename = true,
.create_opts = &nfs_create_opts,
.instance_size = sizeof(NFSClient),
.bdrv_needs_filename = true,
.bdrv_has_zero_init = nfs_has_zero_init,
.bdrv_get_allocated_file_size = nfs_get_allocated_file_size,
.bdrv_truncate = nfs_file_truncate,
.bdrv_has_zero_init = nfs_has_zero_init,
.bdrv_get_allocated_file_size = nfs_get_allocated_file_size,
.bdrv_truncate = nfs_file_truncate,
.bdrv_file_open = nfs_file_open,
.bdrv_close = nfs_file_close,
.bdrv_create = nfs_file_create,
.bdrv_file_open = nfs_file_open,
.bdrv_close = nfs_file_close,
.bdrv_create = nfs_file_create,
.bdrv_reopen_prepare = nfs_reopen_prepare,
.bdrv_co_readv = nfs_co_readv,
.bdrv_co_writev = nfs_co_writev,
.bdrv_co_flush_to_disk = nfs_co_flush,
.bdrv_detach_aio_context = nfs_detach_aio_context,
.bdrv_attach_aio_context = nfs_attach_aio_context,
#ifdef LIBNFS_FEATURE_PAGECACHE
.bdrv_invalidate_cache = nfs_invalidate_cache,
#endif
.bdrv_co_readv = nfs_co_readv,
.bdrv_co_writev = nfs_co_writev,
.bdrv_co_flush_to_disk = nfs_co_flush,
};
static void nfs_block_init(void)


@@ -1,286 +0,0 @@
/*
* Null block driver
*
* Authors:
* Fam Zheng <famz@redhat.com>
*
* Copyright (C) 2014 Red Hat, Inc.
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qapi/qmp/qdict.h"
#include "qapi/qmp/qstring.h"
#include "block/block_int.h"
#define NULL_OPT_LATENCY "latency-ns"
#define NULL_OPT_ZEROES "read-zeroes"
typedef struct {
int64_t length;
int64_t latency_ns;
bool read_zeroes;
} BDRVNullState;
static QemuOptsList runtime_opts = {
.name = "null",
.head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
.desc = {
{
.name = "filename",
.type = QEMU_OPT_STRING,
.help = "",
},
{
.name = BLOCK_OPT_SIZE,
.type = QEMU_OPT_SIZE,
.help = "size of the null block",
},
{
.name = NULL_OPT_LATENCY,
.type = QEMU_OPT_NUMBER,
.help = "nanoseconds (approximated) to wait "
"before completing request",
},
{
.name = NULL_OPT_ZEROES,
.type = QEMU_OPT_BOOL,
.help = "return zeroes when read",
},
{ /* end of list */ }
},
};
static int null_file_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
QemuOpts *opts;
BDRVNullState *s = bs->opaque;
int ret = 0;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &error_abort);
s->length =
qemu_opt_get_size(opts, BLOCK_OPT_SIZE, 1 << 30);
s->latency_ns =
qemu_opt_get_number(opts, NULL_OPT_LATENCY, 0);
if (s->latency_ns < 0) {
error_setg(errp, "latency-ns is invalid");
ret = -EINVAL;
}
s->read_zeroes = qemu_opt_get_bool(opts, NULL_OPT_ZEROES, false);
qemu_opts_del(opts);
return ret;
}
static void null_close(BlockDriverState *bs)
{
}
static int64_t null_getlength(BlockDriverState *bs)
{
BDRVNullState *s = bs->opaque;
return s->length;
}
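/* Common completion path for the coroutine ("null-co") flavour: optionally
* sleep for the configured latency-ns, then succeed without touching data. */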
static coroutine_fn int null_co_common(BlockDriverState *bs)
{
BDRVNullState *s = bs->opaque;
if (s->latency_ns) {
co_aio_sleep_ns(bdrv_get_aio_context(bs), QEMU_CLOCK_REALTIME,
s->latency_ns);
}
return 0;
}
static coroutine_fn int null_co_readv(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
QEMUIOVector *qiov)
{
BDRVNullState *s = bs->opaque;
if (s->read_zeroes) {
qemu_iovec_memset(qiov, 0, 0, nb_sectors * BDRV_SECTOR_SIZE);
}
return null_co_common(bs);
}
static coroutine_fn int null_co_writev(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
QEMUIOVector *qiov)
{
return null_co_common(bs);
}
static coroutine_fn int null_co_flush(BlockDriverState *bs)
{
return null_co_common(bs);
}
typedef struct {
BlockAIOCB common;
QEMUBH *bh;
QEMUTimer timer;
} NullAIOCB;
static const AIOCBInfo null_aiocb_info = {
.aiocb_size = sizeof(NullAIOCB),
};
static void null_bh_cb(void *opaque)
{
NullAIOCB *acb = opaque;
acb->common.cb(acb->common.opaque, 0);
qemu_bh_delete(acb->bh);
qemu_aio_unref(acb);
}
static void null_timer_cb(void *opaque)
{
NullAIOCB *acb = opaque;
acb->common.cb(acb->common.opaque, 0);
timer_deinit(&acb->timer);
qemu_aio_unref(acb);
}
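/* Common completion path for the AIO ("null-aio") flavour: the request is
* completed either from a one-shot timer after latency-ns, or from a bottom
* half scheduled immediately when no latency is configured. */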
static inline BlockAIOCB *null_aio_common(BlockDriverState *bs,
BlockCompletionFunc *cb,
void *opaque)
{
NullAIOCB *acb;
BDRVNullState *s = bs->opaque;
acb = qemu_aio_get(&null_aiocb_info, bs, cb, opaque);
/* Only emulate latency after vcpu is running. */
if (s->latency_ns) {
aio_timer_init(bdrv_get_aio_context(bs), &acb->timer,
QEMU_CLOCK_REALTIME, SCALE_NS,
null_timer_cb, acb);
timer_mod_ns(&acb->timer,
qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + s->latency_ns);
} else {
acb->bh = aio_bh_new(bdrv_get_aio_context(bs), null_bh_cb, acb);
qemu_bh_schedule(acb->bh);
}
return &acb->common;
}
static BlockAIOCB *null_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov,
int nb_sectors,
BlockCompletionFunc *cb,
void *opaque)
{
BDRVNullState *s = bs->opaque;
if (s->read_zeroes) {
qemu_iovec_memset(qiov, 0, 0, nb_sectors * BDRV_SECTOR_SIZE);
}
return null_aio_common(bs, cb, opaque);
}
static BlockAIOCB *null_aio_writev(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov,
int nb_sectors,
BlockCompletionFunc *cb,
void *opaque)
{
return null_aio_common(bs, cb, opaque);
}
static BlockAIOCB *null_aio_flush(BlockDriverState *bs,
BlockCompletionFunc *cb,
void *opaque)
{
return null_aio_common(bs, cb, opaque);
}
static int null_reopen_prepare(BDRVReopenState *reopen_state,
BlockReopenQueue *queue, Error **errp)
{
return 0;
}
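/* Every sector is reported as valid at its own offset; with read-zeroes=on
* the range is additionally known to read back as zeroes. */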
static int64_t coroutine_fn null_co_get_block_status(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors, int *pnum,
BlockDriverState **file)
{
BDRVNullState *s = bs->opaque;
off_t start = sector_num * BDRV_SECTOR_SIZE;
*pnum = nb_sectors;
*file = bs;
if (s->read_zeroes) {
return BDRV_BLOCK_OFFSET_VALID | start | BDRV_BLOCK_ZERO;
} else {
return BDRV_BLOCK_OFFSET_VALID | start;
}
}
static void null_refresh_filename(BlockDriverState *bs, QDict *opts)
{
QINCREF(opts);
qdict_del(opts, "filename");
if (!qdict_size(opts)) {
snprintf(bs->exact_filename, sizeof(bs->exact_filename), "%s://",
bs->drv->format_name);
}
qdict_put(opts, "driver", qstring_from_str(bs->drv->format_name));
bs->full_open_options = opts;
}
static BlockDriver bdrv_null_co = {
.format_name = "null-co",
.protocol_name = "null-co",
.instance_size = sizeof(BDRVNullState),
.bdrv_file_open = null_file_open,
.bdrv_close = null_close,
.bdrv_getlength = null_getlength,
.bdrv_co_readv = null_co_readv,
.bdrv_co_writev = null_co_writev,
.bdrv_co_flush_to_disk = null_co_flush,
.bdrv_reopen_prepare = null_reopen_prepare,
.bdrv_co_get_block_status = null_co_get_block_status,
.bdrv_refresh_filename = null_refresh_filename,
};
static BlockDriver bdrv_null_aio = {
.format_name = "null-aio",
.protocol_name = "null-aio",
.instance_size = sizeof(BDRVNullState),
.bdrv_file_open = null_file_open,
.bdrv_close = null_close,
.bdrv_getlength = null_getlength,
.bdrv_aio_readv = null_aio_readv,
.bdrv_aio_writev = null_aio_writev,
.bdrv_aio_flush = null_aio_flush,
.bdrv_reopen_prepare = null_reopen_prepare,
.bdrv_co_get_block_status = null_co_get_block_status,
.bdrv_refresh_filename = null_refresh_filename,
};
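/* The two null drivers differ only in how completion is signalled: "null-co"
* completes requests from coroutines, "null-aio" through the AIO callback
* interface. They are mainly useful for benchmarking the block layer without
* real I/O; as a rough sketch (exact option spelling depends on the QEMU
* version), a guest disk backed by the null driver can be configured with
* something like:
*   -drive driver=null-co,size=100G,latency-ns=100000,read-zeroes=on,if=none,id=null0
*/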
static void bdrv_null_init(void)
{
bdrv_register(&bdrv_null_co);
bdrv_register(&bdrv_null_aio);
}
block_init(bdrv_null_init);


@@ -2,12 +2,8 @@
* Block driver for Parallels disk image format
*
* Copyright (c) 2007 Alex Beregszaszi
* Copyright (c) 2015 Denis V. Lunev <den@openvz.org>
*
* This code was originally based on comparing different disk images created
* by Parallels. Currently it is based on opened OpenVZ sources
* available at
* http://git.openvz.org/?p=ploop;a=summary
* This code is based on comparing different disk images created by Parallels.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
@@ -27,578 +23,74 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "block/block_int.h"
#include "sysemu/block-backend.h"
#include "qemu/module.h"
#include "qemu/bswap.h"
#include "qemu/bitmap.h"
#include "qapi/util.h"
/**************************************************************/
#define HEADER_MAGIC "WithoutFreeSpace"
#define HEADER_MAGIC2 "WithouFreSpacExt"
#define HEADER_VERSION 2
#define HEADER_INUSE_MAGIC (0x746F6E59)
#define MAX_PARALLELS_IMAGE_FACTOR (1ull << 32)
#define DEFAULT_CLUSTER_SIZE 1048576 /* 1 MiB */
#define HEADER_SIZE 64
// always little-endian
typedef struct ParallelsHeader {
struct parallels_header {
char magic[16]; // "WithoutFreeSpace"
uint32_t version;
uint32_t heads;
uint32_t cylinders;
uint32_t tracks;
uint32_t bat_entries;
uint64_t nb_sectors;
uint32_t inuse;
uint32_t data_off;
char padding[12];
} QEMU_PACKED ParallelsHeader;
typedef enum ParallelsPreallocMode {
PRL_PREALLOC_MODE_FALLOCATE = 0,
PRL_PREALLOC_MODE_TRUNCATE = 1,
PRL_PREALLOC_MODE__MAX = 2,
} ParallelsPreallocMode;
static const char *prealloc_mode_lookup[] = {
"falloc",
"truncate",
NULL,
};
uint32_t catalog_entries;
uint32_t nb_sectors;
char padding[24];
} QEMU_PACKED;
typedef struct BDRVParallelsState {
/** Locking is conservative, the lock protects
* - image file extending (truncate, fallocate)
* - any access to block allocation table
*/
CoMutex lock;
ParallelsHeader *header;
uint32_t header_size;
bool header_unclean;
unsigned long *bat_dirty_bmap;
unsigned int bat_dirty_block;
uint32_t *bat_bitmap;
unsigned int bat_size;
int64_t data_end;
uint64_t prealloc_size;
ParallelsPreallocMode prealloc_mode;
uint32_t *catalog_bitmap;
unsigned int catalog_size;
unsigned int tracks;
unsigned int off_multiplier;
} BDRVParallelsState;
#define PARALLELS_OPT_PREALLOC_MODE "prealloc-mode"
#define PARALLELS_OPT_PREALLOC_SIZE "prealloc-size"
static QemuOptsList parallels_runtime_opts = {
.name = "parallels",
.head = QTAILQ_HEAD_INITIALIZER(parallels_runtime_opts.head),
.desc = {
{
.name = PARALLELS_OPT_PREALLOC_SIZE,
.type = QEMU_OPT_SIZE,
.help = "Preallocation size on image expansion",
.def_value_str = "128MiB",
},
{
.name = PARALLELS_OPT_PREALLOC_MODE,
.type = QEMU_OPT_STRING,
.help = "Preallocation mode on image expansion "
"(allowed values: falloc, truncate)",
.def_value_str = "falloc",
},
{ /* end of list */ },
},
};
static int64_t bat2sect(BDRVParallelsState *s, uint32_t idx)
static int parallels_probe(const uint8_t *buf, int buf_size, const char *filename)
{
return (uint64_t)le32_to_cpu(s->bat_bitmap[idx]) * s->off_multiplier;
}
const struct parallels_header *ph = (const void *)buf;
static uint32_t bat_entry_off(uint32_t idx)
{
return sizeof(ParallelsHeader) + sizeof(uint32_t) * idx;
}
if (buf_size < HEADER_SIZE)
return 0;
static int64_t seek_to_sector(BDRVParallelsState *s, int64_t sector_num)
{
uint32_t index, offset;
index = sector_num / s->tracks;
offset = sector_num % s->tracks;
/* not allocated */
if ((index >= s->bat_size) || (s->bat_bitmap[index] == 0)) {
return -1;
}
return bat2sect(s, index) + offset;
}
static int cluster_remainder(BDRVParallelsState *s, int64_t sector_num,
int nb_sectors)
{
int ret = s->tracks - sector_num % s->tracks;
return MIN(nb_sectors, ret);
}
static int64_t block_status(BDRVParallelsState *s, int64_t sector_num,
int nb_sectors, int *pnum)
{
int64_t start_off = -2, prev_end_off = -2;
*pnum = 0;
while (nb_sectors > 0 || start_off == -2) {
int64_t offset = seek_to_sector(s, sector_num);
int to_end;
if (start_off == -2) {
start_off = offset;
prev_end_off = offset;
} else if (offset != prev_end_off) {
break;
}
to_end = cluster_remainder(s, sector_num, nb_sectors);
nb_sectors -= to_end;
sector_num += to_end;
*pnum += to_end;
if (offset > 0) {
prev_end_off += to_end;
}
}
return start_off;
}
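/* Write-path allocation: reuse an existing mapping if block_status finds one;
* otherwise grow the image (zero-writing or truncating according to
* prealloc-mode), point the new BAT entries at the current end of data and
* mark them dirty so the next flush writes the BAT back out. */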
static int64_t allocate_clusters(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum)
{
BDRVParallelsState *s = bs->opaque;
uint32_t idx, to_allocate, i;
int64_t pos, space;
pos = block_status(s, sector_num, nb_sectors, pnum);
if (pos > 0) {
return pos;
}
idx = sector_num / s->tracks;
if (idx >= s->bat_size) {
return -EINVAL;
}
to_allocate = DIV_ROUND_UP(sector_num + *pnum, s->tracks) - idx;
space = to_allocate * s->tracks;
if (s->data_end + space > bdrv_getlength(bs->file->bs) >> BDRV_SECTOR_BITS) {
int ret;
space += s->prealloc_size;
if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE) {
ret = bdrv_pwrite_zeroes(bs->file,
s->data_end << BDRV_SECTOR_BITS,
space << BDRV_SECTOR_BITS, 0);
} else {
ret = bdrv_truncate(bs->file->bs,
(s->data_end + space) << BDRV_SECTOR_BITS);
}
if (ret < 0) {
return ret;
}
}
for (i = 0; i < to_allocate; i++) {
s->bat_bitmap[idx + i] = cpu_to_le32(s->data_end / s->off_multiplier);
s->data_end += s->tracks;
bitmap_set(s->bat_dirty_bmap,
bat_entry_off(idx + i) / s->bat_dirty_block, 1);
}
return bat2sect(s, idx) + sector_num % s->tracks;
}
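/* Flush hook: write every dirty block of the in-memory header/BAT (tracked in
* bat_dirty_bmap) back to the image file, then clear the dirty bitmap. */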
static coroutine_fn int parallels_co_flush_to_os(BlockDriverState *bs)
{
BDRVParallelsState *s = bs->opaque;
unsigned long size = DIV_ROUND_UP(s->header_size, s->bat_dirty_block);
unsigned long bit;
qemu_co_mutex_lock(&s->lock);
bit = find_first_bit(s->bat_dirty_bmap, size);
while (bit < size) {
uint32_t off = bit * s->bat_dirty_block;
uint32_t to_write = s->bat_dirty_block;
int ret;
if (off + to_write > s->header_size) {
to_write = s->header_size - off;
}
ret = bdrv_pwrite(bs->file, off, (uint8_t *)s->header + off,
to_write);
if (ret < 0) {
qemu_co_mutex_unlock(&s->lock);
return ret;
}
bit = find_next_bit(s->bat_dirty_bmap, size, bit + 1);
}
bitmap_zero(s->bat_dirty_bmap, size);
qemu_co_mutex_unlock(&s->lock);
return 0;
}
static int64_t coroutine_fn parallels_co_get_block_status(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *pnum, BlockDriverState **file)
{
BDRVParallelsState *s = bs->opaque;
int64_t offset;
qemu_co_mutex_lock(&s->lock);
offset = block_status(s, sector_num, nb_sectors, pnum);
qemu_co_mutex_unlock(&s->lock);
if (offset < 0) {
return 0;
}
*file = bs->file->bs;
return (offset << BDRV_SECTOR_BITS) |
BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID;
}
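/* Write path: allocate clusters under the lock, then write each contiguous
* run into bs->file through a temporary sub-qiov. */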
static coroutine_fn int parallels_co_writev(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
{
BDRVParallelsState *s = bs->opaque;
uint64_t bytes_done = 0;
QEMUIOVector hd_qiov;
int ret = 0;
qemu_iovec_init(&hd_qiov, qiov->niov);
while (nb_sectors > 0) {
int64_t position;
int n, nbytes;
qemu_co_mutex_lock(&s->lock);
position = allocate_clusters(bs, sector_num, nb_sectors, &n);
qemu_co_mutex_unlock(&s->lock);
if (position < 0) {
ret = (int)position;
break;
}
nbytes = n << BDRV_SECTOR_BITS;
qemu_iovec_reset(&hd_qiov);
qemu_iovec_concat(&hd_qiov, qiov, bytes_done, nbytes);
ret = bdrv_co_writev(bs->file, position, n, &hd_qiov);
if (ret < 0) {
break;
}
nb_sectors -= n;
sector_num += n;
bytes_done += nbytes;
}
qemu_iovec_destroy(&hd_qiov);
return ret;
}
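/* Read path: look up each run under the lock; unallocated ranges read back as
* zeroes, allocated ones are read straight from bs->file. */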
static coroutine_fn int parallels_co_readv(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
{
BDRVParallelsState *s = bs->opaque;
uint64_t bytes_done = 0;
QEMUIOVector hd_qiov;
int ret = 0;
qemu_iovec_init(&hd_qiov, qiov->niov);
while (nb_sectors > 0) {
int64_t position;
int n, nbytes;
qemu_co_mutex_lock(&s->lock);
position = block_status(s, sector_num, nb_sectors, &n);
qemu_co_mutex_unlock(&s->lock);
nbytes = n << BDRV_SECTOR_BITS;
if (position < 0) {
qemu_iovec_memset(qiov, bytes_done, 0, nbytes);
} else {
qemu_iovec_reset(&hd_qiov);
qemu_iovec_concat(&hd_qiov, qiov, bytes_done, nbytes);
ret = bdrv_co_readv(bs->file, position, n, &hd_qiov);
if (ret < 0) {
break;
}
}
nb_sectors -= n;
sector_num += n;
bytes_done += nbytes;
}
qemu_iovec_destroy(&hd_qiov);
return ret;
}
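/* Image check: repair the unclean-shutdown flag, drop BAT entries pointing
* outside the image, account fragmentation, and optionally truncate space
* leaked past the highest allocated cluster. */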
static int parallels_check(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix)
{
BDRVParallelsState *s = bs->opaque;
int64_t size, prev_off, high_off;
int ret;
uint32_t i;
bool flush_bat = false;
int cluster_size = s->tracks << BDRV_SECTOR_BITS;
size = bdrv_getlength(bs->file->bs);
if (size < 0) {
res->check_errors++;
return size;
}
if (s->header_unclean) {
fprintf(stderr, "%s image was not closed correctly\n",
fix & BDRV_FIX_ERRORS ? "Repairing" : "ERROR");
res->corruptions++;
if (fix & BDRV_FIX_ERRORS) {
/* parallels_close will do the job right */
res->corruptions_fixed++;
s->header_unclean = false;
}
}
res->bfi.total_clusters = s->bat_size;
res->bfi.compressed_clusters = 0; /* compression is not supported */
high_off = 0;
prev_off = 0;
for (i = 0; i < s->bat_size; i++) {
int64_t off = bat2sect(s, i) << BDRV_SECTOR_BITS;
if (off == 0) {
prev_off = 0;
continue;
}
/* cluster outside the image */
if (off > size) {
fprintf(stderr, "%s cluster %u is outside image\n",
fix & BDRV_FIX_ERRORS ? "Repairing" : "ERROR", i);
res->corruptions++;
if (fix & BDRV_FIX_ERRORS) {
prev_off = 0;
s->bat_bitmap[i] = 0;
res->corruptions_fixed++;
flush_bat = true;
continue;
}
}
res->bfi.allocated_clusters++;
if (off > high_off) {
high_off = off;
}
if (prev_off != 0 && (prev_off + cluster_size) != off) {
res->bfi.fragmented_clusters++;
}
prev_off = off;
}
if (flush_bat) {
ret = bdrv_pwrite_sync(bs->file, 0, s->header, s->header_size);
if (ret < 0) {
res->check_errors++;
return ret;
}
}
res->image_end_offset = high_off + cluster_size;
if (size > res->image_end_offset) {
int64_t count;
count = DIV_ROUND_UP(size - res->image_end_offset, cluster_size);
fprintf(stderr, "%s space leaked at the end of the image %" PRId64 "\n",
fix & BDRV_FIX_LEAKS ? "Repairing" : "ERROR",
size - res->image_end_offset);
res->leaks += count;
if (fix & BDRV_FIX_LEAKS) {
ret = bdrv_truncate(bs->file->bs, res->image_end_offset);
if (ret < 0) {
res->check_errors++;
return ret;
}
res->leaks_fixed += count;
}
}
if (!memcmp(ph->magic, HEADER_MAGIC, 16) &&
(le32_to_cpu(ph->version) == HEADER_VERSION))
return 100;
return 0;
}
static int parallels_create(const char *filename, QemuOpts *opts, Error **errp)
{
int64_t total_size, cl_size;
uint8_t tmp[BDRV_SECTOR_SIZE];
Error *local_err = NULL;
BlockBackend *file;
uint32_t bat_entries, bat_sectors;
ParallelsHeader header;
int ret;
total_size = ROUND_UP(qemu_opt_get_size_del(opts, BLOCK_OPT_SIZE, 0),
BDRV_SECTOR_SIZE);
cl_size = ROUND_UP(qemu_opt_get_size_del(opts, BLOCK_OPT_CLUSTER_SIZE,
DEFAULT_CLUSTER_SIZE), BDRV_SECTOR_SIZE);
if (total_size >= MAX_PARALLELS_IMAGE_FACTOR * cl_size) {
error_propagate(errp, local_err);
return -E2BIG;
}
ret = bdrv_create_file(filename, opts, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
return ret;
}
file = blk_new_open(filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, &local_err);
if (file == NULL) {
error_propagate(errp, local_err);
return -EIO;
}
blk_set_allow_write_beyond_eof(file, true);
ret = blk_truncate(file, 0);
if (ret < 0) {
goto exit;
}
bat_entries = DIV_ROUND_UP(total_size, cl_size);
bat_sectors = DIV_ROUND_UP(bat_entry_off(bat_entries), cl_size);
bat_sectors = (bat_sectors * cl_size) >> BDRV_SECTOR_BITS;
memset(&header, 0, sizeof(header));
memcpy(header.magic, HEADER_MAGIC2, sizeof(header.magic));
header.version = cpu_to_le32(HEADER_VERSION);
/* don't care much about geometry, it is not used on image level */
header.heads = cpu_to_le32(16);
header.cylinders = cpu_to_le32(total_size / BDRV_SECTOR_SIZE / 16 / 32);
header.tracks = cpu_to_le32(cl_size >> BDRV_SECTOR_BITS);
header.bat_entries = cpu_to_le32(bat_entries);
header.nb_sectors = cpu_to_le64(DIV_ROUND_UP(total_size, BDRV_SECTOR_SIZE));
header.data_off = cpu_to_le32(bat_sectors);
/* write all the data */
memset(tmp, 0, sizeof(tmp));
memcpy(tmp, &header, sizeof(header));
ret = blk_pwrite(file, 0, tmp, BDRV_SECTOR_SIZE, 0);
if (ret < 0) {
goto exit;
}
ret = blk_pwrite_zeroes(file, BDRV_SECTOR_SIZE,
(bat_sectors - 1) << BDRV_SECTOR_BITS, 0);
if (ret < 0) {
goto exit;
}
ret = 0;
done:
blk_unref(file);
return ret;
exit:
error_setg_errno(errp, -ret, "Failed to create Parallels image");
goto done;
}
static int parallels_probe(const uint8_t *buf, int buf_size,
const char *filename)
{
const ParallelsHeader *ph = (const void *)buf;
if (buf_size < sizeof(ParallelsHeader)) {
return 0;
}
if ((!memcmp(ph->magic, HEADER_MAGIC, 16) ||
!memcmp(ph->magic, HEADER_MAGIC2, 16)) &&
(le32_to_cpu(ph->version) == HEADER_VERSION)) {
return 100;
}
return 0;
}
static int parallels_update_header(BlockDriverState *bs)
{
BDRVParallelsState *s = bs->opaque;
unsigned size = MAX(bdrv_opt_mem_align(bs->file->bs),
sizeof(ParallelsHeader));
if (size > s->header_size) {
size = s->header_size;
}
return bdrv_pwrite_sync(bs->file, 0, s->header, size);
}
static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVParallelsState *s = bs->opaque;
ParallelsHeader ph;
int ret, size, i;
QemuOpts *opts = NULL;
Error *local_err = NULL;
char *buf;
int i;
struct parallels_header ph;
int ret;
bs->read_only = 1; // no write support yet
ret = bdrv_pread(bs->file, 0, &ph, sizeof(ph));
if (ret < 0) {
goto fail;
}
bs->total_sectors = le64_to_cpu(ph.nb_sectors);
if (memcmp(ph.magic, HEADER_MAGIC, 16) ||
(le32_to_cpu(ph.version) != HEADER_VERSION)) {
error_setg(errp, "Image not in Parallels format");
ret = -EINVAL;
goto fail;
}
if (le32_to_cpu(ph.version) != HEADER_VERSION) {
goto fail_format;
}
if (!memcmp(ph.magic, HEADER_MAGIC, 16)) {
s->off_multiplier = 1;
bs->total_sectors = 0xffffffff & bs->total_sectors;
} else if (!memcmp(ph.magic, HEADER_MAGIC2, 16)) {
s->off_multiplier = le32_to_cpu(ph.tracks);
} else {
goto fail_format;
}
bs->total_sectors = le32_to_cpu(ph.nb_sectors);
s->tracks = le32_to_cpu(ph.tracks);
if (s->tracks == 0) {
@@ -606,165 +98,87 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
ret = -EINVAL;
goto fail;
}
if (s->tracks > INT32_MAX/513) {
error_setg(errp, "Invalid image: Too big cluster");
ret = -EFBIG;
goto fail;
}
s->bat_size = le32_to_cpu(ph.bat_entries);
if (s->bat_size > INT_MAX / sizeof(uint32_t)) {
s->catalog_size = le32_to_cpu(ph.catalog_entries);
if (s->catalog_size > INT_MAX / 4) {
error_setg(errp, "Catalog too large");
ret = -EFBIG;
goto fail;
}
s->catalog_bitmap = g_malloc(s->catalog_size * 4);
size = bat_entry_off(s->bat_size);
s->header_size = ROUND_UP(size, bdrv_opt_mem_align(bs->file->bs));
s->header = qemu_try_blockalign(bs->file->bs, s->header_size);
if (s->header == NULL) {
ret = -ENOMEM;
goto fail;
}
s->data_end = le32_to_cpu(ph.data_off);
if (s->data_end == 0) {
s->data_end = ROUND_UP(bat_entry_off(s->bat_size), BDRV_SECTOR_SIZE);
}
if (s->data_end < s->header_size) {
/* there is not enough unused space to pad the BAT out to block alignment
before the actual data starts; we can't avoid read-modify-write... */
s->header_size = size;
}
ret = bdrv_pread(bs->file, 0, s->header, s->header_size);
ret = bdrv_pread(bs->file, 64, s->catalog_bitmap, s->catalog_size * 4);
if (ret < 0) {
goto fail;
}
s->bat_bitmap = (uint32_t *)(s->header + 1);
for (i = 0; i < s->bat_size; i++) {
int64_t off = bat2sect(s, i);
if (off >= s->data_end) {
s->data_end = off + s->tracks;
}
}
if (le32_to_cpu(ph.inuse) == HEADER_INUSE_MAGIC) {
/* Image was not closed correctly. The check is mandatory */
s->header_unclean = true;
if ((flags & BDRV_O_RDWR) && !(flags & BDRV_O_CHECK)) {
error_setg(errp, "parallels: Image was not closed correctly; "
"cannot be opened read/write");
ret = -EACCES;
goto fail;
}
}
opts = qemu_opts_create(&parallels_runtime_opts, NULL, 0, &local_err);
if (local_err != NULL) {
goto fail_options;
}
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err != NULL) {
goto fail_options;
}
s->prealloc_size =
qemu_opt_get_size_del(opts, PARALLELS_OPT_PREALLOC_SIZE, 0);
s->prealloc_size = MAX(s->tracks, s->prealloc_size >> BDRV_SECTOR_BITS);
buf = qemu_opt_get_del(opts, PARALLELS_OPT_PREALLOC_MODE);
s->prealloc_mode = qapi_enum_parse(prealloc_mode_lookup, buf,
PRL_PREALLOC_MODE__MAX, PRL_PREALLOC_MODE_FALLOCATE, &local_err);
g_free(buf);
if (local_err != NULL) {
goto fail_options;
}
if (!bdrv_has_zero_init(bs->file->bs) ||
bdrv_truncate(bs->file->bs, bdrv_getlength(bs->file->bs)) != 0) {
s->prealloc_mode = PRL_PREALLOC_MODE_FALLOCATE;
}
if (flags & BDRV_O_RDWR) {
s->header->inuse = cpu_to_le32(HEADER_INUSE_MAGIC);
ret = parallels_update_header(bs);
if (ret < 0) {
goto fail;
}
}
s->bat_dirty_block = 4 * getpagesize();
s->bat_dirty_bmap =
bitmap_new(DIV_ROUND_UP(s->header_size, s->bat_dirty_block));
for (i = 0; i < s->catalog_size; i++)
le32_to_cpus(&s->catalog_bitmap[i]);
qemu_co_mutex_init(&s->lock);
return 0;
fail_format:
error_setg(errp, "Image not in Parallels format");
ret = -EINVAL;
fail:
qemu_vfree(s->header);
g_free(s->catalog_bitmap);
return ret;
fail_options:
error_propagate(errp, local_err);
ret = -EINVAL;
goto fail;
}
static int64_t seek_to_sector(BlockDriverState *bs, int64_t sector_num)
{
BDRVParallelsState *s = bs->opaque;
uint32_t index, offset;
index = sector_num / s->tracks;
offset = sector_num % s->tracks;
/* not allocated */
if ((index > s->catalog_size) || (s->catalog_bitmap[index] == 0))
return -1;
return (uint64_t)(s->catalog_bitmap[index] + offset) * 512;
}
static int parallels_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
while (nb_sectors > 0) {
int64_t position = seek_to_sector(bs, sector_num);
if (position >= 0) {
if (bdrv_pread(bs->file, position, buf, 512) != 512)
return -1;
} else {
memset(buf, 0, 512);
}
nb_sectors--;
sector_num++;
buf += 512;
}
return 0;
}
static coroutine_fn int parallels_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVParallelsState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = parallels_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static void parallels_close(BlockDriverState *bs)
{
BDRVParallelsState *s = bs->opaque;
if (bs->open_flags & BDRV_O_RDWR) {
s->header->inuse = 0;
parallels_update_header(bs);
}
if (bs->open_flags & BDRV_O_RDWR) {
bdrv_truncate(bs->file->bs, s->data_end << BDRV_SECTOR_BITS);
}
g_free(s->bat_dirty_bmap);
qemu_vfree(s->header);
g_free(s->catalog_bitmap);
}
static QemuOptsList parallels_create_opts = {
.name = "parallels-create-opts",
.head = QTAILQ_HEAD_INITIALIZER(parallels_create_opts.head),
.desc = {
{
.name = BLOCK_OPT_SIZE,
.type = QEMU_OPT_SIZE,
.help = "Virtual disk size",
},
{
.name = BLOCK_OPT_CLUSTER_SIZE,
.type = QEMU_OPT_SIZE,
.help = "Parallels image cluster size",
.def_value_str = stringify(DEFAULT_CLUSTER_SIZE),
},
{ /* end of list */ }
}
};
static BlockDriver bdrv_parallels = {
.format_name = "parallels",
.instance_size = sizeof(BDRVParallelsState),
.bdrv_probe = parallels_probe,
.bdrv_open = parallels_open,
.bdrv_read = parallels_co_read,
.bdrv_close = parallels_close,
.bdrv_co_get_block_status = parallels_co_get_block_status,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_co_flush_to_os = parallels_co_flush_to_os,
.bdrv_co_readv = parallels_co_readv,
.bdrv_co_writev = parallels_co_writev,
.bdrv_create = parallels_create,
.bdrv_check = parallels_check,
.create_opts = &parallels_create_opts,
};
static void bdrv_parallels_init(void)


@@ -22,23 +22,15 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "block/qapi.h"
#include "block/block_int.h"
#include "block/throttle-groups.h"
#include "block/write-threshold.h"
#include "qmp-commands.h"
#include "qapi-visit.h"
#include "qapi/qmp-output-visitor.h"
#include "qapi/qmp/types.h"
#include "sysemu/block-backend.h"
#include "qemu/cutils.h"
BlockDeviceInfo *bdrv_block_device_info(BlockBackend *blk,
BlockDriverState *bs, Error **errp)
BlockDeviceInfo *bdrv_block_device_info(BlockDriverState *bs)
{
ImageInfo **p_image_info;
BlockDriverState *bs0;
BlockDeviceInfo *info = g_malloc0(sizeof(*info));
info->file = g_strdup(bs->filename);
@@ -47,13 +39,6 @@ BlockDeviceInfo *bdrv_block_device_info(BlockBackend *blk,
info->encrypted = bs->encrypted;
info->encryption_key_missing = bdrv_key_required(bs);
info->cache = g_new(BlockdevCacheInfo, 1);
*info->cache = (BlockdevCacheInfo) {
.writeback = blk ? blk_enable_write_cache(blk) : true,
.direct = !!(bs->open_flags & BDRV_O_NOCACHE),
.no_flush = !!(bs->open_flags & BDRV_O_NO_FLUSH),
};
if (bs->node_name[0]) {
info->has_node_name = true;
info->node_name = g_strdup(bs->node_name);
@@ -65,13 +50,10 @@ BlockDeviceInfo *bdrv_block_device_info(BlockBackend *blk,
}
info->backing_file_depth = bdrv_get_backing_file_depth(bs);
info->detect_zeroes = bs->detect_zeroes;
if (blk && blk_get_public(blk)->throttle_state) {
if (bs->io_limits_enabled) {
ThrottleConfig cfg;
throttle_group_get_config(blk, &cfg);
throttle_get_config(&bs->throttle_state, &cfg);
info->bps = cfg.buckets[THROTTLE_BPS_TOTAL].avg;
info->bps_rd = cfg.buckets[THROTTLE_BPS_READ].avg;
info->bps_wr = cfg.buckets[THROTTLE_BPS_WRITE].avg;
@@ -94,52 +76,8 @@ BlockDeviceInfo *bdrv_block_device_info(BlockBackend *blk,
info->has_iops_wr_max = cfg.buckets[THROTTLE_OPS_WRITE].max;
info->iops_wr_max = cfg.buckets[THROTTLE_OPS_WRITE].max;
info->has_bps_max_length = info->has_bps_max;
info->bps_max_length =
cfg.buckets[THROTTLE_BPS_TOTAL].burst_length;
info->has_bps_rd_max_length = info->has_bps_rd_max;
info->bps_rd_max_length =
cfg.buckets[THROTTLE_BPS_READ].burst_length;
info->has_bps_wr_max_length = info->has_bps_wr_max;
info->bps_wr_max_length =
cfg.buckets[THROTTLE_BPS_WRITE].burst_length;
info->has_iops_max_length = info->has_iops_max;
info->iops_max_length =
cfg.buckets[THROTTLE_OPS_TOTAL].burst_length;
info->has_iops_rd_max_length = info->has_iops_rd_max;
info->iops_rd_max_length =
cfg.buckets[THROTTLE_OPS_READ].burst_length;
info->has_iops_wr_max_length = info->has_iops_wr_max;
info->iops_wr_max_length =
cfg.buckets[THROTTLE_OPS_WRITE].burst_length;
info->has_iops_size = cfg.op_size;
info->iops_size = cfg.op_size;
info->has_group = true;
info->group = g_strdup(throttle_group_get_name(blk));
}
info->write_threshold = bdrv_write_threshold_get(bs);
bs0 = bs;
p_image_info = &info->image;
while (1) {
Error *local_err = NULL;
bdrv_query_image_info(bs0, p_image_info, &local_err);
if (local_err) {
error_propagate(errp, local_err);
qapi_free_BlockDeviceInfo(info);
return NULL;
}
if (bs0->drv && bs0->backing) {
bs0 = bs0->backing->bs;
(*p_image_info)->has_backing_image = true;
p_image_info = &((*p_image_info)->backing_image);
} else {
break;
}
}
return info;
@@ -226,26 +164,19 @@ void bdrv_query_image_info(BlockDriverState *bs,
ImageInfo **p_info,
Error **errp)
{
int64_t size;
uint64_t total_sectors;
const char *backing_filename;
char backing_filename2[1024];
BlockDriverInfo bdi;
int ret;
Error *err = NULL;
ImageInfo *info;
ImageInfo *info = g_new0(ImageInfo, 1);
aio_context_acquire(bdrv_get_aio_context(bs));
bdrv_get_geometry(bs, &total_sectors);
size = bdrv_getlength(bs);
if (size < 0) {
error_setg_errno(errp, -size, "Can't get size of device '%s'",
bdrv_get_device_name(bs));
goto out;
}
info = g_new0(ImageInfo, 1);
info->filename = g_strdup(bs->filename);
info->format = g_strdup(bdrv_get_format_name(bs));
info->virtual_size = size;
info->virtual_size = total_sectors * 512;
info->actual_size = bdrv_get_allocated_file_size(bs);
info->has_actual_size = info->actual_size >= 0;
if (bdrv_is_encrypted(bs)) {
@@ -265,23 +196,14 @@ void bdrv_query_image_info(BlockDriverState *bs,
backing_filename = bs->backing_file;
if (backing_filename[0] != '\0') {
char *backing_filename2 = g_malloc0(PATH_MAX);
info->backing_filename = g_strdup(backing_filename);
info->has_backing_filename = true;
bdrv_get_full_backing_filename(bs, backing_filename2, PATH_MAX, &err);
if (err) {
/* Can't reconstruct the full backing filename, so we must omit
* this field and apply a Best Effort to this query. */
g_free(backing_filename2);
backing_filename2 = NULL;
error_free(err);
err = NULL;
}
bdrv_get_full_backing_filename(bs, backing_filename2,
sizeof(backing_filename2));
/* Always report the full_backing_filename if present, even if it's the
* same as backing_filename. That they are the same is useful info. */
if (backing_filename2) {
info->full_backing_filename = g_strdup(backing_filename2);
if (strcmp(backing_filename, backing_filename2) != 0) {
info->full_backing_filename =
g_strdup(backing_filename2);
info->has_full_backing_filename = true;
}
@@ -289,7 +211,6 @@ void bdrv_query_image_info(BlockDriverState *bs,
info->backing_filename_format = g_strdup(bs->backing_format);
info->has_backing_filename_format = true;
}
g_free(backing_filename2);
}
ret = bdrv_query_snapshot_info_list(bs, &info->snapshots, &err);
@@ -307,46 +228,60 @@ void bdrv_query_image_info(BlockDriverState *bs,
default:
error_propagate(errp, err);
qapi_free_ImageInfo(info);
goto out;
return;
}
*p_info = info;
out:
aio_context_release(bdrv_get_aio_context(bs));
}
/* @p_info will be set only on success. */
static void bdrv_query_info(BlockBackend *blk, BlockInfo **p_info,
Error **errp)
void bdrv_query_info(BlockDriverState *bs,
BlockInfo **p_info,
Error **errp)
{
BlockInfo *info = g_malloc0(sizeof(*info));
BlockDriverState *bs = blk_bs(blk);
info->device = g_strdup(blk_name(blk));
BlockDriverState *bs0;
ImageInfo **p_image_info;
Error *local_err = NULL;
info->device = g_strdup(bs->device_name);
info->type = g_strdup("unknown");
info->locked = blk_dev_is_medium_locked(blk);
info->removable = blk_dev_has_removable_media(blk);
info->locked = bdrv_dev_is_medium_locked(bs);
info->removable = bdrv_dev_has_removable_media(bs);
if (blk_dev_has_tray(blk)) {
if (bdrv_dev_has_removable_media(bs)) {
info->has_tray_open = true;
info->tray_open = blk_dev_is_tray_open(blk);
info->tray_open = bdrv_dev_is_tray_open(bs);
}
if (blk_iostatus_is_enabled(blk)) {
if (bdrv_iostatus_is_enabled(bs)) {
info->has_io_status = true;
info->io_status = blk_iostatus(blk);
info->io_status = bs->iostatus;
}
if (bs && !QLIST_EMPTY(&bs->dirty_bitmaps)) {
if (!QLIST_EMPTY(&bs->dirty_bitmaps)) {
info->has_dirty_bitmaps = true;
info->dirty_bitmaps = bdrv_query_dirty_bitmaps(bs);
}
if (bs && bs->drv) {
if (bs->drv) {
info->has_inserted = true;
info->inserted = bdrv_block_device_info(blk, bs, errp);
if (info->inserted == NULL) {
goto err;
info->inserted = bdrv_block_device_info(bs);
bs0 = bs;
p_image_info = &info->inserted->image;
while (1) {
bdrv_query_image_info(bs0, p_image_info, &local_err);
if (local_err) {
error_propagate(errp, local_err);
goto err;
}
if (bs0->drv && bs0->backing_hd) {
bs0 = bs0->backing_hd;
(*p_image_info)->has_backing_image = true;
p_image_info = &((*p_image_info)->backing_image);
} else {
break;
}
}
}
@@ -357,115 +292,36 @@ static void bdrv_query_info(BlockBackend *blk, BlockInfo **p_info,
qapi_free_BlockInfo(info);
}
static BlockStats *bdrv_query_stats(BlockBackend *blk,
const BlockDriverState *bs,
bool query_backing);
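/* Fill the QAPI BlockDeviceStats from the BlockBackend's accounting data:
* byte/op counters, failure and merge counts, cumulative timings, idle time,
* plus one timed-stats entry per accounting interval with latency min/max/avg
* and average queue depth. */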
static void bdrv_query_blk_stats(BlockDeviceStats *ds, BlockBackend *blk)
{
BlockAcctStats *stats = blk_get_stats(blk);
BlockAcctTimedStats *ts = NULL;
ds->rd_bytes = stats->nr_bytes[BLOCK_ACCT_READ];
ds->wr_bytes = stats->nr_bytes[BLOCK_ACCT_WRITE];
ds->rd_operations = stats->nr_ops[BLOCK_ACCT_READ];
ds->wr_operations = stats->nr_ops[BLOCK_ACCT_WRITE];
ds->failed_rd_operations = stats->failed_ops[BLOCK_ACCT_READ];
ds->failed_wr_operations = stats->failed_ops[BLOCK_ACCT_WRITE];
ds->failed_flush_operations = stats->failed_ops[BLOCK_ACCT_FLUSH];
ds->invalid_rd_operations = stats->invalid_ops[BLOCK_ACCT_READ];
ds->invalid_wr_operations = stats->invalid_ops[BLOCK_ACCT_WRITE];
ds->invalid_flush_operations =
stats->invalid_ops[BLOCK_ACCT_FLUSH];
ds->rd_merged = stats->merged[BLOCK_ACCT_READ];
ds->wr_merged = stats->merged[BLOCK_ACCT_WRITE];
ds->flush_operations = stats->nr_ops[BLOCK_ACCT_FLUSH];
ds->wr_total_time_ns = stats->total_time_ns[BLOCK_ACCT_WRITE];
ds->rd_total_time_ns = stats->total_time_ns[BLOCK_ACCT_READ];
ds->flush_total_time_ns = stats->total_time_ns[BLOCK_ACCT_FLUSH];
ds->has_idle_time_ns = stats->last_access_time_ns > 0;
if (ds->has_idle_time_ns) {
ds->idle_time_ns = block_acct_idle_time_ns(stats);
}
ds->account_invalid = stats->account_invalid;
ds->account_failed = stats->account_failed;
while ((ts = block_acct_interval_next(stats, ts))) {
BlockDeviceTimedStatsList *timed_stats =
g_malloc0(sizeof(*timed_stats));
BlockDeviceTimedStats *dev_stats = g_malloc0(sizeof(*dev_stats));
timed_stats->next = ds->timed_stats;
timed_stats->value = dev_stats;
ds->timed_stats = timed_stats;
TimedAverage *rd = &ts->latency[BLOCK_ACCT_READ];
TimedAverage *wr = &ts->latency[BLOCK_ACCT_WRITE];
TimedAverage *fl = &ts->latency[BLOCK_ACCT_FLUSH];
dev_stats->interval_length = ts->interval_length;
dev_stats->min_rd_latency_ns = timed_average_min(rd);
dev_stats->max_rd_latency_ns = timed_average_max(rd);
dev_stats->avg_rd_latency_ns = timed_average_avg(rd);
dev_stats->min_wr_latency_ns = timed_average_min(wr);
dev_stats->max_wr_latency_ns = timed_average_max(wr);
dev_stats->avg_wr_latency_ns = timed_average_avg(wr);
dev_stats->min_flush_latency_ns = timed_average_min(fl);
dev_stats->max_flush_latency_ns = timed_average_max(fl);
dev_stats->avg_flush_latency_ns = timed_average_avg(fl);
dev_stats->avg_rd_queue_depth =
block_acct_queue_depth(ts, BLOCK_ACCT_READ);
dev_stats->avg_wr_queue_depth =
block_acct_queue_depth(ts, BLOCK_ACCT_WRITE);
}
}
static void bdrv_query_bds_stats(BlockStats *s, const BlockDriverState *bs,
bool query_backing)
{
if (bdrv_get_node_name(bs)[0]) {
s->has_node_name = true;
s->node_name = g_strdup(bdrv_get_node_name(bs));
}
s->stats->wr_highest_offset = bs->wr_highest_offset;
if (bs->file) {
s->has_parent = true;
s->parent = bdrv_query_stats(NULL, bs->file->bs, query_backing);
}
if (query_backing && bs->backing) {
s->has_backing = true;
s->backing = bdrv_query_stats(NULL, bs->backing->bs, query_backing);
}
}
static BlockStats *bdrv_query_stats(BlockBackend *blk,
const BlockDriverState *bs,
bool query_backing)
BlockStats *bdrv_query_stats(const BlockDriverState *bs)
{
BlockStats *s;
s = g_malloc0(sizeof(*s));
s->stats = g_malloc0(sizeof(*s->stats));
if (blk) {
if (bs->device_name[0]) {
s->has_device = true;
s->device = g_strdup(blk_name(blk));
bdrv_query_blk_stats(s->stats, blk);
s->device = g_strdup(bs->device_name);
}
if (bs) {
bdrv_query_bds_stats(s, bs, query_backing);
s->stats = g_malloc0(sizeof(*s->stats));
s->stats->rd_bytes = bs->nr_bytes[BDRV_ACCT_READ];
s->stats->wr_bytes = bs->nr_bytes[BDRV_ACCT_WRITE];
s->stats->rd_operations = bs->nr_ops[BDRV_ACCT_READ];
s->stats->wr_operations = bs->nr_ops[BDRV_ACCT_WRITE];
s->stats->wr_highest_offset = bs->wr_highest_sector * BDRV_SECTOR_SIZE;
s->stats->flush_operations = bs->nr_ops[BDRV_ACCT_FLUSH];
s->stats->wr_total_time_ns = bs->total_time_ns[BDRV_ACCT_WRITE];
s->stats->rd_total_time_ns = bs->total_time_ns[BDRV_ACCT_READ];
s->stats->flush_total_time_ns = bs->total_time_ns[BDRV_ACCT_FLUSH];
if (bs->file) {
s->has_parent = true;
s->parent = bdrv_query_stats(bs->file);
}
if (bs->backing_hd) {
s->has_backing = true;
s->backing = bdrv_query_stats(bs->backing_hd);
}
return s;
@@ -474,17 +330,15 @@ static BlockStats *bdrv_query_stats(BlockBackend *blk,
BlockInfoList *qmp_query_block(Error **errp)
{
BlockInfoList *head = NULL, **p_next = &head;
BlockBackend *blk;
BlockDriverState *bs = NULL;
Error *local_err = NULL;
for (blk = blk_next(NULL); blk; blk = blk_next(blk)) {
while ((bs = bdrv_next(bs))) {
BlockInfoList *info = g_malloc0(sizeof(*info));
bdrv_query_info(blk, &info->value, &local_err);
bdrv_query_info(bs, &info->value, &local_err);
if (local_err) {
error_propagate(errp, local_err);
g_free(info);
qapi_free_BlockInfoList(head);
return NULL;
goto err;
}
*p_next = info;
@@ -492,41 +346,20 @@ BlockInfoList *qmp_query_block(Error **errp)
}
return head;
err:
qapi_free_BlockInfoList(head);
return NULL;
}
static bool next_query_bds(BlockBackend **blk, BlockDriverState **bs,
bool query_nodes)
{
if (query_nodes) {
*bs = bdrv_next_node(*bs);
return !!*bs;
}
*blk = blk_next(*blk);
*bs = *blk ? blk_bs(*blk) : NULL;
return !!*blk;
}
BlockStatsList *qmp_query_blockstats(bool has_query_nodes,
bool query_nodes,
Error **errp)
BlockStatsList *qmp_query_blockstats(Error **errp)
{
BlockStatsList *head = NULL, **p_next = &head;
BlockBackend *blk = NULL;
BlockDriverState *bs = NULL;
/* Just to be safe if query_nodes is not always initialized */
query_nodes = has_query_nodes && query_nodes;
while (next_query_bds(&blk, &bs, query_nodes)) {
while ((bs = bdrv_next(bs))) {
BlockStatsList *info = g_malloc0(sizeof(*info));
AioContext *ctx = blk ? blk_get_aio_context(blk)
: bdrv_get_aio_context(bs);
aio_context_acquire(ctx);
info->value = bdrv_query_stats(blk, bs, !query_nodes);
aio_context_release(ctx);
info->value = bdrv_query_stats(bs);
*p_next = info;
p_next = &info->next;
@@ -539,7 +372,7 @@ BlockStatsList *qmp_query_blockstats(bool has_query_nodes,
static char *get_human_readable_size(char *buf, int buf_size, int64_t size)
{
static const char suffixes[NB_SUFFIXES] = {'K', 'M', 'G', 'T'};
static const char suffixes[NB_SUFFIXES] = "KMGT";
int64_t base;
int i;
@@ -635,9 +468,17 @@ static void dump_qobject(fprintf_function func_fprintf, void *f,
}
case QTYPE_QBOOL: {
QBool *value = qobject_to_qbool(obj);
func_fprintf(f, "%s", qbool_get_bool(value) ? "true" : "false");
func_fprintf(f, "%s", qbool_get_int(value) ? "true" : "false");
break;
}
case QTYPE_QERROR: {
QString *value = qerror_human((QError *)obj);
func_fprintf(f, "%s", qstring_get_str(value));
break;
}
case QTYPE_NONE:
break;
case QTYPE_MAX:
default:
abort();
}
@@ -650,10 +491,11 @@ static void dump_qlist(fprintf_function func_fprintf, void *f, int indentation,
int i = 0;
for (entry = qlist_first(list); entry; entry = qlist_next(entry), i++) {
QType type = qobject_type(entry->value);
qtype_code type = qobject_type(entry->value);
bool composite = (type == QTYPE_QDICT || type == QTYPE_QLIST);
func_fprintf(f, "%*s[%i]:%c", indentation * 4, "", i,
composite ? '\n' : ' ');
const char *format = composite ? "%*s[%i]:\n" : "%*s[%i]: ";
func_fprintf(f, format, indentation * 4, "", i);
dump_qobject(func_fprintf, f, indentation + 1, entry->value);
if (!composite) {
func_fprintf(f, "\n");
@@ -667,9 +509,10 @@ static void dump_qdict(fprintf_function func_fprintf, void *f, int indentation,
const QDictEntry *entry;
for (entry = qdict_first(dict); entry; entry = qdict_next(dict, entry)) {
QType type = qobject_type(entry->value);
qtype_code type = qobject_type(entry->value);
bool composite = (type == QTYPE_QDICT || type == QTYPE_QLIST);
char *key = g_malloc(strlen(entry->key) + 1);
const char *format = composite ? "%*s%s:\n" : "%*s%s: ";
char key[strlen(entry->key) + 1];
int i;
/* replace dashes with spaces in key (variable) names */
@@ -677,28 +520,29 @@ static void dump_qdict(fprintf_function func_fprintf, void *f, int indentation,
key[i] = entry->key[i] == '-' ? ' ' : entry->key[i];
}
key[i] = 0;
func_fprintf(f, "%*s%s:%c", indentation * 4, "", key,
composite ? '\n' : ' ');
func_fprintf(f, format, indentation * 4, "", key);
dump_qobject(func_fprintf, f, indentation + 1, entry->value);
if (!composite) {
func_fprintf(f, "\n");
}
g_free(key);
}
}
void bdrv_image_info_specific_dump(fprintf_function func_fprintf, void *f,
ImageInfoSpecific *info_spec)
{
Error *local_err = NULL;
QmpOutputVisitor *ov = qmp_output_visitor_new();
QObject *obj, *data;
Visitor *v = qmp_output_visitor_new(&obj);
visit_type_ImageInfoSpecific(v, NULL, &info_spec, &error_abort);
visit_complete(v, &obj);
visit_type_ImageInfoSpecific(qmp_output_get_visitor(ov), &info_spec, NULL,
&local_err);
obj = qmp_output_get_qobject(ov);
assert(qobject_type(obj) == QTYPE_QDICT);
data = qdict_get(qobject_to_qdict(obj), "data");
dump_qobject(func_fprintf, f, 1, data);
visit_free(v);
qmp_output_visitor_cleanup(ov);
}
void bdrv_image_info_dump(fprintf_function func_fprintf, void *f,
@@ -736,10 +580,7 @@ void bdrv_image_info_dump(fprintf_function func_fprintf, void *f,
if (info->has_backing_filename) {
func_fprintf(f, "backing file: %s", info->backing_filename);
if (!info->has_full_backing_filename) {
func_fprintf(f, " (cannot determine actual path)");
} else if (strcmp(info->backing_filename,
info->full_backing_filename) != 0) {
if (info->has_full_backing_filename) {
func_fprintf(f, " (actual path: %s)", info->full_backing_filename);
}
func_fprintf(f, "\n");


@@ -21,17 +21,11 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "qemu/error-report.h"
#include "block/block_int.h"
#include "sysemu/block-backend.h"
#include "qemu/module.h"
#include "qemu/bswap.h"
#include <zlib.h>
#include "qapi/qmp/qerror.h"
#include "crypto/cipher.h"
#include "qemu/aes.h"
#include "migration/migration.h"
/**************************************************************/
@@ -54,10 +48,9 @@ typedef struct QCowHeader {
uint64_t size; /* in bytes */
uint8_t cluster_bits;
uint8_t l2_bits;
uint16_t padding;
uint32_t crypt_method;
uint64_t l1_table_offset;
} QEMU_PACKED QCowHeader;
} QCowHeader;
#define L2_CACHE_SIZE 16
@@ -67,7 +60,7 @@ typedef struct BDRVQcowState {
int cluster_sectors;
int l2_bits;
int l2_size;
unsigned int l1_size;
int l1_size;
uint64_t cluster_offset_mask;
uint64_t l1_table_offset;
uint64_t *l1_table;
@@ -77,8 +70,10 @@ typedef struct BDRVQcowState {
uint8_t *cluster_cache;
uint8_t *cluster_data;
uint64_t cluster_cache_offset;
QCryptoCipher *cipher; /* NULL if no key yet */
uint32_t crypt_method; /* current crypt method, 0 if no key yet */
uint32_t crypt_method_header;
AES_KEY aes_encrypt_key;
AES_KEY aes_decrypt_key;
CoMutex lock;
Error *migration_blocker;
} BDRVQcowState;
@@ -101,8 +96,7 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVQcowState *s = bs->opaque;
unsigned int len, i, shift;
int ret;
int len, i, shift, ret;
QCowHeader header;
ret = bdrv_pread(bs->file, 0, &header, sizeof(header));
@@ -124,57 +118,27 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
goto fail;
}
if (header.version != QCOW_VERSION) {
error_setg(errp, "Unsupported qcow version %" PRIu32, header.version);
char version[64];
snprintf(version, sizeof(version), "QCOW version %d", header.version);
error_set(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "qcow", version);
ret = -ENOTSUP;
goto fail;
}
if (header.size <= 1) {
error_setg(errp, "Image size is too small (must be at least 2 bytes)");
if (header.size <= 1 || header.cluster_bits < 9) {
error_setg(errp, "invalid value in qcow header");
ret = -EINVAL;
goto fail;
}
if (header.cluster_bits < 9 || header.cluster_bits > 16) {
error_setg(errp, "Cluster size must be between 512 and 64k");
ret = -EINVAL;
goto fail;
}
/* l2_bits specifies number of entries; storing a uint64_t in each entry,
* so bytes = num_entries << 3. */
if (header.l2_bits < 9 - 3 || header.l2_bits > 16 - 3) {
error_setg(errp, "L2 table size must be between 512 and 64k");
ret = -EINVAL;
goto fail;
}
if (header.crypt_method > QCOW_CRYPT_AES) {
error_setg(errp, "invalid encryption method in qcow header");
ret = -EINVAL;
goto fail;
}
if (!qcrypto_cipher_supports(QCRYPTO_CIPHER_ALG_AES_128)) {
error_setg(errp, "AES cipher not available");
ret = -EINVAL;
goto fail;
}
s->crypt_method_header = header.crypt_method;
if (s->crypt_method_header) {
if (bdrv_uses_whitelist() &&
s->crypt_method_header == QCOW_CRYPT_AES) {
error_setg(errp,
"Use of AES-CBC encrypted qcow images is no longer "
"supported in system emulators");
error_append_hint(errp,
"You can use 'qemu-img convert' to convert your "
"image to an alternative supported format, such "
"as unencrypted qcow, or raw with the LUKS "
"format instead.\n");
ret = -ENOSYS;
goto fail;
}
bs->encrypted = true;
bs->encrypted = 1;
}
s->cluster_bits = header.cluster_bits;
s->cluster_size = 1 << s->cluster_bits;
@@ -186,27 +150,10 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
/* read the level 1 table */
shift = s->cluster_bits + s->l2_bits;
if (header.size > UINT64_MAX - (1LL << shift)) {
error_setg(errp, "Image too large");
ret = -EINVAL;
goto fail;
} else {
uint64_t l1_size = (header.size + (1LL << shift) - 1) >> shift;
if (l1_size > INT_MAX / sizeof(uint64_t)) {
error_setg(errp, "Image too large");
ret = -EINVAL;
goto fail;
}
s->l1_size = l1_size;
}
s->l1_size = (header.size + (1LL << shift) - 1) >> shift;
s->l1_table_offset = header.l1_table_offset;
s->l1_table = g_try_new(uint64_t, s->l1_size);
if (s->l1_table == NULL) {
error_setg(errp, "Could not allocate memory for L1 table");
ret = -ENOMEM;
goto fail;
}
s->l1_table = g_malloc(s->l1_size * sizeof(uint64_t));
ret = bdrv_pread(bs->file, s->l1_table_offset, s->l1_table,
s->l1_size * sizeof(uint64_t));
@@ -217,16 +164,8 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
for(i = 0;i < s->l1_size; i++) {
be64_to_cpus(&s->l1_table[i]);
}
/* alloc L2 cache (max. 64k * 16 * 8 = 8 MB) */
s->l2_cache =
qemu_try_blockalign(bs->file->bs,
s->l2_size * L2_CACHE_SIZE * sizeof(uint64_t));
if (s->l2_cache == NULL) {
error_setg(errp, "Could not allocate L2 table cache");
ret = -ENOMEM;
goto fail;
}
/* alloc L2 cache */
s->l2_cache = g_malloc(s->l2_size * L2_CACHE_SIZE * sizeof(uint64_t));
s->cluster_cache = g_malloc(s->cluster_size);
s->cluster_data = g_malloc(s->cluster_size);
s->cluster_cache_offset = -1;
@@ -234,10 +173,8 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
/* read the backing file name */
if (header.backing_file_offset != 0) {
len = header.backing_file_size;
if (len > 1023 || len >= sizeof(bs->backing_file)) {
error_setg(errp, "Backing file name too long");
ret = -EINVAL;
goto fail;
if (len > 1023) {
len = 1023;
}
ret = bdrv_pread(bs->file, header.backing_file_offset,
bs->backing_file, len);
@@ -248,9 +185,9 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
}
/* Disable migration when qcow images are used */
error_setg(&s->migration_blocker, "The qcow format used by node '%s' "
"does not support live migration",
bdrv_get_device_or_node_name(bs));
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"qcow", bs->device_name, "live migration");
migrate_add_blocker(s->migration_blocker);
qemu_co_mutex_init(&s->lock);
@@ -258,7 +195,7 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
fail:
g_free(s->l1_table);
qemu_vfree(s->l2_cache);
g_free(s->l2_cache);
g_free(s->cluster_cache);
g_free(s->cluster_data);
return ret;
@@ -278,7 +215,6 @@ static int qcow_set_key(BlockDriverState *bs, const char *key)
BDRVQcowState *s = bs->opaque;
uint8_t keybuf[16];
int len, i;
Error *err;
memset(keybuf, 0, 16);
len = strlen(key);
@@ -289,68 +225,38 @@ static int qcow_set_key(BlockDriverState *bs, const char *key)
for(i = 0;i < len;i++) {
keybuf[i] = key[i];
}
assert(bs->encrypted);
s->crypt_method = s->crypt_method_header;
qcrypto_cipher_free(s->cipher);
s->cipher = qcrypto_cipher_new(
QCRYPTO_CIPHER_ALG_AES_128,
QCRYPTO_CIPHER_MODE_CBC,
keybuf, G_N_ELEMENTS(keybuf),
&err);
if (!s->cipher) {
/* XXX would be nice if errors in this method could
* be properly propagated to the caller. Would need
* the bdrv_set_key() API signature to be fixed. */
error_free(err);
if (AES_set_encrypt_key(keybuf, 128, &s->aes_encrypt_key) != 0)
return -1;
if (AES_set_decrypt_key(keybuf, 128, &s->aes_decrypt_key) != 0)
return -1;
}
return 0;
}
/* The crypt function is compatible with the linux cryptoloop
algorithm for < 4 GB images. NOTE: out_buf == in_buf is
supported */
static int encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
uint8_t *out_buf, const uint8_t *in_buf,
int nb_sectors, bool enc, Error **errp)
static void encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
uint8_t *out_buf, const uint8_t *in_buf,
int nb_sectors, int enc,
const AES_KEY *key)
{
union {
uint64_t ll[2];
uint8_t b[16];
} ivec;
int i;
int ret;
for(i = 0; i < nb_sectors; i++) {
ivec.ll[0] = cpu_to_le64(sector_num);
ivec.ll[1] = 0;
if (qcrypto_cipher_setiv(s->cipher,
ivec.b, G_N_ELEMENTS(ivec.b),
errp) < 0) {
return -1;
}
if (enc) {
ret = qcrypto_cipher_encrypt(s->cipher,
in_buf,
out_buf,
512,
errp);
} else {
ret = qcrypto_cipher_decrypt(s->cipher,
in_buf,
out_buf,
512,
errp);
}
if (ret < 0) {
return -1;
}
AES_cbc_encrypt(in_buf, out_buf, 512, key,
ivec.b, enc);
sector_num++;
in_buf += 512;
out_buf += 512;
}
return 0;
}
/* 'allocate' is:
@@ -384,7 +290,7 @@ static uint64_t get_cluster_offset(BlockDriverState *bs,
if (!allocate)
return 0;
/* allocate a new l2 entry */
l2_offset = bdrv_getlength(bs->file->bs);
l2_offset = bdrv_getlength(bs->file);
/* round to cluster size */
l2_offset = (l2_offset + s->cluster_size - 1) & ~(s->cluster_size - 1);
/* update the L1 entry */
@@ -424,8 +330,7 @@ static uint64_t get_cluster_offset(BlockDriverState *bs,
s->l2_size * sizeof(uint64_t)) < 0)
return 0;
} else {
if (bdrv_pread(bs->file, l2_offset, l2_table,
s->l2_size * sizeof(uint64_t)) !=
if (bdrv_pread(bs->file, l2_offset, l2_table, s->l2_size * sizeof(uint64_t)) !=
s->l2_size * sizeof(uint64_t))
return 0;
}
@@ -446,42 +351,34 @@ static uint64_t get_cluster_offset(BlockDriverState *bs,
overwritten */
if (decompress_cluster(bs, cluster_offset) < 0)
return 0;
cluster_offset = bdrv_getlength(bs->file->bs);
cluster_offset = bdrv_getlength(bs->file);
cluster_offset = (cluster_offset + s->cluster_size - 1) &
~(s->cluster_size - 1);
/* write the cluster content */
if (bdrv_pwrite(bs->file, cluster_offset, s->cluster_cache,
s->cluster_size) !=
if (bdrv_pwrite(bs->file, cluster_offset, s->cluster_cache, s->cluster_size) !=
s->cluster_size)
return -1;
} else {
cluster_offset = bdrv_getlength(bs->file->bs);
cluster_offset = bdrv_getlength(bs->file);
if (allocate == 1) {
/* round to cluster size */
cluster_offset = (cluster_offset + s->cluster_size - 1) &
~(s->cluster_size - 1);
bdrv_truncate(bs->file->bs, cluster_offset + s->cluster_size);
bdrv_truncate(bs->file, cluster_offset + s->cluster_size);
/* if encrypted, we must initialize the cluster
content which won't be written */
if (bs->encrypted &&
if (s->crypt_method &&
(n_end - n_start) < s->cluster_sectors) {
uint64_t start_sect;
assert(s->cipher);
start_sect = (offset & ~(s->cluster_size - 1)) >> 9;
memset(s->cluster_data + 512, 0x00, 512);
for(i = 0; i < s->cluster_sectors; i++) {
if (i < n_start || i >= n_end) {
Error *err = NULL;
if (encrypt_sectors(s, start_sect + i,
s->cluster_data,
s->cluster_data + 512, 1,
true, &err) < 0) {
error_free(err);
errno = EIO;
return -1;
}
if (bdrv_pwrite(bs->file,
cluster_offset + i * 512,
encrypt_sectors(s, start_sect + i,
s->cluster_data,
s->cluster_data + 512, 1, 1,
&s->aes_encrypt_key);
if (bdrv_pwrite(bs->file, cluster_offset + i * 512,
s->cluster_data, 512) != 512)
return -1;
}
@@ -503,7 +400,7 @@ static uint64_t get_cluster_offset(BlockDriverState *bs,
}
static int64_t coroutine_fn qcow_co_get_block_status(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *pnum, BlockDriverState **file)
int64_t sector_num, int nb_sectors, int *pnum)
{
BDRVQcowState *s = bs->opaque;
int index_in_cluster, n;
@@ -520,11 +417,10 @@ static int64_t coroutine_fn qcow_co_get_block_status(BlockDriverState *bs,
if (!cluster_offset) {
return 0;
}
if ((cluster_offset & QCOW_OFLAG_COMPRESSED) || s->cipher) {
if ((cluster_offset & QCOW_OFLAG_COMPRESSED) || s->crypt_method) {
return BDRV_BLOCK_DATA;
}
cluster_offset |= (index_in_cluster << BDRV_SECTOR_BITS);
*file = bs->file->bs;
return BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID | cluster_offset;
}
@@ -588,13 +484,9 @@ static coroutine_fn int qcow_co_readv(BlockDriverState *bs, int64_t sector_num,
QEMUIOVector hd_qiov;
uint8_t *buf;
void *orig_buf;
Error *err = NULL;
if (qiov->niov > 1) {
buf = orig_buf = qemu_try_blockalign(bs, qiov->size);
if (buf == NULL) {
return -ENOMEM;
}
buf = orig_buf = qemu_blockalign(bs, qiov->size);
} else {
orig_buf = NULL;
buf = (uint8_t *)qiov->iov->iov_base;
@@ -613,13 +505,14 @@ static coroutine_fn int qcow_co_readv(BlockDriverState *bs, int64_t sector_num,
}
if (!cluster_offset) {
if (bs->backing) {
if (bs->backing_hd) {
/* read from the base image */
hd_iov.iov_base = (void *)buf;
hd_iov.iov_len = n * 512;
qemu_iovec_init_external(&hd_qiov, &hd_iov, 1);
qemu_co_mutex_unlock(&s->lock);
ret = bdrv_co_readv(bs->backing, sector_num, n, &hd_qiov);
ret = bdrv_co_readv(bs->backing_hd, sector_num,
n, &hd_qiov);
qemu_co_mutex_lock(&s->lock);
if (ret < 0) {
goto fail;
@@ -650,12 +543,10 @@ static coroutine_fn int qcow_co_readv(BlockDriverState *bs, int64_t sector_num,
if (ret < 0) {
break;
}
if (bs->encrypted) {
assert(s->cipher);
if (encrypt_sectors(s, sector_num, buf, buf,
n, false, &err) < 0) {
goto fail;
}
if (s->crypt_method) {
encrypt_sectors(s, sector_num, buf, buf,
n, 0,
&s->aes_decrypt_key);
}
}
ret = 0;
@@ -676,7 +567,6 @@ done:
return ret;
fail:
error_free(err);
ret = -EIO;
goto done;
}
@@ -698,10 +588,7 @@ static coroutine_fn int qcow_co_writev(BlockDriverState *bs, int64_t sector_num,
s->cluster_cache_offset = -1; /* disable compressed cache */
if (qiov->niov > 1) {
buf = orig_buf = qemu_try_blockalign(bs, qiov->size);
if (buf == NULL) {
return -ENOMEM;
}
buf = orig_buf = qemu_blockalign(bs, qiov->size);
qemu_iovec_to_buf(qiov, 0, buf, qiov->size);
} else {
orig_buf = NULL;
@@ -724,18 +611,12 @@ static coroutine_fn int qcow_co_writev(BlockDriverState *bs, int64_t sector_num,
ret = -EIO;
break;
}
if (bs->encrypted) {
Error *err = NULL;
assert(s->cipher);
if (s->crypt_method) {
if (!cluster_data) {
cluster_data = g_malloc0(s->cluster_size);
}
if (encrypt_sectors(s, sector_num, cluster_data, buf,
n, true, &err) < 0) {
error_free(err);
ret = -EIO;
break;
}
encrypt_sectors(s, sector_num, cluster_data, buf,
n, 1, &s->aes_encrypt_key);
src_buf = cluster_data;
} else {
src_buf = buf;
@@ -772,10 +653,8 @@ static void qcow_close(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
qcrypto_cipher_free(s->cipher);
s->cipher = NULL;
g_free(s->l1_table);
qemu_vfree(s->l2_cache);
g_free(s->l2_cache);
g_free(s->cluster_cache);
g_free(s->cluster_data);
@@ -783,43 +662,46 @@ static void qcow_close(BlockDriverState *bs)
error_free(s->migration_blocker);
}
static int qcow_create(const char *filename, QemuOpts *opts, Error **errp)
static int qcow_create(const char *filename, QEMUOptionParameter *options,
Error **errp)
{
int header_size, backing_filename_len, l1_size, shift, i;
QCowHeader header;
uint8_t *tmp;
int64_t total_size = 0;
char *backing_file = NULL;
const char *backing_file = NULL;
int flags = 0;
Error *local_err = NULL;
int ret;
BlockBackend *qcow_blk;
BlockDriverState *qcow_bs;
/* Read out options */
total_size = ROUND_UP(qemu_opt_get_size_del(opts, BLOCK_OPT_SIZE, 0),
BDRV_SECTOR_SIZE);
backing_file = qemu_opt_get_del(opts, BLOCK_OPT_BACKING_FILE);
if (qemu_opt_get_bool_del(opts, BLOCK_OPT_ENCRYPT, false)) {
flags |= BLOCK_FLAG_ENCRYPT;
while (options && options->name) {
if (!strcmp(options->name, BLOCK_OPT_SIZE)) {
total_size = options->value.n / 512;
} else if (!strcmp(options->name, BLOCK_OPT_BACKING_FILE)) {
backing_file = options->value.s;
} else if (!strcmp(options->name, BLOCK_OPT_ENCRYPT)) {
flags |= options->value.n ? BLOCK_FLAG_ENCRYPT : 0;
}
options++;
}
ret = bdrv_create_file(filename, opts, &local_err);
ret = bdrv_create_file(filename, options, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
goto cleanup;
return ret;
}
qcow_blk = blk_new_open(filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, &local_err);
if (qcow_blk == NULL) {
qcow_bs = NULL;
ret = bdrv_open(&qcow_bs, filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, NULL, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
ret = -EIO;
goto cleanup;
return ret;
}
blk_set_allow_write_beyond_eof(qcow_blk, true);
ret = blk_truncate(qcow_blk, 0);
ret = bdrv_truncate(qcow_bs, 0);
if (ret < 0) {
goto exit;
}
@@ -827,7 +709,7 @@ static int qcow_create(const char *filename, QemuOpts *opts, Error **errp)
memset(&header, 0, sizeof(header));
header.magic = cpu_to_be32(QCOW_MAGIC);
header.version = cpu_to_be32(QCOW_VERSION);
header.size = cpu_to_be64(total_size);
header.size = cpu_to_be64(total_size * 512);
header_size = sizeof(header);
backing_filename_len = 0;
if (backing_file) {
@@ -849,7 +731,7 @@ static int qcow_create(const char *filename, QemuOpts *opts, Error **errp)
}
header_size = (header_size + 7) & ~7;
shift = header.cluster_bits + header.l2_bits;
l1_size = (total_size + (1LL << shift) - 1) >> shift;
l1_size = ((total_size * 512) + (1LL << shift) - 1) >> shift;
header.l1_table_offset = cpu_to_be64(header_size);
if (flags & BLOCK_FLAG_ENCRYPT) {
@@ -859,24 +741,24 @@ static int qcow_create(const char *filename, QemuOpts *opts, Error **errp)
}
/* write all the data */
ret = blk_pwrite(qcow_blk, 0, &header, sizeof(header), 0);
ret = bdrv_pwrite(qcow_bs, 0, &header, sizeof(header));
if (ret != sizeof(header)) {
goto exit;
}
if (backing_file) {
ret = blk_pwrite(qcow_blk, sizeof(header),
backing_file, backing_filename_len, 0);
ret = bdrv_pwrite(qcow_bs, sizeof(header),
backing_file, backing_filename_len);
if (ret != backing_filename_len) {
goto exit;
}
}
tmp = g_malloc0(BDRV_SECTOR_SIZE);
for (i = 0; i < DIV_ROUND_UP(sizeof(uint64_t) * l1_size, BDRV_SECTOR_SIZE);
i++) {
ret = blk_pwrite(qcow_blk, header_size + BDRV_SECTOR_SIZE * i,
tmp, BDRV_SECTOR_SIZE, 0);
for (i = 0; i < ((sizeof(uint64_t)*l1_size + BDRV_SECTOR_SIZE - 1)/
BDRV_SECTOR_SIZE); i++) {
ret = bdrv_pwrite(qcow_bs, header_size +
BDRV_SECTOR_SIZE*i, tmp, BDRV_SECTOR_SIZE);
if (ret != BDRV_SECTOR_SIZE) {
g_free(tmp);
goto exit;
@@ -886,9 +768,7 @@ static int qcow_create(const char *filename, QemuOpts *opts, Error **errp)
g_free(tmp);
ret = 0;
exit:
blk_unref(qcow_blk);
cleanup:
g_free(backing_file);
bdrv_unref(qcow_bs);
return ret;
}
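In both versions of qcow_create() the L1 table is sized as the number of entries needed to cover the image, with shift = cluster_bits + l2_bits; the new code keeps total_size in bytes while the old one counted 512-byte sectors, which is where the "* 512" factors come from. A worked example, assuming purely for illustration 4 KiB clusters (cluster_bits = 12) and 512-entry L2 tables (l2_bits = 9), so each L1 entry covers 2 MiB:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int shift = 12 + 9;                        /* cluster_bits + l2_bits (assumed) */
    uint64_t total_size = 1ULL << 30;          /* 1 GiB image, in bytes */
    uint64_t l1_size = (total_size + (1ULL << shift) - 1) >> shift;
    printf("%llu L1 entries\n", (unsigned long long)l1_size);   /* prints 512 */
    return 0;
}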
@@ -902,7 +782,7 @@ static int qcow_make_empty(BlockDriverState *bs)
if (bdrv_pwrite_sync(bs->file, s->l1_table_offset, s->l1_table,
l1_length) < 0)
return -1;
ret = bdrv_truncate(bs->file->bs, s->l1_table_offset + l1_length);
ret = bdrv_truncate(bs->file, s->l1_table_offset + l1_length);
if (ret < 0)
return ret;
@@ -915,32 +795,32 @@ static int qcow_make_empty(BlockDriverState *bs)
/* XXX: put compressed sectors first, then all the cluster aligned
tables to avoid losing bytes in alignment */
static coroutine_fn int
qcow_co_pwritev_compressed(BlockDriverState *bs, uint64_t offset,
uint64_t bytes, QEMUIOVector *qiov)
static int qcow_write_compressed(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
BDRVQcowState *s = bs->opaque;
QEMUIOVector hd_qiov;
struct iovec iov;
z_stream strm;
int ret, out_len;
uint8_t *buf, *out_buf;
uint8_t *out_buf;
uint64_t cluster_offset;
buf = qemu_blockalign(bs, s->cluster_size);
if (bytes != s->cluster_size) {
if (bytes > s->cluster_size ||
offset + bytes != bs->total_sectors << BDRV_SECTOR_BITS)
{
qemu_vfree(buf);
return -EINVAL;
}
/* Zero-pad last write if image size is not cluster aligned */
memset(buf + bytes, 0, s->cluster_size - bytes);
}
qemu_iovec_to_buf(qiov, 0, buf, qiov->size);
if (nb_sectors != s->cluster_sectors) {
ret = -EINVAL;
out_buf = g_malloc(s->cluster_size);
/* Zero-pad last write if image size is not cluster aligned */
if (sector_num + nb_sectors == bs->total_sectors &&
nb_sectors < s->cluster_sectors) {
uint8_t *pad_buf = qemu_blockalign(bs, s->cluster_size);
memset(pad_buf, 0, s->cluster_size);
memcpy(pad_buf, buf, nb_sectors * BDRV_SECTOR_SIZE);
ret = qcow_write_compressed(bs, sector_num,
pad_buf, s->cluster_sectors);
qemu_vfree(pad_buf);
}
return ret;
}
out_buf = g_malloc(s->cluster_size + (s->cluster_size / 1000) + 128);
/* best compression, small window, no zlib header */
memset(&strm, 0, sizeof(strm));
@@ -969,35 +849,27 @@ qcow_co_pwritev_compressed(BlockDriverState *bs, uint64_t offset,
if (ret != Z_STREAM_END || out_len >= s->cluster_size) {
/* could not compress: write normal cluster */
ret = qcow_co_writev(bs, offset >> BDRV_SECTOR_BITS,
bytes >> BDRV_SECTOR_BITS, qiov);
ret = bdrv_write(bs, sector_num, buf, s->cluster_sectors);
if (ret < 0) {
goto fail;
}
goto success;
}
qemu_co_mutex_lock(&s->lock);
cluster_offset = get_cluster_offset(bs, offset, 2, out_len, 0, 0);
qemu_co_mutex_unlock(&s->lock);
if (cluster_offset == 0) {
ret = -EIO;
goto fail;
}
cluster_offset &= s->cluster_offset_mask;
} else {
cluster_offset = get_cluster_offset(bs, sector_num << 9, 2,
out_len, 0, 0);
if (cluster_offset == 0) {
ret = -EIO;
goto fail;
}
iov = (struct iovec) {
.iov_base = out_buf,
.iov_len = out_len,
};
qemu_iovec_init_external(&hd_qiov, &iov, 1);
ret = bdrv_co_pwritev(bs->file, cluster_offset, out_len, &hd_qiov, 0);
if (ret < 0) {
goto fail;
cluster_offset &= s->cluster_offset_mask;
ret = bdrv_pwrite(bs->file, cluster_offset, out_buf, out_len);
if (ret < 0) {
goto fail;
}
}
success:
ret = 0;
fail:
qemu_vfree(buf);
g_free(out_buf);
return ret;
}
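The setup behind the "best compression, small window, no zlib header" comment above is zlib's deflateInit2() with negative windowBits, which produces raw deflate output without the zlib wrapper. A simplified sketch of such a setup; parameter values are illustrative, not copied from the driver:

#include <string.h>
#include <zlib.h>

/* Sketch only: raw-deflate a buffer into a fixed-size output. If the data
 * does not fit, the caller falls back to writing the cluster uncompressed,
 * as the diff above does. */
static int compress_cluster_sketch(const unsigned char *in, unsigned in_len,
                                   unsigned char *out, unsigned out_cap,
                                   unsigned *out_len)
{
    z_stream strm;
    int ret;

    memset(&strm, 0, sizeof(strm));
    if (deflateInit2(&strm, Z_BEST_COMPRESSION, Z_DEFLATED,
                     -12 /* raw deflate, 4 KiB window */, 9,
                     Z_DEFAULT_STRATEGY) != Z_OK) {
        return -1;
    }
    strm.next_in = (Bytef *)in;
    strm.avail_in = in_len;
    strm.next_out = out;
    strm.avail_out = out_cap;
    ret = deflate(&strm, Z_FINISH);
    deflateEnd(&strm);
    if (ret != Z_STREAM_END) {
        return -1;                    /* could not compress into one cluster */
    }
    *out_len = out_cap - strm.avail_out;
    return 0;
}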
@@ -1009,28 +881,24 @@ static int qcow_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
return 0;
}
static QemuOptsList qcow_create_opts = {
.name = "qcow-create-opts",
.head = QTAILQ_HEAD_INITIALIZER(qcow_create_opts.head),
.desc = {
{
.name = BLOCK_OPT_SIZE,
.type = QEMU_OPT_SIZE,
.help = "Virtual disk size"
},
{
.name = BLOCK_OPT_BACKING_FILE,
.type = QEMU_OPT_STRING,
.help = "File name of a base image"
},
{
.name = BLOCK_OPT_ENCRYPT,
.type = QEMU_OPT_BOOL,
.help = "Encrypt the image",
.def_value_str = "off"
},
{ /* end of list */ }
}
static QEMUOptionParameter qcow_create_options[] = {
{
.name = BLOCK_OPT_SIZE,
.type = OPT_SIZE,
.help = "Virtual disk size"
},
{
.name = BLOCK_OPT_BACKING_FILE,
.type = OPT_STRING,
.help = "File name of a base image"
},
{
.name = BLOCK_OPT_ENCRYPT,
.type = OPT_FLAG,
.help = "Encrypt the image"
},
{ NULL }
};
static BlockDriver bdrv_qcow = {
@@ -1039,10 +907,9 @@ static BlockDriver bdrv_qcow = {
.bdrv_probe = qcow_probe,
.bdrv_open = qcow_open,
.bdrv_close = qcow_close,
.bdrv_reopen_prepare = qcow_reopen_prepare,
.bdrv_create = qcow_create,
.bdrv_reopen_prepare = qcow_reopen_prepare,
.bdrv_create = qcow_create,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.supports_backing = true,
.bdrv_co_readv = qcow_co_readv,
.bdrv_co_writev = qcow_co_writev,
@@ -1050,10 +917,10 @@ static BlockDriver bdrv_qcow = {
.bdrv_set_key = qcow_set_key,
.bdrv_make_empty = qcow_make_empty,
.bdrv_co_pwritev_compressed = qcow_co_pwritev_compressed,
.bdrv_write_compressed = qcow_write_compressed,
.bdrv_get_info = qcow_get_info,
.create_opts = &qcow_create_opts,
.create_options = qcow_create_options,
};
static void bdrv_qcow_init(void)


@@ -22,127 +22,52 @@
* THE SOFTWARE.
*/
/* Needed for CONFIG_MADVISE */
#include "qemu/osdep.h"
#include "block/block_int.h"
#include "qemu-common.h"
#include "qcow2.h"
#include "trace.h"
typedef struct Qcow2CachedTable {
int64_t offset;
uint64_t lru_counter;
int ref;
bool dirty;
void* table;
int64_t offset;
bool dirty;
int cache_hits;
int ref;
} Qcow2CachedTable;
struct Qcow2Cache {
Qcow2CachedTable *entries;
struct Qcow2Cache *depends;
Qcow2CachedTable* entries;
struct Qcow2Cache* depends;
int size;
bool depends_on_flush;
void *table_array;
uint64_t lru_counter;
uint64_t cache_clean_lru_counter;
};
static inline void *qcow2_cache_get_table_addr(BlockDriverState *bs,
Qcow2Cache *c, int table)
{
BDRVQcow2State *s = bs->opaque;
return (uint8_t *) c->table_array + (size_t) table * s->cluster_size;
}
static inline int qcow2_cache_get_table_idx(BlockDriverState *bs,
Qcow2Cache *c, void *table)
{
BDRVQcow2State *s = bs->opaque;
ptrdiff_t table_offset = (uint8_t *) table - (uint8_t *) c->table_array;
int idx = table_offset / s->cluster_size;
assert(idx >= 0 && idx < c->size && table_offset % s->cluster_size == 0);
return idx;
}
static void qcow2_cache_table_release(BlockDriverState *bs, Qcow2Cache *c,
int i, int num_tables)
{
#if QEMU_MADV_DONTNEED != QEMU_MADV_INVALID
BDRVQcow2State *s = bs->opaque;
void *t = qcow2_cache_get_table_addr(bs, c, i);
int align = getpagesize();
size_t mem_size = (size_t) s->cluster_size * num_tables;
size_t offset = QEMU_ALIGN_UP((uintptr_t) t, align) - (uintptr_t) t;
size_t length = QEMU_ALIGN_DOWN(mem_size - offset, align);
if (length > 0) {
qemu_madvise((uint8_t *) t + offset, length, QEMU_MADV_DONTNEED);
}
#endif
}
static inline bool can_clean_entry(Qcow2Cache *c, int i)
{
Qcow2CachedTable *t = &c->entries[i];
return t->ref == 0 && !t->dirty && t->offset != 0 &&
t->lru_counter <= c->cache_clean_lru_counter;
}
void qcow2_cache_clean_unused(BlockDriverState *bs, Qcow2Cache *c)
{
int i = 0;
while (i < c->size) {
int to_clean = 0;
/* Skip the entries that we don't need to clean */
while (i < c->size && !can_clean_entry(c, i)) {
i++;
}
/* And count how many we can clean in a row */
while (i < c->size && can_clean_entry(c, i)) {
c->entries[i].offset = 0;
c->entries[i].lru_counter = 0;
i++;
to_clean++;
}
if (to_clean > 0) {
qcow2_cache_table_release(bs, c, i - to_clean, to_clean);
}
}
c->cache_clean_lru_counter = c->lru_counter;
}
Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
Qcow2Cache *c;
int i;
c = g_new0(Qcow2Cache, 1);
c = g_malloc0(sizeof(*c));
c->size = num_tables;
c->entries = g_try_new0(Qcow2CachedTable, num_tables);
c->table_array = qemu_try_blockalign(bs->file->bs,
(size_t) num_tables * s->cluster_size);
c->entries = g_malloc0(sizeof(*c->entries) * num_tables);
if (!c->entries || !c->table_array) {
qemu_vfree(c->table_array);
g_free(c->entries);
g_free(c);
c = NULL;
for (i = 0; i < c->size; i++) {
c->entries[i].table = qemu_blockalign(bs, s->cluster_size);
}
return c;
}
int qcow2_cache_destroy(BlockDriverState *bs, Qcow2Cache *c)
int qcow2_cache_destroy(BlockDriverState* bs, Qcow2Cache *c)
{
int i;
for (i = 0; i < c->size; i++) {
assert(c->entries[i].ref == 0);
qemu_vfree(c->entries[i].table);
}
qemu_vfree(c->table_array);
g_free(c->entries);
g_free(c);
@@ -166,7 +91,7 @@ static int qcow2_cache_flush_dependency(BlockDriverState *bs, Qcow2Cache *c)
static int qcow2_cache_entry_flush(BlockDriverState *bs, Qcow2Cache *c, int i)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
int ret = 0;
if (!c->entries[i].dirty || !c->entries[i].offset) {
@@ -179,7 +104,7 @@ static int qcow2_cache_entry_flush(BlockDriverState *bs, Qcow2Cache *c, int i)
if (c->depends) {
ret = qcow2_cache_flush_dependency(bs, c);
} else if (c->depends_on_flush) {
ret = bdrv_flush(bs->file->bs);
ret = bdrv_flush(bs->file);
if (ret >= 0) {
c->depends_on_flush = false;
}
@@ -210,8 +135,8 @@ static int qcow2_cache_entry_flush(BlockDriverState *bs, Qcow2Cache *c, int i)
BLKDBG_EVENT(bs->file, BLKDBG_L2_UPDATE);
}
ret = bdrv_pwrite(bs->file, c->entries[i].offset,
qcow2_cache_get_table_addr(bs, c, i), s->cluster_size);
ret = bdrv_pwrite(bs->file, c->entries[i].offset, c->entries[i].table,
s->cluster_size);
if (ret < 0) {
return ret;
}
@@ -221,9 +146,9 @@ static int qcow2_cache_entry_flush(BlockDriverState *bs, Qcow2Cache *c, int i)
return 0;
}
int qcow2_cache_write(BlockDriverState *bs, Qcow2Cache *c)
int qcow2_cache_flush(BlockDriverState *bs, Qcow2Cache *c)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
int result = 0;
int ret;
int i;
@@ -237,15 +162,8 @@ int qcow2_cache_write(BlockDriverState *bs, Qcow2Cache *c)
}
}
return result;
}
int qcow2_cache_flush(BlockDriverState *bs, Qcow2Cache *c)
{
int result = qcow2_cache_write(bs, c);
if (result == 0) {
int ret = bdrv_flush(bs->file->bs);
ret = bdrv_flush(bs->file);
if (ret < 0) {
result = ret;
}
@@ -294,55 +212,66 @@ int qcow2_cache_empty(BlockDriverState *bs, Qcow2Cache *c)
for (i = 0; i < c->size; i++) {
assert(c->entries[i].ref == 0);
c->entries[i].offset = 0;
c->entries[i].lru_counter = 0;
c->entries[i].cache_hits = 0;
}
qcow2_cache_table_release(bs, c, 0, c->size);
c->lru_counter = 0;
return 0;
}
static int qcow2_cache_find_entry_to_replace(Qcow2Cache *c)
{
int i;
int min_count = INT_MAX;
int min_index = -1;
for (i = 0; i < c->size; i++) {
if (c->entries[i].ref) {
continue;
}
if (c->entries[i].cache_hits < min_count) {
min_index = i;
min_count = c->entries[i].cache_hits;
}
/* Give newer hits priority */
/* TODO Check how to optimize the replacement strategy */
c->entries[i].cache_hits /= 2;
}
if (min_index == -1) {
/* This can't happen in current synchronous code, but leave the check
* here as a reminder for whoever starts using AIO with the cache */
abort();
}
return min_index;
}
static int qcow2_cache_do_get(BlockDriverState *bs, Qcow2Cache *c,
uint64_t offset, void **table, bool read_from_disk)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
int i;
int ret;
int lookup_index;
uint64_t min_lru_counter = UINT64_MAX;
int min_lru_index = -1;
trace_qcow2_cache_get(qemu_coroutine_self(), c == s->l2_table_cache,
offset, read_from_disk);
/* Check if the table is already cached */
i = lookup_index = (offset / s->cluster_size * 4) % c->size;
do {
const Qcow2CachedTable *t = &c->entries[i];
if (t->offset == offset) {
for (i = 0; i < c->size; i++) {
if (c->entries[i].offset == offset) {
goto found;
}
if (t->ref == 0 && t->lru_counter < min_lru_counter) {
min_lru_counter = t->lru_counter;
min_lru_index = i;
}
if (++i == c->size) {
i = 0;
}
} while (i != lookup_index);
if (min_lru_index == -1) {
/* This can't happen in current synchronous code, but leave the check
* here as a reminder for whoever starts using AIO with the cache */
abort();
}
/* Cache miss: write a table back and replace it */
i = min_lru_index;
/* If not, write a table back and replace it */
i = qcow2_cache_find_entry_to_replace(c);
trace_qcow2_cache_get_replace_entry(qemu_coroutine_self(),
c == s->l2_table_cache, i);
if (i < 0) {
return i;
}
ret = qcow2_cache_entry_flush(bs, c, i);
if (ret < 0) {
@@ -357,20 +286,22 @@ static int qcow2_cache_do_get(BlockDriverState *bs, Qcow2Cache *c,
BLKDBG_EVENT(bs->file, BLKDBG_L2_LOAD);
}
ret = bdrv_pread(bs->file, offset,
qcow2_cache_get_table_addr(bs, c, i),
s->cluster_size);
ret = bdrv_pread(bs->file, offset, c->entries[i].table, s->cluster_size);
if (ret < 0) {
return ret;
}
}
/* Give the table some hits for the start so that it won't be replaced
* immediately. The number 32 is completely arbitrary. */
c->entries[i].cache_hits = 32;
c->entries[i].offset = offset;
/* And return the right table */
found:
c->entries[i].cache_hits++;
c->entries[i].ref++;
*table = qcow2_cache_get_table_addr(bs, c, i);
*table = c->entries[i].table;
trace_qcow2_cache_get_done(qemu_coroutine_self(),
c == s->l2_table_cache, i);
@@ -390,24 +321,36 @@ int qcow2_cache_get_empty(BlockDriverState *bs, Qcow2Cache *c, uint64_t offset,
return qcow2_cache_do_get(bs, c, offset, table, false);
}
void qcow2_cache_put(BlockDriverState *bs, Qcow2Cache *c, void **table)
int qcow2_cache_put(BlockDriverState *bs, Qcow2Cache *c, void **table)
{
int i = qcow2_cache_get_table_idx(bs, c, *table);
int i;
for (i = 0; i < c->size; i++) {
if (c->entries[i].table == *table) {
goto found;
}
}
return -ENOENT;
found:
c->entries[i].ref--;
*table = NULL;
if (c->entries[i].ref == 0) {
c->entries[i].lru_counter = ++c->lru_counter;
}
assert(c->entries[i].ref >= 0);
return 0;
}
void qcow2_cache_entry_mark_dirty(BlockDriverState *bs, Qcow2Cache *c,
void *table)
void qcow2_cache_entry_mark_dirty(Qcow2Cache *c, void *table)
{
int i = qcow2_cache_get_table_idx(bs, c, table);
assert(c->entries[i].offset != 0);
int i;
for (i = 0; i < c->size; i++) {
if (c->entries[i].table == table) {
goto found;
}
}
abort();
found:
c->entries[i].dirty = true;
}
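Both halves of this cache diff must pick a victim entry on a miss: the old code keeps per-entry cache_hits counters that are periodically halved to give newer hits priority, while the new code keeps a global lru_counter, probes from a position hashed from the table offset, and evicts the unreferenced entry with the smallest LRU stamp. A standalone sketch of the LRU victim selection, using simplified types rather than the QEMU structures:

#include <stdint.h>

struct entry { uint64_t offset; uint64_t lru_counter; int ref; };

/* Among unreferenced entries, pick the one least recently put back. */
static int pick_victim(const struct entry *e, int size)
{
    uint64_t min_lru = UINT64_MAX;
    int min_idx = -1;
    for (int i = 0; i < size; i++) {
        if (e[i].ref == 0 && e[i].lru_counter < min_lru) {
            min_lru = e[i].lru_counter;
            min_idx = i;
        }
    }
    return min_idx;    /* -1 means every entry is still referenced */
}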

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -22,17 +22,13 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "block/block_int.h"
#include "block/qcow2.h"
#include "qemu/bswap.h"
#include "qemu/error-report.h"
#include "qemu/cutils.h"
void qcow2_free_snapshots(BlockDriverState *bs)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
int i;
for(i = 0; i < s->nb_snapshots; i++) {
@@ -46,7 +42,7 @@ void qcow2_free_snapshots(BlockDriverState *bs)
int qcow2_read_snapshots(BlockDriverState *bs)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
QCowSnapshotHeader h;
QCowSnapshotExtraData extra;
QCowSnapshot *sn;
@@ -62,7 +58,7 @@ int qcow2_read_snapshots(BlockDriverState *bs)
}
offset = s->snapshots_offset;
s->snapshots = g_new0(QCowSnapshot, s->nb_snapshots);
s->snapshots = g_malloc0(s->nb_snapshots * sizeof(QCowSnapshot));
for(i = 0; i < s->nb_snapshots; i++) {
/* Read statically sized part of the snapshot header */
@@ -139,7 +135,7 @@ fail:
/* add at the end of the file a new list of snapshots */
static int qcow2_write_snapshots(BlockDriverState *bs)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
QCowSnapshot *sn;
QCowSnapshotHeader h;
QCowSnapshotExtraData extra;
@@ -281,7 +277,7 @@ fail:
static void find_new_snapshot_id(BlockDriverState *bs,
char *id_str, int id_str_size)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
QCowSnapshot *sn;
int i;
unsigned long id, id_max = 0;
@@ -299,7 +295,7 @@ static int find_snapshot_by_id_and_name(BlockDriverState *bs,
const char *id,
const char *name)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
int i;
if (id && name) {
@@ -341,7 +337,7 @@ static int find_snapshot_by_id_or_name(BlockDriverState *bs,
/* if no id is provided, a new one is constructed */
int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
QCowSnapshot *new_snapshot_list = NULL;
QCowSnapshot *old_snapshot_list = NULL;
QCowSnapshot sn1, *sn = &sn1;
@@ -355,8 +351,10 @@ int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
memset(sn, 0, sizeof(*sn));
/* Generate an ID */
find_new_snapshot_id(bs, sn_info->id_str, sizeof(sn_info->id_str));
/* Generate an ID if it wasn't passed */
if (sn_info->id_str[0] == '\0') {
find_new_snapshot_id(bs, sn_info->id_str, sizeof(sn_info->id_str));
}
/* Check that the ID is unique */
if (find_snapshot_by_id_and_name(bs, sn_info->id_str, NULL) >= 0) {
@@ -383,12 +381,7 @@ int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
sn->l1_table_offset = l1_table_offset;
sn->l1_size = s->l1_size;
l1_table = g_try_new(uint64_t, s->l1_size);
if (s->l1_size && l1_table == NULL) {
ret = -ENOMEM;
goto fail;
}
l1_table = g_malloc(s->l1_size * sizeof(uint64_t));
for(i = 0; i < s->l1_size; i++) {
l1_table[i] = cpu_to_be64(s->l1_table[i]);
}
@@ -419,7 +412,7 @@ int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
}
/* Append the new snapshot to the snapshot list */
new_snapshot_list = g_new(QCowSnapshot, s->nb_snapshots + 1);
new_snapshot_list = g_malloc((s->nb_snapshots + 1) * sizeof(QCowSnapshot));
if (s->snapshots) {
memcpy(new_snapshot_list, s->snapshots,
s->nb_snapshots * sizeof(QCowSnapshot));
@@ -443,7 +436,7 @@ int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
qcow2_discard_clusters(bs, qcow2_vm_state_offset(s),
align_offset(sn->vm_state_size, s->cluster_size)
>> BDRV_SECTOR_BITS,
QCOW2_DISCARD_NEVER, false);
QCOW2_DISCARD_NEVER);
#ifdef DEBUG_ALLOC
{
@@ -464,7 +457,7 @@ fail:
/* copy the snapshot 'snapshot_name' into the current disk image */
int qcow2_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
QCowSnapshot *sn;
int i, snapshot_index;
int cur_l1_bytes, sn_l1_bytes;
@@ -506,14 +499,9 @@ int qcow2_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
* Decrease the refcount referenced by the old one only when the L1
* table is overwritten.
*/
sn_l1_table = g_try_malloc0(cur_l1_bytes);
if (cur_l1_bytes && sn_l1_table == NULL) {
ret = -ENOMEM;
goto fail;
}
sn_l1_table = g_malloc0(cur_l1_bytes);
ret = bdrv_pread(bs->file, sn->l1_table_offset,
sn_l1_table, sn_l1_bytes);
ret = bdrv_pread(bs->file, sn->l1_table_offset, sn_l1_table, sn_l1_bytes);
if (ret < 0) {
goto fail;
}
@@ -591,7 +579,7 @@ int qcow2_snapshot_delete(BlockDriverState *bs,
const char *name,
Error **errp)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
QCowSnapshot sn;
int snapshot_index, ret;
@@ -654,7 +642,7 @@ int qcow2_snapshot_delete(BlockDriverState *bs,
int qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
QEMUSnapshotInfo *sn_tab, *sn_info;
QCowSnapshot *sn;
int i;
@@ -664,7 +652,7 @@ int qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab)
return s->nb_snapshots;
}
sn_tab = g_new0(QEMUSnapshotInfo, s->nb_snapshots);
sn_tab = g_malloc0(s->nb_snapshots * sizeof(QEMUSnapshotInfo));
for(i = 0; i < s->nb_snapshots; i++) {
sn_info = sn_tab + i;
sn = s->snapshots + i;
@@ -687,7 +675,7 @@ int qcow2_snapshot_load_tmp(BlockDriverState *bs,
Error **errp)
{
int i, snapshot_index;
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
QCowSnapshot *sn;
uint64_t *new_l1_table;
int new_l1_bytes;
@@ -705,27 +693,22 @@ int qcow2_snapshot_load_tmp(BlockDriverState *bs,
sn = &s->snapshots[snapshot_index];
/* Allocate and read in the snapshot's L1 table */
if (sn->l1_size > QCOW_MAX_L1_SIZE / sizeof(uint64_t)) {
if (sn->l1_size > QCOW_MAX_L1_SIZE) {
error_setg(errp, "Snapshot L1 table too large");
return -EFBIG;
}
new_l1_bytes = sn->l1_size * sizeof(uint64_t);
new_l1_table = qemu_try_blockalign(bs->file->bs,
align_offset(new_l1_bytes, 512));
if (new_l1_table == NULL) {
return -ENOMEM;
}
new_l1_table = g_malloc0(align_offset(new_l1_bytes, 512));
ret = bdrv_pread(bs->file, sn->l1_table_offset,
new_l1_table, new_l1_bytes);
ret = bdrv_pread(bs->file, sn->l1_table_offset, new_l1_table, new_l1_bytes);
if (ret < 0) {
error_setg(errp, "Failed to read l1 table for snapshot");
qemu_vfree(new_l1_table);
g_free(new_l1_table);
return ret;
}
/* Switch the L1 table */
qemu_vfree(s->l1_table);
g_free(s->l1_table);
s->l1_size = sn->l1_size;
s->l1_table_offset = sn->l1_table_offset;
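Note the bound check in qcow2_snapshot_load_tmp() above: the new code compares sn->l1_size against QCOW_MAX_L1_SIZE / sizeof(uint64_t) before computing new_l1_bytes = sn->l1_size * sizeof(uint64_t), so the multiplication cannot exceed the intended byte limit. A minimal sketch of that pattern; the constant value here is assumed for illustration only:

#include <stdbool.h>
#include <stdint.h>

#define MAX_L1_BYTES (32 * 1024 * 1024)   /* assumed limit, for illustration */

/* Divide the limit instead of multiplying the input: l1_size * 8 is then
 * guaranteed to stay at or below MAX_L1_BYTES, with no risk of wrapping. */
static bool l1_size_is_valid(uint64_t l1_size)
{
    return l1_size <= MAX_L1_BYTES / sizeof(uint64_t);
}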

File diff suppressed because it is too large


@@ -25,8 +25,8 @@
#ifndef BLOCK_QCOW2_H
#define BLOCK_QCOW2_H
#include "crypto/cipher.h"
#include "qemu/coroutine.h"
#include "qemu/aes.h"
#include "block/coroutine.h"
//#define DEBUG_ALLOC
//#define DEBUG_ALLOC2
@@ -59,22 +59,15 @@
/* The cluster reads as all zeros */
#define QCOW_OFLAG_ZERO (1ULL << 0)
#define REFCOUNT_SHIFT 1 /* refcount size is 2 bytes */
#define MIN_CLUSTER_BITS 9
#define MAX_CLUSTER_BITS 21
/* Must be at least 2 to cover COW */
#define MIN_L2_CACHE_SIZE 2 /* clusters */
#define L2_CACHE_SIZE 16
/* Must be at least 4 to cover all cases of refcount table growth */
#define MIN_REFCOUNT_CACHE_SIZE 4 /* clusters */
/* Whichever is more */
#define DEFAULT_L2_CACHE_CLUSTERS 8 /* clusters */
#define DEFAULT_L2_CACHE_BYTE_SIZE 1048576 /* bytes */
/* The refblock cache needs only a fourth of the L2 cache size to cover as many
* clusters */
#define DEFAULT_L2_REFCOUNT_SIZE_RATIO 4
#define REFCOUNT_CACHE_SIZE 4
#define DEFAULT_CLUSTER_SIZE 65536
@@ -84,7 +77,6 @@
#define QCOW2_OPT_DISCARD_SNAPSHOT "pass-discard-snapshot"
#define QCOW2_OPT_DISCARD_OTHER "pass-discard-other"
#define QCOW2_OPT_OVERLAP "overlap-check"
#define QCOW2_OPT_OVERLAP_TEMPLATE "overlap-check.template"
#define QCOW2_OPT_OVERLAP_MAIN_HEADER "overlap-check.main-header"
#define QCOW2_OPT_OVERLAP_ACTIVE_L1 "overlap-check.active-l1"
#define QCOW2_OPT_OVERLAP_ACTIVE_L2 "overlap-check.active-l2"
@@ -93,10 +85,6 @@
#define QCOW2_OPT_OVERLAP_SNAPSHOT_TABLE "overlap-check.snapshot-table"
#define QCOW2_OPT_OVERLAP_INACTIVE_L1 "overlap-check.inactive-l1"
#define QCOW2_OPT_OVERLAP_INACTIVE_L2 "overlap-check.inactive-l2"
#define QCOW2_OPT_CACHE_SIZE "cache-size"
#define QCOW2_OPT_L2_CACHE_SIZE "l2-cache-size"
#define QCOW2_OPT_REFCOUNT_CACHE_SIZE "refcount-cache-size"
#define QCOW2_OPT_CACHE_CLEAN_INTERVAL "cache-clean-interval"
typedef struct QCowHeader {
uint32_t magic;
@@ -217,12 +205,7 @@ typedef struct Qcow2DiscardRegion {
QTAILQ_ENTRY(Qcow2DiscardRegion) next;
} Qcow2DiscardRegion;
typedef uint64_t Qcow2GetRefcountFunc(const void *refcount_array,
uint64_t index);
typedef void Qcow2SetRefcountFunc(void *refcount_array,
uint64_t index, uint64_t value);
typedef struct BDRVQcow2State {
typedef struct BDRVQcowState {
int cluster_bits;
int cluster_size;
int cluster_sectors;
@@ -230,8 +213,6 @@ typedef struct BDRVQcow2State {
int l2_size;
int l1_size;
int l1_vm_state_index;
int refcount_block_bits;
int refcount_block_size;
int csize_shift;
int csize_mask;
uint64_t cluster_offset_mask;
@@ -240,8 +221,6 @@ typedef struct BDRVQcow2State {
Qcow2Cache* l2_table_cache;
Qcow2Cache* refcount_block_cache;
QEMUTimer *cache_clean_timer;
unsigned cache_clean_interval;
uint8_t *cluster_cache;
uint8_t *cluster_data;
@@ -256,8 +235,10 @@ typedef struct BDRVQcow2State {
CoMutex lock;
QCryptoCipher *cipher; /* current cipher, NULL if no key yet */
uint32_t crypt_method; /* current crypt method, 0 if no key yet */
uint32_t crypt_method_header;
AES_KEY aes_encrypt_key;
AES_KEY aes_decrypt_key;
uint64_t snapshots_offset;
int snapshots_size;
unsigned int nb_snapshots;
@@ -267,16 +248,10 @@ typedef struct BDRVQcow2State {
int qcow_version;
bool use_lazy_refcounts;
int refcount_order;
int refcount_bits;
uint64_t refcount_max;
Qcow2GetRefcountFunc *get_refcount;
Qcow2SetRefcountFunc *set_refcount;
bool discard_passthrough[QCOW2_DISCARD_MAX];
int overlap_check; /* bitmask of Qcow2MetadataOverlap values */
bool signaled_corruption;
uint64_t incompatible_features;
uint64_t compatible_features;
@@ -287,13 +262,20 @@ typedef struct BDRVQcow2State {
QLIST_HEAD(, Qcow2UnknownHeaderExtension) unknown_header_ext;
QTAILQ_HEAD (, Qcow2DiscardRegion) discards;
bool cache_discards;
} BDRVQcowState;
/* Backing file path and format as stored in the image (this is not the
* effective path/format, which may be the result of a runtime option
* override) */
char *image_backing_file;
char *image_backing_format;
} BDRVQcow2State;
/* XXX: use std qcow open function ? */
typedef struct QCowCreateState {
int cluster_size;
int cluster_bits;
uint16_t *refcount_block;
uint64_t *refcount_table;
int64_t l1_table_offset;
int64_t refcount_table_offset;
int64_t refcount_block_offset;
} QCowCreateState;
struct QCowAIOCB;
typedef struct Qcow2COWRegion {
/**
@@ -302,8 +284,8 @@ typedef struct Qcow2COWRegion {
*/
uint64_t offset;
/** Number of bytes to copy */
int nb_bytes;
/** Number of sectors to copy */
int nb_sectors;
} Qcow2COWRegion;
/**
@@ -318,6 +300,12 @@ typedef struct QCowL2Meta
/** Host offset of the first newly allocated cluster */
uint64_t alloc_offset;
/**
* Number of sectors from the start of the first allocated cluster to
* the end of the (possibly shortened) request
*/
int nb_available;
/** Number of newly allocated clusters */
int nb_clusters;
@@ -397,28 +385,28 @@ typedef enum QCow2MetadataOverlap {
#define REFT_OFFSET_MASK 0xfffffffffffffe00ULL
static inline int64_t start_of_cluster(BDRVQcow2State *s, int64_t offset)
static inline int64_t start_of_cluster(BDRVQcowState *s, int64_t offset)
{
return offset & ~(s->cluster_size - 1);
}
static inline int64_t offset_into_cluster(BDRVQcow2State *s, int64_t offset)
static inline int64_t offset_into_cluster(BDRVQcowState *s, int64_t offset)
{
return offset & (s->cluster_size - 1);
}
static inline uint64_t size_to_clusters(BDRVQcow2State *s, uint64_t size)
static inline int size_to_clusters(BDRVQcowState *s, int64_t size)
{
return (size + (s->cluster_size - 1)) >> s->cluster_bits;
}
static inline int64_t size_to_l1(BDRVQcow2State *s, int64_t size)
static inline int64_t size_to_l1(BDRVQcowState *s, int64_t size)
{
int shift = s->cluster_bits + s->l2_bits;
return (size + (1ULL << shift) - 1) >> shift;
}
static inline int offset_to_l2_index(BDRVQcow2State *s, int64_t offset)
static inline int offset_to_l2_index(BDRVQcowState *s, int64_t offset)
{
return (offset >> s->cluster_bits) & (s->l2_size - 1);
}
@@ -429,12 +417,12 @@ static inline int64_t align_offset(int64_t offset, int n)
return offset;
}
static inline int64_t qcow2_vm_state_offset(BDRVQcow2State *s)
static inline int64_t qcow2_vm_state_offset(BDRVQcowState *s)
{
return (int64_t)s->l1_vm_state_index << (s->cluster_bits + s->l2_bits);
}
static inline uint64_t qcow2_max_refcount_clusters(BDRVQcow2State *s)
static inline uint64_t qcow2_max_refcount_clusters(BDRVQcowState *s)
{
return QCOW_MAX_REFTABLE_SIZE >> s->cluster_bits;
}
@@ -453,7 +441,7 @@ static inline int qcow2_get_cluster_type(uint64_t l2_entry)
}
/* Check whether refcounts are eager or lazy */
static inline bool qcow2_need_accurate_refcounts(BDRVQcow2State *s)
static inline bool qcow2_need_accurate_refcounts(BDRVQcowState *s)
{
return !(s->incompatible_features & QCOW2_INCOMPAT_DIRTY);
}
@@ -465,12 +453,8 @@ static inline uint64_t l2meta_cow_start(QCowL2Meta *m)
static inline uint64_t l2meta_cow_end(QCowL2Meta *m)
{
return m->offset + m->cow_end.offset + m->cow_end.nb_bytes;
}
static inline uint64_t refcount_diff(uint64_t r1, uint64_t r2)
{
return r1 > r2 ? r1 - r2 : r2 - r1;
return m->offset + m->cow_end.offset
+ (m->cow_end.nb_sectors << BDRV_SECTOR_BITS);
}
// FIXME Need qcow2_ prefix to global functions
@@ -484,24 +468,16 @@ int qcow2_mark_corrupt(BlockDriverState *bs);
int qcow2_mark_consistent(BlockDriverState *bs);
int qcow2_update_header(BlockDriverState *bs);
void qcow2_signal_corruption(BlockDriverState *bs, bool fatal, int64_t offset,
int64_t size, const char *message_format, ...)
GCC_FMT_ATTR(5, 6);
/* qcow2-refcount.c functions */
int qcow2_refcount_init(BlockDriverState *bs);
void qcow2_refcount_close(BlockDriverState *bs);
int qcow2_get_refcount(BlockDriverState *bs, int64_t cluster_index,
uint64_t *refcount);
int qcow2_update_cluster_refcount(BlockDriverState *bs, int64_t cluster_index,
uint64_t addend, bool decrease,
enum qcow2_discard_type type);
int addend, enum qcow2_discard_type type);
int64_t qcow2_alloc_clusters(BlockDriverState *bs, uint64_t size);
int64_t qcow2_alloc_clusters_at(BlockDriverState *bs, uint64_t offset,
int64_t nb_clusters);
int qcow2_alloc_clusters_at(BlockDriverState *bs, uint64_t offset,
int nb_clusters);
int64_t qcow2_alloc_bytes(BlockDriverState *bs, int size);
void qcow2_free_clusters(BlockDriverState *bs,
int64_t offset, int64_t size,
@@ -522,36 +498,31 @@ int qcow2_check_metadata_overlap(BlockDriverState *bs, int ign, int64_t offset,
int qcow2_pre_write_overlap_check(BlockDriverState *bs, int ign, int64_t offset,
int64_t size);
int qcow2_change_refcount_order(BlockDriverState *bs, int refcount_order,
BlockDriverAmendStatusCB *status_cb,
void *cb_opaque, Error **errp);
/* qcow2-cluster.c functions */
int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
bool exact_size);
int qcow2_write_l1_entry(BlockDriverState *bs, int l1_index);
void qcow2_l2_cache_reset(BlockDriverState *bs);
int qcow2_decompress_cluster(BlockDriverState *bs, uint64_t cluster_offset);
int qcow2_encrypt_sectors(BDRVQcow2State *s, int64_t sector_num,
uint8_t *out_buf, const uint8_t *in_buf,
int nb_sectors, bool enc, Error **errp);
void qcow2_encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
uint8_t *out_buf, const uint8_t *in_buf,
int nb_sectors, int enc,
const AES_KEY *key);
int qcow2_get_cluster_offset(BlockDriverState *bs, uint64_t offset,
unsigned int *bytes, uint64_t *cluster_offset);
int *num, uint64_t *cluster_offset);
int qcow2_alloc_cluster_offset(BlockDriverState *bs, uint64_t offset,
unsigned int *bytes, uint64_t *host_offset,
QCowL2Meta **m);
int *num, uint64_t *host_offset, QCowL2Meta **m);
uint64_t qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs,
uint64_t offset,
int compressed_size);
int qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m);
int qcow2_discard_clusters(BlockDriverState *bs, uint64_t offset,
int nb_sectors, enum qcow2_discard_type type, bool full_discard);
int nb_sectors, enum qcow2_discard_type type);
int qcow2_zero_clusters(BlockDriverState *bs, uint64_t offset, int nb_sectors);
int qcow2_expand_zero_clusters(BlockDriverState *bs,
BlockDriverAmendStatusCB *status_cb,
void *cb_opaque);
int qcow2_expand_zero_clusters(BlockDriverState *bs);
/* qcow2-snapshot.c functions */
int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info);
@@ -573,21 +544,18 @@ int qcow2_read_snapshots(BlockDriverState *bs);
Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables);
int qcow2_cache_destroy(BlockDriverState* bs, Qcow2Cache *c);
void qcow2_cache_entry_mark_dirty(BlockDriverState *bs, Qcow2Cache *c,
void *table);
void qcow2_cache_entry_mark_dirty(Qcow2Cache *c, void *table);
int qcow2_cache_flush(BlockDriverState *bs, Qcow2Cache *c);
int qcow2_cache_write(BlockDriverState *bs, Qcow2Cache *c);
int qcow2_cache_set_dependency(BlockDriverState *bs, Qcow2Cache *c,
Qcow2Cache *dependency);
void qcow2_cache_depends_on_flush(Qcow2Cache *c);
void qcow2_cache_clean_unused(BlockDriverState *bs, Qcow2Cache *c);
int qcow2_cache_empty(BlockDriverState *bs, Qcow2Cache *c);
int qcow2_cache_get(BlockDriverState *bs, Qcow2Cache *c, uint64_t offset,
void **table);
int qcow2_cache_get_empty(BlockDriverState *bs, Qcow2Cache *c, uint64_t offset,
void **table);
void qcow2_cache_put(BlockDriverState *bs, Qcow2Cache *c, void **table);
int qcow2_cache_put(BlockDriverState *bs, Qcow2Cache *c, void **table);
#endif
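The inline helpers above split a guest offset into an L1 index, offset >> (cluster_bits + l2_bits), and an L2 index, (offset >> cluster_bits) & (l2_size - 1). A worked example with assumed geometry of 64 KiB clusters (cluster_bits = 16) and 8192-entry L2 tables (l2_bits = 13), so one L2 table covers 512 MiB:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const int cluster_bits = 16, l2_bits = 13;      /* assumed geometry */
    const int64_t l2_size = 1 << l2_bits;
    int64_t offset = 0x30001234;

    int64_t l1_index = offset >> (cluster_bits + l2_bits);         /* 1 */
    int64_t l2_index = (offset >> cluster_bits) & (l2_size - 1);   /* 0x3000 & 0x1FFF = 0x1000 */
    printf("l1=%lld l2=%lld\n", (long long)l1_index, (long long)l2_index);
    return 0;
}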


@@ -11,7 +11,6 @@
*
*/
#include "qemu/osdep.h"
#include "qed.h"
typedef struct {
@@ -228,13 +227,12 @@ int qed_check(BDRVQEDState *s, BdrvCheckResult *result, bool fix)
};
int ret;
check.used_clusters = g_try_new0(uint32_t, (check.nclusters + 31) / 32);
if (check.nclusters && check.used_clusters == NULL) {
return -ENOMEM;
}
check.used_clusters = g_malloc0(((check.nclusters + 31) / 32) *
sizeof(check.used_clusters[0]));
check.result->bfi.total_clusters =
DIV_ROUND_UP(s->header.image_size, s->header.cluster_size);
(s->header.image_size + s->header.cluster_size - 1) /
s->header.cluster_size;
ret = qed_check_l1_table(&check, s->l1_table);
if (ret == 0) {
/* Only check for leaks if entire image was scanned successfully */
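check.used_clusters above is a bitmap with one bit per cluster, packed 32 to a word, which is why it is allocated as (nclusters + 31) / 32 elements. A small sketch of how such a bitmap is addressed; the helper names are illustrative, not the driver's:

#include <stdint.h>

static void set_cluster_used(uint32_t *bitmap, uint64_t cluster)
{
    bitmap[cluster / 32] |= 1u << (cluster % 32);
}

static int cluster_is_used(const uint32_t *bitmap, uint64_t cluster)
{
    return (bitmap[cluster / 32] >> (cluster % 32)) & 1;
}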


@@ -12,7 +12,6 @@
*
*/
#include "qemu/osdep.h"
#include "qed.h"
/**


@@ -11,10 +11,9 @@
*
*/
#include "qemu/osdep.h"
#include "qed.h"
void *gencb_alloc(size_t len, BlockCompletionFunc *cb, void *opaque)
void *gencb_alloc(size_t len, BlockDriverCompletionFunc *cb, void *opaque)
{
GenericCB *gencb = g_malloc(len);
gencb->cb = cb;
@@ -25,7 +24,7 @@ void *gencb_alloc(size_t len, BlockCompletionFunc *cb, void *opaque)
void gencb_complete(void *opaque, int ret)
{
GenericCB *gencb = opaque;
BlockCompletionFunc *cb = gencb->cb;
BlockDriverCompletionFunc *cb = gencb->cb;
void *user_opaque = gencb->opaque;
g_free(gencb);


@@ -50,7 +50,6 @@
* table will be deleted in favor of the existing cache entry.
*/
#include "qemu/osdep.h"
#include "trace.h"
#include "qed.h"


@@ -12,11 +12,9 @@
*
*/
#include "qemu/osdep.h"
#include "trace.h"
#include "qemu/sockets.h" /* for EINPROGRESS on Windows */
#include "qed.h"
#include "qemu/bswap.h"
typedef struct {
GenericCB gencb;
@@ -51,7 +49,7 @@ out:
}
static void qed_read_table(BDRVQEDState *s, uint64_t offset, QEDTable *table,
BlockCompletionFunc *cb, void *opaque)
BlockDriverCompletionFunc *cb, void *opaque)
{
QEDReadTableCB *read_table_cb = gencb_alloc(sizeof(*read_table_cb),
cb, opaque);
@@ -121,7 +119,7 @@ out:
*/
static void qed_write_table(BDRVQEDState *s, uint64_t offset, QEDTable *table,
unsigned int index, unsigned int n, bool flush,
BlockCompletionFunc *cb, void *opaque)
BlockDriverCompletionFunc *cb, void *opaque)
{
QEDWriteTableCB *write_table_cb;
unsigned int sector_mask = BDRV_SECTOR_SIZE / sizeof(uint64_t) - 1;
@@ -175,14 +173,14 @@ int qed_read_l1_table_sync(BDRVQEDState *s)
qed_read_table(s, s->header.l1_table_offset,
s->l1_table, qed_sync_cb, &ret);
while (ret == -EINPROGRESS) {
aio_poll(bdrv_get_aio_context(s->bs), true);
qemu_aio_wait();
}
return ret;
}
void qed_write_l1_table(BDRVQEDState *s, unsigned int index, unsigned int n,
BlockCompletionFunc *cb, void *opaque)
BlockDriverCompletionFunc *cb, void *opaque)
{
BLKDBG_EVENT(s->bs->file, BLKDBG_L1_UPDATE);
qed_write_table(s, s->header.l1_table_offset,
@@ -196,7 +194,7 @@ int qed_write_l1_table_sync(BDRVQEDState *s, unsigned int index,
qed_write_l1_table(s, index, n, qed_sync_cb, &ret);
while (ret == -EINPROGRESS) {
aio_poll(bdrv_get_aio_context(s->bs), true);
qemu_aio_wait();
}
return ret;
@@ -237,7 +235,7 @@ static void qed_read_l2_table_cb(void *opaque, int ret)
}
void qed_read_l2_table(BDRVQEDState *s, QEDRequest *request, uint64_t offset,
BlockCompletionFunc *cb, void *opaque)
BlockDriverCompletionFunc *cb, void *opaque)
{
QEDReadL2TableCB *read_l2_table_cb;
@@ -269,7 +267,7 @@ int qed_read_l2_table_sync(BDRVQEDState *s, QEDRequest *request, uint64_t offset
qed_read_l2_table(s, request, offset, qed_sync_cb, &ret);
while (ret == -EINPROGRESS) {
aio_poll(bdrv_get_aio_context(s->bs), true);
qemu_aio_wait();
}
return ret;
@@ -277,7 +275,7 @@ int qed_read_l2_table_sync(BDRVQEDState *s, QEDRequest *request, uint64_t offset
void qed_write_l2_table(BDRVQEDState *s, QEDRequest *request,
unsigned int index, unsigned int n, bool flush,
BlockCompletionFunc *cb, void *opaque)
BlockDriverCompletionFunc *cb, void *opaque)
{
BLKDBG_EVENT(s->bs->file, BLKDBG_L2_UPDATE);
qed_write_table(s, request->l2_table->offset,
@@ -291,7 +289,7 @@ int qed_write_l2_table_sync(BDRVQEDState *s, QEDRequest *request,
qed_write_l2_table(s, request, index, n, flush, qed_sync_cb, &ret);
while (ret == -EINPROGRESS) {
aio_poll(bdrv_get_aio_context(s->bs), true);
qemu_aio_wait();
}
return ret;
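All of the *_sync wrappers in this file follow the same pattern: start the asynchronous table operation with a callback that overwrites a sentinel result, then poll the event loop (aio_poll() in the new code, qemu_aio_wait() in the old) until the sentinel is replaced. A self-contained sketch of the pattern with illustrative names; here the "asynchronous" operation completes immediately:

#include <stdio.h>

enum { IN_PROGRESS = -1000 };           /* stands in for -EINPROGRESS */

static void sync_cb(void *opaque, int ret)
{
    *(int *)opaque = ret;               /* completion replaces the sentinel */
}

static void fake_async_op(void (*cb)(void *, int), void *opaque)
{
    cb(opaque, 0);                      /* real code completes this from the AIO context */
}

int main(void)
{
    int ret = IN_PROGRESS;
    fake_async_op(sync_cb, &ret);
    while (ret == IN_PROGRESS) {
        /* real code: aio_poll(ctx, true) / qemu_aio_wait() */
    }
    printf("ret = %d\n", ret);
    return 0;
}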


@@ -12,18 +12,27 @@
*
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu/timer.h"
#include "qemu/bswap.h"
#include "trace.h"
#include "qed.h"
#include "qapi/qmp/qerror.h"
#include "migration/migration.h"
#include "sysemu/block-backend.h"
static void qed_aio_cancel(BlockDriverAIOCB *blockacb)
{
QEDAIOCB *acb = (QEDAIOCB *)blockacb;
bool finished = false;
/* Wait for the request to finish */
acb->finished = &finished;
while (!finished) {
qemu_aio_wait();
}
}
static const AIOCBInfo qed_aiocb_info = {
.aiocb_size = sizeof(QEDAIOCB),
.cancel = qed_aio_cancel,
};
static int bdrv_qed_probe(const uint8_t *buf, int buf_size,
@@ -134,7 +143,7 @@ static void qed_write_header_read_cb(void *opaque, int ret)
* This function only updates known header fields in-place and does not affect
* extra data after the QED header.
*/
static void qed_write_header(BDRVQEDState *s, BlockCompletionFunc cb,
static void qed_write_header(BDRVQEDState *s, BlockDriverCompletionFunc cb,
void *opaque)
{
/* We must write full sectors for O_DIRECT but cannot necessarily generate
@@ -143,7 +152,8 @@ static void qed_write_header(BDRVQEDState *s, BlockCompletionFunc cb,
* them, and write back.
*/
int nsectors = DIV_ROUND_UP(sizeof(QEDHeader), BDRV_SECTOR_SIZE);
int nsectors = (sizeof(QEDHeader) + BDRV_SECTOR_SIZE - 1) /
BDRV_SECTOR_SIZE;
size_t len = nsectors * BDRV_SECTOR_SIZE;
QEDWriteHeaderCB *write_header_cb = gencb_alloc(sizeof(*write_header_cb),
cb, opaque);
@@ -218,7 +228,7 @@ static bool qed_is_image_size_valid(uint64_t image_size, uint32_t cluster_size,
*
* The string is NUL-terminated.
*/
static int qed_read_string(BdrvChild *file, uint64_t offset, size_t n,
static int qed_read_string(BlockDriverState *file, uint64_t offset, size_t n,
char *buf, size_t buflen)
{
int ret;
@@ -347,7 +357,7 @@ static void qed_start_need_check_timer(BDRVQEDState *s)
* migration.
*/
timer_mod(s->need_check_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
NANOSECONDS_PER_SECOND * QED_NEED_CHECK_TIMEOUT);
get_ticks_per_sec() * QED_NEED_CHECK_TIMEOUT);
}
/* It's okay to call this multiple times or when no timer is started */
@@ -357,25 +367,10 @@ static void qed_cancel_need_check_timer(BDRVQEDState *s)
timer_del(s->need_check_timer);
}
static void bdrv_qed_detach_aio_context(BlockDriverState *bs)
static void bdrv_qed_rebind(BlockDriverState *bs)
{
BDRVQEDState *s = bs->opaque;
qed_cancel_need_check_timer(s);
timer_free(s->need_check_timer);
}
static void bdrv_qed_attach_aio_context(BlockDriverState *bs,
AioContext *new_context)
{
BDRVQEDState *s = bs->opaque;
s->need_check_timer = aio_timer_new(new_context,
QEMU_CLOCK_VIRTUAL, SCALE_NS,
qed_need_check_timer_cb, s);
if (s->header.features & QED_F_NEED_CHECK) {
qed_start_need_check_timer(s);
}
s->bs = bs;
}
static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
@@ -401,8 +396,11 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
}
if (s->header.features & ~QED_FEATURE_MASK) {
/* image uses unsupported feature bits */
error_setg(errp, "Unsupported QED features: %" PRIx64,
s->header.features & ~QED_FEATURE_MASK);
char buf[64];
snprintf(buf, sizeof(buf), "%" PRIx64,
s->header.features & ~QED_FEATURE_MASK);
error_set(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "QED", buf);
return -ENOTSUP;
}
if (!qed_is_cluster_size_valid(s->header.cluster_size)) {
@@ -410,7 +408,7 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
}
/* Round down file size to the last cluster */
file_size = bdrv_getlength(bs->file->bs);
file_size = bdrv_getlength(bs->file);
if (file_size < 0) {
return file_size;
}
@@ -430,14 +428,9 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
s->table_nelems = (s->header.cluster_size * s->header.table_size) /
sizeof(uint64_t);
s->l2_shift = ctz32(s->header.cluster_size);
s->l2_shift = ffs(s->header.cluster_size) - 1;
s->l2_mask = s->table_nelems - 1;
s->l1_shift = s->l2_shift + ctz32(s->table_nelems);
/* Header size calculation must not overflow uint32_t */
if (s->header.header_size > UINT32_MAX / s->header.cluster_size) {
return -EINVAL;
}
s->l1_shift = s->l2_shift + ffs(s->table_nelems) - 1;
if ((s->header.features & QED_F_BACKING_FILE)) {
if ((uint64_t)s->header.backing_filename_offset +
@@ -465,7 +458,7 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
* feature is no longer valid.
*/
if ((s->header.autoclear_features & ~QED_AUTOCLEAR_FEATURE_MASK) != 0 &&
!bdrv_is_read_only(bs->file->bs) && !(flags & BDRV_O_INACTIVE)) {
!bdrv_is_read_only(bs->file) && !(flags & BDRV_O_INCOMING)) {
s->header.autoclear_features &= QED_AUTOCLEAR_FEATURE_MASK;
ret = qed_write_header_sync(s);
@@ -474,7 +467,7 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
}
/* From here on only known autoclear feature bits are valid */
bdrv_flush(bs->file->bs);
bdrv_flush(bs->file);
}
s->l1_table = qed_alloc_table(s);
@@ -492,8 +485,8 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
* potentially inconsistent images to be opened read-only. This can
* aid data recovery from an otherwise inconsistent image.
*/
if (!bdrv_is_read_only(bs->file->bs) &&
!(flags & BDRV_O_INACTIVE)) {
if (!bdrv_is_read_only(bs->file) &&
!(flags & BDRV_O_INCOMING)) {
BdrvCheckResult result = {0};
ret = qed_check(s, &result, true);
@@ -503,7 +496,8 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
}
}
bdrv_qed_attach_aio_context(bs, bdrv_get_aio_context(bs));
s->need_check_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
qed_need_check_timer_cb, s);
out:
if (ret) {
@@ -513,11 +507,13 @@ out:
return ret;
}
static void bdrv_qed_refresh_limits(BlockDriverState *bs, Error **errp)
static int bdrv_qed_refresh_limits(BlockDriverState *bs)
{
BDRVQEDState *s = bs->opaque;
bs->bl.pwrite_zeroes_alignment = s->header.cluster_size;
bs->bl.write_zeroes_alignment = s->header.cluster_size >> BDRV_SECTOR_BITS;
return 0;
}
/* We have nothing to do for QED reopen, stubs just return
@@ -532,10 +528,11 @@ static void bdrv_qed_close(BlockDriverState *bs)
{
BDRVQEDState *s = bs->opaque;
bdrv_qed_detach_aio_context(bs);
qed_cancel_need_check_timer(s);
timer_free(s->need_check_timer);
/* Ensure writes reach stable storage */
bdrv_flush(bs->file->bs);
bdrv_flush(bs->file);
/* Clean shutdown, no check required on next open */
if (s->header.features & QED_F_NEED_CHECK) {
@@ -550,7 +547,7 @@ static void bdrv_qed_close(BlockDriverState *bs)
static int qed_create(const char *filename, uint32_t cluster_size,
uint64_t image_size, uint32_t table_size,
const char *backing_file, const char *backing_fmt,
QemuOpts *opts, Error **errp)
Error **errp)
{
QEDHeader header = {
.magic = QED_MAGIC,
@@ -567,25 +564,25 @@ static int qed_create(const char *filename, uint32_t cluster_size,
size_t l1_size = header.cluster_size * header.table_size;
Error *local_err = NULL;
int ret = 0;
BlockBackend *blk;
BlockDriverState *bs;
ret = bdrv_create_file(filename, opts, &local_err);
ret = bdrv_create_file(filename, NULL, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
return ret;
}
blk = blk_new_open(filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, &local_err);
if (blk == NULL) {
bs = NULL;
ret = bdrv_open(&bs, filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_CACHE_WB | BDRV_O_PROTOCOL, NULL,
&local_err);
if (ret < 0) {
error_propagate(errp, local_err);
return -EIO;
return ret;
}
blk_set_allow_write_beyond_eof(blk, true);
/* File must start empty and grow, check truncate is supported */
ret = blk_truncate(blk, 0);
ret = bdrv_truncate(bs, 0);
if (ret < 0) {
goto out;
}
@@ -601,18 +598,18 @@ static int qed_create(const char *filename, uint32_t cluster_size,
}
qed_header_cpu_to_le(&header, &le_header);
ret = blk_pwrite(blk, 0, &le_header, sizeof(le_header), 0);
ret = bdrv_pwrite(bs, 0, &le_header, sizeof(le_header));
if (ret < 0) {
goto out;
}
ret = blk_pwrite(blk, sizeof(le_header), backing_file,
header.backing_filename_size, 0);
ret = bdrv_pwrite(bs, sizeof(le_header), backing_file,
header.backing_filename_size);
if (ret < 0) {
goto out;
}
l1_table = g_malloc0(l1_size);
ret = blk_pwrite(blk, header.l1_table_offset, l1_table, l1_size, 0);
ret = bdrv_pwrite(bs, header.l1_table_offset, l1_table, l1_size);
if (ret < 0) {
goto out;
}
@@ -620,58 +617,57 @@ static int qed_create(const char *filename, uint32_t cluster_size,
ret = 0; /* success */
out:
g_free(l1_table);
blk_unref(blk);
bdrv_unref(bs);
return ret;
}
static int bdrv_qed_create(const char *filename, QemuOpts *opts, Error **errp)
static int bdrv_qed_create(const char *filename, QEMUOptionParameter *options,
Error **errp)
{
uint64_t image_size = 0;
uint32_t cluster_size = QED_DEFAULT_CLUSTER_SIZE;
uint32_t table_size = QED_DEFAULT_TABLE_SIZE;
char *backing_file = NULL;
char *backing_fmt = NULL;
int ret;
const char *backing_file = NULL;
const char *backing_fmt = NULL;
image_size = ROUND_UP(qemu_opt_get_size_del(opts, BLOCK_OPT_SIZE, 0),
BDRV_SECTOR_SIZE);
backing_file = qemu_opt_get_del(opts, BLOCK_OPT_BACKING_FILE);
backing_fmt = qemu_opt_get_del(opts, BLOCK_OPT_BACKING_FMT);
cluster_size = qemu_opt_get_size_del(opts,
BLOCK_OPT_CLUSTER_SIZE,
QED_DEFAULT_CLUSTER_SIZE);
table_size = qemu_opt_get_size_del(opts, BLOCK_OPT_TABLE_SIZE,
QED_DEFAULT_TABLE_SIZE);
while (options && options->name) {
if (!strcmp(options->name, BLOCK_OPT_SIZE)) {
image_size = options->value.n;
} else if (!strcmp(options->name, BLOCK_OPT_BACKING_FILE)) {
backing_file = options->value.s;
} else if (!strcmp(options->name, BLOCK_OPT_BACKING_FMT)) {
backing_fmt = options->value.s;
} else if (!strcmp(options->name, BLOCK_OPT_CLUSTER_SIZE)) {
if (options->value.n) {
cluster_size = options->value.n;
}
} else if (!strcmp(options->name, BLOCK_OPT_TABLE_SIZE)) {
if (options->value.n) {
table_size = options->value.n;
}
}
options++;
}
if (!qed_is_cluster_size_valid(cluster_size)) {
error_setg(errp, "QED cluster size must be within range [%u, %u] "
"and power of 2",
QED_MIN_CLUSTER_SIZE, QED_MAX_CLUSTER_SIZE);
ret = -EINVAL;
goto finish;
fprintf(stderr, "QED cluster size must be within range [%u, %u] and power of 2\n",
QED_MIN_CLUSTER_SIZE, QED_MAX_CLUSTER_SIZE);
return -EINVAL;
}
if (!qed_is_table_size_valid(table_size)) {
error_setg(errp, "QED table size must be within range [%u, %u] "
"and power of 2",
QED_MIN_TABLE_SIZE, QED_MAX_TABLE_SIZE);
ret = -EINVAL;
goto finish;
fprintf(stderr, "QED table size must be within range [%u, %u] and power of 2\n",
QED_MIN_TABLE_SIZE, QED_MAX_TABLE_SIZE);
return -EINVAL;
}
if (!qed_is_image_size_valid(image_size, cluster_size, table_size)) {
error_setg(errp, "QED image size must be a non-zero multiple of "
"cluster size and less than %" PRIu64 " bytes",
qed_max_image_size(cluster_size, table_size));
ret = -EINVAL;
goto finish;
fprintf(stderr, "QED image size must be a non-zero multiple of "
"cluster size and less than %" PRIu64 " bytes\n",
qed_max_image_size(cluster_size, table_size));
return -EINVAL;
}
ret = qed_create(filename, cluster_size, image_size, table_size,
backing_file, backing_fmt, opts, errp);
finish:
g_free(backing_file);
g_free(backing_fmt);
return ret;
return qed_create(filename, cluster_size, image_size, table_size,
backing_file, backing_fmt, errp);
}
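The three validations above (cluster size, table size, image size) all amount to a power-of-two value within a range, plus the image size being a non-zero multiple of the cluster size below a maximum. A compact sketch of those checks with assumed limits, not the QED_MIN_*/QED_MAX_* constants themselves:

#include <stdbool.h>
#include <stdint.h>

static bool is_power_of_2(uint64_t x)
{
    return x != 0 && (x & (x - 1)) == 0;
}

static bool size_in_range(uint64_t x, uint64_t min, uint64_t max)
{
    return is_power_of_2(x) && x >= min && x <= max;
}

static bool image_size_ok(uint64_t image_size, uint32_t cluster_size,
                          uint64_t max_image_size)
{
    return image_size != 0 &&
           image_size % cluster_size == 0 &&
           image_size < max_image_size;
}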
typedef struct {
@@ -680,7 +676,6 @@ typedef struct {
uint64_t pos;
int64_t status;
int *pnum;
BlockDriverState **file;
} QEDIsAllocatedCB;
static void qed_is_allocated_cb(void *opaque, int ret, uint64_t offset, size_t len)
@@ -692,7 +687,6 @@ static void qed_is_allocated_cb(void *opaque, int ret, uint64_t offset, size_t l
case QED_CLUSTER_FOUND:
offset |= qed_offset_into_cluster(s, cb->pos);
cb->status = BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID | offset;
*cb->file = cb->bs->file->bs;
break;
case QED_CLUSTER_ZERO:
cb->status = BDRV_BLOCK_ZERO;
@@ -708,14 +702,13 @@ static void qed_is_allocated_cb(void *opaque, int ret, uint64_t offset, size_t l
}
if (cb->co) {
qemu_coroutine_enter(cb->co);
qemu_coroutine_enter(cb->co, NULL);
}
}
static int64_t coroutine_fn bdrv_qed_co_get_block_status(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors, int *pnum,
BlockDriverState **file)
int nb_sectors, int *pnum)
{
BDRVQEDState *s = bs->opaque;
size_t len = (size_t)nb_sectors * BDRV_SECTOR_SIZE;
@@ -724,7 +717,6 @@ static int64_t coroutine_fn bdrv_qed_co_get_block_status(BlockDriverState *bs,
.pos = (uint64_t)sector_num * BDRV_SECTOR_SIZE,
.status = BDRV_BLOCK_OFFSET_MASK,
.pnum = pnum,
.file = file,
};
QEDRequest request = { .l2_table = NULL };
@@ -749,20 +741,18 @@ static BDRVQEDState *acb_to_s(QEDAIOCB *acb)
/**
* Read from the backing file or zero-fill if no backing file
*
* @s: QED state
* @pos: Byte position in device
* @qiov: Destination I/O vector
* @backing_qiov: Possibly shortened copy of qiov, to be allocated here
* @cb: Completion function
* @opaque: User data for completion function
* @s: QED state
* @pos: Byte position in device
* @qiov: Destination I/O vector
* @cb: Completion function
* @opaque: User data for completion function
*
* This function reads qiov->size bytes starting at pos from the backing file.
* If there is no backing file then zeroes are read.
*/
static void qed_read_backing_file(BDRVQEDState *s, uint64_t pos,
QEMUIOVector *qiov,
QEMUIOVector **backing_qiov,
BlockCompletionFunc *cb, void *opaque)
BlockDriverCompletionFunc *cb, void *opaque)
{
uint64_t backing_length = 0;
size_t size;
@@ -770,8 +760,8 @@ static void qed_read_backing_file(BDRVQEDState *s, uint64_t pos,
/* If there is a backing file, get its length. Treat the absence of a
* backing file like a zero length backing file.
*/
if (s->bs->backing) {
int64_t l = bdrv_getlength(s->bs->backing->bs);
if (s->bs->backing_hd) {
int64_t l = bdrv_getlength(s->bs->backing_hd);
if (l < 0) {
cb(opaque, l);
return;
@@ -794,21 +784,15 @@ static void qed_read_backing_file(BDRVQEDState *s, uint64_t pos,
/* If the read straddles the end of the backing file, shorten it */
size = MIN((uint64_t)backing_length - pos, qiov->size);
assert(*backing_qiov == NULL);
*backing_qiov = g_new(QEMUIOVector, 1);
qemu_iovec_init(*backing_qiov, qiov->niov);
qemu_iovec_concat(*backing_qiov, qiov, 0, size);
BLKDBG_EVENT(s->bs->file, BLKDBG_READ_BACKING_AIO);
bdrv_aio_readv(s->bs->backing, pos / BDRV_SECTOR_SIZE,
*backing_qiov, size / BDRV_SECTOR_SIZE, cb, opaque);
bdrv_aio_readv(s->bs->backing_hd, pos / BDRV_SECTOR_SIZE,
qiov, size / BDRV_SECTOR_SIZE, cb, opaque);
}
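The policy documented above -- read as much as the backing file can supply and treat a missing or short backing file as zeroes -- can be sketched in plain C on a byte buffer; the driver itself works on iovecs and AIO callbacks:

#include <stdint.h>
#include <string.h>

/* Sketch only: copy what the backing file provides, zero-fill the rest.
 * A missing backing file behaves like one of length zero. */
static void read_backing_sketch(const uint8_t *backing, uint64_t backing_len,
                                uint64_t pos, uint8_t *buf, size_t len)
{
    size_t from_backing = 0;

    if (backing != NULL && pos < backing_len) {
        uint64_t avail = backing_len - pos;
        from_backing = avail < len ? (size_t)avail : len;
        memcpy(buf, backing + pos, from_backing);
    }
    memset(buf + from_backing, 0, len - from_backing);   /* zeroes past EOF */
}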
typedef struct {
GenericCB gencb;
BDRVQEDState *s;
QEMUIOVector qiov;
QEMUIOVector *backing_qiov;
struct iovec iov;
uint64_t offset;
} CopyFromBackingFileCB;
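The doc comment for qed_read_backing_file above spells out the read-or-zero-fill rule; a minimal synchronous sketch of that behaviour (hypothetical helper, unrelated to the asynchronous code in this diff):

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Sketch only: read 'size' bytes at 'pos'; bytes past the end of the
 * (possibly absent, i.e. zero-length) backing file read as zeroes. */
static void read_backing_or_zero(const uint8_t *backing, uint64_t backing_len,
                                 uint64_t pos, uint8_t *buf, size_t size)
{
    size_t from_backing = 0;

    if (pos < backing_len) {
        uint64_t avail = backing_len - pos;
        /* If the read straddles the end of the backing file, shorten it. */
        from_backing = avail < size ? (size_t)avail : size;
        memcpy(buf, backing + pos, from_backing);
    }
    memset(buf + from_backing, 0, size - from_backing);
}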
@@ -825,12 +809,6 @@ static void qed_copy_from_backing_file_write(void *opaque, int ret)
CopyFromBackingFileCB *copy_cb = opaque;
BDRVQEDState *s = copy_cb->s;
if (copy_cb->backing_qiov) {
qemu_iovec_destroy(copy_cb->backing_qiov);
g_free(copy_cb->backing_qiov);
copy_cb->backing_qiov = NULL;
}
if (ret) {
qed_copy_from_backing_file_cb(copy_cb, ret);
return;
@@ -854,7 +832,7 @@ static void qed_copy_from_backing_file_write(void *opaque, int ret)
*/
static void qed_copy_from_backing_file(BDRVQEDState *s, uint64_t pos,
uint64_t len, uint64_t offset,
BlockCompletionFunc *cb,
BlockDriverCompletionFunc *cb,
void *opaque)
{
CopyFromBackingFileCB *copy_cb;
@@ -868,12 +846,11 @@ static void qed_copy_from_backing_file(BDRVQEDState *s, uint64_t pos,
copy_cb = gencb_alloc(sizeof(*copy_cb), cb, opaque);
copy_cb->s = s;
copy_cb->offset = offset;
copy_cb->backing_qiov = NULL;
copy_cb->iov.iov_base = qemu_blockalign(s->bs, len);
copy_cb->iov.iov_len = len;
qemu_iovec_init_external(&copy_cb->qiov, &copy_cb->iov, 1);
qed_read_backing_file(s, pos, &copy_cb->qiov, &copy_cb->backing_qiov,
qed_read_backing_file(s, pos, &copy_cb->qiov,
qed_copy_from_backing_file_write, copy_cb);
}
@@ -905,15 +882,21 @@ static void qed_update_l2_table(BDRVQEDState *s, QEDTable *table, int index,
static void qed_aio_complete_bh(void *opaque)
{
QEDAIOCB *acb = opaque;
BlockCompletionFunc *cb = acb->common.cb;
BlockDriverCompletionFunc *cb = acb->common.cb;
void *user_opaque = acb->common.opaque;
int ret = acb->bh_ret;
bool *finished = acb->finished;
qemu_bh_delete(acb->bh);
qemu_aio_unref(acb);
qemu_aio_release(acb);
/* Invoke callback */
cb(user_opaque, ret);
/* Signal cancel completion */
if (finished) {
*finished = true;
}
}
static void qed_aio_complete(QEDAIOCB *acb, int ret)
@@ -934,8 +917,7 @@ static void qed_aio_complete(QEDAIOCB *acb, int ret)
/* Arrange for a bh to invoke the completion function */
acb->bh_ret = ret;
acb->bh = aio_bh_new(bdrv_get_aio_context(acb->common.bs),
qed_aio_complete_bh, acb);
acb->bh = qemu_bh_new(qed_aio_complete_bh, acb);
qemu_bh_schedule(acb->bh);
/* Start next allocating write request waiting behind this one. Note that
@@ -1053,7 +1035,7 @@ static void qed_aio_write_flush_before_l2_update(void *opaque, int ret)
QEDAIOCB *acb = opaque;
BDRVQEDState *s = acb_to_s(acb);
if (!bdrv_aio_flush(s->bs->file->bs, qed_aio_write_l2_update_cb, opaque)) {
if (!bdrv_aio_flush(s->bs->file, qed_aio_write_l2_update_cb, opaque)) {
qed_aio_complete(acb, -EIO);
}
}
@@ -1067,7 +1049,7 @@ static void qed_aio_write_main(void *opaque, int ret)
BDRVQEDState *s = acb_to_s(acb);
uint64_t offset = acb->cur_cluster +
qed_offset_into_cluster(s, acb->cur_pos);
BlockCompletionFunc *next_fn;
BlockDriverCompletionFunc *next_fn;
trace_qed_aio_write_main(s, acb, ret, offset, acb->cur_qiov.size);
@@ -1079,7 +1061,7 @@ static void qed_aio_write_main(void *opaque, int ret)
if (acb->find_cluster_ret == QED_CLUSTER_FOUND) {
next_fn = qed_aio_next_io;
} else {
if (s->bs->backing) {
if (s->bs->backing_hd) {
next_fn = qed_aio_write_flush_before_l2_update;
} else {
next_fn = qed_aio_write_l2_update_cb;
@@ -1137,7 +1119,7 @@ static void qed_aio_write_prefill(void *opaque, int ret)
static bool qed_should_set_need_check(BDRVQEDState *s)
{
/* The flush before L2 update path ensures consistency */
if (s->bs->backing) {
if (s->bs->backing_hd) {
return false;
}
@@ -1167,7 +1149,7 @@ static void qed_aio_write_zero_cluster(void *opaque, int ret)
static void qed_aio_write_alloc(QEDAIOCB *acb, size_t len)
{
BDRVQEDState *s = acb_to_s(acb);
BlockCompletionFunc *cb;
BlockDriverCompletionFunc *cb;
/* Cancel timer when the first allocating request comes in */
if (QSIMPLEQ_EMPTY(&s->allocating_write_reqs)) {
@@ -1224,11 +1206,7 @@ static void qed_aio_write_inplace(QEDAIOCB *acb, uint64_t offset, size_t len)
struct iovec *iov = acb->qiov->iov;
if (!iov->iov_base) {
iov->iov_base = qemu_try_blockalign(acb->common.bs, iov->iov_len);
if (iov->iov_base == NULL) {
qed_aio_complete(acb, -ENOMEM);
return;
}
iov->iov_base = qemu_blockalign(acb->common.bs, iov->iov_len);
memset(iov->iov_base, 0, iov->iov_len);
}
}
@@ -1314,7 +1292,7 @@ static void qed_aio_read_data(void *opaque, int ret,
return;
} else if (ret != QED_CLUSTER_FOUND) {
qed_read_backing_file(s, acb->cur_pos, &acb->cur_qiov,
&acb->backing_qiov, qed_aio_next_io, acb);
qed_aio_next_io, acb);
return;
}
@@ -1340,12 +1318,6 @@ static void qed_aio_next_io(void *opaque, int ret)
trace_qed_aio_next_io(s, acb, ret, acb->cur_pos + acb->cur_qiov.size);
if (acb->backing_qiov) {
qemu_iovec_destroy(acb->backing_qiov);
g_free(acb->backing_qiov);
acb->backing_qiov = NULL;
}
/* Handle I/O error */
if (ret) {
qed_aio_complete(acb, ret);
@@ -1368,11 +1340,11 @@ static void qed_aio_next_io(void *opaque, int ret)
io_fn, acb);
}
static BlockAIOCB *qed_aio_setup(BlockDriverState *bs,
int64_t sector_num,
QEMUIOVector *qiov, int nb_sectors,
BlockCompletionFunc *cb,
void *opaque, int flags)
static BlockDriverAIOCB *qed_aio_setup(BlockDriverState *bs,
int64_t sector_num,
QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque, int flags)
{
QEDAIOCB *acb = qemu_aio_get(&qed_aiocb_info, bs, cb, opaque);
@@ -1380,11 +1352,11 @@ static BlockAIOCB *qed_aio_setup(BlockDriverState *bs,
opaque, flags);
acb->flags = flags;
acb->finished = NULL;
acb->qiov = qiov;
acb->qiov_offset = 0;
acb->cur_pos = (uint64_t)sector_num * BDRV_SECTOR_SIZE;
acb->end_pos = acb->cur_pos + nb_sectors * BDRV_SECTOR_SIZE;
acb->backing_qiov = NULL;
acb->request.l2_table = NULL;
qemu_iovec_init(&acb->cur_qiov, qiov->niov);
@@ -1393,20 +1365,20 @@ static BlockAIOCB *qed_aio_setup(BlockDriverState *bs,
return &acb->common;
}
static BlockAIOCB *bdrv_qed_aio_readv(BlockDriverState *bs,
int64_t sector_num,
QEMUIOVector *qiov, int nb_sectors,
BlockCompletionFunc *cb,
void *opaque)
static BlockDriverAIOCB *bdrv_qed_aio_readv(BlockDriverState *bs,
int64_t sector_num,
QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque)
{
return qed_aio_setup(bs, sector_num, qiov, nb_sectors, cb, opaque, 0);
}
static BlockAIOCB *bdrv_qed_aio_writev(BlockDriverState *bs,
int64_t sector_num,
QEMUIOVector *qiov, int nb_sectors,
BlockCompletionFunc *cb,
void *opaque)
static BlockDriverAIOCB *bdrv_qed_aio_writev(BlockDriverState *bs,
int64_t sector_num,
QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque)
{
return qed_aio_setup(bs, sector_num, qiov, nb_sectors, cb,
opaque, QED_AIOCB_WRITE);
@@ -1418,44 +1390,47 @@ typedef struct {
bool done;
} QEDWriteZeroesCB;
static void coroutine_fn qed_co_pwrite_zeroes_cb(void *opaque, int ret)
static void coroutine_fn qed_co_write_zeroes_cb(void *opaque, int ret)
{
QEDWriteZeroesCB *cb = opaque;
cb->done = true;
cb->ret = ret;
if (cb->co) {
qemu_coroutine_enter(cb->co);
qemu_coroutine_enter(cb->co, NULL);
}
}
static int coroutine_fn bdrv_qed_co_pwrite_zeroes(BlockDriverState *bs,
int64_t offset,
int count,
BdrvRequestFlags flags)
static int coroutine_fn bdrv_qed_co_write_zeroes(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors,
BdrvRequestFlags flags)
{
BlockAIOCB *blockacb;
BlockDriverAIOCB *blockacb;
BDRVQEDState *s = bs->opaque;
QEDWriteZeroesCB cb = { .done = false };
QEMUIOVector qiov;
struct iovec iov;
/* Fall back if the request is not aligned */
if (qed_offset_into_cluster(s, offset) ||
qed_offset_into_cluster(s, count)) {
return -ENOTSUP;
/* Refuse if there are untouched backing file sectors */
if (bs->backing_hd) {
if (qed_offset_into_cluster(s, sector_num * BDRV_SECTOR_SIZE) != 0) {
return -ENOTSUP;
}
if (qed_offset_into_cluster(s, nb_sectors * BDRV_SECTOR_SIZE) != 0) {
return -ENOTSUP;
}
}
/* Zero writes start without an I/O buffer. If a buffer becomes necessary
* then it will be allocated during request processing.
*/
iov.iov_base = NULL;
iov.iov_len = count;
iov.iov_base = NULL,
iov.iov_len = nb_sectors * BDRV_SECTOR_SIZE,
qemu_iovec_init_external(&qiov, &iov, 1);
blockacb = qed_aio_setup(bs, offset >> BDRV_SECTOR_BITS, &qiov,
count >> BDRV_SECTOR_BITS,
qed_co_pwrite_zeroes_cb, &cb,
blockacb = qed_aio_setup(bs, sector_num, &qiov, nb_sectors,
qed_co_write_zeroes_cb, &cb,
QED_AIOCB_WRITE | QED_AIOCB_ZERO);
if (!blockacb) {
return -EIO;
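Both versions of the zero-write entry point above reject requests that are not cluster-aligned with -ENOTSUP, letting the block layer fall back to ordinary writes. A one-line illustration of such a check (assumed helper name; it relies on QED cluster sizes being powers of two):

#include <stdint.h>

/* Illustration only: non-zero means 'offset' is not on a cluster boundary. */
static inline uint64_t offset_into_cluster_sketch(uint64_t cluster_size, uint64_t offset)
{
    return offset & (cluster_size - 1);   /* valid because cluster_size is a power of two */
}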
@@ -1591,11 +1566,18 @@ static void bdrv_qed_invalidate_cache(BlockDriverState *bs, Error **errp)
bdrv_qed_close(bs);
bdrv_invalidate_cache(bs->file, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
memset(s, 0, sizeof(BDRVQEDState));
ret = bdrv_qed_open(bs, NULL, bs->open_flags, &local_err);
if (local_err) {
error_propagate(errp, local_err);
error_prepend(errp, "Could not reopen qed layer: ");
error_setg(errp, "Could not reopen qed layer: %s",
error_get_pretty(local_err));
error_free(local_err);
return;
} else if (ret < 0) {
error_setg_errno(errp, -ret, "Could not reopen qed layer");
@@ -1611,47 +1593,39 @@ static int bdrv_qed_check(BlockDriverState *bs, BdrvCheckResult *result,
return qed_check(s, result, !!fix);
}
static QemuOptsList qed_create_opts = {
.name = "qed-create-opts",
.head = QTAILQ_HEAD_INITIALIZER(qed_create_opts.head),
.desc = {
{
.name = BLOCK_OPT_SIZE,
.type = QEMU_OPT_SIZE,
.help = "Virtual disk size"
},
{
.name = BLOCK_OPT_BACKING_FILE,
.type = QEMU_OPT_STRING,
.help = "File name of a base image"
},
{
.name = BLOCK_OPT_BACKING_FMT,
.type = QEMU_OPT_STRING,
.help = "Image format of the base image"
},
{
.name = BLOCK_OPT_CLUSTER_SIZE,
.type = QEMU_OPT_SIZE,
.help = "Cluster size (in bytes)",
.def_value_str = stringify(QED_DEFAULT_CLUSTER_SIZE)
},
{
.name = BLOCK_OPT_TABLE_SIZE,
.type = QEMU_OPT_SIZE,
.help = "L1/L2 table size (in clusters)"
},
{ /* end of list */ }
}
static QEMUOptionParameter qed_create_options[] = {
{
.name = BLOCK_OPT_SIZE,
.type = OPT_SIZE,
.help = "Virtual disk size (in bytes)"
}, {
.name = BLOCK_OPT_BACKING_FILE,
.type = OPT_STRING,
.help = "File name of a base image"
}, {
.name = BLOCK_OPT_BACKING_FMT,
.type = OPT_STRING,
.help = "Image format of the base image"
}, {
.name = BLOCK_OPT_CLUSTER_SIZE,
.type = OPT_SIZE,
.help = "Cluster size (in bytes)",
.value = { .n = QED_DEFAULT_CLUSTER_SIZE },
}, {
.name = BLOCK_OPT_TABLE_SIZE,
.type = OPT_SIZE,
.help = "L1/L2 table size (in clusters)"
},
{ /* end of list */ }
};
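The option names in both tables map to qemu-img create -o keys; for example (illustrative invocation, assuming a pre-existing base.qed):

qemu-img create -f qed \
    -o backing_file=base.qed,backing_fmt=qed,cluster_size=65536,table_size=4 \
    overlay.qed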
static BlockDriver bdrv_qed = {
.format_name = "qed",
.instance_size = sizeof(BDRVQEDState),
.create_opts = &qed_create_opts,
.supports_backing = true,
.create_options = qed_create_options,
.bdrv_probe = bdrv_qed_probe,
.bdrv_rebind = bdrv_qed_rebind,
.bdrv_open = bdrv_qed_open,
.bdrv_close = bdrv_qed_close,
.bdrv_reopen_prepare = bdrv_qed_reopen_prepare,
@@ -1660,7 +1634,7 @@ static BlockDriver bdrv_qed = {
.bdrv_co_get_block_status = bdrv_qed_co_get_block_status,
.bdrv_aio_readv = bdrv_qed_aio_readv,
.bdrv_aio_writev = bdrv_qed_aio_writev,
.bdrv_co_pwrite_zeroes = bdrv_qed_co_pwrite_zeroes,
.bdrv_co_write_zeroes = bdrv_qed_co_write_zeroes,
.bdrv_truncate = bdrv_qed_truncate,
.bdrv_getlength = bdrv_qed_getlength,
.bdrv_get_info = bdrv_qed_get_info,
@@ -1668,8 +1642,6 @@ static BlockDriver bdrv_qed = {
.bdrv_change_backing_file = bdrv_qed_change_backing_file,
.bdrv_invalidate_cache = bdrv_qed_invalidate_cache,
.bdrv_check = bdrv_qed_check,
.bdrv_detach_aio_context = bdrv_qed_detach_aio_context,
.bdrv_attach_aio_context = bdrv_qed_attach_aio_context,
};
static void bdrv_qed_init(void)

block/qed.h

@@ -16,7 +16,6 @@
#define BLOCK_QED_H
#include "block/block_int.h"
#include "qemu/cutils.h"
/* The layout of a QED file is as follows:
*
@@ -44,7 +43,7 @@
*
* All fields are little-endian on disk.
*/
#define QED_DEFAULT_CLUSTER_SIZE 65536
enum {
QED_MAGIC = 'Q' | 'E' << 8 | 'D' << 16 | '\0' << 24,
@@ -70,6 +69,7 @@ enum {
*/
QED_MIN_CLUSTER_SIZE = 4 * 1024, /* in bytes */
QED_MAX_CLUSTER_SIZE = 64 * 1024 * 1024,
QED_DEFAULT_CLUSTER_SIZE = 64 * 1024,
/* Allocated clusters are tracked using a 2-level pagetable. Table size is
* a multiple of clusters so large maximum image sizes can be supported
@@ -129,11 +129,12 @@ enum {
};
typedef struct QEDAIOCB {
BlockAIOCB common;
BlockDriverAIOCB common;
QEMUBH *bh;
int bh_ret; /* final return status for completion bh */
QSIMPLEQ_ENTRY(QEDAIOCB) next; /* next request */
int flags; /* QED_AIOCB_* bits ORed together */
bool *finished; /* signal for cancel completion */
uint64_t end_pos; /* request end on block device, in bytes */
/* User scatter-gather list */
@@ -142,7 +143,6 @@ typedef struct QEDAIOCB {
/* Current cluster scatter-gather list */
QEMUIOVector cur_qiov;
QEMUIOVector *backing_qiov;
uint64_t cur_pos; /* position on block device, in bytes */
uint64_t cur_cluster; /* cluster offset in image file */
unsigned int cur_nclusters; /* number of clusters being accessed */
@@ -203,11 +203,11 @@ typedef void QEDFindClusterFunc(void *opaque, int ret, uint64_t offset, size_t l
* Generic callback for chaining async callbacks
*/
typedef struct {
BlockCompletionFunc *cb;
BlockDriverCompletionFunc *cb;
void *opaque;
} GenericCB;
void *gencb_alloc(size_t len, BlockCompletionFunc *cb, void *opaque);
void *gencb_alloc(size_t len, BlockDriverCompletionFunc *cb, void *opaque);
void gencb_complete(void *opaque, int ret);
/**
@@ -230,16 +230,16 @@ void qed_commit_l2_cache_entry(L2TableCache *l2_cache, CachedL2Table *l2_table);
*/
int qed_read_l1_table_sync(BDRVQEDState *s);
void qed_write_l1_table(BDRVQEDState *s, unsigned int index, unsigned int n,
BlockCompletionFunc *cb, void *opaque);
BlockDriverCompletionFunc *cb, void *opaque);
int qed_write_l1_table_sync(BDRVQEDState *s, unsigned int index,
unsigned int n);
int qed_read_l2_table_sync(BDRVQEDState *s, QEDRequest *request,
uint64_t offset);
void qed_read_l2_table(BDRVQEDState *s, QEDRequest *request, uint64_t offset,
BlockCompletionFunc *cb, void *opaque);
BlockDriverCompletionFunc *cb, void *opaque);
void qed_write_l2_table(BDRVQEDState *s, QEDRequest *request,
unsigned int index, unsigned int n, bool flush,
BlockCompletionFunc *cb, void *opaque);
BlockDriverCompletionFunc *cb, void *opaque);
int qed_write_l2_table_sync(BDRVQEDState *s, QEDRequest *request,
unsigned int index, unsigned int n, bool flush);

Some files were not shown because too many files have changed in this diff.