Compare commits

..

71 Commits

Author SHA1 Message Date
Michael Roth
0d83fccb4f Update version for 2.7.1 release
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-23 09:38:06 -06:00
Ashijeet Acharya
4dde694191 ide: Fix memory leak in ide_register_restart_cb()
Fix a memory leak in ide_register_restart_cb() in hw/ide/core.c and add
idebus_unrealize() in hw/ide/qdev.c to have calls to
qemu_del_vm_change_state_handler() to deal with the dangling change
state handler during hot-unplugging ide devices which might lead to a
crash.

Signed-off-by: Ashijeet Acharya <ashijeetacharya@gmail.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1474995212-10580-1-git-send-email-ashijeetacharya@gmail.com
[Minor whitespace fix --js]
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit ca44141d5f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-21 10:49:48 -06:00
Marc-André Lureau
7d17d68971 portio: keep references on portio
The isa_register_portio_list() function allocates ioports
data/state. Let's keep the reference to this data on some owner.  This
isn't enough to fix leaks, but at least, ASAN stops complaining of
direct leaks. Further cleanup would require calling
portio_list_del/destroy().

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit e305a16510)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-21 10:08:19 -06:00
John Snow
345f1cd9f6 block-backend: Always notify on blk_eject
blk_eject is only used by scsi-disk and atapi, and in both cases we
only attempt to invoke blk_eject if we have a bona-fide change in
tray state.

The "issue" here is that the tray state does not generate a QMP event
unless there is a medium/BDS attached to the device, so if libvirt et al
are waiting for a tray event to occur from an empty-but-closed drive,
software opening that drive will not emit an event and libvirt will
wait forever.

Change this by modifying blk_eject to always emit an event, instead of
conditionally on a "real" backend eject.
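
A minimal illustrative sketch of the behaviour change (stand-in names, not
the real QEMU block-backend API):

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for emitting the QMP DEVICE_TRAY_MOVED event. */
static void emit_tray_moved(const char *dev, bool tray_open)
{
    printf("DEVICE_TRAY_MOVED: device=%s tray-open=%d\n", dev, tray_open);
}

/* Sketch of the new behaviour: the event is emitted on every eject request,
 * not only when a medium (BDS) is actually attached. */
static void blk_eject_sketch(const char *dev, bool has_medium, bool eject_flag)
{
    if (has_medium) {
        /* detach the medium here */
    }
    emit_tray_moved(dev, eject_flag);
}

int main(void)
{
    blk_eject_sketch("ide0-cd0", false, true);   /* empty-but-closed drive */
    return 0;
}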

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1373264

Reported-by: Peter Krempa <pkrempa@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1478553214-497-2-git-send-email-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit c47ee043dc)

* dropped functional dependency on 2d76e724

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-21 09:59:57 -06:00
Mark Cave-Ayland
8d5f2a7570 dma-helpers: explicitly pass alignment into DMA helpers
The hard-coded default alignment is BDRV_SECTOR_SIZE; however, this is not
necessarily the case for all platforms. Use this as the default alignment for
all current callers.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Eric Blake <eblake@redhat.com>
Acked-by: John Snow <jsnow@redhat.com>
Message-id: 1476445266-27503-2-git-send-email-mark.cave-ayland@ilande.co.uk
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit 99868af3d0)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-21 09:27:59 -06:00
John Snow
5f20161cf3 atapi: classify read_cd as conditionally returning data
For the purposes of byte_count_limit verification, add a new flag that
identifies read_cd as sometimes returning data, then check the BCL in
its command handler after we know that it will indeed return data.

Reported-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1477970211-25754-2-git-send-email-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit e7bd708ec8)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-20 19:33:05 -06:00
Stefan Hajnoczi
05838b4688 ui/gtk: fix "Copy" menu item segfault
The "Copy" menu item copies VTE terminal text to the clipboard.  This
only works with VTE terminals, not with graphics consoles.

Disable the menu item when the current notebook page isn't a VTE
terminal.

This patch fixes a segfault.  Reproducer: Start QEMU and click the Copy
menu item when the guest display is visible.

Reported-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Tested-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20161214142518.10504-1-stefanha@redhat.com
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit a08156321a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-20 19:31:47 -06:00
Thorsten Kohfeldt
223d1a2da1 vfio/pci: Fix vfio_rtl8168_quirk_data_read address offset
Introductory comment for rtl8168 VFIO MSI-X quirk states:
At BAR2 offset 0x70 there is a dword data register,
         offset 0x74 is a dword address register.
vfio: vfio_bar_read(0000:05:00.0:BAR2+0x70, 4) = 0xfee00398 // read data

Thus, correct offset for data read is 0x70,
but function vfio_rtl8168_quirk_data_read() wrongfully uses offset 0x74.

Signed-off-by: Thorsten Kohfeldt <thorsten.kohfeldt@gmx.de>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
(cherry picked from commit 31e6a7b17b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-14 16:34:09 -06:00
Lin Ma
7f7ac2141e msmouse: Fix segfault caused by free the chr before chardev cleanup.
Segfault happens when leaving qemu with msmouse backend:

 #0  0x00007fa8526ac975 in raise () at /lib64/libc.so.6
 #1  0x00007fa8526add8a in abort () at /lib64/libc.so.6
 #2  0x0000558be78846ab in error_exit (err=16, msg=0x558be799da10 ...
 #3  0x0000558be7884717 in qemu_mutex_destroy (mutex=0x558be93be750) at ...
 #4  0x0000558be7549951 in qemu_chr_free_common (chr=0x558be93be750) at ...
 #5  0x0000558be754999c in qemu_chr_free (chr=0x558be93be750) at ...
 #6  0x0000558be7549a20 in qemu_chr_delete (chr=0x558be93be750) at ...
 #7  0x0000558be754a8ef in qemu_chr_cleanup () at qemu-char.c:4643
 #8  0x0000558be755843e in main (argc=5, argv=0x7ffe925d7118, ...

The chr was freed by the msmouse close callback before chardev cleanup,
then qemu_mutex_destroy triggered raise().

Because freeing chr is handled by qemu_chr_free_common, remove the free from
msmouse_chr_close to avoid a double free.

Fixes: c1111a24a3
Cc: qemu-stable@nongnu.org
Signed-off-by: Lin Ma <lma@suse.com>
Message-Id: <20160915143158.4796-1-lma@suse.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 9e14037f05)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-14 16:34:09 -06:00
Paolo Bonzini
db1604cd60 Revert "megasas: remove useless check for cmd->frame"
This reverts commit 8cc46787b5.
It turns out that cmd->frame can be NULL and thus the commit
can cause a SIGSEGV

Reported-by: Holger Schranz <holger@fam-schranz.de>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 421cc3e7e8)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 18:15:33 -06:00
Eduardo Habkost
cc1fd25295 vl: Delay initialization of memory backends
Initialization of memory backends may take a while when
prealloc=yes is used, depending on their size. Initializing
memory backends before chardevs may delay the creation of monitor
sockets, and trigger timeouts on management software that waits
until the monitor socket is created by QEMU. See, for example,
the bug report at:
https://bugzilla.redhat.com/show_bug.cgi?id=1371211

In addition to that, allocating memory before calling
configure_accelerator() breaks the tcg_enabled() checks at
memory_region_init_*().

This patch fixes those problems by adding "memory-backend-*"
classes to the delayed-initialization list.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
(cherry picked from commit 6546d0dba6)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 18:13:21 -06:00
Eduardo Habkost
ee99e42be4 vhost-user-test: Use libqos instead of pxe-virtio.rom
vhost-user-test relies on iPXE just to initialize the virtio-net
device, and doesn't do any actual packet tx/rx testing.

In addition to that, the test relies on TCG, which is
imcompatible with vhost. The test only worked by accident: a bug
the memory backend initialization made memory regions not have
the DIRTY_MEMORY_CODE bit set in dirty_log_mask.

This changes vhost-user-test to initialize the virtio-net device
using libqos, and not use TCG nor pxe-virtio.rom.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
(cherry picked from commit cdafe92961)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 18:13:02 -06:00
Peter Xu
0ef167c907 intel_iommu: fix incorrect device invalidate
"mask" needs to be inverted before use.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 6cb99acc28)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 18:06:43 -06:00
Adrian Bunk
248a780fd3 rules.mak: Use -r instead of -Wl, -r to fix building when PIE is default
Building qemu fails in distributions where gcc enables PIE by default
(e.g. Debian unstable) with:

/usr/bin/ld: -r and -pie may not be used together

Use -r instead of -Wl,-r to avoid gcc passing -pie to the linker
when PIE is enabled and a relocatable object is passed.

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Message-Id: <20161127162817.15144-1-bunk@stusta.de>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit c96f0ee6a6)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 18:05:55 -06:00
Peter Xu
80f630be21 pci-assign: sync MSI/MSI-X cap and table with PCIDevice
Since commit e1d4fb2d ("kvm-irqchip: x86: add msi route notify fn"),
kvm_irqchip_add_msi_route() starts to use pci_get_msi_message() to fetch
MSI info. This requires that we setup MSI related fields in PCIDevice.
For most devices, that won't be a problem, as long as we are using
general interfaces like msi_init()/msix_init().

However, for pci-assign devices, MSI/MSI-X is treated differently - pci-assign
devices maintain their own MSI table and cap information in the
AssignedDevice struct, which is not synced up with PCIDevice's
fields. That leads to pci_get_msi_message() failing to find the correct
MSI capability, even with a NULL msix_table.

A quick fix is to sync up the two places: both the capability bits and
table address for MSI/MSI-X.

Reported-by: Changlimin <changlimin@h3c.com>
Tested-by: Changlimin <changlimin@h3c.com>
Cc: qemu-stable@nongnu.org
Fixes: e1d4fb2d ("kvm-irqchip: x86: add msi route notify fn")
Signed-off-by: Peter Xu <peterx@redhat.com>

Message-Id: <1480042522-16551-1-git-send-email-peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 64e184e260)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 18:04:19 -06:00
Zhuang Yanying
353801cde4 ivshmem: Fix 64 bit memory bar configuration
Device ivshmem property use64=0 is designed to make the device
expose a 32 bit shared memory BAR instead of 64 bit one.  The
default is a 64 bit BAR, except pc-1.2 and older retain a 32 bit
BAR.  A 32 bit BAR can support only up to 1 GiB of shared memory.

This worked as designed until commit 5400c02 accidentally flipped
its sense: since then, we misinterpret use64=0 as use64=1 and vice
versa.  Worse, the default got flipped as well.  Devices
ivshmem-plain and ivshmem-doorbell are not affected.

Fix by restoring the test of IVShmemState member not_legacy_32bit
that got messed up in commit 5400c02.  Also update its
initialization for devices ivshmem-plain and ivshmem-doorbell.
Without that, they'd regress to 32 bit BARs.

Cc: qemu-stable@nongnu.org
Signed-off-by: Zhuang Yanying <ann.zhuangyanying@huawei.com>
Reviewed-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
(cherry picked from commit be4e0d7375)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:49:41 -06:00
Greg Kurz
c8a3159df4 vhost: drop legacy vring layout bits
The legacy vring layout is not used anymore as we use the separate
mappings even for legacy devices.
This patch simply removes it.

This also fixes a bug with virtio 1 devices when the vring descriptor table
is mapped at a higher address than the used vring because the following
function may return an insanely great value:

hwaddr virtio_queue_get_ring_size(VirtIODevice *vdev, int n)
{
    return vdev->vq[n].vring.used - vdev->vq[n].vring.desc +
           virtio_queue_get_used_size(vdev, n);
}

and the mapping fails.

Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 1cdce7c54d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:49:41 -06:00
Max Reitz
48fdfebab6 block/curl: Do not wait for data beyond EOF
libcurl will only give us as much data as there is, not more. The block
layer will deny requests beyond the end of file for us; but since this
block driver is still using a sector-based interface, we can still get
in trouble if the file size is not a multiple of 512.

While we have already made sure not to attempt transfers beyond the end
of the file, we are currently still trying to receive data from there if
the original request exceeds the file size. This patch fixes this issue
and invokes qemu_iovec_memset() on the iovec's tail.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20161025025431.24714-5-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
(cherry picked from commit 4e504535c1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:48:44 -06:00
Max Reitz
6eb194ddd7 block/curl: Remember all sockets
For some connection types (like FTP, generally), more than one socket
may be used (in FTP's case: control vs. data stream). As of commit
838ef60249 ("curl: Eliminate unnecessary
use of curl_multi_socket_all"), we have to remember all of the sockets
used by libcurl, but in fact we only did that for a single one. Since
one libcurl connection may use multiple sockets, however, we have to
remember them all.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20161025025431.24714-4-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
(cherry picked from commit ff5ca1664a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:48:44 -06:00
Max Reitz
7e278ef797 block/curl: Fix return value from curl_read_cb
While commit 38bbc0a580 is correct in that
the callback is supposed to return the number of bytes handled, what it
does not mention is that libcurl will throw an error if the callback did
not "handle" all of the data passed to it.

Therefore, if the callback receives some data that it cannot handle
(either because the receive buffer has not been set up yet or because it
would not fit into the receive buffer) and we have to ignore it, we
still have to report that the data has been handled.

Obviously, this should not happen normally. But it does happen at least
for FTP connections where some data (that we do not expect) may be
generated when the connection is established.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20161025025431.24714-3-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
(cherry picked from commit 4e7676571b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:48:44 -06:00
Max Reitz
6a5ea68e9e block/curl: Use BDRV_SECTOR_SIZE
Currently, curl defines its own constant SECTOR_SIZE. There is no
advantage over using the global BDRV_SECTOR_SIZE, so drop it.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20161025025431.24714-2-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
(cherry picked from commit 9054d9f6b0)

Conflicts:
	block/curl.c

* drop context dep on fffb6e1 / aio_bh_schedule_oneshot

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:48:44 -06:00
Eric Blake
31454ebde5 block: Pass unaligned discard requests to drivers
Discard is advisory, so rounding the requests to alignment
boundaries is never semantically wrong from the data that
the guest sees.  But at least the Dell Equallogic iSCSI SAN has
an interesting property: its advertised discard alignment is 15M,
yet it documents that discarding a sequence of 1M slices will
eventually result in the 15M page being marked as discarded, and it
is possible to observe which pages have been discarded.

Between commits 9f1963b and b8d0a980, we converted the block
layer to a byte-based interface that ultimately ignores any
unaligned head or tail based on the driver's advertised
discard granularity, which means that qemu 2.7 refuses to
pass any discard request smaller than 15M down to the Dell
Equallogic hardware.  This is a slight regression in behavior
compared to earlier qemu, where a guest executing discards
in power-of-2 chunks used to be able to get every page
discarded, but is now left with various pages still allocated
because the guest requests did not align with the hardware's
15M pages.

Since the SCSI specification says nothing about a minimum
discard granularity, and only documents the preferred
alignment, it is best if the block layer gives the driver
every bit of information about discard requests, rather than
rounding it to alignment boundaries early.

Rework the block layer discard algorithm to mirror the write
zero algorithm: always peel off any unaligned head or tail
and manage that in isolation, then do the bulk of the request
on an aligned boundary.  The fallback when the driver returns
-ENOTSUP for an unaligned request is to silently ignore that
portion of the discard request; but for devices that can pass
the partial request all the way down to hardware, this can
result in the hardware coalescing requests and discarding
aligned pages after all.
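
A rough sketch of the head/tail peeling described above (illustrative only,
using made-up numbers for a 15M alignment):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t align  = 15ULL << 20;              /* 15M discard granularity */
    uint64_t offset = 1ULL << 20;               /* request start           */
    uint64_t count  = 64ULL << 20;              /* request length          */

    /* Peel the unaligned head up to the next alignment boundary. */
    uint64_t head = offset % align ? align - offset % align : 0;
    if (head > count) {
        head = count;
    }
    /* Peel the unaligned tail; whatever remains is the aligned bulk. */
    uint64_t tail = (offset + count) % align;
    if (tail > count - head) {
        tail = count - head;
    }
    uint64_t body = count - head - tail;

    printf("head=%" PRIu64 " body=%" PRIu64 " tail=%" PRIu64 "\n",
           head, body, tail);
    return 0;
}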

Reported by: Peter Lieven <pl@kamp.de>
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

(cherry picked from commit 3482b9bc41)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:48:44 -06:00
Eric Blake
5e4eb851dd block: Return -ENOTSUP rather than assert on unaligned discards
Right now, the block layer rounds discard requests, so that
individual drivers are able to assert that discard requests
will never be unaligned.  But there are some iSCSI devices
that track and coalesce multiple unaligned requests, turning them
into an actual discard if the requests eventually cover an
entire page, which implies that it is better to always pass
discard requests as low down the stack as possible.

In isolation, this patch has no semantic effect, since the
block layer currently never passes an unaligned request through.
But the block layer already has code that silently ignores
drivers that return -ENOTSUP for a discard request that cannot
be honored (as well as drivers that return 0 even when nothing
was done).  But the next patch will update the block layer to
fragment discard requests, so that clients are guaranteed that
they are either dealing with an unaligned head or tail, or an
aligned core, making it similar to the block layer semantics of
write zero fragmentation.

CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 49228d1e95)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:48:43 -06:00
Eric Blake
dd11d33deb block: Let write zeroes fallback work even with small max_transfer
Commit 443668ca rewrote the write_zeroes logic to guarantee that
an unaligned request never crosses a cluster boundary.  But
in the rewrite, the new code assumed that at most one iteration
would be needed to get to an alignment boundary.

However, it is easy to trigger an assertion failure: the Linux
kernel limits loopback devices to advertise a max_transfer of
only 64k.  Any operation that requires falling back to writes
rather than more efficient zeroing must obey max_transfer during
that fallback, which means an unaligned head may require multiple
iterations of the write fallbacks before reaching the aligned
boundaries, when layering a format with clusters larger than 64k
atop the protocol of file access to a loopback device.

Test case:

$ qemu-img create -f qcow2 -o cluster_size=1M file 10M
$ losetup /dev/loop2 /path/to/file
$ qemu-io -f qcow2 /dev/loop2
qemu-io> w 7m 1k
qemu-io> w -z 8003584 2093056

In fairness to Denis (as the original listed author of the culprit
commit), the faulty logic for at most one iteration is probably all
my fault in reworking his idea.  But the solution is to restore what
was in place prior to that commit: when dealing with an unaligned
head or tail, iterate as many times as necessary while fragmenting
the operation at max_transfer boundaries.

Reported-by: Ed Swierk <eswierk@skyportsystems.com>
CC: qemu-stable@nongnu.org
CC: Denis V. Lunev <den@openvz.org>
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit b2f95feec5)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:48:30 -06:00
Eric Blake
c4bf37e0b0 qcow2: Inform block layer about discard boundaries
At the qcow2 layer, discard is only possible on a per-cluster
basis; at the moment, qcow2 silently rounds any unaligned
requests to this granularity.  However, an upcoming patch will
fix a regression in the block layer ignoring too much of an
unaligned discard request, by changing the block layer to
break up a discard request at alignment boundaries; for that
to work, the block layer must know about our limits.

However, we can't go one step further by changing
qcow2_discard_clusters() to assert that requests are always
aligned, since that helper function is reached on paths
outside of the block layer.

CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit ecdbead659)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:48:19 -06:00
Samuel Thibault
e9dbd28e2a slirp: Fix access to freed memory
if_start() goes through the slirp->if_fastq and slirp->if_batchq
list of pending messages, and accesses ifm->ifq_so->so_nqueued of its
elements if ifm->ifq_so != NULL.  When freeing a socket, we thus need
to make sure that any pending message for this socket does not refer
to the socket any more.

Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Tested-by: Brian Candler <b.candler@pobox.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit ea64d5f088)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-08 13:56:26 -06:00
Greg Kurz
92230a5963 vhost: adapt vhost_verify_ring_mappings() to virtio 1 ring layout
With virtio 1, the vring layout is split in 3 separate regions of
contiguous memory for the descriptor table, the available ring and the
used ring, as opposed with legacy virtio which uses a single region.

In case of memory re-mapping, the code ensures it doesn't affect the
vring mapping. This is done in vhost_verify_ring_mappings() which assumes
the device is legacy.

This patch changes vhost_verify_ring_mappings() to check the mappings of
each part of the vring separately.

This works for legacy mappings as well.

Cc: qemu-stable@nongnu.org
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit f1f9e6c596)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-08 13:53:11 -06:00
Kevin Wolf
48b3aa20ae block: Don't mark node clean after failed flush
Commit 3ff2f67a changed bdrv_co_flush() so that no flush is issued if
the image hasn't been dirtied since the last flush. This is not quite
correct: The condition should be that the image hasn't been dirtied
since the last _successful_ flush. This patch changes the logic
accordingly.

Without this fix, subsequent bdrv_co_flush() calls would return success
without actually doing anything even though the image is still dirty.
The difference is visible in some blkdebug test cases where error
messages incorrectly disappeared after commit 3ff2f67a.
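
A minimal sketch of the corrected condition (stand-in variables, not the
actual block/io.c code):

#include <stdio.h>

static int write_gen;            /* bumped on every write to the image        */
static int flushed_gen = -1;     /* generation of the last *successful* flush */

static int flush_to_disk(void)
{
    return 0;                    /* or -errno when the flush fails */
}

static int co_flush_sketch(void)
{
    if (write_gen == flushed_gen) {
        return 0;                /* nothing dirtied since the last good flush */
    }
    int ret = flush_to_disk();
    if (ret == 0) {
        flushed_gen = write_gen; /* only mark the node clean when flush succeeded */
    }
    return ret;
}

int main(void)
{
    write_gen++;                                      /* a write dirties the image */
    printf("first flush:  %d\n", co_flush_sketch());
    printf("second flush: %d\n", co_flush_sketch());  /* skipped: still clean */
    return 0;
}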

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1478300595-10090-1-git-send-email-kwolf@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit e6af1e0854)

Conflicts:
	block/io.c

* remove context dep on 9972354

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-08 13:50:54 -06:00
Michael S. Tsirkin
f1372d6e14 virtio-net: mark VIRTIO_NET_F_GSO as legacy
The virtio 1.0 spec says this is a legacy feature bit, so
hide it from guests in modern mode.

Note: for cross-version migration compatibility,
we keep the bit set in host_features.
The result will be that a guest migrating cross-version
will see host features change under it.
As guests only seem to read it once, this should
not be an issue. Meanwhile, will work to fix guests to
ignore this bit in virtio1 mode, too.

Cc: qemu-stable@nongnu.org
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
(cherry picked from commit 2a083ffd2e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-08 13:50:54 -06:00
Michael S. Tsirkin
63087cd74b virtio: allow per-device-class legacy features
Legacy features are those that transitional devices only
expose on the legacy interface.
Allow different ones per device class.

Cc: qemu-stable@nongnu.org # dependency for the next patch
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
(cherry picked from commit 9b706dbbbb)

Conflicts:
	hw/virtio/virtio.c

* drop context dep on ff4c07df
* resolv func dep on ff4c07df creating vdc variable in
  virtio_device_class_init()

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-08 13:50:54 -06:00
David Gibson
9df69dcd14 target-ppc: Fix CPU migration from qemu-2.6 <-> later versions
When migration for target-ppc was converted to vmstate, several
VMSTATE_EQUAL() checks were foolishly included of things that really
should be internal state.  Specifically we verified equality of the
insns_flags and insns_flags2 fields, which are used within TCG to
determine which groups of instructions are available on this cpu
model.  Between qemu-2.6 and qemu-2.7 we made some changes to these
classes which broke migration.

This patch fixes migration both forwards and backwards.  On migration
from 2.6 to later versions we import the fields into temporary
variables, which we then ignore.  In migration backwards, we populate
the temporary fields from the runtime fields, but mask out the bits
which were added after qemu-2.6, allowing the VMSTATE_EQUAL in
qemu-2.6 to accept the stream.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit 16a2497bd4)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-02 09:38:26 -06:00
Daniel P. Berrange
95a0638043 net: fix sending of data with -net socket, listen backend
The use of -net socket,listen was broken in the following
commit

  commit 16a3df403b
  Author: Zhang Chen <zhangchen.fnst@cn.fujitsu.com>
  Date:   Fri May 13 15:35:19 2016 +0800

    net/net: Add SocketReadState for reuse codes

    This function is from net/socket.c, move it to net.c and net.h.
    Add SocketReadState to make others reuse net_fill_rstate().
    suggestion from jason.

This refactored the state out of NetSocketState into a
separate SocketReadState. This refactoring requires
that a callback is provided to be triggered upon
completion of a packet receive from the guest.

The patch only registered this callback in the codepaths
hit by -net socket,connect, not -net socket,listen. So
as a result packets sent by the guest in the latter case
get dropped on the floor.

This bug is hidden because net_fill_rstate() silently
does nothing if the callback is not set.

This patch adds in the missing callback registration
and also adds an assert so that QEMU aborts if there
are any other codepaths hit which are missing the
callback.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Zhang Chen <zhangchen.fnst@cn.fujitsu.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit e79cd40680)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-18 19:06:19 -06:00
Corey Minyard
1790a9d77d acpi/ipmi: Initialize the fwinfo before fetching it
The initialization was missed before, resulting in some
bad data in the smbus case.

Signed-off-by: Corey Minyard <cminyard@mvista.com>
Cc: qemu-stable@nongnu.org
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 698ae42b91)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-18 19:05:21 -06:00
Alex Williamson
1b16ded6a5 memory: Don't use memcpy for ram_device regions
With a vfio assigned device we lay down a base MemoryRegion registered
as an IO region, giving us read & write accessors.  If the region
supports mmap, we lay down a higher priority sub-region MemoryRegion
on top of the base layer initialized as a RAM device pointer to the
mmap.  Finally, if we have any quirks for the device (ie. address
ranges that need additional virtualization support), we put another IO
sub-region on top of the mmap MemoryRegion.  When this is flattened,
we now potentially have sub-page mmap MemoryRegions exposed which
cannot be directly mapped through KVM.

This is as expected, but a subtle detail of this is that we end up
with two different access mechanisms through QEMU.  If we disable the
mmap MemoryRegion, we make use of the IO MemoryRegion and service
accesses using pread and pwrite to the vfio device file descriptor.
If the mmap MemoryRegion is enabled and results in one of these
sub-page gaps, QEMU handles the access as RAM, using memcpy to the
mmap.  Using either pread/pwrite or the mmap directly should be
correct, but using memcpy causes us problems.  I expect that not only
does memcpy not necessarily honor the original width and alignment in
performing a copy, but it potentially also uses processor instructions
not intended for MMIO spaces.  It turns out that this has been a
problem for Realtek NIC assignment, which has such a quirk that
creates a sub-page mmap MemoryRegion access.

To resolve this, we disable memory_access_is_direct() for ram_device
regions since QEMU assumes that it can use memcpy for those regions.
Instead we access through MemoryRegionOps, which replaces the memcpy
with simple de-references of standard sizes to the host memory.

With this patch we attempt to provide unrestricted access to the RAM
device, allowing byte through qword access as well as unaligned
access.  The assumption here is that accesses initiated by the VM are
driven by a device specific driver, which knows the device
capabilities.  If unaligned accesses are not supported by the device,
we don't want them to work in a VM by performing multiple aligned
accesses to compose the unaligned access.  A down-side of this
philosophy is that the xp command from the monitor attempts to use
the largest available access width, unaware of the underlying
device.  Using memcpy had this same restriction, but at least now an
operator can dump individual registers, even if blocks of device
memory may result in access widths beyond the capabilities of a
given device (RTL NICs only support up to dword).
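
A rough sketch of the access style this switches to: fixed-width
dereferences instead of memcpy (simplified, not the real MemoryRegionOps
callbacks):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Read/write with a plain dereference of the requested width instead of
 * memcpy, which may use wider or unaligned instructions unsuited to MMIO. */
static uint64_t ram_device_read_sketch(void *ptr, unsigned size)
{
    switch (size) {
    case 1: return *(volatile uint8_t  *)ptr;
    case 2: return *(volatile uint16_t *)ptr;
    case 4: return *(volatile uint32_t *)ptr;
    case 8: return *(volatile uint64_t *)ptr;
    default: return 0;
    }
}

static void ram_device_write_sketch(void *ptr, uint64_t data, unsigned size)
{
    switch (size) {
    case 1: *(volatile uint8_t  *)ptr = data; break;
    case 2: *(volatile uint16_t *)ptr = data; break;
    case 4: *(volatile uint32_t *)ptr = data; break;
    case 8: *(volatile uint64_t *)ptr = data; break;
    }
}

int main(void)
{
    uint32_t reg = 0;
    ram_device_write_sketch(&reg, 0xcafe, 4);
    printf("0x%" PRIx64 "\n", ram_device_read_sketch(&reg, 4));
    return 0;
}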

Reported-by: Thorsten Kohfeldt <thorsten.kohfeldt@gmx.de>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 4a2e242bbb)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 18:19:01 -05:00
Alex Williamson
ca83f87a66 memory: Replace skip_dump flag with "ram_device"
Setting skip_dump on a MemoryRegion allows us to modify one specific
code path, but the restriction we're trying to address encompasses
more than that.  If we have a RAM MemoryRegion backed by a physical
device, it not only restricts our ability to dump that region, but
also affects how we should manipulate it.  Here we recognize that
MemoryRegions do not change to sometimes allow dumps and other times
not, so we replace setting the skip_dump flag with a new initializer
so that we know exactly the type of region to which we're applying
this behavior.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 21e00fa55f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 18:19:01 -05:00
Prasad J Pandit
2817466c55 net: rtl8139: limit processing of ring descriptors
RTL8139 ethernet controller in C+ mode supports multiple
descriptor rings, each with a maximum of 64 descriptors. While
processing the transmit descriptor ring in 'rtl8139_cplus_transmit',
it does not limit the descriptor count and can run forever. Add a
check to avoid it.
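
Illustrative sketch of the bounded walk (simplified; the real code processes
actual ring descriptors):

#include <stdbool.h>
#include <stdio.h>

#define RING_MAX_DESC 64   /* C+ mode rings hold at most 64 descriptors */

/* Stand-in: a malicious guest could make every descriptor look ready. */
static bool descriptor_ready(int i)
{
    (void)i;
    return true;
}

int main(void)
{
    int processed = 0;

    /* Capping the walk at RING_MAX_DESC keeps the loop finite even when
     * descriptor_ready() never returns false. */
    for (int i = 0; i < RING_MAX_DESC && descriptor_ready(i); i++) {
        processed++;
    }
    printf("processed %d descriptors\n", processed);
    return 0;
}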

Reported-by: Andrew Henderson <hendersa@icculus.org>
Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit c7c3591669)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 18:19:01 -05:00
Alberto Garcia
e389e44a35 qemu-iotests: Test I/O in a single drive from a throttling group
iotest 093 contains a test that creates a throttling group with
several drives and performs I/O in all of them. This patch adds a new
test that creates a similar setup but only performs I/O in one of the
drives at the same time.

This is useful to test that the round robin algorithm is behaving
properly in these scenarios, and is specifically written using the
regression introduced in 27ccdd5259 as an example.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit a26ddb4396)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 18:19:00 -05:00
Alberto Garcia
b1fdc94193 throttle: Correct access to wrong BlockBackendPublic structures
In 27ccdd5259 the throttling fields were
moved from BlockDriverState to BlockBackend. However in a few cases
the code started using throttling fields from the active BlockBackend
instead of the round-robin token, making the algorithm behave
incorrectly.

This can cause starvation if there's a throttling group with several
drives but only one of them has I/O.

Cc: qemu-stable@nongnu.org
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 6bf77e1c2d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 18:19:00 -05:00
Thomas Huth
61781984db ppc/kvm: Mark 64kB page size support as disabled if not available
QEMU currently refuses to start with KVM-PR and only prints out

	qemu: fatal: Unknown MMU model 851972

when being started there. This is because commit 4322e8ced5
("ppc: Fix 64K pages support in full emulation") introduced a new
POWERPC_MMU_64K bit to indicate support for this page size, but
it never gets cleared on KVM-PR if the host kernel does not support
this. Thus we've got to turn off this bit in the mmu_model for KVM-PR.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 0d594f5565)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 18:19:00 -05:00
Paolo Bonzini
857efecf91 rbd: shift byte count as a 64-bit value
Otherwise, reads of more than 2GB fail.  Until commit
7bbca9e290, reads of 2^41
bytes succeeded at least theoretically.

In fact, pdiscard ought to receive a 64-bit integer as the
count for the same reason.
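
Illustrative sketch of the overflow class being fixed (generic C, not the
rbd driver code): the count must be widened to 64 bits before it is scaled
to bytes.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t nb_sectors = 10000000;              /* ~4.8 GiB in 512-byte sectors */

    uint64_t wrong = nb_sectors << 9;            /* shift done in 32 bits: wraps */
    uint64_t right = (uint64_t)nb_sectors << 9;  /* widen first: correct byte count */

    printf("wrong=%" PRIu64 " right=%" PRIu64 "\n", wrong, right);
    return 0;
}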

Reported by Coverity.

Fixes: 7bbca9e290
Cc: qemu-stable@nongnu.org
Cc: kwolf@redhat.com
Cc: eblake@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit e948f663e9)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 18:19:00 -05:00
Markus Armbruster
99837b0d36 tests/test-qmp-input-strict: Cover missing struct members
These tests would have caught the bug fixed by the previous commit.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1475594630-24758-1-git-send-email-armbru@redhat.com>
(cherry picked from commit bce3035a44)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 18:19:00 -05:00
Marc-André Lureau
b34c7bd463 qapi: Fix crash when 'any' or 'null' parameter is missing
Unlike the other visit methods, visit_type_any() and visit_type_null()
neglect to check whether qmp_input_get_object() succeeded.  They crash
when it fails.  Reproducer:

{ "execute": "qom-set",
  "arguments": { "path": "/machine", "property": "rtc-time" } }

Will crash with:

qapi/qapi-visit-core.c:277: visit_type_any: Assertion `!err != !*obj'
failed

Broken in commit 5c678ee.  Fix by adding the missing error checks.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20160922203927.28241-3-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Commit message rephrased]
Signed-off-by: Markus Armbruster <armbru@redhat.com>

(cherry picked from commit c489780203)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 18:18:54 -05:00
Marc-André Lureau
f3467f56c1 qmp: fix object-add assert() without props
Since commit ad739706bb, user_creatable_add_type() expects to be
given a qdict. However, if object-add is called without props, you reach
the assert: "qemu/qom/object_interfaces.c:115: user_creatable_add_type:
Assertion `qdict' failed.", because the qdict isn't created in this
case (it's optional).

Furthermore, qmp_input_visitor_new() is not meant to be called without a
dict, and a further commit will assert in this situation.

If none given, create an empty qdict in qmp to avoid the
user_creatable_add_type() assert(qdict).

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20160922203927.28241-2-marcandre.lureau@redhat.com>
Tested-by: Xiao Long Jiang <zxiaol@linux.vnet.ibm.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
(cherry picked from commit e64c75a975)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 17:24:25 -05:00
Daniel P. Berrange
5be5335661 char: fix missing return in error path for chardev TLS init
If the qio_channel_tls_new_(server|client) methods fail,
we disconnect the client. Unfortunately a missing return
means we then go on to try and run the TLS handshake on
a NULL I/O channel. This gives predictably segfaulty
results.

The main way to trigger this is to request a bogus TLS
priority string for the TLS credentials. e.g.

  -object tls-creds-x509,id=tls0,priority=wibble,...

Most other ways appear impossible to trigger except
perhaps if OOM conditions cause gnutls initialization
to fail.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit 660a2d83e0)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 17:16:58 -05:00
Emilio G. Cota
af29bd3193 qht: fix unlock-after-free segfault upon resizing
The old map's bucket locks are being unlocked *after*
that same old map has been passed to RCU for destruction.
This is a bug that can cause a segfault, since there's
no guarantee that the deletion will be deferred (e.g.
there may be no concurrent readers).

The segfault is easily triggered in RHEL6/CentOS6 with qht-test,
particularly on a single-core system or by pinning qht-test
to a single core.

Fix it by unlocking the map's bucket locks right after having
published the new map, and (crucially) before marking the map
for deletion via call_rcu().

While at it, expand qht_do_resize() to atomically do (1) a reset,
(2) a resize, or (3) a reset+resize. This simplifies the calling
code, since the new function (qht_do_resize_reset()) acquires
and releases the buckets' locks.

Note that no qht_do_reset inline is provided, since it would have
no users--qht_reset() already performs a reset without taking
ht->lock.

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Reported-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1475706880-10667-3-git-send-email-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 76b553b308)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 17:15:30 -05:00
Emilio G. Cota
f72ca1ac6f qht: simplify qht_reset_size
Sometimes gcc doesn't pick up the fact that 'new' is properly
set if 'resize == true', which may generate an unnecessary
build warning.

Fix it by removing 'resize' and directly checking that 'new'
is non-NULL.

Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1475706880-10667-2-git-send-email-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit f555a9d0b3)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 17:14:57 -05:00
Eric Blake
4d45fe11d1 migrate: Fix cpu-throttle-increment regression in HMP
Commit 69ef1f3 accidentally broke migrate_set_parameter's ability
to set the cpu-throttle-increment to anything other than the
default, because it forgot to parse the user's string into an
integer.
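
A tiny sketch of the missing step (illustrative; not the actual hmp.c code):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *valuestr = "25";             /* user-supplied parameter value */

    long value = strtol(valuestr, NULL, 10); /* parse the string before using it */
    printf("cpu-throttle-increment = %ld\n", value);
    return 0;
}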

CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit bb2b777cf9)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 17:10:51 -05:00
John Snow
4a25ab2a04 block-backend: remove blk_flush_all
We can teach Xen to drain and flush each device as it needs to, instead
of trying to flush ALL devices. This removes the last user of
blk_flush_all.

The function is therefore removed under the premise that any new uses
of blk_flush_all would be the wrong paradigm: either flush the single
device that requires flushing, or use an appropriate flush_all mechanism
from outside of the BlkBackend layer.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Acked-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 49137bf684)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 17:06:27 -05:00
John Snow
95200ebb50 qemu: use bdrv_flush_all for vm_stop et al
Reimplement bdrv_flush_all for vm_stop. In contrast to blk_flush_all,
bdrv_flush_all does not have device model restrictions. This allows
us to flush and halt unconditionally without error.

This allows us to do things like migrate when we have a device with
an open tray, but has a node that may need to be flushed, or nodes
that aren't currently attached to any device and need to be flushed.

Specifically, this allows us to migrate when we have a CDROM with
an open tray.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Acked-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 22af08eacf)
Conflicts:
	cpus.c

* drop context dependency on 6d0ceb80

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 17:04:56 -05:00
John Snow
8e94512568 block: reintroduce bdrv_flush_all
Commit fe1a9cbc moved the flush_all routine from the bdrv layer to the
block-backend layer. In doing so, however, the semantics of the routine
changed slightly such that flush_all now used blk_flush instead of
bdrv_flush.

blk_flush can fail if the attached device model reports that it is not
"available," (i.e. the tray is open.) This changed the semantics of
flush_all such that it can now fail for e.g. open CDROM drives.

Reintroduce bdrv_flush_all to regain the old semantics without having to
alter the behavior of blk_flush or blk_flush_all, which are already
'doing the right thing.'

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Acked-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 4085f5c7a2)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 17:01:44 -05:00
Eric Blake
d40d148feb iscsi: Fix divide-by-zero regression on raw SG devices
When qemu uses iscsi devices in sg mode, iscsilun->block_size
is left at 0.  Prior to commits cf081fca and similar, when
block limits were tracked in sectors, this did not matter:
various block limits were just left at 0.  But when we started
scaling by block size, this caused SIGFPE.

Then, in a later patch, commit a5b8dd2c added an assertion to
bdrv_open_common() that request_alignment is always non-zero;
which was not true for SG mode.  Rather than relax that assertion,
we can just provide a sane value (we don't know of any SG device
with a block size smaller than qemu's default sizing of 512 bytes).

One possible solution for SG mode is to just blindly skip ALL
of iscsi_refresh_limits(), since we already short circuit so
many other things in sg mode.  But this patch takes a slightly
more conservative approach, and merely guarantees that scaling
will succeed, while still using multiples of the original size
where possible.  Resulting limits may still be zero in SG mode
(that is, we mostly only fix block_size used as a denominator
or which affect assertions, not all uses).
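
A minimal sketch of the guard being described (assumed names; the real code
lives in iscsi_refresh_limits()):

#include <stdint.h>
#include <stdio.h>

#define DEFAULT_BLOCK_SIZE 512   /* qemu's default sector sizing */

/* In SG mode the device reports block_size == 0; fall back to a sane value
 * so later scaling and divisions never see a zero divisor. */
static uint32_t effective_block_size(uint32_t reported_block_size)
{
    return reported_block_size ? reported_block_size : DEFAULT_BLOCK_SIZE;
}

int main(void)
{
    printf("%u\n", effective_block_size(0));   /* prints 512 */
    return 0;
}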

Reported-by: Holger Schranz <holger@fam-schranz.de>
Signed-off-by: Eric Blake <eblake@redhat.com>
CC: qemu-stable@nongnu.org

Message-Id: <1473283640-15756-1-git-send-email-eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 95eaa78537)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:53:21 -05:00
Daniel P. Berrange
f9856029d5 qcow2: fix encryption during cow of sectors
Broken in previous commit:

  commit aaa4d20b49
  Author: Kevin Wolf <kwolf@redhat.com>
  Date:   Wed Jun 1 15:21:05 2016 +0200

      qcow2: Make copy_sectors() byte based

The copy_sectors() code was originally using the 'sector'
parameter for encryption, which was passed in by the caller
from the QCowL2Meta.offset field (aka the guest logical
offset).

After the change, the code is using 'cluster_offset' which
was passed in from QCow2L2Meta.alloc_offset field (aka the
host physical offset).

This would cause the data to be encrypted using an incorrect
initialization vector which will in turn cause later reads
to return garbage.

Although current qcow2 built-in encryption is blocked from
usage in the emulator, one could still hit this if writing
to the file via qemu-{img,io,nbd} commands.

Cc: qemu-stable@nongnu.org
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit bb9f8dd0e1)
Conflicts:
	tests/qemu-iotests/group

* drop context dependency on non-2.7 iotest groups

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:51:20 -05:00
David Gibson
a3a254550b vfio/pci: Fix regression in MSI routing configuration
d1f6af6 "kvm-irqchip: simplify kvm_irqchip_add_msi_route" was a cleanup
of kvmchip routing configuration, that was mostly intended for x86.
However, it also contains a subtle change in behaviour which breaks EEH[1]
error recovery on certain VFIO passthrough devices on spapr guests.  So far
it's only been seen on a BCM5719 NIC on a POWER8 server, but there may be
other hardware with the same problem.  It's also possible there could be
circumstances where it causes a bug on x86 as well, though I don't know of
any obvious candidates.

Prior to d1f6af6, both vfio_msix_vector_do_use() and
vfio_add_kvm_msi_virq() used msg == NULL as a special flag to mark this
as the "dummy" vector used to make the host hardware state sync with the
guest expected hardware state in terms of MSI configuration.

Specifically that flag caused vfio_add_kvm_msi_virq() to become a no-op,
meaning the dummy irq would always be delivered via qemu. d1f6af6 changed
vfio_add_kvm_msi_virq() so it takes a vector number instead of the msg
parameter, and determines the correct message itself.  The test for !msg
was removed, and not replaced with anything there or in the caller.

With an spapr guest which has a VFIO device, if an EEH error occurs on the
host hardware, then the device will be isolated then reset.  This is a
combination of host and guest action, mediated by some EEH related
hypercalls.  I haven't fully traced the mechanics, but somehow installing
the kvm irqchip route for the dummy irq on the BCM5719 means that after EEH
reset and recovery, at least some irqs are no longer delivered to the
guest.

In particular, the guest never gets the link up event, and so the NIC is
effectively dead.

[1] EEH (Enhanced Error Handling) is an IBM POWER server specific PCI-*
    error reporting and recovery mechanism.  The concept is somewhat
    similar to PCI-E AER, but the details are different.

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1373802

Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Gavin Shan <gwshan@au1.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Cc: qemu-stable@nongnu.org
Fixes: d1f6af6a17 ("kvm-irqchip: simplify kvm_irqchip_add_msi_route")
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
(cherry picked from commit 6d17a018d0)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:47:51 -05:00
Cornelia Huck
533dedf059 s390x/css: handle cssid 255 correctly
The cssid 255 is reserved but still valid from an architectural
point of view. However, feeding a bogus schid of 0xffffffff into
the virtio hypercall will lead to a crash:

Stack trace of thread 138363:
        #0  0x00000000100d168c css_find_subch (qemu-system-s390x)
        #1  0x00000000100d3290 virtio_ccw_hcall_notify
        #2  0x00000000100cbf60 s390_virtio_hypercall
        #3  0x000000001010ff7a handle_hypercall
        #4  0x0000000010079ed4 kvm_cpu_exec (qemu-system-s390x)
        #5  0x00000000100609b4 qemu_kvm_cpu_thread_fn
        #6  0x000003ff8b887bb4 start_thread (libpthread.so.0)
        #7  0x000003ff8b78df0a thread_start (libc.so.6)

This is because the css array was only allocated for 0..254
instead of 0..255.

Let's fix this by bumping MAX_CSSID to 255 and fencing off the
reserved cssid of 255 during css image allocation.

Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
(cherry picked from commit 882b3b9769)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:41:36 -05:00
John Snow
54c26b7340 ahci: clear aiocb in ncq_cb
Similar to existing fixes for IDE (87ac25fd) and ATAPI (7f951b2d), the
AIOCB must be cleared in the callback. Otherwise, we may accidentally
try to reset a dangling pointer in bdrv_aio_cancel() from a port reset.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1474575040-32079-2-git-send-email-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit df403bc588)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:41:36 -05:00
Fam Zheng
f5436d1dab virtio-scsi: Don't abort when media is ejected
With an ejected block backend, blk_get_aio_context() would return
qemu_aio_context. In this case don't assert.

Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <1473848224-24809-3-git-send-email-famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 2a2d69f490)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:41:35 -05:00
Fam Zheng
3550eeafcd scsi-disk: Cleaning up around tray open state
Even if the tray is not open, it can be empty (blk_is_inserted() == false).
Handle both cases correctly by replacing the s->tray_open checks with
blk_is_available(), which is an AND of the two.

Also simplify successive checks of them into blk_is_available(), in a
couple cases.
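
Illustrative sketch of the relationship used above (stand-in helpers, not
the block-layer API):

#include <stdbool.h>
#include <stdio.h>

static bool tray_open;        /* stand-in for s->tray_open      */
static bool medium_inserted;  /* stand-in for blk_is_inserted() */

/* blk_is_available() is described as the AND of "tray closed" and
 * "medium inserted", so it covers both the open-tray and the empty case. */
static bool blk_is_available_sketch(void)
{
    return !tray_open && medium_inserted;
}

int main(void)
{
    tray_open = false;
    medium_inserted = false;                    /* closed but empty drive */
    printf("%d\n", blk_is_available_sketch());  /* prints 0 */
    return 0;
}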

Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <1473848224-24809-2-git-send-email-famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit cd723b8560)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:41:35 -05:00
Fam Zheng
316c2c9448 iothread: Stop threads before main() quits
Right after main_loop ends, we release various things but keep the iothread
alive. The latter is not prepared for the sudden change of resources.

Specifically, after bdrv_close_all(), virtio-scsi dataplane gets a
surprise at the empty BlockBackend:

(gdb) bt
    at /usr/src/debug/qemu-2.6.0/hw/scsi/virtio-scsi.c:543
    at /usr/src/debug/qemu-2.6.0/hw/scsi/virtio-scsi.c:577

It is because the d->conf.blk->root is set to NULL, then
blk_get_aio_context() returns qemu_aio_context, whereas s->ctx is still
pointing to the iothread:

    hw/scsi/virtio-scsi.c:543:

    if (s->dataplane_started) {
        assert(blk_get_aio_context(d->conf.blk) == s->ctx);
    }

To fix this, let's stop iothreads before doing bdrv_close_all().

Cc: qemu-stable@nongnu.org
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1473326931-9699-1-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit dce8921b2b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:41:35 -05:00
Daniel P. Berrange
98b4465f7d crypto: ensure XTS is only used with ciphers with 16 byte blocks
The XTS cipher mode needs to be used with a cipher which has
a block size of 16 bytes. If a mis-matching block size is used,
the code will either corrupt memory beyond the IV array, or
not fully encrypt/decrypt the IV.

This fixes a memory corruption crash when attempting to use
cast5-128 with xts, since the former has an 8 byte block size.

A test case is added to ensure the cipher creation fails with
such an invalid combination.
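
A minimal sketch of the added constraint (assumed constant name):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define XTS_BLOCK_SIZE 16   /* XTS is defined over 16-byte cipher blocks */

/* Reject any cipher whose block size is not 16 bytes before enabling XTS;
 * cast5-128 (8-byte blocks) is the combination that crashed. */
static bool xts_cipher_ok(size_t cipher_block_size)
{
    return cipher_block_size == XTS_BLOCK_SIZE;
}

int main(void)
{
    printf("cast5-128 allowed with xts: %d\n", xts_cipher_ok(8));   /* prints 0 */
    return 0;
}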

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit a5d2f44d0d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:41:35 -05:00
Paolo Bonzini
8342e1240b scsi: mptconfig: fix misuse of MPTSAS_CONFIG_PACK
These issues cause respectively a QEMU crash and a leak of 2 bytes of
stack.  They were discovered by VictorV of 360 Marvel Team.

Reported-by: Tom Victor <i-tangtianwen@360.cm>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 65a8e1f641)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:41:35 -05:00
Prasad J Pandit
0b6ab25367 scsi: mptconfig: fix an assert expression
When LSI SAS1068 Host Bus emulator builds configuration page
headers, mptsas_config_pack() should assert that the size
fits in a byte.  However, the size is expressed in 32-bit
units, so up to 1020 bytes fit.  The assertion was only
allowing replies up to 252 bytes, so fix it.
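
A small sketch of the arithmetic (assumed field layout; the real macro is
MPTSAS_CONFIG_PACK):

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* The header length field counts 32-bit words, so an 8-bit field covers up
 * to 255 * 4 = 1020 bytes; asserting on the byte count directly would
 * wrongly cap replies at ~252 bytes. */
static uint8_t pack_length_field(size_t size_in_bytes)
{
    assert(size_in_bytes % 4 == 0);
    assert(size_in_bytes / 4 <= 255);
    return (uint8_t)(size_in_bytes / 4);
}

int main(void)
{
    (void)pack_length_field(1020);   /* fine: encodes as 255 words */
    return 0;
}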

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
Message-Id: <1472645167-30765-2-git-send-email-ppandit@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit cf2bce203a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:41:35 -05:00
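The arithmetic behind the fix: the reply's length field counts 32-bit words, so one byte can describe up to 255 * 4 = 1020 bytes. An illustrative form of the corrected assertion (not the exact macro) is:

    /* 'size' is in bytes; the header stores size / 4 in a single byte,
     * so require a multiple of four that still fits after division.
     */
    assert(size % 4 == 0 && size / 4 <= 255);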
Prasad J Pandit
742886578d vmw_pvscsi: check page count while initialising descriptor rings
The VMware Paravirtual SCSI emulation uses command descriptors to
process SCSI commands. These descriptors come with their own ring
buffers. A guest could set the page count for these rings to an
arbitrary value, leading to an infinite loop or an out-of-bounds
access. Add a check to avoid this.

Reported-by: Tom Victor <vv474172261@gmail.com>
Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
Message-Id: <1472626169-12989-1-git-send-email-ppandit@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 7f61f4690d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:41:34 -05:00
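A sketch of the bounds check on the guest-supplied page counts. PVSCSI_SETUP_RINGS_MAX_NUM_PAGES and the descriptor field names follow the device's setup-rings structure, but treat the exact spellings as assumptions.

    /* Refuse ring setup if the guest asks for zero pages or for more
     * pages than the emulated device supports.
     */
    if (ri->reqRingNumPages == 0 ||
        ri->reqRingNumPages > PVSCSI_SETUP_RINGS_MAX_NUM_PAGES ||
        ri->cmpRingNumPages == 0 ||
        ri->cmpRingNumPages > PVSCSI_SETUP_RINGS_MAX_NUM_PAGES) {
        return -1;
    }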
Rony Weng
2f8e8c7396 scsi-disk: change disk serial length from 20 to 36
OpenStack Cinder assigns a volume a 36-character UUID as its serial
number.  QEMU shrinks the UUID to 20 characters, which no longer
matches the original UUID.

Note that there is no limit to the length of the serial number in
the SCSI spec.  20 was copy-pasted from virtio-blk which in turn was
copy-pasted from ATA; 36 is even more arbitrary.  However, bumping it
up too much might cause issues (e.g. 252 seems to make sense because
then the maximum amount of returned data is 256; but who knows whether
there's an off-by-one somewhere for such a nicely rounded number).

Signed-off-by: Rony Weng <ronyweng@synology.com>
Message-Id: <1472457138-23386-1-git-send-email-ronyweng@synology.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 48b6206305)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:41:34 -05:00
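The change itself is a one-constant bump in the serial-number truncation; a hedged sketch of the limit it relaxes, with the exact location in the INQUIRY/VPD handling left aside:

    size_t l = strlen(s->serial);

    /* Was capped at 20, which mangled Cinder's 36-character volume UUIDs. */
    if (l > 36) {
        l = 36;
    }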
Lin Ma
069e885d83 qemu-char: avoid segfault if user lacks permission for a given logfile
The function qemu_chr_alloc returns NULL if it fails to open the logfile for any
reason, e.g. a missing write permission. The tty, stdio and msmouse backends
need to check this return value to avoid a segfault in this case.

Signed-off-by: Lin Ma <lma@suse.com>
Cc: qemu-stable <qemu-stable@nongnu.org>
Message-Id: <20160914062250.22226-1-lma@suse.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 71200fb966)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:41:34 -05:00
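The pattern applied to each affected backend, shown in isolation; the surrounding open function is elided and the variable names are illustrative.

    chr = qemu_chr_alloc(common, errp);
    if (!chr) {
        /* e.g. the logfile could not be opened for writing */
        return NULL;
    }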
Prasad J Pandit
bfb15f77bb scsi: pvscsi: limit process IO loop to ring size
While processing I/O requests, the VMware Paravirtual SCSI emulator
could run into an infinite loop if 'pvscsi_ring_pop_req_descr'
always returned a positive value. Limit the I/O loop to the ring size.

Cc: qemu-stable@nongnu.org
Reported-by: Li Qiang <liqiang6-s@360.cn>
Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
Message-Id: <1473845952-30785-1-git-send-email-ppandit@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit d251157ac1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:41:34 -05:00
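One way to bound the loop, shown as a simplified sketch rather than the exact upstream change: never consume more descriptors in a single pass than the request ring can hold.

    uint32_t budget = ring_size;   /* entries the request ring can hold */
    hwaddr next_descr_pa;

    while (budget-- != 0 &&
           (next_descr_pa = pvscsi_ring_pop_req_descr(&s->rings)) != 0) {
        /* fetch and handle the request descriptor at next_descr_pa */
    }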
Li Qiang
c6a7b922f8 scsi: mptsas: use g_new0 to allocate MPTSASRequest object
When processing an I/O request, mptsas uses g_new to allocate
a 'req' object. If an error occurs before 'req->sreq' is
allocated, it can lead to an OOB write in the mptsas_free_request
function. Use g_new0 to avoid it.

Reported-by: Li Qiang <liqiang6-s@360.cn>
Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
Message-Id: <1473684251-17476-1-git-send-email-ppandit@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 670e56d3ed)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:41:34 -05:00
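The essence of the change is a single allocation call; zero-initializing the request means an early error path frees NULL pointers instead of stack garbage.

    /* g_new() left req->sreq and friends uninitialized; g_new0() zeroes
     * them, so mptsas_free_request() is safe on early error paths.
     */
    req = g_new0(MPTSASRequest, 1);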
Greg Kurz
d06c61f310 9pfs: fix potential segfault during walk
If the call to fid_to_qid() returns an error, we will call v9fs_path_free()
on uninitialized paths.

This is a regression introduced by the following commit:

56f101ecce 9pfs: handle walk of ".." in the root directory

Let's fix this by initializing dpath and path before calling fid_to_qid().

Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
[groug: updated the changelog to indicate this is a regression and to provide
        the offending commit SHA1]
Signed-off-by: Greg Kurz <groug@kaod.org>

(cherry picked from commit 13fd08e631)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:41:34 -05:00
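A sketch of the reordering in v9fs_walk(), assuming the usual v9fs_path_init()/v9fs_path_free() pairing; the surrounding function is heavily abbreviated and the names follow typical 9pfs code, so treat them as illustrative.

    V9fsPath dpath, path;

    /* Initialize both paths up front so the error path can always
     * v9fs_path_free() them, even when fid_to_qid() fails.
     */
    v9fs_path_init(&dpath);
    v9fs_path_init(&path);

    err = fid_to_qid(pdu, fidp, &qid);
    if (err < 0) {
        goto out;   /* 'out' frees dpath/path, which are now valid */
    }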
Gonglei
91a2f46297 vnc: fix qemu crash because of SIGSEGV
The backtrace is:

0x00007f0b75cdf880 in pixman_image_get_stride () from /lib64/libpixman-1.so.0
0x00007f0b77bcb3cf in vnc_server_fb_stride (vd=0x7f0b7a1a2bb0) at ui/vnc.c:680
vnc_dpy_copy (dcl=0x7f0b7a1a2c00, src_x=224, src_y=263, dst_x=319, dst_y=363, w=1, h=1) at ui/vnc.c:915
0x00007f0b77bbcc35 in dpy_gfx_copy (con=0x7f0b7a146210, src_x=src_x@entry=224, src_y=src_y@entry=263, dst_x=dst_x@entry=319,
dst_y=dst_y@entry=363, w=1, h=1) at ui/console.c:1575
0x00007f0b77bbda4e in qemu_console_copy (con=<optimized out>, src_x=src_x@entry=224, src_y=src_y@entry=263, dst_x=dst_x@entry=319,
dst_y=dst_y@entry=363, w=<optimized out>, h=<optimized out>) at ui/console.c:2111
0x00007f0b77ac0980 in cirrus_do_copy (h=<optimized out>, w=<optimized out>, src=<optimized out>, dst=<optimized out>, s=0x7f0b7b086090) at hw/display/cirrus_vga.c:774
cirrus_bitblt_videotovideo_copy (s=0x7f0b7b086090) at hw/display/cirrus_vga.c:793
cirrus_bitblt_videotovideo (s=0x7f0b7b086090) at hw/display/cirrus_vga.c:915
cirrus_bitblt_start (s=0x7f0b7b086090) at hw/display/cirrus_vga.c:1056
0x00007f0b77965cfb in memory_region_write_accessor (mr=0x7f0b7b096e40, addr=320, value=<optimized out>, size=1, shift=<optimized out>,mask=<optimized out>, attrs=...) at /root/rpmbuild/BUILD/master/qemu/memory.c:525
0x00007f0b77963f59 in access_with_adjusted_size (addr=addr@entry=320, value=value@entry=0x7f0b69a268d8, size=size@entry=4,
access_size_min=<optimized out>, access_size_max=<optimized out>, access=access@entry=0x7f0b77965c80 <memory_region_write_accessor>,
mr=mr@entry=0x7f0b7b096e40, attrs=attrs@entry=...) at /root/rpmbuild/BUILD/master/qemu/memory.c:591
0x00007f0b77968315 in memory_region_dispatch_write (mr=mr@entry=0x7f0b7b096e40, addr=addr@entry=320, data=18446744073709551362,
size=size@entry=4, attrs=attrs@entry=...) at /root/rpmbuild/BUILD/master/qemu/memory.c:1262
0x00007f0b779256a9 in address_space_write_continue (mr=0x7f0b7b096e40, l=4, addr1=320, len=4, buf=0x7f0b77713028 "\002\377\377\377",
attrs=..., addr=4273930560, as=0x7f0b7827d280 <address_space_memory>) at /root/rpmbuild/BUILD/master/qemu/exec.c:2544
address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., buf=<optimized out>, len=<optimized out>) at /root/rpmbuild/BUILD/master/qemu/exec.c:2601
0x00007f0b77925c1d in address_space_rw (as=<optimized out>, addr=<optimized out>, attrs=..., attrs@entry=...,
buf=buf@entry=0x7f0b77713028 "\002\377\377\377", len=<optimized out>, is_write=<optimized out>) at /root/rpmbuild/BUILD/master/qemu/exec.c:2703
0x00007f0b77962f53 in kvm_cpu_exec (cpu=cpu@entry=0x7f0b79fcc2d0) at /root/rpmbuild/BUILD/master/qemu/kvm-all.c:1965
0x00007f0b77950cc6 in qemu_kvm_cpu_thread_fn (arg=0x7f0b79fcc2d0) at /root/rpmbuild/BUILD/master/qemu/cpus.c:1078
0x00007f0b744b3dc5 in start_thread (arg=0x7f0b69a27700) at pthread_create.c:308
0x00007f0b70d3d66d in clone () from /lib64/libc.so.6

The code path while meeting segfault:
 vnc_dpy_copy
   vnc_update_client
     vnc_disconnect_finish [while vnc_disconnect_start() is invoked because something went wrong]
       vnc_update_server_surface
         vd->server = NULL;
   vnc_server_fb_stride
     pixman_image_get_stride(vd->server)

Let's add a non-NULL check before calling vnc_server_fb_stride() to avoid the segmentation fault.

Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Daniel P. Berrange <berrange@redhat.com>
Reported-by: Yanying Zhuang <ann.zhuangyanying@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 1472788698-120964-1-git-send-email-arei.gonglei@huawei.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 3e10c3ecfc)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:41:33 -05:00
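A sketch of the guard; the exact placement inside vnc_dpy_copy() may differ, but the point is to re-check vd->server after vnc_update_client() may have torn it down.

    /* A client error inside vnc_update_client() can end up in
     * vnc_disconnect_finish() -> vnc_update_server_surface(), which sets
     * vd->server = NULL, so re-check it here.
     */
    if (!vd->server) {
        return;
    }
    stride = vnc_server_fb_stride(vd);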
Ladi Prosek
520d4b288f virtio-balloon: discard virtqueue element on reset
The one pending element is being freed but not discarded on device
reset, which causes svq->inuse to creep up, eventually hitting the
"Virtqueue size exceeded" error.

Properly discarding the element on device reset makes sure that its
buffers are unmapped and the inuse counter stays balanced.

Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Roman Kagan <rkagan@virtuozzo.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 104e70cae7)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:41:33 -05:00
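A sketch of the reset handler change, assuming the pending element lives in a field like s->stats_vq_elem and that virtqueue_discard() takes the element plus the length written so far; treat the identifiers as illustrative.

    /* On device reset, give the in-flight stats element back to the
     * virtqueue (unmapping its buffers and decrementing vq->inuse)
     * before freeing it.
     */
    if (s->stats_vq_elem != NULL) {
        virtqueue_discard(s->svq, s->stats_vq_elem, 0);
        g_free(s->stats_vq_elem);
        s->stats_vq_elem = NULL;
    }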
Stefan Hajnoczi
4b6542dd17 virtio: zero vq->inuse in virtio_reset()
vq->inuse must be zeroed upon device reset like most other virtqueue
fields.

In theory, virtio_reset() just needs assert(vq->inuse == 0) since
devices must clean up in-flight requests during reset (requests cannot
not be leaked!).

In practice, it is difficult to achieve vq->inuse == 0 across reset
because balloon, blk, 9p, etc implement various different strategies for
cleaning up requests.  Most devices call g_free(elem) directly without
telling virtio.c that the VirtQueueElement is cleaned up.  Therefore
vq->inuse is not decremented during reset.

This patch zeroes vq->inuse and trusts that devices are not leaking
VirtQueueElements across reset.

I will send a follow-up series that refactors request life-cycle across
all devices and converts vq->inuse = 0 into assert(vq->inuse == 0) but
this more invasive approach is not appropriate for stable trees.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-stable <qemu-stable@nongnu.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Ladi Prosek <lprosek@redhat.com>
(cherry picked from commit 4b7f91ed02)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 16:41:33 -05:00
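The corresponding change in virtio_reset() is a single assignment inside the existing per-queue loop; the rest of the loop body is elided below.

    for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
        /* ...existing per-queue field resets... */
        vdev->vq[i].inuse = 0;   /* new: drop any stale in-flight count */
    }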
Michael Roth
c1a77fd6fa Merge tag 'ppc-for-2.7-20161013' into stable-2.7-staging
qemu-2.7 (stable): ppc patch queue 2016-10-13

TCG for ppc does not properly implement hardware transactional memory.
It has a stub implementation in which transactions always fail.
Unfortunately in v2.7.0, HTM is advertised as being available to
guests, which means guests may incorrectly attempt to use it and hang.

This has been the case for a while, but has become more urgent with
recent (guest) Linux kernel versions which attempt to lazily enable
TM.  Under TCG that now triggers the problem regularly, instead of
just when running a TM aware userspace program.

The problem is already fixed in the 2.8/master branch, by correctly
advertising HTM as not being available with TCG.  This series
backports the relevant patches to the qemu-2.7 stable branch to fix
the problem there.

* tag 'ppc-for-2.7-20161013':
  ppc: Check the availability of transactional memory
  hw/ppc/spapr: Fix the selection of the processor features
  hw/ppc/spapr: Move code related to "ibm,pa-features" to a separate function
  linux-headers: update
2016-11-02 16:40:01 -05:00
6259 changed files with 454316 additions and 1126336 deletions


@@ -1,27 +0,0 @@
env:
CIRRUS_CLONE_DEPTH: 1
freebsd_12_task:
freebsd_instance:
image: freebsd-12-0-release-amd64
cpu: 8
memory: 8G
install_script: pkg install -y
bison curl cyrus-sasl git glib gmake gnutls
nettle perl5 pixman pkgconf png usbredir
script:
- mkdir build
- cd build
- ../configure || { cat config.log; exit 1; }
- gmake -j8
- gmake -j8 V=1 check
macos_task:
osx_instance:
image: mojave-base
install_script:
- brew install pkg-config python glib pixman make sdl2
script:
- ./configure --python=/usr/local/bin/python3 || { cat config.log; exit 1; }
- gmake -j$(sysctl -n hw.ncpu)
- gmake check -j$(sysctl -n hw.ncpu)


@@ -1,34 +0,0 @@
# EditorConfig is a file format and collection of text editor plugins
# for maintaining consistent coding styles between different editors
# and IDEs. Most popular editors support this either natively or via
# plugin.
#
# Check https://editorconfig.org for details.
root = true
[*]
end_of_line = lf
insert_final_newline = true
charset = utf-8
[*.mak]
indent_style = tab
indent_size = 8
file_type_emacs = makefile
[Makefile*]
indent_style = tab
indent_size = 8
file_type_emacs = makefile
[*.{c,h}]
indent_style = space
indent_size = 4
[*.{vert,frag}]
file_type_emacs = glsl
[*.json]
indent_style = space
file_type_emacs = python


@@ -1,8 +0,0 @@
# GDB may have ./.gdbinit loading disabled by default. In that case you can
# follow the instructions it prints. They boil down to adding the following to
# your home directory's ~/.gdbinit file:
#
# add-auto-load-safe-path /path/to/qemu/.gdbinit
# Load QEMU-specific sub-commands and settings
source scripts/qemu-gdb.py

.gitignore

@@ -1,4 +1,3 @@
/.doctrees
/config-devices.*
/config-all-devices.*
/config-all-disas.*
@@ -6,18 +5,21 @@
/config-target.*
/config.status
/config-temp
/elf2dmp
/trace-events-all
/trace/generated-tracers.h
/trace/generated-tracers.c
/trace/generated-tracers-dtrace.h
/trace/generated-tracers.dtrace
/trace/generated-events.h
/trace/generated-events.c
/trace/generated-helpers-wrappers.h
/trace/generated-helpers.h
/trace/generated-helpers.c
/trace/generated-tcg-tracers.h
/trace/generated-ust-provider.h
/trace/generated-ust.c
/ui/shader/texture-blit-frag.h
/ui/shader/texture-blit-vert.h
/ui/shader/texture-blit-flip-vert.h
/ui/input-keymap-*.c
*-timestamp
/*-softmmu
/*-darwin-user
@@ -29,24 +31,17 @@
/libuser
/linux-headers/asm
/qga/qapi-generated
/qapi-gen-timestamp
/qapi/qapi-builtin-types.[ch]
/qapi/qapi-builtin-visit.[ch]
/qapi/qapi-commands-*.[ch]
/qapi/qapi-commands.[ch]
/qapi/qapi-emit-events.[ch]
/qapi/qapi-events-*.[ch]
/qapi/qapi-events.[ch]
/qapi/qapi-introspect.[ch]
/qapi/qapi-types-*.[ch]
/qapi/qapi-types.[ch]
/qapi/qapi-visit-*.[ch]
/qapi/qapi-visit.[ch]
/qapi/qapi-doc.texi
/qapi-generated
/qapi-types.[ch]
/qapi-visit.[ch]
/qapi-event.[ch]
/qmp-commands.h
/qmp-introspect.[ch]
/qmp-marshal.c
/qemu-doc.html
/qemu-tech.html
/qemu-doc.info
/qemu-doc.txt
/qemu-edid
/qemu-tech.info
/qemu-img
/qemu-nbd
/qemu-options.def
@@ -56,21 +51,16 @@
/qemu-io
/qemu-ga
/qemu-bridge-helper
/qemu-keymap
/qemu-monitor.texi
/qemu-monitor-info.texi
/qemu-version.h
/qemu-version.h.tmp
/module_block.h
/scsi/qemu-pr-helper
/vhost-user-scsi
/vhost-user-blk
/qmp-commands.txt
/vscclient
/fsdev/virtfs-proxy-helper
*.tmp
*.[1-9]
*.a
*.aux
*.cp
*.dvi
*.exe
*.msi
*.dll
@@ -92,10 +82,13 @@
*.d
!/scripts/qemu-guest-agent/fsfreeze-hook.d
*.o
*.lo
*.la
*.pc
.libs
.sdk
*.gcda
*.gcno
*.gcov
/pc-bios/bios-pq/status
/pc-bios/vgabios-pq/status
/pc-bios/optionrom/linuxboot.asm
@@ -106,10 +99,6 @@
/pc-bios/optionrom/linuxboot_dma.bin
/pc-bios/optionrom/linuxboot_dma.raw
/pc-bios/optionrom/linuxboot_dma.img
/pc-bios/optionrom/pvh.asm
/pc-bios/optionrom/pvh.bin
/pc-bios/optionrom/pvh.raw
/pc-bios/optionrom/pvh.img
/pc-bios/optionrom/multiboot.asm
/pc-bios/optionrom/multiboot.bin
/pc-bios/optionrom/multiboot.raw
@@ -120,40 +109,9 @@
/pc-bios/optionrom/kvmvapic.img
/pc-bios/s390-ccw/s390-ccw.elf
/pc-bios/s390-ccw/s390-ccw.img
/docs/built
/docs/interop/qemu-ga-qapi.texi
/docs/interop/qemu-ga-ref.html
/docs/interop/qemu-ga-ref.info*
/docs/interop/qemu-ga-ref.txt
/docs/interop/qemu-qmp-qapi.texi
/docs/interop/qemu-qmp-ref.html
/docs/interop/qemu-qmp-ref.info*
/docs/interop/qemu-qmp-ref.txt
/docs/version.texi
*.tps
.stgit-*
.git-submodule-status
cscope.*
tags
TAGS
docker-src.*
*~
*.ast_raw
*.depend_raw
trace.h
trace.c
trace-ust.h
trace-ust.h
trace-dtrace.h
trace-dtrace.dtrace
trace-root.h
trace-root.c
trace-ust-root.h
trace-ust-root.h
trace-ust-all.h
trace-ust-all.c
trace-dtrace-root.h
trace-dtrace-root.dtrace
trace-ust-all.h
trace-ust-all.c
/target/arm/decode-sve.inc.c


@@ -1,73 +0,0 @@
before_script:
- apt-get update -qq
- apt-get install -y -qq flex bison libglib2.0-dev libpixman-1-dev genisoimage
build-system1:
script:
- apt-get install -y -qq libgtk-3-dev libvte-dev nettle-dev libcacard-dev
libusb-dev libvde-dev libspice-protocol-dev libgl1-mesa-dev
- ./configure --enable-werror --target-list="aarch64-softmmu alpha-softmmu
cris-softmmu hppa-softmmu lm32-softmmu moxie-softmmu microblazeel-softmmu
mips64el-softmmu m68k-softmmu ppc-softmmu riscv64-softmmu sparc-softmmu"
- make -j2
- make -j2 check
build-system2:
script:
- apt-get install -y -qq libsdl2-dev libgcrypt-dev libbrlapi-dev libaio-dev
libfdt-dev liblzo2-dev librdmacm-dev libibverbs-dev libibumad-dev
- ./configure --enable-werror --target-list="tricore-softmmu unicore32-softmmu
microblaze-softmmu mips-softmmu riscv32-softmmu s390x-softmmu sh4-softmmu
sparc64-softmmu x86_64-softmmu xtensa-softmmu nios2-softmmu or1k-softmmu"
- make -j2
- make -j2 check
build-disabled:
script:
- ./configure --enable-werror --disable-rdma --disable-slirp --disable-curl
--disable-capstone --disable-live-block-migration --disable-glusterfs
--disable-replication --disable-coroutine-pool --disable-smartcard
--disable-guest-agent --disable-curses --disable-libxml2 --disable-tpm
--disable-qom-cast-debug --disable-spice --disable-vhost-vsock
--disable-vhost-net --disable-vhost-crypto --disable-vhost-user
--target-list="i386-softmmu ppc64-softmmu mips64-softmmu i386-linux-user"
- make -j2
- make -j2 check-qtest SPEED=slow
build-tcg-disabled:
script:
- apt-get install -y -qq clang libgtk-3-dev libbluetooth-dev libusb-dev
- ./configure --cc=clang --enable-werror --disable-tcg --audio-drv-list=""
- make -j2
- make check-unit
- make check-qapi-schema
- cd tests/qemu-iotests/
- ./check -raw 001 002 003 004 005 008 009 010 011 012 021 025 032 033 048
052 063 077 086 101 104 106 113 147 148 150 151 152 157 159 160
163 170 171 183 184 192 194 197 205 208 215 221 222 226 227 236
- ./check -qcow2 001 002 003 004 005 007 008 009 010 011 012 013 017 018 019
020 021 022 024 025 027 028 029 031 032 033 034 035 036 037 038
039 040 042 043 046 047 048 049 050 051 052 053 054 056 057 058
060 061 062 063 065 066 067 068 069 071 072 073 074 079 080 082
085 086 089 090 091 095 096 097 098 099 102 103 104 105 107 108
110 111 114 117 120 122 124 126 127 129 130 132 133 134 137 138
139 140 141 142 143 144 145 147 150 151 152 154 155 156 157 158
161 165 170 172 174 176 177 179 184 186 187 190 192 194 195 196
197 200 202 203 205 208 209 214 215 216 217 218 222 226 227 229 234
build-user:
script:
- ./configure --enable-werror --disable-system --disable-guest-agent
--disable-capstone --disable-slirp --disable-fdt
- make -j2
- make run-tcg-tests-i386-linux-user run-tcg-tests-x86_64-linux-user
build-clang:
script:
- apt-get install -y -qq clang libsdl2-dev
xfslibs-dev libiscsi-dev libnfs-dev libseccomp-dev gnutls-dev librbd-dev
- ./configure --cc=clang --cxx=clang++ --enable-werror
--target-list="alpha-softmmu arm-softmmu m68k-softmmu mips64-softmmu
ppc-softmmu s390x-softmmu x86_64-softmmu arm-linux-user"
- make -j2
- make -j2 check

.gitmodules

@@ -1,54 +1,33 @@
[submodule "roms/vgabios"]
path = roms/vgabios
url = git://git.qemu-project.org/vgabios.git/
[submodule "roms/seabios"]
path = roms/seabios
url = https://git.qemu.org/git/seabios.git/
url = git://git.qemu-project.org/seabios.git/
[submodule "roms/SLOF"]
path = roms/SLOF
url = https://git.qemu.org/git/SLOF.git
url = git://git.qemu-project.org/SLOF.git
[submodule "roms/ipxe"]
path = roms/ipxe
url = https://git.qemu.org/git/ipxe.git
url = git://git.qemu-project.org/ipxe.git
[submodule "roms/openbios"]
path = roms/openbios
url = https://git.qemu.org/git/openbios.git
url = git://git.qemu-project.org/openbios.git
[submodule "roms/openhackware"]
path = roms/openhackware
url = https://git.qemu.org/git/openhackware.git
url = git://git.qemu-project.org/openhackware.git
[submodule "roms/qemu-palcode"]
path = roms/qemu-palcode
url = https://git.qemu.org/git/qemu-palcode.git
url = git://github.com/rth7680/qemu-palcode.git
[submodule "roms/sgabios"]
path = roms/sgabios
url = https://git.qemu.org/git/sgabios.git
url = git://git.qemu-project.org/sgabios.git
[submodule "pixman"]
path = pixman
url = git://anongit.freedesktop.org/pixman
[submodule "dtc"]
path = dtc
url = https://git.qemu.org/git/dtc.git
url = git://git.qemu-project.org/dtc.git
[submodule "roms/u-boot"]
path = roms/u-boot
url = https://git.qemu.org/git/u-boot.git
[submodule "roms/skiboot"]
path = roms/skiboot
url = https://git.qemu.org/git/skiboot.git
[submodule "roms/QemuMacDrivers"]
path = roms/QemuMacDrivers
url = https://git.qemu.org/git/QemuMacDrivers.git
[submodule "ui/keycodemapdb"]
path = ui/keycodemapdb
url = https://git.qemu.org/git/keycodemapdb.git
[submodule "capstone"]
path = capstone
url = https://git.qemu.org/git/capstone.git
[submodule "roms/seabios-hppa"]
path = roms/seabios-hppa
url = https://github.com/hdeller/seabios-hppa.git
[submodule "roms/u-boot-sam460ex"]
path = roms/u-boot-sam460ex
url = https://git.qemu.org/git/u-boot-sam460ex.git
[submodule "tests/fp/berkeley-testfloat-3"]
path = tests/fp/berkeley-testfloat-3
url = https://github.com/cota/berkeley-testfloat-3
[submodule "tests/fp/berkeley-softfloat-3"]
path = tests/fp/berkeley-softfloat-3
url = https://github.com/cota/berkeley-softfloat-3
[submodule "roms/edk2"]
path = roms/edk2
url = https://github.com/tianocore/edk2.git
url = git://git.qemu-project.org/u-boot.git


@@ -1,51 +0,0 @@
#
# Common git-publish profiles that can be used to send patches to QEMU upstream.
#
# See https://github.com/stefanha/git-publish for more information
#
[gitpublishprofile "default"]
base = master
to = qemu-devel@nongnu.org
cccmd = scripts/get_maintainer.pl --noroles --norolestats --nogit --nogit-fallback 2>/dev/null
[gitpublishprofile "rfc"]
base = master
prefix = RFC PATCH
to = qemu-devel@nongnu.org
cccmd = scripts/get_maintainer.pl --noroles --norolestats --nogit --nogit-fallback 2>/dev/null
[gitpublishprofile "stable"]
base = master
to = qemu-devel@nongnu.org
cc = qemu-stable@nongnu.org
cccmd = scripts/get_maintainer.pl --noroles --norolestats --nogit --nogit-fallback 2>/dev/null
[gitpublishprofile "trivial"]
base = master
to = qemu-devel@nongnu.org
cc = qemu-trivial@nongnu.org
cccmd = scripts/get_maintainer.pl --noroles --norolestats --nogit --nogit-fallback 2>/dev/null
[gitpublishprofile "block"]
base = master
to = qemu-devel@nongnu.org
cc = qemu-block@nongnu.org
cccmd = scripts/get_maintainer.pl --noroles --norolestats --nogit --nogit-fallback 2>/dev/null
[gitpublishprofile "arm"]
base = master
to = qemu-devel@nongnu.org
cc = qemu-arm@nongnu.org
cccmd = scripts/get_maintainer.pl --noroles --norolestats --nogit --nogit-fallback 2>/dev/null
[gitpublishprofile "s390"]
base = master
to = qemu-devel@nongnu.org
cc = qemu-s390@nongnu.org
cccmd = scripts/get_maintainer.pl --noroles --norolestats --nogit --nogit-fallback 2>/dev/null
[gitpublishprofile "ppc"]
base = master
to = qemu-devel@nongnu.org
cc = qemu-ppc@nongnu.org
cccmd = scripts/get_maintainer.pl --noroles --norolestats --nogit --nogit-fallback 2>/dev/null


@@ -1,7 +1,6 @@
# This mailmap fixes up author names/addresses.
# The first section translates weird addresses from the original git import
# into proper addresses so that they are counted properly by git shortlog.
# This mailmap just translates the weird addresses from the original import into git
# into proper addresses so that they are counted properly in git shortlog output.
#
Andrzej Zaborowski <balrogg@gmail.com> balrog <balrog@c046a42c-6fe2-441c-8c8c-71466251a162>
Anthony Liguori <anthony@codemonkey.ws> aliguori <aliguori@c046a42c-6fe2-441c-8c8c-71466251a162>
Anthony Liguori <anthony@codemonkey.ws> Anthony Liguori <aliguori@us.ibm.com>
@@ -9,31 +8,10 @@ Aurelien Jarno <aurelien@aurel32.net> aurel32 <aurel32@c046a42c-6fe2-441c-8c8c-7
Blue Swirl <blauwirbel@gmail.com> blueswir1 <blueswir1@c046a42c-6fe2-441c-8c8c-71466251a162>
Edgar E. Iglesias <edgar.iglesias@gmail.com> edgar_igl <edgar_igl@c046a42c-6fe2-441c-8c8c-71466251a162>
Fabrice Bellard <fabrice@bellard.org> bellard <bellard@c046a42c-6fe2-441c-8c8c-71466251a162>
James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
Jocelyn Mayer <l_indien@magic.fr> j_mayer <j_mayer@c046a42c-6fe2-441c-8c8c-71466251a162>
Paul Brook <paul@codesourcery.com> pbrook <pbrook@c046a42c-6fe2-441c-8c8c-71466251a162>
Yongbok Kim <yongbok.kim@mips.com> <yongbok.kim@imgtec.com>
Aleksandar Markovic <amarkovic@wavecomp.com> <aleksandar.markovic@mips.com>
Aleksandar Markovic <amarkovic@wavecomp.com> <aleksandar.markovic@imgtec.com>
Paul Burton <pburton@wavecomp.com> <paul.burton@mips.com>
Paul Burton <pburton@wavecomp.com> <paul.burton@imgtec.com>
Paul Burton <pburton@wavecomp.com> <paul@archlinuxmips.org>
Thiemo Seufer <ths@networkno.de> ths <ths@c046a42c-6fe2-441c-8c8c-71466251a162>
malc <av1474@comtv.ru> malc <malc@c046a42c-6fe2-441c-8c8c-71466251a162>
# There is also a:
# (no author) <(no author)@c046a42c-6fe2-441c-8c8c-71466251a162>
# for the cvs2svn initialization commit e63c3dc74bf.
# Next, translate a few commits where mailman rewrote the From: line due
# to strict SPF, although we prefer to avoid adding more entries like that.
Ed Swierk <eswierk@skyportsystems.com> Ed Swierk via Qemu-devel <qemu-devel@nongnu.org>
Ian McKellar <ianloic@google.com> Ian McKellar via Qemu-devel <qemu-devel@nongnu.org>
Julia Suvorova <jusual@mail.ru> Julia Suvorova via Qemu-devel <qemu-devel@nongnu.org>
Justin Terry (VM) <juterry@microsoft.com> Justin Terry (VM) via Qemu-devel <qemu-devel@nongnu.org>
# Also list preferred name forms where people have changed their
# git author config, or had utf8/latin1 encoding issues.
Daniel P. Berrangé <berrange@redhat.com>
Reimar Döffinger <Reimar.Doeffinger@gmx.de>


@@ -1,40 +0,0 @@
language: c
git:
submodules: false
env:
global:
- LC_ALL=C
matrix:
- IMAGE=debian-amd64
TARGET_LIST=x86_64-softmmu,x86_64-linux-user
# currently disabled as the mxe.cc repos are down
# - IMAGE=debian-win32-cross
# TARGET_LIST=arm-softmmu,i386-softmmu,lm32-softmmu
# - IMAGE=debian-win64-cross
# TARGET_LIST=aarch64-softmmu,sparc64-softmmu,x86_64-softmmu
- IMAGE=debian-armel-cross
TARGET_LIST=arm-softmmu,arm-linux-user,armeb-linux-user
- IMAGE=debian-armhf-cross
TARGET_LIST=arm-softmmu,arm-linux-user,armeb-linux-user
- IMAGE=debian-arm64-cross
TARGET_LIST=aarch64-softmmu,aarch64-linux-user
- IMAGE=debian-s390x-cross
TARGET_LIST=s390x-softmmu,s390x-linux-user
- IMAGE=debian-mips-cross
TARGET_LIST=mips-softmmu,mipsel-linux-user
- IMAGE=debian-mips64el-cross
TARGET_LIST=mips64el-softmmu,mips64el-linux-user
- IMAGE=debian-ppc64el-cross
TARGET_LIST=ppc64-softmmu,ppc64-linux-user,ppc64abi32-linux-user
build:
pre_ci:
- make docker-image-${IMAGE} V=1
pre_ci_boot:
image_name: qemu
image_tag: ${IMAGE}
pull: false
options: "-e HOME=/root"
ci:
- unset CC
- ./configure ${QEMU_CONFIGURE_OPTS} --target-list=${TARGET_LIST}
- make -j$(($(getconf _NPROCESSORS_ONLN) + 1))


@@ -1,28 +1,24 @@
# The current Travis default is a VM based 16.04 Xenial on GCE
# Additional builds with specific requirements for a full VM need to
# be added as additional matrix: entries later on
dist: xenial
sudo: false
language: c
python:
- "2.4"
compiler:
- gcc
- clang
cache: ccache
addons:
apt:
packages:
# Build dependencies
- libaio-dev
- libattr1-dev
- libbrlapi-dev
- libcap-ng-dev
- libgcc-4.8-dev
- libgnutls-dev
- libgtk-3-dev
- libiscsi-dev
- liblttng-ust-dev
- libncurses5-dev
- libnfs-dev
- libncurses5-dev
- libnss3-dev
- libpixman-1-dev
- libpng12-dev
@@ -34,15 +30,9 @@ addons:
- libssh2-1-dev
- liburcu-dev
- libusb-1.0-0-dev
- libvte-2.91-dev
- libvte-2.90-dev
- sparse
- uuid-dev
- gcovr
homebrew:
packages:
- glib
- pixman
# The channel name "irc.oftc.net#qemu" is encrypted against qemu/qemu
# to prevent IRC notifications from forks. This was created using:
@@ -53,235 +43,59 @@ notifications:
- secure: "F7GDRgjuOo5IUyRLqSkmDL7kvdU4UcH3Lm/W2db2JnDHTGCqgEdaYEYKciyCLZ57vOTsTsOgesN8iUT7hNHBd1KWKjZe9KDTZWppWRYVwAwQMzVeSOsbbU4tRoJ6Pp+3qhH1Z0eGYR9ZgKYAoTumDFgSAYRp4IscKS8jkoedOqM="
on_success: change
on_failure: always
env:
global:
- SRC_DIR="."
- BUILD_DIR="."
- BASE_CONFIG="--disable-docs --disable-tools"
- TEST_CMD="make check -j3 V=1"
# This is broadly a list of "mainline" softmmu targets which have support across the major distros
- MAIN_SOFTMMU_TARGETS="aarch64-softmmu,arm-softmmu,i386-softmmu,mips-softmmu,mips64-softmmu,ppc64-softmmu,riscv64-softmmu,s390x-softmmu,x86_64-softmmu"
- TEST_CMD="make check"
matrix:
- CONFIG=""
- CONFIG="--enable-debug --enable-debug-tcg --enable-trace-backends=log"
- CONFIG="--disable-linux-aio --disable-cap-ng --disable-attr --disable-brlapi --disable-uuid --disable-libusb"
- CONFIG="--enable-modules"
- CONFIG="--with-coroutine=ucontext"
- CONFIG="--with-coroutine=sigaltstack"
git:
# we want to do this ourselves
submodules: false
before_install:
- if [ "$TRAVIS_OS_NAME" == "osx" ]; then brew update ; fi
- if [ "$TRAVIS_OS_NAME" == "osx" ]; then brew install libffi gettext glib pixman ; fi
- wget -O - http://people.linaro.org/~alex.bennee/qemu-submodule-git-seed.tar.xz | tar -xvJ
- git submodule update --init --recursive
before_script:
- mkdir -p ${BUILD_DIR} && cd ${BUILD_DIR}
- ${SRC_DIR}/configure ${BASE_CONFIG} ${CONFIG} || { cat config.log && exit 1; }
- ./configure ${CONFIG}
script:
- make -j3 && ${TEST_CMD}
matrix:
include:
- env:
- CONFIG="--disable-system"
# we split the system builds as it takes a while to build them all
- env:
- CONFIG="--disable-user --target-list=${MAIN_SOFTMMU_TARGETS}"
- env:
- CONFIG="--disable-user --target-list-exclude=${MAIN_SOFTMMU_TARGETS}"
# Just build tools and run minimal unit and softfloat checks
- env:
- BASE_CONFIG="--enable-tools"
- CONFIG="--disable-user --disable-system"
- TEST_CMD="make check-unit check-softfloat -j3"
- env:
- CONFIG="--enable-debug --enable-debug-tcg --disable-user"
# TCG debug can be run just on it's own and is mostly agnostic to user/softmmu distinctions
- env:
- CONFIG="--enable-debug-tcg --disable-system"
- env:
- CONFIG="--disable-linux-aio --disable-cap-ng --disable-attr --disable-brlapi --disable-libusb --disable-replication --target-list=${MAIN_SOFTMMU_TARGETS}"
# Module builds are mostly of interest to major distros
- env:
- CONFIG="--enable-modules --target-list=${MAIN_SOFTMMU_TARGETS}"
# Alternate coroutines implementations are only really of interest to KVM users
# However we can't test against KVM on Travis so we can only run unit tests
- env:
- CONFIG="--with-coroutine=ucontext --disable-tcg"
- TEST_CMD="make check-unit -j3 V=1"
- env:
- CONFIG="--with-coroutine=sigaltstack --disable-tcg"
- TEST_CMD="make check-unit -j3 V=1"
# Check we can build docs and tools (out of tree)
- env:
- BUILD_DIR="out-of-tree/build/dir" SRC_DIR="../../.."
- BASE_CONFIG="--enable-tools --enable-docs"
- CONFIG="--target-list=x86_64-softmmu,aarch64-linux-user"
addons:
apt:
packages:
- python-sphinx
- texinfo
- perl
# Test with Clang for compile portability (Travis uses clang-5.0)
- env:
- CONFIG="--disable-system"
compiler: clang
- env:
- CONFIG="--disable-user --target-list=${MAIN_SOFTMMU_TARGETS}"
compiler: clang
- env:
- CONFIG="--disable-user --target-list-exclude=${MAIN_SOFTMMU_TARGETS}"
compiler: clang
# gprof/gcov are GCC features
- env:
- CONFIG="--enable-gprof --enable-gcov --disable-pie --target-list=${MAIN_SOFTMMU_TARGETS}"
after_success:
- ${SRC_DIR}/scripts/travis/coverage-summary.sh
- env: CONFIG="--enable-gprof --enable-gcov --disable-pie"
compiler: gcc
# We manually include builds which we disable "make check" for
- env:
- CONFIG="--without-default-devices --disable-user"
- TEST_CMD=""
# We manually include builds which we disable "make check" for
- env:
- CONFIG="--enable-debug --enable-tcg-interpreter"
- TEST_CMD=""
# We don't need to exercise every backend with every front-end
- env:
- CONFIG="--enable-trace-backends=log,simple,syslog --disable-system"
- TEST_CMD=""
- env:
- CONFIG="--enable-trace-backends=ftrace --target-list=x86_64-softmmu"
- TEST_CMD=""
- env:
- CONFIG="--enable-trace-backends=ust --target-list=x86_64-softmmu"
- TEST_CMD=""
# MacOSX builds
- env:
- CONFIG="--target-list=${MAIN_SOFTMMU_TARGETS}"
- env: CONFIG="--enable-debug --enable-tcg-interpreter"
TEST_CMD=""
compiler: gcc
- env: CONFIG="--enable-trace-backends=simple"
TEST_CMD=""
compiler: gcc
- env: CONFIG="--enable-trace-backends=ftrace"
TEST_CMD=""
compiler: gcc
- env: CONFIG="--enable-trace-backends=ust"
TEST_CMD=""
compiler: gcc
- env: CONFIG="--with-coroutine=gthread"
TEST_CMD=""
compiler: gcc
- env: CONFIG=""
os: osx
osx_image: xcode9.4
compiler: clang
- env:
- CONFIG="--target-list=i386-softmmu,ppc-softmmu,ppc64-softmmu,m68k-softmmu,x86_64-softmmu"
os: osx
osx_image: xcode10.2
compiler: clang
# Python builds
- env:
- CONFIG="--target-list=x86_64-softmmu"
language: python
python:
- "3.4"
- env:
- CONFIG="--target-list=x86_64-softmmu"
language: python
python:
- "3.6"
# Acceptance (Functional) tests
- env:
- CONFIG="--python=/usr/bin/python3 --target-list=x86_64-softmmu"
- TEST_CMD="make AVOCADO_SHOW=app check-acceptance"
- env: CONFIG=""
sudo: required
addons:
apt:
packages:
- python3-pip
- python3.5-venv
# Using newer GCC with sanitizers
- addons:
apt:
update: true
sources:
# PPAs for newer toolchains
- ubuntu-toolchain-r-test
packages:
# Extra toolchains
- gcc-7
- g++-7
# Build dependencies
- libaio-dev
- libattr1-dev
- libbrlapi-dev
- libcap-ng-dev
- libgnutls-dev
- libgtk-3-dev
- libiscsi-dev
- liblttng-ust-dev
- libnfs-dev
- libncurses5-dev
- libnss3-dev
- libpixman-1-dev
- libpng12-dev
- librados-dev
- libsdl1.2-dev
- libseccomp-dev
- libspice-protocol-dev
- libspice-server-dev
- libssh2-1-dev
- liburcu-dev
- libusb-1.0-0-dev
- libvte-2.91-dev
- sparse
- uuid-dev
language: generic
compiler: none
env:
- COMPILER_NAME=gcc CXX=g++-7 CC=gcc-7
- CONFIG="--cc=gcc-7 --cxx=g++-7 --disable-pie --disable-linux-user"
- TEST_CMD=""
before_script:
- ./configure ${CONFIG} --extra-cflags="-g3 -O0 -fsanitize=thread -fuse-ld=gold" || { cat config.log && exit 1; }
# Run check-tcg against linux-user
- env:
- CONFIG="--disable-system"
- TEST_CMD="make -j3 check-tcg V=1"
# Run check-tcg against softmmu targets
- env:
- CONFIG="--target-list=xtensa-softmmu,arm-softmmu"
- TEST_CMD="make -j3 check-tcg V=1"
dist: trusty
compiler: gcc
before_install:
- sudo apt-get update -qq
- sudo apt-get build-dep -qq qemu
- wget -O - http://people.linaro.org/~alex.bennee/qemu-submodule-git-seed.tar.xz | tar -xvJ
- git submodule update --init --recursive


@@ -9,7 +9,7 @@ patches before submitting.
Of course, the most important aspect in any coding style is whitespace.
Crusty old coders who have trouble spotting the glasses on their noses
can tell the difference between a tab and eight spaces from a distance
of approximately fifteen parsecs. Many a flamewar has been fought and
of approximately fifteen parsecs. Many a flamewar have been fought and
lost on this issue.
QEMU indents are four spaces. Tabs are never used, except in Makefiles
@@ -116,62 +116,3 @@ if (a == 1) {
Rationale: Yoda conditions (as in 'if (1 == a)') are awkward to read.
Besides, good compilers already warn users when '==' is mis-typed as '=',
even when the constant is on the right.
7. Comment style
We use traditional C-style /* */ comments and avoid // comments.
Rationale: The // form is valid in C99, so this is purely a matter of
consistency of style. The checkpatch script will warn you about this.
Multiline comment blocks should have a row of stars on the left,
and the initial /* and terminating */ both on their own lines:
/*
* like
* this
*/
This is the same format required by the Linux kernel coding style.
(Some of the existing comments in the codebase use the GNU Coding
Standards form which does not have stars on the left, or other
variations; avoid these when writing new comments, but don't worry
about converting to the preferred form unless you're editing that
comment anyway.)
Rationale: Consistency, and ease of visually picking out a multiline
comment from the surrounding code.
8. trace-events style
8.1 0x prefix
In trace-events files, use a '0x' prefix to specify hex numbers, as in:
some_trace(unsigned x, uint64_t y) "x 0x%x y 0x" PRIx64
An exception is made for groups of numbers that are hexadecimal by
convention and separated by the symbols '.', '/', ':', or ' ' (such as
PCI bus id):
another_trace(int cssid, int ssid, int dev_num) "bus id: %x.%x.%04x"
However, you can use '0x' for such groups if you want. Anyway, be sure that
it is obvious that numbers are in hex, ex.:
data_dump(uint8_t c1, uint8_t c2, uint8_t c3) "bytes (in hex): %02x %02x %02x"
Rationale: hex numbers are hard to read in logs when there is no 0x prefix,
especially when (occasionally) the representation doesn't contain any letters
and especially in one line with other decimal numbers. Number groups are allowed
to not use '0x' because for some things notations like %x.%x.%x are used not
only in Qemu. Also dumping raw data bytes with '0x' is less readable.
8.2 '#' printf flag
Do not use printf flag '#', like '%#x'.
Rationale: there are two ways to add a '0x' prefix to printed number: '0x%...'
and '%#...'. For consistency the only one way should be used. Arguments for
'0x%' are:
- it is more popular
- '%#' omits the 0x for the value 0 which makes output inconsistent


@@ -1,8 +1,8 @@
GNU LESSER GENERAL PUBLIC LICENSE
Version 2.1, February 1999
GNU LESSER GENERAL PUBLIC LICENSE
Version 2.1, February 1999
Copyright (C) 1991, 1999 Free Software Foundation, Inc.
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
@@ -10,7 +10,7 @@
as the successor of the GNU Library Public License, version 2, hence
the version number 2.1.]
Preamble
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
@@ -112,7 +112,7 @@ modification follow. Pay close attention to the difference between a
former contains code derived from the library, whereas the latter must
be combined with the library in order to run.
GNU LESSER GENERAL PUBLIC LICENSE
GNU LESSER GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License Agreement applies to any software library or other
@@ -146,7 +146,7 @@ such a program is covered only if its contents constitute a work based
on the Library (independent of the use of the Library in a tool for
writing it). Whether that is true depends on what the Library does
and what the program that uses the Library does.
1. You may copy and distribute verbatim copies of the Library's
complete source code as you receive it, in any medium, provided that
you conspicuously and appropriately publish on each copy an
@@ -432,7 +432,7 @@ decision will be guided by the two goals of preserving the free status
of all derivatives of our free software and of promoting the sharing
and reuse of software generally.
NO WARRANTY
NO WARRANTY
15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
@@ -455,7 +455,7 @@ FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.
END OF TERMS AND CONDITIONS
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Libraries
@@ -476,7 +476,7 @@ convey the exclusion of warranty; and each file should have at least the
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
version 2 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
@@ -485,7 +485,7 @@ convey the exclusion of warranty; and each file should have at least the
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Also add information on how to contact you by electronic and paper mail.
@@ -500,3 +500,5 @@ necessary. Here is a sample; alter the names:
Ty Coon, President of Vice
That's all there is to it!


@@ -1,6 +1,6 @@
This file documents changes for QEMU releases 0.12 and earlier.
For changelog information for later releases, see
https://wiki.qemu.org/ChangeLog or look at the git history for
http://wiki.qemu-project.org/ChangeLog or look at the git history for
more detailed information.

HACKING

@@ -1,28 +1,10 @@
1. Preprocessor
1.1. Variadic macros
For variadic macros, stick with this C99-like syntax:
#define DPRINTF(fmt, ...) \
do { printf("IRQ: " fmt, ## __VA_ARGS__); } while (0)
1.2. Include directives
Order include directives as follows:
#include "qemu/osdep.h" /* Always first... */
#include <...> /* then system headers... */
#include "..." /* and finally QEMU headers. */
The "qemu/osdep.h" header contains preprocessor macros that affect the behavior
of core system headers like <stdint.h>. It must be the first include so that
core system headers included by external libraries get the preprocessor macros
that QEMU depends on.
Do not include "qemu/osdep.h" from header files since the .c file will have
already included it.
2. C types
It should be common sense to use the right type, but we have collected
@@ -118,15 +100,6 @@ Please note that g_malloc will exit on allocation failure, so there
is no need to test for failure (as you would have to with malloc).
Calling g_malloc with a zero size is valid and will return NULL.
Prefer g_new(T, n) instead of g_malloc(sizeof(T) * n) for the following
reasons:
a. It catches multiplication overflowing size_t;
b. It returns T * instead of void *, letting compiler catch more type
errors.
Declarations like T *v = g_malloc(sizeof(*v)) are acceptable, though.
Memory allocated by qemu_memalign or qemu_blockalign must be freed with
qemu_vfree, since breaking this will cause problems on Win32.


@@ -1,36 +0,0 @@
# These are "proxy" symbols used to pass config-host.mak values
# down to Kconfig. See also MINIKCONF_ARGS in the Makefile:
# these two need to be kept in sync.
config KVM
bool
config LINUX
bool
config OPENGL
bool
config X11
bool
config SPICE
bool
config IVSHMEM
bool
config TPM
bool
config VHOST_USER
bool
config XEN
bool
config VIRTFS
bool
config PVRDMA
bool

File diff suppressed because it is too large

Makefile

File diff suppressed because it is too large


@@ -1,23 +1,20 @@
#######################################################################
# Common libraries for tools and emulators
stub-obj-y = stubs/ util/ crypto/
stub-obj-y = stubs/ crypto/
util-obj-y = util/ qobject/ qapi/
chardev-obj-y = chardev/
#######################################################################
# authz-obj-y is code used by both qemu system emulation and qemu-img
authz-obj-y = authz/
util-obj-y += qmp-introspect.o qapi-types.o qapi-visit.o qapi-event.o
#######################################################################
# block-obj-y is code used by both qemu system emulation and qemu-img
block-obj-y = nbd/
block-obj-y += block.o blockjob.o job.o
block-obj-y += block/ scsi/
block-obj-y = async.o thread-pool.o
block-obj-y += nbd/
block-obj-y += block.o blockjob.o
block-obj-y += main-loop.o iohandler.o qemu-timer.o
block-obj-$(CONFIG_POSIX) += aio-posix.o
block-obj-$(CONFIG_WIN32) += aio-win32.o
block-obj-y += block/
block-obj-y += qemu-io-cmds.o
block-obj-$(CONFIG_REPLICATION) += replication.o
block-obj-m = block/
@@ -44,8 +41,7 @@ io-obj-y = io/
ifeq ($(CONFIG_SOFTMMU),y)
common-obj-y = blockdev.o blockdev-nbd.o block/
common-obj-y += bootdevice.o iothread.o
common-obj-y += job-qmp.o
common-obj-y += iothread.o
common-obj-y += net/
common-obj-y += qdev-monitor.o device-hotplug.o
common-obj-$(CONFIG_WIN32) += os-win32.o
@@ -54,42 +50,45 @@ common-obj-$(CONFIG_POSIX) += os-posix.o
common-obj-$(CONFIG_LINUX) += fsdev/
common-obj-y += migration/
common-obj-y += qemu-char.o #aio.o
common-obj-y += page_cache.o
common-obj-$(CONFIG_SPICE) += spice-qemu-char.o
common-obj-y += audio/
common-obj-m += audio/
common-obj-y += hw/
common-obj-y += accel.o
common-obj-y += replay/
common-obj-y += ui/
common-obj-m += ui/
common-obj-y += bt-host.o bt-vhci.o
bt-host.o-cflags := $(BLUEZ_CFLAGS)
common-obj-y += dma-helpers.o
common-obj-y += vl.o
vl.o-cflags := $(GPROF_CFLAGS) $(SDL_CFLAGS)
common-obj-$(CONFIG_TPM) += tpm.o
common-obj-y += tpm.o
common-obj-$(CONFIG_SLIRP) += slirp/
common-obj-y += backends/
common-obj-y += chardev/
common-obj-$(CONFIG_SECCOMP) += qemu-seccomp.o
qemu-seccomp.o-cflags := $(SECCOMP_CFLAGS)
qemu-seccomp.o-libs := $(SECCOMP_LIBS)
common-obj-$(CONFIG_FDT) += device_tree.o
######################################################################
# qapi
common-obj-y += qmp-marshal.o
common-obj-y += qmp-introspect.o
common-obj-y += qmp.o hmp.o
common-obj-y += qapi/
endif
#######################################################################
# Target-independent parts used in system and user emulation
common-obj-y += cpus-common.o
common-obj-y += tcg-runtime.o
common-obj-y += hw/
common-obj-y += qom/
common-obj-y += disas/
@@ -97,105 +96,66 @@ common-obj-y += disas/
######################################################################
# Resource file for Windows executables
version-obj-$(CONFIG_WIN32) += $(BUILD_DIR)/version.o
version-lobj-$(CONFIG_WIN32) += $(BUILD_DIR)/version.lo
######################################################################
# tracing
util-obj-y += trace/
target-obj-y += trace/
######################################################################
# guest agent
# FIXME: a few definitions from qapi/qapi-types.o and
# qapi/qapi-visit.o are needed by libqemuutil.a. These should be
# extracted into a QAPI schema module, or perhaps a separate schema.
# FIXME: a few definitions from qapi-types.o/qapi-visit.o are needed
# by libqemuutil.a. These should be moved to a separate .json schema.
qga-obj-y = qga/
qga-vss-dll-obj-y = qga/
######################################################################
# contrib
elf2dmp-obj-y = contrib/elf2dmp/
ivshmem-client-obj-$(CONFIG_IVSHMEM) = contrib/ivshmem-client/
ivshmem-server-obj-$(CONFIG_IVSHMEM) = contrib/ivshmem-server/
libvhost-user-obj-y = contrib/libvhost-user/
vhost-user-scsi.o-cflags := $(LIBISCSI_CFLAGS)
vhost-user-scsi.o-libs := $(LIBISCSI_LIBS)
vhost-user-scsi-obj-y = contrib/vhost-user-scsi/
vhost-user-blk-obj-y = contrib/vhost-user-blk/
rdmacm-mux-obj-y = contrib/rdmacm-mux/
ivshmem-client-obj-y = contrib/ivshmem-client/
ivshmem-server-obj-y = contrib/ivshmem-server/
######################################################################
trace-events-subdirs =
trace-events-subdirs += accel/kvm
trace-events-subdirs += accel/tcg
trace-events-subdirs += audio
trace-events-subdirs += authz
trace-events-subdirs += block
trace-events-subdirs += chardev
trace-events-subdirs += crypto
trace-events-subdirs += hw/9pfs
trace-events-subdirs += hw/acpi
trace-events-subdirs += hw/alpha
trace-events-subdirs += hw/arm
trace-events-subdirs += hw/audio
trace-events-subdirs += hw/block
trace-events-subdirs += hw/block/dataplane
trace-events-subdirs += hw/char
trace-events-subdirs += hw/display
trace-events-subdirs += hw/dma
trace-events-subdirs += hw/hppa
trace-events-subdirs += hw/i2c
trace-events-subdirs += hw/i386
trace-events-subdirs += hw/i386/xen
trace-events-subdirs += hw/ide
trace-events-subdirs += hw/input
trace-events-subdirs += hw/intc
trace-events-subdirs += hw/isa
trace-events-subdirs += hw/mem
trace-events-subdirs += hw/misc
trace-events-subdirs += hw/misc/macio
trace-events-subdirs += hw/net
trace-events-subdirs += hw/nvram
trace-events-subdirs += hw/pci
trace-events-subdirs += hw/pci-host
trace-events-subdirs += hw/ppc
trace-events-subdirs += hw/rdma
trace-events-subdirs += hw/rdma/vmw
trace-events-subdirs += hw/s390x
trace-events-subdirs += hw/scsi
trace-events-subdirs += hw/sd
trace-events-subdirs += hw/sparc
trace-events-subdirs += hw/sparc64
trace-events-subdirs += hw/timer
trace-events-subdirs += hw/tpm
trace-events-subdirs += hw/usb
trace-events-subdirs += hw/vfio
trace-events-subdirs += hw/virtio
trace-events-subdirs += hw/watchdog
trace-events-subdirs += hw/xen
trace-events-subdirs += hw/gpio
trace-events-subdirs += io
trace-events-subdirs += linux-user
trace-events-subdirs += migration
trace-events-subdirs += nbd
trace-events-subdirs += net
trace-events-subdirs += qapi
trace-events-subdirs += qom
trace-events-subdirs += scsi
trace-events-subdirs += target/arm
trace-events-subdirs += target/hppa
trace-events-subdirs += target/i386
trace-events-subdirs += target/mips
trace-events-subdirs += target/ppc
trace-events-subdirs += target/riscv
trace-events-subdirs += target/s390x
trace-events-subdirs += target/sparc
trace-events-subdirs += ui
trace-events-subdirs += util
trace-events-files = $(SRC_PATH)/trace-events $(trace-events-subdirs:%=$(SRC_PATH)/%/trace-events)
trace-obj-y = trace-root.o
trace-obj-y += $(trace-events-subdirs:%=%/trace.o)
trace-obj-$(CONFIG_TRACE_UST) += trace-ust-all.o
trace-obj-$(CONFIG_TRACE_DTRACE) += trace-dtrace-root.o
trace-obj-$(CONFIG_TRACE_DTRACE) += $(trace-events-subdirs:%=%/trace-dtrace.o)
trace-events-y = trace-events
trace-events-y += util/trace-events
trace-events-y += crypto/trace-events
trace-events-y += io/trace-events
trace-events-y += migration/trace-events
trace-events-y += block/trace-events
trace-events-y += hw/block/trace-events
trace-events-y += hw/char/trace-events
trace-events-y += hw/intc/trace-events
trace-events-y += hw/net/trace-events
trace-events-y += hw/virtio/trace-events
trace-events-y += hw/audio/trace-events
trace-events-y += hw/misc/trace-events
trace-events-y += hw/usb/trace-events
trace-events-y += hw/scsi/trace-events
trace-events-y += hw/nvram/trace-events
trace-events-y += hw/display/trace-events
trace-events-y += hw/input/trace-events
trace-events-y += hw/timer/trace-events
trace-events-y += hw/dma/trace-events
trace-events-y += hw/sparc/trace-events
trace-events-y += hw/sd/trace-events
trace-events-y += hw/isa/trace-events
trace-events-y += hw/i386/trace-events
trace-events-y += hw/9pfs/trace-events
trace-events-y += hw/ppc/trace-events
trace-events-y += hw/pci/trace-events
trace-events-y += hw/s390x/trace-events
trace-events-y += hw/vfio/trace-events
trace-events-y += hw/acpi/trace-events
trace-events-y += hw/arm/trace-events
trace-events-y += hw/alpha/trace-events
trace-events-y += ui/trace-events
trace-events-y += audio/trace-events
trace-events-y += net/trace-events
trace-events-y += target-i386/trace-events
trace-events-y += target-sparc/trace-events
trace-events-y += target-s390x/trace-events
trace-events-y += target-ppc/trace-events
trace-events-y += qom/trace-events
trace-events-y += linux-user/trace-events


@@ -4,19 +4,16 @@ BUILD_DIR?=$(CURDIR)/..
include ../config-host.mak
include config-target.mak
include $(SRC_PATH)/rules.mak
ifdef CONFIG_SOFTMMU
include config-devices.mak
endif
include $(SRC_PATH)/rules.mak
$(call set-vpath, $(SRC_PATH):$(BUILD_DIR))
ifdef CONFIG_LINUX
QEMU_CFLAGS += -I../linux-headers
endif
QEMU_CFLAGS += -iquote .. -iquote $(SRC_PATH)/target/$(TARGET_BASE_ARCH) -DNEED_CPU_H
QEMU_CFLAGS += -I.. -I$(SRC_PATH)/target-$(TARGET_BASE_ARCH) -DNEED_CPU_H
QEMU_CFLAGS+=-iquote $(SRC_PATH)/include
QEMU_CFLAGS+=-I$(SRC_PATH)/include
ifdef CONFIG_USER_ONLY
# user emulator name
@@ -25,11 +22,11 @@ QEMU_PROG_BUILD = $(QEMU_PROG)
else
# system emulator name
QEMU_PROG=qemu-system-$(TARGET_NAME)$(EXESUF)
ifneq (,$(findstring -mwindows,$(SDL_LIBS)))
ifneq (,$(findstring -mwindows,$(libs_softmmu)))
# Terminate program name with a 'w' because the linker builds a windows executable.
QEMU_PROGW=qemu-system-$(TARGET_NAME)w$(EXESUF)
$(QEMU_PROG): $(QEMU_PROGW)
$(call quiet-command,$(OBJCOPY) --subsystem console $(QEMU_PROGW) $(QEMU_PROG),"GEN","$(TARGET_DIR)$(QEMU_PROG)")
$(call quiet-command,$(OBJCOPY) --subsystem console $(QEMU_PROGW) $(QEMU_PROG)," GEN $(TARGET_DIR)$(QEMU_PROG)")
QEMU_PROG_BUILD = $(QEMU_PROGW)
else
QEMU_PROG_BUILD = $(QEMU_PROG)
@@ -39,14 +36,11 @@ endif
PROGS=$(QEMU_PROG) $(QEMU_PROGW)
STPFILES=
# Makefile Tests
include $(SRC_PATH)/tests/tcg/Makefile.include
config-target.h: config-target.h-timestamp
config-target.h-timestamp: config-target.mak
ifdef CONFIG_TRACE_SYSTEMTAP
stap: $(QEMU_PROG).stp-installed $(QEMU_PROG).stp $(QEMU_PROG)-simpletrace.stp $(QEMU_PROG)-log.stp
stap: $(QEMU_PROG).stp-installed $(QEMU_PROG).stp $(QEMU_PROG)-simpletrace.stp
ifdef CONFIG_USER_ONLY
TARGET_TYPE=user
@@ -54,69 +48,60 @@ else
TARGET_TYPE=system
endif
tracetool-y = $(SRC_PATH)/scripts/tracetool.py
tracetool-y += $(shell find $(SRC_PATH)/scripts/tracetool -name "*.py")
$(QEMU_PROG).stp-installed: $(BUILD_DIR)/trace-events-all $(tracetool-y)
$(QEMU_PROG).stp-installed: $(BUILD_DIR)/trace-events-all
$(call quiet-command,$(TRACETOOL) \
--group=all \
--format=stap \
--backends=$(TRACE_BACKENDS) \
--binary=$(bindir)/$(QEMU_PROG) \
--target-name=$(TARGET_NAME) \
--target-type=$(TARGET_TYPE) \
$< > $@,"GEN","$(TARGET_DIR)$(QEMU_PROG).stp-installed")
< $< > $@," GEN $(TARGET_DIR)$(QEMU_PROG).stp-installed")
$(QEMU_PROG).stp: $(BUILD_DIR)/trace-events-all $(tracetool-y)
$(QEMU_PROG).stp: $(BUILD_DIR)/trace-events-all
$(call quiet-command,$(TRACETOOL) \
--group=all \
--format=stap \
--backends=$(TRACE_BACKENDS) \
--binary=$(realpath .)/$(QEMU_PROG) \
--target-name=$(TARGET_NAME) \
--target-type=$(TARGET_TYPE) \
$< > $@,"GEN","$(TARGET_DIR)$(QEMU_PROG).stp")
< $< > $@," GEN $(TARGET_DIR)$(QEMU_PROG).stp")
$(QEMU_PROG)-simpletrace.stp: $(BUILD_DIR)/trace-events-all $(tracetool-y)
$(QEMU_PROG)-simpletrace.stp: $(BUILD_DIR)/trace-events-all
$(call quiet-command,$(TRACETOOL) \
--group=all \
--format=simpletrace-stap \
--backends=$(TRACE_BACKENDS) \
--probe-prefix=qemu.$(TARGET_TYPE).$(TARGET_NAME) \
$< > $@,"GEN","$(TARGET_DIR)$(QEMU_PROG)-simpletrace.stp")
$(QEMU_PROG)-log.stp: $(BUILD_DIR)/trace-events-all $(tracetool-y)
$(call quiet-command,$(TRACETOOL) \
--group=all \
--format=log-stap \
--backends=$(TRACE_BACKENDS) \
--probe-prefix=qemu.$(TARGET_TYPE).$(TARGET_NAME) \
$< > $@,"GEN","$(TARGET_DIR)$(QEMU_PROG)-log.stp")
< $< > $@," GEN $(TARGET_DIR)$(QEMU_PROG)-simpletrace.stp")
else
stap:
endif
.PHONY: stap
all: $(PROGS) stap
# Dummy command so that make thinks it has done something
@true
obj-y += trace/
#########################################################
# cpu emulator library
obj-y += exec.o
obj-y += accel/
obj-$(CONFIG_TCG) += tcg/tcg.o tcg/tcg-op.o tcg/tcg-op-vec.o tcg/tcg-op-gvec.o
obj-$(CONFIG_TCG) += tcg/tcg-common.o tcg/optimize.o
obj-$(CONFIG_TCG_INTERPRETER) += tcg/tci.o
obj-y = exec.o translate-all.o cpu-exec.o
obj-y += translate-common.o
obj-y += cpu-exec-common.o
obj-y += tcg/tcg.o tcg/tcg-op.o tcg/optimize.o
obj-$(CONFIG_TCG_INTERPRETER) += tci.o
obj-y += tcg/tcg-common.o
obj-$(CONFIG_TCG_INTERPRETER) += disas/tci.o
obj-$(CONFIG_TCG) += fpu/softfloat.o
obj-y += target/$(TARGET_BASE_ARCH)/
obj-y += fpu/softfloat.o
obj-y += target-$(TARGET_BASE_ARCH)/
obj-y += disas.o
obj-$(call notempty,$(TARGET_XML_FILES)) += gdbstub-xml.o
obj-$(call lnot,$(CONFIG_KVM)) += kvm-stub.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/decContext.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/decNumber.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/dpd/decimal32.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/dpd/decimal64.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/dpd/decimal128.o
#########################################################
# Linux user emulator target
@@ -128,7 +113,7 @@ QEMU_CFLAGS+=-I$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR) \
-I$(SRC_PATH)/linux-user
obj-y += linux-user/
obj-y += gdbstub.o thunk.o
obj-y += gdbstub.o thunk.o user-exec.o
endif #CONFIG_LINUX_USER
@@ -141,7 +126,7 @@ QEMU_CFLAGS+=-I$(SRC_PATH)/bsd-user -I$(SRC_PATH)/bsd-user/$(TARGET_ABI_DIR) \
-I$(SRC_PATH)/bsd-user/$(HOST_VARIANT_DIR)
obj-y += bsd-user/
obj-y += gdbstub.o
obj-y += gdbstub.o user-exec.o
endif #CONFIG_BSD_USER
@@ -149,16 +134,21 @@ endif #CONFIG_BSD_USER
# System emulator target
ifdef CONFIG_SOFTMMU
obj-y += arch_init.o cpus.o monitor.o gdbstub.o balloon.o ioport.o numa.o
obj-y += qtest.o
obj-y += qtest.o bootdevice.o
obj-y += hw/
obj-y += qapi/
obj-y += memory.o
obj-$(CONFIG_KVM) += kvm-all.o
obj-y += memory.o cputlb.o
obj-y += memory_mapping.o
obj-y += dump.o
obj-$(TARGET_X86_64) += win_dump.o
obj-y += migration/ram.o
obj-y += migration/ram.o migration/savevm.o
LIBS := $(libs_softmmu) $(LIBS)
# xen support
obj-$(CONFIG_XEN) += xen-common.o
obj-$(CONFIG_XEN_I386) += xen-hvm.o xen-mapcache.o
obj-$(call lnot,$(CONFIG_XEN)) += xen-common-stub.o
obj-$(call lnot,$(CONFIG_XEN_I386)) += xen-hvm-stub.o
# Hardware support
ifeq ($(TARGET_NAME), sparc64)
obj-y += hw/sparc64/
@@ -166,61 +156,66 @@ else
obj-y += hw/$(TARGET_BASE_ARCH)/
endif
GENERATED_FILES += hmp-commands.h hmp-commands-info.h
GENERATED_HEADERS += hmp-commands.h hmp-commands-info.h qmp-commands-old.h
endif # CONFIG_SOFTMMU
# Workaround for http://gcc.gnu.org/PR55489, see configure.
%/translate.o: QEMU_CFLAGS += $(TRANSLATE_OPT_CFLAGS)
dummy := $(call unnest-vars,,obj-y)
all-obj-y := $(obj-y)
target-obj-y :=
block-obj-y :=
common-obj-y :=
include $(SRC_PATH)/Makefile.objs
dummy := $(call unnest-vars,,target-obj-y)
target-obj-y-save := $(target-obj-y)
dummy := $(call unnest-vars,.., \
authz-obj-y \
block-obj-y \
block-obj-m \
chardev-obj-y \
crypto-obj-y \
crypto-aes-obj-y \
qom-obj-y \
io-obj-y \
common-obj-y \
common-obj-m)
target-obj-y := $(target-obj-y-save)
all-obj-y += $(common-obj-y)
all-obj-y += $(target-obj-y)
all-obj-y += $(qom-obj-y)
all-obj-$(CONFIG_SOFTMMU) += $(authz-obj-y)
all-obj-$(CONFIG_SOFTMMU) += $(block-obj-y) $(chardev-obj-y)
all-obj-$(CONFIG_SOFTMMU) += $(block-obj-y)
all-obj-$(CONFIG_USER_ONLY) += $(crypto-aes-obj-y)
all-obj-$(CONFIG_SOFTMMU) += $(crypto-obj-y)
all-obj-$(CONFIG_SOFTMMU) += $(io-obj-y)
ifdef CONFIG_SOFTMMU
$(QEMU_PROG_BUILD): config-devices.mak
endif
COMMON_LDADDS = ../libqemuutil.a
# build either PROG or PROGW
$(QEMU_PROG_BUILD): $(all-obj-y) $(COMMON_LDADDS)
$(QEMU_PROG_BUILD): $(all-obj-y) ../libqemuutil.a ../libqemustub.a
$(call LINK, $(filter-out %.mak, $^))
ifdef CONFIG_DARWIN
$(call quiet-command,Rez -append $(SRC_PATH)/pc-bios/qemu.rsrc -o $@,"REZ","$(TARGET_DIR)$@")
$(call quiet-command,SetFile -a C $@,"SETFILE","$(TARGET_DIR)$@")
$(call quiet-command,Rez -append $(SRC_PATH)/pc-bios/qemu.rsrc -o $@," REZ $(TARGET_DIR)$@")
$(call quiet-command,SetFile -a C $@," SETFILE $(TARGET_DIR)$@")
endif
gdbstub-xml.c: $(TARGET_XML_FILES) $(SRC_PATH)/scripts/feature_to_c.sh
$(call quiet-command,rm -f $@ && $(SHELL) $(SRC_PATH)/scripts/feature_to_c.sh $@ $(TARGET_XML_FILES),"GEN","$(TARGET_DIR)$@")
$(call quiet-command,rm -f $@ && $(SHELL) $(SRC_PATH)/scripts/feature_to_c.sh $@ $(TARGET_XML_FILES)," GEN $(TARGET_DIR)$@")
hmp-commands.h: $(SRC_PATH)/hmp-commands.hx $(SRC_PATH)/scripts/hxtool
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@,"GEN","$(TARGET_DIR)$@")
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")
hmp-commands-info.h: $(SRC_PATH)/hmp-commands-info.hx $(SRC_PATH)/scripts/hxtool
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@,"GEN","$(TARGET_DIR)$@")
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")
clean: clean-target
qmp-commands-old.h: $(SRC_PATH)/qmp-commands.hx $(SRC_PATH)/scripts/hxtool
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")
clean:
rm -f *.a *~ $(PROGS)
rm -f $(shell find . -name '*.[od]')
rm -f hmp-commands.h gdbstub-xml.c
rm -f trace/generated-helpers.c trace/generated-helpers.c-timestamp
rm -f hmp-commands.h qmp-commands-old.h gdbstub-xml.c
ifdef CONFIG_TRACE_SYSTEMTAP
rm -f *.stp
endif
@@ -233,8 +228,7 @@ ifdef CONFIG_TRACE_SYSTEMTAP
$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset"
$(INSTALL_DATA) $(QEMU_PROG).stp-installed "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset/$(QEMU_PROG).stp"
$(INSTALL_DATA) $(QEMU_PROG)-simpletrace.stp "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset/$(QEMU_PROG)-simpletrace.stp"
$(INSTALL_DATA) $(QEMU_PROG)-log.stp "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset/$(QEMU_PROG)-log.stp"
endif
GENERATED_FILES += config-target.h
Makefile: $(GENERATED_FILES)
GENERATED_HEADERS += config-target.h
Makefile: $(GENERATED_HEADERS)

README

@@ -42,11 +42,12 @@ of other UNIX targets. The simple steps to build QEMU are:
../configure
make
Complete details of the process for building and configuring QEMU for
all supported host platforms can be found in the qemu-tech.html file.
Additional information can also be found online via the QEMU website:
https://qemu.org/Hosts/Linux
https://qemu.org/Hosts/Mac
https://qemu.org/Hosts/W32
http://qemu-project.org/Hosts/Linux
http://qemu-project.org/Hosts/W32
Submitting patches
@@ -54,9 +55,9 @@ Submitting patches
The QEMU source code is maintained under the GIT version control system.
git clone https://git.qemu.org/git/qemu.git
git clone git://git.qemu-project.org/qemu.git
When submitting patches, one common approach is to use 'git
When submitting patches, the preferred approach is to use 'git
format-patch' and/or 'git send-email' to format & send the mail to the
qemu-devel@nongnu.org mailing list. All patches submitted must contain
a 'Signed-off-by' line from the author. Patches should follow the
@@ -65,42 +66,9 @@ guidelines set out in the HACKING and CODING_STYLE files.
Additional information on submitting patches can be found online via
the QEMU website
https://qemu.org/Contribute/SubmitAPatch
https://qemu.org/Contribute/TrivialPatches
http://qemu-project.org/Contribute/SubmitAPatch
http://qemu-project.org/Contribute/TrivialPatches
The QEMU website is also maintained under source control.
git clone https://git.qemu.org/git/qemu-web.git
https://www.qemu.org/2017/02/04/the-new-qemu-website-is-up/
A 'git-publish' utility was created to make above process less
cumbersome, and is highly recommended for making regular contributions,
or even just for sending consecutive patch series revisions. It also
requires a working 'git send-email' setup, and by default doesn't
automate everything, so you may want to go through the above steps
manually for once.
For installation instructions, please go to
https://github.com/stefanha/git-publish
The workflow with 'git-publish' is:
$ git checkout master -b my-feature
$ # work on new commits, add your 'Signed-off-by' lines to each
$ git publish
Your patch series will be sent and tagged as my-feature-v1 if you need to refer
back to it in the future.
Sending v2:
$ git checkout my-feature # same topic branch
$ # making changes to the commits (using 'git rebase', for example)
$ git publish
Your patch series will be sent with 'v2' tag in the subject and the git tip
will be tagged as my-feature-v2.
Bug reporting
=============
@@ -118,7 +86,7 @@ reported via launchpad.
For additional information on bug reporting consult:
https://qemu.org/Contribute/ReportABug
http://qemu-project.org/Contribute/ReportABug
Contact
@@ -128,12 +96,12 @@ The QEMU community can be contacted in a number of ways, with the two
main methods being email and IRC
- qemu-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/qemu-devel
http://lists.nongnu.org/mailman/listinfo/qemu-devel
- #qemu on irc.oftc.net
Information on additional methods of contacting the community can be
found online via the QEMU website:
https://qemu.org/Contribute/StartHere
http://qemu-project.org/Contribute/StartHere
-- End


@@ -1 +1 @@
4.0.1
2.7.1

accel.c Normal file

@@ -0,0 +1,156 @@
/*
* QEMU System Emulator, accelerator interfaces
*
* Copyright (c) 2003-2008 Fabrice Bellard
* Copyright (c) 2014 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "sysemu/accel.h"
#include "hw/boards.h"
#include "qemu-common.h"
#include "sysemu/arch_init.h"
#include "sysemu/sysemu.h"
#include "sysemu/kvm.h"
#include "sysemu/qtest.h"
#include "hw/xen/xen.h"
#include "qom/object.h"
#include "hw/boards.h"
int tcg_tb_size;
static bool tcg_allowed = true;
static int tcg_init(MachineState *ms)
{
tcg_exec_init(tcg_tb_size * 1024 * 1024);
return 0;
}
static const TypeInfo accel_type = {
.name = TYPE_ACCEL,
.parent = TYPE_OBJECT,
.class_size = sizeof(AccelClass),
.instance_size = sizeof(AccelState),
};
/* Lookup AccelClass from opt_name. Returns NULL if not found */
static AccelClass *accel_find(const char *opt_name)
{
char *class_name = g_strdup_printf(ACCEL_CLASS_NAME("%s"), opt_name);
AccelClass *ac = ACCEL_CLASS(object_class_by_name(class_name));
g_free(class_name);
return ac;
}
static int accel_init_machine(AccelClass *acc, MachineState *ms)
{
ObjectClass *oc = OBJECT_CLASS(acc);
const char *cname = object_class_get_name(oc);
AccelState *accel = ACCEL(object_new(cname));
int ret;
ms->accelerator = accel;
*(acc->allowed) = true;
ret = acc->init_machine(ms);
if (ret < 0) {
ms->accelerator = NULL;
*(acc->allowed) = false;
object_unref(OBJECT(accel));
}
return ret;
}
void configure_accelerator(MachineState *ms)
{
const char *p;
char buf[10];
int ret;
bool accel_initialised = false;
bool init_failed = false;
AccelClass *acc = NULL;
p = qemu_opt_get(qemu_get_machine_opts(), "accel");
if (p == NULL) {
/* Use the default "accelerator", tcg */
p = "tcg";
}
while (!accel_initialised && *p != '\0') {
if (*p == ':') {
p++;
}
p = get_opt_name(buf, sizeof(buf), p, ':');
acc = accel_find(buf);
if (!acc) {
fprintf(stderr, "\"%s\" accelerator not found.\n", buf);
continue;
}
if (acc->available && !acc->available()) {
printf("%s not supported for this target\n",
acc->name);
continue;
}
ret = accel_init_machine(acc, ms);
if (ret < 0) {
init_failed = true;
fprintf(stderr, "failed to initialize %s: %s\n",
acc->name,
strerror(-ret));
} else {
accel_initialised = true;
}
}
if (!accel_initialised) {
if (!init_failed) {
fprintf(stderr, "No accelerator found!\n");
}
exit(1);
}
if (init_failed) {
fprintf(stderr, "Back to %s accelerator.\n", acc->name);
}
}
static void tcg_accel_class_init(ObjectClass *oc, void *data)
{
AccelClass *ac = ACCEL_CLASS(oc);
ac->name = "tcg";
ac->init_machine = tcg_init;
ac->allowed = &tcg_allowed;
}
#define TYPE_TCG_ACCEL ACCEL_CLASS_NAME("tcg")
static const TypeInfo tcg_accel_type = {
.name = TYPE_TCG_ACCEL,
.parent = TYPE_ACCEL,
.class_init = tcg_accel_class_init,
};
static void register_accel_types(void)
{
type_register_static(&accel_type);
type_register_static(&tcg_accel_type);
}
type_init(register_accel_types);
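The configure_accelerator() implementation above walks a colon-separated accelerator list such as "kvm:tcg" and falls back to the next entry when one fails to initialize. A minimal standalone sketch of that fallback loop, in plain C rather than QEMU code, with a hypothetical try_init() standing in for accel_init_machine():
#include <stdio.h>
#include <string.h>
/* Hypothetical stand-in for accel_init_machine(); pretend only
 * "tcg" is available on this host. */
static int try_init(const char *name)
{
    return strcmp(name, "tcg") == 0 ? 0 : -1;
}
int main(void)
{
    char list[] = "kvm:tcg";      /* priority-ordered accelerator list */
    char *saveptr = NULL;
    for (char *tok = strtok_r(list, ":", &saveptr); tok;
         tok = strtok_r(NULL, ":", &saveptr)) {
        if (try_init(tok) == 0) {
            printf("using accelerator '%s'\n", tok);
            return 0;
        }
        fprintf(stderr, "failed to initialize %s, trying next\n", tok);
    }
    fprintf(stderr, "No accelerator found!\n");
    return 1;
}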


@@ -1,4 +0,0 @@
obj-$(CONFIG_SOFTMMU) += accel.o
obj-$(CONFIG_KVM) += kvm/
obj-$(CONFIG_TCG) += tcg/
obj-y += stubs/


@@ -1,152 +0,0 @@
/*
* QEMU System Emulator, accelerator interfaces
*
* Copyright (c) 2003-2008 Fabrice Bellard
* Copyright (c) 2014 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "sysemu/accel.h"
#include "hw/boards.h"
#include "sysemu/arch_init.h"
#include "sysemu/sysemu.h"
#include "sysemu/kvm.h"
#include "sysemu/qtest.h"
#include "hw/xen/xen.h"
#include "qom/object.h"
#include "qemu/error-report.h"
#include "qemu/option.h"
#include "qapi/error.h"
static const TypeInfo accel_type = {
.name = TYPE_ACCEL,
.parent = TYPE_OBJECT,
.class_size = sizeof(AccelClass),
.instance_size = sizeof(AccelState),
};
/* Lookup AccelClass from opt_name. Returns NULL if not found */
static AccelClass *accel_find(const char *opt_name)
{
char *class_name = g_strdup_printf(ACCEL_CLASS_NAME("%s"), opt_name);
AccelClass *ac = ACCEL_CLASS(object_class_by_name(class_name));
g_free(class_name);
return ac;
}
static int accel_init_machine(AccelClass *acc, MachineState *ms)
{
ObjectClass *oc = OBJECT_CLASS(acc);
const char *cname = object_class_get_name(oc);
AccelState *accel = ACCEL(object_new(cname));
int ret;
ms->accelerator = accel;
*(acc->allowed) = true;
ret = acc->init_machine(ms);
if (ret < 0) {
ms->accelerator = NULL;
*(acc->allowed) = false;
object_unref(OBJECT(accel));
} else {
object_set_accelerator_compat_props(acc->compat_props);
}
return ret;
}
void configure_accelerator(MachineState *ms, const char *progname)
{
const char *accel;
char **accel_list, **tmp;
int ret;
bool accel_initialised = false;
bool init_failed = false;
AccelClass *acc = NULL;
accel = qemu_opt_get(qemu_get_machine_opts(), "accel");
if (accel == NULL) {
/* Select the default accelerator */
int pnlen = strlen(progname);
if (pnlen >= 3 && g_str_equal(&progname[pnlen - 3], "kvm")) {
/* If the program name ends with "kvm", we prefer KVM */
accel = "kvm:tcg";
} else {
#if defined(CONFIG_TCG)
accel = "tcg";
#elif defined(CONFIG_KVM)
accel = "kvm";
#else
error_report("No accelerator selected and"
" no default accelerator available");
exit(1);
#endif
}
}
accel_list = g_strsplit(accel, ":", 0);
for (tmp = accel_list; !accel_initialised && tmp && *tmp; tmp++) {
acc = accel_find(*tmp);
if (!acc) {
continue;
}
if (acc->available && !acc->available()) {
printf("%s not supported for this target\n",
acc->name);
continue;
}
ret = accel_init_machine(acc, ms);
if (ret < 0) {
init_failed = true;
error_report("failed to initialize %s: %s",
acc->name, strerror(-ret));
} else {
accel_initialised = true;
}
}
g_strfreev(accel_list);
if (!accel_initialised) {
if (!init_failed) {
error_report("-machine accel=%s: No accelerator found", accel);
}
exit(1);
}
if (init_failed) {
error_report("Back to %s accelerator", acc->name);
}
}
void accel_setup_post(MachineState *ms)
{
AccelState *accel = ms->accelerator;
AccelClass *acc = ACCEL_GET_CLASS(accel);
if (acc->setup_post) {
acc->setup_post(ms, accel);
}
}
static void register_accel_types(void)
{
type_register_static(&accel_type);
}
type_init(register_accel_types);


@@ -1,2 +0,0 @@
obj-y += kvm-all.o
obj-$(call lnot,$(CONFIG_SEV)) += sev-stub.o


@@ -1,26 +0,0 @@
/*
* QEMU SEV stub
*
* Copyright Advanced Micro Devices 2018
*
* Authors:
* Brijesh Singh <brijesh.singh@amd.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "sysemu/sev.h"
int sev_encrypt_data(void *handle, uint8_t *ptr, uint64_t len)
{
abort();
}
void *sev_guest_init(const char *id)
{
return NULL;
}


@@ -1,18 +0,0 @@
# See docs/devel/tracing.txt for syntax documentation.
# kvm-all.c
kvm_ioctl(int type, void *arg) "type 0x%x, arg %p"
kvm_vm_ioctl(int type, void *arg) "type 0x%x, arg %p"
kvm_vcpu_ioctl(int cpu_index, int type, void *arg) "cpu_index %d, type 0x%x, arg %p"
kvm_run_exit(int cpu_index, uint32_t reason) "cpu_index %d, reason %d"
kvm_device_ioctl(int fd, int type, void *arg) "dev fd %d, type 0x%x, arg %p"
kvm_failed_reg_get(uint64_t id, const char *msg) "Warning: Unable to retrieve ONEREG %" PRIu64 " from KVM: %s"
kvm_failed_reg_set(uint64_t id, const char *msg) "Warning: Unable to set ONEREG %" PRIu64 " to KVM: %s"
kvm_irqchip_commit_routes(void) ""
kvm_irqchip_add_msi_route(char *name, int vector, int virq) "dev %s vector %d virq %d"
kvm_irqchip_update_msi_route(int virq) "Updating MSI route virq=%d"
kvm_irqchip_release_virq(int virq) "virq %d"
kvm_set_ioeventfd_mmio(int fd, uint64_t addr, uint32_t val, bool assign, uint32_t size, bool datamatch) "fd: %d @0x%" PRIx64 " val=0x%x assign: %d size: %d match: %d"
kvm_set_ioeventfd_pio(int fd, uint16_t addr, uint32_t val, bool assign, uint32_t size, bool datamatch) "fd: %d @0x%x val=0x%x assign: %d size: %d match: %d"
kvm_set_user_memory(uint32_t slot, uint32_t flags, uint64_t guest_phys_addr, uint64_t memory_size, uint64_t userspace_addr, int ret) "Slot#%d flags=0x%x gpa=0x%"PRIx64 " size=0x%"PRIx64 " ua=0x%"PRIx64 " ret=%d"


@@ -1,5 +0,0 @@
obj-$(call lnot,$(CONFIG_HAX)) += hax-stub.o
obj-$(call lnot,$(CONFIG_HVF)) += hvf-stub.o
obj-$(call lnot,$(CONFIG_WHPX)) += whpx-stub.o
obj-$(call lnot,$(CONFIG_KVM)) += kvm-stub.o
obj-$(call lnot,$(CONFIG_TCG)) += tcg-stub.o


@@ -1,34 +0,0 @@
/*
* QEMU HAXM support
*
* Copyright (c) 2015, Intel Corporation
*
* Copyright 2016 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "cpu.h"
#include "sysemu/hax.h"
int hax_sync_vcpus(void)
{
return 0;
}
int hax_init_vcpu(CPUState *cpu)
{
return -ENOSYS;
}
int hax_smp_cpu_exec(CPUState *cpu)
{
return -ENOSYS;
}


@@ -1,31 +0,0 @@
/*
* QEMU HVF support
*
* Copyright 2017 Red Hat, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2 or later, as published by the Free Software Foundation,
* and may be copied, distributed, and modified under those terms.
*
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "cpu.h"
#include "sysemu/hvf.h"
int hvf_init_vcpu(CPUState *cpu)
{
return -ENOSYS;
}
int hvf_vcpu_exec(CPUState *cpu)
{
return -ENOSYS;
}
void hvf_vcpu_destroy(CPUState *cpu)
{
}


@@ -1,173 +0,0 @@
/*
* QEMU KVM stub
*
* Copyright Red Hat, Inc. 2010
*
* Author: Paolo Bonzini <pbonzini@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "cpu.h"
#include "sysemu/kvm.h"
#ifndef CONFIG_USER_ONLY
#include "hw/pci/msi.h"
#endif
KVMState *kvm_state;
bool kvm_kernel_irqchip;
bool kvm_async_interrupts_allowed;
bool kvm_eventfds_allowed;
bool kvm_irqfds_allowed;
bool kvm_resamplefds_allowed;
bool kvm_msi_via_irqfd_allowed;
bool kvm_gsi_routing_allowed;
bool kvm_gsi_direct_mapping;
bool kvm_allowed;
bool kvm_readonly_mem_allowed;
bool kvm_ioeventfd_any_length_allowed;
bool kvm_msi_use_devid;
int kvm_destroy_vcpu(CPUState *cpu)
{
return -ENOSYS;
}
int kvm_init_vcpu(CPUState *cpu)
{
return -ENOSYS;
}
void kvm_flush_coalesced_mmio_buffer(void)
{
}
void kvm_cpu_synchronize_state(CPUState *cpu)
{
}
void kvm_cpu_synchronize_post_reset(CPUState *cpu)
{
}
void kvm_cpu_synchronize_post_init(CPUState *cpu)
{
}
int kvm_cpu_exec(CPUState *cpu)
{
abort();
}
bool kvm_has_sync_mmu(void)
{
return false;
}
int kvm_has_many_ioeventfds(void)
{
return 0;
}
int kvm_update_guest_debug(CPUState *cpu, unsigned long reinject_trap)
{
return -ENOSYS;
}
int kvm_insert_breakpoint(CPUState *cpu, target_ulong addr,
target_ulong len, int type)
{
return -EINVAL;
}
int kvm_remove_breakpoint(CPUState *cpu, target_ulong addr,
target_ulong len, int type)
{
return -EINVAL;
}
void kvm_remove_all_breakpoints(CPUState *cpu)
{
}
int kvm_on_sigbus_vcpu(CPUState *cpu, int code, void *addr)
{
return 1;
}
int kvm_on_sigbus(int code, void *addr)
{
return 1;
}
bool kvm_memcrypt_enabled(void)
{
return false;
}
int kvm_memcrypt_encrypt_data(uint8_t *ptr, uint64_t len)
{
return 1;
}
#ifndef CONFIG_USER_ONLY
int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev)
{
return -ENOSYS;
}
void kvm_init_irq_routing(KVMState *s)
{
}
void kvm_irqchip_release_virq(KVMState *s, int virq)
{
}
int kvm_irqchip_update_msi_route(KVMState *s, int virq, MSIMessage msg,
PCIDevice *dev)
{
return -ENOSYS;
}
void kvm_irqchip_commit_routes(KVMState *s)
{
}
int kvm_irqchip_add_adapter_route(KVMState *s, AdapterInfo *adapter)
{
return -ENOSYS;
}
int kvm_irqchip_add_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
EventNotifier *rn, int virq)
{
return -ENOSYS;
}
int kvm_irqchip_remove_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
int virq)
{
return -ENOSYS;
}
bool kvm_has_free_slot(MachineState *ms)
{
return false;
}
void kvm_init_cpu_signals(CPUState *cpu)
{
abort();
}
bool kvm_arm_supports_user_irq(void)
{
return false;
}
#endif


@@ -1,26 +0,0 @@
/*
* QEMU TCG accelerator stub
*
* Copyright Red Hat, Inc. 2013
*
* Author: Paolo Bonzini <pbonzini@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "cpu.h"
#include "tcg/tcg.h"
#include "exec/cpu-common.h"
#include "exec/exec-all.h"
void tb_flush(CPUState *cpu)
{
}
void tlb_set_dirty(CPUState *cpu, target_ulong vaddr)
{
}


@@ -1,48 +0,0 @@
/*
* QEMU Windows Hypervisor Platform accelerator (WHPX) stub
*
* Copyright Microsoft Corp. 2017
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "cpu.h"
#include "sysemu/whpx.h"
int whpx_init_vcpu(CPUState *cpu)
{
return -1;
}
int whpx_vcpu_exec(CPUState *cpu)
{
return -1;
}
void whpx_destroy_vcpu(CPUState *cpu)
{
}
void whpx_vcpu_kick(CPUState *cpu)
{
}
void whpx_cpu_synchronize_state(CPUState *cpu)
{
}
void whpx_cpu_synchronize_post_reset(CPUState *cpu)
{
}
void whpx_cpu_synchronize_post_init(CPUState *cpu)
{
}
void whpx_cpu_synchronize_pre_loadvm(CPUState *cpu)
{
}


@@ -1,8 +0,0 @@
obj-$(CONFIG_SOFTMMU) += tcg-all.o
obj-$(CONFIG_SOFTMMU) += cputlb.o
obj-y += tcg-runtime.o tcg-runtime-gvec.o
obj-y += cpu-exec.o cpu-exec-common.o translate-all.o
obj-y += translator.o
obj-$(CONFIG_USER_ONLY) += user-exec.o
obj-$(call lnot,$(CONFIG_SOFTMMU)) += user-exec-stub.o


@@ -1,368 +0,0 @@
/*
* Atomic helper templates
* Included from tcg-runtime.c and cputlb.c.
*
* Copyright (c) 2016 Red Hat, Inc
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#include "trace/mem.h"
#if DATA_SIZE == 16
# define SUFFIX o
# define DATA_TYPE Int128
# define BSWAP bswap128
# define SHIFT 4
#elif DATA_SIZE == 8
# define SUFFIX q
# define DATA_TYPE uint64_t
# define SDATA_TYPE int64_t
# define BSWAP bswap64
# define SHIFT 3
#elif DATA_SIZE == 4
# define SUFFIX l
# define DATA_TYPE uint32_t
# define SDATA_TYPE int32_t
# define BSWAP bswap32
# define SHIFT 2
#elif DATA_SIZE == 2
# define SUFFIX w
# define DATA_TYPE uint16_t
# define SDATA_TYPE int16_t
# define BSWAP bswap16
# define SHIFT 1
#elif DATA_SIZE == 1
# define SUFFIX b
# define DATA_TYPE uint8_t
# define SDATA_TYPE int8_t
# define BSWAP
# define SHIFT 0
#else
# error unsupported data size
#endif
#if DATA_SIZE >= 4
# define ABI_TYPE DATA_TYPE
#else
# define ABI_TYPE uint32_t
#endif
#define ATOMIC_TRACE_RMW do { \
uint8_t info = glue(trace_mem_build_info_no_se, MEND)(SHIFT, false); \
\
trace_guest_mem_before_exec(ENV_GET_CPU(env), addr, info); \
trace_guest_mem_before_exec(ENV_GET_CPU(env), addr, \
info | TRACE_MEM_ST); \
} while (0)
#define ATOMIC_TRACE_LD do { \
uint8_t info = glue(trace_mem_build_info_no_se, MEND)(SHIFT, false); \
\
trace_guest_mem_before_exec(ENV_GET_CPU(env), addr, info); \
} while (0)
# define ATOMIC_TRACE_ST do { \
uint8_t info = glue(trace_mem_build_info_no_se, MEND)(SHIFT, true); \
\
trace_guest_mem_before_exec(ENV_GET_CPU(env), addr, info); \
} while (0)
/* Define host-endian atomic operations. Note that END is used within
the ATOMIC_NAME macro, and redefined below. */
#if DATA_SIZE == 1
# define END
# define MEND _be /* either le or be would be fine */
#elif defined(HOST_WORDS_BIGENDIAN)
# define END _be
# define MEND _be
#else
# define END _le
# define MEND _le
#endif
ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
ABI_TYPE cmpv, ABI_TYPE newv EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
DATA_TYPE ret;
ATOMIC_TRACE_RMW;
#if DATA_SIZE == 16
ret = atomic16_cmpxchg(haddr, cmpv, newv);
#else
ret = atomic_cmpxchg__nocheck(haddr, cmpv, newv);
#endif
ATOMIC_MMU_CLEANUP;
return ret;
}
#if DATA_SIZE >= 16
#if HAVE_ATOMIC128
ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE val, *haddr = ATOMIC_MMU_LOOKUP;
ATOMIC_TRACE_LD;
val = atomic16_read(haddr);
ATOMIC_MMU_CLEANUP;
return val;
}
void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr,
ABI_TYPE val EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
ATOMIC_TRACE_ST;
atomic16_set(haddr, val);
ATOMIC_MMU_CLEANUP;
}
#endif
#else
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
ABI_TYPE val EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
DATA_TYPE ret;
ATOMIC_TRACE_RMW;
ret = atomic_xchg__nocheck(haddr, val);
ATOMIC_MMU_CLEANUP;
return ret;
}
#define GEN_ATOMIC_HELPER(X) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE val EXTRA_ARGS) \
{ \
ATOMIC_MMU_DECLS; \
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
DATA_TYPE ret; \
\
ATOMIC_TRACE_RMW; \
ret = atomic_##X(haddr, val); \
ATOMIC_MMU_CLEANUP; \
return ret; \
}
GEN_ATOMIC_HELPER(fetch_add)
GEN_ATOMIC_HELPER(fetch_and)
GEN_ATOMIC_HELPER(fetch_or)
GEN_ATOMIC_HELPER(fetch_xor)
GEN_ATOMIC_HELPER(add_fetch)
GEN_ATOMIC_HELPER(and_fetch)
GEN_ATOMIC_HELPER(or_fetch)
GEN_ATOMIC_HELPER(xor_fetch)
#undef GEN_ATOMIC_HELPER
/* These helpers are, as a whole, full barriers. Within the helper,
* the leading barrier is explicit and the trailing barrier is within
* cmpxchg primitive.
*
* Trace this load + RMW loop as a single RMW op. This way, regardless
* of CF_PARALLEL's value, we'll trace just a read and a write.
*/
#define GEN_ATOMIC_HELPER_FN(X, FN, XDATA_TYPE, RET) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE xval EXTRA_ARGS) \
{ \
ATOMIC_MMU_DECLS; \
XDATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
XDATA_TYPE cmp, old, new, val = xval; \
\
ATOMIC_TRACE_RMW; \
smp_mb(); \
cmp = atomic_read__nocheck(haddr); \
do { \
old = cmp; new = FN(old, val); \
cmp = atomic_cmpxchg__nocheck(haddr, old, new); \
} while (cmp != old); \
ATOMIC_MMU_CLEANUP; \
return RET; \
}
GEN_ATOMIC_HELPER_FN(fetch_smin, MIN, SDATA_TYPE, old)
GEN_ATOMIC_HELPER_FN(fetch_umin, MIN, DATA_TYPE, old)
GEN_ATOMIC_HELPER_FN(fetch_smax, MAX, SDATA_TYPE, old)
GEN_ATOMIC_HELPER_FN(fetch_umax, MAX, DATA_TYPE, old)
GEN_ATOMIC_HELPER_FN(smin_fetch, MIN, SDATA_TYPE, new)
GEN_ATOMIC_HELPER_FN(umin_fetch, MIN, DATA_TYPE, new)
GEN_ATOMIC_HELPER_FN(smax_fetch, MAX, SDATA_TYPE, new)
GEN_ATOMIC_HELPER_FN(umax_fetch, MAX, DATA_TYPE, new)
#undef GEN_ATOMIC_HELPER_FN
#endif /* DATA SIZE >= 16 */
#undef END
#undef MEND
#if DATA_SIZE > 1
/* Define reverse-host-endian atomic operations. Note that END is used
within the ATOMIC_NAME macro. */
#ifdef HOST_WORDS_BIGENDIAN
# define END _le
# define MEND _le
#else
# define END _be
# define MEND _be
#endif
ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
ABI_TYPE cmpv, ABI_TYPE newv EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
DATA_TYPE ret;
ATOMIC_TRACE_RMW;
#if DATA_SIZE == 16
ret = atomic16_cmpxchg(haddr, BSWAP(cmpv), BSWAP(newv));
#else
ret = atomic_cmpxchg__nocheck(haddr, BSWAP(cmpv), BSWAP(newv));
#endif
ATOMIC_MMU_CLEANUP;
return BSWAP(ret);
}
#if DATA_SIZE >= 16
#if HAVE_ATOMIC128
ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE val, *haddr = ATOMIC_MMU_LOOKUP;
ATOMIC_TRACE_LD;
val = atomic16_read(haddr);
ATOMIC_MMU_CLEANUP;
return BSWAP(val);
}
void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr,
ABI_TYPE val EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
ATOMIC_TRACE_ST;
val = BSWAP(val);
atomic16_set(haddr, val);
ATOMIC_MMU_CLEANUP;
}
#endif
#else
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
ABI_TYPE val EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
ABI_TYPE ret;
ATOMIC_TRACE_RMW;
ret = atomic_xchg__nocheck(haddr, BSWAP(val));
ATOMIC_MMU_CLEANUP;
return BSWAP(ret);
}
#define GEN_ATOMIC_HELPER(X) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE val EXTRA_ARGS) \
{ \
ATOMIC_MMU_DECLS; \
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
DATA_TYPE ret; \
\
ATOMIC_TRACE_RMW; \
ret = atomic_##X(haddr, BSWAP(val)); \
ATOMIC_MMU_CLEANUP; \
return BSWAP(ret); \
}
GEN_ATOMIC_HELPER(fetch_and)
GEN_ATOMIC_HELPER(fetch_or)
GEN_ATOMIC_HELPER(fetch_xor)
GEN_ATOMIC_HELPER(and_fetch)
GEN_ATOMIC_HELPER(or_fetch)
GEN_ATOMIC_HELPER(xor_fetch)
#undef GEN_ATOMIC_HELPER
/* These helpers are, as a whole, full barriers. Within the helper,
* the leading barrier is explicit and the trailing barrier is within
* cmpxchg primitive.
*
* Trace this load + RMW loop as a single RMW op. This way, regardless
* of CF_PARALLEL's value, we'll trace just a read and a write.
*/
#define GEN_ATOMIC_HELPER_FN(X, FN, XDATA_TYPE, RET) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE xval EXTRA_ARGS) \
{ \
ATOMIC_MMU_DECLS; \
XDATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
XDATA_TYPE ldo, ldn, old, new, val = xval; \
\
ATOMIC_TRACE_RMW; \
smp_mb(); \
ldn = atomic_read__nocheck(haddr); \
do { \
ldo = ldn; old = BSWAP(ldo); new = FN(old, val); \
ldn = atomic_cmpxchg__nocheck(haddr, ldo, BSWAP(new)); \
} while (ldo != ldn); \
ATOMIC_MMU_CLEANUP; \
return RET; \
}
GEN_ATOMIC_HELPER_FN(fetch_smin, MIN, SDATA_TYPE, old)
GEN_ATOMIC_HELPER_FN(fetch_umin, MIN, DATA_TYPE, old)
GEN_ATOMIC_HELPER_FN(fetch_smax, MAX, SDATA_TYPE, old)
GEN_ATOMIC_HELPER_FN(fetch_umax, MAX, DATA_TYPE, old)
GEN_ATOMIC_HELPER_FN(smin_fetch, MIN, SDATA_TYPE, new)
GEN_ATOMIC_HELPER_FN(umin_fetch, MIN, DATA_TYPE, new)
GEN_ATOMIC_HELPER_FN(smax_fetch, MAX, SDATA_TYPE, new)
GEN_ATOMIC_HELPER_FN(umax_fetch, MAX, DATA_TYPE, new)
/* Note that for addition, we need to use a separate cmpxchg loop instead
of bswaps for the reverse-host-endian helpers. */
#define ADD(X, Y) (X + Y)
GEN_ATOMIC_HELPER_FN(fetch_add, ADD, DATA_TYPE, old)
GEN_ATOMIC_HELPER_FN(add_fetch, ADD, DATA_TYPE, new)
#undef ADD
#undef GEN_ATOMIC_HELPER_FN
#endif /* DATA_SIZE >= 16 */
#undef END
#undef MEND
#endif /* DATA_SIZE > 1 */
#undef ATOMIC_TRACE_ST
#undef ATOMIC_TRACE_LD
#undef ATOMIC_TRACE_RMW
#undef BSWAP
#undef ABI_TYPE
#undef DATA_TYPE
#undef SDATA_TYPE
#undef SUFFIX
#undef DATA_SIZE
#undef SHIFT
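The GEN_ATOMIC_HELPER_FN template above implements fetch-and-modify operations as a compare-and-swap retry loop: read the current value, compute the new one, and retry until no other thread raced with the update. A minimal standalone sketch of the same pattern using C11 atomics (not QEMU code; fetch_add_via_cmpxchg is an illustrative name):
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
/* Fetch-and-add expressed as a cmpxchg retry loop, like the helpers above. */
static uint32_t fetch_add_via_cmpxchg(_Atomic uint32_t *addr, uint32_t val)
{
    uint32_t old = atomic_load(addr);
    uint32_t new;
    do {
        new = old + val;
        /* On failure, 'old' is reloaded with the value another thread stored. */
    } while (!atomic_compare_exchange_weak(addr, &old, new));
    return old;
}
int main(void)
{
    _Atomic uint32_t counter = 40;
    uint32_t prev = fetch_add_via_cmpxchg(&counter, 2);
    printf("prev=%u now=%u\n", prev, (unsigned)atomic_load(&counter));
    return 0;
}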


@@ -1,740 +0,0 @@
/*
* emulator main execution loop
*
* Copyright (c) 2003-2005 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "cpu.h"
#include "trace.h"
#include "disas/disas.h"
#include "exec/exec-all.h"
#include "tcg.h"
#include "qemu/atomic.h"
#include "sysemu/qtest.h"
#include "qemu/timer.h"
#include "qemu/rcu.h"
#include "exec/tb-hash.h"
#include "exec/tb-lookup.h"
#include "exec/log.h"
#include "qemu/main-loop.h"
#if defined(TARGET_I386) && !defined(CONFIG_USER_ONLY)
#include "hw/i386/apic.h"
#endif
#include "sysemu/cpus.h"
#include "sysemu/replay.h"
/* -icount align implementation. */
typedef struct SyncClocks {
int64_t diff_clk;
int64_t last_cpu_icount;
int64_t realtime_clock;
} SyncClocks;
#if !defined(CONFIG_USER_ONLY)
/* Allow the guest to have a max 3ms advance.
* The difference between the 2 clocks could therefore
* oscillate around 0.
*/
#define VM_CLOCK_ADVANCE 3000000
#define THRESHOLD_REDUCE 1.5
#define MAX_DELAY_PRINT_RATE 2000000000LL
#define MAX_NB_PRINTS 100
static void align_clocks(SyncClocks *sc, const CPUState *cpu)
{
int64_t cpu_icount;
if (!icount_align_option) {
return;
}
cpu_icount = cpu->icount_extra + cpu->icount_decr.u16.low;
sc->diff_clk += cpu_icount_to_ns(sc->last_cpu_icount - cpu_icount);
sc->last_cpu_icount = cpu_icount;
if (sc->diff_clk > VM_CLOCK_ADVANCE) {
#ifndef _WIN32
struct timespec sleep_delay, rem_delay;
sleep_delay.tv_sec = sc->diff_clk / 1000000000LL;
sleep_delay.tv_nsec = sc->diff_clk % 1000000000LL;
if (nanosleep(&sleep_delay, &rem_delay) < 0) {
sc->diff_clk = rem_delay.tv_sec * 1000000000LL + rem_delay.tv_nsec;
} else {
sc->diff_clk = 0;
}
#else
Sleep(sc->diff_clk / SCALE_MS);
sc->diff_clk = 0;
#endif
}
}
static void print_delay(const SyncClocks *sc)
{
static float threshold_delay;
static int64_t last_realtime_clock;
static int nb_prints;
if (icount_align_option &&
sc->realtime_clock - last_realtime_clock >= MAX_DELAY_PRINT_RATE &&
nb_prints < MAX_NB_PRINTS) {
if ((-sc->diff_clk / (float)1000000000LL > threshold_delay) ||
(-sc->diff_clk / (float)1000000000LL <
(threshold_delay - THRESHOLD_REDUCE))) {
threshold_delay = (-sc->diff_clk / 1000000000LL) + 1;
printf("Warning: The guest is now late by %.1f to %.1f seconds\n",
threshold_delay - 1,
threshold_delay);
nb_prints++;
last_realtime_clock = sc->realtime_clock;
}
}
}
static void init_delay_params(SyncClocks *sc,
const CPUState *cpu)
{
if (!icount_align_option) {
return;
}
sc->realtime_clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT);
sc->diff_clk = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) - sc->realtime_clock;
sc->last_cpu_icount = cpu->icount_extra + cpu->icount_decr.u16.low;
if (sc->diff_clk < max_delay) {
max_delay = sc->diff_clk;
}
if (sc->diff_clk > max_advance) {
max_advance = sc->diff_clk;
}
/* Print every 2s max if the guest is late. We limit the number
of printed messages to NB_PRINT_MAX(currently 100) */
print_delay(sc);
}
#else
static void align_clocks(SyncClocks *sc, const CPUState *cpu)
{
}
static void init_delay_params(SyncClocks *sc, const CPUState *cpu)
{
}
#endif /* CONFIG USER ONLY */
/* Execute a TB, and fix up the CPU state afterwards if necessary */
static inline tcg_target_ulong cpu_tb_exec(CPUState *cpu, TranslationBlock *itb)
{
CPUArchState *env = cpu->env_ptr;
uintptr_t ret;
TranslationBlock *last_tb;
int tb_exit;
uint8_t *tb_ptr = itb->tc.ptr;
qemu_log_mask_and_addr(CPU_LOG_EXEC, itb->pc,
"Trace %d: %p ["
TARGET_FMT_lx "/" TARGET_FMT_lx "/%#x] %s\n",
cpu->cpu_index, itb->tc.ptr,
itb->cs_base, itb->pc, itb->flags,
lookup_symbol(itb->pc));
#if defined(DEBUG_DISAS)
if (qemu_loglevel_mask(CPU_LOG_TB_CPU)
&& qemu_log_in_addr_range(itb->pc)) {
qemu_log_lock();
int flags = 0;
if (qemu_loglevel_mask(CPU_LOG_TB_FPU)) {
flags |= CPU_DUMP_FPU;
}
#if defined(TARGET_I386)
flags |= CPU_DUMP_CCOP;
#endif
log_cpu_state(cpu, flags);
qemu_log_unlock();
}
#endif /* DEBUG_DISAS */
cpu->can_do_io = !use_icount;
ret = tcg_qemu_tb_exec(env, tb_ptr);
cpu->can_do_io = 1;
last_tb = (TranslationBlock *)(ret & ~TB_EXIT_MASK);
tb_exit = ret & TB_EXIT_MASK;
trace_exec_tb_exit(last_tb, tb_exit);
if (tb_exit > TB_EXIT_IDX1) {
/* We didn't start executing this TB (eg because the instruction
* counter hit zero); we must restore the guest PC to the address
* of the start of the TB.
*/
CPUClass *cc = CPU_GET_CLASS(cpu);
qemu_log_mask_and_addr(CPU_LOG_EXEC, last_tb->pc,
"Stopped execution of TB chain before %p ["
TARGET_FMT_lx "] %s\n",
last_tb->tc.ptr, last_tb->pc,
lookup_symbol(last_tb->pc));
if (cc->synchronize_from_tb) {
cc->synchronize_from_tb(cpu, last_tb);
} else {
assert(cc->set_pc);
cc->set_pc(cpu, last_tb->pc);
}
}
return ret;
}
#ifndef CONFIG_USER_ONLY
/* Execute the code without caching the generated code. An interpreter
could be used if available. */
static void cpu_exec_nocache(CPUState *cpu, int max_cycles,
TranslationBlock *orig_tb, bool ignore_icount)
{
TranslationBlock *tb;
uint32_t cflags = curr_cflags() | CF_NOCACHE;
if (ignore_icount) {
cflags &= ~CF_USE_ICOUNT;
}
/* Should never happen.
We only end up here when an existing TB is too long. */
cflags |= MIN(max_cycles, CF_COUNT_MASK);
mmap_lock();
tb = tb_gen_code(cpu, orig_tb->pc, orig_tb->cs_base,
orig_tb->flags, cflags);
tb->orig_tb = orig_tb;
mmap_unlock();
/* execute the generated code */
trace_exec_tb_nocache(tb, tb->pc);
cpu_tb_exec(cpu, tb);
mmap_lock();
tb_phys_invalidate(tb, -1);
mmap_unlock();
tcg_tb_remove(tb);
}
#endif
void cpu_exec_step_atomic(CPUState *cpu)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
TranslationBlock *tb;
target_ulong cs_base, pc;
uint32_t flags;
uint32_t cflags = 1;
uint32_t cf_mask = cflags & CF_HASH_MASK;
/* volatile because we modify it between setjmp and longjmp */
volatile bool in_exclusive_region = false;
if (sigsetjmp(cpu->jmp_env, 0) == 0) {
tb = tb_lookup__cpu_state(cpu, &pc, &cs_base, &flags, cf_mask);
if (tb == NULL) {
mmap_lock();
tb = tb_gen_code(cpu, pc, cs_base, flags, cflags);
mmap_unlock();
}
start_exclusive();
/* Since we got here, we know that parallel_cpus must be true. */
parallel_cpus = false;
in_exclusive_region = true;
cc->cpu_exec_enter(cpu);
/* execute the generated code */
trace_exec_tb(tb, pc);
cpu_tb_exec(cpu, tb);
cc->cpu_exec_exit(cpu);
} else {
/*
* The mmap_lock is dropped by tb_gen_code if it runs out of
* memory.
*/
#ifndef CONFIG_SOFTMMU
tcg_debug_assert(!have_mmap_lock());
#endif
if (qemu_mutex_iothread_locked()) {
qemu_mutex_unlock_iothread();
}
assert_no_pages_locked();
}
if (in_exclusive_region) {
/* We might longjump out of either the codegen or the
* execution, so must make sure we only end the exclusive
* region if we started it.
*/
parallel_cpus = true;
end_exclusive();
}
}
struct tb_desc {
target_ulong pc;
target_ulong cs_base;
CPUArchState *env;
tb_page_addr_t phys_page1;
uint32_t flags;
uint32_t cf_mask;
uint32_t trace_vcpu_dstate;
};
static bool tb_lookup_cmp(const void *p, const void *d)
{
const TranslationBlock *tb = p;
const struct tb_desc *desc = d;
if (tb->pc == desc->pc &&
tb->page_addr[0] == desc->phys_page1 &&
tb->cs_base == desc->cs_base &&
tb->flags == desc->flags &&
tb->trace_vcpu_dstate == desc->trace_vcpu_dstate &&
(tb_cflags(tb) & (CF_HASH_MASK | CF_INVALID)) == desc->cf_mask) {
/* check next page if needed */
if (tb->page_addr[1] == -1) {
return true;
} else {
tb_page_addr_t phys_page2;
target_ulong virt_page2;
virt_page2 = (desc->pc & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
phys_page2 = get_page_addr_code(desc->env, virt_page2);
if (tb->page_addr[1] == phys_page2) {
return true;
}
}
}
return false;
}
TranslationBlock *tb_htable_lookup(CPUState *cpu, target_ulong pc,
target_ulong cs_base, uint32_t flags,
uint32_t cf_mask)
{
tb_page_addr_t phys_pc;
struct tb_desc desc;
uint32_t h;
desc.env = (CPUArchState *)cpu->env_ptr;
desc.cs_base = cs_base;
desc.flags = flags;
desc.cf_mask = cf_mask;
desc.trace_vcpu_dstate = *cpu->trace_dstate;
desc.pc = pc;
phys_pc = get_page_addr_code(desc.env, pc);
if (phys_pc == -1) {
return NULL;
}
desc.phys_page1 = phys_pc & TARGET_PAGE_MASK;
h = tb_hash_func(phys_pc, pc, flags, cf_mask, *cpu->trace_dstate);
return qht_lookup_custom(&tb_ctx.htable, &desc, h, tb_lookup_cmp);
}
void tb_set_jmp_target(TranslationBlock *tb, int n, uintptr_t addr)
{
if (TCG_TARGET_HAS_direct_jump) {
uintptr_t offset = tb->jmp_target_arg[n];
uintptr_t tc_ptr = (uintptr_t)tb->tc.ptr;
tb_target_set_jmp_target(tc_ptr, tc_ptr + offset, addr);
} else {
tb->jmp_target_arg[n] = addr;
}
}
static inline void tb_add_jump(TranslationBlock *tb, int n,
TranslationBlock *tb_next)
{
uintptr_t old;
assert(n < ARRAY_SIZE(tb->jmp_list_next));
qemu_spin_lock(&tb_next->jmp_lock);
/* make sure the destination TB is valid */
if (tb_next->cflags & CF_INVALID) {
goto out_unlock_next;
}
/* Atomically claim the jump destination slot only if it was NULL */
old = atomic_cmpxchg(&tb->jmp_dest[n], (uintptr_t)NULL, (uintptr_t)tb_next);
if (old) {
goto out_unlock_next;
}
/* patch the native jump address */
tb_set_jmp_target(tb, n, (uintptr_t)tb_next->tc.ptr);
/* add in TB jmp list */
tb->jmp_list_next[n] = tb_next->jmp_list_head;
tb_next->jmp_list_head = (uintptr_t)tb | n;
qemu_spin_unlock(&tb_next->jmp_lock);
qemu_log_mask_and_addr(CPU_LOG_EXEC, tb->pc,
"Linking TBs %p [" TARGET_FMT_lx
"] index %d -> %p [" TARGET_FMT_lx "]\n",
tb->tc.ptr, tb->pc, n,
tb_next->tc.ptr, tb_next->pc);
return;
out_unlock_next:
qemu_spin_unlock(&tb_next->jmp_lock);
return;
}
static inline TranslationBlock *tb_find(CPUState *cpu,
TranslationBlock *last_tb,
int tb_exit, uint32_t cf_mask)
{
TranslationBlock *tb;
target_ulong cs_base, pc;
uint32_t flags;
tb = tb_lookup__cpu_state(cpu, &pc, &cs_base, &flags, cf_mask);
if (tb == NULL) {
mmap_lock();
tb = tb_gen_code(cpu, pc, cs_base, flags, cf_mask);
mmap_unlock();
/* We add the TB in the virtual pc hash table for the fast lookup */
atomic_set(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)], tb);
}
#ifndef CONFIG_USER_ONLY
/* We don't take care of direct jumps when address mapping changes in
* system emulation. So it's not safe to make a direct jump to a TB
* spanning two pages because the mapping for the second page can change.
*/
if (tb->page_addr[1] != -1) {
last_tb = NULL;
}
#endif
/* See if we can patch the calling TB. */
if (last_tb) {
tb_add_jump(last_tb, tb_exit, tb);
}
return tb;
}
static inline bool cpu_handle_halt(CPUState *cpu)
{
if (cpu->halted) {
#if defined(TARGET_I386) && !defined(CONFIG_USER_ONLY)
if ((cpu->interrupt_request & CPU_INTERRUPT_POLL)
&& replay_interrupt()) {
X86CPU *x86_cpu = X86_CPU(cpu);
qemu_mutex_lock_iothread();
apic_poll_irq(x86_cpu->apic_state);
cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
qemu_mutex_unlock_iothread();
}
#endif
if (!cpu_has_work(cpu)) {
return true;
}
cpu->halted = 0;
}
return false;
}
static inline void cpu_handle_debug_exception(CPUState *cpu)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
CPUWatchpoint *wp;
if (!cpu->watchpoint_hit) {
QTAILQ_FOREACH(wp, &cpu->watchpoints, entry) {
wp->flags &= ~BP_WATCHPOINT_HIT;
}
}
cc->debug_excp_handler(cpu);
}
static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
{
if (cpu->exception_index < 0) {
#ifndef CONFIG_USER_ONLY
if (replay_has_exception()
&& cpu->icount_decr.u16.low + cpu->icount_extra == 0) {
/* try to cause an exception pending in the log */
cpu_exec_nocache(cpu, 1, tb_find(cpu, NULL, 0, curr_cflags()), true);
}
#endif
if (cpu->exception_index < 0) {
return false;
}
}
if (cpu->exception_index >= EXCP_INTERRUPT) {
/* exit request from the cpu execution loop */
*ret = cpu->exception_index;
if (*ret == EXCP_DEBUG) {
cpu_handle_debug_exception(cpu);
}
cpu->exception_index = -1;
return true;
} else {
#if defined(CONFIG_USER_ONLY)
/* if user mode only, we simulate a fake exception
which will be handled outside the cpu execution
loop */
#if defined(TARGET_I386)
CPUClass *cc = CPU_GET_CLASS(cpu);
cc->do_interrupt(cpu);
#endif
*ret = cpu->exception_index;
cpu->exception_index = -1;
return true;
#else
if (replay_exception()) {
CPUClass *cc = CPU_GET_CLASS(cpu);
qemu_mutex_lock_iothread();
cc->do_interrupt(cpu);
qemu_mutex_unlock_iothread();
cpu->exception_index = -1;
} else if (!replay_has_interrupt()) {
/* give a chance to iothread in replay mode */
*ret = EXCP_INTERRUPT;
return true;
}
#endif
}
return false;
}
static inline bool cpu_handle_interrupt(CPUState *cpu,
TranslationBlock **last_tb)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
/* Clear the interrupt flag now since we're processing
* cpu->interrupt_request and cpu->exit_request.
* Ensure zeroing happens before reading cpu->exit_request or
* cpu->interrupt_request (see also smp_wmb in cpu_exit())
*/
atomic_mb_set(&cpu->icount_decr.u16.high, 0);
if (unlikely(atomic_read(&cpu->interrupt_request))) {
int interrupt_request;
qemu_mutex_lock_iothread();
interrupt_request = cpu->interrupt_request;
if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
/* Mask out external interrupts for this step. */
interrupt_request &= ~CPU_INTERRUPT_SSTEP_MASK;
}
if (interrupt_request & CPU_INTERRUPT_DEBUG) {
cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
cpu->exception_index = EXCP_DEBUG;
qemu_mutex_unlock_iothread();
return true;
}
if (replay_mode == REPLAY_MODE_PLAY && !replay_has_interrupt()) {
/* Do nothing */
} else if (interrupt_request & CPU_INTERRUPT_HALT) {
replay_interrupt();
cpu->interrupt_request &= ~CPU_INTERRUPT_HALT;
cpu->halted = 1;
cpu->exception_index = EXCP_HLT;
qemu_mutex_unlock_iothread();
return true;
}
#if defined(TARGET_I386)
else if (interrupt_request & CPU_INTERRUPT_INIT) {
X86CPU *x86_cpu = X86_CPU(cpu);
CPUArchState *env = &x86_cpu->env;
replay_interrupt();
cpu_svm_check_intercept_param(env, SVM_EXIT_INIT, 0, 0);
do_cpu_init(x86_cpu);
cpu->exception_index = EXCP_HALTED;
qemu_mutex_unlock_iothread();
return true;
}
#else
else if (interrupt_request & CPU_INTERRUPT_RESET) {
replay_interrupt();
cpu_reset(cpu);
qemu_mutex_unlock_iothread();
return true;
}
#endif
/* The target hook has 3 exit conditions:
False when the interrupt isn't processed,
True when it is, and we should restart on a new TB,
and via longjmp via cpu_loop_exit. */
else {
if (cc->cpu_exec_interrupt(cpu, interrupt_request)) {
replay_interrupt();
cpu->exception_index = -1;
*last_tb = NULL;
}
/* The target hook may have updated the 'cpu->interrupt_request';
* reload the 'interrupt_request' value */
interrupt_request = cpu->interrupt_request;
}
if (interrupt_request & CPU_INTERRUPT_EXITTB) {
cpu->interrupt_request &= ~CPU_INTERRUPT_EXITTB;
/* ensure that no TB jump will be modified as
the program flow was changed */
*last_tb = NULL;
}
/* If we exit via cpu_loop_exit/longjmp it is reset in cpu_exec */
qemu_mutex_unlock_iothread();
}
/* Finally, check if we need to exit to the main loop. */
if (unlikely(atomic_read(&cpu->exit_request)
|| (use_icount && cpu->icount_decr.u16.low + cpu->icount_extra == 0))) {
atomic_set(&cpu->exit_request, 0);
if (cpu->exception_index == -1) {
cpu->exception_index = EXCP_INTERRUPT;
}
return true;
}
return false;
}
static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
TranslationBlock **last_tb, int *tb_exit)
{
uintptr_t ret;
int32_t insns_left;
trace_exec_tb(tb, tb->pc);
ret = cpu_tb_exec(cpu, tb);
tb = (TranslationBlock *)(ret & ~TB_EXIT_MASK);
*tb_exit = ret & TB_EXIT_MASK;
if (*tb_exit != TB_EXIT_REQUESTED) {
*last_tb = tb;
return;
}
*last_tb = NULL;
insns_left = atomic_read(&cpu->icount_decr.u32);
if (insns_left < 0) {
/* Something asked us to stop executing chained TBs; just
* continue round the main loop. Whatever requested the exit
* will also have set something else (eg exit_request or
* interrupt_request) which will be handled by
* cpu_handle_interrupt. cpu_handle_interrupt will also
* clear cpu->icount_decr.u16.high.
*/
return;
}
/* Instruction counter expired. */
assert(use_icount);
#ifndef CONFIG_USER_ONLY
/* Ensure global icount has gone forward */
cpu_update_icount(cpu);
/* Refill decrementer and continue execution. */
insns_left = MIN(0xffff, cpu->icount_budget);
cpu->icount_decr.u16.low = insns_left;
cpu->icount_extra = cpu->icount_budget - insns_left;
if (!cpu->icount_extra) {
/* Execute any remaining instructions, then let the main loop
* handle the next event.
*/
if (insns_left > 0) {
cpu_exec_nocache(cpu, insns_left, tb, false);
}
}
#endif
}
/* main execution loop */
int cpu_exec(CPUState *cpu)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
int ret;
SyncClocks sc = { 0 };
/* replay_interrupt may need current_cpu */
current_cpu = cpu;
if (cpu_handle_halt(cpu)) {
return EXCP_HALTED;
}
rcu_read_lock();
cc->cpu_exec_enter(cpu);
/* Calculate difference between guest clock and host clock.
* This delay includes the delay of the last cycle, so
* what we have to do is sleep until it is 0. As for the
* advance/delay we gain here, we try to fix it next time.
*/
init_delay_params(&sc, cpu);
/* prepare setjmp context for exception handling */
if (sigsetjmp(cpu->jmp_env, 0) != 0) {
#if defined(__clang__) || !QEMU_GNUC_PREREQ(4, 6)
/* Some compilers wrongly smash all local variables after
* siglongjmp. There were bug reports for gcc 4.5.0 and clang.
* Reload essential local variables here for those compilers.
* Newer versions of gcc would complain about this code (-Wclobbered). */
cpu = current_cpu;
cc = CPU_GET_CLASS(cpu);
#else /* buggy compiler */
/* Assert that the compiler does not smash local variables. */
g_assert(cpu == current_cpu);
g_assert(cc == CPU_GET_CLASS(cpu));
#endif /* buggy compiler */
#ifndef CONFIG_SOFTMMU
tcg_debug_assert(!have_mmap_lock());
#endif
if (qemu_mutex_iothread_locked()) {
qemu_mutex_unlock_iothread();
}
assert_no_pages_locked();
}
/* if an exception is pending, we execute it here */
while (!cpu_handle_exception(cpu, &ret)) {
TranslationBlock *last_tb = NULL;
int tb_exit = 0;
while (!cpu_handle_interrupt(cpu, &last_tb)) {
uint32_t cflags = cpu->cflags_next_tb;
TranslationBlock *tb;
/* When requested, use an exact setting for cflags for the next
execution. This is used for icount, precise smc, and stop-
after-access watchpoints. Since this request should never
have CF_INVALID set, -1 is a convenient invalid value that
does not require tcg headers for cpu_common_reset. */
if (cflags == -1) {
cflags = curr_cflags();
} else {
cpu->cflags_next_tb = -1;
}
tb = tb_find(cpu, last_tb, tb_exit, cflags);
cpu_loop_exec_tb(cpu, tb, &last_tb, &tb_exit);
/* Try to align the host and virtual clocks
if the guest is in advance */
align_clocks(&sc, cpu);
}
}
cc->cpu_exec_exit(cpu);
rcu_read_unlock();
return ret;
}
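cpu_exec() above relies on sigsetjmp() and cpu_loop_exit()'s longjmp to unwind back into the execution loop when an exception is raised deep inside generated code. A minimal standalone sketch of that control-flow pattern, using plain setjmp()/longjmp() rather than QEMU's helpers (names are illustrative only):
#include <setjmp.h>
#include <stdio.h>
/* Jump target recorded by the "execution loop", analogous to cpu->jmp_env. */
static jmp_buf exec_env;
static void deep_guest_helper(int fault)
{
    if (fault) {
        longjmp(exec_env, 1);   /* analogous to cpu_loop_exit() */
    }
}
int main(void)
{
    if (setjmp(exec_env) != 0) {
        /* We arrive here after the longjmp, as cpu_exec() does after a fault. */
        puts("unwound back to the execution loop");
        return 0;
    }
    deep_guest_helper(1);
    puts("never reached");
    return 0;
}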

File diff suppressed because it is too large


@@ -1,454 +0,0 @@
/*
* Software MMU support
*
* Generate helpers used by TCG for qemu_ld/st ops and code load
* functions.
*
* Included from target op helpers and exec.c.
*
* Copyright (c) 2003 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#if DATA_SIZE == 8
#define SUFFIX q
#define LSUFFIX q
#define SDATA_TYPE int64_t
#define DATA_TYPE uint64_t
#elif DATA_SIZE == 4
#define SUFFIX l
#define LSUFFIX l
#define SDATA_TYPE int32_t
#define DATA_TYPE uint32_t
#elif DATA_SIZE == 2
#define SUFFIX w
#define LSUFFIX uw
#define SDATA_TYPE int16_t
#define DATA_TYPE uint16_t
#elif DATA_SIZE == 1
#define SUFFIX b
#define LSUFFIX ub
#define SDATA_TYPE int8_t
#define DATA_TYPE uint8_t
#else
#error unsupported data size
#endif
/* For the benefit of TCG generated code, we want to avoid the complication
of ABI-specific return type promotion and always return a value extended
to the register size of the host. This is tcg_target_long, except in the
case of a 32-bit host and 64-bit data, and for that we always have
uint64_t. Don't bother with this widened value for SOFTMMU_CODE_ACCESS. */
#if defined(SOFTMMU_CODE_ACCESS) || DATA_SIZE == 8
# define WORD_TYPE DATA_TYPE
# define USUFFIX SUFFIX
#else
# define WORD_TYPE tcg_target_ulong
# define USUFFIX glue(u, SUFFIX)
# define SSUFFIX glue(s, SUFFIX)
#endif
#ifdef SOFTMMU_CODE_ACCESS
#define READ_ACCESS_TYPE MMU_INST_FETCH
#define ADDR_READ addr_code
#else
#define READ_ACCESS_TYPE MMU_DATA_LOAD
#define ADDR_READ addr_read
#endif
#if DATA_SIZE == 8
# define BSWAP(X) bswap64(X)
#elif DATA_SIZE == 4
# define BSWAP(X) bswap32(X)
#elif DATA_SIZE == 2
# define BSWAP(X) bswap16(X)
#else
# define BSWAP(X) (X)
#endif
#if DATA_SIZE == 1
# define helper_le_ld_name glue(glue(helper_ret_ld, USUFFIX), MMUSUFFIX)
# define helper_be_ld_name helper_le_ld_name
# define helper_le_lds_name glue(glue(helper_ret_ld, SSUFFIX), MMUSUFFIX)
# define helper_be_lds_name helper_le_lds_name
# define helper_le_st_name glue(glue(helper_ret_st, SUFFIX), MMUSUFFIX)
# define helper_be_st_name helper_le_st_name
#else
# define helper_le_ld_name glue(glue(helper_le_ld, USUFFIX), MMUSUFFIX)
# define helper_be_ld_name glue(glue(helper_be_ld, USUFFIX), MMUSUFFIX)
# define helper_le_lds_name glue(glue(helper_le_ld, SSUFFIX), MMUSUFFIX)
# define helper_be_lds_name glue(glue(helper_be_ld, SSUFFIX), MMUSUFFIX)
# define helper_le_st_name glue(glue(helper_le_st, SUFFIX), MMUSUFFIX)
# define helper_be_st_name glue(glue(helper_be_st, SUFFIX), MMUSUFFIX)
#endif
#ifndef SOFTMMU_CODE_ACCESS
static inline DATA_TYPE glue(io_read, SUFFIX)(CPUArchState *env,
size_t mmu_idx, size_t index,
target_ulong addr,
uintptr_t retaddr,
bool recheck,
MMUAccessType access_type)
{
CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, recheck,
access_type, DATA_SIZE);
}
#endif
WORD_TYPE helper_le_ld_name(CPUArchState *env, target_ulong addr,
TCGMemOpIdx oi, uintptr_t retaddr)
{
uintptr_t mmu_idx = get_mmuidx(oi);
uintptr_t index = tlb_index(env, mmu_idx, addr);
CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
target_ulong tlb_addr = entry->ADDR_READ;
unsigned a_bits = get_alignment_bits(get_memop(oi));
uintptr_t haddr;
DATA_TYPE res;
if (addr & ((1 << a_bits) - 1)) {
cpu_unaligned_access(ENV_GET_CPU(env), addr, READ_ACCESS_TYPE,
mmu_idx, retaddr);
}
/* If the TLB entry is for a different page, reload and try again. */
if (!tlb_hit(tlb_addr, addr)) {
if (!VICTIM_TLB_HIT(ADDR_READ, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, READ_ACCESS_TYPE,
mmu_idx, retaddr);
index = tlb_index(env, mmu_idx, addr);
entry = tlb_entry(env, mmu_idx, addr);
}
tlb_addr = entry->ADDR_READ;
}
/* Handle an IO access. */
if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
if ((addr & (DATA_SIZE - 1)) != 0) {
goto do_unaligned_access;
}
/* ??? Note that the io helpers always read data in the target
byte ordering. We should push the LE/BE request down into io. */
res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
tlb_addr & TLB_RECHECK,
READ_ACCESS_TYPE);
res = TGT_LE(res);
return res;
}
/* Handle slow unaligned access (it spans two pages or IO). */
if (DATA_SIZE > 1
&& unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
>= TARGET_PAGE_SIZE)) {
target_ulong addr1, addr2;
DATA_TYPE res1, res2;
unsigned shift;
do_unaligned_access:
addr1 = addr & ~(DATA_SIZE - 1);
addr2 = addr1 + DATA_SIZE;
res1 = helper_le_ld_name(env, addr1, oi, retaddr);
res2 = helper_le_ld_name(env, addr2, oi, retaddr);
shift = (addr & (DATA_SIZE - 1)) * 8;
/* Little-endian combine. */
res = (res1 >> shift) | (res2 << ((DATA_SIZE * 8) - shift));
return res;
}
haddr = addr + entry->addend;
#if DATA_SIZE == 1
res = glue(glue(ld, LSUFFIX), _p)((uint8_t *)haddr);
#else
res = glue(glue(ld, LSUFFIX), _le_p)((uint8_t *)haddr);
#endif
return res;
}
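The slow path above reconstructs an unaligned little-endian load from the two accesses that straddle the split point. The following standalone sketch illustrates that recombination for a fixed 32-bit access; the helper name and the assumption that res1/res2 were loaded at the aligned-down address and at the next word are illustrative, not part of the file shown here.

/* Hedged illustration of the little-endian combine used in the slow path:
 * res1 is the value loaded at addr & ~3, res2 the value at the next
 * aligned word, and the low bits of addr select how many bytes of res1
 * sit below the requested address. */
#include <stdint.h>

static uint32_t unaligned_le32_combine(uint32_t res1, uint32_t res2,
                                       uint32_t addr)
{
    unsigned shift = (addr & 3) * 8;     /* bits of res1 below addr */

    if (shift == 0) {
        return res1;                     /* access was actually aligned */
    }
    return (res1 >> shift) | (res2 << (32 - shift));
}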
#if DATA_SIZE > 1
WORD_TYPE helper_be_ld_name(CPUArchState *env, target_ulong addr,
TCGMemOpIdx oi, uintptr_t retaddr)
{
uintptr_t mmu_idx = get_mmuidx(oi);
uintptr_t index = tlb_index(env, mmu_idx, addr);
CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
target_ulong tlb_addr = entry->ADDR_READ;
unsigned a_bits = get_alignment_bits(get_memop(oi));
uintptr_t haddr;
DATA_TYPE res;
if (addr & ((1 << a_bits) - 1)) {
cpu_unaligned_access(ENV_GET_CPU(env), addr, READ_ACCESS_TYPE,
mmu_idx, retaddr);
}
/* If the TLB entry is for a different page, reload and try again. */
if (!tlb_hit(tlb_addr, addr)) {
if (!VICTIM_TLB_HIT(ADDR_READ, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, READ_ACCESS_TYPE,
mmu_idx, retaddr);
index = tlb_index(env, mmu_idx, addr);
entry = tlb_entry(env, mmu_idx, addr);
}
tlb_addr = entry->ADDR_READ;
}
/* Handle an IO access. */
if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
if ((addr & (DATA_SIZE - 1)) != 0) {
goto do_unaligned_access;
}
/* ??? Note that the io helpers always read data in the target
byte ordering. We should push the LE/BE request down into io. */
res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
tlb_addr & TLB_RECHECK,
READ_ACCESS_TYPE);
res = TGT_BE(res);
return res;
}
/* Handle slow unaligned access (it spans two pages or IO). */
if (DATA_SIZE > 1
&& unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
>= TARGET_PAGE_SIZE)) {
target_ulong addr1, addr2;
DATA_TYPE res1, res2;
unsigned shift;
do_unaligned_access:
addr1 = addr & ~(DATA_SIZE - 1);
addr2 = addr1 + DATA_SIZE;
res1 = helper_be_ld_name(env, addr1, oi, retaddr);
res2 = helper_be_ld_name(env, addr2, oi, retaddr);
shift = (addr & (DATA_SIZE - 1)) * 8;
/* Big-endian combine. */
res = (res1 << shift) | (res2 >> ((DATA_SIZE * 8) - shift));
return res;
}
haddr = addr + entry->addend;
res = glue(glue(ld, LSUFFIX), _be_p)((uint8_t *)haddr);
return res;
}
#endif /* DATA_SIZE > 1 */
#ifndef SOFTMMU_CODE_ACCESS
/* Provide signed versions of the load routines as well. We can of course
avoid this for 64-bit data, or for 32-bit data on 32-bit host. */
#if DATA_SIZE * 8 < TCG_TARGET_REG_BITS
WORD_TYPE helper_le_lds_name(CPUArchState *env, target_ulong addr,
TCGMemOpIdx oi, uintptr_t retaddr)
{
return (SDATA_TYPE)helper_le_ld_name(env, addr, oi, retaddr);
}
# if DATA_SIZE > 1
WORD_TYPE helper_be_lds_name(CPUArchState *env, target_ulong addr,
TCGMemOpIdx oi, uintptr_t retaddr)
{
return (SDATA_TYPE)helper_be_ld_name(env, addr, oi, retaddr);
}
# endif
#endif
static inline void glue(io_write, SUFFIX)(CPUArchState *env,
size_t mmu_idx, size_t index,
DATA_TYPE val,
target_ulong addr,
uintptr_t retaddr,
bool recheck)
{
CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
return io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr,
recheck, DATA_SIZE);
}
void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
TCGMemOpIdx oi, uintptr_t retaddr)
{
uintptr_t mmu_idx = get_mmuidx(oi);
uintptr_t index = tlb_index(env, mmu_idx, addr);
CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
target_ulong tlb_addr = tlb_addr_write(entry);
unsigned a_bits = get_alignment_bits(get_memop(oi));
uintptr_t haddr;
if (addr & ((1 << a_bits) - 1)) {
cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
mmu_idx, retaddr);
}
/* If the TLB entry is for a different page, reload and try again. */
if (!tlb_hit(tlb_addr, addr)) {
if (!VICTIM_TLB_HIT(addr_write, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, MMU_DATA_STORE,
mmu_idx, retaddr);
index = tlb_index(env, mmu_idx, addr);
entry = tlb_entry(env, mmu_idx, addr);
}
tlb_addr = tlb_addr_write(entry) & ~TLB_INVALID_MASK;
}
/* Handle an IO access. */
if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
if ((addr & (DATA_SIZE - 1)) != 0) {
goto do_unaligned_access;
}
/* ??? Note that the io helpers always write data in the target
byte ordering. We should push the LE/BE request down into io. */
val = TGT_LE(val);
glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr,
retaddr, tlb_addr & TLB_RECHECK);
return;
}
/* Handle slow unaligned access (it spans two pages or IO). */
if (DATA_SIZE > 1
&& unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
>= TARGET_PAGE_SIZE)) {
int i;
target_ulong page2;
CPUTLBEntry *entry2;
do_unaligned_access:
/* Ensure the second page is in the TLB. Note that the first page
is already guaranteed to be filled, and that the second page
cannot evict the first. */
page2 = (addr + DATA_SIZE) & TARGET_PAGE_MASK;
entry2 = tlb_entry(env, mmu_idx, page2);
if (!tlb_hit_page(tlb_addr_write(entry2), page2)
&& !VICTIM_TLB_HIT(addr_write, page2)) {
tlb_fill(ENV_GET_CPU(env), page2, DATA_SIZE, MMU_DATA_STORE,
mmu_idx, retaddr);
}
/* XXX: not efficient, but simple. */
/* This loop must go in the forward direction to avoid issues
with self-modifying code in Windows 64-bit. */
for (i = 0; i < DATA_SIZE; ++i) {
/* Little-endian extract. */
uint8_t val8 = val >> (i * 8);
glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
oi, retaddr);
}
return;
}
haddr = addr + entry->addend;
#if DATA_SIZE == 1
glue(glue(st, SUFFIX), _p)((uint8_t *)haddr, val);
#else
glue(glue(st, SUFFIX), _le_p)((uint8_t *)haddr, val);
#endif
}
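The unaligned store slow path above splits the value into single-byte stores, extracting byte i of a little-endian value as val >> (i * 8) and walking forward through memory. A minimal standalone sketch of that extraction (a hypothetical buffer-based variant, not the MMU-aware helper above):

/* Hedged sketch: write a 32-bit value byte by byte in little-endian order,
 * in the forward direction, mirroring the extraction loop above. */
#include <stdint.h>
#include <stddef.h>

static void store_le32_bytewise(uint8_t *dst, uint32_t val)
{
    for (size_t i = 0; i < 4; i++) {
        dst[i] = (uint8_t)(val >> (i * 8));   /* little-endian extract */
    }
}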
#if DATA_SIZE > 1
void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
TCGMemOpIdx oi, uintptr_t retaddr)
{
uintptr_t mmu_idx = get_mmuidx(oi);
uintptr_t index = tlb_index(env, mmu_idx, addr);
CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
target_ulong tlb_addr = tlb_addr_write(entry);
unsigned a_bits = get_alignment_bits(get_memop(oi));
uintptr_t haddr;
if (addr & ((1 << a_bits) - 1)) {
cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
mmu_idx, retaddr);
}
/* If the TLB entry is for a different page, reload and try again. */
if (!tlb_hit(tlb_addr, addr)) {
if (!VICTIM_TLB_HIT(addr_write, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, MMU_DATA_STORE,
mmu_idx, retaddr);
index = tlb_index(env, mmu_idx, addr);
entry = tlb_entry(env, mmu_idx, addr);
}
tlb_addr = tlb_addr_write(entry) & ~TLB_INVALID_MASK;
}
/* Handle an IO access. */
if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
if ((addr & (DATA_SIZE - 1)) != 0) {
goto do_unaligned_access;
}
/* ??? Note that the io helpers always write data in the target
byte ordering. We should push the LE/BE request down into io. */
val = TGT_BE(val);
glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr, retaddr,
tlb_addr & TLB_RECHECK);
return;
}
/* Handle slow unaligned access (it spans two pages or IO). */
if (DATA_SIZE > 1
&& unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
>= TARGET_PAGE_SIZE)) {
int i;
target_ulong page2;
CPUTLBEntry *entry2;
do_unaligned_access:
/* Ensure the second page is in the TLB. Note that the first page
is already guaranteed to be filled, and that the second page
cannot evict the first. */
page2 = (addr + DATA_SIZE) & TARGET_PAGE_MASK;
entry2 = tlb_entry(env, mmu_idx, page2);
if (!tlb_hit_page(tlb_addr_write(entry2), page2)
&& !VICTIM_TLB_HIT(addr_write, page2)) {
tlb_fill(ENV_GET_CPU(env), page2, DATA_SIZE, MMU_DATA_STORE,
mmu_idx, retaddr);
}
/* XXX: not efficient, but simple */
/* This loop must go in the forward direction to avoid issues
with self-modifying code. */
for (i = 0; i < DATA_SIZE; ++i) {
/* Big-endian extract. */
uint8_t val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8));
glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
oi, retaddr);
}
return;
}
haddr = addr + entry->addend;
glue(glue(st, SUFFIX), _be_p)((uint8_t *)haddr, val);
}
#endif /* DATA_SIZE > 1 */
#endif /* !defined(SOFTMMU_CODE_ACCESS) */
#undef READ_ACCESS_TYPE
#undef DATA_TYPE
#undef SUFFIX
#undef LSUFFIX
#undef DATA_SIZE
#undef ADDR_READ
#undef WORD_TYPE
#undef SDATA_TYPE
#undef USUFFIX
#undef SSUFFIX
#undef BSWAP
#undef helper_le_ld_name
#undef helper_be_ld_name
#undef helper_le_lds_name
#undef helper_be_lds_name
#undef helper_le_st_name
#undef helper_be_st_name
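The trailing #undefs allow this template to be included repeatedly with different parameters. As a hedged sketch, the including translation unit instantiates one set of helpers per access size roughly as follows (the exact defines used by cputlb.c may differ; compare the atomic_template.h inclusions visible later in this diff):

/* Hypothetical instantiation pattern: each inclusion generates the load and
 * store helpers for one DATA_SIZE; defining SOFTMMU_CODE_ACCESS instead
 * selects the instruction-fetch variants. */
#define MMUSUFFIX _mmu

#define DATA_SIZE 1
#include "softmmu_template.h"

#define DATA_SIZE 2
#include "softmmu_template.h"

#define DATA_SIZE 4
#include "softmmu_template.h"

#define DATA_SIZE 8
#include "softmmu_template.h"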


@@ -1,92 +0,0 @@
/*
* QEMU System Emulator, accelerator interfaces
*
* Copyright (c) 2003-2008 Fabrice Bellard
* Copyright (c) 2014 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "sysemu/accel.h"
#include "sysemu/sysemu.h"
#include "qom/object.h"
#include "qemu-common.h"
#include "qom/cpu.h"
#include "sysemu/cpus.h"
#include "qemu/main-loop.h"
unsigned long tcg_tb_size;
#ifndef CONFIG_USER_ONLY
/* mask must never be zero, except for A20 change call */
static void tcg_handle_interrupt(CPUState *cpu, int mask)
{
int old_mask;
g_assert(qemu_mutex_iothread_locked());
old_mask = cpu->interrupt_request;
cpu->interrupt_request |= mask;
/*
* If called from iothread context, wake the target cpu in
* case it's halted.
*/
if (!qemu_cpu_is_self(cpu)) {
qemu_cpu_kick(cpu);
} else {
atomic_set(&cpu->icount_decr.u16.high, -1);
if (use_icount &&
!cpu->can_do_io
&& (mask & ~old_mask) != 0) {
cpu_abort(cpu, "Raised interrupt while not in I/O function");
}
}
}
#endif
static int tcg_init(MachineState *ms)
{
tcg_exec_init(tcg_tb_size * 1024 * 1024);
cpu_interrupt_handler = tcg_handle_interrupt;
return 0;
}
static void tcg_accel_class_init(ObjectClass *oc, void *data)
{
AccelClass *ac = ACCEL_CLASS(oc);
ac->name = "tcg";
ac->init_machine = tcg_init;
ac->allowed = &tcg_allowed;
}
#define TYPE_TCG_ACCEL ACCEL_CLASS_NAME("tcg")
static const TypeInfo tcg_accel_type = {
.name = TYPE_TCG_ACCEL,
.parent = TYPE_ACCEL,
.class_init = tcg_accel_class_init,
};
static void register_accel_types(void)
{
type_register_static(&tcg_accel_type);
}
type_init(register_accel_types);

File diff suppressed because it is too large


@@ -1,169 +0,0 @@
/*
* Tiny Code Generator for QEMU
*
* Copyright (c) 2008 Fabrice Bellard
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu/host-utils.h"
#include "cpu.h"
#include "exec/helper-proto.h"
#include "exec/cpu_ldst.h"
#include "exec/exec-all.h"
#include "exec/tb-lookup.h"
#include "disas/disas.h"
#include "exec/log.h"
/* 32-bit helpers */
int32_t HELPER(div_i32)(int32_t arg1, int32_t arg2)
{
return arg1 / arg2;
}
int32_t HELPER(rem_i32)(int32_t arg1, int32_t arg2)
{
return arg1 % arg2;
}
uint32_t HELPER(divu_i32)(uint32_t arg1, uint32_t arg2)
{
return arg1 / arg2;
}
uint32_t HELPER(remu_i32)(uint32_t arg1, uint32_t arg2)
{
return arg1 % arg2;
}
/* 64-bit helpers */
uint64_t HELPER(shl_i64)(uint64_t arg1, uint64_t arg2)
{
return arg1 << arg2;
}
uint64_t HELPER(shr_i64)(uint64_t arg1, uint64_t arg2)
{
return arg1 >> arg2;
}
int64_t HELPER(sar_i64)(int64_t arg1, int64_t arg2)
{
return arg1 >> arg2;
}
int64_t HELPER(div_i64)(int64_t arg1, int64_t arg2)
{
return arg1 / arg2;
}
int64_t HELPER(rem_i64)(int64_t arg1, int64_t arg2)
{
return arg1 % arg2;
}
uint64_t HELPER(divu_i64)(uint64_t arg1, uint64_t arg2)
{
return arg1 / arg2;
}
uint64_t HELPER(remu_i64)(uint64_t arg1, uint64_t arg2)
{
return arg1 % arg2;
}
uint64_t HELPER(muluh_i64)(uint64_t arg1, uint64_t arg2)
{
uint64_t l, h;
mulu64(&l, &h, arg1, arg2);
return h;
}
int64_t HELPER(mulsh_i64)(int64_t arg1, int64_t arg2)
{
uint64_t l, h;
muls64(&l, &h, arg1, arg2);
return h;
}
uint32_t HELPER(clz_i32)(uint32_t arg, uint32_t zero_val)
{
return arg ? clz32(arg) : zero_val;
}
uint32_t HELPER(ctz_i32)(uint32_t arg, uint32_t zero_val)
{
return arg ? ctz32(arg) : zero_val;
}
uint64_t HELPER(clz_i64)(uint64_t arg, uint64_t zero_val)
{
return arg ? clz64(arg) : zero_val;
}
uint64_t HELPER(ctz_i64)(uint64_t arg, uint64_t zero_val)
{
return arg ? ctz64(arg) : zero_val;
}
uint32_t HELPER(clrsb_i32)(uint32_t arg)
{
return clrsb32(arg);
}
uint64_t HELPER(clrsb_i64)(uint64_t arg)
{
return clrsb64(arg);
}
uint32_t HELPER(ctpop_i32)(uint32_t arg)
{
return ctpop32(arg);
}
uint64_t HELPER(ctpop_i64)(uint64_t arg)
{
return ctpop64(arg);
}
void *HELPER(lookup_tb_ptr)(CPUArchState *env)
{
CPUState *cpu = ENV_GET_CPU(env);
TranslationBlock *tb;
target_ulong cs_base, pc;
uint32_t flags;
tb = tb_lookup__cpu_state(cpu, &pc, &cs_base, &flags, curr_cflags());
if (tb == NULL) {
return tcg_ctx->code_gen_epilogue;
}
qemu_log_mask_and_addr(CPU_LOG_EXEC, pc,
"Chain %d: %p ["
TARGET_FMT_lx "/" TARGET_FMT_lx "/%#x] %s\n",
cpu->cpu_index, tb->tc.ptr, cs_base, pc, flags,
lookup_symbol(pc));
return tb->tc.ptr;
}
void HELPER(exit_atomic)(CPUArchState *env)
{
cpu_loop_exit_atomic(ENV_GET_CPU(env), GETPC());
}
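The clz/ctz helpers above take an explicit zero_val argument so that the result for an all-zero input is well defined instead of depending on host behaviour. A hedged caller-side sketch (the wrapper name is illustrative, and calling a helper directly like this is only for illustration):

/* Hypothetical use of the zero_val convention: request the operand width
 * back for a zero input, e.g. clz32_total(0) == 32, clz32_total(1) == 31. */
#include <stdint.h>

uint32_t helper_clz_i32(uint32_t arg, uint32_t zero_val);   /* defined above */

static uint32_t clz32_total(uint32_t x)
{
    return helper_clz_i32(x, 32);
}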


@@ -1,285 +0,0 @@
DEF_HELPER_FLAGS_2(div_i32, TCG_CALL_NO_RWG_SE, s32, s32, s32)
DEF_HELPER_FLAGS_2(rem_i32, TCG_CALL_NO_RWG_SE, s32, s32, s32)
DEF_HELPER_FLAGS_2(divu_i32, TCG_CALL_NO_RWG_SE, i32, i32, i32)
DEF_HELPER_FLAGS_2(remu_i32, TCG_CALL_NO_RWG_SE, i32, i32, i32)
DEF_HELPER_FLAGS_2(div_i64, TCG_CALL_NO_RWG_SE, s64, s64, s64)
DEF_HELPER_FLAGS_2(rem_i64, TCG_CALL_NO_RWG_SE, s64, s64, s64)
DEF_HELPER_FLAGS_2(divu_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(remu_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(shl_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(shr_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(sar_i64, TCG_CALL_NO_RWG_SE, s64, s64, s64)
DEF_HELPER_FLAGS_2(mulsh_i64, TCG_CALL_NO_RWG_SE, s64, s64, s64)
DEF_HELPER_FLAGS_2(muluh_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(clz_i32, TCG_CALL_NO_RWG_SE, i32, i32, i32)
DEF_HELPER_FLAGS_2(ctz_i32, TCG_CALL_NO_RWG_SE, i32, i32, i32)
DEF_HELPER_FLAGS_2(clz_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(ctz_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_1(clrsb_i32, TCG_CALL_NO_RWG_SE, i32, i32)
DEF_HELPER_FLAGS_1(clrsb_i64, TCG_CALL_NO_RWG_SE, i64, i64)
DEF_HELPER_FLAGS_1(ctpop_i32, TCG_CALL_NO_RWG_SE, i32, i32)
DEF_HELPER_FLAGS_1(ctpop_i64, TCG_CALL_NO_RWG_SE, i64, i64)
DEF_HELPER_FLAGS_1(lookup_tb_ptr, TCG_CALL_NO_WG_SE, ptr, env)
DEF_HELPER_FLAGS_1(exit_atomic, TCG_CALL_NO_WG, noreturn, env)
#ifdef CONFIG_SOFTMMU
DEF_HELPER_FLAGS_5(atomic_cmpxchgb, TCG_CALL_NO_WG,
i32, env, tl, i32, i32, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgw_be, TCG_CALL_NO_WG,
i32, env, tl, i32, i32, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgw_le, TCG_CALL_NO_WG,
i32, env, tl, i32, i32, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgl_be, TCG_CALL_NO_WG,
i32, env, tl, i32, i32, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgl_le, TCG_CALL_NO_WG,
i32, env, tl, i32, i32, i32)
#ifdef CONFIG_ATOMIC64
DEF_HELPER_FLAGS_5(atomic_cmpxchgq_be, TCG_CALL_NO_WG,
i64, env, tl, i64, i64, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgq_le, TCG_CALL_NO_WG,
i64, env, tl, i64, i64, i32)
#endif
#ifdef CONFIG_ATOMIC64
#define GEN_ATOMIC_HELPERS(NAME) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), b), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), w_le), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), w_be), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), l_le), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), l_be), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), q_le), \
TCG_CALL_NO_WG, i64, env, tl, i64, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), q_be), \
TCG_CALL_NO_WG, i64, env, tl, i64, i32)
#else
#define GEN_ATOMIC_HELPERS(NAME) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), b), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), w_le), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), w_be), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), l_le), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), l_be), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32)
#endif /* CONFIG_ATOMIC64 */
#else
DEF_HELPER_FLAGS_4(atomic_cmpxchgb, TCG_CALL_NO_WG, i32, env, tl, i32, i32)
DEF_HELPER_FLAGS_4(atomic_cmpxchgw_be, TCG_CALL_NO_WG, i32, env, tl, i32, i32)
DEF_HELPER_FLAGS_4(atomic_cmpxchgw_le, TCG_CALL_NO_WG, i32, env, tl, i32, i32)
DEF_HELPER_FLAGS_4(atomic_cmpxchgl_be, TCG_CALL_NO_WG, i32, env, tl, i32, i32)
DEF_HELPER_FLAGS_4(atomic_cmpxchgl_le, TCG_CALL_NO_WG, i32, env, tl, i32, i32)
#ifdef CONFIG_ATOMIC64
DEF_HELPER_FLAGS_4(atomic_cmpxchgq_be, TCG_CALL_NO_WG, i64, env, tl, i64, i64)
DEF_HELPER_FLAGS_4(atomic_cmpxchgq_le, TCG_CALL_NO_WG, i64, env, tl, i64, i64)
#endif
#ifdef CONFIG_ATOMIC64
#define GEN_ATOMIC_HELPERS(NAME) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), b), \
TCG_CALL_NO_WG, i32, env, tl, i32) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), w_le), \
TCG_CALL_NO_WG, i32, env, tl, i32) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), w_be), \
TCG_CALL_NO_WG, i32, env, tl, i32) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), l_le), \
TCG_CALL_NO_WG, i32, env, tl, i32) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), l_be), \
TCG_CALL_NO_WG, i32, env, tl, i32) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), q_le), \
TCG_CALL_NO_WG, i64, env, tl, i64) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), q_be), \
TCG_CALL_NO_WG, i64, env, tl, i64)
#else
#define GEN_ATOMIC_HELPERS(NAME) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), b), \
TCG_CALL_NO_WG, i32, env, tl, i32) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), w_le), \
TCG_CALL_NO_WG, i32, env, tl, i32) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), w_be), \
TCG_CALL_NO_WG, i32, env, tl, i32) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), l_le), \
TCG_CALL_NO_WG, i32, env, tl, i32) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), l_be), \
TCG_CALL_NO_WG, i32, env, tl, i32)
#endif /* CONFIG_ATOMIC64 */
#endif /* CONFIG_SOFTMMU */
GEN_ATOMIC_HELPERS(fetch_add)
GEN_ATOMIC_HELPERS(fetch_and)
GEN_ATOMIC_HELPERS(fetch_or)
GEN_ATOMIC_HELPERS(fetch_xor)
GEN_ATOMIC_HELPERS(fetch_smin)
GEN_ATOMIC_HELPERS(fetch_umin)
GEN_ATOMIC_HELPERS(fetch_smax)
GEN_ATOMIC_HELPERS(fetch_umax)
GEN_ATOMIC_HELPERS(add_fetch)
GEN_ATOMIC_HELPERS(and_fetch)
GEN_ATOMIC_HELPERS(or_fetch)
GEN_ATOMIC_HELPERS(xor_fetch)
GEN_ATOMIC_HELPERS(smin_fetch)
GEN_ATOMIC_HELPERS(umin_fetch)
GEN_ATOMIC_HELPERS(smax_fetch)
GEN_ATOMIC_HELPERS(umax_fetch)
GEN_ATOMIC_HELPERS(xchg)
#undef GEN_ATOMIC_HELPERS
DEF_HELPER_FLAGS_3(gvec_mov, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_dup8, TCG_CALL_NO_RWG, void, ptr, i32, i32)
DEF_HELPER_FLAGS_3(gvec_dup16, TCG_CALL_NO_RWG, void, ptr, i32, i32)
DEF_HELPER_FLAGS_3(gvec_dup32, TCG_CALL_NO_RWG, void, ptr, i32, i32)
DEF_HELPER_FLAGS_3(gvec_dup64, TCG_CALL_NO_RWG, void, ptr, i32, i64)
DEF_HELPER_FLAGS_4(gvec_add8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_add16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_add32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_add64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_adds8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_adds16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_adds32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_adds64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_sub8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sub16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sub32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sub64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_subs8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_subs16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_subs32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_subs64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_mul8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_mul16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_mul32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_mul64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_muls8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_muls16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_muls32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_muls64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_ssadd8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ssadd16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ssadd32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ssadd64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sssub8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sssub16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sssub32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sssub64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_usadd8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_usadd16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_usadd32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_usadd64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ussub8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ussub16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ussub32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ussub64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_smin8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_smin16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_smin32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_smin64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_smax8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_smax16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_smax32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_smax64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_umin8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_umin16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_umin32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_umin64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_umax8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_umax16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_umax32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_umax64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_neg8, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_neg16, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_neg32, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_neg64, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_not, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_and, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_or, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_xor, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_andc, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_orc, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_nand, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_nor, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_eqv, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ands, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_xors, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_ors, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_3(gvec_shl8i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shl16i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shl32i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shl64i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shr8i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shr16i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shr32i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shr64i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_sar8i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_sar16i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_sar32i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_sar64i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_eq8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_eq16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_eq32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_eq64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ne8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ne16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ne32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ne64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_lt8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_lt16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_lt32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_lt64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_le8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_le16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_le32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_le64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ltu8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ltu16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ltu32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ltu64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_leu8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_leu16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_leu32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_leu64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
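The GEN_ATOMIC_HELPERS lists above use glue() token pasting to fan a single operation name out across sizes and endiannesses. Written out by hand, one expansion in the softmmu + CONFIG_ATOMIC64 configuration amounts to roughly the following (a hedged sketch, not generated output):

/* Approximately what GEN_ATOMIC_HELPERS(fetch_add) declares in the
 * softmmu + CONFIG_ATOMIC64 case shown above. */
DEF_HELPER_FLAGS_4(atomic_fetch_addb,    TCG_CALL_NO_WG, i32, env, tl, i32, i32)
DEF_HELPER_FLAGS_4(atomic_fetch_addw_le, TCG_CALL_NO_WG, i32, env, tl, i32, i32)
DEF_HELPER_FLAGS_4(atomic_fetch_addw_be, TCG_CALL_NO_WG, i32, env, tl, i32, i32)
DEF_HELPER_FLAGS_4(atomic_fetch_addl_le, TCG_CALL_NO_WG, i32, env, tl, i32, i32)
DEF_HELPER_FLAGS_4(atomic_fetch_addl_be, TCG_CALL_NO_WG, i32, env, tl, i32, i32)
DEF_HELPER_FLAGS_4(atomic_fetch_addq_le, TCG_CALL_NO_WG, i64, env, tl, i64, i32)
DEF_HELPER_FLAGS_4(atomic_fetch_addq_be, TCG_CALL_NO_WG, i64, env, tl, i64, i32)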


@@ -1,10 +0,0 @@
# See docs/devel/tracing.txt for syntax documentation.
# TCG related tracing (mostly disabled by default)
# cpu-exec.c
disable exec_tb(void *tb, uintptr_t pc) "tb:%p pc=0x%"PRIxPTR
disable exec_tb_nocache(void *tb, uintptr_t pc) "tb:%p pc=0x%"PRIxPTR
disable exec_tb_exit(void *last_tb, unsigned int flags) "tb:%p flags=0x%x"
# translate-all.c
translate_block(void *tb, uintptr_t pc, uint8_t *tb_code) "tb:%p, pc:0x%"PRIxPTR", tb_code:%p"

File diff suppressed because it is too large


@@ -1,39 +0,0 @@
/*
* Translated block handling
*
* Copyright (c) 2003 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#ifndef TRANSLATE_ALL_H
#define TRANSLATE_ALL_H
#include "exec/exec-all.h"
/* translate-all.c */
struct page_collection *page_collection_lock(tb_page_addr_t start,
tb_page_addr_t end);
void page_collection_unlock(struct page_collection *set);
void tb_invalidate_phys_page_fast(struct page_collection *pages,
tb_page_addr_t start, int len);
void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
int is_cpu_write_access);
void tb_check_watchpoint(CPUState *cpu);
#ifdef CONFIG_USER_ONLY
int page_unprotect(target_ulong address, uintptr_t pc);
#endif
#endif /* TRANSLATE_ALL_H */
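As a hedged sketch, the declarations above imply the following locking pattern for invalidating translations after a guest write; the wrapper below is illustrative and not part of this header (the actual call sites live in the memory write paths):

/* Hypothetical caller: take the page_collection lock covering the written
 * range, invalidate any TBs derived from it, then release the lock. */
static void invalidate_written_range(tb_page_addr_t start, int len)
{
    struct page_collection *pages = page_collection_lock(start, start + len);

    tb_invalidate_phys_page_fast(pages, start, len);
    page_collection_unlock(pages);
}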


@@ -1,141 +0,0 @@
/*
* Generic intermediate code generation.
*
* Copyright (C) 2016-2017 Lluís Vilanova <vilanova@ac.upc.edu>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "qemu/error-report.h"
#include "cpu.h"
#include "tcg/tcg.h"
#include "tcg/tcg-op.h"
#include "exec/exec-all.h"
#include "exec/gen-icount.h"
#include "exec/log.h"
#include "exec/translator.h"
/* Pairs with tcg_clear_temp_count.
To be called by #TranslatorOps.{translate_insn,tb_stop} if
(1) the target is sufficiently clean to support reporting,
(2) as and when all temporaries are known to be consumed.
For most targets, (2) is at the end of translate_insn. */
void translator_loop_temp_check(DisasContextBase *db)
{
if (tcg_check_temp_count()) {
qemu_log("warning: TCG temporary leaks before "
TARGET_FMT_lx "\n", db->pc_next);
}
}
void translator_loop(const TranslatorOps *ops, DisasContextBase *db,
CPUState *cpu, TranslationBlock *tb)
{
int bp_insn = 0;
/* Initialize DisasContext */
db->tb = tb;
db->pc_first = tb->pc;
db->pc_next = db->pc_first;
db->is_jmp = DISAS_NEXT;
db->num_insns = 0;
db->singlestep_enabled = cpu->singlestep_enabled;
/* Instruction counting */
db->max_insns = tb_cflags(db->tb) & CF_COUNT_MASK;
if (db->max_insns == 0) {
db->max_insns = CF_COUNT_MASK;
}
if (db->max_insns > TCG_MAX_INSNS) {
db->max_insns = TCG_MAX_INSNS;
}
if (db->singlestep_enabled || singlestep) {
db->max_insns = 1;
}
ops->init_disas_context(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
/* Reset the temp count so that we can identify leaks */
tcg_clear_temp_count();
/* Start translating. */
gen_tb_start(db->tb);
ops->tb_start(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
while (true) {
db->num_insns++;
ops->insn_start(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
/* Pass breakpoint hits to target for further processing */
if (!db->singlestep_enabled
&& unlikely(!QTAILQ_EMPTY(&cpu->breakpoints))) {
CPUBreakpoint *bp;
QTAILQ_FOREACH(bp, &cpu->breakpoints, entry) {
if (bp->pc == db->pc_next) {
if (ops->breakpoint_check(db, cpu, bp)) {
bp_insn = 1;
break;
}
}
}
/* The breakpoint_check hook may use DISAS_TOO_MANY to indicate
that only one more instruction is to be executed. Otherwise
it should use DISAS_NORETURN when generating an exception,
but may use a DISAS_TARGET_* value for Something Else. */
if (db->is_jmp > DISAS_TOO_MANY) {
break;
}
}
/* Disassemble one instruction. The translate_insn hook should
update db->pc_next and db->is_jmp to indicate what should be
done next -- either exiting this loop or locating the start of
the next instruction. */
if (db->num_insns == db->max_insns
&& (tb_cflags(db->tb) & CF_LAST_IO)) {
/* Accept I/O on the last instruction. */
gen_io_start();
ops->translate_insn(db, cpu);
gen_io_end();
} else {
ops->translate_insn(db, cpu);
}
/* Stop translation if translate_insn so indicated. */
if (db->is_jmp != DISAS_NEXT) {
break;
}
/* Stop translation if the output buffer is full,
or we have executed all of the allowed instructions. */
if (tcg_op_buf_full() || db->num_insns >= db->max_insns) {
db->is_jmp = DISAS_TOO_MANY;
break;
}
}
/* Emit code to exit the TB, as indicated by db->is_jmp. */
ops->tb_stop(db, cpu);
gen_tb_end(db->tb, db->num_insns - bp_insn);
/* The disas_log hook may use these values rather than recompute. */
db->tb->size = db->pc_next - db->pc_first;
db->tb->icount = db->num_insns;
#ifdef DEBUG_DISAS
if (qemu_loglevel_mask(CPU_LOG_TB_IN_ASM)
&& qemu_log_in_addr_range(db->pc_first)) {
qemu_log_lock();
qemu_log("----------------\n");
ops->disas_log(db, cpu);
qemu_log("\n");
qemu_log_unlock();
}
#endif
}


@@ -1,37 +0,0 @@
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "qom/cpu.h"
#include "sysemu/replay.h"
#include "sysemu/sysemu.h"
bool enable_cpu_pm = false;
void cpu_resume(CPUState *cpu)
{
}
void qemu_init_vcpu(CPUState *cpu)
{
}
/* User mode emulation does not support record/replay yet. */
bool replay_exception(void)
{
return true;
}
bool replay_has_exception(void)
{
return false;
}
bool replay_interrupt(void)
{
return true;
}
bool replay_has_interrupt(void)
{
return false;
}


@@ -1,745 +0,0 @@
/*
* User emulator execution
*
* Copyright (c) 2003-2005 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "cpu.h"
#include "disas/disas.h"
#include "exec/exec-all.h"
#include "tcg.h"
#include "qemu/bitops.h"
#include "exec/cpu_ldst.h"
#include "translate-all.h"
#include "exec/helper-proto.h"
#include "qemu/atomic128.h"
#undef EAX
#undef ECX
#undef EDX
#undef EBX
#undef ESP
#undef EBP
#undef ESI
#undef EDI
#undef EIP
#ifdef __linux__
#include <sys/ucontext.h>
#endif
__thread uintptr_t helper_retaddr;
//#define DEBUG_SIGNAL
/* exit the current TB from a signal handler. The host registers are
restored in a state compatible with the CPU emulator
*/
static void cpu_exit_tb_from_sighandler(CPUState *cpu, sigset_t *old_set)
{
/* XXX: use siglongjmp ? */
sigprocmask(SIG_SETMASK, old_set, NULL);
cpu_loop_exit_noexc(cpu);
}
/* 'pc' is the host PC at which the exception was raised. 'address' is
the effective address of the memory exception. 'is_write' is 1 if a
write caused the exception and otherwise 0. 'old_set' is the
signal set which should be restored */
static inline int handle_cpu_signal(uintptr_t pc, siginfo_t *info,
int is_write, sigset_t *old_set)
{
CPUState *cpu = current_cpu;
CPUClass *cc;
int ret;
unsigned long address = (unsigned long)info->si_addr;
/* We must handle PC addresses from two different sources:
* a call return address and a signal frame address.
*
* Within cpu_restore_state_from_tb we assume the former and adjust
* the address by -GETPC_ADJ so that the address is within the call
* insn so that addr does not accidentally match the beginning of the
* next guest insn.
*
* However, when the PC comes from the signal frame, it points to
* the actual faulting host insn and not a call insn. Subtracting
* GETPC_ADJ in that case may accidentally match the previous guest insn.
*
* So for the latter case, adjust forward to compensate for what
* will be done later by cpu_restore_state_from_tb.
*/
if (helper_retaddr) {
pc = helper_retaddr;
} else {
pc += GETPC_ADJ;
}
/* For synchronous signals we expect to be coming from the vCPU
* thread (so current_cpu should be valid) and either from running
* code or during translation which can fault as we cross pages.
*
* If neither is true then something has gone wrong and we should
* abort rather than try and restart the vCPU execution.
*/
if (!cpu || !cpu->running) {
printf("qemu:%s received signal outside vCPU context @ pc=0x%"
PRIxPTR "\n", __func__, pc);
abort();
}
#if defined(DEBUG_SIGNAL)
printf("qemu: SIGSEGV pc=0x%08lx address=%08lx w=%d oldset=0x%08lx\n",
pc, address, is_write, *(unsigned long *)old_set);
#endif
/* XXX: locking issue */
/* Note that it is important that we don't call page_unprotect() unless
* this is really a "write to nonwriteable page" fault, because
* page_unprotect() assumes that if it is called for an access to
* a page that's writeable this means we had two threads racing and
* another thread got there first and already made the page writeable;
* so we will retry the access. If we were to call page_unprotect()
* for some other kind of fault that should really be passed to the
* guest, we'd end up in an infinite loop of retrying the faulting
* access.
*/
if (is_write && info->si_signo == SIGSEGV && info->si_code == SEGV_ACCERR &&
h2g_valid(address)) {
switch (page_unprotect(h2g(address), pc)) {
case 0:
/* Fault not caused by a page marked unwritable to protect
* cached translations, must be the guest binary's problem.
*/
break;
case 1:
/* Fault caused by protection of cached translation; TBs
* invalidated, so resume execution. Retain helper_retaddr
* for a possible second fault.
*/
return 1;
case 2:
/* Fault caused by protection of cached translation, and the
* currently executing TB was modified and must be exited
* immediately. Clear helper_retaddr for next execution.
*/
helper_retaddr = 0;
cpu_exit_tb_from_sighandler(cpu, old_set);
/* NORETURN */
default:
g_assert_not_reached();
}
}
/* Convert forcefully to guest address space, invalid addresses
are still valid segv ones */
address = h2g_nocheck(address);
cc = CPU_GET_CLASS(cpu);
/* see if it is an MMU fault */
g_assert(cc->handle_mmu_fault);
ret = cc->handle_mmu_fault(cpu, address, 0, is_write, MMU_USER_IDX);
if (ret == 0) {
/* The MMU fault was handled without causing real CPU fault.
* Retain helper_retaddr for a possible second fault.
*/
return 1;
}
/* All other paths lead to cpu_exit; clear helper_retaddr
* for next execution.
*/
helper_retaddr = 0;
if (ret < 0) {
return 0; /* not an MMU fault */
}
/* Now we have a real cpu fault. */
cpu_restore_state(cpu, pc, true);
sigprocmask(SIG_SETMASK, old_set, NULL);
cpu_loop_exit(cpu);
/* never comes here */
return 1;
}
#if defined(__i386__)
#if defined(__NetBSD__)
#include <ucontext.h>
#define EIP_sig(context) ((context)->uc_mcontext.__gregs[_REG_EIP])
#define TRAP_sig(context) ((context)->uc_mcontext.__gregs[_REG_TRAPNO])
#define ERROR_sig(context) ((context)->uc_mcontext.__gregs[_REG_ERR])
#define MASK_sig(context) ((context)->uc_sigmask)
#elif defined(__FreeBSD__) || defined(__DragonFly__)
#include <ucontext.h>
#define EIP_sig(context) (*((unsigned long *)&(context)->uc_mcontext.mc_eip))
#define TRAP_sig(context) ((context)->uc_mcontext.mc_trapno)
#define ERROR_sig(context) ((context)->uc_mcontext.mc_err)
#define MASK_sig(context) ((context)->uc_sigmask)
#elif defined(__OpenBSD__)
#define EIP_sig(context) ((context)->sc_eip)
#define TRAP_sig(context) ((context)->sc_trapno)
#define ERROR_sig(context) ((context)->sc_err)
#define MASK_sig(context) ((context)->sc_mask)
#else
#define EIP_sig(context) ((context)->uc_mcontext.gregs[REG_EIP])
#define TRAP_sig(context) ((context)->uc_mcontext.gregs[REG_TRAPNO])
#define ERROR_sig(context) ((context)->uc_mcontext.gregs[REG_ERR])
#define MASK_sig(context) ((context)->uc_sigmask)
#endif
int cpu_signal_handler(int host_signum, void *pinfo,
void *puc)
{
siginfo_t *info = pinfo;
#if defined(__NetBSD__) || defined(__FreeBSD__) || defined(__DragonFly__)
ucontext_t *uc = puc;
#elif defined(__OpenBSD__)
struct sigcontext *uc = puc;
#else
ucontext_t *uc = puc;
#endif
unsigned long pc;
int trapno;
#ifndef REG_EIP
/* for glibc 2.1 */
#define REG_EIP EIP
#define REG_ERR ERR
#define REG_TRAPNO TRAPNO
#endif
pc = EIP_sig(uc);
trapno = TRAP_sig(uc);
return handle_cpu_signal(pc, info,
trapno == 0xe ? (ERROR_sig(uc) >> 1) & 1 : 0,
&MASK_sig(uc));
}
#elif defined(__x86_64__)
#ifdef __NetBSD__
#define PC_sig(context) _UC_MACHINE_PC(context)
#define TRAP_sig(context) ((context)->uc_mcontext.__gregs[_REG_TRAPNO])
#define ERROR_sig(context) ((context)->uc_mcontext.__gregs[_REG_ERR])
#define MASK_sig(context) ((context)->uc_sigmask)
#elif defined(__OpenBSD__)
#define PC_sig(context) ((context)->sc_rip)
#define TRAP_sig(context) ((context)->sc_trapno)
#define ERROR_sig(context) ((context)->sc_err)
#define MASK_sig(context) ((context)->sc_mask)
#elif defined(__FreeBSD__) || defined(__DragonFly__)
#include <ucontext.h>
#define PC_sig(context) (*((unsigned long *)&(context)->uc_mcontext.mc_rip))
#define TRAP_sig(context) ((context)->uc_mcontext.mc_trapno)
#define ERROR_sig(context) ((context)->uc_mcontext.mc_err)
#define MASK_sig(context) ((context)->uc_sigmask)
#else
#define PC_sig(context) ((context)->uc_mcontext.gregs[REG_RIP])
#define TRAP_sig(context) ((context)->uc_mcontext.gregs[REG_TRAPNO])
#define ERROR_sig(context) ((context)->uc_mcontext.gregs[REG_ERR])
#define MASK_sig(context) ((context)->uc_sigmask)
#endif
int cpu_signal_handler(int host_signum, void *pinfo,
void *puc)
{
siginfo_t *info = pinfo;
unsigned long pc;
#if defined(__NetBSD__) || defined(__FreeBSD__) || defined(__DragonFly__)
ucontext_t *uc = puc;
#elif defined(__OpenBSD__)
struct sigcontext *uc = puc;
#else
ucontext_t *uc = puc;
#endif
pc = PC_sig(uc);
return handle_cpu_signal(pc, info,
TRAP_sig(uc) == 0xe ? (ERROR_sig(uc) >> 1) & 1 : 0,
&MASK_sig(uc));
}
#elif defined(_ARCH_PPC)
/***********************************************************************
* signal context platform-specific definitions
* From Wine
*/
#ifdef linux
/* All Registers access - only for local access */
#define REG_sig(reg_name, context) \
((context)->uc_mcontext.regs->reg_name)
/* Gpr Registers access */
#define GPR_sig(reg_num, context) REG_sig(gpr[reg_num], context)
/* Program counter */
#define IAR_sig(context) REG_sig(nip, context)
/* Machine State Register (Supervisor) */
#define MSR_sig(context) REG_sig(msr, context)
/* Count register */
#define CTR_sig(context) REG_sig(ctr, context)
/* User's integer exception register */
#define XER_sig(context) REG_sig(xer, context)
/* Link register */
#define LR_sig(context) REG_sig(link, context)
/* Condition register */
#define CR_sig(context) REG_sig(ccr, context)
/* Float Registers access */
#define FLOAT_sig(reg_num, context) \
(((double *)((char *)((context)->uc_mcontext.regs + 48 * 4)))[reg_num])
#define FPSCR_sig(context) \
(*(int *)((char *)((context)->uc_mcontext.regs + (48 + 32 * 2) * 4)))
/* Exception Registers access */
#define DAR_sig(context) REG_sig(dar, context)
#define DSISR_sig(context) REG_sig(dsisr, context)
#define TRAP_sig(context) REG_sig(trap, context)
#endif /* linux */
#if defined(__FreeBSD__) || defined(__FreeBSD_kernel__)
#include <ucontext.h>
#define IAR_sig(context) ((context)->uc_mcontext.mc_srr0)
#define MSR_sig(context) ((context)->uc_mcontext.mc_srr1)
#define CTR_sig(context) ((context)->uc_mcontext.mc_ctr)
#define XER_sig(context) ((context)->uc_mcontext.mc_xer)
#define LR_sig(context) ((context)->uc_mcontext.mc_lr)
#define CR_sig(context) ((context)->uc_mcontext.mc_cr)
/* Exception Registers access */
#define DAR_sig(context) ((context)->uc_mcontext.mc_dar)
#define DSISR_sig(context) ((context)->uc_mcontext.mc_dsisr)
#define TRAP_sig(context) ((context)->uc_mcontext.mc_exc)
#endif /* __FreeBSD__|| __FreeBSD_kernel__ */
int cpu_signal_handler(int host_signum, void *pinfo,
void *puc)
{
siginfo_t *info = pinfo;
#if defined(__FreeBSD__) || defined(__FreeBSD_kernel__)
ucontext_t *uc = puc;
#else
ucontext_t *uc = puc;
#endif
unsigned long pc;
int is_write;
pc = IAR_sig(uc);
is_write = 0;
#if 0
/* ppc 4xx case */
if (DSISR_sig(uc) & 0x00800000) {
is_write = 1;
}
#else
if (TRAP_sig(uc) != 0x400 && (DSISR_sig(uc) & 0x02000000)) {
is_write = 1;
}
#endif
return handle_cpu_signal(pc, info, is_write, &uc->uc_sigmask);
}
#elif defined(__alpha__)
int cpu_signal_handler(int host_signum, void *pinfo,
void *puc)
{
siginfo_t *info = pinfo;
ucontext_t *uc = puc;
uint32_t *pc = uc->uc_mcontext.sc_pc;
uint32_t insn = *pc;
int is_write = 0;
/* XXX: need kernel patch to get write flag faster */
switch (insn >> 26) {
case 0x0d: /* stw */
case 0x0e: /* stb */
case 0x0f: /* stq_u */
case 0x24: /* stf */
case 0x25: /* stg */
case 0x26: /* sts */
case 0x27: /* stt */
case 0x2c: /* stl */
case 0x2d: /* stq */
case 0x2e: /* stl_c */
case 0x2f: /* stq_c */
is_write = 1;
}
return handle_cpu_signal(pc, info, is_write, &uc->uc_sigmask);
}
#elif defined(__sparc__)
int cpu_signal_handler(int host_signum, void *pinfo,
void *puc)
{
siginfo_t *info = pinfo;
int is_write;
uint32_t insn;
#if !defined(__arch64__) || defined(CONFIG_SOLARIS)
uint32_t *regs = (uint32_t *)(info + 1);
void *sigmask = (regs + 20);
/* XXX: is there a standard glibc define ? */
unsigned long pc = regs[1];
#else
#ifdef __linux__
struct sigcontext *sc = puc;
unsigned long pc = sc->sigc_regs.tpc;
void *sigmask = (void *)sc->sigc_mask;
#elif defined(__OpenBSD__)
struct sigcontext *uc = puc;
unsigned long pc = uc->sc_pc;
void *sigmask = (void *)(long)uc->sc_mask;
#elif defined(__NetBSD__)
ucontext_t *uc = puc;
unsigned long pc = _UC_MACHINE_PC(uc);
void *sigmask = (void *)&uc->uc_sigmask;
#endif
#endif
/* XXX: need kernel patch to get write flag faster */
is_write = 0;
insn = *(uint32_t *)pc;
if ((insn >> 30) == 3) {
switch ((insn >> 19) & 0x3f) {
case 0x05: /* stb */
case 0x15: /* stba */
case 0x06: /* sth */
case 0x16: /* stha */
case 0x04: /* st */
case 0x14: /* sta */
case 0x07: /* std */
case 0x17: /* stda */
case 0x0e: /* stx */
case 0x1e: /* stxa */
case 0x24: /* stf */
case 0x34: /* stfa */
case 0x27: /* stdf */
case 0x37: /* stdfa */
case 0x26: /* stqf */
case 0x36: /* stqfa */
case 0x25: /* stfsr */
case 0x3c: /* casa */
case 0x3e: /* casxa */
is_write = 1;
break;
}
}
return handle_cpu_signal(pc, info, is_write, sigmask);
}
#elif defined(__arm__)
#if defined(__NetBSD__)
#include <ucontext.h>
#endif
int cpu_signal_handler(int host_signum, void *pinfo,
void *puc)
{
siginfo_t *info = pinfo;
#if defined(__NetBSD__)
ucontext_t *uc = puc;
#else
ucontext_t *uc = puc;
#endif
unsigned long pc;
int is_write;
#if defined(__NetBSD__)
pc = uc->uc_mcontext.__gregs[_REG_R15];
#elif defined(__GLIBC__) && (__GLIBC__ < 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ <= 3))
pc = uc->uc_mcontext.gregs[R15];
#else
pc = uc->uc_mcontext.arm_pc;
#endif
/* error_code is the FSR value, in which bit 11 is WnR (assuming a v6 or
* later processor; on v5 we will always report this as a read).
*/
is_write = extract32(uc->uc_mcontext.error_code, 11, 1);
return handle_cpu_signal(pc, info, is_write, &uc->uc_sigmask);
}
#elif defined(__aarch64__)
#ifndef ESR_MAGIC
/* Pre-3.16 kernel headers don't have these, so provide fallback definitions */
#define ESR_MAGIC 0x45535201
struct esr_context {
struct _aarch64_ctx head;
uint64_t esr;
};
#endif
static inline struct _aarch64_ctx *first_ctx(ucontext_t *uc)
{
return (struct _aarch64_ctx *)&uc->uc_mcontext.__reserved;
}
static inline struct _aarch64_ctx *next_ctx(struct _aarch64_ctx *hdr)
{
return (struct _aarch64_ctx *)((char *)hdr + hdr->size);
}
int cpu_signal_handler(int host_signum, void *pinfo, void *puc)
{
siginfo_t *info = pinfo;
ucontext_t *uc = puc;
uintptr_t pc = uc->uc_mcontext.pc;
bool is_write;
struct _aarch64_ctx *hdr;
struct esr_context const *esrctx = NULL;
/* Find the esr_context, which has the WnR bit in it */
for (hdr = first_ctx(uc); hdr->magic; hdr = next_ctx(hdr)) {
if (hdr->magic == ESR_MAGIC) {
esrctx = (struct esr_context const *)hdr;
break;
}
}
if (esrctx) {
/* For data aborts ESR.EC is 0b10010x: then bit 6 is the WnR bit */
uint64_t esr = esrctx->esr;
is_write = extract32(esr, 27, 5) == 0x12 && extract32(esr, 6, 1) == 1;
} else {
/*
* Fall back to parsing instructions; will only be needed
* for really ancient (pre-3.16) kernels.
*/
uint32_t insn = *(uint32_t *)pc;
is_write = ((insn & 0xbfff0000) == 0x0c000000 /* C3.3.1 */
|| (insn & 0xbfe00000) == 0x0c800000 /* C3.3.2 */
|| (insn & 0xbfdf0000) == 0x0d000000 /* C3.3.3 */
|| (insn & 0xbfc00000) == 0x0d800000 /* C3.3.4 */
|| (insn & 0x3f400000) == 0x08000000 /* C3.3.6 */
|| (insn & 0x3bc00000) == 0x39000000 /* C3.3.13 */
|| (insn & 0x3fc00000) == 0x3d800000 /* ... 128bit */
/* Ignore bits 10, 11 & 21, controlling indexing. */
|| (insn & 0x3bc00000) == 0x38000000 /* C3.3.8-12 */
|| (insn & 0x3fe00000) == 0x3c800000 /* ... 128bit */
/* Ignore bits 23 & 24, controlling indexing. */
|| (insn & 0x3a400000) == 0x28000000); /* C3.3.7,14-16 */
}
return handle_cpu_signal(pc, info, is_write, &uc->uc_sigmask);
}
#elif defined(__s390__)
int cpu_signal_handler(int host_signum, void *pinfo,
void *puc)
{
siginfo_t *info = pinfo;
ucontext_t *uc = puc;
unsigned long pc;
uint16_t *pinsn;
int is_write = 0;
pc = uc->uc_mcontext.psw.addr;
/* ??? On linux, the non-rt signal handler has 4 (!) arguments instead
of the normal 2 arguments. The 3rd argument contains the "int_code"
from the hardware which does in fact contain the is_write value.
The rt signal handler, as far as I can tell, does not give this value
at all. Not that we could get to it from here even if it were. */
/* ??? This is not even close to complete, since it ignores all
of the read-modify-write instructions. */
pinsn = (uint16_t *)pc;
switch (pinsn[0] >> 8) {
case 0x50: /* ST */
case 0x42: /* STC */
case 0x40: /* STH */
is_write = 1;
break;
case 0xc4: /* RIL format insns */
switch (pinsn[0] & 0xf) {
case 0xf: /* STRL */
case 0xb: /* STGRL */
case 0x7: /* STHRL */
is_write = 1;
}
break;
case 0xe3: /* RXY format insns */
switch (pinsn[2] & 0xff) {
case 0x50: /* STY */
case 0x24: /* STG */
case 0x72: /* STCY */
case 0x70: /* STHY */
case 0x8e: /* STPQ */
case 0x3f: /* STRVH */
case 0x3e: /* STRV */
case 0x2f: /* STRVG */
is_write = 1;
}
break;
}
return handle_cpu_signal(pc, info, is_write, &uc->uc_sigmask);
}
#elif defined(__mips__)
int cpu_signal_handler(int host_signum, void *pinfo,
void *puc)
{
siginfo_t *info = pinfo;
ucontext_t *uc = puc;
greg_t pc = uc->uc_mcontext.pc;
int is_write;
/* XXX: compute is_write */
is_write = 0;
return handle_cpu_signal(pc, info, is_write, &uc->uc_sigmask);
}
#elif defined(__riscv)
int cpu_signal_handler(int host_signum, void *pinfo,
void *puc)
{
siginfo_t *info = pinfo;
ucontext_t *uc = puc;
greg_t pc = uc->uc_mcontext.__gregs[REG_PC];
uint32_t insn = *(uint32_t *)pc;
int is_write = 0;
/* Detect store by reading the instruction at the program
counter. Note: we currently only generate 32-bit
instructions, so we only detect 32-bit stores */
switch (((insn >> 0) & 0b11)) {
case 3:
switch (((insn >> 2) & 0b11111)) {
case 8:
switch (((insn >> 12) & 0b111)) {
case 0: /* sb */
case 1: /* sh */
case 2: /* sw */
case 3: /* sd */
case 4: /* sq */
is_write = 1;
break;
default:
break;
}
break;
case 9:
switch (((insn >> 12) & 0b111)) {
case 2: /* fsw */
case 3: /* fsd */
case 4: /* fsq */
is_write = 1;
break;
default:
break;
}
break;
default:
break;
}
}
/* Check for compressed instructions */
switch (((insn >> 13) & 0b111)) {
case 7:
switch (insn & 0b11) {
case 0: /*c.sd */
case 2: /* c.sdsp */
is_write = 1;
break;
default:
break;
}
break;
case 6:
switch (insn & 0b11) {
case 0: /* c.sw */
case 3: /* c.swsp */
is_write = 1;
break;
default:
break;
}
break;
default:
break;
}
return handle_cpu_signal(pc, info, is_write, &uc->uc_sigmask);
}
#else
#error host CPU specific signal handler needed
#endif
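Several of the handlers above (the Arm and AArch64 ones, for instance) rely on extract32() to pull a bit field out of a fault-status word. As a hedged approximation of the bitops helper, extract32(value, start, length) behaves like:

/* Rough sketch of extract32() from qemu/bitops.h (assertions omitted):
 * return the 'length'-bit field of 'value' starting at bit 'start'.
 * e.g. extract32(esr, 27, 5) isolates ESR.EC, bits [31:27]. */
static inline uint32_t extract32_sketch(uint32_t value, int start, int length)
{
    return (value >> start) & (~0U >> (32 - length));
}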
/* The softmmu versions of these helpers are in cputlb.c. */
/* Do not allow unaligned operations to proceed. Return the host address. */
static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
int size, uintptr_t retaddr)
{
/* Enforce qemu required alignment. */
if (unlikely(addr & (size - 1))) {
cpu_loop_exit_atomic(ENV_GET_CPU(env), retaddr);
}
helper_retaddr = retaddr;
return g2h(addr);
}
/* Macro to call the above, with local variables from the use context. */
#define ATOMIC_MMU_DECLS do {} while (0)
#define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, DATA_SIZE, GETPC())
#define ATOMIC_MMU_CLEANUP do { helper_retaddr = 0; } while (0)
#define ATOMIC_NAME(X) HELPER(glue(glue(atomic_ ## X, SUFFIX), END))
#define EXTRA_ARGS
#define DATA_SIZE 1
#include "atomic_template.h"
#define DATA_SIZE 2
#include "atomic_template.h"
#define DATA_SIZE 4
#include "atomic_template.h"
#ifdef CONFIG_ATOMIC64
#define DATA_SIZE 8
#include "atomic_template.h"
#endif
/* The following is only callable from other helpers, and matches up
with the softmmu version. */
#if HAVE_ATOMIC128 || HAVE_CMPXCHG128
#undef EXTRA_ARGS
#undef ATOMIC_NAME
#undef ATOMIC_MMU_LOOKUP
#define EXTRA_ARGS , TCGMemOpIdx oi, uintptr_t retaddr
#define ATOMIC_NAME(X) \
HELPER(glue(glue(glue(atomic_ ## X, SUFFIX), END), _mmu))
#define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, DATA_SIZE, retaddr)
#define DATA_SIZE 16
#include "atomic_template.h"
#endif

aio-posix.c Normal file

@@ -0,0 +1,500 @@
/*
* QEMU aio implementation
*
* Copyright IBM, Corp. 2008
*
* Authors:
* Anthony Liguori <aliguori@us.ibm.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "block/block.h"
#include "qemu/queue.h"
#include "qemu/sockets.h"
#ifdef CONFIG_EPOLL_CREATE1
#include <sys/epoll.h>
#endif
struct AioHandler
{
GPollFD pfd;
IOHandler *io_read;
IOHandler *io_write;
int deleted;
void *opaque;
bool is_external;
QLIST_ENTRY(AioHandler) node;
};
#ifdef CONFIG_EPOLL_CREATE1
/* The fd number threshold to switch to epoll */
#define EPOLL_ENABLE_THRESHOLD 64
static void aio_epoll_disable(AioContext *ctx)
{
ctx->epoll_available = false;
if (!ctx->epoll_enabled) {
return;
}
ctx->epoll_enabled = false;
close(ctx->epollfd);
}
static inline int epoll_events_from_pfd(int pfd_events)
{
return (pfd_events & G_IO_IN ? EPOLLIN : 0) |
(pfd_events & G_IO_OUT ? EPOLLOUT : 0) |
(pfd_events & G_IO_HUP ? EPOLLHUP : 0) |
(pfd_events & G_IO_ERR ? EPOLLERR : 0);
}
static bool aio_epoll_try_enable(AioContext *ctx)
{
AioHandler *node;
struct epoll_event event;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
int r;
if (node->deleted || !node->pfd.events) {
continue;
}
event.events = epoll_events_from_pfd(node->pfd.events);
event.data.ptr = node;
r = epoll_ctl(ctx->epollfd, EPOLL_CTL_ADD, node->pfd.fd, &event);
if (r) {
return false;
}
}
ctx->epoll_enabled = true;
return true;
}
static void aio_epoll_update(AioContext *ctx, AioHandler *node, bool is_new)
{
struct epoll_event event;
int r;
if (!ctx->epoll_enabled) {
return;
}
if (!node->pfd.events) {
r = epoll_ctl(ctx->epollfd, EPOLL_CTL_DEL, node->pfd.fd, &event);
if (r) {
aio_epoll_disable(ctx);
}
} else {
event.data.ptr = node;
event.events = epoll_events_from_pfd(node->pfd.events);
if (is_new) {
r = epoll_ctl(ctx->epollfd, EPOLL_CTL_ADD, node->pfd.fd, &event);
if (r) {
aio_epoll_disable(ctx);
}
} else {
r = epoll_ctl(ctx->epollfd, EPOLL_CTL_MOD, node->pfd.fd, &event);
if (r) {
aio_epoll_disable(ctx);
}
}
}
}
static int aio_epoll(AioContext *ctx, GPollFD *pfds,
unsigned npfd, int64_t timeout)
{
AioHandler *node;
int i, ret = 0;
struct epoll_event events[128];
assert(npfd == 1);
assert(pfds[0].fd == ctx->epollfd);
if (timeout > 0) {
ret = qemu_poll_ns(pfds, npfd, timeout);
}
if (timeout <= 0 || ret > 0) {
ret = epoll_wait(ctx->epollfd, events,
sizeof(events) / sizeof(events[0]),
timeout);
if (ret <= 0) {
goto out;
}
for (i = 0; i < ret; i++) {
int ev = events[i].events;
node = events[i].data.ptr;
node->pfd.revents = (ev & EPOLLIN ? G_IO_IN : 0) |
(ev & EPOLLOUT ? G_IO_OUT : 0) |
(ev & EPOLLHUP ? G_IO_HUP : 0) |
(ev & EPOLLERR ? G_IO_ERR : 0);
}
}
out:
return ret;
}
static bool aio_epoll_enabled(AioContext *ctx)
{
/* Fall back to ppoll when external clients are disabled. */
return !aio_external_disabled(ctx) && ctx->epoll_enabled;
}
static bool aio_epoll_check_poll(AioContext *ctx, GPollFD *pfds,
unsigned npfd, int64_t timeout)
{
if (!ctx->epoll_available) {
return false;
}
if (aio_epoll_enabled(ctx)) {
return true;
}
if (npfd >= EPOLL_ENABLE_THRESHOLD) {
if (aio_epoll_try_enable(ctx)) {
return true;
} else {
aio_epoll_disable(ctx);
}
}
return false;
}
#else
static void aio_epoll_update(AioContext *ctx, AioHandler *node, bool is_new)
{
}
static int aio_epoll(AioContext *ctx, GPollFD *pfds,
unsigned npfd, int64_t timeout)
{
assert(false);
}
static bool aio_epoll_enabled(AioContext *ctx)
{
return false;
}
static bool aio_epoll_check_poll(AioContext *ctx, GPollFD *pfds,
unsigned npfd, int64_t timeout)
{
return false;
}
#endif
static AioHandler *find_aio_handler(AioContext *ctx, int fd)
{
AioHandler *node;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->pfd.fd == fd)
if (!node->deleted)
return node;
}
return NULL;
}
void aio_set_fd_handler(AioContext *ctx,
int fd,
bool is_external,
IOHandler *io_read,
IOHandler *io_write,
void *opaque)
{
AioHandler *node;
bool is_new = false;
bool deleted = false;
node = find_aio_handler(ctx, fd);
/* Are we deleting the fd handler? */
if (!io_read && !io_write) {
if (node) {
g_source_remove_poll(&ctx->source, &node->pfd);
/* If the lock is held, just mark the node as deleted */
if (ctx->walking_handlers) {
node->deleted = 1;
node->pfd.revents = 0;
} else {
/* Otherwise, delete it for real. We can't just mark it as
* deleted because deleted nodes are only cleaned up after
* releasing the walking_handlers lock.
*/
QLIST_REMOVE(node, node);
deleted = true;
}
}
} else {
if (node == NULL) {
/* Alloc and insert if it's not already there */
node = g_new0(AioHandler, 1);
node->pfd.fd = fd;
QLIST_INSERT_HEAD(&ctx->aio_handlers, node, node);
g_source_add_poll(&ctx->source, &node->pfd);
is_new = true;
}
/* Update handler with latest information */
node->io_read = io_read;
node->io_write = io_write;
node->opaque = opaque;
node->is_external = is_external;
node->pfd.events = (io_read ? G_IO_IN | G_IO_HUP | G_IO_ERR : 0);
node->pfd.events |= (io_write ? G_IO_OUT | G_IO_ERR : 0);
}
aio_epoll_update(ctx, node, is_new);
aio_notify(ctx);
if (deleted) {
g_free(node);
}
}
void aio_set_event_notifier(AioContext *ctx,
EventNotifier *notifier,
bool is_external,
EventNotifierHandler *io_read)
{
aio_set_fd_handler(ctx, event_notifier_get_fd(notifier),
is_external, (IOHandler *)io_read, NULL, notifier);
}
bool aio_prepare(AioContext *ctx)
{
return false;
}
bool aio_pending(AioContext *ctx)
{
AioHandler *node;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
int revents;
revents = node->pfd.revents & node->pfd.events;
if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read &&
aio_node_check(ctx, node->is_external)) {
return true;
}
if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write &&
aio_node_check(ctx, node->is_external)) {
return true;
}
}
return false;
}
bool aio_dispatch(AioContext *ctx)
{
AioHandler *node;
bool progress = false;
/*
* If there are callbacks left that have been queued, we need to call them.
* Do not call select in this case, because it is possible that the caller
* does not need a complete flush (as is the case for aio_poll loops).
*/
if (aio_bh_poll(ctx)) {
progress = true;
}
/*
* We have to walk very carefully in case aio_set_fd_handler is
* called while we're walking.
*/
node = QLIST_FIRST(&ctx->aio_handlers);
while (node) {
AioHandler *tmp;
int revents;
ctx->walking_handlers++;
revents = node->pfd.revents & node->pfd.events;
node->pfd.revents = 0;
if (!node->deleted &&
(revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
aio_node_check(ctx, node->is_external) &&
node->io_read) {
node->io_read(node->opaque);
/* aio_notify() does not count as progress */
if (node->opaque != &ctx->notifier) {
progress = true;
}
}
if (!node->deleted &&
(revents & (G_IO_OUT | G_IO_ERR)) &&
aio_node_check(ctx, node->is_external) &&
node->io_write) {
node->io_write(node->opaque);
progress = true;
}
tmp = node;
node = QLIST_NEXT(node, node);
ctx->walking_handlers--;
if (!ctx->walking_handlers && tmp->deleted) {
QLIST_REMOVE(tmp, node);
g_free(tmp);
}
}
/* Run our timers */
progress |= timerlistgroup_run_timers(&ctx->tlg);
return progress;
}
/* These thread-local variables are used only in a small part of aio_poll
* around the call to the poll() system call. In particular they are not
* used while aio_poll is performing callbacks, which makes it much easier
* to think about reentrancy!
*
* Stack-allocated arrays would be perfect but they have size limitations;
* heap allocation is expensive enough that we want to reuse arrays across
* calls to aio_poll(). And because poll() has to be called without holding
* any lock, the arrays cannot be stored in AioContext. Thread-local data
* has none of the disadvantages of these three options.
*/
static __thread GPollFD *pollfds;
static __thread AioHandler **nodes;
static __thread unsigned npfd, nalloc;
static __thread Notifier pollfds_cleanup_notifier;
static void pollfds_cleanup(Notifier *n, void *unused)
{
g_assert(npfd == 0);
g_free(pollfds);
g_free(nodes);
nalloc = 0;
}
static void add_pollfd(AioHandler *node)
{
if (npfd == nalloc) {
if (nalloc == 0) {
pollfds_cleanup_notifier.notify = pollfds_cleanup;
qemu_thread_atexit_add(&pollfds_cleanup_notifier);
nalloc = 8;
} else {
g_assert(nalloc <= INT_MAX);
nalloc *= 2;
}
pollfds = g_renew(GPollFD, pollfds, nalloc);
nodes = g_renew(AioHandler *, nodes, nalloc);
}
nodes[npfd] = node;
pollfds[npfd] = (GPollFD) {
.fd = node->pfd.fd,
.events = node->pfd.events,
};
npfd++;
}
bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
int i, ret;
bool progress;
int64_t timeout;
aio_context_acquire(ctx);
progress = false;
/* aio_notify can avoid the expensive event_notifier_set if
* everything (file descriptors, bottom halves, timers) will
* be re-evaluated before the next blocking poll(). This is
* already true when aio_poll is called with blocking == false;
* if blocking == true, it is only true after poll() returns,
* so disable the optimization now.
*/
if (blocking) {
atomic_add(&ctx->notify_me, 2);
}
ctx->walking_handlers++;
assert(npfd == 0);
/* fill pollfds */
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (!node->deleted && node->pfd.events
&& !aio_epoll_enabled(ctx)
&& aio_node_check(ctx, node->is_external)) {
add_pollfd(node);
}
}
timeout = blocking ? aio_compute_timeout(ctx) : 0;
/* wait until next event */
if (timeout) {
aio_context_release(ctx);
}
if (aio_epoll_check_poll(ctx, pollfds, npfd, timeout)) {
AioHandler epoll_handler;
epoll_handler.pfd.fd = ctx->epollfd;
epoll_handler.pfd.events = G_IO_IN | G_IO_OUT | G_IO_HUP | G_IO_ERR;
npfd = 0;
add_pollfd(&epoll_handler);
ret = aio_epoll(ctx, pollfds, npfd, timeout);
} else {
ret = qemu_poll_ns(pollfds, npfd, timeout);
}
if (blocking) {
atomic_sub(&ctx->notify_me, 2);
}
if (timeout) {
aio_context_acquire(ctx);
}
aio_notify_accept(ctx);
/* if we have any readable fds, dispatch event */
if (ret > 0) {
for (i = 0; i < npfd; i++) {
nodes[i]->pfd.revents = pollfds[i].revents;
}
}
npfd = 0;
ctx->walking_handlers--;
/* Run dispatch even if there were no readable fds to run timers */
if (aio_dispatch(ctx)) {
progress = true;
}
aio_context_release(ctx);
return progress;
}
void aio_context_setup(AioContext *ctx)
{
#ifdef CONFIG_EPOLL_CREATE1
assert(!ctx->epollfd);
ctx->epollfd = epoll_create1(EPOLL_CLOEXEC);
if (ctx->epollfd == -1) {
fprintf(stderr, "Failed to create epoll instance: %s", strerror(errno));
ctx->epoll_available = false;
} else {
ctx->epoll_available = true;
}
#endif
}
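As a rough usage sketch (not part of aio-posix.c), a caller could register a read handler for a pipe and drive it with aio_poll(); example_fd_loop and read_ready are hypothetical names and error handling is elided:

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "block/aio.h"
#include <unistd.h>

static void read_ready(void *opaque)
{
    int fd = *(int *)opaque;
    char buf[64];
    (void)read(fd, buf, sizeof(buf));        /* drain the descriptor */
}

void example_fd_loop(void)
{
    int fds[2];
    AioContext *ctx = aio_context_new(&error_abort);

    if (pipe(fds) == 0) {
        /* io_read only; passing NULL for both handlers unregisters the fd */
        aio_set_fd_handler(ctx, fds[0], false, read_ready, NULL, &fds[0]);
        (void)write(fds[1], "x", 1);
        aio_poll(ctx, true);                 /* dispatches read_ready() */
        aio_set_fd_handler(ctx, fds[0], false, NULL, NULL, NULL);
        close(fds[0]);
        close(fds[1]);
    }
    aio_context_unref(ctx);
}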

aio-win32.c

@@ -0,0 +1,376 @@
/*
* QEMU aio implementation
*
* Copyright IBM Corp., 2008
* Copyright Red Hat Inc., 2012
*
* Authors:
* Anthony Liguori <aliguori@us.ibm.com>
* Paolo Bonzini <pbonzini@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "block/block.h"
#include "qemu/queue.h"
#include "qemu/sockets.h"
struct AioHandler {
EventNotifier *e;
IOHandler *io_read;
IOHandler *io_write;
EventNotifierHandler *io_notify;
GPollFD pfd;
int deleted;
void *opaque;
bool is_external;
QLIST_ENTRY(AioHandler) node;
};
void aio_set_fd_handler(AioContext *ctx,
int fd,
bool is_external,
IOHandler *io_read,
IOHandler *io_write,
void *opaque)
{
/* fd is a SOCKET in our case */
AioHandler *node;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->pfd.fd == fd && !node->deleted) {
break;
}
}
/* Are we deleting the fd handler? */
if (!io_read && !io_write) {
if (node) {
/* If the lock is held, just mark the node as deleted */
if (ctx->walking_handlers) {
node->deleted = 1;
node->pfd.revents = 0;
} else {
/* Otherwise, delete it for real. We can't just mark it as
* deleted because deleted nodes are only cleaned up after
* releasing the walking_handlers lock.
*/
QLIST_REMOVE(node, node);
g_free(node);
}
}
} else {
HANDLE event;
if (node == NULL) {
/* Alloc and insert if it's not already there */
node = g_new0(AioHandler, 1);
node->pfd.fd = fd;
QLIST_INSERT_HEAD(&ctx->aio_handlers, node, node);
}
node->pfd.events = 0;
if (node->io_read) {
node->pfd.events |= G_IO_IN;
}
if (node->io_write) {
node->pfd.events |= G_IO_OUT;
}
node->e = &ctx->notifier;
/* Update handler with latest information */
node->opaque = opaque;
node->io_read = io_read;
node->io_write = io_write;
node->is_external = is_external;
event = event_notifier_get_handle(&ctx->notifier);
WSAEventSelect(node->pfd.fd, event,
FD_READ | FD_ACCEPT | FD_CLOSE |
FD_CONNECT | FD_WRITE | FD_OOB);
}
aio_notify(ctx);
}
void aio_set_event_notifier(AioContext *ctx,
EventNotifier *e,
bool is_external,
EventNotifierHandler *io_notify)
{
AioHandler *node;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->e == e && !node->deleted) {
break;
}
}
/* Are we deleting the fd handler? */
if (!io_notify) {
if (node) {
g_source_remove_poll(&ctx->source, &node->pfd);
/* If the lock is held, just mark the node as deleted */
if (ctx->walking_handlers) {
node->deleted = 1;
node->pfd.revents = 0;
} else {
/* Otherwise, delete it for real. We can't just mark it as
* deleted because deleted nodes are only cleaned up after
* releasing the walking_handlers lock.
*/
QLIST_REMOVE(node, node);
g_free(node);
}
}
} else {
if (node == NULL) {
/* Alloc and insert if it's not already there */
node = g_new0(AioHandler, 1);
node->e = e;
node->pfd.fd = (uintptr_t)event_notifier_get_handle(e);
node->pfd.events = G_IO_IN;
node->is_external = is_external;
QLIST_INSERT_HEAD(&ctx->aio_handlers, node, node);
g_source_add_poll(&ctx->source, &node->pfd);
}
/* Update handler with latest information */
node->io_notify = io_notify;
}
aio_notify(ctx);
}
bool aio_prepare(AioContext *ctx)
{
static struct timeval tv0;
AioHandler *node;
bool have_select_revents = false;
fd_set rfds, wfds;
/* fill fd sets */
FD_ZERO(&rfds);
FD_ZERO(&wfds);
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->io_read) {
FD_SET ((SOCKET)node->pfd.fd, &rfds);
}
if (node->io_write) {
FD_SET ((SOCKET)node->pfd.fd, &wfds);
}
}
if (select(0, &rfds, &wfds, NULL, &tv0) > 0) {
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
node->pfd.revents = 0;
if (FD_ISSET(node->pfd.fd, &rfds)) {
node->pfd.revents |= G_IO_IN;
have_select_revents = true;
}
if (FD_ISSET(node->pfd.fd, &wfds)) {
node->pfd.revents |= G_IO_OUT;
have_select_revents = true;
}
}
}
return have_select_revents;
}
bool aio_pending(AioContext *ctx)
{
AioHandler *node;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->pfd.revents && node->io_notify) {
return true;
}
if ((node->pfd.revents & G_IO_IN) && node->io_read) {
return true;
}
if ((node->pfd.revents & G_IO_OUT) && node->io_write) {
return true;
}
}
return false;
}
static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
{
AioHandler *node;
bool progress = false;
/*
* We have to walk very carefully in case aio_set_fd_handler is
* called while we're walking.
*/
node = QLIST_FIRST(&ctx->aio_handlers);
while (node) {
AioHandler *tmp;
int revents = node->pfd.revents;
ctx->walking_handlers++;
if (!node->deleted &&
(revents || event_notifier_get_handle(node->e) == event) &&
node->io_notify) {
node->pfd.revents = 0;
node->io_notify(node->e);
/* aio_notify() does not count as progress */
if (node->e != &ctx->notifier) {
progress = true;
}
}
if (!node->deleted &&
(node->io_read || node->io_write)) {
node->pfd.revents = 0;
if ((revents & G_IO_IN) && node->io_read) {
node->io_read(node->opaque);
progress = true;
}
if ((revents & G_IO_OUT) && node->io_write) {
node->io_write(node->opaque);
progress = true;
}
/* if the next select() will return an event, we have progressed */
if (event == event_notifier_get_handle(&ctx->notifier)) {
WSANETWORKEVENTS ev;
WSAEnumNetworkEvents(node->pfd.fd, event, &ev);
if (ev.lNetworkEvents) {
progress = true;
}
}
}
tmp = node;
node = QLIST_NEXT(node, node);
ctx->walking_handlers--;
if (!ctx->walking_handlers && tmp->deleted) {
QLIST_REMOVE(tmp, node);
g_free(tmp);
}
}
return progress;
}
bool aio_dispatch(AioContext *ctx)
{
bool progress;
progress = aio_bh_poll(ctx);
progress |= aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
progress |= timerlistgroup_run_timers(&ctx->tlg);
return progress;
}
bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
bool progress, have_select_revents, first;
int count;
int timeout;
aio_context_acquire(ctx);
progress = false;
/* aio_notify can avoid the expensive event_notifier_set if
* everything (file descriptors, bottom halves, timers) will
* be re-evaluated before the next blocking poll(). This is
* already true when aio_poll is called with blocking == false;
* if blocking == true, it is only true after poll() returns,
* so disable the optimization now.
*/
if (blocking) {
atomic_add(&ctx->notify_me, 2);
}
have_select_revents = aio_prepare(ctx);
ctx->walking_handlers++;
/* fill fd sets */
count = 0;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (!node->deleted && node->io_notify
&& aio_node_check(ctx, node->is_external)) {
events[count++] = event_notifier_get_handle(node->e);
}
}
ctx->walking_handlers--;
first = true;
/* ctx->notifier is always registered. */
assert(count > 0);
/* Multiple iterations, all of them non-blocking except the first,
* may be necessary to process all pending events. After the first
* WaitForMultipleObjects call ctx->notify_me will be decremented.
*/
do {
HANDLE event;
int ret;
timeout = blocking && !have_select_revents
? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
if (timeout) {
aio_context_release(ctx);
}
ret = WaitForMultipleObjects(count, events, FALSE, timeout);
if (blocking) {
assert(first);
atomic_sub(&ctx->notify_me, 2);
}
if (timeout) {
aio_context_acquire(ctx);
}
if (first) {
aio_notify_accept(ctx);
progress |= aio_bh_poll(ctx);
first = false;
}
/* if we have any signaled events, dispatch event */
event = NULL;
if ((DWORD) (ret - WAIT_OBJECT_0) < count) {
event = events[ret - WAIT_OBJECT_0];
events[ret - WAIT_OBJECT_0] = events[--count];
} else if (!have_select_revents) {
break;
}
have_select_revents = false;
blocking = false;
progress |= aio_dispatch_handlers(ctx, event);
} while (count > 0);
progress |= timerlistgroup_run_timers(&ctx->tlg);
aio_context_release(ctx);
return progress;
}
void aio_context_setup(AioContext *ctx)
{
}
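A comparable sketch using an EventNotifier instead of a raw fd (usable with either backend; example_notifier and notified are hypothetical names):

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "block/aio.h"

static void notified(EventNotifier *e)
{
    event_notifier_test_and_clear(e);        /* acknowledge the wakeup */
}

void example_notifier(void)
{
    EventNotifier e;
    AioContext *ctx = aio_context_new(&error_abort);

    if (event_notifier_init(&e, false) == 0) {
        aio_set_event_notifier(ctx, &e, false, notified);
        event_notifier_set(&e);              /* may be called from another thread */
        aio_poll(ctx, true);                 /* dispatches notified() */
        aio_set_event_notifier(ctx, &e, false, NULL);
        event_notifier_cleanup(&e);
    }
    aio_context_unref(ctx);
}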


@@ -27,11 +27,11 @@
#include "sysemu/sysemu.h"
#include "sysemu/arch_init.h"
#include "hw/pci/pci.h"
#include "hw/audio/soundhw.h"
#include "qapi/qapi-commands-misc.h"
#include "qapi/error.h"
#include "hw/audio/audio.h"
#include "hw/smbios/smbios.h"
#include "qemu/config-file.h"
#include "qemu/error-report.h"
#include "qmp-commands.h"
#include "hw/acpi/acpi.h"
#include "qemu/help_option.h"
@@ -52,44 +52,228 @@ int graphic_depth = 32;
#define QEMU_ARCH QEMU_ARCH_ARM
#elif defined(TARGET_CRIS)
#define QEMU_ARCH QEMU_ARCH_CRIS
#elif defined(TARGET_HPPA)
#define QEMU_ARCH QEMU_ARCH_HPPA
#elif defined(TARGET_I386)
#define QEMU_ARCH QEMU_ARCH_I386
#elif defined(TARGET_LM32)
#define QEMU_ARCH QEMU_ARCH_LM32
#elif defined(TARGET_M68K)
#define QEMU_ARCH QEMU_ARCH_M68K
#elif defined(TARGET_LM32)
#define QEMU_ARCH QEMU_ARCH_LM32
#elif defined(TARGET_MICROBLAZE)
#define QEMU_ARCH QEMU_ARCH_MICROBLAZE
#elif defined(TARGET_MIPS)
#define QEMU_ARCH QEMU_ARCH_MIPS
#elif defined(TARGET_MOXIE)
#define QEMU_ARCH QEMU_ARCH_MOXIE
#elif defined(TARGET_NIOS2)
#define QEMU_ARCH QEMU_ARCH_NIOS2
#elif defined(TARGET_OPENRISC)
#define QEMU_ARCH QEMU_ARCH_OPENRISC
#elif defined(TARGET_PPC)
#define QEMU_ARCH QEMU_ARCH_PPC
#elif defined(TARGET_RISCV)
#define QEMU_ARCH QEMU_ARCH_RISCV
#elif defined(TARGET_S390X)
#define QEMU_ARCH QEMU_ARCH_S390X
#elif defined(TARGET_SH4)
#define QEMU_ARCH QEMU_ARCH_SH4
#elif defined(TARGET_SPARC)
#define QEMU_ARCH QEMU_ARCH_SPARC
#elif defined(TARGET_TRICORE)
#define QEMU_ARCH QEMU_ARCH_TRICORE
#elif defined(TARGET_UNICORE32)
#define QEMU_ARCH QEMU_ARCH_UNICORE32
#elif defined(TARGET_XTENSA)
#define QEMU_ARCH QEMU_ARCH_XTENSA
#elif defined(TARGET_UNICORE32)
#define QEMU_ARCH QEMU_ARCH_UNICORE32
#elif defined(TARGET_TRICORE)
#define QEMU_ARCH QEMU_ARCH_TRICORE
#endif
const uint32_t arch_type = QEMU_ARCH;
static struct defconfig_file {
const char *filename;
/* Indicates it is a user config file (disabled by -no-user-config) */
bool userconfig;
} default_config_files[] = {
{ CONFIG_QEMU_CONFDIR "/qemu.conf", true },
{ NULL }, /* end of list */
};
int qemu_read_default_config_files(bool userconfig)
{
int ret;
struct defconfig_file *f;
for (f = default_config_files; f->filename; f++) {
if (!userconfig && f->userconfig) {
continue;
}
ret = qemu_read_config_file(f->filename);
if (ret < 0 && ret != -ENOENT) {
return ret;
}
}
return 0;
}
struct soundhw {
const char *name;
const char *descr;
int enabled;
int isa;
union {
int (*init_isa) (ISABus *bus);
int (*init_pci) (PCIBus *bus);
} init;
};
static struct soundhw soundhw[9];
static int soundhw_count;
void isa_register_soundhw(const char *name, const char *descr,
int (*init_isa)(ISABus *bus))
{
assert(soundhw_count < ARRAY_SIZE(soundhw) - 1);
soundhw[soundhw_count].name = name;
soundhw[soundhw_count].descr = descr;
soundhw[soundhw_count].isa = 1;
soundhw[soundhw_count].init.init_isa = init_isa;
soundhw_count++;
}
void pci_register_soundhw(const char *name, const char *descr,
int (*init_pci)(PCIBus *bus))
{
assert(soundhw_count < ARRAY_SIZE(soundhw) - 1);
soundhw[soundhw_count].name = name;
soundhw[soundhw_count].descr = descr;
soundhw[soundhw_count].isa = 0;
soundhw[soundhw_count].init.init_pci = init_pci;
soundhw_count++;
}
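For illustration only, a PCI audio device would hook itself up through this table roughly as follows (the examplecard_* names are made up):

static int examplecard_init_pci(PCIBus *bus)
{
    /* create and realize the device on @bus; return 0 on success */
    return 0;
}

static void examplecard_register(void)
{
    pci_register_soundhw("examplecard", "Example PCI audio card",
                         examplecard_init_pci);
}

With -soundhw examplecard, select_soundhw() would then mark the entry enabled and audio_init() would invoke examplecard_init_pci() on the machine's PCI bus.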
void select_soundhw(const char *optarg)
{
struct soundhw *c;
if (is_help_option(optarg)) {
show_valid_cards:
if (soundhw_count) {
printf("Valid sound card names (comma separated):\n");
for (c = soundhw; c->name; ++c) {
printf ("%-11s %s\n", c->name, c->descr);
}
printf("\n-soundhw all will enable all of the above\n");
} else {
printf("Machine has no user-selectable audio hardware "
"(it may or may not have always-present audio hardware).\n");
}
exit(!is_help_option(optarg));
}
else {
size_t l;
const char *p;
char *e;
int bad_card = 0;
if (!strcmp(optarg, "all")) {
for (c = soundhw; c->name; ++c) {
c->enabled = 1;
}
return;
}
p = optarg;
while (*p) {
e = strchr(p, ',');
l = !e ? strlen(p) : (size_t) (e - p);
for (c = soundhw; c->name; ++c) {
if (!strncmp(c->name, p, l) && !c->name[l]) {
c->enabled = 1;
break;
}
}
if (!c->name) {
if (l > 80) {
error_report("Unknown sound card name (too big to show)");
}
else {
error_report("Unknown sound card name `%.*s'",
(int) l, p);
}
bad_card = 1;
}
p += l + (e != NULL);
}
if (bad_card) {
goto show_valid_cards;
}
}
}
void audio_init(void)
{
struct soundhw *c;
ISABus *isa_bus = (ISABus *) object_resolve_path_type("", TYPE_ISA_BUS, NULL);
PCIBus *pci_bus = (PCIBus *) object_resolve_path_type("", TYPE_PCI_BUS, NULL);
for (c = soundhw; c->name; ++c) {
if (c->enabled) {
if (c->isa) {
if (!isa_bus) {
error_report("ISA bus not available for %s", c->name);
exit(1);
}
c->init.init_isa(isa_bus);
} else {
if (!pci_bus) {
error_report("PCI bus not available for %s", c->name);
exit(1);
}
c->init.init_pci(pci_bus);
}
}
}
}
int qemu_uuid_parse(const char *str, uint8_t *uuid)
{
int ret;
if (strlen(str) != 36) {
return -1;
}
ret = sscanf(str, UUID_FMT, &uuid[0], &uuid[1], &uuid[2], &uuid[3],
&uuid[4], &uuid[5], &uuid[6], &uuid[7], &uuid[8], &uuid[9],
&uuid[10], &uuid[11], &uuid[12], &uuid[13], &uuid[14],
&uuid[15]);
if (ret != 16) {
return -1;
}
return 0;
}
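A quick hedged example of the expected input: a 36-character canonical UUID string, parsed via UUID_FMT into 16 bytes (example_uuid is a hypothetical caller):

static void example_uuid(void)
{
    uint8_t uuid[16];

    /* returns -1 on malformed input */
    if (qemu_uuid_parse("550e8400-e29b-41d4-a716-446655440000", uuid) < 0) {
        error_report("invalid UUID");
    }
}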
void do_acpitable_option(const QemuOpts *opts)
{
#ifdef TARGET_I386
Error *err = NULL;
acpi_table_add(opts, &err);
if (err) {
error_reportf_err(err, "Wrong acpi table provided: ");
exit(1);
}
#endif
}
void do_smbios_option(QemuOpts *opts)
{
#ifdef TARGET_I386
smbios_entry_add(opts);
#endif
}
int kvm_available(void)
{
#ifdef CONFIG_KVM
@@ -113,8 +297,7 @@ TargetInfo *qmp_query_target(Error **errp)
{
TargetInfo *info = g_malloc0(sizeof(*info));
info->arch = qapi_enum_parse(&SysEmuTarget_lookup, TARGET_NAME, -1,
&error_abort);
info->arch = g_strdup(TARGET_NAME);
return info;
}

async.c

@@ -0,0 +1,398 @@
/*
* QEMU System Emulator
*
* Copyright (c) 2003-2008 Fabrice Bellard
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "block/aio.h"
#include "block/thread-pool.h"
#include "qemu/main-loop.h"
#include "qemu/atomic.h"
#include "block/raw-aio.h"
/***********************************************************/
/* bottom halves (can be seen as timers which expire ASAP) */
struct QEMUBH {
AioContext *ctx;
QEMUBHFunc *cb;
void *opaque;
QEMUBH *next;
bool scheduled;
bool idle;
bool deleted;
};
QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
{
QEMUBH *bh;
bh = g_new(QEMUBH, 1);
*bh = (QEMUBH){
.ctx = ctx,
.cb = cb,
.opaque = opaque,
};
qemu_mutex_lock(&ctx->bh_lock);
bh->next = ctx->first_bh;
/* Make sure that the members are ready before putting bh into list */
smp_wmb();
ctx->first_bh = bh;
qemu_mutex_unlock(&ctx->bh_lock);
return bh;
}
void aio_bh_call(QEMUBH *bh)
{
bh->cb(bh->opaque);
}
/* Multiple invocations of aio_bh_poll cannot run concurrently */
int aio_bh_poll(AioContext *ctx)
{
QEMUBH *bh, **bhp, *next;
int ret;
ctx->walking_bh++;
ret = 0;
for (bh = ctx->first_bh; bh; bh = next) {
/* Make sure that fetching bh happens before accessing its members */
smp_read_barrier_depends();
next = bh->next;
/* The atomic_xchg is paired with the one in qemu_bh_schedule. The
* implicit memory barrier ensures that the callback sees all writes
* done by the scheduling thread. It also ensures that the scheduling
* thread sees the zero before bh->cb has run, and thus will call
* aio_notify again if necessary.
*/
if (!bh->deleted && atomic_xchg(&bh->scheduled, 0)) {
/* Idle BHs and the notify BH don't count as progress */
if (!bh->idle && bh != ctx->notify_dummy_bh) {
ret = 1;
}
bh->idle = 0;
aio_bh_call(bh);
}
}
ctx->walking_bh--;
/* remove deleted bhs */
if (!ctx->walking_bh) {
qemu_mutex_lock(&ctx->bh_lock);
bhp = &ctx->first_bh;
while (*bhp) {
bh = *bhp;
if (bh->deleted) {
*bhp = bh->next;
g_free(bh);
} else {
bhp = &bh->next;
}
}
qemu_mutex_unlock(&ctx->bh_lock);
}
return ret;
}
void qemu_bh_schedule_idle(QEMUBH *bh)
{
bh->idle = 1;
/* Make sure that idle & any writes needed by the callback are done
* before the locations are read in the aio_bh_poll.
*/
atomic_mb_set(&bh->scheduled, 1);
}
void qemu_bh_schedule(QEMUBH *bh)
{
AioContext *ctx;
ctx = bh->ctx;
bh->idle = 0;
/* The memory barrier implicit in atomic_xchg makes sure that:
* 1. idle & any writes needed by the callback are done before the
* locations are read in the aio_bh_poll.
* 2. ctx is loaded before scheduled is set and the callback has a chance
* to execute.
*/
if (atomic_xchg(&bh->scheduled, 1) == 0) {
aio_notify(ctx);
}
}
/* This func is async.
*/
void qemu_bh_cancel(QEMUBH *bh)
{
bh->scheduled = 0;
}
/* This func is async. The bottom half will do the delete action at the final
 * end.
*/
void qemu_bh_delete(QEMUBH *bh)
{
bh->scheduled = 0;
bh->deleted = 1;
}
int64_t
aio_compute_timeout(AioContext *ctx)
{
int64_t deadline;
int timeout = -1;
QEMUBH *bh;
for (bh = ctx->first_bh; bh; bh = bh->next) {
if (!bh->deleted && bh->scheduled) {
if (bh->idle) {
/* idle bottom halves will be polled at least
* every 10ms */
timeout = 10000000;
} else {
/* non-idle bottom halves will be executed
* immediately */
return 0;
}
}
}
deadline = timerlistgroup_deadline_ns(&ctx->tlg);
if (deadline == 0) {
return 0;
} else {
return qemu_soonest_timeout(timeout, deadline);
}
}
static gboolean
aio_ctx_prepare(GSource *source, gint *timeout)
{
AioContext *ctx = (AioContext *) source;
atomic_or(&ctx->notify_me, 1);
/* We assume there is no timeout already supplied */
*timeout = qemu_timeout_ns_to_ms(aio_compute_timeout(ctx));
if (aio_prepare(ctx)) {
*timeout = 0;
}
return *timeout == 0;
}
static gboolean
aio_ctx_check(GSource *source)
{
AioContext *ctx = (AioContext *) source;
QEMUBH *bh;
atomic_and(&ctx->notify_me, ~1);
aio_notify_accept(ctx);
for (bh = ctx->first_bh; bh; bh = bh->next) {
if (!bh->deleted && bh->scheduled) {
return true;
}
}
return aio_pending(ctx) || (timerlistgroup_deadline_ns(&ctx->tlg) == 0);
}
static gboolean
aio_ctx_dispatch(GSource *source,
GSourceFunc callback,
gpointer user_data)
{
AioContext *ctx = (AioContext *) source;
assert(callback == NULL);
aio_dispatch(ctx);
return true;
}
static void
aio_ctx_finalize(GSource *source)
{
AioContext *ctx = (AioContext *) source;
qemu_bh_delete(ctx->notify_dummy_bh);
thread_pool_free(ctx->thread_pool);
#ifdef CONFIG_LINUX_AIO
if (ctx->linux_aio) {
laio_detach_aio_context(ctx->linux_aio, ctx);
laio_cleanup(ctx->linux_aio);
ctx->linux_aio = NULL;
}
#endif
qemu_mutex_lock(&ctx->bh_lock);
while (ctx->first_bh) {
QEMUBH *next = ctx->first_bh->next;
/* qemu_bh_delete() must have been called on BHs in this AioContext */
assert(ctx->first_bh->deleted);
g_free(ctx->first_bh);
ctx->first_bh = next;
}
qemu_mutex_unlock(&ctx->bh_lock);
aio_set_event_notifier(ctx, &ctx->notifier, false, NULL);
event_notifier_cleanup(&ctx->notifier);
rfifolock_destroy(&ctx->lock);
qemu_mutex_destroy(&ctx->bh_lock);
timerlistgroup_deinit(&ctx->tlg);
}
static GSourceFuncs aio_source_funcs = {
aio_ctx_prepare,
aio_ctx_check,
aio_ctx_dispatch,
aio_ctx_finalize
};
GSource *aio_get_g_source(AioContext *ctx)
{
g_source_ref(&ctx->source);
return &ctx->source;
}
ThreadPool *aio_get_thread_pool(AioContext *ctx)
{
if (!ctx->thread_pool) {
ctx->thread_pool = thread_pool_new(ctx);
}
return ctx->thread_pool;
}
#ifdef CONFIG_LINUX_AIO
LinuxAioState *aio_get_linux_aio(AioContext *ctx)
{
if (!ctx->linux_aio) {
ctx->linux_aio = laio_init();
laio_attach_aio_context(ctx->linux_aio, ctx);
}
return ctx->linux_aio;
}
#endif
void aio_notify(AioContext *ctx)
{
/* Write e.g. bh->scheduled before reading ctx->notify_me. Pairs
* with atomic_or in aio_ctx_prepare or atomic_add in aio_poll.
*/
smp_mb();
if (ctx->notify_me) {
event_notifier_set(&ctx->notifier);
atomic_mb_set(&ctx->notified, true);
}
}
void aio_notify_accept(AioContext *ctx)
{
if (atomic_xchg(&ctx->notified, false)) {
event_notifier_test_and_clear(&ctx->notifier);
}
}
static void aio_timerlist_notify(void *opaque)
{
aio_notify(opaque);
}
static void aio_rfifolock_cb(void *opaque)
{
AioContext *ctx = opaque;
/* Kick owner thread in case they are blocked in aio_poll() */
qemu_bh_schedule(ctx->notify_dummy_bh);
}
static void notify_dummy_bh(void *opaque)
{
/* Do nothing, we were invoked just to force the event loop to iterate */
}
static void event_notifier_dummy_cb(EventNotifier *e)
{
}
AioContext *aio_context_new(Error **errp)
{
int ret;
AioContext *ctx;
ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
aio_context_setup(ctx);
ret = event_notifier_init(&ctx->notifier, false);
if (ret < 0) {
error_setg_errno(errp, -ret, "Failed to initialize event notifier");
goto fail;
}
g_source_set_can_recurse(&ctx->source, true);
aio_set_event_notifier(ctx, &ctx->notifier,
false,
(EventNotifierHandler *)
event_notifier_dummy_cb);
#ifdef CONFIG_LINUX_AIO
ctx->linux_aio = NULL;
#endif
ctx->thread_pool = NULL;
qemu_mutex_init(&ctx->bh_lock);
rfifolock_init(&ctx->lock, aio_rfifolock_cb, ctx);
timerlistgroup_init(&ctx->tlg, aio_timerlist_notify, ctx);
ctx->notify_dummy_bh = aio_bh_new(ctx, notify_dummy_bh, NULL);
return ctx;
fail:
g_source_destroy(&ctx->source);
return NULL;
}
void aio_context_ref(AioContext *ctx)
{
g_source_ref(&ctx->source);
}
void aio_context_unref(AioContext *ctx)
{
g_source_unref(&ctx->source);
}
void aio_context_acquire(AioContext *ctx)
{
rfifolock_lock(&ctx->lock);
}
void aio_context_release(AioContext *ctx)
{
rfifolock_unlock(&ctx->lock);
}
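As a usage sketch for the bottom-half API above (example_bh and my_bh_cb are hypothetical names; error handling elided):

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "block/aio.h"

static void my_bh_cb(void *opaque)
{
    int *done = opaque;
    *done = 1;                       /* runs "as soon as possible" */
}

void example_bh(void)
{
    int done = 0;
    AioContext *ctx = aio_context_new(&error_abort);
    QEMUBH *bh = aio_bh_new(ctx, my_bh_cb, &done);

    qemu_bh_schedule(bh);            /* wakes the context via aio_notify() */
    aio_poll(ctx, true);             /* aio_bh_poll() invokes my_bh_cb() */
    qemu_bh_delete(bh);              /* actually freed later by aio_bh_poll() */
    aio_context_unref(ctx);
}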


@@ -1,31 +1,13 @@
common-obj-y = audio.o audio_legacy.o noaudio.o wavaudio.o mixeng.o
common-obj-y = audio.o noaudio.o wavaudio.o mixeng.o
common-obj-$(CONFIG_SDL) += sdlaudio.o
common-obj-$(CONFIG_OSS) += ossaudio.o
common-obj-$(CONFIG_SPICE) += spiceaudio.o
common-obj-$(CONFIG_AUDIO_COREAUDIO) += coreaudio.o
common-obj-$(CONFIG_AUDIO_DSOUND) += dsoundaudio.o
common-obj-$(CONFIG_COREAUDIO) += coreaudio.o
common-obj-$(CONFIG_ALSA) += alsaaudio.o
common-obj-$(CONFIG_DSOUND) += dsoundaudio.o
common-obj-$(CONFIG_PA) += paaudio.o
common-obj-$(CONFIG_AUDIO_PT_INT) += audio_pt_int.o
common-obj-$(CONFIG_AUDIO_WIN_INT) += audio_win_int.o
common-obj-y += wavcapture.o
coreaudio.o-libs := $(COREAUDIO_LIBS)
dsoundaudio.o-libs := $(DSOUND_LIBS)
# alsa module
common-obj-$(CONFIG_AUDIO_ALSA) += alsa.mo
alsa.mo-objs = alsaaudio.o
alsa.mo-libs := $(ALSA_LIBS)
# oss module
common-obj-$(CONFIG_AUDIO_OSS) += oss.mo
oss.mo-objs = ossaudio.o
oss.mo-libs := $(OSS_LIBS)
# pulseaudio module
common-obj-$(CONFIG_AUDIO_PA) += pa.mo
pa.mo-objs = paaudio.o
pa.mo-libs := $(PULSE_LIBS)
# sdl module
common-obj-$(CONFIG_AUDIO_SDL) += sdl.mo
sdl.mo-objs = sdlaudio.o
sdl.mo-cflags := $(SDL_CFLAGS)
sdl.mo-libs := $(SDL_LIBS)
sdlaudio.o-cflags := $(SDL_CFLAGS)


@@ -28,14 +28,35 @@
#include "audio.h"
#include "trace.h"
#if QEMU_GNUC_PREREQ(4, 3)
#pragma GCC diagnostic ignored "-Waddress"
#endif
#define AUDIO_CAP "alsa"
#include "audio_int.h"
typedef struct ALSAConf {
int size_in_usec_in;
int size_in_usec_out;
const char *pcm_name_in;
const char *pcm_name_out;
unsigned int buffer_size_in;
unsigned int period_size_in;
unsigned int buffer_size_out;
unsigned int period_size_out;
unsigned int threshold;
int buffer_size_in_overridden;
int period_size_in_overridden;
int buffer_size_out_overridden;
int period_size_out_overridden;
} ALSAConf;
struct pollhlp {
snd_pcm_t *handle;
struct pollfd *pfds;
ALSAConf *conf;
int count;
int mask;
};
@@ -47,7 +68,6 @@ typedef struct ALSAVoiceOut {
void *pcm_buf;
snd_pcm_t *handle;
struct pollhlp pollhlp;
Audiodev *dev;
} ALSAVoiceOut;
typedef struct ALSAVoiceIn {
@@ -55,18 +75,21 @@ typedef struct ALSAVoiceIn {
snd_pcm_t *handle;
void *pcm_buf;
struct pollhlp pollhlp;
Audiodev *dev;
} ALSAVoiceIn;
struct alsa_params_req {
int freq;
snd_pcm_format_t fmt;
int nchannels;
int size_in_usec;
int override_mask;
unsigned int buffer_size;
unsigned int period_size;
};
struct alsa_params_obt {
int freq;
AudioFormat fmt;
audfmt_e fmt;
int endianness;
int nchannels;
snd_pcm_uframes_t samples;
@@ -273,16 +296,16 @@ static int alsa_write (SWVoiceOut *sw, void *buf, int len)
return audio_pcm_sw_write (sw, buf, len);
}
static snd_pcm_format_t aud_to_alsafmt (AudioFormat fmt, int endianness)
static snd_pcm_format_t aud_to_alsafmt (audfmt_e fmt, int endianness)
{
switch (fmt) {
case AUDIO_FORMAT_S8:
case AUD_FMT_S8:
return SND_PCM_FORMAT_S8;
case AUDIO_FORMAT_U8:
case AUD_FMT_U8:
return SND_PCM_FORMAT_U8;
case AUDIO_FORMAT_S16:
case AUD_FMT_S16:
if (endianness) {
return SND_PCM_FORMAT_S16_BE;
}
@@ -290,7 +313,7 @@ static snd_pcm_format_t aud_to_alsafmt (AudioFormat fmt, int endianness)
return SND_PCM_FORMAT_S16_LE;
}
case AUDIO_FORMAT_U16:
case AUD_FMT_U16:
if (endianness) {
return SND_PCM_FORMAT_U16_BE;
}
@@ -298,7 +321,7 @@ static snd_pcm_format_t aud_to_alsafmt (AudioFormat fmt, int endianness)
return SND_PCM_FORMAT_U16_LE;
}
case AUDIO_FORMAT_S32:
case AUD_FMT_S32:
if (endianness) {
return SND_PCM_FORMAT_S32_BE;
}
@@ -306,7 +329,7 @@ static snd_pcm_format_t aud_to_alsafmt (AudioFormat fmt, int endianness)
return SND_PCM_FORMAT_S32_LE;
}
case AUDIO_FORMAT_U32:
case AUD_FMT_U32:
if (endianness) {
return SND_PCM_FORMAT_U32_BE;
}
@@ -323,58 +346,58 @@ static snd_pcm_format_t aud_to_alsafmt (AudioFormat fmt, int endianness)
}
}
static int alsa_to_audfmt (snd_pcm_format_t alsafmt, AudioFormat *fmt,
static int alsa_to_audfmt (snd_pcm_format_t alsafmt, audfmt_e *fmt,
int *endianness)
{
switch (alsafmt) {
case SND_PCM_FORMAT_S8:
*endianness = 0;
*fmt = AUDIO_FORMAT_S8;
*fmt = AUD_FMT_S8;
break;
case SND_PCM_FORMAT_U8:
*endianness = 0;
*fmt = AUDIO_FORMAT_U8;
*fmt = AUD_FMT_U8;
break;
case SND_PCM_FORMAT_S16_LE:
*endianness = 0;
*fmt = AUDIO_FORMAT_S16;
*fmt = AUD_FMT_S16;
break;
case SND_PCM_FORMAT_U16_LE:
*endianness = 0;
*fmt = AUDIO_FORMAT_U16;
*fmt = AUD_FMT_U16;
break;
case SND_PCM_FORMAT_S16_BE:
*endianness = 1;
*fmt = AUDIO_FORMAT_S16;
*fmt = AUD_FMT_S16;
break;
case SND_PCM_FORMAT_U16_BE:
*endianness = 1;
*fmt = AUDIO_FORMAT_U16;
*fmt = AUD_FMT_U16;
break;
case SND_PCM_FORMAT_S32_LE:
*endianness = 0;
*fmt = AUDIO_FORMAT_S32;
*fmt = AUD_FMT_S32;
break;
case SND_PCM_FORMAT_U32_LE:
*endianness = 0;
*fmt = AUDIO_FORMAT_U32;
*fmt = AUD_FMT_U32;
break;
case SND_PCM_FORMAT_S32_BE:
*endianness = 1;
*fmt = AUDIO_FORMAT_S32;
*fmt = AUD_FMT_S32;
break;
case SND_PCM_FORMAT_U32_BE:
*endianness = 1;
*fmt = AUDIO_FORMAT_U32;
*fmt = AUD_FMT_U32;
break;
default:
@@ -387,18 +410,17 @@ static int alsa_to_audfmt (snd_pcm_format_t alsafmt, AudioFormat *fmt,
static void alsa_dump_info (struct alsa_params_req *req,
struct alsa_params_obt *obt,
snd_pcm_format_t obtfmt,
AudiodevAlsaPerDirectionOptions *apdo)
snd_pcm_format_t obtfmt)
{
dolog("parameter | requested value | obtained value\n");
dolog("format | %10d | %10d\n", req->fmt, obtfmt);
dolog("channels | %10d | %10d\n",
req->nchannels, obt->nchannels);
dolog("frequency | %10d | %10d\n", req->freq, obt->freq);
dolog("============================================\n");
dolog("requested: buffer len %" PRId32 " period len %" PRId32 "\n",
apdo->buffer_length, apdo->period_length);
dolog("obtained: samples %ld\n", obt->samples);
dolog ("parameter | requested value | obtained value\n");
dolog ("format | %10d | %10d\n", req->fmt, obtfmt);
dolog ("channels | %10d | %10d\n",
req->nchannels, obt->nchannels);
dolog ("frequency | %10d | %10d\n", req->freq, obt->freq);
dolog ("============================================\n");
dolog ("requested: buffer size %d period size %d\n",
req->buffer_size, req->period_size);
dolog ("obtained: samples %ld\n", obt->samples);
}
static void alsa_set_threshold (snd_pcm_t *handle, snd_pcm_uframes_t threshold)
@@ -431,23 +453,23 @@ static void alsa_set_threshold (snd_pcm_t *handle, snd_pcm_uframes_t threshold)
}
}
static int alsa_open(bool in, struct alsa_params_req *req,
struct alsa_params_obt *obt, snd_pcm_t **handlep,
Audiodev *dev)
static int alsa_open (int in, struct alsa_params_req *req,
struct alsa_params_obt *obt, snd_pcm_t **handlep,
ALSAConf *conf)
{
AudiodevAlsaOptions *aopts = &dev->u.alsa;
AudiodevAlsaPerDirectionOptions *apdo = in ? aopts->in : aopts->out;
snd_pcm_t *handle;
snd_pcm_hw_params_t *hw_params;
int err;
int size_in_usec;
unsigned int freq, nchannels;
const char *pcm_name = apdo->has_dev ? apdo->dev : "default";
const char *pcm_name = in ? conf->pcm_name_in : conf->pcm_name_out;
snd_pcm_uframes_t obt_buffer_size;
const char *typ = in ? "ADC" : "DAC";
snd_pcm_format_t obtfmt;
freq = req->freq;
nchannels = req->nchannels;
size_in_usec = req->size_in_usec;
snd_pcm_hw_params_alloca (&hw_params);
@@ -507,42 +529,79 @@ static int alsa_open(bool in, struct alsa_params_req *req,
goto err;
}
if (apdo->buffer_length) {
int dir = 0;
unsigned int btime = apdo->buffer_length;
if (req->buffer_size) {
unsigned long obt;
err = snd_pcm_hw_params_set_buffer_time_near(
handle, hw_params, &btime, &dir);
if (size_in_usec) {
int dir = 0;
unsigned int btime = req->buffer_size;
err = snd_pcm_hw_params_set_buffer_time_near (
handle,
hw_params,
&btime,
&dir
);
obt = btime;
}
else {
snd_pcm_uframes_t bsize = req->buffer_size;
err = snd_pcm_hw_params_set_buffer_size_near (
handle,
hw_params,
&bsize
);
obt = bsize;
}
if (err < 0) {
alsa_logerr2(err, typ, "Failed to set buffer time to %" PRId32 "\n",
apdo->buffer_length);
alsa_logerr2 (err, typ, "Failed to set buffer %s to %d\n",
size_in_usec ? "time" : "size", req->buffer_size);
goto err;
}
if (apdo->has_buffer_length && btime != apdo->buffer_length) {
dolog("Requested buffer time %" PRId32
" was rejected, using %u\n", apdo->buffer_length, btime);
}
if ((req->override_mask & 2) && (obt - req->buffer_size))
dolog ("Requested buffer %s %u was rejected, using %lu\n",
size_in_usec ? "time" : "size", req->buffer_size, obt);
}
if (apdo->period_length) {
int dir = 0;
unsigned int ptime = apdo->period_length;
if (req->period_size) {
unsigned long obt;
err = snd_pcm_hw_params_set_period_time_near(handle, hw_params, &ptime,
&dir);
if (size_in_usec) {
int dir = 0;
unsigned int ptime = req->period_size;
err = snd_pcm_hw_params_set_period_time_near (
handle,
hw_params,
&ptime,
&dir
);
obt = ptime;
}
else {
int dir = 0;
snd_pcm_uframes_t psize = req->period_size;
err = snd_pcm_hw_params_set_period_size_near (
handle,
hw_params,
&psize,
&dir
);
obt = psize;
}
if (err < 0) {
alsa_logerr2(err, typ, "Failed to set period time to %" PRId32 "\n",
apdo->period_length);
alsa_logerr2 (err, typ, "Failed to set period %s to %d\n",
size_in_usec ? "time" : "size", req->period_size);
goto err;
}
if (apdo->has_period_length && ptime != apdo->period_length) {
dolog("Requested period time %" PRId32 " was rejected, using %d\n",
apdo->period_length, ptime);
}
if (((req->override_mask & 1) && (obt - req->period_size)))
dolog ("Requested period %s %u was rejected, using %lu\n",
size_in_usec ? "time" : "size", req->period_size, obt);
}
err = snd_pcm_hw_params (handle, hw_params);
@@ -574,12 +633,30 @@ static int alsa_open(bool in, struct alsa_params_req *req,
goto err;
}
if (!in && aopts->has_threshold && aopts->threshold) {
struct audsettings as = { .freq = freq };
alsa_set_threshold(
handle,
audio_buffer_frames(qapi_AudiodevAlsaPerDirectionOptions_base(apdo),
&as, aopts->threshold));
if (!in && conf->threshold) {
snd_pcm_uframes_t threshold;
int bytes_per_sec;
bytes_per_sec = freq << (nchannels == 2);
switch (obt->fmt) {
case AUD_FMT_S8:
case AUD_FMT_U8:
break;
case AUD_FMT_S16:
case AUD_FMT_U16:
bytes_per_sec <<= 1;
break;
case AUD_FMT_S32:
case AUD_FMT_U32:
bytes_per_sec <<= 2;
break;
}
threshold = (conf->threshold * bytes_per_sec) / 1000;
alsa_set_threshold (handle, threshold);
}
obt->nchannels = nchannels;
@@ -592,11 +669,11 @@ static int alsa_open(bool in, struct alsa_params_req *req,
obt->nchannels != req->nchannels ||
obt->freq != req->freq) {
dolog ("Audio parameters for %s\n", typ);
alsa_dump_info(req, obt, obtfmt, apdo);
alsa_dump_info (req, obt, obtfmt);
}
#ifdef DEBUG
alsa_dump_info(req, obt, obtfmt, pdo);
alsa_dump_info (req, obt, obtfmt);
#endif
return 0;
@@ -722,13 +799,19 @@ static int alsa_init_out(HWVoiceOut *hw, struct audsettings *as,
struct alsa_params_obt obt;
snd_pcm_t *handle;
struct audsettings obt_as;
Audiodev *dev = drv_opaque;
ALSAConf *conf = drv_opaque;
req.fmt = aud_to_alsafmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.period_size = conf->period_size_out;
req.buffer_size = conf->buffer_size_out;
req.size_in_usec = conf->size_in_usec_out;
req.override_mask =
(conf->period_size_out_overridden ? 1 : 0) |
(conf->buffer_size_out_overridden ? 2 : 0);
if (alsa_open(0, &req, &obt, &handle, dev)) {
if (alsa_open (0, &req, &obt, &handle, conf)) {
return -1;
}
@@ -740,7 +823,7 @@ static int alsa_init_out(HWVoiceOut *hw, struct audsettings *as,
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = obt.samples;
alsa->pcm_buf = audio_calloc(__func__, obt.samples, 1 << hw->info.shift);
alsa->pcm_buf = audio_calloc (AUDIO_FUNC, obt.samples, 1 << hw->info.shift);
if (!alsa->pcm_buf) {
dolog ("Could not allocate DAC buffer (%d samples, each %d bytes)\n",
hw->samples, 1 << hw->info.shift);
@@ -749,7 +832,7 @@ static int alsa_init_out(HWVoiceOut *hw, struct audsettings *as,
}
alsa->handle = handle;
alsa->dev = dev;
alsa->pollhlp.conf = conf;
return 0;
}
@@ -789,12 +872,16 @@ static int alsa_voice_ctl (snd_pcm_t *handle, const char *typ, int ctl)
static int alsa_ctl_out (HWVoiceOut *hw, int cmd, ...)
{
ALSAVoiceOut *alsa = (ALSAVoiceOut *) hw;
AudiodevAlsaPerDirectionOptions *apdo = alsa->dev->u.alsa.out;
switch (cmd) {
case VOICE_ENABLE:
{
bool poll_mode = apdo->try_poll;
va_list ap;
int poll_mode;
va_start (ap, cmd);
poll_mode = va_arg (ap, int);
va_end (ap);
ldebug ("enabling voice\n");
if (poll_mode && alsa_poll_out (hw)) {
@@ -823,13 +910,19 @@ static int alsa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
struct alsa_params_obt obt;
snd_pcm_t *handle;
struct audsettings obt_as;
Audiodev *dev = drv_opaque;
ALSAConf *conf = drv_opaque;
req.fmt = aud_to_alsafmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.period_size = conf->period_size_in;
req.buffer_size = conf->buffer_size_in;
req.size_in_usec = conf->size_in_usec_in;
req.override_mask =
(conf->period_size_in_overridden ? 1 : 0) |
(conf->buffer_size_in_overridden ? 2 : 0);
if (alsa_open(1, &req, &obt, &handle, dev)) {
if (alsa_open (1, &req, &obt, &handle, conf)) {
return -1;
}
@@ -841,7 +934,7 @@ static int alsa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = obt.samples;
alsa->pcm_buf = audio_calloc(__func__, hw->samples, 1 << hw->info.shift);
alsa->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
if (!alsa->pcm_buf) {
dolog ("Could not allocate ADC buffer (%d samples, each %d bytes)\n",
hw->samples, 1 << hw->info.shift);
@@ -850,7 +943,7 @@ static int alsa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
}
alsa->handle = handle;
alsa->dev = dev;
alsa->pollhlp.conf = conf;
return 0;
}
@@ -992,12 +1085,16 @@ static int alsa_read (SWVoiceIn *sw, void *buf, int size)
static int alsa_ctl_in (HWVoiceIn *hw, int cmd, ...)
{
ALSAVoiceIn *alsa = (ALSAVoiceIn *) hw;
AudiodevAlsaPerDirectionOptions *apdo = alsa->dev->u.alsa.in;
switch (cmd) {
case VOICE_ENABLE:
{
bool poll_mode = apdo->try_poll;
va_list ap;
int poll_mode;
va_start (ap, cmd);
poll_mode = va_arg (ap, int);
va_end (ap);
ldebug ("enabling voice\n");
if (poll_mode && alsa_poll_in (hw)) {
@@ -1020,54 +1117,88 @@ static int alsa_ctl_in (HWVoiceIn *hw, int cmd, ...)
return -1;
}
static void alsa_init_per_direction(AudiodevAlsaPerDirectionOptions *apdo)
static ALSAConf glob_conf = {
.buffer_size_out = 4096,
.period_size_out = 1024,
.pcm_name_out = "default",
.pcm_name_in = "default",
};
static void *alsa_audio_init (void)
{
if (!apdo->has_try_poll) {
apdo->try_poll = true;
apdo->has_try_poll = true;
}
}
static void *alsa_audio_init(Audiodev *dev)
{
AudiodevAlsaOptions *aopts;
assert(dev->driver == AUDIODEV_DRIVER_ALSA);
aopts = &dev->u.alsa;
alsa_init_per_direction(aopts->in);
alsa_init_per_direction(aopts->out);
/*
* These need to be defined, as otherwise ALSA produces no sound.
* has_* is deliberately left unset so alsa_open can tell the value did not
* come from the user.
*/
if (!dev->u.alsa.out->has_period_length) {
/* 1024 frames assuming 44100Hz */
dev->u.alsa.out->period_length = 1024 * 1000000 / 44100;
}
if (!dev->u.alsa.out->has_buffer_length) {
/* 4096 frames assuming 44100Hz */
dev->u.alsa.out->buffer_length = 4096ll * 1000000 / 44100;
}
/*
* OptsVisitor sets unspecified optional fields to zero, but do not depend
* on it...
*/
if (!dev->u.alsa.in->has_period_length) {
dev->u.alsa.in->period_length = 0;
}
if (!dev->u.alsa.in->has_buffer_length) {
dev->u.alsa.in->buffer_length = 0;
}
return dev;
ALSAConf *conf = g_malloc(sizeof(ALSAConf));
*conf = glob_conf;
return conf;
}
static void alsa_audio_fini (void *opaque)
{
g_free(opaque);
}
static struct audio_option alsa_options[] = {
{
.name = "DAC_SIZE_IN_USEC",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.size_in_usec_out,
.descr = "DAC period/buffer size in microseconds (otherwise in frames)"
},
{
.name = "DAC_PERIOD_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.period_size_out,
.descr = "DAC period size (0 to go with system default)",
.overriddenp = &glob_conf.period_size_out_overridden
},
{
.name = "DAC_BUFFER_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.buffer_size_out,
.descr = "DAC buffer size (0 to go with system default)",
.overriddenp = &glob_conf.buffer_size_out_overridden
},
{
.name = "ADC_SIZE_IN_USEC",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.size_in_usec_in,
.descr =
"ADC period/buffer size in microseconds (otherwise in frames)"
},
{
.name = "ADC_PERIOD_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.period_size_in,
.descr = "ADC period size (0 to go with system default)",
.overriddenp = &glob_conf.period_size_in_overridden
},
{
.name = "ADC_BUFFER_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.buffer_size_in,
.descr = "ADC buffer size (0 to go with system default)",
.overriddenp = &glob_conf.buffer_size_in_overridden
},
{
.name = "THRESHOLD",
.tag = AUD_OPT_INT,
.valp = &glob_conf.threshold,
.descr = "(undocumented)"
},
{
.name = "DAC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.pcm_name_out,
.descr = "DAC device name (for instance dmix)"
},
{
.name = "ADC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.pcm_name_in,
.descr = "ADC device name"
},
{ /* End of list */ }
};
static struct audio_pcm_ops alsa_pcm_ops = {
.init_out = alsa_init_out,
.fini_out = alsa_fini_out,
@@ -1082,9 +1213,10 @@ static struct audio_pcm_ops alsa_pcm_ops = {
.ctl_in = alsa_ctl_in,
};
static struct audio_driver alsa_audio_driver = {
struct audio_driver alsa_audio_driver = {
.name = "alsa",
.descr = "ALSA http://www.alsa-project.org",
.options = alsa_options,
.init = alsa_audio_init,
.fini = alsa_audio_fini,
.pcm_ops = &alsa_pcm_ops,
@@ -1094,9 +1226,3 @@ static struct audio_driver alsa_audio_driver = {
.voice_size_out = sizeof (ALSAVoiceOut),
.voice_size_in = sizeof (ALSAVoiceIn)
};
static void register_audio_alsa(void)
{
audio_driver_register(&alsa_audio_driver);
}
type_init(register_audio_alsa);

File diff suppressed because it is too large


@@ -26,31 +26,30 @@
#define QEMU_AUDIO_H
#include "qemu/queue.h"
#include "qapi/qapi-types-audio.h"
typedef void (*audio_callback_fn) (void *opaque, int avail);
typedef enum {
AUD_FMT_U8,
AUD_FMT_S8,
AUD_FMT_U16,
AUD_FMT_S16,
AUD_FMT_U32,
AUD_FMT_S32
} audfmt_e;
#ifdef HOST_WORDS_BIGENDIAN
#define AUDIO_HOST_ENDIANNESS 1
#else
#define AUDIO_HOST_ENDIANNESS 0
#endif
typedef struct audsettings {
struct audsettings {
int freq;
int nchannels;
AudioFormat fmt;
audfmt_e fmt;
int endianness;
} audsettings;
audsettings audiodev_to_audsettings(AudiodevPerDirectionOptions *pdo);
int audioformat_bytes_per_sample(AudioFormat fmt);
int audio_buffer_frames(AudiodevPerDirectionOptions *pdo,
audsettings *as, int def_usecs);
int audio_buffer_samples(AudiodevPerDirectionOptions *pdo,
audsettings *as, int def_usecs);
int audio_buffer_bytes(AudiodevPerDirectionOptions *pdo,
audsettings *as, int def_usecs);
};
typedef enum {
AUD_CNOTIFY_ENABLE,
@@ -90,6 +89,7 @@ typedef struct QEMUAudioTimeStamp {
void AUD_vlog (const char *cap, const char *fmt, va_list ap) GCC_FMT_ATTR(2, 0);
void AUD_log (const char *cap, const char *fmt, ...) GCC_FMT_ATTR(2, 3);
void AUD_help (void);
void AUD_register_card (const char *name, QEMUSoundCard *card);
void AUD_remove_card (QEMUSoundCard *card);
CaptureVoiceOut *AUD_add_capture (
@@ -166,13 +166,4 @@ int wav_start_capture (CaptureState *s, const char *path, int freq,
bool audio_is_cleaning_up(void);
void audio_cleanup(void);
void audio_sample_to_uint64(void *samples, int pos,
uint64_t *left, uint64_t *right);
void audio_sample_from_uint64(void *samples, int pos,
uint64_t left, uint64_t right);
void audio_parse_option(const char *opt);
void audio_init_audiodevs(void);
void audio_legacy_help(void);
#endif /* QEMU_AUDIO_H */


@@ -25,7 +25,7 @@
#ifndef QEMU_AUDIO_INT_H
#define QEMU_AUDIO_INT_H
#ifdef CONFIG_AUDIO_COREAUDIO
#ifdef CONFIG_COREAUDIO
#define FLOAT_MIXENG
/* #define RECIPROCAL */
#endif
@@ -33,6 +33,22 @@
struct audio_pcm_ops;
typedef enum {
AUD_OPT_INT,
AUD_OPT_FMT,
AUD_OPT_STR,
AUD_OPT_BOOL
} audio_option_tag_e;
struct audio_option {
const char *name;
audio_option_tag_e tag;
void *valp;
const char *descr;
int *overriddenp;
int overridden;
};
struct audio_callback {
void *opaque;
audio_callback_fn fn;
@@ -125,11 +141,11 @@ struct SWVoiceIn {
QLIST_ENTRY (SWVoiceIn) entries;
};
typedef struct audio_driver audio_driver;
struct audio_driver {
const char *name;
const char *descr;
void *(*init) (Audiodev *);
struct audio_option *options;
void *(*init) (void);
void (*fini) (void *);
struct audio_pcm_ops *pcm_ops;
int can_be_default;
@@ -138,7 +154,6 @@ struct audio_driver {
int voice_size_out;
int voice_size_in;
int ctl_caps;
QLIST_ENTRY(audio_driver) next;
};
struct audio_pcm_ops {
@@ -174,9 +189,8 @@ struct SWVoiceCap {
QLIST_ENTRY (SWVoiceCap) entries;
};
typedef struct AudioState {
struct AudioState {
struct audio_driver *drv;
Audiodev *dev;
void *drv_opaque;
QEMUTimer *ts;
@@ -187,16 +201,19 @@ typedef struct AudioState {
int nb_hw_voices_out;
int nb_hw_voices_in;
int vm_running;
int64_t period_ticks;
} AudioState;
};
extern struct audio_driver no_audio_driver;
extern struct audio_driver oss_audio_driver;
extern struct audio_driver sdl_audio_driver;
extern struct audio_driver wav_audio_driver;
extern struct audio_driver alsa_audio_driver;
extern struct audio_driver coreaudio_audio_driver;
extern struct audio_driver dsound_audio_driver;
extern struct audio_driver pa_audio_driver;
extern struct audio_driver spice_audio_driver;
extern const struct mixeng_volume nominal_volume;
extern const char *audio_prio_list[];
void audio_driver_register(audio_driver *drv);
audio_driver *audio_driver_lookup(const char *name);
void audio_pcm_init_info (struct audio_pcm_info *info, struct audsettings *as);
void audio_pcm_info_clear_buf (struct audio_pcm_info *info, void *buf, int len);
@@ -235,18 +252,10 @@ static inline int audio_ring_dist (int dst, int src, int len)
#define AUDIO_STRINGIFY_(n) #n
#define AUDIO_STRINGIFY(n) AUDIO_STRINGIFY_(n)
typedef struct AudiodevListEntry {
Audiodev *dev;
QSIMPLEQ_ENTRY(AudiodevListEntry) next;
} AudiodevListEntry;
typedef QSIMPLEQ_HEAD(, AudiodevListEntry) AudiodevListHead;
AudiodevListHead audio_handle_legacy_opts(void);
void audio_free_audiodev_list(AudiodevListHead *head);
void audio_create_pdos(Audiodev *dev);
AudiodevPerDirectionOptions *audio_get_pdo_in(Audiodev *dev);
AudiodevPerDirectionOptions *audio_get_pdo_out(Audiodev *dev);
#if defined _MSC_VER || defined __GNUC__
#define AUDIO_FUNC __FUNCTION__
#else
#define AUDIO_FUNC __FILE__ ":" AUDIO_STRINGIFY (__LINE__)
#endif
#endif /* QEMU_AUDIO_INT_H */


@@ -1,550 +0,0 @@
/*
* QEMU Audio subsystem: legacy configuration handling
*
* Copyright (c) 2015-2019 Zoltán Kővágó <DirtY.iCE.hu@gmail.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "audio.h"
#include "audio_int.h"
#include "qemu-common.h"
#include "qemu/cutils.h"
#include "qemu/timer.h"
#include "qapi/error.h"
#include "qapi/qapi-visit-audio.h"
#include "qapi/visitor-impl.h"
#define AUDIO_CAP "audio-legacy"
#include "audio_int.h"
static uint32_t toui32(const char *str)
{
unsigned long long ret;
if (parse_uint_full(str, &ret, 10) || ret > UINT32_MAX) {
dolog("Invalid integer value `%s'\n", str);
exit(1);
}
return ret;
}
/* helper functions to convert env variables */
static void get_bool(const char *env, bool *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
*dst = toui32(val) != 0;
*has_dst = true;
}
}
static void get_int(const char *env, uint32_t *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
*dst = toui32(val);
*has_dst = true;
}
}
static void get_str(const char *env, char **dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
if (*has_dst) {
g_free(*dst);
}
*dst = g_strdup(val);
*has_dst = true;
}
}
static void get_fmt(const char *env, AudioFormat *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
size_t i;
for (i = 0; i < AudioFormat_lookup.size; ++i) {
if (strcasecmp(val, AudioFormat_lookup.array[i]) == 0) {
*dst = i;
*has_dst = true;
return;
}
}
dolog("Invalid audio format `%s'\n", val);
exit(1);
}
}
static void get_millis_to_usecs(const char *env, uint32_t *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
*dst = toui32(val) * 1000;
*has_dst = true;
}
}
static uint32_t frames_to_usecs(uint32_t frames,
AudiodevPerDirectionOptions *pdo)
{
uint32_t freq = pdo->has_frequency ? pdo->frequency : 44100;
return (frames * 1000000 + freq / 2) / freq;
}
static void get_frames_to_usecs(const char *env, uint32_t *dst, bool *has_dst,
AudiodevPerDirectionOptions *pdo)
{
const char *val = getenv(env);
if (val) {
*dst = frames_to_usecs(toui32(val), pdo);
*has_dst = true;
}
}
static uint32_t samples_to_usecs(uint32_t samples,
AudiodevPerDirectionOptions *pdo)
{
uint32_t channels = pdo->has_channels ? pdo->channels : 2;
return frames_to_usecs(samples / channels, pdo);
}
static void get_samples_to_usecs(const char *env, uint32_t *dst, bool *has_dst,
AudiodevPerDirectionOptions *pdo)
{
const char *val = getenv(env);
if (val) {
*dst = samples_to_usecs(toui32(val), pdo);
*has_dst = true;
}
}
static uint32_t bytes_to_usecs(uint32_t bytes, AudiodevPerDirectionOptions *pdo)
{
AudioFormat fmt = pdo->has_format ? pdo->format : AUDIO_FORMAT_S16;
uint32_t bytes_per_sample = audioformat_bytes_per_sample(fmt);
return samples_to_usecs(bytes / bytes_per_sample, pdo);
}
static void get_bytes_to_usecs(const char *env, uint32_t *dst, bool *has_dst,
AudiodevPerDirectionOptions *pdo)
{
const char *val = getenv(env);
if (val) {
*dst = bytes_to_usecs(toui32(val), pdo);
*has_dst = true;
}
}
/* backend specific functions */
/* ALSA */
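/*
 * Legacy ALSA period/buffer sizes were given in frames unless the
 * corresponding QEMU_ALSA_*_SIZE_IN_USEC variable was set; the new options
 * always take microseconds, so plain frame counts are converted below.
 */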
static void handle_alsa_per_direction(
AudiodevAlsaPerDirectionOptions *apdo, const char *prefix)
{
char buf[64];
size_t len = strlen(prefix);
bool size_in_usecs = false;
bool dummy;
memcpy(buf, prefix, len);
strcpy(buf + len, "TRY_POLL");
get_bool(buf, &apdo->try_poll, &apdo->has_try_poll);
strcpy(buf + len, "DEV");
get_str(buf, &apdo->dev, &apdo->has_dev);
strcpy(buf + len, "SIZE_IN_USEC");
get_bool(buf, &size_in_usecs, &dummy);
strcpy(buf + len, "PERIOD_SIZE");
get_int(buf, &apdo->period_length, &apdo->has_period_length);
if (apdo->has_period_length && !size_in_usecs) {
apdo->period_length = frames_to_usecs(
apdo->period_length,
qapi_AudiodevAlsaPerDirectionOptions_base(apdo));
}
strcpy(buf + len, "BUFFER_SIZE");
get_int(buf, &apdo->buffer_length, &apdo->has_buffer_length);
if (apdo->has_buffer_length && !size_in_usecs) {
apdo->buffer_length = frames_to_usecs(
apdo->buffer_length,
qapi_AudiodevAlsaPerDirectionOptions_base(apdo));
}
}
static void handle_alsa(Audiodev *dev)
{
AudiodevAlsaOptions *aopt = &dev->u.alsa;
handle_alsa_per_direction(aopt->in, "QEMU_ALSA_ADC_");
handle_alsa_per_direction(aopt->out, "QEMU_ALSA_DAC_");
get_millis_to_usecs("QEMU_ALSA_THRESHOLD",
&aopt->threshold, &aopt->has_threshold);
}
/* coreaudio */
static void handle_coreaudio(Audiodev *dev)
{
get_frames_to_usecs(
"QEMU_COREAUDIO_BUFFER_SIZE",
&dev->u.coreaudio.out->buffer_length,
&dev->u.coreaudio.out->has_buffer_length,
qapi_AudiodevCoreaudioPerDirectionOptions_base(dev->u.coreaudio.out));
get_int("QEMU_COREAUDIO_BUFFER_COUNT",
&dev->u.coreaudio.out->buffer_count,
&dev->u.coreaudio.out->has_buffer_count);
}
/* dsound */
static void handle_dsound(Audiodev *dev)
{
get_millis_to_usecs("QEMU_DSOUND_LATENCY_MILLIS",
&dev->u.dsound.latency, &dev->u.dsound.has_latency);
get_bytes_to_usecs("QEMU_DSOUND_BUFSIZE_OUT",
&dev->u.dsound.out->buffer_length,
&dev->u.dsound.out->has_buffer_length,
dev->u.dsound.out);
get_bytes_to_usecs("QEMU_DSOUND_BUFSIZE_IN",
&dev->u.dsound.in->buffer_length,
&dev->u.dsound.in->has_buffer_length,
dev->u.dsound.in);
}
/* OSS */
static void handle_oss_per_direction(
AudiodevOssPerDirectionOptions *opdo, const char *try_poll_env,
const char *dev_env)
{
get_bool(try_poll_env, &opdo->try_poll, &opdo->has_try_poll);
get_str(dev_env, &opdo->dev, &opdo->has_dev);
get_bytes_to_usecs("QEMU_OSS_FRAGSIZE",
&opdo->buffer_length, &opdo->has_buffer_length,
qapi_AudiodevOssPerDirectionOptions_base(opdo));
get_int("QEMU_OSS_NFRAGS", &opdo->buffer_count,
&opdo->has_buffer_count);
}
static void handle_oss(Audiodev *dev)
{
AudiodevOssOptions *oopt = &dev->u.oss;
handle_oss_per_direction(oopt->in, "QEMU_AUDIO_ADC_TRY_POLL",
"QEMU_OSS_ADC_DEV");
handle_oss_per_direction(oopt->out, "QEMU_AUDIO_DAC_TRY_POLL",
"QEMU_OSS_DAC_DEV");
get_bool("QEMU_OSS_MMAP", &oopt->try_mmap, &oopt->has_try_mmap);
get_bool("QEMU_OSS_EXCLUSIVE", &oopt->exclusive, &oopt->has_exclusive);
get_int("QEMU_OSS_POLICY", &oopt->dsp_policy, &oopt->has_dsp_policy);
}
/* pulseaudio */
static void handle_pa_per_direction(
AudiodevPaPerDirectionOptions *ppdo, const char *env)
{
get_str(env, &ppdo->name, &ppdo->has_name);
}
static void handle_pa(Audiodev *dev)
{
handle_pa_per_direction(dev->u.pa.in, "QEMU_PA_SOURCE");
handle_pa_per_direction(dev->u.pa.out, "QEMU_PA_SINK");
get_samples_to_usecs(
"QEMU_PA_SAMPLES", &dev->u.pa.in->buffer_length,
&dev->u.pa.in->has_buffer_length,
qapi_AudiodevPaPerDirectionOptions_base(dev->u.pa.in));
get_samples_to_usecs(
"QEMU_PA_SAMPLES", &dev->u.pa.out->buffer_length,
&dev->u.pa.out->has_buffer_length,
qapi_AudiodevPaPerDirectionOptions_base(dev->u.pa.out));
get_str("QEMU_PA_SERVER", &dev->u.pa.server, &dev->u.pa.has_server);
}
/* SDL */
static void handle_sdl(Audiodev *dev)
{
/* SDL is output only */
get_samples_to_usecs("QEMU_SDL_SAMPLES", &dev->u.sdl.out->buffer_length,
&dev->u.sdl.out->has_buffer_length, dev->u.sdl.out);
}
/* wav */
static void handle_wav(Audiodev *dev)
{
get_int("QEMU_WAV_FREQUENCY",
&dev->u.wav.out->frequency, &dev->u.wav.out->has_frequency);
get_fmt("QEMU_WAV_FORMAT", &dev->u.wav.out->format,
&dev->u.wav.out->has_format);
get_int("QEMU_WAV_DAC_FIXED_CHANNELS",
&dev->u.wav.out->channels, &dev->u.wav.out->has_channels);
get_str("QEMU_WAV_PATH", &dev->u.wav.path, &dev->u.wav.has_path);
}
/* general */
static void handle_per_direction(
AudiodevPerDirectionOptions *pdo, const char *prefix)
{
char buf[64];
size_t len = strlen(prefix);
memcpy(buf, prefix, len);
strcpy(buf + len, "FIXED_SETTINGS");
get_bool(buf, &pdo->fixed_settings, &pdo->has_fixed_settings);
strcpy(buf + len, "FIXED_FREQ");
get_int(buf, &pdo->frequency, &pdo->has_frequency);
strcpy(buf + len, "FIXED_FMT");
get_fmt(buf, &pdo->format, &pdo->has_format);
strcpy(buf + len, "FIXED_CHANNELS");
get_int(buf, &pdo->channels, &pdo->has_channels);
strcpy(buf + len, "VOICES");
get_int(buf, &pdo->voices, &pdo->has_voices);
}
static AudiodevListEntry *legacy_opt(const char *drvname)
{
AudiodevListEntry *e = g_malloc0(sizeof(AudiodevListEntry));
e->dev = g_malloc0(sizeof(Audiodev));
e->dev->id = g_strdup(drvname);
e->dev->driver = qapi_enum_parse(
&AudiodevDriver_lookup, drvname, -1, &error_abort);
audio_create_pdos(e->dev);
handle_per_direction(audio_get_pdo_in(e->dev), "QEMU_AUDIO_ADC_");
handle_per_direction(audio_get_pdo_out(e->dev), "QEMU_AUDIO_DAC_");
/* Original description: Timer period in HZ (0 - use lowest possible) */
get_int("QEMU_AUDIO_TIMER_PERIOD",
&e->dev->timer_period, &e->dev->has_timer_period);
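/*
 * The legacy value is a frequency in Hz, while the new option is a period
 * in microseconds, so e.g. QEMU_AUDIO_TIMER_PERIOD=100 becomes a
 * timer-period of 1000000 / 100 = 10000 usec.
 */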
if (e->dev->has_timer_period && e->dev->timer_period) {
e->dev->timer_period = NANOSECONDS_PER_SECOND / 1000 /
e->dev->timer_period;
}
switch (e->dev->driver) {
case AUDIODEV_DRIVER_ALSA:
handle_alsa(e->dev);
break;
case AUDIODEV_DRIVER_COREAUDIO:
handle_coreaudio(e->dev);
break;
case AUDIODEV_DRIVER_DSOUND:
handle_dsound(e->dev);
break;
case AUDIODEV_DRIVER_OSS:
handle_oss(e->dev);
break;
case AUDIODEV_DRIVER_PA:
handle_pa(e->dev);
break;
case AUDIODEV_DRIVER_SDL:
handle_sdl(e->dev);
break;
case AUDIODEV_DRIVER_WAV:
handle_wav(e->dev);
break;
default:
break;
}
return e;
}
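/*
 * Entry point for the legacy configuration: if QEMU_AUDIO_DRV names a
 * driver, only that driver is converted; otherwise an Audiodev is
 * synthesized for every driver that can act as a default, in priority
 * order.
 */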
AudiodevListHead audio_handle_legacy_opts(void)
{
const char *drvname = getenv("QEMU_AUDIO_DRV");
AudiodevListHead head = QSIMPLEQ_HEAD_INITIALIZER(head);
if (drvname) {
AudiodevListEntry *e;
audio_driver *driver = audio_driver_lookup(drvname);
if (!driver) {
dolog("Unknown audio driver `%s'\n", drvname);
exit(1);
}
e = legacy_opt(drvname);
QSIMPLEQ_INSERT_TAIL(&head, e, next);
} else {
for (int i = 0; audio_prio_list[i]; i++) {
audio_driver *driver = audio_driver_lookup(audio_prio_list[i]);
if (driver && driver->can_be_default) {
AudiodevListEntry *e = legacy_opt(driver->name);
QSIMPLEQ_INSERT_TAIL(&head, e, next);
}
}
if (QSIMPLEQ_EMPTY(&head)) {
dolog("Internal error: no default audio driver available\n");
exit(1);
}
}
return head;
}
/* visitor to print -audiodev option */
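/*
 * Nested structs are flattened into dotted keys, so a per-direction
 * frequency prints as e.g. "out.frequency=44100", matching the -audiodev
 * command line syntax.
 */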
typedef struct {
Visitor visitor;
bool comma;
GList *path;
} LegacyPrintVisitor;
static void lv_start_struct(Visitor *v, const char *name, void **obj,
size_t size, Error **errp)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
lv->path = g_list_append(lv->path, g_strdup(name));
}
static void lv_end_struct(Visitor *v, void **obj)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
lv->path = g_list_delete_link(lv->path, g_list_last(lv->path));
}
static void lv_print_key(Visitor *v, const char *name)
{
GList *e;
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
if (lv->comma) {
putchar(',');
} else {
lv->comma = true;
}
for (e = lv->path; e; e = e->next) {
if (e->data) {
printf("%s.", (const char *) e->data);
}
}
printf("%s=", name);
}
static void lv_type_int64(Visitor *v, const char *name, int64_t *obj,
Error **errp)
{
lv_print_key(v, name);
printf("%" PRIi64, *obj);
}
static void lv_type_uint64(Visitor *v, const char *name, uint64_t *obj,
Error **errp)
{
lv_print_key(v, name);
printf("%" PRIu64, *obj);
}
static void lv_type_bool(Visitor *v, const char *name, bool *obj, Error **errp)
{
lv_print_key(v, name);
printf("%s", *obj ? "on" : "off");
}
static void lv_type_str(Visitor *v, const char *name, char **obj, Error **errp)
{
const char *str = *obj;
lv_print_key(v, name);
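/* QemuOpts-style escaping: a literal ',' in the value must be doubled */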
while (*str) {
if (*str == ',') {
putchar(',');
}
putchar(*str++);
}
}
static void lv_complete(Visitor *v, void *opaque)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
assert(lv->path == NULL);
}
static void lv_free(Visitor *v)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
g_list_free_full(lv->path, g_free);
g_free(lv);
}
static Visitor *legacy_visitor_new(void)
{
LegacyPrintVisitor *lv = g_malloc0(sizeof(LegacyPrintVisitor));
lv->visitor.start_struct = lv_start_struct;
lv->visitor.end_struct = lv_end_struct;
/* lists not supported */
lv->visitor.type_int64 = lv_type_int64;
lv->visitor.type_uint64 = lv_type_uint64;
lv->visitor.type_bool = lv_type_bool;
lv->visitor.type_str = lv_type_str;
lv->visitor.type = VISITOR_OUTPUT;
lv->visitor.complete = lv_complete;
lv->visitor.free = lv_free;
return &lv->visitor;
}
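/*
 * Print the -audiodev equivalent of the current environment.  As a rough
 * illustration (the exact keys and their order follow the QAPI schema),
 * running with QEMU_AUDIO_DRV=alsa and QEMU_AUDIO_TIMER_PERIOD=50 set
 * would report something like:
 *   -audiodev id=alsa,driver=alsa,timer-period=20000
 */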
void audio_legacy_help(void)
{
AudiodevListHead head;
AudiodevListEntry *e;
printf("Environment variable based configuration deprecated.\n");
printf("Please use the new -audiodev option.\n");
head = audio_handle_legacy_opts();
printf("\nEquivalent -audiodev to your current environment variables:\n");
if (!getenv("QEMU_AUDIO_DRV")) {
printf("(Since you didn't specify QEMU_AUDIO_DRV, I'll list all "
"possibilities)\n");
}
QSIMPLEQ_FOREACH(e, &head, next) {
Visitor *v;
Audiodev *dev = e->dev;
printf("-audiodev ");
v = legacy_visitor_new();
visit_type_Audiodev(v, NULL, &dev, &error_abort);
visit_free(v);
printf("\n");
}
audio_free_audiodev_list(&head);
}

View File

@@ -31,7 +31,7 @@ int audio_pt_init (struct audio_pt *p, void *(*func) (void *),
err = sigfillset (&set);
if (err) {
logerr(p, errno, "%s(%s): sigfillset failed", cap, __func__);
logerr (p, errno, "%s(%s): sigfillset failed", cap, AUDIO_FUNC);
return -1;
}
@@ -57,8 +57,8 @@ int audio_pt_init (struct audio_pt *p, void *(*func) (void *),
err2 = pthread_sigmask (SIG_SETMASK, &old_set, NULL);
if (err2) {
logerr(p, err2, "%s(%s): pthread_sigmask (restore) failed",
cap, __func__);
logerr (p, err2, "%s(%s): pthread_sigmask (restore) failed",
cap, AUDIO_FUNC);
/* We have failed to restore original signal mask, all bets are off,
so terminate the process */
exit (EXIT_FAILURE);
@@ -74,17 +74,17 @@ int audio_pt_init (struct audio_pt *p, void *(*func) (void *),
err2:
err2 = pthread_cond_destroy (&p->cond);
if (err2) {
logerr(p, err2, "%s(%s): pthread_cond_destroy failed", cap, __func__);
logerr (p, err2, "%s(%s): pthread_cond_destroy failed", cap, AUDIO_FUNC);
}
err1:
err2 = pthread_mutex_destroy (&p->mutex);
if (err2) {
logerr(p, err2, "%s(%s): pthread_mutex_destroy failed", cap, __func__);
logerr (p, err2, "%s(%s): pthread_mutex_destroy failed", cap, AUDIO_FUNC);
}
err0:
logerr(p, err, "%s(%s): %s failed", cap, __func__, efunc);
logerr (p, err, "%s(%s): %s failed", cap, AUDIO_FUNC, efunc);
return -1;
}
@@ -94,13 +94,13 @@ int audio_pt_fini (struct audio_pt *p, const char *cap)
err = pthread_cond_destroy (&p->cond);
if (err) {
logerr(p, err, "%s(%s): pthread_cond_destroy failed", cap, __func__);
logerr (p, err, "%s(%s): pthread_cond_destroy failed", cap, AUDIO_FUNC);
ret = -1;
}
err = pthread_mutex_destroy (&p->mutex);
if (err) {
logerr(p, err, "%s(%s): pthread_mutex_destroy failed", cap, __func__);
logerr (p, err, "%s(%s): pthread_mutex_destroy failed", cap, AUDIO_FUNC);
ret = -1;
}
return ret;
@@ -112,7 +112,7 @@ int audio_pt_lock (struct audio_pt *p, const char *cap)
err = pthread_mutex_lock (&p->mutex);
if (err) {
logerr(p, err, "%s(%s): pthread_mutex_lock failed", cap, __func__);
logerr (p, err, "%s(%s): pthread_mutex_lock failed", cap, AUDIO_FUNC);
return -1;
}
return 0;
@@ -124,7 +124,7 @@ int audio_pt_unlock (struct audio_pt *p, const char *cap)
err = pthread_mutex_unlock (&p->mutex);
if (err) {
logerr(p, err, "%s(%s): pthread_mutex_unlock failed", cap, __func__);
logerr (p, err, "%s(%s): pthread_mutex_unlock failed", cap, AUDIO_FUNC);
return -1;
}
return 0;
@@ -136,7 +136,7 @@ int audio_pt_wait (struct audio_pt *p, const char *cap)
err = pthread_cond_wait (&p->cond, &p->mutex);
if (err) {
logerr(p, err, "%s(%s): pthread_cond_wait failed", cap, __func__);
logerr (p, err, "%s(%s): pthread_cond_wait failed", cap, AUDIO_FUNC);
return -1;
}
return 0;
@@ -148,12 +148,12 @@ int audio_pt_unlock_and_signal (struct audio_pt *p, const char *cap)
err = pthread_mutex_unlock (&p->mutex);
if (err) {
logerr(p, err, "%s(%s): pthread_mutex_unlock failed", cap, __func__);
logerr (p, err, "%s(%s): pthread_mutex_unlock failed", cap, AUDIO_FUNC);
return -1;
}
err = pthread_cond_signal (&p->cond);
if (err) {
logerr(p, err, "%s(%s): pthread_cond_signal failed", cap, __func__);
logerr (p, err, "%s(%s): pthread_cond_signal failed", cap, AUDIO_FUNC);
return -1;
}
return 0;
@@ -166,7 +166,7 @@ int audio_pt_join (struct audio_pt *p, void **arg, const char *cap)
err = pthread_join (p->thread, &ret);
if (err) {
logerr(p, err, "%s(%s): pthread_join failed", cap, __func__);
logerr (p, err, "%s(%s): pthread_join failed", cap, AUDIO_FUNC);
return -1;
}
*arg = ret;

View File

@@ -57,13 +57,13 @@ static void glue (audio_init_nb_voices_, TYPE) (struct audio_driver *drv)
glue (s->nb_hw_voices_, TYPE) = max_voices;
}
if (audio_bug(__func__, !voice_size && max_voices)) {
if (audio_bug (AUDIO_FUNC, !voice_size && max_voices)) {
dolog ("drv=`%s' voice_size=0 max_voices=%d\n",
drv->name, max_voices);
glue (s->nb_hw_voices_, TYPE) = 0;
}
if (audio_bug(__func__, voice_size && !max_voices)) {
if (audio_bug (AUDIO_FUNC, voice_size && !max_voices)) {
dolog ("drv=`%s' voice_size=%d max_voices=0\n",
drv->name, voice_size);
}
@@ -77,7 +77,7 @@ static void glue (audio_pcm_hw_free_resources_, TYPE) (HW *hw)
static int glue (audio_pcm_hw_alloc_resources_, TYPE) (HW *hw)
{
HWBUF = audio_calloc(__func__, hw->samples, sizeof(struct st_sample));
HWBUF = audio_calloc (AUDIO_FUNC, hw->samples, sizeof (struct st_sample));
if (!HWBUF) {
dolog ("Could not allocate " NAME " buffer (%d samples)\n",
hw->samples);
@@ -105,7 +105,7 @@ static int glue (audio_pcm_sw_alloc_resources_, TYPE) (SW *sw)
samples = ((int64_t) sw->hw->samples << 32) / sw->ratio;
sw->buf = audio_calloc(__func__, samples, sizeof(struct st_sample));
sw->buf = audio_calloc (AUDIO_FUNC, samples, sizeof (struct st_sample));
if (!sw->buf) {
dolog ("Could not allocate buffer for `%s' (%d samples)\n",
SW_NAME (sw), samples);
@@ -238,17 +238,17 @@ static HW *glue (audio_pcm_hw_add_new_, TYPE) (struct audsettings *as)
return NULL;
}
if (audio_bug(__func__, !drv)) {
if (audio_bug (AUDIO_FUNC, !drv)) {
dolog ("No host audio driver\n");
return NULL;
}
if (audio_bug(__func__, !drv->pcm_ops)) {
if (audio_bug (AUDIO_FUNC, !drv->pcm_ops)) {
dolog ("Host audio driver without pcm_ops\n");
return NULL;
}
hw = audio_calloc(__func__, 1, glue(drv->voice_size_, TYPE));
hw = audio_calloc (AUDIO_FUNC, 1, glue (drv->voice_size_, TYPE));
if (!hw) {
dolog ("Can not allocate voice `%s' size %d\n",
drv->name, glue (drv->voice_size_, TYPE));
@@ -266,7 +266,7 @@ static HW *glue (audio_pcm_hw_add_new_, TYPE) (struct audsettings *as)
goto err0;
}
if (audio_bug(__func__, hw->samples <= 0)) {
if (audio_bug (AUDIO_FUNC, hw->samples <= 0)) {
dolog ("hw->samples=%d\n", hw->samples);
goto err1;
}
@@ -299,42 +299,11 @@ static HW *glue (audio_pcm_hw_add_new_, TYPE) (struct audsettings *as)
return NULL;
}
AudiodevPerDirectionOptions *glue(audio_get_pdo_, TYPE)(Audiodev *dev)
{
switch (dev->driver) {
case AUDIODEV_DRIVER_NONE:
return dev->u.none.TYPE;
case AUDIODEV_DRIVER_ALSA:
return qapi_AudiodevAlsaPerDirectionOptions_base(dev->u.alsa.TYPE);
case AUDIODEV_DRIVER_COREAUDIO:
return qapi_AudiodevCoreaudioPerDirectionOptions_base(
dev->u.coreaudio.TYPE);
case AUDIODEV_DRIVER_DSOUND:
return dev->u.dsound.TYPE;
case AUDIODEV_DRIVER_OSS:
return qapi_AudiodevOssPerDirectionOptions_base(dev->u.oss.TYPE);
case AUDIODEV_DRIVER_PA:
return qapi_AudiodevPaPerDirectionOptions_base(dev->u.pa.TYPE);
case AUDIODEV_DRIVER_SDL:
return dev->u.sdl.TYPE;
case AUDIODEV_DRIVER_SPICE:
return dev->u.spice.TYPE;
case AUDIODEV_DRIVER_WAV:
return dev->u.wav.TYPE;
case AUDIODEV_DRIVER__MAX:
break;
}
abort();
}
static HW *glue (audio_pcm_hw_add_, TYPE) (struct audsettings *as)
{
HW *hw;
AudioState *s = &glob_audio_state;
AudiodevPerDirectionOptions *pdo = glue(audio_get_pdo_, TYPE)(s->dev);
if (pdo->fixed_settings) {
if (glue (conf.fixed_, TYPE).enabled && glue (conf.fixed_, TYPE).greedy) {
hw = glue (audio_pcm_hw_add_new_, TYPE) (as);
if (hw) {
return hw;
@@ -362,17 +331,15 @@ static SW *glue (audio_pcm_create_voice_pair_, TYPE) (
SW *sw;
HW *hw;
struct audsettings hw_as;
AudioState *s = &glob_audio_state;
AudiodevPerDirectionOptions *pdo = glue(audio_get_pdo_, TYPE)(s->dev);
if (pdo->fixed_settings) {
hw_as = audiodev_to_audsettings(pdo);
if (glue (conf.fixed_, TYPE).enabled) {
hw_as = glue (conf.fixed_, TYPE).settings;
}
else {
hw_as = *as;
}
sw = audio_calloc(__func__, 1, sizeof(*sw));
sw = audio_calloc (AUDIO_FUNC, 1, sizeof (*sw));
if (!sw) {
dolog ("Could not allocate soft voice `%s' (%zu bytes)\n",
sw_name ? sw_name : "unknown", sizeof (*sw));
@@ -412,7 +379,7 @@ static void glue (audio_close_, TYPE) (SW *sw)
void glue (AUD_close_, TYPE) (QEMUSoundCard *card, SW *sw)
{
if (sw) {
if (audio_bug(__func__, !card)) {
if (audio_bug (AUDIO_FUNC, !card)) {
dolog ("card=%p\n", card);
return;
}
@@ -431,9 +398,8 @@ SW *glue (AUD_open_, TYPE) (
)
{
AudioState *s = &glob_audio_state;
AudiodevPerDirectionOptions *pdo = glue(audio_get_pdo_, TYPE)(s->dev);
if (audio_bug(__func__, !card || !name || !callback_fn || !as)) {
if (audio_bug (AUDIO_FUNC, !card || !name || !callback_fn || !as)) {
dolog ("card=%p name=%p callback_fn=%p as=%p\n",
card, name, callback_fn, as);
goto fail;
@@ -442,12 +408,12 @@ SW *glue (AUD_open_, TYPE) (
ldebug ("open %s, freq %d, nchannels %d, fmt %d\n",
name, as->freq, as->nchannels, as->fmt);
if (audio_bug(__func__, audio_validate_settings(as))) {
if (audio_bug (AUDIO_FUNC, audio_validate_settings (as))) {
audio_print_settings (as);
goto fail;
}
if (audio_bug(__func__, !s->drv)) {
if (audio_bug (AUDIO_FUNC, !s->drv)) {
dolog ("Can not open `%s' (no host audio driver)\n", name);
goto fail;
}
@@ -456,7 +422,7 @@ SW *glue (AUD_open_, TYPE) (
return sw;
}
if (!pdo->fixed_settings && sw) {
if (!glue (conf.fixed_, TYPE).enabled && sw) {
glue (AUD_close_, TYPE) (card, sw);
sw = NULL;
}

View File

@@ -24,20 +24,20 @@ int waveformat_from_audio_settings (WAVEFORMATEX *wfx,
wfx->cbSize = 0;
switch (as->fmt) {
case AUDIO_FORMAT_S8:
case AUDIO_FORMAT_U8:
case AUD_FMT_S8:
case AUD_FMT_U8:
wfx->wBitsPerSample = 8;
break;
case AUDIO_FORMAT_S16:
case AUDIO_FORMAT_U16:
case AUD_FMT_S16:
case AUD_FMT_U16:
wfx->wBitsPerSample = 16;
wfx->nAvgBytesPerSec <<= 1;
wfx->nBlockAlign <<= 1;
break;
case AUDIO_FORMAT_S32:
case AUDIO_FORMAT_U32:
case AUD_FMT_S32:
case AUD_FMT_U32:
wfx->wBitsPerSample = 32;
wfx->nAvgBytesPerSec <<= 2;
wfx->nBlockAlign <<= 2;
@@ -85,15 +85,15 @@ int waveformat_to_audio_settings (WAVEFORMATEX *wfx,
switch (wfx->wBitsPerSample) {
case 8:
as->fmt = AUDIO_FORMAT_U8;
as->fmt = AUD_FMT_U8;
break;
case 16:
as->fmt = AUDIO_FORMAT_S16;
as->fmt = AUD_FMT_S16;
break;
case 32:
as->fmt = AUDIO_FORMAT_S32;
as->fmt = AUD_FMT_S32;
break;
default:

View File

@@ -36,6 +36,11 @@
#define MAC_OS_X_VERSION_10_6 1060
#endif
typedef struct {
int buffer_frames;
int nbuffers;
} CoreaudioConf;
typedef struct coreaudioVoiceOut {
HWVoiceOut hw;
pthread_mutex_t mutex;
@@ -502,9 +507,7 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
int err;
const char *typ = "playback";
AudioValueRange frameRange;
Audiodev *dev = drv_opaque;
AudiodevCoreaudioPerDirectionOptions *cpdo = dev->u.coreaudio.out;
int frames;
CoreaudioConf *conf = drv_opaque;
/* create mutex */
err = pthread_mutex_init(&core->mutex, NULL);
@@ -535,17 +538,16 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
return -1;
}
frames = audio_buffer_frames(
qapi_AudiodevCoreaudioPerDirectionOptions_base(cpdo), as, 11610);
if (frameRange.mMinimum > frames) {
if (frameRange.mMinimum > conf->buffer_frames) {
core->audioDevicePropertyBufferFrameSize = (UInt32) frameRange.mMinimum;
dolog ("warning: Upsizing Buffer Frames to %f\n", frameRange.mMinimum);
} else if (frameRange.mMaximum < frames) {
}
else if (frameRange.mMaximum < conf->buffer_frames) {
core->audioDevicePropertyBufferFrameSize = (UInt32) frameRange.mMaximum;
dolog ("warning: Downsizing Buffer Frames to %f\n", frameRange.mMaximum);
}
else {
core->audioDevicePropertyBufferFrameSize = frames;
core->audioDevicePropertyBufferFrameSize = conf->buffer_frames;
}
/* set Buffer Frame Size */
@@ -566,8 +568,7 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
"Could not get device buffer frame size\n");
return -1;
}
hw->samples = (cpdo->has_buffer_count ? cpdo->buffer_count : 4) *
core->audioDevicePropertyBufferFrameSize;
hw->samples = conf->nbuffers * core->audioDevicePropertyBufferFrameSize;
/* get StreamFormat */
status = coreaudio_get_streamformat(core->outputDeviceID,
@@ -679,15 +680,40 @@ static int coreaudio_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static void *coreaudio_audio_init(Audiodev *dev)
static CoreaudioConf glob_conf = {
.buffer_frames = 512,
.nbuffers = 4,
};
static void *coreaudio_audio_init (void)
{
return dev;
CoreaudioConf *conf = g_malloc(sizeof(CoreaudioConf));
*conf = glob_conf;
return conf;
}
static void coreaudio_audio_fini (void *opaque)
{
g_free(opaque);
}
static struct audio_option coreaudio_options[] = {
{
.name = "BUFFER_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.buffer_frames,
.descr = "Size of the buffer in frames"
},
{
.name = "BUFFER_COUNT",
.tag = AUD_OPT_INT,
.valp = &glob_conf.nbuffers,
.descr = "Number of buffers"
},
{ /* End of list */ }
};
static struct audio_pcm_ops coreaudio_pcm_ops = {
.init_out = coreaudio_init_out,
.fini_out = coreaudio_fini_out,
@@ -696,9 +722,10 @@ static struct audio_pcm_ops coreaudio_pcm_ops = {
.ctl_out = coreaudio_ctl_out
};
static struct audio_driver coreaudio_audio_driver = {
struct audio_driver coreaudio_audio_driver = {
.name = "coreaudio",
.descr = "CoreAudio http://developer.apple.com/audio/coreaudio.html",
.options = coreaudio_options,
.init = coreaudio_audio_init,
.fini = coreaudio_audio_fini,
.pcm_ops = &coreaudio_pcm_ops,
@@ -708,9 +735,3 @@ static struct audio_driver coreaudio_audio_driver = {
.voice_size_out = sizeof (coreaudioVoiceOut),
.voice_size_in = 0
};
static void register_audio_coreaudio(void)
{
audio_driver_register(&coreaudio_audio_driver);
}
type_init(register_audio_coreaudio);

View File

@@ -167,18 +167,17 @@ static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
dsound *s = drv_opaque;
WAVEFORMATEX wfx;
struct audsettings obt_as;
DSoundConf *conf = &s->conf;
#ifdef DSBTYPE_IN
const char *typ = "ADC";
DSoundVoiceIn *ds = (DSoundVoiceIn *) hw;
DSCBUFFERDESC bd;
DSCBCAPS bc;
AudiodevPerDirectionOptions *pdo = s->dev->u.dsound.in;
#else
const char *typ = "DAC";
DSoundVoiceOut *ds = (DSoundVoiceOut *) hw;
DSBUFFERDESC bd;
DSBCAPS bc;
AudiodevPerDirectionOptions *pdo = s->dev->u.dsound.out;
#endif
if (!s->FIELD2) {
@@ -194,8 +193,8 @@ static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
memset (&bd, 0, sizeof (bd));
bd.dwSize = sizeof (bd);
bd.lpwfxFormat = &wfx;
bd.dwBufferBytes = audio_buffer_bytes(pdo, as, 92880);
#ifdef DSBTYPE_IN
bd.dwBufferBytes = conf->bufsize_in;
hr = IDirectSoundCapture_CreateCaptureBuffer (
s->dsound_capture,
&bd,
@@ -204,6 +203,7 @@ static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
);
#else
bd.dwFlags = DSBCAPS_STICKYFOCUS | DSBCAPS_GETCURRENTPOSITION2;
bd.dwBufferBytes = conf->bufsize_out;
hr = IDirectSound_CreateSoundBuffer (
s->dsound,
&bd,

View File

@@ -32,7 +32,6 @@
#define AUDIO_CAP "dsound"
#include "audio_int.h"
#include "qemu/host-utils.h"
#include <windows.h>
#include <mmsystem.h>
@@ -43,11 +42,17 @@
/* #define DEBUG_DSOUND */
typedef struct {
int bufsize_in;
int bufsize_out;
int latency_millis;
} DSoundConf;
typedef struct {
LPDIRECTSOUND dsound;
LPDIRECTSOUNDCAPTURE dsound_capture;
struct audsettings settings;
Audiodev *dev;
DSoundConf conf;
} dsound;
typedef struct {
@@ -243,9 +248,9 @@ static void GCC_FMT_ATTR (3, 4) dsound_logerr2 (
dsound_log_hresult (hr);
}
static uint64_t usecs_to_bytes(struct audio_pcm_info *info, uint32_t usecs)
static DWORD millis_to_bytes (struct audio_pcm_info *info, DWORD millis)
{
return muldiv64(usecs, info->bytes_per_second, 1000000);
return (millis * info->bytes_per_second) / 1000;
}
#ifdef DEBUG_DSOUND
@@ -473,7 +478,7 @@ static int dsound_run_out (HWVoiceOut *hw, int live)
LPVOID p1, p2;
int bufsize;
dsound *s = ds->s;
AudiodevDsoundOptions *dso = &s->dev->u.dsound;
DSoundConf *conf = &s->conf;
if (!dsb) {
dolog ("Attempt to run empty with playback buffer\n");
@@ -496,14 +501,14 @@ static int dsound_run_out (HWVoiceOut *hw, int live)
len = live << hwshift;
if (ds->first_time) {
if (dso->latency) {
if (conf->latency_millis) {
DWORD cur_blat;
cur_blat = audio_ring_dist (wpos, ppos, bufsize);
ds->first_time = 0;
old_pos = wpos;
old_pos +=
usecs_to_bytes(&hw->info, dso->latency) - cur_blat;
millis_to_bytes (&hw->info, conf->latency_millis) - cur_blat;
old_pos %= bufsize;
old_pos &= ~hw->info.align;
}
@@ -538,7 +543,7 @@ static int dsound_run_out (HWVoiceOut *hw, int live)
}
}
if (audio_bug(__func__, len < 0 || len > bufsize)) {
if (audio_bug (AUDIO_FUNC, len < 0 || len > bufsize)) {
dolog ("len=%d bufsize=%d old_pos=%ld ppos=%ld\n",
len, bufsize, old_pos, ppos);
return 0;
@@ -742,6 +747,12 @@ static int dsound_run_in (HWVoiceIn *hw)
return decr;
}
static DSoundConf glob_conf = {
.bufsize_in = 16384,
.bufsize_out = 16384,
.latency_millis = 10
};
static void dsound_audio_fini (void *opaque)
{
HRESULT hr;
@@ -772,22 +783,13 @@ static void dsound_audio_fini (void *opaque)
g_free(s);
}
static void *dsound_audio_init(Audiodev *dev)
static void *dsound_audio_init (void)
{
int err;
HRESULT hr;
dsound *s = g_malloc0(sizeof(dsound));
AudiodevDsoundOptions *dso;
assert(dev->driver == AUDIODEV_DRIVER_DSOUND);
s->dev = dev;
dso = &dev->u.dsound;
if (!dso->has_latency) {
dso->has_latency = true;
dso->latency = 10000; /* 10 ms */
}
s->conf = glob_conf;
hr = CoInitialize (NULL);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not initialize COM\n");
@@ -852,6 +854,28 @@ static void *dsound_audio_init(Audiodev *dev)
return s;
}
static struct audio_option dsound_options[] = {
{
.name = "LATENCY_MILLIS",
.tag = AUD_OPT_INT,
.valp = &glob_conf.latency_millis,
.descr = "(undocumented)"
},
{
.name = "BUFSIZE_OUT",
.tag = AUD_OPT_INT,
.valp = &glob_conf.bufsize_out,
.descr = "(undocumented)"
},
{
.name = "BUFSIZE_IN",
.tag = AUD_OPT_INT,
.valp = &glob_conf.bufsize_in,
.descr = "(undocumented)"
},
{ /* End of list */ }
};
static struct audio_pcm_ops dsound_pcm_ops = {
.init_out = dsound_init_out,
.fini_out = dsound_fini_out,
@@ -866,9 +890,10 @@ static struct audio_pcm_ops dsound_pcm_ops = {
.ctl_in = dsound_ctl_in
};
static struct audio_driver dsound_audio_driver = {
struct audio_driver dsound_audio_driver = {
.name = "dsound",
.descr = "DirectSound http://wikipedia.org/wiki/DirectSound",
.options = dsound_options,
.init = dsound_audio_init,
.fini = dsound_audio_fini,
.pcm_ops = &dsound_pcm_ops,
@@ -878,9 +903,3 @@ static struct audio_driver dsound_audio_driver = {
.voice_size_out = sizeof (DSoundVoiceOut),
.voice_size_in = sizeof (DSoundVoiceIn)
};
static void register_audio_dsound(void)
{
audio_driver_register(&dsound_audio_driver);
}
type_init(register_audio_dsound);

View File

@@ -25,7 +25,6 @@
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "qemu/bswap.h"
#include "qemu/error-report.h"
#include "audio.h"
#define AUDIO_CAP "mixeng"
@@ -268,37 +267,6 @@ f_sample *mixeng_clip[2][2][2][3] = {
}
};
void audio_sample_to_uint64(void *samples, int pos,
uint64_t *left, uint64_t *right)
{
struct st_sample *sample = samples;
sample += pos;
#ifdef FLOAT_MIXENG
error_report(
"Coreaudio and floating point samples are not supported by replay yet");
abort();
#else
*left = sample->l;
*right = sample->r;
#endif
}
void audio_sample_from_uint64(void *samples, int pos,
uint64_t left, uint64_t right)
{
struct st_sample *sample = samples;
sample += pos;
#ifdef FLOAT_MIXENG
error_report(
"Coreaudio and floating point samples are not supported by replay yet");
abort();
#else
sample->l = left;
sample->r = right;
#endif
}
/*
* August 21, 1998
* Copyright 1998 Fabrice Bellard.
@@ -344,7 +312,7 @@ struct rate {
*/
void *st_rate_start (int inrate, int outrate)
{
struct rate *rate = audio_calloc(__func__, 1, sizeof(*rate));
struct rate *rate = audio_calloc (AUDIO_FUNC, 1, sizeof (*rate));
if (!rate) {
dolog ("Could not allocate resampler (%zu bytes)\n", sizeof (*rate));

View File

@@ -136,7 +136,7 @@ static int no_ctl_in (HWVoiceIn *hw, int cmd, ...)
return 0;
}
static void *no_audio_init(Audiodev *dev)
static void *no_audio_init (void)
{
return &no_audio_init;
}
@@ -160,9 +160,10 @@ static struct audio_pcm_ops no_pcm_ops = {
.ctl_in = no_ctl_in
};
static struct audio_driver no_audio_driver = {
struct audio_driver no_audio_driver = {
.name = "none",
.descr = "Timer based audio emulation",
.options = NULL,
.init = no_audio_init,
.fini = no_audio_fini,
.pcm_ops = &no_pcm_ops,
@@ -172,9 +173,3 @@ static struct audio_driver no_audio_driver = {
.voice_size_out = sizeof (NoVoiceOut),
.voice_size_in = sizeof (NoVoiceIn)
};
static void register_audio_none(void)
{
audio_driver_register(&no_audio_driver);
}
type_init(register_audio_none);

View File

@@ -37,6 +37,16 @@
#define USE_DSP_POLICY
#endif
typedef struct OSSConf {
int try_mmap;
int nfrags;
int fragsize;
const char *devpath_out;
const char *devpath_in;
int exclusive;
int policy;
} OSSConf;
typedef struct OSSVoiceOut {
HWVoiceOut hw;
void *pcm_buf;
@@ -46,7 +56,7 @@ typedef struct OSSVoiceOut {
int fragsize;
int mmapped;
int pending;
Audiodev *dev;
OSSConf *conf;
} OSSVoiceOut;
typedef struct OSSVoiceIn {
@@ -55,12 +65,12 @@ typedef struct OSSVoiceIn {
int fd;
int nfrags;
int fragsize;
Audiodev *dev;
OSSConf *conf;
} OSSVoiceIn;
struct oss_params {
int freq;
int fmt;
audfmt_e fmt;
int nchannels;
int nfrags;
int fragsize;
@@ -138,16 +148,16 @@ static int oss_write (SWVoiceOut *sw, void *buf, int len)
return audio_pcm_sw_write (sw, buf, len);
}
static int aud_to_ossfmt (AudioFormat fmt, int endianness)
static int aud_to_ossfmt (audfmt_e fmt, int endianness)
{
switch (fmt) {
case AUDIO_FORMAT_S8:
case AUD_FMT_S8:
return AFMT_S8;
case AUDIO_FORMAT_U8:
case AUD_FMT_U8:
return AFMT_U8;
case AUDIO_FORMAT_S16:
case AUD_FMT_S16:
if (endianness) {
return AFMT_S16_BE;
}
@@ -155,7 +165,7 @@ static int aud_to_ossfmt (AudioFormat fmt, int endianness)
return AFMT_S16_LE;
}
case AUDIO_FORMAT_U16:
case AUD_FMT_U16:
if (endianness) {
return AFMT_U16_BE;
}
@@ -172,37 +182,37 @@ static int aud_to_ossfmt (AudioFormat fmt, int endianness)
}
}
static int oss_to_audfmt (int ossfmt, AudioFormat *fmt, int *endianness)
static int oss_to_audfmt (int ossfmt, audfmt_e *fmt, int *endianness)
{
switch (ossfmt) {
case AFMT_S8:
*endianness = 0;
*fmt = AUDIO_FORMAT_S8;
*fmt = AUD_FMT_S8;
break;
case AFMT_U8:
*endianness = 0;
*fmt = AUDIO_FORMAT_U8;
*fmt = AUD_FMT_U8;
break;
case AFMT_S16_LE:
*endianness = 0;
*fmt = AUDIO_FORMAT_S16;
*fmt = AUD_FMT_S16;
break;
case AFMT_U16_LE:
*endianness = 0;
*fmt = AUDIO_FORMAT_U16;
*fmt = AUD_FMT_U16;
break;
case AFMT_S16_BE:
*endianness = 1;
*fmt = AUDIO_FORMAT_S16;
*fmt = AUD_FMT_S16;
break;
case AFMT_U16_BE:
*endianness = 1;
*fmt = AUDIO_FORMAT_U16;
*fmt = AUD_FMT_U16;
break;
default:
@@ -252,25 +262,19 @@ static int oss_get_version (int fd, int *version, const char *typ)
}
#endif
static int oss_open(int in, struct oss_params *req, audsettings *as,
struct oss_params *obt, int *pfd, Audiodev *dev)
static int oss_open (int in, struct oss_params *req,
struct oss_params *obt, int *pfd, OSSConf* conf)
{
AudiodevOssOptions *oopts = &dev->u.oss;
AudiodevOssPerDirectionOptions *opdo = in ? oopts->in : oopts->out;
int fd;
int oflags = (oopts->has_exclusive && oopts->exclusive) ? O_EXCL : 0;
int oflags = conf->exclusive ? O_EXCL : 0;
audio_buf_info abinfo;
int fmt, freq, nchannels;
int setfragment = 1;
const char *dspname = opdo->has_dev ? opdo->dev : "/dev/dsp";
const char *dspname = in ? conf->devpath_in : conf->devpath_out;
const char *typ = in ? "ADC" : "DAC";
#ifdef USE_DSP_POLICY
int policy = oopts->has_dsp_policy ? oopts->dsp_policy : 5;
#endif
/* Kludge needed to have working mmap on Linux */
oflags |= (oopts->has_try_mmap && oopts->try_mmap) ?
O_RDWR : (in ? O_RDONLY : O_WRONLY);
oflags |= conf->try_mmap ? O_RDWR : (in ? O_RDONLY : O_WRONLY);
fd = open (dspname, oflags | O_NONBLOCK);
if (-1 == fd) {
@@ -281,9 +285,6 @@ static int oss_open(int in, struct oss_params *req, audsettings *as,
freq = req->freq;
nchannels = req->nchannels;
fmt = req->fmt;
req->nfrags = opdo->has_buffer_count ? opdo->buffer_count : 4;
req->fragsize = audio_buffer_bytes(
qapi_AudiodevOssPerDirectionOptions_base(opdo), as, 23220);
if (ioctl (fd, SNDCTL_DSP_SAMPLESIZE, &fmt)) {
oss_logerr2 (errno, typ, "Failed to set sample size %d\n", req->fmt);
@@ -307,18 +308,18 @@ static int oss_open(int in, struct oss_params *req, audsettings *as,
}
#ifdef USE_DSP_POLICY
if (policy >= 0) {
if (conf->policy >= 0) {
int version;
if (!oss_get_version (fd, &version, typ)) {
trace_oss_version(version);
if (version >= 0x040000) {
int policy2 = policy;
if (ioctl(fd, SNDCTL_DSP_POLICY, &policy2)) {
int policy = conf->policy;
if (ioctl (fd, SNDCTL_DSP_POLICY, &policy)) {
oss_logerr2 (errno, typ,
"Failed to set timing policy to %d\n",
policy);
conf->policy);
goto err;
}
setfragment = 0;
@@ -499,18 +500,19 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
int endianness;
int err;
int fd;
AudioFormat effective_fmt;
audfmt_e effective_fmt;
struct audsettings obt_as;
Audiodev *dev = drv_opaque;
AudiodevOssOptions *oopts = &dev->u.oss;
OSSConf *conf = drv_opaque;
oss->fd = -1;
req.fmt = aud_to_ossfmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.fragsize = conf->fragsize;
req.nfrags = conf->nfrags;
if (oss_open(0, &req, as, &obt, &fd, dev)) {
if (oss_open (0, &req, &obt, &fd, conf)) {
return -1;
}
@@ -537,7 +539,7 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
hw->samples = (obt.nfrags * obt.fragsize) >> hw->info.shift;
oss->mmapped = 0;
if (oopts->has_try_mmap && oopts->try_mmap) {
if (conf->try_mmap) {
oss->pcm_buf = mmap (
NULL,
hw->samples << hw->info.shift,
@@ -580,9 +582,11 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
}
if (!oss->mmapped) {
oss->pcm_buf = audio_calloc(__func__,
hw->samples,
1 << hw->info.shift);
oss->pcm_buf = audio_calloc (
AUDIO_FUNC,
hw->samples,
1 << hw->info.shift
);
if (!oss->pcm_buf) {
dolog (
"Could not allocate DAC buffer (%d samples, each %d bytes)\n",
@@ -595,7 +599,7 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
}
oss->fd = fd;
oss->dev = dev;
oss->conf = conf;
return 0;
}
@@ -603,12 +607,16 @@ static int oss_ctl_out (HWVoiceOut *hw, int cmd, ...)
{
int trig;
OSSVoiceOut *oss = (OSSVoiceOut *) hw;
AudiodevOssPerDirectionOptions *opdo = oss->dev->u.oss.out;
switch (cmd) {
case VOICE_ENABLE:
{
bool poll_mode = opdo->try_poll;
va_list ap;
int poll_mode;
va_start (ap, cmd);
poll_mode = va_arg (ap, int);
va_end (ap);
ldebug ("enabling voice\n");
if (poll_mode) {
@@ -661,16 +669,18 @@ static int oss_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
int endianness;
int err;
int fd;
AudioFormat effective_fmt;
audfmt_e effective_fmt;
struct audsettings obt_as;
Audiodev *dev = drv_opaque;
OSSConf *conf = drv_opaque;
oss->fd = -1;
req.fmt = aud_to_ossfmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
if (oss_open(1, &req, as, &obt, &fd, dev)) {
req.fragsize = conf->fragsize;
req.nfrags = conf->nfrags;
if (oss_open (1, &req, &obt, &fd, conf)) {
return -1;
}
@@ -695,7 +705,7 @@ static int oss_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
}
hw->samples = (obt.nfrags * obt.fragsize) >> hw->info.shift;
oss->pcm_buf = audio_calloc(__func__, hw->samples, 1 << hw->info.shift);
oss->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
if (!oss->pcm_buf) {
dolog ("Could not allocate ADC buffer (%d samples, each %d bytes)\n",
hw->samples, 1 << hw->info.shift);
@@ -704,7 +714,7 @@ static int oss_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
}
oss->fd = fd;
oss->dev = dev;
oss->conf = conf;
return 0;
}
@@ -795,12 +805,16 @@ static int oss_read (SWVoiceIn *sw, void *buf, int size)
static int oss_ctl_in (HWVoiceIn *hw, int cmd, ...)
{
OSSVoiceIn *oss = (OSSVoiceIn *) hw;
AudiodevOssPerDirectionOptions *opdo = oss->dev->u.oss.out;
switch (cmd) {
case VOICE_ENABLE:
{
bool poll_mode = opdo->try_poll;
va_list ap;
int poll_mode;
va_start (ap, cmd);
poll_mode = va_arg (ap, int);
va_end (ap);
if (poll_mode) {
oss_poll_in (hw);
@@ -820,36 +834,82 @@ static int oss_ctl_in (HWVoiceIn *hw, int cmd, ...)
return 0;
}
static void oss_init_per_direction(AudiodevOssPerDirectionOptions *opdo)
static OSSConf glob_conf = {
.try_mmap = 0,
.nfrags = 4,
.fragsize = 4096,
.devpath_out = "/dev/dsp",
.devpath_in = "/dev/dsp",
.exclusive = 0,
.policy = 5
};
static void *oss_audio_init (void)
{
if (!opdo->has_try_poll) {
opdo->try_poll = true;
opdo->has_try_poll = true;
}
}
OSSConf *conf = g_malloc(sizeof(OSSConf));
*conf = glob_conf;
static void *oss_audio_init(Audiodev *dev)
{
AudiodevOssOptions *oopts;
assert(dev->driver == AUDIODEV_DRIVER_OSS);
oopts = &dev->u.oss;
oss_init_per_direction(oopts->in);
oss_init_per_direction(oopts->out);
if (access(oopts->in->has_dev ? oopts->in->dev : "/dev/dsp",
R_OK | W_OK) < 0 ||
access(oopts->out->has_dev ? oopts->out->dev : "/dev/dsp",
R_OK | W_OK) < 0) {
if (access(conf->devpath_in, R_OK | W_OK) < 0 ||
access(conf->devpath_out, R_OK | W_OK) < 0) {
g_free(conf);
return NULL;
}
return dev;
return conf;
}
static void oss_audio_fini (void *opaque)
{
g_free(opaque);
}
static struct audio_option oss_options[] = {
{
.name = "FRAGSIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.fragsize,
.descr = "Fragment size in bytes"
},
{
.name = "NFRAGS",
.tag = AUD_OPT_INT,
.valp = &glob_conf.nfrags,
.descr = "Number of fragments"
},
{
.name = "MMAP",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.try_mmap,
.descr = "Try using memory mapped access"
},
{
.name = "DAC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.devpath_out,
.descr = "Path to DAC device"
},
{
.name = "ADC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.devpath_in,
.descr = "Path to ADC device"
},
{
.name = "EXCLUSIVE",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.exclusive,
.descr = "Open device in exclusive mode (vmix won't work)"
},
#ifdef USE_DSP_POLICY
{
.name = "POLICY",
.tag = AUD_OPT_INT,
.valp = &glob_conf.policy,
.descr = "Set the timing policy of the device, -1 to use fragment mode",
},
#endif
{ /* End of list */ }
};
static struct audio_pcm_ops oss_pcm_ops = {
.init_out = oss_init_out,
.fini_out = oss_fini_out,
@@ -864,9 +924,10 @@ static struct audio_pcm_ops oss_pcm_ops = {
.ctl_in = oss_ctl_in
};
static struct audio_driver oss_audio_driver = {
struct audio_driver oss_audio_driver = {
.name = "oss",
.descr = "OSS http://www.opensound.com",
.options = oss_options,
.init = oss_audio_init,
.fini = oss_audio_fini,
.pcm_ops = &oss_pcm_ops,
@@ -876,9 +937,3 @@ static struct audio_driver oss_audio_driver = {
.voice_size_out = sizeof (OSSVoiceOut),
.voice_size_in = sizeof (OSSVoiceIn)
};
static void register_audio_oss(void)
{
audio_driver_register(&oss_audio_driver);
}
type_init(register_audio_oss);

View File

@@ -2,7 +2,6 @@
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "audio.h"
#include "qapi/opts-visitor.h"
#include <pulse/pulseaudio.h>
@@ -11,7 +10,14 @@
#include "audio_pt_int.h"
typedef struct {
Audiodev *dev;
int samples;
char *server;
char *sink;
char *source;
} PAConf;
typedef struct {
PAConf conf;
pa_threaded_mainloop *mainloop;
pa_context *context;
} paaudio;
@@ -26,7 +32,6 @@ typedef struct {
void *pcm_buf;
struct audio_pt pt;
paaudio *g;
int samples;
} PAVoiceOut;
typedef struct {
@@ -41,7 +46,6 @@ typedef struct {
const void *read_data;
size_t read_index, read_length;
paaudio *g;
int samples;
} PAVoiceIn;
static void qpa_audio_fini(void *opaque);
@@ -85,7 +89,7 @@ static inline int PA_STREAM_IS_GOOD(pa_stream_state_t x)
} \
goto label; \
} \
} while (0)
} while (0);
#define CHECK_DEAD_GOTO(c, stream, rerror, label) \
do { \
@@ -103,7 +107,7 @@ static inline int PA_STREAM_IS_GOOD(pa_stream_state_t x)
} \
goto label; \
} \
} while (0)
} while (0);
static int qpa_simple_read (PAVoiceIn *p, void *data, size_t length, int *rerror)
{
@@ -202,7 +206,7 @@ static void *qpa_thread_out (void *arg)
PAVoiceOut *pa = arg;
HWVoiceOut *hw = &pa->hw;
if (audio_pt_lock(&pa->pt, __func__)) {
if (audio_pt_lock (&pa->pt, AUDIO_FUNC)) {
return NULL;
}
@@ -218,15 +222,15 @@ static void *qpa_thread_out (void *arg)
break;
}
if (audio_pt_wait(&pa->pt, __func__)) {
if (audio_pt_wait (&pa->pt, AUDIO_FUNC)) {
goto exit;
}
}
decr = to_mix = audio_MIN(pa->live, pa->samples >> 5);
decr = to_mix = audio_MIN (pa->live, pa->g->conf.samples >> 2);
rpos = pa->rpos;
if (audio_pt_unlock(&pa->pt, __func__)) {
if (audio_pt_unlock (&pa->pt, AUDIO_FUNC)) {
return NULL;
}
@@ -247,7 +251,7 @@ static void *qpa_thread_out (void *arg)
to_mix -= chunk;
}
if (audio_pt_lock(&pa->pt, __func__)) {
if (audio_pt_lock (&pa->pt, AUDIO_FUNC)) {
return NULL;
}
@@ -257,7 +261,7 @@ static void *qpa_thread_out (void *arg)
}
exit:
audio_pt_unlock(&pa->pt, __func__);
audio_pt_unlock (&pa->pt, AUDIO_FUNC);
return NULL;
}
@@ -266,7 +270,7 @@ static int qpa_run_out (HWVoiceOut *hw, int live)
int decr;
PAVoiceOut *pa = (PAVoiceOut *) hw;
if (audio_pt_lock(&pa->pt, __func__)) {
if (audio_pt_lock (&pa->pt, AUDIO_FUNC)) {
return 0;
}
@@ -275,10 +279,10 @@ static int qpa_run_out (HWVoiceOut *hw, int live)
pa->live = live - decr;
hw->rpos = pa->rpos;
if (pa->live > 0) {
audio_pt_unlock_and_signal(&pa->pt, __func__);
audio_pt_unlock_and_signal (&pa->pt, AUDIO_FUNC);
}
else {
audio_pt_unlock(&pa->pt, __func__);
audio_pt_unlock (&pa->pt, AUDIO_FUNC);
}
return decr;
}
@@ -294,7 +298,7 @@ static void *qpa_thread_in (void *arg)
PAVoiceIn *pa = arg;
HWVoiceIn *hw = &pa->hw;
if (audio_pt_lock(&pa->pt, __func__)) {
if (audio_pt_lock (&pa->pt, AUDIO_FUNC)) {
return NULL;
}
@@ -310,15 +314,15 @@ static void *qpa_thread_in (void *arg)
break;
}
if (audio_pt_wait(&pa->pt, __func__)) {
if (audio_pt_wait (&pa->pt, AUDIO_FUNC)) {
goto exit;
}
}
incr = to_grab = audio_MIN(pa->dead, pa->samples >> 5);
incr = to_grab = audio_MIN (pa->dead, pa->g->conf.samples >> 2);
wpos = pa->wpos;
if (audio_pt_unlock(&pa->pt, __func__)) {
if (audio_pt_unlock (&pa->pt, AUDIO_FUNC)) {
return NULL;
}
@@ -338,7 +342,7 @@ static void *qpa_thread_in (void *arg)
to_grab -= chunk;
}
if (audio_pt_lock(&pa->pt, __func__)) {
if (audio_pt_lock (&pa->pt, AUDIO_FUNC)) {
return NULL;
}
@@ -348,7 +352,7 @@ static void *qpa_thread_in (void *arg)
}
exit:
audio_pt_unlock(&pa->pt, __func__);
audio_pt_unlock (&pa->pt, AUDIO_FUNC);
return NULL;
}
@@ -357,7 +361,7 @@ static int qpa_run_in (HWVoiceIn *hw)
int live, incr, dead;
PAVoiceIn *pa = (PAVoiceIn *) hw;
if (audio_pt_lock(&pa->pt, __func__)) {
if (audio_pt_lock (&pa->pt, AUDIO_FUNC)) {
return 0;
}
@@ -368,10 +372,10 @@ static int qpa_run_in (HWVoiceIn *hw)
pa->dead = dead - incr;
hw->wpos = pa->wpos;
if (pa->dead > 0) {
audio_pt_unlock_and_signal(&pa->pt, __func__);
audio_pt_unlock_and_signal (&pa->pt, AUDIO_FUNC);
}
else {
audio_pt_unlock(&pa->pt, __func__);
audio_pt_unlock (&pa->pt, AUDIO_FUNC);
}
return incr;
}
@@ -381,21 +385,21 @@ static int qpa_read (SWVoiceIn *sw, void *buf, int len)
return audio_pcm_sw_read (sw, buf, len);
}
static pa_sample_format_t audfmt_to_pa (AudioFormat afmt, int endianness)
static pa_sample_format_t audfmt_to_pa (audfmt_e afmt, int endianness)
{
int format;
switch (afmt) {
case AUDIO_FORMAT_S8:
case AUDIO_FORMAT_U8:
case AUD_FMT_S8:
case AUD_FMT_U8:
format = PA_SAMPLE_U8;
break;
case AUDIO_FORMAT_S16:
case AUDIO_FORMAT_U16:
case AUD_FMT_S16:
case AUD_FMT_U16:
format = endianness ? PA_SAMPLE_S16BE : PA_SAMPLE_S16LE;
break;
case AUDIO_FORMAT_S32:
case AUDIO_FORMAT_U32:
case AUD_FMT_S32:
case AUD_FMT_U32:
format = endianness ? PA_SAMPLE_S32BE : PA_SAMPLE_S32LE;
break;
default:
@@ -406,26 +410,26 @@ static pa_sample_format_t audfmt_to_pa (AudioFormat afmt, int endianness)
return format;
}
static AudioFormat pa_to_audfmt (pa_sample_format_t fmt, int *endianness)
static audfmt_e pa_to_audfmt (pa_sample_format_t fmt, int *endianness)
{
switch (fmt) {
case PA_SAMPLE_U8:
return AUDIO_FORMAT_U8;
return AUD_FMT_U8;
case PA_SAMPLE_S16BE:
*endianness = 1;
return AUDIO_FORMAT_S16;
return AUD_FMT_S16;
case PA_SAMPLE_S16LE:
*endianness = 0;
return AUDIO_FORMAT_S16;
return AUD_FMT_S16;
case PA_SAMPLE_S32BE:
*endianness = 1;
return AUDIO_FORMAT_S32;
return AUD_FMT_S32;
case PA_SAMPLE_S32LE:
*endianness = 0;
return AUDIO_FORMAT_S32;
return AUD_FMT_S32;
default:
dolog ("Internal logic error: Bad pa_sample_format %d\n", fmt);
return AUDIO_FORMAT_U8;
return AUD_FMT_U8;
}
}
@@ -542,15 +546,17 @@ static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
struct audsettings obt_as = *as;
PAVoiceOut *pa = (PAVoiceOut *) hw;
paaudio *g = pa->g = drv_opaque;
AudiodevPaOptions *popts = &g->dev->u.pa;
AudiodevPaPerDirectionOptions *ppdo = popts->out;
ss.format = audfmt_to_pa (as->fmt, as->endianness);
ss.channels = as->nchannels;
ss.rate = as->freq;
ba.tlength = pa_usec_to_bytes(ppdo->latency, &ss);
ba.minreq = -1;
/*
* qemu audio tick runs at 100 Hz (by default), so processing
* data chunks worth 10 ms of sound should be a good fit.
*/
ba.tlength = pa_usec_to_bytes (10 * 1000, &ss);
ba.minreq = pa_usec_to_bytes (5 * 1000, &ss);
ba.maxlength = -1;
ba.prebuf = -1;
@@ -560,7 +566,7 @@ static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
g,
"qemu",
PA_STREAM_PLAYBACK,
ppdo->has_name ? ppdo->name : NULL,
g->conf.sink,
&ss,
NULL, /* channel map */
&ba, /* buffering attributes */
@@ -572,10 +578,8 @@ static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
}
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = pa->samples = audio_buffer_samples(
qapi_AudiodevPaPerDirectionOptions_base(ppdo),
&obt_as, ppdo->buffer_length);
pa->pcm_buf = audio_calloc(__func__, hw->samples, 1 << hw->info.shift);
hw->samples = g->conf.samples;
pa->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
pa->rpos = hw->rpos;
if (!pa->pcm_buf) {
dolog ("Could not allocate buffer (%d bytes)\n",
@@ -583,7 +587,7 @@ static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
goto fail2;
}
if (audio_pt_init(&pa->pt, qpa_thread_out, hw, AUDIO_CAP, __func__)) {
if (audio_pt_init (&pa->pt, qpa_thread_out, hw, AUDIO_CAP, AUDIO_FUNC)) {
goto fail3;
}
@@ -605,32 +609,24 @@ static int qpa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
{
int error;
pa_sample_spec ss;
pa_buffer_attr ba;
struct audsettings obt_as = *as;
PAVoiceIn *pa = (PAVoiceIn *) hw;
paaudio *g = pa->g = drv_opaque;
AudiodevPaOptions *popts = &g->dev->u.pa;
AudiodevPaPerDirectionOptions *ppdo = popts->in;
ss.format = audfmt_to_pa (as->fmt, as->endianness);
ss.channels = as->nchannels;
ss.rate = as->freq;
ba.fragsize = pa_usec_to_bytes(ppdo->latency, &ss);
ba.maxlength = -1;
ba.minreq = -1;
ba.prebuf = -1;
obt_as.fmt = pa_to_audfmt (ss.format, &obt_as.endianness);
pa->stream = qpa_simple_new (
g,
"qemu",
PA_STREAM_RECORD,
ppdo->has_name ? ppdo->name : NULL,
g->conf.source,
&ss,
NULL, /* channel map */
&ba, /* buffering attributes */
NULL, /* buffering attributes */
&error
);
if (!pa->stream) {
@@ -639,10 +635,8 @@ static int qpa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
}
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = pa->samples = audio_buffer_samples(
qapi_AudiodevPaPerDirectionOptions_base(ppdo),
&obt_as, ppdo->buffer_length);
pa->pcm_buf = audio_calloc(__func__, hw->samples, 1 << hw->info.shift);
hw->samples = g->conf.samples;
pa->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
pa->wpos = hw->wpos;
if (!pa->pcm_buf) {
dolog ("Could not allocate buffer (%d bytes)\n",
@@ -650,7 +644,7 @@ static int qpa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
goto fail2;
}
if (audio_pt_init(&pa->pt, qpa_thread_in, hw, AUDIO_CAP, __func__)) {
if (audio_pt_init (&pa->pt, qpa_thread_in, hw, AUDIO_CAP, AUDIO_FUNC)) {
goto fail3;
}
@@ -673,17 +667,17 @@ static void qpa_fini_out (HWVoiceOut *hw)
void *ret;
PAVoiceOut *pa = (PAVoiceOut *) hw;
audio_pt_lock(&pa->pt, __func__);
audio_pt_lock (&pa->pt, AUDIO_FUNC);
pa->done = 1;
audio_pt_unlock_and_signal(&pa->pt, __func__);
audio_pt_join(&pa->pt, &ret, __func__);
audio_pt_unlock_and_signal (&pa->pt, AUDIO_FUNC);
audio_pt_join (&pa->pt, &ret, AUDIO_FUNC);
if (pa->stream) {
pa_stream_unref (pa->stream);
pa->stream = NULL;
}
audio_pt_fini(&pa->pt, __func__);
audio_pt_fini (&pa->pt, AUDIO_FUNC);
g_free (pa->pcm_buf);
pa->pcm_buf = NULL;
}
@@ -693,17 +687,17 @@ static void qpa_fini_in (HWVoiceIn *hw)
void *ret;
PAVoiceIn *pa = (PAVoiceIn *) hw;
audio_pt_lock(&pa->pt, __func__);
audio_pt_lock (&pa->pt, AUDIO_FUNC);
pa->done = 1;
audio_pt_unlock_and_signal(&pa->pt, __func__);
audio_pt_join(&pa->pt, &ret, __func__);
audio_pt_unlock_and_signal (&pa->pt, AUDIO_FUNC);
audio_pt_join (&pa->pt, &ret, AUDIO_FUNC);
if (pa->stream) {
pa_stream_unref (pa->stream);
pa->stream = NULL;
}
audio_pt_fini(&pa->pt, __func__);
audio_pt_fini (&pa->pt, AUDIO_FUNC);
g_free (pa->pcm_buf);
pa->pcm_buf = NULL;
}
@@ -813,54 +807,15 @@ static int qpa_ctl_in (HWVoiceIn *hw, int cmd, ...)
return 0;
}
static int qpa_validate_per_direction_opts(Audiodev *dev,
AudiodevPaPerDirectionOptions *pdo)
/* common */
static PAConf glob_conf = {
.samples = 4096,
};
static void *qpa_audio_init (void)
{
if (!pdo->has_buffer_length) {
pdo->has_buffer_length = true;
pdo->buffer_length = 46440;
}
if (!pdo->has_latency) {
pdo->has_latency = true;
pdo->latency = 15000;
}
return 1;
}
static void *qpa_audio_init(Audiodev *dev)
{
paaudio *g;
AudiodevPaOptions *popts = &dev->u.pa;
const char *server;
if (!popts->has_server) {
char pidfile[64];
char *runtime;
struct stat st;
runtime = getenv("XDG_RUNTIME_DIR");
if (!runtime) {
return NULL;
}
snprintf(pidfile, sizeof(pidfile), "%s/pulse/pid", runtime);
if (stat(pidfile, &st) != 0) {
return NULL;
}
}
assert(dev->driver == AUDIODEV_DRIVER_PA);
g = g_malloc(sizeof(paaudio));
server = popts->has_server ? popts->server : NULL;
if (!qpa_validate_per_direction_opts(dev, popts->in)) {
goto fail;
}
if (!qpa_validate_per_direction_opts(dev, popts->out)) {
goto fail;
}
g->dev = dev;
paaudio *g = g_malloc(sizeof(paaudio));
g->conf = glob_conf;
g->mainloop = NULL;
g->context = NULL;
@@ -870,14 +825,14 @@ static void *qpa_audio_init(Audiodev *dev)
}
g->context = pa_context_new (pa_threaded_mainloop_get_api (g->mainloop),
server);
g->conf.server);
if (!g->context) {
goto fail;
}
pa_context_set_state_callback (g->context, context_state_cb, g);
if (pa_context_connect(g->context, server, 0, NULL) < 0) {
if (pa_context_connect (g->context, g->conf.server, 0, NULL) < 0) {
qpa_logerr (pa_context_errno (g->context),
"pa_context_connect() failed\n");
goto fail;
@@ -940,6 +895,34 @@ static void qpa_audio_fini (void *opaque)
g_free(g);
}
struct audio_option qpa_options[] = {
{
.name = "SAMPLES",
.tag = AUD_OPT_INT,
.valp = &glob_conf.samples,
.descr = "buffer size in samples"
},
{
.name = "SERVER",
.tag = AUD_OPT_STR,
.valp = &glob_conf.server,
.descr = "server address"
},
{
.name = "SINK",
.tag = AUD_OPT_STR,
.valp = &glob_conf.sink,
.descr = "sink device name"
},
{
.name = "SOURCE",
.tag = AUD_OPT_STR,
.valp = &glob_conf.source,
.descr = "source device name"
},
{ /* End of list */ }
};
static struct audio_pcm_ops qpa_pcm_ops = {
.init_out = qpa_init_out,
.fini_out = qpa_fini_out,
@@ -954,9 +937,10 @@ static struct audio_pcm_ops qpa_pcm_ops = {
.ctl_in = qpa_ctl_in
};
static struct audio_driver pa_audio_driver = {
struct audio_driver pa_audio_driver = {
.name = "pa",
.descr = "http://www.pulseaudio.org/",
.options = qpa_options,
.init = qpa_audio_init,
.fini = qpa_audio_fini,
.pcm_ops = &qpa_pcm_ops,
@@ -967,9 +951,3 @@ static struct audio_driver pa_audio_driver = {
.voice_size_in = sizeof (PAVoiceIn),
.ctl_caps = VOICE_VOLUME_CAP
};
static void register_audio_pa(void)
{
audio_driver_register(&pa_audio_driver);
}
type_init(register_audio_pa);

View File

@@ -71,12 +71,6 @@ void NAME (void *opaque, struct st_sample *ibuf, struct st_sample *obuf,
while (rate->ipos <= (rate->opos >> 32)) {
ilast = *ibuf++;
rate->ipos++;
/* if ipos overflow, there is a infinite loop */
if (rate->ipos == 0xffffffff) {
rate->ipos = 1;
rate->opos = rate->opos & 0xffffffff;
}
/* See if we finished the input buffer yet */
if (ibuf >= iend) {
goto the_end;

View File

@@ -41,14 +41,22 @@
typedef struct SDLVoiceOut {
HWVoiceOut hw;
int live;
int rpos;
int decr;
} SDLVoiceOut;
static struct {
int nb_samples;
} conf = {
.nb_samples = 1024
};
static struct SDLAudioState {
int exit;
SDL_mutex *mutex;
SDL_sem *sem;
int initialized;
bool driver_created;
Audiodev *dev;
} glob_sdl;
typedef struct SDLAudioState SDLAudioState;
@@ -63,19 +71,64 @@ static void GCC_FMT_ATTR (1, 2) sdl_logerr (const char *fmt, ...)
AUD_log (AUDIO_CAP, "Reason: %s\n", SDL_GetError ());
}
static int aud_to_sdlfmt (AudioFormat fmt)
static int sdl_lock (SDLAudioState *s, const char *forfn)
{
if (SDL_LockMutex (s->mutex)) {
sdl_logerr ("SDL_LockMutex for %s failed\n", forfn);
return -1;
}
return 0;
}
static int sdl_unlock (SDLAudioState *s, const char *forfn)
{
if (SDL_UnlockMutex (s->mutex)) {
sdl_logerr ("SDL_UnlockMutex for %s failed\n", forfn);
return -1;
}
return 0;
}
static int sdl_post (SDLAudioState *s, const char *forfn)
{
if (SDL_SemPost (s->sem)) {
sdl_logerr ("SDL_SemPost for %s failed\n", forfn);
return -1;
}
return 0;
}
static int sdl_wait (SDLAudioState *s, const char *forfn)
{
if (SDL_SemWait (s->sem)) {
sdl_logerr ("SDL_SemWait for %s failed\n", forfn);
return -1;
}
return 0;
}
static int sdl_unlock_and_post (SDLAudioState *s, const char *forfn)
{
if (sdl_unlock (s, forfn)) {
return -1;
}
return sdl_post (s, forfn);
}
static int aud_to_sdlfmt (audfmt_e fmt)
{
switch (fmt) {
case AUDIO_FORMAT_S8:
case AUD_FMT_S8:
return AUDIO_S8;
case AUDIO_FORMAT_U8:
case AUD_FMT_U8:
return AUDIO_U8;
case AUDIO_FORMAT_S16:
case AUD_FMT_S16:
return AUDIO_S16LSB;
case AUDIO_FORMAT_U16:
case AUD_FMT_U16:
return AUDIO_U16LSB;
default:
@@ -87,37 +140,37 @@ static int aud_to_sdlfmt (AudioFormat fmt)
}
}
static int sdl_to_audfmt(int sdlfmt, AudioFormat *fmt, int *endianness)
static int sdl_to_audfmt(int sdlfmt, audfmt_e *fmt, int *endianness)
{
switch (sdlfmt) {
case AUDIO_S8:
*endianness = 0;
*fmt = AUDIO_FORMAT_S8;
*fmt = AUD_FMT_S8;
break;
case AUDIO_U8:
*endianness = 0;
*fmt = AUDIO_FORMAT_U8;
*fmt = AUD_FMT_U8;
break;
case AUDIO_S16LSB:
*endianness = 0;
*fmt = AUDIO_FORMAT_S16;
*fmt = AUD_FMT_S16;
break;
case AUDIO_U16LSB:
*endianness = 0;
*fmt = AUDIO_FORMAT_U16;
*fmt = AUD_FMT_U16;
break;
case AUDIO_S16MSB:
*endianness = 1;
*fmt = AUDIO_FORMAT_S16;
*fmt = AUD_FMT_S16;
break;
case AUDIO_U16MSB:
*endianness = 1;
*fmt = AUDIO_FORMAT_U16;
*fmt = AUD_FMT_U16;
break;
default:
@@ -169,9 +222,9 @@ static int sdl_open (SDL_AudioSpec *req, SDL_AudioSpec *obt)
static void sdl_close (SDLAudioState *s)
{
if (s->initialized) {
SDL_LockAudio();
sdl_lock (s, "sdl_close");
s->exit = 1;
SDL_UnlockAudio();
sdl_unlock_and_post (s, "sdl_close");
SDL_PauseAudio (1);
SDL_CloseAudio ();
s->initialized = 0;
@@ -184,36 +237,57 @@ static void sdl_callback (void *opaque, Uint8 *buf, int len)
SDLAudioState *s = &glob_sdl;
HWVoiceOut *hw = &sdl->hw;
int samples = len >> hw->info.shift;
int to_mix, decr;
if (s->exit || !sdl->live) {
if (s->exit) {
return;
}
/* dolog ("in callback samples=%d live=%d\n", samples, sdl->live); */
while (samples) {
int to_mix, decr;
to_mix = audio_MIN(samples, sdl->live);
decr = to_mix;
while (to_mix) {
int chunk = audio_MIN(to_mix, hw->samples - hw->rpos);
struct st_sample *src = hw->mix_buf + hw->rpos;
/* dolog ("in callback samples=%d\n", samples); */
sdl_wait (s, "sdl_callback");
if (s->exit) {
return;
}
/* dolog ("in callback to_mix %d, chunk %d\n", to_mix, chunk); */
hw->clip(buf, src, chunk);
hw->rpos = (hw->rpos + chunk) % hw->samples;
to_mix -= chunk;
buf += chunk << hw->info.shift;
if (sdl_lock (s, "sdl_callback")) {
return;
}
if (audio_bug (AUDIO_FUNC, sdl->live < 0 || sdl->live > hw->samples)) {
dolog ("sdl->live=%d hw->samples=%d\n",
sdl->live, hw->samples);
return;
}
if (!sdl->live) {
goto again;
}
/* dolog ("in callback live=%d\n", live); */
to_mix = audio_MIN (samples, sdl->live);
decr = to_mix;
while (to_mix) {
int chunk = audio_MIN (to_mix, hw->samples - hw->rpos);
struct st_sample *src = hw->mix_buf + hw->rpos;
/* dolog ("in callback to_mix %d, chunk %d\n", to_mix, chunk); */
hw->clip (buf, src, chunk);
sdl->rpos = (sdl->rpos + chunk) % hw->samples;
to_mix -= chunk;
buf += chunk << hw->info.shift;
}
samples -= decr;
sdl->live -= decr;
sdl->decr += decr;
again:
if (sdl_unlock (s, "sdl_callback")) {
return;
}
}
samples -= decr;
sdl->live -= decr;
sdl->decr += decr;
/* dolog ("done len=%d\n", len); */
/* SDL2 does not clear the remaining buffer for us, so do it on our own */
if (samples) {
memset(buf, 0, samples << hw->info.shift);
}
}
static int sdl_write_out (SWVoiceOut *sw, void *buf, int len)
@@ -225,8 +299,11 @@ static int sdl_run_out (HWVoiceOut *hw, int live)
{
int decr;
SDLVoiceOut *sdl = (SDLVoiceOut *) hw;
SDLAudioState *s = &glob_sdl;
SDL_LockAudio();
if (sdl_lock (s, "sdl_run_out")) {
return 0;
}
if (sdl->decr > live) {
ldebug ("sdl->decr %d live %d sdl->live %d\n",
@@ -238,10 +315,15 @@ static int sdl_run_out (HWVoiceOut *hw, int live)
decr = audio_MIN (sdl->decr, live);
sdl->decr -= decr;
sdl->live = live;
SDL_UnlockAudio();
sdl->live = live - decr;
hw->rpos = sdl->rpos;
if (sdl->live > 0) {
sdl_unlock_and_post (s, "sdl_run_out");
}
else {
sdl_unlock (s, "sdl_run_out");
}
return decr;
}
@@ -260,13 +342,13 @@ static int sdl_init_out(HWVoiceOut *hw, struct audsettings *as,
SDL_AudioSpec req, obt;
int endianness;
int err;
AudioFormat effective_fmt;
audfmt_e effective_fmt;
struct audsettings obt_as;
req.freq = as->freq;
req.format = aud_to_sdlfmt (as->fmt);
req.channels = as->nchannels;
req.samples = audio_buffer_samples(s->dev->u.sdl.out, as, 11610);
req.samples = conf.nb_samples;
req.callback = sdl_callback;
req.userdata = sdl;
@@ -310,7 +392,7 @@ static int sdl_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static void *sdl_audio_init(Audiodev *dev)
static void *sdl_audio_init (void)
{
SDLAudioState *s = &glob_sdl;
if (s->driver_created) {
@@ -323,8 +405,22 @@ static void *sdl_audio_init(Audiodev *dev)
return NULL;
}
s->mutex = SDL_CreateMutex ();
if (!s->mutex) {
sdl_logerr ("Failed to create SDL mutex\n");
SDL_QuitSubSystem (SDL_INIT_AUDIO);
return NULL;
}
s->sem = SDL_CreateSemaphore (0);
if (!s->sem) {
sdl_logerr ("Failed to create SDL semaphore\n");
SDL_DestroyMutex (s->mutex);
SDL_QuitSubSystem (SDL_INIT_AUDIO);
return NULL;
}
s->driver_created = true;
s->dev = dev;
return s;
}
@@ -332,11 +428,22 @@ static void sdl_audio_fini (void *opaque)
{
SDLAudioState *s = opaque;
sdl_close (s);
SDL_DestroySemaphore (s->sem);
SDL_DestroyMutex (s->mutex);
SDL_QuitSubSystem (SDL_INIT_AUDIO);
s->driver_created = false;
s->dev = NULL;
}
static struct audio_option sdl_options[] = {
{
.name = "SAMPLES",
.tag = AUD_OPT_INT,
.valp = &conf.nb_samples,
.descr = "Size of SDL buffer in samples"
},
{ /* End of list */ }
};
static struct audio_pcm_ops sdl_pcm_ops = {
.init_out = sdl_init_out,
.fini_out = sdl_fini_out,
@@ -345,9 +452,10 @@ static struct audio_pcm_ops sdl_pcm_ops = {
.ctl_out = sdl_ctl_out,
};
static struct audio_driver sdl_audio_driver = {
struct audio_driver sdl_audio_driver = {
.name = "sdl",
.descr = "SDL http://www.libsdl.org",
.options = sdl_options,
.init = sdl_audio_init,
.fini = sdl_audio_fini,
.pcm_ops = &sdl_pcm_ops,
@@ -357,9 +465,3 @@ static struct audio_driver sdl_audio_driver = {
.voice_size_out = sizeof (SDLVoiceOut),
.voice_size_in = 0
};
static void register_audio_sdl(void)
{
audio_driver_register(&sdl_audio_driver);
}
type_init(register_audio_sdl);


@@ -77,7 +77,7 @@ static const SpiceRecordInterface record_sif = {
.base.minor_version = SPICE_INTERFACE_RECORD_MINOR,
};
static void *spice_audio_init(Audiodev *dev)
static void *spice_audio_init (void)
{
if (!using_spice) {
return NULL;
@@ -130,7 +130,7 @@ static int line_out_init(HWVoiceOut *hw, struct audsettings *as,
settings.freq = SPICE_INTERFACE_PLAYBACK_FREQ;
#endif
settings.nchannels = SPICE_INTERFACE_PLAYBACK_CHAN;
settings.fmt = AUDIO_FORMAT_S16;
settings.fmt = AUD_FMT_S16;
settings.endianness = AUDIO_HOST_ENDIANNESS;
audio_pcm_init_info (&hw->info, &settings);
@@ -258,7 +258,7 @@ static int line_in_init(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
settings.freq = SPICE_INTERFACE_RECORD_FREQ;
#endif
settings.nchannels = SPICE_INTERFACE_RECORD_CHAN;
settings.fmt = AUDIO_FORMAT_S16;
settings.fmt = AUD_FMT_S16;
settings.endianness = AUDIO_HOST_ENDIANNESS;
audio_pcm_init_info (&hw->info, &settings);
@@ -373,6 +373,10 @@ static int line_in_ctl (HWVoiceIn *hw, int cmd, ...)
return 0;
}
static struct audio_option audio_options[] = {
{ /* end of list */ },
};
static struct audio_pcm_ops audio_callbacks = {
.init_out = line_out_init,
.fini_out = line_out_fini,
@@ -387,9 +391,10 @@ static struct audio_pcm_ops audio_callbacks = {
.ctl_in = line_in_ctl,
};
static struct audio_driver spice_audio_driver = {
struct audio_driver spice_audio_driver = {
.name = "spice",
.descr = "spice audio driver",
.options = audio_options,
.init = spice_audio_init,
.fini = spice_audio_fini,
.pcm_ops = &audio_callbacks,
@@ -406,9 +411,3 @@ void qemu_spice_audio_init (void)
{
spice_audio_driver.can_be_default = 1;
}
static void register_audio_spice(void)
{
audio_driver_register(&spice_audio_driver);
}
type_init(register_audio_spice);


@@ -1,9 +1,9 @@
# See docs/devel/tracing.txt for syntax documentation.
# See docs/tracing.txt for syntax documentation.
# alsaaudio.c
# audio/alsaaudio.c
alsa_revents(int revents) "revents = %d"
alsa_pollout(int i, int fd) "i = %d fd = %d"
alsa_set_handler(int events, int index, int fd, int err) "events=0x%x index=%d fd=%d err=%d"
alsa_set_handler(int events, int index, int fd, int err) "events=%#x index=%d fd=%d err=%d"
alsa_wrote_zero(int len) "Failed to write %d frames (wrote zero)"
alsa_read_zero(long len) "Failed to read %ld frames (read zero)"
alsa_xrun_out(void) "Recovering from playback xrun"
@@ -12,11 +12,6 @@ alsa_resume_out(void) "Resuming suspended output stream"
alsa_resume_in(void) "Resuming suspended input stream"
alsa_no_frames(int state) "No frames available and ALSA state is %d"
# ossaudio.c
oss_version(int version) "OSS version = 0x%x"
# audio/ossaudio.c
oss_version(int version) "OSS version = %#x"
oss_invalid_available_size(int size, int bufsize) "Invalid available size, size=%d bufsize=%d"
# audio.c
audio_timer_start(int interval) "interval %d ms"
audio_timer_stop(void) ""
audio_timer_delayed(int interval) "interval %d ms"


@@ -24,7 +24,6 @@
#include "qemu/osdep.h"
#include "qemu/host-utils.h"
#include "qemu/timer.h"
#include "qapi/opts-visitor.h"
#include "audio.h"
#define AUDIO_CAP "wav"
@@ -38,6 +37,11 @@ typedef struct WAVVoiceOut {
int total_samples;
} WAVVoiceOut;
typedef struct {
struct audsettings settings;
const char *wav_path;
} WAVConf;
static int wav_run_out (HWVoiceOut *hw, int live)
{
WAVVoiceOut *wav = (WAVVoiceOut *) hw;
@@ -108,30 +112,25 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
0x02, 0x00, 0x44, 0xac, 0x00, 0x00, 0x10, 0xb1, 0x02, 0x00, 0x04,
0x00, 0x10, 0x00, 0x64, 0x61, 0x74, 0x61, 0x00, 0x00, 0x00, 0x00
};
Audiodev *dev = drv_opaque;
AudiodevWavOptions *wopts = &dev->u.wav;
struct audsettings wav_as = audiodev_to_audsettings(dev->u.wav.out);
const char *wav_path = wopts->has_path ? wopts->path : "qemu.wav";
WAVConf *conf = drv_opaque;
struct audsettings wav_as = conf->settings;
stereo = wav_as.nchannels == 2;
switch (wav_as.fmt) {
case AUDIO_FORMAT_S8:
case AUDIO_FORMAT_U8:
case AUD_FMT_S8:
case AUD_FMT_U8:
bits16 = 0;
break;
case AUDIO_FORMAT_S16:
case AUDIO_FORMAT_U16:
case AUD_FMT_S16:
case AUD_FMT_U16:
bits16 = 1;
break;
case AUDIO_FORMAT_S32:
case AUDIO_FORMAT_U32:
case AUD_FMT_S32:
case AUD_FMT_U32:
dolog ("WAVE files can not handle 32bit formats\n");
return -1;
default:
abort();
}
hdr[34] = bits16 ? 0x10 : 0x08;
@@ -140,7 +139,7 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
audio_pcm_init_info (&hw->info, &wav_as);
hw->samples = 1024;
wav->pcm_buf = audio_calloc(__func__, hw->samples, 1 << hw->info.shift);
wav->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
if (!wav->pcm_buf) {
dolog ("Could not allocate buffer (%d bytes)\n",
hw->samples << hw->info.shift);
@@ -152,10 +151,10 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
le_store (hdr + 28, hw->info.freq << (bits16 + stereo), 4);
le_store (hdr + 32, 1 << (bits16 + stereo), 2);
wav->f = fopen(wav_path, "wb");
wav->f = fopen (conf->wav_path, "wb");
if (!wav->f) {
dolog ("Failed to open wave file `%s'\nReason: %s\n",
wav_path, strerror(errno));
conf->wav_path, strerror (errno));
g_free (wav->pcm_buf);
wav->pcm_buf = NULL;
return -1;
@@ -223,17 +222,54 @@ static int wav_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static void *wav_audio_init(Audiodev *dev)
static WAVConf glob_conf = {
.settings.freq = 44100,
.settings.nchannels = 2,
.settings.fmt = AUD_FMT_S16,
.wav_path = "qemu.wav"
};
static void *wav_audio_init (void)
{
assert(dev->driver == AUDIODEV_DRIVER_WAV);
return dev;
WAVConf *conf = g_malloc(sizeof(WAVConf));
*conf = glob_conf;
return conf;
}
static void wav_audio_fini (void *opaque)
{
ldebug ("wav_fini");
g_free(opaque);
}
static struct audio_option wav_options[] = {
{
.name = "FREQUENCY",
.tag = AUD_OPT_INT,
.valp = &glob_conf.settings.freq,
.descr = "Frequency"
},
{
.name = "FORMAT",
.tag = AUD_OPT_FMT,
.valp = &glob_conf.settings.fmt,
.descr = "Format"
},
{
.name = "DAC_FIXED_CHANNELS",
.tag = AUD_OPT_INT,
.valp = &glob_conf.settings.nchannels,
.descr = "Number of channels (1 - mono, 2 - stereo)"
},
{
.name = "PATH",
.tag = AUD_OPT_STR,
.valp = &glob_conf.wav_path,
.descr = "Path to wave file"
},
{ /* End of list */ }
};
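/*
 * Editor's note (assumption based on the legacy audio option handling, not
 * stated in this patch): entries in an audio_option table were surfaced to
 * users as QEMU_<DRIVER>_<NAME> environment variables, so the table above
 * would correspond to QEMU_WAV_FREQUENCY, QEMU_WAV_FORMAT,
 * QEMU_WAV_DAC_FIXED_CHANNELS and QEMU_WAV_PATH.
 */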
static struct audio_pcm_ops wav_pcm_ops = {
.init_out = wav_init_out,
.fini_out = wav_fini_out,
@@ -242,9 +278,10 @@ static struct audio_pcm_ops wav_pcm_ops = {
.ctl_out = wav_ctl_out,
};
static struct audio_driver wav_audio_driver = {
struct audio_driver wav_audio_driver = {
.name = "wav",
.descr = "WAV renderer http://wikipedia.org/wiki/WAV",
.options = wav_options,
.init = wav_audio_init,
.fini = wav_audio_fini,
.pcm_ops = &wav_pcm_ops,
@@ -254,9 +291,3 @@ static struct audio_driver wav_audio_driver = {
.voice_size_out = sizeof (WAVVoiceOut),
.voice_size_in = 0
};
static void register_audio_wav(void)
{
audio_driver_register(&wav_audio_driver);
}
type_init(register_audio_wav);


@@ -1,7 +1,6 @@
#include "qemu/osdep.h"
#include "hw/hw.h"
#include "monitor/monitor.h"
#include "qapi/error.h"
#include "qemu/error-report.h"
#include "audio.h"
@@ -38,29 +37,30 @@ static void wav_destroy (void *opaque)
uint8_t dlen[4];
uint32_t datalen = wav->bytes;
uint32_t rifflen = datalen + 36;
Monitor *mon = cur_mon;
if (wav->f) {
le_store (rlen, rifflen, 4);
le_store (dlen, datalen, 4);
if (fseek (wav->f, 4, SEEK_SET)) {
error_report("wav_destroy: rlen fseek failed: %s",
strerror(errno));
monitor_printf (mon, "wav_destroy: rlen fseek failed\nReason: %s\n",
strerror (errno));
goto doclose;
}
if (fwrite (rlen, 4, 1, wav->f) != 1) {
error_report("wav_destroy: rlen fwrite failed: %s",
strerror(errno));
monitor_printf (mon, "wav_destroy: rlen fwrite failed\nReason %s\n",
strerror (errno));
goto doclose;
}
if (fseek (wav->f, 32, SEEK_CUR)) {
error_report("wav_destroy: dlen fseek failed: %s",
strerror(errno));
monitor_printf (mon, "wav_destroy: dlen fseek failed\nReason %s\n",
strerror (errno));
goto doclose;
}
if (fwrite (dlen, 1, 4, wav->f) != 4) {
error_report("wav_destroy: dlen fwrite failed: %s",
strerror(errno));
monitor_printf (mon, "wav_destroy: dlen fwrite failed\nReason %s\n",
strerror (errno));
goto doclose;
}
doclose:
@@ -77,7 +77,8 @@ static void wav_capture (void *opaque, void *buf, int size)
WAVState *wav = opaque;
if (fwrite (buf, size, 1, wav->f) != 1) {
error_report("wav_capture: fwrite error: %s", strerror(errno));
monitor_printf (cur_mon, "wav_capture: fwrite error\nReason: %s",
strerror (errno));
}
wav->bytes += size;
}
@@ -87,7 +88,6 @@ static void wav_capture_destroy (void *opaque)
WAVState *wav = opaque;
AUD_del_capture (wav->cap, wav);
g_free (wav);
}
static void wav_capture_info (void *opaque)
@@ -108,6 +108,7 @@ static struct capture_ops wav_capture_ops = {
int wav_start_capture (CaptureState *s, const char *path, int freq,
int bits, int nchannels)
{
Monitor *mon = cur_mon;
WAVState *wav;
uint8_t hdr[] = {
0x52, 0x49, 0x46, 0x46, 0x00, 0x00, 0x00, 0x00, 0x57, 0x41, 0x56,
@@ -121,13 +122,13 @@ int wav_start_capture (CaptureState *s, const char *path, int freq,
CaptureVoiceOut *cap;
if (bits != 8 && bits != 16) {
error_report("incorrect bit count %d, must be 8 or 16", bits);
monitor_printf (mon, "incorrect bit count %d, must be 8 or 16\n", bits);
return -1;
}
if (nchannels != 1 && nchannels != 2) {
error_report("incorrect channel count %d, must be 1 or 2",
nchannels);
monitor_printf (mon, "incorrect channel count %d, must be 1 or 2\n",
nchannels);
return -1;
}
@@ -136,7 +137,7 @@ int wav_start_capture (CaptureState *s, const char *path, int freq,
as.freq = freq;
as.nchannels = 1 << stereo;
as.fmt = bits16 ? AUDIO_FORMAT_S16 : AUDIO_FORMAT_U8;
as.fmt = bits16 ? AUD_FMT_S16 : AUD_FMT_U8;
as.endianness = 0;
ops.notify = wav_notify;
@@ -155,8 +156,8 @@ int wav_start_capture (CaptureState *s, const char *path, int freq,
wav->f = fopen (path, "wb");
if (!wav->f) {
error_report("Failed to open wave file `%s': %s",
path, strerror(errno));
monitor_printf (mon, "Failed to open wave file `%s'\nReason: %s\n",
path, strerror (errno));
g_free (wav);
return -1;
}
@@ -167,13 +168,14 @@ int wav_start_capture (CaptureState *s, const char *path, int freq,
wav->freq = freq;
if (fwrite (hdr, sizeof (hdr), 1, wav->f) != 1) {
error_report("Failed to write header: %s", strerror(errno));
monitor_printf (mon, "Failed to write header\nReason: %s\n",
strerror (errno));
goto error_free;
}
cap = AUD_add_capture (&as, &ops, wav);
if (!cap) {
error_report("Failed to add audio capture");
monitor_printf (mon, "Failed to add audio capture\n");
goto error_free;
}
@@ -185,7 +187,8 @@ int wav_start_capture (CaptureState *s, const char *path, int freq,
error_free:
g_free (wav->path);
if (fclose (wav->f)) {
error_report("Failed to close wave file: %s", strerror(errno));
monitor_printf (mon, "Failed to close wave file\nReason: %s\n",
strerror (errno));
}
g_free (wav);
return -1;


@@ -1,7 +0,0 @@
authz-obj-y += base.o
authz-obj-y += simple.o
authz-obj-y += list.o
authz-obj-y += listfile.o
authz-obj-$(CONFIG_AUTH_PAM) += pamacct.o
pamacct.o-libs = -lpam


@@ -1,82 +0,0 @@
/*
* QEMU authorization framework base class
*
* Copyright (c) 2018 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "qemu/osdep.h"
#include "authz/base.h"
#include "authz/trace.h"
bool qauthz_is_allowed(QAuthZ *authz,
const char *identity,
Error **errp)
{
QAuthZClass *cls = QAUTHZ_GET_CLASS(authz);
bool allowed;
allowed = cls->is_allowed(authz, identity, errp);
trace_qauthz_is_allowed(authz, identity, allowed);
return allowed;
}
bool qauthz_is_allowed_by_id(const char *authzid,
const char *identity,
Error **errp)
{
QAuthZ *authz;
Object *obj;
Object *container;
container = object_get_objects_root();
obj = object_resolve_path_component(container,
authzid);
if (!obj) {
error_setg(errp, "Cannot find QAuthZ object ID %s",
authzid);
return false;
}
if (!object_dynamic_cast(obj, TYPE_QAUTHZ)) {
error_setg(errp, "Object '%s' is not a QAuthZ subclass",
authzid);
return false;
}
authz = QAUTHZ(obj);
return qauthz_is_allowed(authz, identity, errp);
}
static const TypeInfo authz_info = {
.parent = TYPE_OBJECT,
.name = TYPE_QAUTHZ,
.instance_size = sizeof(QAuthZ),
.class_size = sizeof(QAuthZClass),
.abstract = true,
};
static void qauthz_register_types(void)
{
type_register_static(&authz_info);
}
type_init(qauthz_register_types)


@@ -1,271 +0,0 @@
/*
* QEMU access control list authorization driver
*
* Copyright (c) 2018 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "qemu/osdep.h"
#include "authz/list.h"
#include "authz/trace.h"
#include "qom/object_interfaces.h"
#include "qapi/qapi-visit-authz.h"
static bool qauthz_list_is_allowed(QAuthZ *authz,
const char *identity,
Error **errp)
{
QAuthZList *lauthz = QAUTHZ_LIST(authz);
QAuthZListRuleList *rules = lauthz->rules;
while (rules) {
QAuthZListRule *rule = rules->value;
QAuthZListFormat format = rule->has_format ? rule->format :
QAUTHZ_LIST_FORMAT_EXACT;
trace_qauthz_list_check_rule(authz, rule->match, identity,
format, rule->policy);
switch (format) {
case QAUTHZ_LIST_FORMAT_EXACT:
if (g_str_equal(rule->match, identity)) {
return rule->policy == QAUTHZ_LIST_POLICY_ALLOW;
}
break;
case QAUTHZ_LIST_FORMAT_GLOB:
if (g_pattern_match_simple(rule->match, identity)) {
return rule->policy == QAUTHZ_LIST_POLICY_ALLOW;
}
break;
default:
g_warn_if_reached();
return false;
}
rules = rules->next;
}
trace_qauthz_list_default_policy(authz, identity, lauthz->policy);
return lauthz->policy == QAUTHZ_LIST_POLICY_ALLOW;
}
static void
qauthz_list_prop_set_policy(Object *obj,
int value,
Error **errp G_GNUC_UNUSED)
{
QAuthZList *lauthz = QAUTHZ_LIST(obj);
lauthz->policy = value;
}
static int
qauthz_list_prop_get_policy(Object *obj,
Error **errp G_GNUC_UNUSED)
{
QAuthZList *lauthz = QAUTHZ_LIST(obj);
return lauthz->policy;
}
static void
qauthz_list_prop_get_rules(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
QAuthZList *lauthz = QAUTHZ_LIST(obj);
visit_type_QAuthZListRuleList(v, name, &lauthz->rules, errp);
}
static void
qauthz_list_prop_set_rules(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
QAuthZList *lauthz = QAUTHZ_LIST(obj);
QAuthZListRuleList *oldrules;
oldrules = lauthz->rules;
visit_type_QAuthZListRuleList(v, name, &lauthz->rules, errp);
qapi_free_QAuthZListRuleList(oldrules);
}
static void
qauthz_list_finalize(Object *obj)
{
QAuthZList *lauthz = QAUTHZ_LIST(obj);
qapi_free_QAuthZListRuleList(lauthz->rules);
}
static void
qauthz_list_class_init(ObjectClass *oc, void *data)
{
QAuthZClass *authz = QAUTHZ_CLASS(oc);
object_class_property_add_enum(oc, "policy",
"QAuthZListPolicy",
&QAuthZListPolicy_lookup,
qauthz_list_prop_get_policy,
qauthz_list_prop_set_policy,
NULL);
object_class_property_add(oc, "rules", "QAuthZListRule",
qauthz_list_prop_get_rules,
qauthz_list_prop_set_rules,
NULL, NULL, NULL);
authz->is_allowed = qauthz_list_is_allowed;
}
QAuthZList *qauthz_list_new(const char *id,
QAuthZListPolicy policy,
Error **errp)
{
return QAUTHZ_LIST(
object_new_with_props(TYPE_QAUTHZ_LIST,
object_get_objects_root(),
id, errp,
"policy", QAuthZListPolicy_str(policy),
NULL));
}
ssize_t qauthz_list_append_rule(QAuthZList *auth,
const char *match,
QAuthZListPolicy policy,
QAuthZListFormat format,
Error **errp)
{
QAuthZListRule *rule;
QAuthZListRuleList *rules, *tmp;
size_t i = 0;
rule = g_new0(QAuthZListRule, 1);
rule->policy = policy;
rule->match = g_strdup(match);
rule->format = format;
rule->has_format = true;
tmp = g_new0(QAuthZListRuleList, 1);
tmp->value = rule;
rules = auth->rules;
if (rules) {
while (rules->next) {
i++;
rules = rules->next;
}
rules->next = tmp;
return i + 1;
} else {
auth->rules = tmp;
return 0;
}
}
ssize_t qauthz_list_insert_rule(QAuthZList *auth,
const char *match,
QAuthZListPolicy policy,
QAuthZListFormat format,
size_t index,
Error **errp)
{
QAuthZListRule *rule;
QAuthZListRuleList *rules, *tmp;
size_t i = 0;
rule = g_new0(QAuthZListRule, 1);
rule->policy = policy;
rule->match = g_strdup(match);
rule->format = format;
rule->has_format = true;
tmp = g_new0(QAuthZListRuleList, 1);
tmp->value = rule;
rules = auth->rules;
if (rules && index > 0) {
while (rules->next && i < (index - 1)) {
i++;
rules = rules->next;
}
tmp->next = rules->next;
rules->next = tmp;
return i + 1;
} else {
tmp->next = auth->rules;
auth->rules = tmp;
return 0;
}
}
ssize_t qauthz_list_delete_rule(QAuthZList *auth, const char *match)
{
QAuthZListRule *rule;
QAuthZListRuleList *rules, *prev;
size_t i = 0;
prev = NULL;
rules = auth->rules;
while (rules) {
rule = rules->value;
if (g_str_equal(rule->match, match)) {
if (prev) {
prev->next = rules->next;
} else {
auth->rules = rules->next;
}
rules->next = NULL;
qapi_free_QAuthZListRuleList(rules);
return i;
}
prev = rules;
rules = rules->next;
i++;
}
return -1;
}
static const TypeInfo qauthz_list_info = {
.parent = TYPE_QAUTHZ,
.name = TYPE_QAUTHZ_LIST,
.instance_size = sizeof(QAuthZList),
.instance_finalize = qauthz_list_finalize,
.class_size = sizeof(QAuthZListClass),
.class_init = qauthz_list_class_init,
.interfaces = (InterfaceInfo[]) {
{ TYPE_USER_CREATABLE },
{ }
}
};
static void
qauthz_list_register_types(void)
{
type_register_static(&qauthz_list_info);
}
type_init(qauthz_list_register_types);
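
Editor's sketch, not part of the deleted file: the list driver above can also be driven directly from C. The fragment below only uses the constructors and the qauthz_is_allowed() entry point shown in this diff; the id "acl0", the identity string, the helper name example_check and the QAUTHZ_LIST_POLICY_DENY enum value (only ALLOW appears in this hunk) are illustrative assumptions.

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "authz/list.h"

static bool example_check(Error **errp)
{
    /* Default policy is deny; only explicitly allowed identities pass. */
    QAuthZList *acl = qauthz_list_new("acl0", QAUTHZ_LIST_POLICY_DENY, errp);

    if (!acl) {
        return false;
    }
    /* Allow exactly one identity; everything else falls back to DENY. */
    qauthz_list_append_rule(acl, "fred@EXAMPLE.COM",
                            QAUTHZ_LIST_POLICY_ALLOW,
                            QAUTHZ_LIST_FORMAT_EXACT, errp);
    return qauthz_is_allowed(QAUTHZ(acl), "fred@EXAMPLE.COM", errp);
}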


@@ -1,283 +0,0 @@
/*
* QEMU access control list file authorization driver
*
* Copyright (c) 2018 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "qemu/osdep.h"
#include "authz/listfile.h"
#include "authz/trace.h"
#include "qemu/error-report.h"
#include "qemu/main-loop.h"
#include "qemu/sockets.h"
#include "qemu/filemonitor.h"
#include "qom/object_interfaces.h"
#include "qapi/qapi-visit-authz.h"
#include "qapi/qmp/qjson.h"
#include "qapi/qmp/qobject.h"
#include "qapi/qmp/qerror.h"
#include "qapi/qobject-input-visitor.h"
static bool
qauthz_list_file_is_allowed(QAuthZ *authz,
const char *identity,
Error **errp)
{
QAuthZListFile *fauthz = QAUTHZ_LIST_FILE(authz);
if (fauthz->list) {
return qauthz_is_allowed(fauthz->list, identity, errp);
}
return false;
}
static QAuthZ *
qauthz_list_file_load(QAuthZListFile *fauthz, Error **errp)
{
GError *err = NULL;
gchar *content = NULL;
gsize len;
QObject *obj = NULL;
QDict *pdict;
Visitor *v = NULL;
QAuthZ *ret = NULL;
trace_qauthz_list_file_load(fauthz, fauthz->filename);
if (!g_file_get_contents(fauthz->filename, &content, &len, &err)) {
error_setg(errp, "Unable to read '%s': %s",
fauthz->filename, err->message);
goto cleanup;
}
obj = qobject_from_json(content, errp);
if (!obj) {
goto cleanup;
}
pdict = qobject_to(QDict, obj);
if (!pdict) {
error_setg(errp, QERR_INVALID_PARAMETER_TYPE, "obj", "dict");
goto cleanup;
}
v = qobject_input_visitor_new(obj);
ret = (QAuthZ *)user_creatable_add_type(TYPE_QAUTHZ_LIST,
NULL, pdict, v, errp);
cleanup:
visit_free(v);
qobject_unref(obj);
if (err) {
g_error_free(err);
}
g_free(content);
return ret;
}
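/*
 * Editor's note (assumption about the on-disk format, inferred from the
 * "policy" and "rules" properties of TYPE_QAUTHZ_LIST rather than stated in
 * this patch): the monitored file is expected to hold a JSON document along
 * the lines of
 *
 *     {
 *       "policy": "deny",
 *       "rules": [
 *         { "match": "fred@EXAMPLE.COM", "policy": "allow", "format": "exact" },
 *         { "match": "*@EXAMPLE.COM",    "policy": "allow", "format": "glob"  }
 *       ]
 *     }
 *
 * which qauthz_list_file_load() above parses with qobject_from_json() and
 * hands to user_creatable_add_type(TYPE_QAUTHZ_LIST, ...).
 */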
static void
qauthz_list_file_event(int64_t wd G_GNUC_UNUSED,
QFileMonitorEvent ev G_GNUC_UNUSED,
const char *name G_GNUC_UNUSED,
void *opaque)
{
QAuthZListFile *fauthz = opaque;
Error *err = NULL;
if (ev != QFILE_MONITOR_EVENT_MODIFIED &&
ev != QFILE_MONITOR_EVENT_CREATED) {
return;
}
object_unref(OBJECT(fauthz->list));
fauthz->list = qauthz_list_file_load(fauthz, &err);
trace_qauthz_list_file_refresh(fauthz,
fauthz->filename, fauthz->list ? 1 : 0);
if (!fauthz->list) {
error_report_err(err);
}
}
static void
qauthz_list_file_complete(UserCreatable *uc, Error **errp)
{
QAuthZListFile *fauthz = QAUTHZ_LIST_FILE(uc);
gchar *dir = NULL, *file = NULL;
fauthz->list = qauthz_list_file_load(fauthz, errp);
if (!fauthz->refresh) {
return;
}
fauthz->file_monitor = qemu_file_monitor_new(errp);
if (!fauthz->file_monitor) {
return;
}
dir = g_path_get_dirname(fauthz->filename);
if (g_str_equal(dir, ".")) {
error_setg(errp, "Filename must be an absolute path");
goto cleanup;
}
file = g_path_get_basename(fauthz->filename);
if (g_str_equal(file, ".")) {
error_setg(errp, "Path has no trailing filename component");
goto cleanup;
}
fauthz->file_watch = qemu_file_monitor_add_watch(
fauthz->file_monitor, dir, file,
qauthz_list_file_event, fauthz, errp);
if (fauthz->file_watch < 0) {
goto cleanup;
}
cleanup:
g_free(file);
g_free(dir);
}
static void
qauthz_list_file_prop_set_filename(Object *obj,
const char *value,
Error **errp G_GNUC_UNUSED)
{
QAuthZListFile *fauthz = QAUTHZ_LIST_FILE(obj);
g_free(fauthz->filename);
fauthz->filename = g_strdup(value);
}
static char *
qauthz_list_file_prop_get_filename(Object *obj,
Error **errp G_GNUC_UNUSED)
{
QAuthZListFile *fauthz = QAUTHZ_LIST_FILE(obj);
return g_strdup(fauthz->filename);
}
static void
qauthz_list_file_prop_set_refresh(Object *obj,
bool value,
Error **errp G_GNUC_UNUSED)
{
QAuthZListFile *fauthz = QAUTHZ_LIST_FILE(obj);
fauthz->refresh = value;
}
static bool
qauthz_list_file_prop_get_refresh(Object *obj,
Error **errp G_GNUC_UNUSED)
{
QAuthZListFile *fauthz = QAUTHZ_LIST_FILE(obj);
return fauthz->refresh;
}
static void
qauthz_list_file_finalize(Object *obj)
{
QAuthZListFile *fauthz = QAUTHZ_LIST_FILE(obj);
object_unref(OBJECT(fauthz->list));
g_free(fauthz->filename);
qemu_file_monitor_free(fauthz->file_monitor);
}
static void
qauthz_list_file_class_init(ObjectClass *oc, void *data)
{
UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
QAuthZClass *authz = QAUTHZ_CLASS(oc);
ucc->complete = qauthz_list_file_complete;
object_class_property_add_str(oc, "filename",
qauthz_list_file_prop_get_filename,
qauthz_list_file_prop_set_filename,
NULL);
object_class_property_add_bool(oc, "refresh",
qauthz_list_file_prop_get_refresh,
qauthz_list_file_prop_set_refresh,
NULL);
authz->is_allowed = qauthz_list_file_is_allowed;
}
static void
qauthz_list_file_init(Object *obj)
{
QAuthZListFile *authz = QAUTHZ_LIST_FILE(obj);
authz->file_watch = -1;
#ifdef CONFIG_INOTIFY1
authz->refresh = TRUE;
#endif
}
QAuthZListFile *qauthz_list_file_new(const char *id,
const char *filename,
bool refresh,
Error **errp)
{
return QAUTHZ_LIST_FILE(
object_new_with_props(TYPE_QAUTHZ_LIST_FILE,
object_get_objects_root(),
id, errp,
"filename", filename,
"refresh", refresh ? "yes" : "no",
NULL));
}
static const TypeInfo qauthz_list_file_info = {
.parent = TYPE_QAUTHZ,
.name = TYPE_QAUTHZ_LIST_FILE,
.instance_init = qauthz_list_file_init,
.instance_size = sizeof(QAuthZListFile),
.instance_finalize = qauthz_list_file_finalize,
.class_size = sizeof(QAuthZListFileClass),
.class_init = qauthz_list_file_class_init,
.interfaces = (InterfaceInfo[]) {
{ TYPE_USER_CREATABLE },
{ }
}
};
static void
qauthz_list_file_register_types(void)
{
type_register_static(&qauthz_list_file_info);
}
type_init(qauthz_list_file_register_types);


@@ -1,148 +0,0 @@
/*
* QEMU PAM authorization driver
*
* Copyright (c) 2018 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "qemu/osdep.h"
#include "authz/pamacct.h"
#include "authz/trace.h"
#include "qom/object_interfaces.h"
#include <security/pam_appl.h>
static bool qauthz_pam_is_allowed(QAuthZ *authz,
const char *identity,
Error **errp)
{
QAuthZPAM *pauthz = QAUTHZ_PAM(authz);
const struct pam_conv pam_conversation = { 0 };
pam_handle_t *pamh = NULL;
int ret;
trace_qauthz_pam_check(authz, identity, pauthz->service);
ret = pam_start(pauthz->service,
identity,
&pam_conversation,
&pamh);
if (ret != PAM_SUCCESS) {
error_setg(errp, "Unable to start PAM transaction: %s",
pam_strerror(NULL, ret));
return false;
}
ret = pam_acct_mgmt(pamh, PAM_SILENT);
pam_end(pamh, ret);
if (ret != PAM_SUCCESS) {
error_setg(errp, "Unable to authorize user '%s': %s",
identity, pam_strerror(pamh, ret));
return false;
}
return true;
}
static void
qauthz_pam_prop_set_service(Object *obj,
const char *service,
Error **errp G_GNUC_UNUSED)
{
QAuthZPAM *pauthz = QAUTHZ_PAM(obj);
g_free(pauthz->service);
pauthz->service = g_strdup(service);
}
static char *
qauthz_pam_prop_get_service(Object *obj,
Error **errp G_GNUC_UNUSED)
{
QAuthZPAM *pauthz = QAUTHZ_PAM(obj);
return g_strdup(pauthz->service);
}
static void
qauthz_pam_complete(UserCreatable *uc, Error **errp)
{
}
static void
qauthz_pam_finalize(Object *obj)
{
QAuthZPAM *pauthz = QAUTHZ_PAM(obj);
g_free(pauthz->service);
}
static void
qauthz_pam_class_init(ObjectClass *oc, void *data)
{
UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
QAuthZClass *authz = QAUTHZ_CLASS(oc);
ucc->complete = qauthz_pam_complete;
authz->is_allowed = qauthz_pam_is_allowed;
object_class_property_add_str(oc, "service",
qauthz_pam_prop_get_service,
qauthz_pam_prop_set_service,
NULL);
}
QAuthZPAM *qauthz_pam_new(const char *id,
const char *service,
Error **errp)
{
return QAUTHZ_PAM(
object_new_with_props(TYPE_QAUTHZ_PAM,
object_get_objects_root(),
id, errp,
"service", service,
NULL));
}
static const TypeInfo qauthz_pam_info = {
.parent = TYPE_QAUTHZ,
.name = TYPE_QAUTHZ_PAM,
.instance_size = sizeof(QAuthZPAM),
.instance_finalize = qauthz_pam_finalize,
.class_size = sizeof(QAuthZPAMClass),
.class_init = qauthz_pam_class_init,
.interfaces = (InterfaceInfo[]) {
{ TYPE_USER_CREATABLE },
{ }
}
};
static void
qauthz_pam_register_types(void)
{
type_register_static(&qauthz_pam_info);
}
type_init(qauthz_pam_register_types);


@@ -1,115 +0,0 @@
/*
* QEMU simple authorization driver
*
* Copyright (c) 2018 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "qemu/osdep.h"
#include "authz/simple.h"
#include "authz/trace.h"
#include "qom/object_interfaces.h"
static bool qauthz_simple_is_allowed(QAuthZ *authz,
const char *identity,
Error **errp)
{
QAuthZSimple *sauthz = QAUTHZ_SIMPLE(authz);
trace_qauthz_simple_is_allowed(authz, sauthz->identity, identity);
return g_str_equal(identity, sauthz->identity);
}
static void
qauthz_simple_prop_set_identity(Object *obj,
const char *value,
Error **errp G_GNUC_UNUSED)
{
QAuthZSimple *sauthz = QAUTHZ_SIMPLE(obj);
g_free(sauthz->identity);
sauthz->identity = g_strdup(value);
}
static char *
qauthz_simple_prop_get_identity(Object *obj,
Error **errp G_GNUC_UNUSED)
{
QAuthZSimple *sauthz = QAUTHZ_SIMPLE(obj);
return g_strdup(sauthz->identity);
}
static void
qauthz_simple_finalize(Object *obj)
{
QAuthZSimple *sauthz = QAUTHZ_SIMPLE(obj);
g_free(sauthz->identity);
}
static void
qauthz_simple_class_init(ObjectClass *oc, void *data)
{
QAuthZClass *authz = QAUTHZ_CLASS(oc);
authz->is_allowed = qauthz_simple_is_allowed;
object_class_property_add_str(oc, "identity",
qauthz_simple_prop_get_identity,
qauthz_simple_prop_set_identity,
NULL);
}
QAuthZSimple *qauthz_simple_new(const char *id,
const char *identity,
Error **errp)
{
return QAUTHZ_SIMPLE(
object_new_with_props(TYPE_QAUTHZ_SIMPLE,
object_get_objects_root(),
id, errp,
"identity", identity,
NULL));
}
static const TypeInfo qauthz_simple_info = {
.parent = TYPE_QAUTHZ,
.name = TYPE_QAUTHZ_SIMPLE,
.instance_size = sizeof(QAuthZSimple),
.instance_finalize = qauthz_simple_finalize,
.class_size = sizeof(QAuthZSimpleClass),
.class_init = qauthz_simple_class_init,
.interfaces = (InterfaceInfo[]) {
{ TYPE_USER_CREATABLE },
{ }
}
};
static void
qauthz_simple_register_types(void)
{
type_register_static(&qauthz_simple_info);
}
type_init(qauthz_simple_register_types);
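
Editor's sketch, complementing the list example above by showing the id-based lookup path: it uses only qauthz_simple_new() from this file and qauthz_is_allowed_by_id() from the base class. The id "authz0", the identity string and the helper name are illustrative assumptions, not taken from the patch.

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "authz/base.h"
#include "authz/simple.h"

static bool example_is_fred_allowed(const char *client_identity, Error **errp)
{
    /* The constructor registers the object under the given id ... */
    if (!qauthz_simple_new("authz0", "fred@EXAMPLE.COM", errp)) {
        return false;
    }
    /* ... so servers can resolve it by that id, e.g. when a client connects. */
    return qauthz_is_allowed_by_id("authz0", client_identity, errp);
}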


@@ -1,18 +0,0 @@
# See docs/devel/tracing.txt for syntax documentation.
# base.c
qauthz_is_allowed(void *authz, const char *identity, bool allowed) "AuthZ %p check identity=%s allowed=%d"
# simple.c
qauthz_simple_is_allowed(void *authz, const char *wantidentity, const char *gotidentity) "AuthZ simple %p check want identity=%s got identity=%s"
# list.c
qauthz_list_check_rule(void *authz, const char *identity, const char *rule, int format, int policy) "AuthZ list %p check rule=%s identity=%s format=%d policy=%d"
qauthz_list_default_policy(void *authz, const char *identity, int policy) "AuthZ list %p default identity=%s policy=%d"
# listfile.c
qauthz_list_file_load(void *authz, const char *filename) "AuthZ file %p load filename=%s"
qauthz_list_file_refresh(void *authz, const char *filename, int success) "AuthZ file %p load filename=%s success=%d"
# pamacct.c
qauthz_pam_check(void *authz, const char *identity, const char *service) "AuthZ PAM %p identity=%s service=%s"


@@ -1,17 +1,11 @@
common-obj-y += rng.o rng-egd.o
common-obj-$(CONFIG_POSIX) += rng-random.o
common-obj-y += msmouse.o testdev.o
common-obj-$(CONFIG_BRLAPI) += baum.o
baum.o-cflags := $(SDL_CFLAGS)
common-obj-$(CONFIG_TPM) += tpm.o
common-obj-y += hostmem.o hostmem-ram.o
common-obj-$(CONFIG_POSIX) += hostmem-file.o
common-obj-y += cryptodev.o
common-obj-y += cryptodev-builtin.o
ifeq ($(CONFIG_VIRTIO_CRYPTO),y)
common-obj-y += cryptodev-vhost.o
common-obj-$(CONFIG_VHOST_CRYPTO) += cryptodev-vhost-user.o
endif
common-obj-$(CONFIG_LINUX) += hostmem-memfd.o
common-obj-$(CONFIG_LINUX) += hostmem-file.o

backends/baum.c (new file, 649 lines)

@@ -0,0 +1,649 @@
/*
* QEMU Baum Braille Device
*
* Copyright (c) 2008 Samuel Thibault
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "sysemu/char.h"
#include "qemu/timer.h"
#include "hw/usb.h"
#include <brlapi.h>
#include <brlapi_constants.h>
#include <brlapi_keycodes.h>
#ifdef CONFIG_SDL
#include <SDL_syswm.h>
#endif
#if 0
#define DPRINTF(fmt, ...) \
printf(fmt, ## __VA_ARGS__)
#else
#define DPRINTF(fmt, ...)
#endif
#define ESC 0x1B
#define BAUM_REQ_DisplayData 0x01
#define BAUM_REQ_GetVersionNumber 0x05
#define BAUM_REQ_GetKeys 0x08
#define BAUM_REQ_SetMode 0x12
#define BAUM_REQ_SetProtocol 0x15
#define BAUM_REQ_GetDeviceIdentity 0x84
#define BAUM_REQ_GetSerialNumber 0x8A
#define BAUM_RSP_CellCount 0x01
#define BAUM_RSP_VersionNumber 0x05
#define BAUM_RSP_ModeSetting 0x11
#define BAUM_RSP_CommunicationChannel 0x16
#define BAUM_RSP_PowerdownSignal 0x17
#define BAUM_RSP_HorizontalSensors 0x20
#define BAUM_RSP_VerticalSensors 0x21
#define BAUM_RSP_RoutingKeys 0x22
#define BAUM_RSP_Switches 0x23
#define BAUM_RSP_TopKeys 0x24
#define BAUM_RSP_HorizontalSensor 0x25
#define BAUM_RSP_VerticalSensor 0x26
#define BAUM_RSP_RoutingKey 0x27
#define BAUM_RSP_FrontKeys6 0x28
#define BAUM_RSP_BackKeys6 0x29
#define BAUM_RSP_CommandKeys 0x2B
#define BAUM_RSP_FrontKeys10 0x2C
#define BAUM_RSP_BackKeys10 0x2D
#define BAUM_RSP_EntryKeys 0x33
#define BAUM_RSP_JoyStick 0x34
#define BAUM_RSP_ErrorCode 0x40
#define BAUM_RSP_InfoBlock 0x42
#define BAUM_RSP_DeviceIdentity 0x84
#define BAUM_RSP_SerialNumber 0x8A
#define BAUM_RSP_BluetoothName 0x8C
#define BAUM_TL1 0x01
#define BAUM_TL2 0x02
#define BAUM_TL3 0x04
#define BAUM_TR1 0x08
#define BAUM_TR2 0x10
#define BAUM_TR3 0x20
#define BUF_SIZE 256
typedef struct {
CharDriverState *chr;
brlapi_handle_t *brlapi;
int brlapi_fd;
unsigned int x, y;
uint8_t in_buf[BUF_SIZE];
uint8_t in_buf_used;
uint8_t out_buf[BUF_SIZE];
uint8_t out_buf_used, out_buf_ptr;
QEMUTimer *cellCount_timer;
} BaumDriverState;
/* Let's assume NABCC by default */
static const uint8_t nabcc_translation[256] = {
[0] = ' ',
#ifndef BRLAPI_DOTS
#define BRLAPI_DOTS(d1,d2,d3,d4,d5,d6,d7,d8) \
((d1?BRLAPI_DOT1:0)|\
(d2?BRLAPI_DOT2:0)|\
(d3?BRLAPI_DOT3:0)|\
(d4?BRLAPI_DOT4:0)|\
(d5?BRLAPI_DOT5:0)|\
(d6?BRLAPI_DOT6:0)|\
(d7?BRLAPI_DOT7:0)|\
(d8?BRLAPI_DOT8:0))
#endif
[BRLAPI_DOTS(1,0,0,0,0,0,0,0)] = 'a',
[BRLAPI_DOTS(1,1,0,0,0,0,0,0)] = 'b',
[BRLAPI_DOTS(1,0,0,1,0,0,0,0)] = 'c',
[BRLAPI_DOTS(1,0,0,1,1,0,0,0)] = 'd',
[BRLAPI_DOTS(1,0,0,0,1,0,0,0)] = 'e',
[BRLAPI_DOTS(1,1,0,1,0,0,0,0)] = 'f',
[BRLAPI_DOTS(1,1,0,1,1,0,0,0)] = 'g',
[BRLAPI_DOTS(1,1,0,0,1,0,0,0)] = 'h',
[BRLAPI_DOTS(0,1,0,1,0,0,0,0)] = 'i',
[BRLAPI_DOTS(0,1,0,1,1,0,0,0)] = 'j',
[BRLAPI_DOTS(1,0,1,0,0,0,0,0)] = 'k',
[BRLAPI_DOTS(1,1,1,0,0,0,0,0)] = 'l',
[BRLAPI_DOTS(1,0,1,1,0,0,0,0)] = 'm',
[BRLAPI_DOTS(1,0,1,1,1,0,0,0)] = 'n',
[BRLAPI_DOTS(1,0,1,0,1,0,0,0)] = 'o',
[BRLAPI_DOTS(1,1,1,1,0,0,0,0)] = 'p',
[BRLAPI_DOTS(1,1,1,1,1,0,0,0)] = 'q',
[BRLAPI_DOTS(1,1,1,0,1,0,0,0)] = 'r',
[BRLAPI_DOTS(0,1,1,1,0,0,0,0)] = 's',
[BRLAPI_DOTS(0,1,1,1,1,0,0,0)] = 't',
[BRLAPI_DOTS(1,0,1,0,0,1,0,0)] = 'u',
[BRLAPI_DOTS(1,1,1,0,0,1,0,0)] = 'v',
[BRLAPI_DOTS(0,1,0,1,1,1,0,0)] = 'w',
[BRLAPI_DOTS(1,0,1,1,0,1,0,0)] = 'x',
[BRLAPI_DOTS(1,0,1,1,1,1,0,0)] = 'y',
[BRLAPI_DOTS(1,0,1,0,1,1,0,0)] = 'z',
[BRLAPI_DOTS(1,0,0,0,0,0,1,0)] = 'A',
[BRLAPI_DOTS(1,1,0,0,0,0,1,0)] = 'B',
[BRLAPI_DOTS(1,0,0,1,0,0,1,0)] = 'C',
[BRLAPI_DOTS(1,0,0,1,1,0,1,0)] = 'D',
[BRLAPI_DOTS(1,0,0,0,1,0,1,0)] = 'E',
[BRLAPI_DOTS(1,1,0,1,0,0,1,0)] = 'F',
[BRLAPI_DOTS(1,1,0,1,1,0,1,0)] = 'G',
[BRLAPI_DOTS(1,1,0,0,1,0,1,0)] = 'H',
[BRLAPI_DOTS(0,1,0,1,0,0,1,0)] = 'I',
[BRLAPI_DOTS(0,1,0,1,1,0,1,0)] = 'J',
[BRLAPI_DOTS(1,0,1,0,0,0,1,0)] = 'K',
[BRLAPI_DOTS(1,1,1,0,0,0,1,0)] = 'L',
[BRLAPI_DOTS(1,0,1,1,0,0,1,0)] = 'M',
[BRLAPI_DOTS(1,0,1,1,1,0,1,0)] = 'N',
[BRLAPI_DOTS(1,0,1,0,1,0,1,0)] = 'O',
[BRLAPI_DOTS(1,1,1,1,0,0,1,0)] = 'P',
[BRLAPI_DOTS(1,1,1,1,1,0,1,0)] = 'Q',
[BRLAPI_DOTS(1,1,1,0,1,0,1,0)] = 'R',
[BRLAPI_DOTS(0,1,1,1,0,0,1,0)] = 'S',
[BRLAPI_DOTS(0,1,1,1,1,0,1,0)] = 'T',
[BRLAPI_DOTS(1,0,1,0,0,1,1,0)] = 'U',
[BRLAPI_DOTS(1,1,1,0,0,1,1,0)] = 'V',
[BRLAPI_DOTS(0,1,0,1,1,1,1,0)] = 'W',
[BRLAPI_DOTS(1,0,1,1,0,1,1,0)] = 'X',
[BRLAPI_DOTS(1,0,1,1,1,1,1,0)] = 'Y',
[BRLAPI_DOTS(1,0,1,0,1,1,1,0)] = 'Z',
[BRLAPI_DOTS(0,0,1,0,1,1,0,0)] = '0',
[BRLAPI_DOTS(0,1,0,0,0,0,0,0)] = '1',
[BRLAPI_DOTS(0,1,1,0,0,0,0,0)] = '2',
[BRLAPI_DOTS(0,1,0,0,1,0,0,0)] = '3',
[BRLAPI_DOTS(0,1,0,0,1,1,0,0)] = '4',
[BRLAPI_DOTS(0,1,0,0,0,1,0,0)] = '5',
[BRLAPI_DOTS(0,1,1,0,1,0,0,0)] = '6',
[BRLAPI_DOTS(0,1,1,0,1,1,0,0)] = '7',
[BRLAPI_DOTS(0,1,1,0,0,1,0,0)] = '8',
[BRLAPI_DOTS(0,0,1,0,1,0,0,0)] = '9',
[BRLAPI_DOTS(0,0,0,1,0,1,0,0)] = '.',
[BRLAPI_DOTS(0,0,1,1,0,1,0,0)] = '+',
[BRLAPI_DOTS(0,0,1,0,0,1,0,0)] = '-',
[BRLAPI_DOTS(1,0,0,0,0,1,0,0)] = '*',
[BRLAPI_DOTS(0,0,1,1,0,0,0,0)] = '/',
[BRLAPI_DOTS(1,1,1,0,1,1,0,0)] = '(',
[BRLAPI_DOTS(0,1,1,1,1,1,0,0)] = ')',
[BRLAPI_DOTS(1,1,1,1,0,1,0,0)] = '&',
[BRLAPI_DOTS(0,0,1,1,1,1,0,0)] = '#',
[BRLAPI_DOTS(0,0,0,0,0,1,0,0)] = ',',
[BRLAPI_DOTS(0,0,0,0,1,1,0,0)] = ';',
[BRLAPI_DOTS(1,0,0,0,1,1,0,0)] = ':',
[BRLAPI_DOTS(0,1,1,1,0,1,0,0)] = '!',
[BRLAPI_DOTS(1,0,0,1,1,1,0,0)] = '?',
[BRLAPI_DOTS(0,0,0,0,1,0,0,0)] = '"',
[BRLAPI_DOTS(0,0,1,0,0,0,0,0)] ='\'',
[BRLAPI_DOTS(0,0,0,1,0,0,0,0)] = '`',
[BRLAPI_DOTS(0,0,0,1,1,0,1,0)] = '^',
[BRLAPI_DOTS(0,0,0,1,1,0,0,0)] = '~',
[BRLAPI_DOTS(0,1,0,1,0,1,1,0)] = '[',
[BRLAPI_DOTS(1,1,0,1,1,1,1,0)] = ']',
[BRLAPI_DOTS(0,1,0,1,0,1,0,0)] = '{',
[BRLAPI_DOTS(1,1,0,1,1,1,0,0)] = '}',
[BRLAPI_DOTS(1,1,1,1,1,1,0,0)] = '=',
[BRLAPI_DOTS(1,1,0,0,0,1,0,0)] = '<',
[BRLAPI_DOTS(0,0,1,1,1,0,0,0)] = '>',
[BRLAPI_DOTS(1,1,0,1,0,1,0,0)] = '$',
[BRLAPI_DOTS(1,0,0,1,0,1,0,0)] = '%',
[BRLAPI_DOTS(0,0,0,1,0,0,1,0)] = '@',
[BRLAPI_DOTS(1,1,0,0,1,1,0,0)] = '|',
[BRLAPI_DOTS(1,1,0,0,1,1,1,0)] ='\\',
[BRLAPI_DOTS(0,0,0,1,1,1,0,0)] = '_',
};
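/*
 * Editor's note (explanatory, not part of the original file): BRLAPI_DOTS()
 * ORs together the BRLAPI_DOT1..BRLAPI_DOT8 flags of the raised dots, and
 * the resulting byte indexes this table.  For example
 * BRLAPI_DOTS(1,0,0,0,0,0,0,0) is a cell with only dot 1 raised, which NABCC
 * maps to 'a'; adding dot 7, BRLAPI_DOTS(1,0,0,0,0,0,1,0), selects the
 * upper-case 'A' entry above.
 */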
/* The serial port can receive more of our data */
static void baum_accept_input(struct CharDriverState *chr)
{
BaumDriverState *baum = chr->opaque;
int room, first;
if (!baum->out_buf_used)
return;
room = qemu_chr_be_can_write(chr);
if (!room)
return;
if (room > baum->out_buf_used)
room = baum->out_buf_used;
first = BUF_SIZE - baum->out_buf_ptr;
if (room > first) {
qemu_chr_be_write(chr, baum->out_buf + baum->out_buf_ptr, first);
baum->out_buf_ptr = 0;
baum->out_buf_used -= first;
room -= first;
}
qemu_chr_be_write(chr, baum->out_buf + baum->out_buf_ptr, room);
baum->out_buf_ptr += room;
baum->out_buf_used -= room;
}
/* We want to send a packet */
static void baum_write_packet(BaumDriverState *baum, const uint8_t *buf, int len)
{
uint8_t io_buf[1 + 2 * len], *cur = io_buf;
int room;
*cur++ = ESC;
while (len--)
if ((*cur++ = *buf++) == ESC)
*cur++ = ESC;
room = qemu_chr_be_can_write(baum->chr);
len = cur - io_buf;
if (len <= room) {
/* Fits */
qemu_chr_be_write(baum->chr, io_buf, len);
} else {
int first;
uint8_t out;
/* Can't fit all, send what can be, and store the rest. */
qemu_chr_be_write(baum->chr, io_buf, room);
len -= room;
cur = io_buf + room;
if (len > BUF_SIZE - baum->out_buf_used) {
/* Can't even store it, drop the previous data... */
assert(len <= BUF_SIZE);
baum->out_buf_used = 0;
baum->out_buf_ptr = 0;
}
out = baum->out_buf_ptr;
baum->out_buf_used += len;
first = BUF_SIZE - baum->out_buf_ptr;
if (len > first) {
memcpy(baum->out_buf + out, cur, first);
out = 0;
len -= first;
cur += first;
}
memcpy(baum->out_buf + out, cur, len);
}
}
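/*
 * Editor's note (worked example of the framing above, not part of the
 * original file): every packet starts with a literal ESC byte and any ESC
 * inside the payload is doubled, so the payload { 0x01, 0x1B, 0x02 } goes
 * out as { 0x1B, 0x01, 0x1B, 0x1B, 0x02 }.  The receiving side undoes the
 * doubling in baum_eat_packet()'s EAT() macro below.
 */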
/* Called when the other end seems to have a wrong idea of our display size */
static void baum_cellCount_timer_cb(void *opaque)
{
BaumDriverState *baum = opaque;
uint8_t cell_count[] = { BAUM_RSP_CellCount, baum->x * baum->y };
DPRINTF("Timeout waiting for DisplayData, sending cell count\n");
baum_write_packet(baum, cell_count, sizeof(cell_count));
}
/* Try to interpret a whole incoming packet */
static int baum_eat_packet(BaumDriverState *baum, const uint8_t *buf, int len)
{
const uint8_t *cur = buf;
uint8_t req = 0;
if (!len--)
return 0;
if (*cur++ != ESC) {
while (*cur != ESC) {
if (!len--)
return 0;
cur++;
}
DPRINTF("Dropped %td bytes!\n", cur - buf);
}
#define EAT(c) do {\
if (!len--) \
return 0; \
if ((c = *cur++) == ESC) { \
if (!len--) \
return 0; \
if (*cur++ != ESC) { \
DPRINTF("Broken packet %#2x, tossing\n", req); \
if (timer_pending(baum->cellCount_timer)) { \
timer_del(baum->cellCount_timer); \
baum_cellCount_timer_cb(baum); \
} \
return (cur - 2 - buf); \
} \
} \
} while (0)
EAT(req);
switch (req) {
case BAUM_REQ_DisplayData:
{
uint8_t cells[baum->x * baum->y], c;
uint8_t text[baum->x * baum->y];
uint8_t zero[baum->x * baum->y];
int cursor = BRLAPI_CURSOR_OFF;
int i;
/* Allow 100ms to complete the DisplayData packet */
timer_mod(baum->cellCount_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
NANOSECONDS_PER_SECOND / 10);
for (i = 0; i < baum->x * baum->y ; i++) {
EAT(c);
cells[i] = c;
if ((c & (BRLAPI_DOT7|BRLAPI_DOT8))
== (BRLAPI_DOT7|BRLAPI_DOT8)) {
cursor = i + 1;
c &= ~(BRLAPI_DOT7|BRLAPI_DOT8);
}
if (!(c = nabcc_translation[c]))
c = '?';
text[i] = c;
}
timer_del(baum->cellCount_timer);
memset(zero, 0, sizeof(zero));
brlapi_writeArguments_t wa = {
.displayNumber = BRLAPI_DISPLAY_DEFAULT,
.regionBegin = 1,
.regionSize = baum->x * baum->y,
.text = (char *)text,
.textSize = baum->x * baum->y,
.andMask = zero,
.orMask = cells,
.cursor = cursor,
.charset = (char *)"ISO-8859-1",
};
if (brlapi__write(baum->brlapi, &wa) == -1)
brlapi_perror("baum brlapi_write");
break;
}
case BAUM_REQ_SetMode:
{
uint8_t mode, setting;
DPRINTF("SetMode\n");
EAT(mode);
EAT(setting);
/* ignore */
break;
}
case BAUM_REQ_SetProtocol:
{
uint8_t protocol;
DPRINTF("SetProtocol\n");
EAT(protocol);
/* ignore */
break;
}
case BAUM_REQ_GetDeviceIdentity:
{
uint8_t identity[17] = { BAUM_RSP_DeviceIdentity,
'B','a','u','m',' ','V','a','r','i','o' };
DPRINTF("GetDeviceIdentity\n");
identity[11] = '0' + baum->x / 10;
identity[12] = '0' + baum->x % 10;
baum_write_packet(baum, identity, sizeof(identity));
break;
}
case BAUM_REQ_GetVersionNumber:
{
uint8_t version[] = { BAUM_RSP_VersionNumber, 1 }; /* ? */
DPRINTF("GetVersionNumber\n");
baum_write_packet(baum, version, sizeof(version));
break;
}
case BAUM_REQ_GetSerialNumber:
{
uint8_t serial[] = { BAUM_RSP_SerialNumber,
'0','0','0','0','0','0','0','0' };
DPRINTF("GetSerialNumber\n");
baum_write_packet(baum, serial, sizeof(serial));
break;
}
case BAUM_REQ_GetKeys:
{
DPRINTF("Get%0#2x\n", req);
/* ignore */
break;
}
default:
DPRINTF("unrecognized request %0#2x\n", req);
do
if (!len--)
return 0;
while (*cur++ != ESC);
cur--;
break;
}
return cur - buf;
}
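/*
 * Editor's note: the return convention above is 0 for "packet not yet
 * complete, keep buffering" and otherwise the number of bytes consumed,
 * which is what lets the baum_write() loop below shift the remainder of its
 * input buffer.
 */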
/* The other end is writing some data. Store it and try to interpret */
static int baum_write(CharDriverState *chr, const uint8_t *buf, int len)
{
BaumDriverState *baum = chr->opaque;
int tocopy, cur, eaten, orig_len = len;
if (!len)
return 0;
if (!baum->brlapi)
return len;
while (len) {
/* Complete our buffer as much as possible */
tocopy = len;
if (tocopy > BUF_SIZE - baum->in_buf_used)
tocopy = BUF_SIZE - baum->in_buf_used;
memcpy(baum->in_buf + baum->in_buf_used, buf, tocopy);
baum->in_buf_used += tocopy;
buf += tocopy;
len -= tocopy;
/* Interpret it as much as possible */
cur = 0;
while (cur < baum->in_buf_used &&
(eaten = baum_eat_packet(baum, baum->in_buf + cur, baum->in_buf_used - cur)))
cur += eaten;
/* Shift the remainder */
if (cur) {
memmove(baum->in_buf, baum->in_buf + cur, baum->in_buf_used - cur);
baum->in_buf_used -= cur;
}
/* And continue if any data left */
}
return orig_len;
}
/* Send the key code to the other end */
static void baum_send_key(BaumDriverState *baum, uint8_t type, uint8_t value) {
uint8_t packet[] = { type, value };
DPRINTF("writing key %x %x\n", type, value);
baum_write_packet(baum, packet, sizeof(packet));
}
/* We got some data on the BrlAPI socket */
static void baum_chr_read(void *opaque)
{
BaumDriverState *baum = opaque;
brlapi_keyCode_t code;
int ret;
if (!baum->brlapi)
return;
while ((ret = brlapi__readKey(baum->brlapi, 0, &code)) == 1) {
DPRINTF("got key %"BRLAPI_PRIxKEYCODE"\n", code);
/* Emulate */
switch (code & BRLAPI_KEY_TYPE_MASK) {
case BRLAPI_KEY_TYPE_CMD:
switch (code & BRLAPI_KEY_CMD_BLK_MASK) {
case BRLAPI_KEY_CMD_ROUTE:
baum_send_key(baum, BAUM_RSP_RoutingKey, (code & BRLAPI_KEY_CMD_ARG_MASK)+1);
baum_send_key(baum, BAUM_RSP_RoutingKey, 0);
break;
case 0:
switch (code & BRLAPI_KEY_CMD_ARG_MASK) {
case BRLAPI_KEY_CMD_FWINLT:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TL2);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
case BRLAPI_KEY_CMD_FWINRT:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TR2);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
case BRLAPI_KEY_CMD_LNUP:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TR1);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
case BRLAPI_KEY_CMD_LNDN:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TR3);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
case BRLAPI_KEY_CMD_TOP:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TL1|BAUM_TR1);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
case BRLAPI_KEY_CMD_BOT:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TL3|BAUM_TR3);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
case BRLAPI_KEY_CMD_TOP_LEFT:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TL2|BAUM_TR1);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
case BRLAPI_KEY_CMD_BOT_LEFT:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TL2|BAUM_TR3);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
case BRLAPI_KEY_CMD_HOME:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TL2|BAUM_TR1|BAUM_TR3);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
case BRLAPI_KEY_CMD_PREFMENU:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TL1|BAUM_TL3|BAUM_TR1);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
}
}
break;
case BRLAPI_KEY_TYPE_SYM:
break;
}
}
if (ret == -1 && (brlapi_errno != BRLAPI_ERROR_LIBCERR || errno != EINTR)) {
brlapi_perror("baum: brlapi_readKey");
brlapi__closeConnection(baum->brlapi);
g_free(baum->brlapi);
baum->brlapi = NULL;
}
}
static void baum_close(struct CharDriverState *chr)
{
BaumDriverState *baum = chr->opaque;
timer_free(baum->cellCount_timer);
if (baum->brlapi) {
brlapi__closeConnection(baum->brlapi);
g_free(baum->brlapi);
}
g_free(baum);
}
static CharDriverState *chr_baum_init(const char *id,
ChardevBackend *backend,
ChardevReturn *ret,
Error **errp)
{
ChardevCommon *common = backend->u.braille.data;
BaumDriverState *baum;
CharDriverState *chr;
brlapi_handle_t *handle;
#if defined(CONFIG_SDL)
#if SDL_COMPILEDVERSION < SDL_VERSIONNUM(2, 0, 0)
SDL_SysWMinfo info;
#endif
#endif
int tty;
chr = qemu_chr_alloc(common, errp);
if (!chr) {
return NULL;
}
baum = g_malloc0(sizeof(BaumDriverState));
baum->chr = chr;
chr->opaque = baum;
chr->chr_write = baum_write;
chr->chr_accept_input = baum_accept_input;
chr->chr_close = baum_close;
handle = g_malloc0(brlapi_getHandleSize());
baum->brlapi = handle;
baum->brlapi_fd = brlapi__openConnection(handle, NULL, NULL);
if (baum->brlapi_fd == -1) {
error_setg(errp, "brlapi__openConnection: %s",
brlapi_strerror(brlapi_error_location()));
goto fail_handle;
}
baum->cellCount_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, baum_cellCount_timer_cb, baum);
if (brlapi__getDisplaySize(handle, &baum->x, &baum->y) == -1) {
error_setg(errp, "brlapi__getDisplaySize: %s",
brlapi_strerror(brlapi_error_location()));
goto fail;
}
#if defined(CONFIG_SDL)
#if SDL_COMPILEDVERSION < SDL_VERSIONNUM(2, 0, 0)
memset(&info, 0, sizeof(info));
SDL_VERSION(&info.version);
if (SDL_GetWMInfo(&info))
tty = info.info.x11.wmwindow;
else
#endif
#endif
tty = BRLAPI_TTY_DEFAULT;
if (brlapi__enterTtyMode(handle, tty, NULL) == -1) {
error_setg(errp, "brlapi__enterTtyMode: %s",
brlapi_strerror(brlapi_error_location()));
goto fail;
}
qemu_set_fd_handler(baum->brlapi_fd, baum_chr_read, NULL, baum);
return chr;
fail:
timer_free(baum->cellCount_timer);
brlapi__closeConnection(handle);
fail_handle:
g_free(handle);
g_free(chr);
g_free(baum);
return NULL;
}
static void register_types(void)
{
register_char_driver("braille", CHARDEV_BACKEND_KIND_BRAILLE, NULL,
chr_baum_init);
}
type_init(register_types);


@@ -1,401 +0,0 @@
/*
* QEMU Cryptodev backend for QEMU cipher APIs
*
* Copyright (c) 2016 HUAWEI TECHNOLOGIES CO., LTD.
*
* Authors:
* Gonglei <arei.gonglei@huawei.com>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "qemu/osdep.h"
#include "sysemu/cryptodev.h"
#include "hw/boards.h"
#include "qapi/error.h"
#include "standard-headers/linux/virtio_crypto.h"
#include "crypto/cipher.h"
/**
* @TYPE_CRYPTODEV_BACKEND_BUILTIN:
* name of backend that uses QEMU cipher API
*/
#define TYPE_CRYPTODEV_BACKEND_BUILTIN "cryptodev-backend-builtin"
#define CRYPTODEV_BACKEND_BUILTIN(obj) \
OBJECT_CHECK(CryptoDevBackendBuiltin, \
(obj), TYPE_CRYPTODEV_BACKEND_BUILTIN)
typedef struct CryptoDevBackendBuiltin
CryptoDevBackendBuiltin;
typedef struct CryptoDevBackendBuiltinSession {
QCryptoCipher *cipher;
uint8_t direction; /* encryption or decryption */
uint8_t type; /* cipher? hash? aead? */
QTAILQ_ENTRY(CryptoDevBackendBuiltinSession) next;
} CryptoDevBackendBuiltinSession;
/* Max number of symmetric sessions */
#define MAX_NUM_SESSIONS 256
#define CRYPTODEV_BUITLIN_MAX_AUTH_KEY_LEN 512
#define CRYPTODEV_BUITLIN_MAX_CIPHER_KEY_LEN 64
struct CryptoDevBackendBuiltin {
CryptoDevBackend parent_obj;
CryptoDevBackendBuiltinSession *sessions[MAX_NUM_SESSIONS];
};
static void cryptodev_builtin_init(
CryptoDevBackend *backend, Error **errp)
{
/* Only support one queue */
int queues = backend->conf.peers.queues;
CryptoDevBackendClient *cc;
if (queues != 1) {
error_setg(errp,
"Only support one queue in cryptdov-builtin backend");
return;
}
cc = cryptodev_backend_new_client(
"cryptodev-builtin", NULL);
cc->info_str = g_strdup_printf("cryptodev-builtin0");
cc->queue_index = 0;
cc->type = CRYPTODEV_BACKEND_TYPE_BUILTIN;
backend->conf.peers.ccs[0] = cc;
backend->conf.crypto_services =
1u << VIRTIO_CRYPTO_SERVICE_CIPHER |
1u << VIRTIO_CRYPTO_SERVICE_HASH |
1u << VIRTIO_CRYPTO_SERVICE_MAC;
backend->conf.cipher_algo_l = 1u << VIRTIO_CRYPTO_CIPHER_AES_CBC;
backend->conf.hash_algo = 1u << VIRTIO_CRYPTO_HASH_SHA1;
/*
* Set the Maximum length of crypto request.
* Why this value? Just avoid to overflow when
* memory allocation for each crypto request.
*/
backend->conf.max_size = LONG_MAX - sizeof(CryptoDevBackendSymOpInfo);
backend->conf.max_cipher_key_len = CRYPTODEV_BUITLIN_MAX_CIPHER_KEY_LEN;
backend->conf.max_auth_key_len = CRYPTODEV_BUITLIN_MAX_AUTH_KEY_LEN;
cryptodev_backend_set_ready(backend, true);
}
static int
cryptodev_builtin_get_unused_session_index(
CryptoDevBackendBuiltin *builtin)
{
size_t i;
for (i = 0; i < MAX_NUM_SESSIONS; i++) {
if (builtin->sessions[i] == NULL) {
return i;
}
}
return -1;
}
#define AES_KEYSIZE_128 16
#define AES_KEYSIZE_192 24
#define AES_KEYSIZE_256 32
#define AES_KEYSIZE_128_XTS AES_KEYSIZE_256
#define AES_KEYSIZE_256_XTS 64
static int
cryptodev_builtin_get_aes_algo(uint32_t key_len, int mode, Error **errp)
{
int algo;
if (key_len == AES_KEYSIZE_128) {
algo = QCRYPTO_CIPHER_ALG_AES_128;
} else if (key_len == AES_KEYSIZE_192) {
algo = QCRYPTO_CIPHER_ALG_AES_192;
} else if (key_len == AES_KEYSIZE_256) { /* equals AES_KEYSIZE_128_XTS */
if (mode == QCRYPTO_CIPHER_MODE_XTS) {
algo = QCRYPTO_CIPHER_ALG_AES_128;
} else {
algo = QCRYPTO_CIPHER_ALG_AES_256;
}
} else if (key_len == AES_KEYSIZE_256_XTS) {
if (mode == QCRYPTO_CIPHER_MODE_XTS) {
algo = QCRYPTO_CIPHER_ALG_AES_256;
} else {
goto err;
}
} else {
goto err;
}
return algo;
err:
error_setg(errp, "Unsupported key length :%u", key_len);
return -1;
}
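/*
* Note on XTS key sizes: the guest supplies a key twice the AES key
* length, because XTS splits it into two halves (one for the data
* cipher, one for the tweak). That is why a 32-byte key selects
* AES-128 when the mode is XTS, while a 64-byte key is only valid in
* XTS mode and selects AES-256.
*/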
static int cryptodev_builtin_create_cipher_session(
CryptoDevBackendBuiltin *builtin,
CryptoDevBackendSymSessionInfo *sess_info,
Error **errp)
{
int algo;
int mode;
QCryptoCipher *cipher;
int index;
CryptoDevBackendBuiltinSession *sess;
if (sess_info->op_type != VIRTIO_CRYPTO_SYM_OP_CIPHER) {
error_setg(errp, "Unsupported optype :%u", sess_info->op_type);
return -1;
}
index = cryptodev_builtin_get_unused_session_index(builtin);
if (index < 0) {
error_setg(errp, "Total number of sessions created exceeds %u",
MAX_NUM_SESSIONS);
return -1;
}
switch (sess_info->cipher_alg) {
case VIRTIO_CRYPTO_CIPHER_AES_ECB:
mode = QCRYPTO_CIPHER_MODE_ECB;
algo = cryptodev_builtin_get_aes_algo(sess_info->key_len,
mode, errp);
if (algo < 0) {
return -1;
}
break;
case VIRTIO_CRYPTO_CIPHER_AES_CBC:
mode = QCRYPTO_CIPHER_MODE_CBC;
algo = cryptodev_builtin_get_aes_algo(sess_info->key_len,
mode, errp);
if (algo < 0) {
return -1;
}
break;
case VIRTIO_CRYPTO_CIPHER_AES_CTR:
mode = QCRYPTO_CIPHER_MODE_CTR;
algo = cryptodev_builtin_get_aes_algo(sess_info->key_len,
mode, errp);
if (algo < 0) {
return -1;
}
break;
case VIRTIO_CRYPTO_CIPHER_AES_XTS:
mode = QCRYPTO_CIPHER_MODE_XTS;
algo = cryptodev_builtin_get_aes_algo(sess_info->key_len,
mode, errp);
if (algo < 0) {
return -1;
}
break;
case VIRTIO_CRYPTO_CIPHER_3DES_ECB:
mode = QCRYPTO_CIPHER_MODE_ECB;
algo = QCRYPTO_CIPHER_ALG_3DES;
break;
case VIRTIO_CRYPTO_CIPHER_3DES_CBC:
mode = QCRYPTO_CIPHER_MODE_CBC;
algo = QCRYPTO_CIPHER_ALG_3DES;
break;
case VIRTIO_CRYPTO_CIPHER_3DES_CTR:
mode = QCRYPTO_CIPHER_MODE_CTR;
algo = QCRYPTO_CIPHER_ALG_3DES;
break;
default:
error_setg(errp, "Unsupported cipher alg :%u",
sess_info->cipher_alg);
return -1;
}
cipher = qcrypto_cipher_new(algo, mode,
sess_info->cipher_key,
sess_info->key_len,
errp);
if (!cipher) {
return -1;
}
sess = g_new0(CryptoDevBackendBuiltinSession, 1);
sess->cipher = cipher;
sess->direction = sess_info->direction;
sess->type = sess_info->op_type;
builtin->sessions[index] = sess;
return index;
}
static int64_t cryptodev_builtin_sym_create_session(
CryptoDevBackend *backend,
CryptoDevBackendSymSessionInfo *sess_info,
uint32_t queue_index, Error **errp)
{
CryptoDevBackendBuiltin *builtin =
CRYPTODEV_BACKEND_BUILTIN(backend);
int64_t session_id = -1;
int ret;
switch (sess_info->op_code) {
case VIRTIO_CRYPTO_CIPHER_CREATE_SESSION:
ret = cryptodev_builtin_create_cipher_session(
builtin, sess_info, errp);
if (ret < 0) {
return ret;
} else {
session_id = ret;
}
break;
case VIRTIO_CRYPTO_HASH_CREATE_SESSION:
case VIRTIO_CRYPTO_MAC_CREATE_SESSION:
default:
error_setg(errp, "Unsupported opcode :%" PRIu32 "",
sess_info->op_code);
return -1;
}
return session_id;
}
static int cryptodev_builtin_sym_close_session(
CryptoDevBackend *backend,
uint64_t session_id,
uint32_t queue_index, Error **errp)
{
CryptoDevBackendBuiltin *builtin =
CRYPTODEV_BACKEND_BUILTIN(backend);
if (session_id >= MAX_NUM_SESSIONS ||
builtin->sessions[session_id] == NULL) {
error_setg(errp, "Cannot find a valid session id: %" PRIu64 "",
session_id);
return -1;
}
qcrypto_cipher_free(builtin->sessions[session_id]->cipher);
g_free(builtin->sessions[session_id]);
builtin->sessions[session_id] = NULL;
return 0;
}
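/*
* Perform one symmetric crypto request. Returns VIRTIO_CRYPTO_OK on
* success; on failure it returns the negated virtio-crypto status
* (-VIRTIO_CRYPTO_INVSESS, -VIRTIO_CRYPTO_NOTSUPP or
* -VIRTIO_CRYPTO_ERR) describing why the request was rejected.
*/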
static int cryptodev_builtin_sym_operation(
CryptoDevBackend *backend,
CryptoDevBackendSymOpInfo *op_info,
uint32_t queue_index, Error **errp)
{
CryptoDevBackendBuiltin *builtin =
CRYPTODEV_BACKEND_BUILTIN(backend);
CryptoDevBackendBuiltinSession *sess;
int ret;
if (op_info->session_id >= MAX_NUM_SESSIONS ||
builtin->sessions[op_info->session_id] == NULL) {
error_setg(errp, "Cannot find a valid session id: %" PRIu64 "",
op_info->session_id);
return -VIRTIO_CRYPTO_INVSESS;
}
if (op_info->op_type == VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING) {
error_setg(errp,
"Algorithm chain is unsupported for cryptdoev-builtin");
return -VIRTIO_CRYPTO_NOTSUPP;
}
sess = builtin->sessions[op_info->session_id];
if (op_info->iv_len > 0) {
ret = qcrypto_cipher_setiv(sess->cipher, op_info->iv,
op_info->iv_len, errp);
if (ret < 0) {
return -VIRTIO_CRYPTO_ERR;
}
}
if (sess->direction == VIRTIO_CRYPTO_OP_ENCRYPT) {
ret = qcrypto_cipher_encrypt(sess->cipher, op_info->src,
op_info->dst, op_info->src_len, errp);
if (ret < 0) {
return -VIRTIO_CRYPTO_ERR;
}
} else {
ret = qcrypto_cipher_decrypt(sess->cipher, op_info->src,
op_info->dst, op_info->src_len, errp);
if (ret < 0) {
return -VIRTIO_CRYPTO_ERR;
}
}
return VIRTIO_CRYPTO_OK;
}
static void cryptodev_builtin_cleanup(
CryptoDevBackend *backend,
Error **errp)
{
CryptoDevBackendBuiltin *builtin =
CRYPTODEV_BACKEND_BUILTIN(backend);
size_t i;
int queues = backend->conf.peers.queues;
CryptoDevBackendClient *cc;
for (i = 0; i < MAX_NUM_SESSIONS; i++) {
if (builtin->sessions[i] != NULL) {
cryptodev_builtin_sym_close_session(
backend, i, 0, errp);
}
}
for (i = 0; i < queues; i++) {
cc = backend->conf.peers.ccs[i];
if (cc) {
cryptodev_backend_free_client(cc);
backend->conf.peers.ccs[i] = NULL;
}
}
cryptodev_backend_set_ready(backend, false);
}
static void
cryptodev_builtin_class_init(ObjectClass *oc, void *data)
{
CryptoDevBackendClass *bc = CRYPTODEV_BACKEND_CLASS(oc);
bc->init = cryptodev_builtin_init;
bc->cleanup = cryptodev_builtin_cleanup;
bc->create_session = cryptodev_builtin_sym_create_session;
bc->close_session = cryptodev_builtin_sym_close_session;
bc->do_sym_op = cryptodev_builtin_sym_operation;
}
static const TypeInfo cryptodev_builtin_info = {
.name = TYPE_CRYPTODEV_BACKEND_BUILTIN,
.parent = TYPE_CRYPTODEV_BACKEND,
.class_init = cryptodev_builtin_class_init,
.instance_size = sizeof(CryptoDevBackendBuiltin),
};
static void
cryptodev_builtin_register_types(void)
{
type_register_static(&cryptodev_builtin_info);
}
type_init(cryptodev_builtin_register_types);
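For orientation only (not part of this change): a builtin cryptodev backend of the type registered above is typically wired to a virtio-crypto device on the QEMU command line roughly as follows, with the id values being purely illustrative:

-object cryptodev-backend-builtin,id=cryptodev0
-device virtio-crypto-pci,id=crypto0,cryptodev=cryptodev0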


@@ -1,380 +0,0 @@
/*
* QEMU Cryptodev backend for QEMU cipher APIs
*
* Copyright (c) 2016 HUAWEI TECHNOLOGIES CO., LTD.
*
* Authors:
* Gonglei <arei.gonglei@huawei.com>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "qemu/osdep.h"
#include "hw/boards.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "qemu/error-report.h"
#include "hw/virtio/vhost-user.h"
#include "standard-headers/linux/virtio_crypto.h"
#include "sysemu/cryptodev-vhost.h"
#include "chardev/char-fe.h"
#include "sysemu/cryptodev-vhost-user.h"
/**
* @TYPE_CRYPTODEV_BACKEND_VHOST_USER:
* name of backend that uses vhost user server
*/
#define TYPE_CRYPTODEV_BACKEND_VHOST_USER "cryptodev-vhost-user"
#define CRYPTODEV_BACKEND_VHOST_USER(obj) \
OBJECT_CHECK(CryptoDevBackendVhostUser, \
(obj), TYPE_CRYPTODEV_BACKEND_VHOST_USER)
typedef struct CryptoDevBackendVhostUser {
CryptoDevBackend parent_obj;
VhostUserState vhost_user;
CharBackend chr;
char *chr_name;
bool opened;
CryptoDevBackendVhost *vhost_crypto[MAX_CRYPTO_QUEUE_NUM];
} CryptoDevBackendVhostUser;
static int
cryptodev_vhost_user_running(
CryptoDevBackendVhost *crypto)
{
return crypto ? 1 : 0;
}
CryptoDevBackendVhost *
cryptodev_vhost_user_get_vhost(
CryptoDevBackendClient *cc,
CryptoDevBackend *b,
uint16_t queue)
{
CryptoDevBackendVhostUser *s =
CRYPTODEV_BACKEND_VHOST_USER(b);
assert(cc->type == CRYPTODEV_BACKEND_TYPE_VHOST_USER);
assert(queue < MAX_CRYPTO_QUEUE_NUM);
return s->vhost_crypto[queue];
}
static void cryptodev_vhost_user_stop(int queues,
CryptoDevBackendVhostUser *s)
{
size_t i;
for (i = 0; i < queues; i++) {
if (!cryptodev_vhost_user_running(s->vhost_crypto[i])) {
continue;
}
cryptodev_vhost_cleanup(s->vhost_crypto[i]);
s->vhost_crypto[i] = NULL;
}
}
static int
cryptodev_vhost_user_start(int queues,
CryptoDevBackendVhostUser *s)
{
CryptoDevBackendVhostOptions options;
CryptoDevBackend *b = CRYPTODEV_BACKEND(s);
int max_queues;
size_t i;
for (i = 0; i < queues; i++) {
if (cryptodev_vhost_user_running(s->vhost_crypto[i])) {
continue;
}
options.opaque = &s->vhost_user;
options.backend_type = VHOST_BACKEND_TYPE_USER;
options.cc = b->conf.peers.ccs[i];
s->vhost_crypto[i] = cryptodev_vhost_init(&options);
if (!s->vhost_crypto[i]) {
error_report("failed to init vhost_crypto for queue %zu", i);
goto err;
}
if (i == 0) {
max_queues =
cryptodev_vhost_get_max_queues(s->vhost_crypto[i]);
if (queues > max_queues) {
error_report("you are asking more queues than supported: %d",
max_queues);
goto err;
}
}
}
return 0;
err:
cryptodev_vhost_user_stop(i + 1, s);
return -1;
}
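/*
* Error handling above: if queue i fails to initialise,
* cryptodev_vhost_user_stop(i + 1, s) tears down queues 0..i; the
* slot that failed is still NULL and is skipped by the
* cryptodev_vhost_user_running() check inside stop().
*/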
static Chardev *
cryptodev_vhost_claim_chardev(CryptoDevBackendVhostUser *s,
Error **errp)
{
Chardev *chr;
if (s->chr_name == NULL) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
"chardev", "a valid character device");
return NULL;
}
chr = qemu_chr_find(s->chr_name);
if (chr == NULL) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", s->chr_name);
return NULL;
}
return chr;
}
static void cryptodev_vhost_user_event(void *opaque, int event)
{
CryptoDevBackendVhostUser *s = opaque;
CryptoDevBackend *b = CRYPTODEV_BACKEND(s);
int queues = b->conf.peers.queues;
assert(queues < MAX_CRYPTO_QUEUE_NUM);
switch (event) {
case CHR_EVENT_OPENED:
if (cryptodev_vhost_user_start(queues, s) < 0) {
exit(1);
}
b->ready = true;
break;
case CHR_EVENT_CLOSED:
b->ready = false;
cryptodev_vhost_user_stop(queues, s);
break;
}
}
static void cryptodev_vhost_user_init(
CryptoDevBackend *backend, Error **errp)
{
int queues = backend->conf.peers.queues;
size_t i;
Error *local_err = NULL;
Chardev *chr;
CryptoDevBackendClient *cc;
CryptoDevBackendVhostUser *s =
CRYPTODEV_BACKEND_VHOST_USER(backend);
chr = cryptodev_vhost_claim_chardev(s, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
s->opened = true;
for (i = 0; i < queues; i++) {
cc = cryptodev_backend_new_client(
"cryptodev-vhost-user", NULL);
cc->info_str = g_strdup_printf("cryptodev-vhost-user%zu to %s ",
i, chr->label);
cc->queue_index = i;
cc->type = CRYPTODEV_BACKEND_TYPE_VHOST_USER;
backend->conf.peers.ccs[i] = cc;
if (i == 0) {
if (!qemu_chr_fe_init(&s->chr, chr, &local_err)) {
error_propagate(errp, local_err);
return;
}
}
}
if (!vhost_user_init(&s->vhost_user, &s->chr, errp)) {
return;
}
qemu_chr_fe_set_handlers(&s->chr, NULL, NULL,
cryptodev_vhost_user_event, NULL, s, NULL, true);
backend->conf.crypto_services =
1u << VIRTIO_CRYPTO_SERVICE_CIPHER |
1u << VIRTIO_CRYPTO_SERVICE_HASH |
1u << VIRTIO_CRYPTO_SERVICE_MAC;
backend->conf.cipher_algo_l = 1u << VIRTIO_CRYPTO_CIPHER_AES_CBC;
backend->conf.hash_algo = 1u << VIRTIO_CRYPTO_HASH_SHA1;
backend->conf.max_size = UINT64_MAX;
backend->conf.max_cipher_key_len = VHOST_USER_MAX_CIPHER_KEY_LEN;
backend->conf.max_auth_key_len = VHOST_USER_MAX_AUTH_KEY_LEN;
}
static int64_t cryptodev_vhost_user_sym_create_session(
CryptoDevBackend *backend,
CryptoDevBackendSymSessionInfo *sess_info,
uint32_t queue_index, Error **errp)
{
CryptoDevBackendClient *cc =
backend->conf.peers.ccs[queue_index];
CryptoDevBackendVhost *vhost_crypto;
uint64_t session_id = 0;
int ret;
vhost_crypto = cryptodev_vhost_user_get_vhost(cc, backend, queue_index);
if (vhost_crypto) {
struct vhost_dev *dev = &(vhost_crypto->dev);
ret = dev->vhost_ops->vhost_crypto_create_session(dev,
sess_info,
&session_id);
if (ret < 0) {
return -1;
} else {
return session_id;
}
}
return -1;
}
static int cryptodev_vhost_user_sym_close_session(
CryptoDevBackend *backend,
uint64_t session_id,
uint32_t queue_index, Error **errp)
{
CryptoDevBackendClient *cc =
backend->conf.peers.ccs[queue_index];
CryptoDevBackendVhost *vhost_crypto;
int ret;
vhost_crypto = cryptodev_vhost_user_get_vhost(cc, backend, queue_index);
if (vhost_crypto) {
struct vhost_dev *dev = &(vhost_crypto->dev);
ret = dev->vhost_ops->vhost_crypto_close_session(dev,
session_id);
if (ret < 0) {
return -1;
} else {
return 0;
}
}
return -1;
}
static void cryptodev_vhost_user_cleanup(
CryptoDevBackend *backend,
Error **errp)
{
CryptoDevBackendVhostUser *s =
CRYPTODEV_BACKEND_VHOST_USER(backend);
size_t i;
int queues = backend->conf.peers.queues;
CryptoDevBackendClient *cc;
cryptodev_vhost_user_stop(queues, s);
for (i = 0; i < queues; i++) {
cc = backend->conf.peers.ccs[i];
if (cc) {
cryptodev_backend_free_client(cc);
backend->conf.peers.ccs[i] = NULL;
}
}
vhost_user_cleanup(&s->vhost_user);
}
static void cryptodev_vhost_user_set_chardev(Object *obj,
const char *value, Error **errp)
{
CryptoDevBackendVhostUser *s =
CRYPTODEV_BACKEND_VHOST_USER(obj);
if (s->opened) {
error_setg(errp, QERR_PERMISSION_DENIED);
} else {
g_free(s->chr_name);
s->chr_name = g_strdup(value);
}
}
static char *
cryptodev_vhost_user_get_chardev(Object *obj, Error **errp)
{
CryptoDevBackendVhostUser *s =
CRYPTODEV_BACKEND_VHOST_USER(obj);
Chardev *chr = qemu_chr_fe_get_driver(&s->chr);
if (chr && chr->label) {
return g_strdup(chr->label);
}
return NULL;
}
static void cryptodev_vhost_user_instance_int(Object *obj)
{
object_property_add_str(obj, "chardev",
cryptodev_vhost_user_get_chardev,
cryptodev_vhost_user_set_chardev,
NULL);
}
static void cryptodev_vhost_user_finalize(Object *obj)
{
CryptoDevBackendVhostUser *s =
CRYPTODEV_BACKEND_VHOST_USER(obj);
qemu_chr_fe_deinit(&s->chr, false);
g_free(s->chr_name);
}
static void
cryptodev_vhost_user_class_init(ObjectClass *oc, void *data)
{
CryptoDevBackendClass *bc = CRYPTODEV_BACKEND_CLASS(oc);
bc->init = cryptodev_vhost_user_init;
bc->cleanup = cryptodev_vhost_user_cleanup;
bc->create_session = cryptodev_vhost_user_sym_create_session;
bc->close_session = cryptodev_vhost_user_sym_close_session;
bc->do_sym_op = NULL;
}
static const TypeInfo cryptodev_vhost_user_info = {
.name = TYPE_CRYPTODEV_BACKEND_VHOST_USER,
.parent = TYPE_CRYPTODEV_BACKEND,
.class_init = cryptodev_vhost_user_class_init,
.instance_init = cryptodev_vhost_user_instance_int,
.instance_finalize = cryptodev_vhost_user_finalize,
.instance_size = sizeof(CryptoDevBackendVhostUser),
};
static void
cryptodev_vhost_user_register_types(void)
{
type_register_static(&cryptodev_vhost_user_info);
}
type_init(cryptodev_vhost_user_register_types);


@@ -1,347 +0,0 @@
/*
* QEMU Cryptodev backend for QEMU cipher APIs
*
* Copyright (c) 2016 HUAWEI TECHNOLOGIES CO., LTD.
*
* Authors:
* Gonglei <arei.gonglei@huawei.com>
* Jay Zhou <jianjay.zhou@huawei.com>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "qemu/osdep.h"
#include "hw/virtio/virtio-bus.h"
#include "sysemu/cryptodev-vhost.h"
#ifdef CONFIG_VHOST_CRYPTO
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "qemu/error-report.h"
#include "hw/virtio/virtio-crypto.h"
#include "sysemu/cryptodev-vhost-user.h"
uint64_t
cryptodev_vhost_get_max_queues(
CryptoDevBackendVhost *crypto)
{
return crypto->dev.max_queues;
}
void cryptodev_vhost_cleanup(CryptoDevBackendVhost *crypto)
{
vhost_dev_cleanup(&crypto->dev);
g_free(crypto);
}
struct CryptoDevBackendVhost *
cryptodev_vhost_init(
CryptoDevBackendVhostOptions *options)
{
int r;
CryptoDevBackendVhost *crypto;
crypto = g_new(CryptoDevBackendVhost, 1);
crypto->dev.max_queues = 1;
crypto->dev.nvqs = 1;
crypto->dev.vqs = crypto->vqs;
crypto->cc = options->cc;
crypto->dev.protocol_features = 0;
crypto->backend = -1;
/* vhost-user needs vq_index to initiate a specific queue pair */
crypto->dev.vq_index = crypto->cc->queue_index * crypto->dev.nvqs;
r = vhost_dev_init(&crypto->dev, options->opaque, options->backend_type, 0);
if (r < 0) {
goto fail;
}
return crypto;
fail:
g_free(crypto);
return NULL;
}
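/*
* Per-queue start/stop helpers: start_one enables the device's host
* notifiers and then starts the vhost device; stop_one undoes both
* steps in reverse order.
*/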
static int
cryptodev_vhost_start_one(CryptoDevBackendVhost *crypto,
VirtIODevice *dev)
{
int r;
crypto->dev.nvqs = 1;
crypto->dev.vqs = crypto->vqs;
r = vhost_dev_enable_notifiers(&crypto->dev, dev);
if (r < 0) {
goto fail_notifiers;
}
r = vhost_dev_start(&crypto->dev, dev);
if (r < 0) {
goto fail_start;
}
return 0;
fail_start:
vhost_dev_disable_notifiers(&crypto->dev, dev);
fail_notifiers:
return r;
}
static void
cryptodev_vhost_stop_one(CryptoDevBackendVhost *crypto,
VirtIODevice *dev)
{
vhost_dev_stop(&crypto->dev, dev);
vhost_dev_disable_notifiers(&crypto->dev, dev);
}
CryptoDevBackendVhost *
cryptodev_get_vhost(CryptoDevBackendClient *cc,
CryptoDevBackend *b,
uint16_t queue)
{
CryptoDevBackendVhost *vhost_crypto = NULL;
if (!cc) {
return NULL;
}
switch (cc->type) {
#if defined(CONFIG_VHOST_USER) && defined(CONFIG_LINUX)
case CRYPTODEV_BACKEND_TYPE_VHOST_USER:
vhost_crypto = cryptodev_vhost_user_get_vhost(cc, b, queue);
break;
#endif
default:
break;
}
return vhost_crypto;
}
static void
cryptodev_vhost_set_vq_index(CryptoDevBackendVhost *crypto,
int vq_index)
{
crypto->dev.vq_index = vq_index;
}
static int
vhost_set_vring_enable(CryptoDevBackendClient *cc,
CryptoDevBackend *b,
uint16_t queue, int enable)
{
CryptoDevBackendVhost *crypto =
cryptodev_get_vhost(cc, b, queue);
const VhostOps *vhost_ops;
cc->vring_enable = enable;
if (!crypto) {
return 0;
}
vhost_ops = crypto->dev.vhost_ops;
if (vhost_ops->vhost_set_vring_enable) {
return vhost_ops->vhost_set_vring_enable(&crypto->dev, enable);
}
return 0;
}
int cryptodev_vhost_start(VirtIODevice *dev, int total_queues)
{
VirtIOCrypto *vcrypto = VIRTIO_CRYPTO(dev);
BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
VirtioBusState *vbus = VIRTIO_BUS(qbus);
VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
int r, e;
int i;
CryptoDevBackend *b = vcrypto->cryptodev;
CryptoDevBackendVhost *vhost_crypto;
CryptoDevBackendClient *cc;
if (!k->set_guest_notifiers) {
error_report("binding does not support guest notifiers");
return -ENOSYS;
}
for (i = 0; i < total_queues; i++) {
cc = b->conf.peers.ccs[i];
vhost_crypto = cryptodev_get_vhost(cc, b, i);
cryptodev_vhost_set_vq_index(vhost_crypto, i);
/* Suppress guest notifier masking on vhost-user, because vhost-user
* does not support interrupt masking/unmasking properly.
*/
if (cc->type == CRYPTODEV_BACKEND_TYPE_VHOST_USER) {
dev->use_guest_notifier_mask = false;
}
}
r = k->set_guest_notifiers(qbus->parent, total_queues, true);
if (r < 0) {
error_report("error binding guest notifier: %d", -r);
goto err;
}
for (i = 0; i < total_queues; i++) {
cc = b->conf.peers.ccs[i];
vhost_crypto = cryptodev_get_vhost(cc, b, i);
r = cryptodev_vhost_start_one(vhost_crypto, dev);
if (r < 0) {
goto err_start;
}
if (cc->vring_enable) {
/* restore vring enable state */
r = vhost_set_vring_enable(cc, b, i, cc->vring_enable);
if (r < 0) {
goto err_start;
}
}
}
return 0;
err_start:
while (--i >= 0) {
cc = b->conf.peers.ccs[i];
vhost_crypto = cryptodev_get_vhost(cc, b, i);
cryptodev_vhost_stop_one(vhost_crypto, dev);
}
e = k->set_guest_notifiers(qbus->parent, total_queues, false);
if (e < 0) {
error_report("vhost guest notifier cleanup failed: %d", e);
}
err:
return r;
}
void cryptodev_vhost_stop(VirtIODevice *dev, int total_queues)
{
BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
VirtioBusState *vbus = VIRTIO_BUS(qbus);
VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
VirtIOCrypto *vcrypto = VIRTIO_CRYPTO(dev);
CryptoDevBackend *b = vcrypto->cryptodev;
CryptoDevBackendVhost *vhost_crypto;
CryptoDevBackendClient *cc;
size_t i;
int r;
for (i = 0; i < total_queues; i++) {
cc = b->conf.peers.ccs[i];
vhost_crypto = cryptodev_get_vhost(cc, b, i);
cryptodev_vhost_stop_one(vhost_crypto, dev);
}
r = k->set_guest_notifiers(qbus->parent, total_queues, false);
if (r < 0) {
error_report("vhost guest notifier cleanup failed: %d", r);
}
assert(r >= 0);
}
void cryptodev_vhost_virtqueue_mask(VirtIODevice *dev,
int queue,
int idx, bool mask)
{
VirtIOCrypto *vcrypto = VIRTIO_CRYPTO(dev);
CryptoDevBackend *b = vcrypto->cryptodev;
CryptoDevBackendVhost *vhost_crypto;
CryptoDevBackendClient *cc;
assert(queue < MAX_CRYPTO_QUEUE_NUM);
cc = b->conf.peers.ccs[queue];
vhost_crypto = cryptodev_get_vhost(cc, b, queue);
vhost_virtqueue_mask(&vhost_crypto->dev, dev, idx, mask);
}
bool cryptodev_vhost_virtqueue_pending(VirtIODevice *dev,
int queue, int idx)
{
VirtIOCrypto *vcrypto = VIRTIO_CRYPTO(dev);
CryptoDevBackend *b = vcrypto->cryptodev;
CryptoDevBackendVhost *vhost_crypto;
CryptoDevBackendClient *cc;
assert(queue < MAX_CRYPTO_QUEUE_NUM);
cc = b->conf.peers.ccs[queue];
vhost_crypto = cryptodev_get_vhost(cc, b, queue);
return vhost_virtqueue_pending(&vhost_crypto->dev, idx);
}
#else
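/*
* Stub implementations used when QEMU is built without
* CONFIG_VHOST_CRYPTO, so callers can link unconditionally.
*/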
uint64_t
cryptodev_vhost_get_max_queues(CryptoDevBackendVhost *crypto)
{
return 0;
}
void cryptodev_vhost_cleanup(CryptoDevBackendVhost *crypto)
{
}
struct CryptoDevBackendVhost *
cryptodev_vhost_init(CryptoDevBackendVhostOptions *options)
{
return NULL;
}
CryptoDevBackendVhost *
cryptodev_get_vhost(CryptoDevBackendClient *cc,
CryptoDevBackend *b,
uint16_t queue)
{
return NULL;
}
int cryptodev_vhost_start(VirtIODevice *dev, int total_queues)
{
return -1;
}
void cryptodev_vhost_stop(VirtIODevice *dev, int total_queues)
{
}
void cryptodev_vhost_virtqueue_mask(VirtIODevice *dev,
int queue,
int idx, bool mask)
{
}
bool cryptodev_vhost_virtqueue_pending(VirtIODevice *dev,
int queue, int idx)
{
return false;
}
#endif


@@ -1,269 +0,0 @@
/*
* QEMU Crypto Device Implementation
*
* Copyright (c) 2016 HUAWEI TECHNOLOGIES CO., LTD.
*
* Authors:
* Gonglei <arei.gonglei@huawei.com>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "qemu/osdep.h"
#include "sysemu/cryptodev.h"
#include "hw/boards.h"
#include "qapi/error.h"
#include "qapi/visitor.h"
#include "qemu/config-file.h"
#include "qom/object_interfaces.h"
#include "hw/virtio/virtio-crypto.h"
static QTAILQ_HEAD(, CryptoDevBackendClient) crypto_clients;
CryptoDevBackendClient *
cryptodev_backend_new_client(const char *model,
const char *name)
{
CryptoDevBackendClient *cc;
cc = g_malloc0(sizeof(CryptoDevBackendClient));
cc->model = g_strdup(model);
if (name) {
cc->name = g_strdup(name);
}
QTAILQ_INSERT_TAIL(&crypto_clients, cc, next);
return cc;
}
void cryptodev_backend_free_client(
CryptoDevBackendClient *cc)
{
QTAILQ_REMOVE(&crypto_clients, cc, next);
g_free(cc->name);
g_free(cc->model);
g_free(cc->info_str);
g_free(cc);
}
void cryptodev_backend_cleanup(
CryptoDevBackend *backend,
Error **errp)
{
CryptoDevBackendClass *bc =
CRYPTODEV_BACKEND_GET_CLASS(backend);
if (bc->cleanup) {
bc->cleanup(backend, errp);
}
}
int64_t cryptodev_backend_sym_create_session(
CryptoDevBackend *backend,
CryptoDevBackendSymSessionInfo *sess_info,
uint32_t queue_index, Error **errp)
{
CryptoDevBackendClass *bc =
CRYPTODEV_BACKEND_GET_CLASS(backend);
if (bc->create_session) {
return bc->create_session(backend, sess_info, queue_index, errp);
}
return -1;
}
int cryptodev_backend_sym_close_session(
CryptoDevBackend *backend,
uint64_t session_id,
uint32_t queue_index, Error **errp)
{
CryptoDevBackendClass *bc =
CRYPTODEV_BACKEND_GET_CLASS(backend);
if (bc->close_session) {
return bc->close_session(backend, session_id, queue_index, errp);
}
return -1;
}
static int cryptodev_backend_sym_operation(
CryptoDevBackend *backend,
CryptoDevBackendSymOpInfo *op_info,
uint32_t queue_index, Error **errp)
{
CryptoDevBackendClass *bc =
CRYPTODEV_BACKEND_GET_CLASS(backend);
if (bc->do_sym_op) {
return bc->do_sym_op(backend, op_info, queue_index, errp);
}
return -VIRTIO_CRYPTO_ERR;
}
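/*
* Entry point used by the virtio-crypto device to hand a request to
* the backend. Only symmetric requests (CRYPTODEV_BACKEND_ALG_SYM)
* are dispatched; anything else fails with -VIRTIO_CRYPTO_NOTSUPP.
*/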
int cryptodev_backend_crypto_operation(
CryptoDevBackend *backend,
void *opaque,
uint32_t queue_index, Error **errp)
{
VirtIOCryptoReq *req = opaque;
if (req->flags == CRYPTODEV_BACKEND_ALG_SYM) {
CryptoDevBackendSymOpInfo *op_info;
op_info = req->u.sym_op_info;
return cryptodev_backend_sym_operation(backend,
op_info, queue_index, errp);
} else {
error_setg(errp, "Unsupported cryptodev alg type: %" PRIu32 "",
req->flags);
return -VIRTIO_CRYPTO_NOTSUPP;
}
return -VIRTIO_CRYPTO_ERR;
}
static void
cryptodev_backend_get_queues(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
CryptoDevBackend *backend = CRYPTODEV_BACKEND(obj);
uint32_t value = backend->conf.peers.queues;
visit_type_uint32(v, name, &value, errp);
}
static void
cryptodev_backend_set_queues(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
CryptoDevBackend *backend = CRYPTODEV_BACKEND(obj);
Error *local_err = NULL;
uint32_t value;
visit_type_uint32(v, name, &value, &local_err);
if (local_err) {
goto out;
}
if (!value) {
error_setg(&local_err, "Property '%s.%s' doesn't take value '%"
PRIu32 "'", object_get_typename(obj), name, value);
goto out;
}
backend->conf.peers.queues = value;
out:
error_propagate(errp, local_err);
}
static void
cryptodev_backend_complete(UserCreatable *uc, Error **errp)
{
CryptoDevBackend *backend = CRYPTODEV_BACKEND(uc);
CryptoDevBackendClass *bc = CRYPTODEV_BACKEND_GET_CLASS(uc);
Error *local_err = NULL;
if (bc->init) {
bc->init(backend, &local_err);
if (local_err) {
goto out;
}
}
return;
out:
error_propagate(errp, local_err);
}
void cryptodev_backend_set_used(CryptoDevBackend *backend, bool used)
{
backend->is_used = used;
}
bool cryptodev_backend_is_used(CryptoDevBackend *backend)
{
return backend->is_used;
}
void cryptodev_backend_set_ready(CryptoDevBackend *backend, bool ready)
{
backend->ready = ready;
}
bool cryptodev_backend_is_ready(CryptoDevBackend *backend)
{
return backend->ready;
}
static bool
cryptodev_backend_can_be_deleted(UserCreatable *uc)
{
return !cryptodev_backend_is_used(CRYPTODEV_BACKEND(uc));
}
static void cryptodev_backend_instance_init(Object *obj)
{
object_property_add(obj, "queues", "uint32",
cryptodev_backend_get_queues,
cryptodev_backend_set_queues,
NULL, NULL, NULL);
/* Initialize devices' queues property to 1 */
object_property_set_int(obj, 1, "queues", NULL);
}
static void cryptodev_backend_finalize(Object *obj)
{
CryptoDevBackend *backend = CRYPTODEV_BACKEND(obj);
cryptodev_backend_cleanup(backend, NULL);
}
static void
cryptodev_backend_class_init(ObjectClass *oc, void *data)
{
UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
ucc->complete = cryptodev_backend_complete;
ucc->can_be_deleted = cryptodev_backend_can_be_deleted;
QTAILQ_INIT(&crypto_clients);
}
static const TypeInfo cryptodev_backend_info = {
.name = TYPE_CRYPTODEV_BACKEND,
.parent = TYPE_OBJECT,
.instance_size = sizeof(CryptoDevBackend),
.instance_init = cryptodev_backend_instance_init,
.instance_finalize = cryptodev_backend_finalize,
.class_size = sizeof(CryptoDevBackendClass),
.class_init = cryptodev_backend_class_init,
.interfaces = (InterfaceInfo[]) {
{ TYPE_USER_CREATABLE },
{ }
}
};
static void
cryptodev_backend_register_types(void)
{
type_register_static(&cryptodev_backend_info);
}
type_init(cryptodev_backend_register_types);


@@ -12,7 +12,6 @@
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "qemu/error-report.h"
#include "sysemu/hostmem.h"
#include "sysemu/sysemu.h"
#include "qom/object_interfaces.h"
@@ -32,21 +31,14 @@ typedef struct HostMemoryBackendFile HostMemoryBackendFile;
struct HostMemoryBackendFile {
HostMemoryBackend parent_obj;
bool share;
char *mem_path;
uint64_t align;
bool discard_data;
bool is_pmem;
};
static void
file_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
{
#ifndef CONFIG_POSIX
error_setg(errp, "backend '%s' not supported on this host",
object_get_typename(OBJECT(backend)));
#else
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(backend);
gchar *name;
if (!backend->size) {
error_setg(errp, "can't create backend with size 0");
@@ -56,41 +48,30 @@ file_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
error_setg(errp, "mem-path property not set");
return;
}
/*
* Verify pmem file size since starting a guest with an incorrect size
* leads to confusing failures inside the guest.
*/
if (fb->is_pmem) {
Error *local_err = NULL;
uint64_t size;
size = qemu_get_pmem_size(fb->mem_path, &local_err);
if (!size) {
error_propagate(errp, local_err);
return;
}
if (backend->size > size) {
error_setg(errp, "size property %" PRIu64 " is larger than "
"pmem file \"%s\" size %" PRIu64, backend->size,
fb->mem_path, size);
return;
}
#ifndef CONFIG_LINUX
error_setg(errp, "-mem-path not supported on this host");
#else
if (!memory_region_size(&backend->mr)) {
gchar *path;
backend->force_prealloc = mem_prealloc;
path = object_get_canonical_path(OBJECT(backend));
memory_region_init_ram_from_file(&backend->mr, OBJECT(backend),
path,
backend->size, fb->share,
fb->mem_path, errp);
g_free(path);
}
backend->force_prealloc = mem_prealloc;
name = host_memory_backend_get_name(backend);
memory_region_init_ram_from_file(&backend->mr, OBJECT(backend),
name,
backend->size, fb->align,
(backend->share ? RAM_SHARED : 0) |
(fb->is_pmem ? RAM_PMEM : 0),
fb->mem_path, errp);
g_free(name);
#endif
}
static void
file_backend_class_init(ObjectClass *oc, void *data)
{
HostMemoryBackendClass *bc = MEMORY_BACKEND_CLASS(oc);
bc->alloc = file_backend_memory_alloc;
}
static char *get_mem_path(Object *o, Error **errp)
{
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
@@ -103,128 +84,41 @@ static void set_mem_path(Object *o, const char *str, Error **errp)
HostMemoryBackend *backend = MEMORY_BACKEND(o);
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
if (host_memory_backend_mr_inited(backend)) {
error_setg(errp, "cannot change property 'mem-path' of %s",
object_get_typename(o));
if (memory_region_size(&backend->mr)) {
error_setg(errp, "cannot change property value");
return;
}
g_free(fb->mem_path);
fb->mem_path = g_strdup(str);
}
static bool file_memory_backend_get_discard_data(Object *o, Error **errp)
{
return MEMORY_BACKEND_FILE(o)->discard_data;
}
static void file_memory_backend_set_discard_data(Object *o, bool value,
Error **errp)
{
MEMORY_BACKEND_FILE(o)->discard_data = value;
}
static void file_memory_backend_get_align(Object *o, Visitor *v,
const char *name, void *opaque,
Error **errp)
static bool file_memory_backend_get_share(Object *o, Error **errp)
{
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
uint64_t val = fb->align;
visit_type_size(v, name, &val, errp);
return fb->share;
}
static void file_memory_backend_set_align(Object *o, Visitor *v,
const char *name, void *opaque,
Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(o);
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
Error *local_err = NULL;
uint64_t val;
if (host_memory_backend_mr_inited(backend)) {
error_setg(&local_err, "cannot change property '%s' of %s",
name, object_get_typename(o));
goto out;
}
visit_type_size(v, name, &val, &local_err);
if (local_err) {
goto out;
}
fb->align = val;
out:
error_propagate(errp, local_err);
}
static bool file_memory_backend_get_pmem(Object *o, Error **errp)
{
return MEMORY_BACKEND_FILE(o)->is_pmem;
}
static void file_memory_backend_set_pmem(Object *o, bool value, Error **errp)
static void file_memory_backend_set_share(Object *o, bool value, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(o);
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
if (host_memory_backend_mr_inited(backend)) {
error_setg(errp, "cannot change property 'pmem' of %s.",
object_get_typename(o));
if (memory_region_size(&backend->mr)) {
error_setg(errp, "cannot change property value");
return;
}
#ifndef CONFIG_LIBPMEM
if (value) {
Error *local_err = NULL;
error_setg(&local_err,
"Lack of libpmem support while setting the 'pmem=on'"
" of %s. We can't ensure data persistence.",
object_get_typename(o));
error_propagate(errp, local_err);
return;
}
#endif
fb->is_pmem = value;
}
static void file_backend_unparent(Object *obj)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(obj);
if (host_memory_backend_mr_inited(backend) && fb->discard_data) {
void *ptr = memory_region_get_ram_ptr(&backend->mr);
uint64_t sz = memory_region_size(&backend->mr);
qemu_madvise(ptr, sz, QEMU_MADV_REMOVE);
}
fb->share = value;
}
static void
file_backend_class_init(ObjectClass *oc, void *data)
file_backend_instance_init(Object *o)
{
HostMemoryBackendClass *bc = MEMORY_BACKEND_CLASS(oc);
bc->alloc = file_backend_memory_alloc;
oc->unparent = file_backend_unparent;
object_class_property_add_bool(oc, "discard-data",
file_memory_backend_get_discard_data, file_memory_backend_set_discard_data,
&error_abort);
object_class_property_add_str(oc, "mem-path",
get_mem_path, set_mem_path,
&error_abort);
object_class_property_add(oc, "align", "int",
file_memory_backend_get_align,
file_memory_backend_set_align,
NULL, NULL, &error_abort);
object_class_property_add_bool(oc, "pmem",
file_memory_backend_get_pmem, file_memory_backend_set_pmem,
&error_abort);
object_property_add_bool(o, "share",
file_memory_backend_get_share,
file_memory_backend_set_share, NULL);
object_property_add_str(o, "mem-path", get_mem_path,
set_mem_path, NULL);
}
static void file_backend_instance_finalize(Object *o)
@@ -238,6 +132,7 @@ static const TypeInfo file_backend_info = {
.name = TYPE_MEMORY_BACKEND_FILE,
.parent = TYPE_MEMORY_BACKEND,
.class_init = file_backend_class_init,
.instance_init = file_backend_instance_init,
.instance_finalize = file_backend_instance_finalize,
.instance_size = sizeof(HostMemoryBackendFile),
};


@@ -1,181 +0,0 @@
/*
* QEMU host memfd memory backend
*
* Copyright (C) 2018 Red Hat Inc
*
* Authors:
* Marc-André Lureau <marcandre.lureau@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "sysemu/hostmem.h"
#include "sysemu/sysemu.h"
#include "qom/object_interfaces.h"
#include "qemu/memfd.h"
#include "qapi/error.h"
#define TYPE_MEMORY_BACKEND_MEMFD "memory-backend-memfd"
#define MEMORY_BACKEND_MEMFD(obj) \
OBJECT_CHECK(HostMemoryBackendMemfd, (obj), TYPE_MEMORY_BACKEND_MEMFD)
typedef struct HostMemoryBackendMemfd HostMemoryBackendMemfd;
struct HostMemoryBackendMemfd {
HostMemoryBackend parent_obj;
bool hugetlb;
uint64_t hugetlbsize;
bool seal;
};
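/*
* When "seal" is enabled the memfd is created with
* F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_SEAL, so the backing file can
* no longer be resized (nor further seals added) after creation.
*/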
static void
memfd_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
{
HostMemoryBackendMemfd *m = MEMORY_BACKEND_MEMFD(backend);
char *name;
int fd;
if (!backend->size) {
error_setg(errp, "can't create backend with size 0");
return;
}
backend->force_prealloc = mem_prealloc;
fd = qemu_memfd_create(TYPE_MEMORY_BACKEND_MEMFD, backend->size,
m->hugetlb, m->hugetlbsize, m->seal ?
F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_SEAL : 0,
errp);
if (fd == -1) {
return;
}
name = host_memory_backend_get_name(backend);
memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend),
name, backend->size,
backend->share, fd, errp);
g_free(name);
}
static bool
memfd_backend_get_hugetlb(Object *o, Error **errp)
{
return MEMORY_BACKEND_MEMFD(o)->hugetlb;
}
static void
memfd_backend_set_hugetlb(Object *o, bool value, Error **errp)
{
MEMORY_BACKEND_MEMFD(o)->hugetlb = value;
}
static void
memfd_backend_set_hugetlbsize(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
HostMemoryBackendMemfd *m = MEMORY_BACKEND_MEMFD(obj);
Error *local_err = NULL;
uint64_t value;
if (host_memory_backend_mr_inited(MEMORY_BACKEND(obj))) {
error_setg(&local_err, "cannot change property value");
goto out;
}
visit_type_size(v, name, &value, &local_err);
if (local_err) {
goto out;
}
if (!value) {
error_setg(&local_err, "Property '%s.%s' doesn't take value '%"
PRIu64 "'", object_get_typename(obj), name, value);
goto out;
}
m->hugetlbsize = value;
out:
error_propagate(errp, local_err);
}
static void
memfd_backend_get_hugetlbsize(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
HostMemoryBackendMemfd *m = MEMORY_BACKEND_MEMFD(obj);
uint64_t value = m->hugetlbsize;
visit_type_size(v, name, &value, errp);
}
static bool
memfd_backend_get_seal(Object *o, Error **errp)
{
return MEMORY_BACKEND_MEMFD(o)->seal;
}
static void
memfd_backend_set_seal(Object *o, bool value, Error **errp)
{
MEMORY_BACKEND_MEMFD(o)->seal = value;
}
static void
memfd_backend_instance_init(Object *obj)
{
HostMemoryBackendMemfd *m = MEMORY_BACKEND_MEMFD(obj);
/* default to sealed file */
m->seal = true;
MEMORY_BACKEND(m)->share = true;
}
static void
memfd_backend_class_init(ObjectClass *oc, void *data)
{
HostMemoryBackendClass *bc = MEMORY_BACKEND_CLASS(oc);
bc->alloc = memfd_backend_memory_alloc;
if (qemu_memfd_check(MFD_HUGETLB)) {
object_class_property_add_bool(oc, "hugetlb",
memfd_backend_get_hugetlb,
memfd_backend_set_hugetlb,
&error_abort);
object_class_property_set_description(oc, "hugetlb",
"Use huge pages",
&error_abort);
object_class_property_add(oc, "hugetlbsize", "int",
memfd_backend_get_hugetlbsize,
memfd_backend_set_hugetlbsize,
NULL, NULL, &error_abort);
object_class_property_set_description(oc, "hugetlbsize",
"Huge pages size (ex: 2M, 1G)",
&error_abort);
}
object_class_property_add_bool(oc, "seal",
memfd_backend_get_seal,
memfd_backend_set_seal,
&error_abort);
object_class_property_set_description(oc, "seal",
"Seal growing & shrinking",
&error_abort);
}
static const TypeInfo memfd_backend_info = {
.name = TYPE_MEMORY_BACKEND_MEMFD,
.parent = TYPE_MEMORY_BACKEND,
.instance_init = memfd_backend_instance_init,
.class_init = memfd_backend_class_init,
.instance_size = sizeof(HostMemoryBackendMemfd),
};
static void register_types(void)
{
if (qemu_memfd_check(MFD_ALLOW_SEALING)) {
type_register_static(&memfd_backend_info);
}
}
type_init(register_types);


@@ -16,20 +16,21 @@
#define TYPE_MEMORY_BACKEND_RAM "memory-backend-ram"
static void
ram_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
{
char *name;
char *path;
if (!backend->size) {
error_setg(errp, "can't create backend with size 0");
return;
}
name = host_memory_backend_get_name(backend);
memory_region_init_ram_shared_nomigrate(&backend->mr, OBJECT(backend), name,
backend->size, backend->share, errp);
g_free(name);
path = object_get_canonical_path_component(OBJECT(backend));
memory_region_init_ram(&backend->mr, OBJECT(backend), path,
backend->size, errp);
g_free(path);
}
static void


@@ -9,16 +9,15 @@
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/hostmem.h"
#include "hw/boards.h"
#include "qapi/error.h"
#include "qapi/qapi-builtin-visit.h"
#include "qapi/visitor.h"
#include "qapi-types.h"
#include "qapi-visit.h"
#include "qemu/config-file.h"
#include "qom/object_interfaces.h"
#include "qemu/mmap-alloc.h"
#ifdef CONFIG_NUMA
#include <numaif.h>
@@ -28,16 +27,6 @@ QEMU_BUILD_BUG_ON(HOST_MEM_POLICY_BIND != MPOL_BIND);
QEMU_BUILD_BUG_ON(HOST_MEM_POLICY_INTERLEAVE != MPOL_INTERLEAVE);
#endif
char *
host_memory_backend_get_name(HostMemoryBackend *backend)
{
if (!backend->use_canonical_path) {
return object_get_canonical_path_component(OBJECT(backend));
}
return object_get_canonical_path(OBJECT(backend));
}
static void
host_memory_backend_get_size(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
@@ -56,9 +45,8 @@ host_memory_backend_set_size(Object *obj, Visitor *v, const char *name,
Error *local_err = NULL;
uint64_t value;
if (host_memory_backend_mr_inited(backend)) {
error_setg(&local_err, "cannot change property %s of %s ",
name, object_get_typename(obj));
if (memory_region_size(&backend->mr)) {
error_setg(&local_err, "cannot change property value");
goto out;
}
@@ -67,9 +55,8 @@ host_memory_backend_set_size(Object *obj, Visitor *v, const char *name,
goto out;
}
if (!value) {
error_setg(&local_err,
"property '%s' of %s doesn't take value '%" PRIu64 "'",
name, object_get_typename(obj), value);
error_setg(&local_err, "Property '%s.%s' doesn't take value '%"
PRIu64 "'", object_get_typename(obj), name, value);
goto out;
}
backend->size = value;
@@ -77,6 +64,14 @@ out:
error_propagate(errp, local_err);
}
static uint16List **host_memory_append_node(uint16List **node,
unsigned long value)
{
*node = g_malloc0(sizeof(**node));
(*node)->value = value;
return &(*node)->next;
}
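/*
* host_memory_append_node() above allocates a single uint16List node
* for @value and returns the address of its "next" pointer, letting
* the host-nodes getter below append nodes in order while walking
* the bitmap.
*/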
static void
host_memory_backend_get_host_nodes(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
@@ -87,13 +82,12 @@ host_memory_backend_get_host_nodes(Object *obj, Visitor *v, const char *name,
unsigned long value;
value = find_first_bit(backend->host_nodes, MAX_NODES);
if (value == MAX_NODES) {
goto ret;
}
*node = g_malloc0(sizeof(**node));
(*node)->value = value;
node = &(*node)->next;
node = host_memory_append_node(node, value);
if (value == MAX_NODES) {
goto out;
}
do {
value = find_next_bit(backend->host_nodes, MAX_NODES, value + 1);
@@ -101,12 +95,10 @@ host_memory_backend_get_host_nodes(Object *obj, Visitor *v, const char *name,
break;
}
*node = g_malloc0(sizeof(**node));
(*node)->value = value;
node = &(*node)->next;
node = host_memory_append_node(node, value);
} while (true);
ret:
out:
visit_type_uint16List(v, name, &host_nodes, errp);
}
@@ -116,23 +108,14 @@ host_memory_backend_set_host_nodes(Object *obj, Visitor *v, const char *name,
{
#ifdef CONFIG_NUMA
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
uint16List *l, *host_nodes = NULL;
uint16List *l = NULL;
visit_type_uint16List(v, name, &host_nodes, errp);
visit_type_uint16List(v, name, &l, errp);
for (l = host_nodes; l; l = l->next) {
if (l->value >= MAX_NODES) {
error_setg(errp, "Invalid host-nodes value: %d", l->value);
goto out;
}
}
for (l = host_nodes; l; l = l->next) {
while (l) {
bitmap_set(backend->host_nodes, l->value, 1);
l = l->next;
}
out:
qapi_free_uint16List(host_nodes);
#else
error_setg(errp, "NUMA node binding are not supported by this QEMU");
#endif
@@ -169,7 +152,7 @@ static void host_memory_backend_set_merge(Object *obj, bool value, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
if (!host_memory_backend_mr_inited(backend)) {
if (!memory_region_size(&backend->mr)) {
backend->merge = value;
return;
}
@@ -195,7 +178,7 @@ static void host_memory_backend_set_dump(Object *obj, bool value, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
if (!host_memory_backend_mr_inited(backend)) {
if (!memory_region_size(&backend->mr)) {
backend->dump = value;
return;
}
@@ -231,7 +214,7 @@ static void host_memory_backend_set_prealloc(Object *obj, bool value,
}
}
if (!host_memory_backend_mr_inited(backend)) {
if (!memory_region_size(&backend->mr)) {
backend->prealloc = value;
return;
}
@@ -241,7 +224,7 @@ static void host_memory_backend_set_prealloc(Object *obj, bool value,
void *ptr = memory_region_get_ram_ptr(&backend->mr);
uint64_t sz = memory_region_size(&backend->mr);
os_mem_prealloc(fd, ptr, sz, smp_cpus, &local_err);
os_mem_prealloc(fd, ptr, sz, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
@@ -258,25 +241,32 @@ static void host_memory_backend_init(Object *obj)
backend->merge = machine_mem_merge(machine);
backend->dump = machine_dump_guest_core(machine);
backend->prealloc = mem_prealloc;
object_property_add_bool(obj, "merge",
host_memory_backend_get_merge,
host_memory_backend_set_merge, NULL);
object_property_add_bool(obj, "dump",
host_memory_backend_get_dump,
host_memory_backend_set_dump, NULL);
object_property_add_bool(obj, "prealloc",
host_memory_backend_get_prealloc,
host_memory_backend_set_prealloc, NULL);
object_property_add(obj, "size", "int",
host_memory_backend_get_size,
host_memory_backend_set_size, NULL, NULL, NULL);
object_property_add(obj, "host-nodes", "int",
host_memory_backend_get_host_nodes,
host_memory_backend_set_host_nodes, NULL, NULL, NULL);
object_property_add_enum(obj, "policy", "HostMemPolicy",
HostMemPolicy_lookup,
host_memory_backend_get_policy,
host_memory_backend_set_policy, NULL);
}
static void host_memory_backend_post_init(Object *obj)
MemoryRegion *
host_memory_backend_get_memory(HostMemoryBackend *backend, Error **errp)
{
object_apply_compat_props(obj);
}
bool host_memory_backend_mr_inited(HostMemoryBackend *backend)
{
/*
* NOTE: We forbid zero-length memory backend, so here zero means
* "we haven't inited the backend memory region yet".
*/
return memory_region_size(&backend->mr) != 0;
}
MemoryRegion *host_memory_backend_get_memory(HostMemoryBackend *backend)
{
return host_memory_backend_mr_inited(backend) ? &backend->mr : NULL;
return memory_region_size(&backend->mr) ? &backend->mr : NULL;
}
void host_memory_backend_set_mapped(HostMemoryBackend *backend, bool mapped)
@@ -289,23 +279,6 @@ bool host_memory_backend_is_mapped(HostMemoryBackend *backend)
return backend->is_mapped;
}
#ifdef __linux__
size_t host_memory_backend_pagesize(HostMemoryBackend *memdev)
{
Object *obj = OBJECT(memdev);
char *path = object_property_get_str(obj, "mem-path", NULL);
size_t pagesize = qemu_mempath_getpagesize(path);
g_free(path);
return pagesize;
}
#else
size_t host_memory_backend_pagesize(HostMemoryBackend *memdev)
{
return getpagesize();
}
#endif
static void
host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
{
@@ -348,7 +321,7 @@ host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
return;
} else if (maxnode == 0 && backend->policy != MPOL_DEFAULT) {
error_setg(errp, "host-nodes must be set for policy %s",
HostMemPolicy_str(backend->policy));
HostMemPolicy_lookup[backend->policy]);
return;
}
@@ -375,7 +348,7 @@ host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
*/
if (backend->prealloc) {
os_mem_prealloc(memory_region_get_fd(&backend->mr), ptr, sz,
smp_cpus, &local_err);
&local_err);
if (local_err) {
goto out;
}
@@ -386,7 +359,7 @@ out:
}
static bool
host_memory_backend_can_be_deleted(UserCreatable *uc)
host_memory_backend_can_be_deleted(UserCreatable *uc, Error **errp)
{
if (host_memory_backend_is_mapped(MEMORY_BACKEND(uc))) {
return false;
@@ -395,41 +368,6 @@ host_memory_backend_can_be_deleted(UserCreatable *uc)
}
}
static bool host_memory_backend_get_share(Object *o, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(o);
return backend->share;
}
static void host_memory_backend_set_share(Object *o, bool value, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(o);
if (host_memory_backend_mr_inited(backend)) {
error_setg(errp, "cannot change property value");
return;
}
backend->share = value;
}
static bool
host_memory_backend_get_use_canonical_path(Object *obj, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
return backend->use_canonical_path;
}
static void
host_memory_backend_set_use_canonical_path(Object *obj, bool value,
Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
backend->use_canonical_path = value;
}
static void
host_memory_backend_class_init(ObjectClass *oc, void *data)
{
@@ -437,48 +375,6 @@ host_memory_backend_class_init(ObjectClass *oc, void *data)
ucc->complete = host_memory_backend_memory_complete;
ucc->can_be_deleted = host_memory_backend_can_be_deleted;
object_class_property_add_bool(oc, "merge",
host_memory_backend_get_merge,
host_memory_backend_set_merge, &error_abort);
object_class_property_set_description(oc, "merge",
"Mark memory as mergeable", &error_abort);
object_class_property_add_bool(oc, "dump",
host_memory_backend_get_dump,
host_memory_backend_set_dump, &error_abort);
object_class_property_set_description(oc, "dump",
"Set to 'off' to exclude from core dump", &error_abort);
object_class_property_add_bool(oc, "prealloc",
host_memory_backend_get_prealloc,
host_memory_backend_set_prealloc, &error_abort);
object_class_property_set_description(oc, "prealloc",
"Preallocate memory", &error_abort);
object_class_property_add(oc, "size", "int",
host_memory_backend_get_size,
host_memory_backend_set_size,
NULL, NULL, &error_abort);
object_class_property_set_description(oc, "size",
"Size of the memory region (ex: 500M)", &error_abort);
object_class_property_add(oc, "host-nodes", "int",
host_memory_backend_get_host_nodes,
host_memory_backend_set_host_nodes,
NULL, NULL, &error_abort);
object_class_property_set_description(oc, "host-nodes",
"Binds memory to the list of NUMA host nodes", &error_abort);
object_class_property_add_enum(oc, "policy", "HostMemPolicy",
&HostMemPolicy_lookup,
host_memory_backend_get_policy,
host_memory_backend_set_policy, &error_abort);
object_class_property_set_description(oc, "policy",
"Set the NUMA policy", &error_abort);
object_class_property_add_bool(oc, "share",
host_memory_backend_get_share, host_memory_backend_set_share,
&error_abort);
object_class_property_set_description(oc, "share",
"Mark the memory as private to QEMU or shared", &error_abort);
object_class_property_add_bool(oc, "x-use-canonical-path-for-ramblock-id",
host_memory_backend_get_use_canonical_path,
host_memory_backend_set_use_canonical_path, &error_abort);
}
static const TypeInfo host_memory_backend_info = {
@@ -489,7 +385,6 @@ static const TypeInfo host_memory_backend_info = {
.class_init = host_memory_backend_class_init,
.instance_size = sizeof(HostMemoryBackend),
.instance_init = host_memory_backend_init,
.instance_post_init = host_memory_backend_post_init,
.interfaces = (InterfaceInfo[]) {
{ TYPE_USER_CREATABLE },
{ }

backends/msmouse.c (new file)

@@ -0,0 +1,185 @@
/*
* QEMU Microsoft serial mouse emulation
*
* Copyright (c) 2008 Lubomir Rintel
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "sysemu/char.h"
#include "ui/console.h"
#include "ui/input.h"
#define MSMOUSE_LO6(n) ((n) & 0x3f)
#define MSMOUSE_HI2(n) (((n) & 0xc0) >> 6)
typedef struct {
CharDriverState *chr;
QemuInputHandlerState *hs;
int axis[INPUT_AXIS__MAX];
bool btns[INPUT_BUTTON__MAX];
bool btnc[INPUT_BUTTON__MAX];
uint8_t outbuf[32];
int outlen;
} MouseState;
static void msmouse_chr_accept_input(CharDriverState *chr)
{
MouseState *mouse = chr->opaque;
int len;
len = qemu_chr_be_can_write(chr);
if (len > mouse->outlen) {
len = mouse->outlen;
}
if (!len) {
return;
}
qemu_chr_be_write(chr, mouse->outbuf, len);
mouse->outlen -= len;
if (mouse->outlen) {
memmove(mouse->outbuf, mouse->outbuf + len, mouse->outlen);
}
}
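/*
* Wire format used below: each movement/button update is a 3-byte
* packet. Byte 0 has bit 6 set as a sync marker and carries the
* left/right button states plus the top two bits of the X and Y
* deltas; bytes 1 and 2 carry the low six bits of X and Y. A change
* of the middle button appends a fourth byte.
*/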
static void msmouse_queue_event(MouseState *mouse)
{
unsigned char bytes[4] = { 0x40, 0x00, 0x00, 0x00 };
int dx, dy, count = 3;
dx = mouse->axis[INPUT_AXIS_X];
mouse->axis[INPUT_AXIS_X] = 0;
dy = mouse->axis[INPUT_AXIS_Y];
mouse->axis[INPUT_AXIS_Y] = 0;
/* Movement deltas */
bytes[0] |= (MSMOUSE_HI2(dy) << 2) | MSMOUSE_HI2(dx);
bytes[1] |= MSMOUSE_LO6(dx);
bytes[2] |= MSMOUSE_LO6(dy);
/* Buttons */
bytes[0] |= (mouse->btns[INPUT_BUTTON_LEFT] ? 0x20 : 0x00);
bytes[0] |= (mouse->btns[INPUT_BUTTON_RIGHT] ? 0x10 : 0x00);
if (mouse->btns[INPUT_BUTTON_MIDDLE] ||
mouse->btnc[INPUT_BUTTON_MIDDLE]) {
bytes[3] |= (mouse->btns[INPUT_BUTTON_MIDDLE] ? 0x20 : 0x00);
mouse->btnc[INPUT_BUTTON_MIDDLE] = false;
count = 4;
}
if (mouse->outlen <= sizeof(mouse->outbuf) - count) {
memcpy(mouse->outbuf + mouse->outlen, bytes, count);
mouse->outlen += count;
} else {
/* queue full -> drop event */
}
}
static void msmouse_input_event(DeviceState *dev, QemuConsole *src,
InputEvent *evt)
{
MouseState *mouse = (MouseState *)dev;
InputMoveEvent *move;
InputBtnEvent *btn;
switch (evt->type) {
case INPUT_EVENT_KIND_REL:
move = evt->u.rel.data;
mouse->axis[move->axis] += move->value;
break;
case INPUT_EVENT_KIND_BTN:
btn = evt->u.btn.data;
mouse->btns[btn->button] = btn->down;
mouse->btnc[btn->button] = true;
break;
default:
/* keep gcc happy */
break;
}
}
static void msmouse_input_sync(DeviceState *dev)
{
MouseState *mouse = (MouseState *)dev;
msmouse_queue_event(mouse);
msmouse_chr_accept_input(mouse->chr);
}
static int msmouse_chr_write (struct CharDriverState *s, const uint8_t *buf, int len)
{
/* Ignore writes to mouse port */
return len;
}
static void msmouse_chr_close (struct CharDriverState *chr)
{
MouseState *mouse = chr->opaque;
qemu_input_handler_unregister(mouse->hs);
g_free(mouse);
}
static QemuInputHandler msmouse_handler = {
.name = "QEMU Microsoft Mouse",
.mask = INPUT_EVENT_MASK_BTN | INPUT_EVENT_MASK_REL,
.event = msmouse_input_event,
.sync = msmouse_input_sync,
};
static CharDriverState *qemu_chr_open_msmouse(const char *id,
ChardevBackend *backend,
ChardevReturn *ret,
Error **errp)
{
ChardevCommon *common = backend->u.msmouse.data;
MouseState *mouse;
CharDriverState *chr;
chr = qemu_chr_alloc(common, errp);
if (!chr) {
return NULL;
}
chr->chr_write = msmouse_chr_write;
chr->chr_close = msmouse_chr_close;
chr->chr_accept_input = msmouse_chr_accept_input;
chr->explicit_be_open = true;
mouse = g_new0(MouseState, 1);
mouse->hs = qemu_input_handler_register((DeviceState *)mouse,
&msmouse_handler);
mouse->chr = chr;
chr->opaque = mouse;
return chr;
}
static void register_types(void)
{
register_char_driver("msmouse", CHARDEV_BACKEND_KIND_MSMOUSE, NULL,
qemu_chr_open_msmouse);
}
type_init(register_types);


@@ -12,9 +12,10 @@
#include "qemu/osdep.h"
#include "sysemu/rng.h"
#include "chardev/char-fe.h"
#include "sysemu/char.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "hw/qdev.h" /* just for DEFINE_PROP_CHR */
#define TYPE_RNG_EGD "rng-egd"
#define RNG_EGD(obj) OBJECT_CHECK(RngEgd, (obj), TYPE_RNG_EGD)
@@ -23,7 +24,7 @@ typedef struct RngEgd
{
RngBackend parent;
CharBackend chr;
CharDriverState *chr;
char *chr_name;
} RngEgd;
@@ -40,9 +41,7 @@ static void rng_egd_request_entropy(RngBackend *b, RngRequest *req)
header[0] = 0x02;
header[1] = len;
/* XXX this blocks entire thread. Rewrite to use
* qemu_chr_fe_write and background I/O callbacks */
qemu_chr_fe_write_all(&s->chr, header, sizeof(header));
qemu_chr_fe_write(s->chr, header, sizeof(header));
size -= len;
}
@@ -86,7 +85,6 @@ static void rng_egd_chr_read(void *opaque, const uint8_t *buf, int size)
static void rng_egd_opened(RngBackend *b, Error **errp)
{
RngEgd *s = RNG_EGD(b);
Chardev *chr;
if (s->chr_name == NULL) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
@@ -94,19 +92,21 @@ static void rng_egd_opened(RngBackend *b, Error **errp)
return;
}
chr = qemu_chr_find(s->chr_name);
if (chr == NULL) {
s->chr = qemu_chr_find(s->chr_name);
if (s->chr == NULL) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", s->chr_name);
return;
}
if (!qemu_chr_fe_init(&s->chr, chr, errp)) {
if (qemu_chr_fe_claim(s->chr) != 0) {
error_setg(errp, QERR_DEVICE_IN_USE, s->chr_name);
return;
}
/* FIXME we should resubmit pending requests when the CDS reconnects. */
qemu_chr_fe_set_handlers(&s->chr, rng_egd_chr_can_read,
rng_egd_chr_read, NULL, NULL, s, NULL, true);
qemu_chr_add_handlers(s->chr, rng_egd_chr_can_read, rng_egd_chr_read,
NULL, s);
}
static void rng_egd_set_chardev(Object *obj, const char *value, Error **errp)
@@ -125,10 +125,9 @@ static void rng_egd_set_chardev(Object *obj, const char *value, Error **errp)
static char *rng_egd_get_chardev(Object *obj, Error **errp)
{
RngEgd *s = RNG_EGD(obj);
Chardev *chr = qemu_chr_fe_get_driver(&s->chr);
if (chr && chr->label) {
return g_strdup(chr->label);
if (s->chr && s->chr->label) {
return g_strdup(s->chr->label);
}
return NULL;
@@ -145,7 +144,11 @@ static void rng_egd_finalize(Object *obj)
{
RngEgd *s = RNG_EGD(obj);
qemu_chr_fe_deinit(&s->chr, false);
if (s->chr) {
qemu_chr_add_handlers(s->chr, NULL, NULL, NULL, NULL);
qemu_chr_fe_release(s->chr);
}
g_free(s->chr_name);
}

backends/testdev.c (new file, 136 lines)

@@ -0,0 +1,136 @@
/*
 * QEMU Char Device for testsuite control
 *
 * Copyright (c) 2014 Red Hat, Inc.
 *
 * Author: Paolo Bonzini <pbonzini@redhat.com>
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */

#include "qemu/osdep.h"
#include "qemu-common.h"
#include "sysemu/char.h"

#define BUF_SIZE 32

typedef struct {
    CharDriverState *chr;
    uint8_t in_buf[32];
    int in_buf_used;
} TestdevCharState;
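/*
 * Control protocol, as inferred from the parser below: a packet is optional
 * whitespace, an optional decimal argument, then a command character.  The
 * only command handled is 'q', which exits QEMU with status ((arg << 1) | 1);
 * any other character is consumed and ignored.
 */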
/* Try to interpret a whole incoming packet */
static int testdev_eat_packet(TestdevCharState *testdev)
{
    const uint8_t *cur = testdev->in_buf;
    int len = testdev->in_buf_used;
    uint8_t c;
    int arg;

#define EAT(c) do { \
    if (!len--) {   \
        return 0;   \
    }               \
    c = *cur++;     \
} while (0)

    EAT(c);

    while (isspace(c)) {
        EAT(c);
    }

    arg = 0;
    while (isdigit(c)) {
        arg = arg * 10 + c - '0';
        EAT(c);
    }

    while (isspace(c)) {
        EAT(c);
    }

    switch (c) {
    case 'q':
        exit((arg << 1) | 1);
        break;
    default:
        break;
    }

    return cur - testdev->in_buf;
}

/* The other end is writing some data. Store it and try to interpret */
static int testdev_write(CharDriverState *chr, const uint8_t *buf, int len)
{
    TestdevCharState *testdev = chr->opaque;
    int tocopy, eaten, orig_len = len;

    while (len) {
        /* Complete our buffer as much as possible */
        tocopy = MIN(len, BUF_SIZE - testdev->in_buf_used);

        memcpy(testdev->in_buf + testdev->in_buf_used, buf, tocopy);
        testdev->in_buf_used += tocopy;
        buf += tocopy;
        len -= tocopy;

        /* Interpret it as much as possible */
        while (testdev->in_buf_used > 0 &&
               (eaten = testdev_eat_packet(testdev)) > 0) {
            memmove(testdev->in_buf, testdev->in_buf + eaten,
                    testdev->in_buf_used - eaten);
            testdev->in_buf_used -= eaten;
        }
    }
    return orig_len;
}

static void testdev_close(struct CharDriverState *chr)
{
    TestdevCharState *testdev = chr->opaque;

    g_free(testdev);
}

static CharDriverState *chr_testdev_init(const char *id,
                                         ChardevBackend *backend,
                                         ChardevReturn *ret,
                                         Error **errp)
{
    TestdevCharState *testdev;
    CharDriverState *chr;

    testdev = g_new0(TestdevCharState, 1);
    testdev->chr = chr = g_new0(CharDriverState, 1);

    chr->opaque = testdev;
    chr->chr_write = testdev_write;
    chr->chr_close = testdev_close;

    return chr;
}

static void register_types(void)
{
    register_char_driver("testdev", CHARDEV_BACKEND_KIND_TESTDEV, NULL,
                         chr_testdev_init);
}

type_init(register_types);


@@ -15,193 +15,184 @@
#include "qemu/osdep.h"
#include "sysemu/tpm_backend.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "sysemu/tpm.h"
#include "qemu/thread.h"
#include "qemu/main-loop.h"
#include "block/thread-pool.h"
#include "qemu/error-report.h"
static void tpm_backend_request_completed(void *opaque, int ret)
{
TPMBackend *s = TPM_BACKEND(opaque);
TPMIfClass *tic = TPM_IF_GET_CLASS(s->tpmif);
tic->request_completed(s->tpmif, ret);
/* no need for atomic, as long the BQL is taken */
s->cmd = NULL;
object_unref(OBJECT(s));
}
static int tpm_backend_worker_thread(gpointer data)
{
TPMBackend *s = TPM_BACKEND(data);
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
Error *err = NULL;
k->handle_request(s, s->cmd, &err);
if (err) {
error_report_err(err);
return -1;
}
return 0;
}
void tpm_backend_finish_sync(TPMBackend *s)
{
while (s->cmd) {
aio_poll(qemu_get_aio_context(), true);
}
}
#include "sysemu/tpm_backend_int.h"
enum TpmType tpm_backend_get_type(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->type;
return k->ops->type;
}
int tpm_backend_init(TPMBackend *s, TPMIf *tpmif, Error **errp)
const char *tpm_backend_get_desc(TPMBackend *s)
{
if (s->tpmif) {
error_setg(errp, "TPM backend '%s' is already initialized", s->id);
return -1;
}
s->tpmif = tpmif;
object_ref(OBJECT(tpmif));
s->had_startup_error = false;
return 0;
}
int tpm_backend_startup_tpm(TPMBackend *s, size_t buffersize)
{
int res = 0;
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
/* terminate a running TPM */
tpm_backend_finish_sync(s);
return k->ops->desc();
}
res = k->startup_tpm ? k->startup_tpm(s, buffersize) : 0;
void tpm_backend_destroy(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
s->had_startup_error = (res != 0);
k->ops->destroy(s);
}
return res;
int tpm_backend_init(TPMBackend *s, TPMState *state,
TPMRecvDataCB *datacb)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->init(s, state, datacb);
}
int tpm_backend_startup_tpm(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->startup_tpm(s);
}
bool tpm_backend_had_startup_error(TPMBackend *s)
{
return s->had_startup_error;
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->had_startup_error(s);
}
void tpm_backend_deliver_request(TPMBackend *s, TPMBackendCmd *cmd)
size_t tpm_backend_realloc_buffer(TPMBackend *s, TPMSizedBuffer *sb)
{
ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
if (s->cmd != NULL) {
error_report("There is a TPM request pending");
return;
}
return k->ops->realloc_buffer(sb);
}
s->cmd = cmd;
object_ref(OBJECT(s));
thread_pool_submit_aio(pool, tpm_backend_worker_thread, s,
tpm_backend_request_completed, s);
void tpm_backend_deliver_request(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
k->ops->deliver_request(s);
}
void tpm_backend_reset(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
if (k->reset) {
k->reset(s);
}
tpm_backend_finish_sync(s);
s->had_startup_error = false;
k->ops->reset(s);
}
void tpm_backend_cancel_cmd(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
k->cancel_cmd(s);
k->ops->cancel_cmd(s);
}
bool tpm_backend_get_tpm_established_flag(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->get_tpm_established_flag ?
k->get_tpm_established_flag(s) : false;
return k->ops->get_tpm_established_flag(s);
}
int tpm_backend_reset_tpm_established_flag(TPMBackend *s, uint8_t locty)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->reset_tpm_established_flag ?
k->reset_tpm_established_flag(s, locty) : 0;
return k->ops->reset_tpm_established_flag(s, locty);
}
TPMVersion tpm_backend_get_tpm_version(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->get_tpm_version(s);
return k->ops->get_tpm_version(s);
}
size_t tpm_backend_get_buffer_size(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->get_buffer_size(s);
}
TPMInfo *tpm_backend_query_tpm(TPMBackend *s)
{
TPMInfo *info = g_new0(TPMInfo, 1);
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
TPMIfClass *tic = TPM_IF_GET_CLASS(s->tpmif);
info->id = g_strdup(s->id);
info->model = tic->model;
info->options = k->get_tpm_options(s);
return info;
}
static void tpm_backend_instance_finalize(Object *obj)
static bool tpm_backend_prop_get_opened(Object *obj, Error **errp)
{
TPMBackend *s = TPM_BACKEND(obj);
object_unref(OBJECT(s->tpmif));
g_free(s->id);
return s->opened;
}
void tpm_backend_open(TPMBackend *s, Error **errp)
{
object_property_set_bool(OBJECT(s), true, "opened", errp);
}
static void tpm_backend_prop_set_opened(Object *obj, bool value, Error **errp)
{
TPMBackend *s = TPM_BACKEND(obj);
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
Error *local_err = NULL;
if (value == s->opened) {
return;
}
if (!value && s->opened) {
error_setg(errp, QERR_PERMISSION_DENIED);
return;
}
if (k->opened) {
k->opened(s, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
}
s->opened = true;
}
static void tpm_backend_instance_init(Object *obj)
{
object_property_add_bool(obj, "opened",
tpm_backend_prop_get_opened,
tpm_backend_prop_set_opened,
NULL);
}
void tpm_backend_thread_deliver_request(TPMBackendThread *tbt)
{
g_thread_pool_push(tbt->pool, (gpointer)TPM_BACKEND_CMD_PROCESS_CMD, NULL);
}
void tpm_backend_thread_create(TPMBackendThread *tbt,
GFunc func, gpointer user_data)
{
if (!tbt->pool) {
tbt->pool = g_thread_pool_new(func, user_data, 1, TRUE, NULL);
g_thread_pool_push(tbt->pool, (gpointer)TPM_BACKEND_CMD_INIT, NULL);
}
}
void tpm_backend_thread_end(TPMBackendThread *tbt)
{
if (tbt->pool) {
g_thread_pool_push(tbt->pool, (gpointer)TPM_BACKEND_CMD_END, NULL);
g_thread_pool_free(tbt->pool, FALSE, TRUE);
tbt->pool = NULL;
}
}
static const TypeInfo tpm_backend_info = {
.name = TYPE_TPM_BACKEND,
.parent = TYPE_OBJECT,
.instance_size = sizeof(TPMBackend),
.instance_finalize = tpm_backend_instance_finalize,
.instance_init = tpm_backend_instance_init,
.class_size = sizeof(TPMBackendClass),
.abstract = true,
};
static const TypeInfo tpm_if_info = {
.name = TYPE_TPM_IF,
.parent = TYPE_INTERFACE,
.class_size = sizeof(TPMIfClass),
};
static void register_types(void)
{
type_register_static(&tpm_backend_info);
type_register_static(&tpm_if_info);
}
type_init(register_types);


@@ -26,34 +26,27 @@
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "qemu/atomic.h"
#include "exec/cpu-common.h"
#include "sysemu/kvm.h"
#include "sysemu/balloon.h"
#include "trace-root.h"
#include "qapi/error.h"
#include "qapi/qapi-commands-misc.h"
#include "trace.h"
#include "qmp-commands.h"
#include "qapi/qmp/qerror.h"
#include "qapi/qmp/qjson.h"
static QEMUBalloonEvent *balloon_event_fn;
static QEMUBalloonStatus *balloon_stat_fn;
static void *balloon_opaque;
static int balloon_inhibit_count;
static bool balloon_inhibited;
bool qemu_balloon_is_inhibited(void)
{
return atomic_read(&balloon_inhibit_count) > 0;
return balloon_inhibited;
}
void qemu_balloon_inhibit(bool state)
{
if (state) {
atomic_inc(&balloon_inhibit_count);
} else {
atomic_dec(&balloon_inhibit_count);
}
assert(atomic_read(&balloon_inhibit_count) >= 0);
balloon_inhibited = state;
}
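/* Of the two variants interleaved above, the atomic counter tolerates nested
 * inhibit/uninhibit calls from independent users, while the plain bool only
 * records the most recent caller's request. */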
static bool have_balloon(Error **errp)

block.c (4053 lines changed; diff too large, not shown)


@@ -1,48 +1,33 @@
block-obj-y += raw-format.o vmdk.o vpc.o
block-obj-$(CONFIG_QCOW1) += qcow.o
block-obj-$(CONFIG_VDI) += vdi.o
block-obj-$(CONFIG_CLOOP) += cloop.o
block-obj-$(CONFIG_BOCHS) += bochs.o
block-obj-$(CONFIG_VVFAT) += vvfat.o
block-obj-$(CONFIG_DMG) += dmg.o
block-obj-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-cache.o qcow2-bitmap.o
block-obj-$(CONFIG_QED) += qed.o qed-l2-cache.o qed-table.o qed-cluster.o
block-obj-$(CONFIG_QED) += qed-check.o
block-obj-y += vhdx.o vhdx-endian.o vhdx-log.o
block-obj-y += raw_bsd.o qcow.o vdi.o vmdk.o cloop.o bochs.o vpc.o vvfat.o
block-obj-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-cache.o
block-obj-y += qed.o qed-gencb.o qed-l2-cache.o qed-table.o qed-cluster.o
block-obj-y += qed-check.o
block-obj-$(CONFIG_VHDX) += vhdx.o vhdx-endian.o vhdx-log.o
block-obj-y += quorum.o
block-obj-y += blkdebug.o blkverify.o blkreplay.o
block-obj-$(CONFIG_PARALLELS) += parallels.o
block-obj-y += blklogwrites.o
block-obj-y += parallels.o blkdebug.o blkverify.o blkreplay.o
block-obj-y += block-backend.o snapshot.o qapi.o
block-obj-$(CONFIG_WIN32) += file-win32.o win32-aio.o
block-obj-$(CONFIG_POSIX) += file-posix.o
block-obj-$(CONFIG_WIN32) += raw-win32.o win32-aio.o
block-obj-$(CONFIG_POSIX) += raw-posix.o
block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
block-obj-y += null.o mirror.o commit.o io.o create.o
block-obj-y += null.o mirror.o commit.o io.o
block-obj-y += throttle-groups.o
block-obj-$(CONFIG_LINUX) += nvme.o
block-obj-y += nbd.o nbd-client.o
block-obj-$(CONFIG_SHEEPDOG) += sheepdog.o
block-obj-y += nbd.o nbd-client.o sheepdog.o
block-obj-$(CONFIG_LIBISCSI) += iscsi.o
block-obj-$(if $(CONFIG_LIBISCSI),y,n) += iscsi-opts.o
block-obj-$(CONFIG_LIBNFS) += nfs.o
block-obj-$(CONFIG_CURL) += curl.o
block-obj-$(CONFIG_RBD) += rbd.o
block-obj-$(CONFIG_GLUSTERFS) += gluster.o
block-obj-$(CONFIG_VXHS) += vxhs.o
block-obj-$(CONFIG_ARCHIPELAGO) += archipelago.o
block-obj-$(CONFIG_LIBSSH2) += ssh.o
block-obj-y += accounting.o dirty-bitmap.o
block-obj-y += write-threshold.o
block-obj-y += backup.o
block-obj-$(CONFIG_REPLICATION) += replication.o
block-obj-y += throttle.o copy-on-read.o
block-obj-y += crypto.o
common-obj-y += stream.o
common-obj-y += backup.o
nfs.o-libs := $(LIBNFS_LIBS)
iscsi.o-cflags := $(LIBISCSI_CFLAGS)
iscsi.o-libs := $(LIBISCSI_LIBS)
curl.o-cflags := $(CURL_CFLAGS)
@@ -51,15 +36,10 @@ rbd.o-cflags := $(RBD_CFLAGS)
rbd.o-libs := $(RBD_LIBS)
gluster.o-cflags := $(GLUSTERFS_CFLAGS)
gluster.o-libs := $(GLUSTERFS_LIBS)
vxhs.o-libs := $(VXHS_LIBS)
ssh.o-cflags := $(LIBSSH2_CFLAGS)
ssh.o-libs := $(LIBSSH2_LIBS)
block-obj-dmg-bz2-$(CONFIG_BZIP2) += dmg-bz2.o
block-obj-$(if $(CONFIG_DMG),m,n) += $(block-obj-dmg-bz2-y)
dmg-bz2.o-libs := $(BZIP2_LIBS)
block-obj-$(if $(CONFIG_LZFSE),m,n) += dmg-lzfse.o
dmg-lzfse.o-libs := $(LZFSE_LIBS)
archipelago.o-libs := $(ARCHIPELAGO_LIBS)
block-obj-m += dmg.o
dmg.o-libs := $(BZIP2_LIBS)
qcow.o-libs := -lz
linux-aio.o-libs := -laio
parallels.o-cflags := $(LIBXML2_CFLAGS)
parallels.o-libs := $(LIBXML2_LIBS)


@@ -32,19 +32,15 @@
static QEMUClockType clock_type = QEMU_CLOCK_REALTIME;
static const int qtest_latency_ns = NANOSECONDS_PER_SECOND / 1000;
void block_acct_init(BlockAcctStats *stats)
{
qemu_mutex_init(&stats->lock);
if (qtest_enabled()) {
clock_type = QEMU_CLOCK_VIRTUAL;
}
}
void block_acct_setup(BlockAcctStats *stats, bool account_invalid,
bool account_failed)
void block_acct_init(BlockAcctStats *stats, bool account_invalid,
bool account_failed)
{
stats->account_invalid = account_invalid;
stats->account_failed = account_failed;
if (qtest_enabled()) {
clock_type = QEMU_CLOCK_VIRTUAL;
}
}
void block_acct_cleanup(BlockAcctStats *stats)
@@ -53,7 +49,6 @@ void block_acct_cleanup(BlockAcctStats *stats)
QSLIST_FOREACH_SAFE(s, &stats->intervals, entries, next) {
g_free(s);
}
qemu_mutex_destroy(&stats->lock);
}
void block_acct_add_interval(BlockAcctStats *stats, unsigned interval_length)
@@ -63,15 +58,12 @@ void block_acct_add_interval(BlockAcctStats *stats, unsigned interval_length)
s = g_new0(BlockAcctTimedStats, 1);
s->interval_length = interval_length;
s->stats = stats;
qemu_mutex_lock(&stats->lock);
QSLIST_INSERT_HEAD(&stats->intervals, s, entries);
for (i = 0; i < BLOCK_MAX_IOTYPE; i++) {
timed_average_init(&s->latency[i], clock_type,
(uint64_t) interval_length * NANOSECONDS_PER_SECOND);
}
qemu_mutex_unlock(&stats->lock);
}
BlockAcctTimedStats *block_acct_interval_next(BlockAcctStats *stats,
@@ -94,96 +86,7 @@ void block_acct_start(BlockAcctStats *stats, BlockAcctCookie *cookie,
cookie->type = type;
}
/* block_latency_histogram_compare_func:
* Compare @key with interval [@it[0], @it[1]).
* Return: -1 if @key < @it[0]
* 0 if @key in [@it[0], @it[1])
* +1 if @key >= @it[1]
*/
static int block_latency_histogram_compare_func(const void *key, const void *it)
{
uint64_t k = *(uint64_t *)key;
uint64_t a = ((uint64_t *)it)[0];
uint64_t b = ((uint64_t *)it)[1];
return k < a ? -1 : (k < b ? 0 : 1);
}
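/*
 * Bin layout implied by the code below: bins[0] counts latencies below
 * boundaries[0], bins[nbins - 1] counts latencies at or above the last
 * boundary, and interior bins are found by a binary search over consecutive
 * boundary pairs.
 */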
static void block_latency_histogram_account(BlockLatencyHistogram *hist,
int64_t latency_ns)
{
uint64_t *pos;
if (hist->bins == NULL) {
/* histogram disabled */
return;
}
if (latency_ns < hist->boundaries[0]) {
hist->bins[0]++;
return;
}
if (latency_ns >= hist->boundaries[hist->nbins - 2]) {
hist->bins[hist->nbins - 1]++;
return;
}
pos = bsearch(&latency_ns, hist->boundaries, hist->nbins - 2,
sizeof(hist->boundaries[0]),
block_latency_histogram_compare_func);
assert(pos != NULL);
hist->bins[pos - hist->boundaries + 1]++;
}
int block_latency_histogram_set(BlockAcctStats *stats, enum BlockAcctType type,
uint64List *boundaries)
{
BlockLatencyHistogram *hist = &stats->latency_histogram[type];
uint64List *entry;
uint64_t *ptr;
uint64_t prev = 0;
int new_nbins = 1;
for (entry = boundaries; entry; entry = entry->next) {
if (entry->value <= prev) {
return -EINVAL;
}
new_nbins++;
prev = entry->value;
}
hist->nbins = new_nbins;
g_free(hist->boundaries);
hist->boundaries = g_new(uint64_t, hist->nbins - 1);
for (entry = boundaries, ptr = hist->boundaries; entry;
entry = entry->next, ptr++)
{
*ptr = entry->value;
}
g_free(hist->bins);
hist->bins = g_new0(uint64_t, hist->nbins);
return 0;
}
void block_latency_histograms_clear(BlockAcctStats *stats)
{
int i;
for (i = 0; i < BLOCK_MAX_IOTYPE; i++) {
BlockLatencyHistogram *hist = &stats->latency_histogram[i];
g_free(hist->bins);
g_free(hist->boundaries);
memset(hist, 0, sizeof(*hist));
}
}
static void block_account_one_io(BlockAcctStats *stats, BlockAcctCookie *cookie,
bool failed)
void block_acct_done(BlockAcctStats *stats, BlockAcctCookie *cookie)
{
BlockAcctTimedStats *s;
int64_t time_ns = qemu_clock_get_ns(clock_type);
@@ -195,19 +98,31 @@ static void block_account_one_io(BlockAcctStats *stats, BlockAcctCookie *cookie,
assert(cookie->type < BLOCK_MAX_IOTYPE);
qemu_mutex_lock(&stats->lock);
stats->nr_bytes[cookie->type] += cookie->bytes;
stats->nr_ops[cookie->type]++;
stats->total_time_ns[cookie->type] += latency_ns;
stats->last_access_time_ns = time_ns;
if (failed) {
stats->failed_ops[cookie->type]++;
} else {
stats->nr_bytes[cookie->type] += cookie->bytes;
stats->nr_ops[cookie->type]++;
QSLIST_FOREACH(s, &stats->intervals, entries) {
timed_average_account(&s->latency[cookie->type], latency_ns);
}
}
block_latency_histogram_account(&stats->latency_histogram[cookie->type],
latency_ns);
void block_acct_failed(BlockAcctStats *stats, BlockAcctCookie *cookie)
{
assert(cookie->type < BLOCK_MAX_IOTYPE);
stats->failed_ops[cookie->type]++;
if (stats->account_failed) {
BlockAcctTimedStats *s;
int64_t time_ns = qemu_clock_get_ns(clock_type);
int64_t latency_ns = time_ns - cookie->start_time_ns;
if (qtest_enabled()) {
latency_ns = qtest_latency_ns;
}
if (!failed || stats->account_failed) {
stats->total_time_ns[cookie->type] += latency_ns;
stats->last_access_time_ns = time_ns;
@@ -215,45 +130,29 @@ static void block_account_one_io(BlockAcctStats *stats, BlockAcctCookie *cookie,
timed_average_account(&s->latency[cookie->type], latency_ns);
}
}
qemu_mutex_unlock(&stats->lock);
}
void block_acct_done(BlockAcctStats *stats, BlockAcctCookie *cookie)
{
block_account_one_io(stats, cookie, false);
}
void block_acct_failed(BlockAcctStats *stats, BlockAcctCookie *cookie)
{
block_account_one_io(stats, cookie, true);
}
void block_acct_invalid(BlockAcctStats *stats, enum BlockAcctType type)
{
assert(type < BLOCK_MAX_IOTYPE);
/* block_account_one_io() updates total_time_ns[], but this one does
* not. The reason is that invalid requests are accounted during their
* submission, therefore there's no actual I/O involved.
*/
qemu_mutex_lock(&stats->lock);
/* block_acct_done() and block_acct_failed() update
* total_time_ns[], but this one does not. The reason is that
* invalid requests are accounted during their submission,
* therefore there's no actual I/O involved. */
stats->invalid_ops[type]++;
if (stats->account_invalid) {
stats->last_access_time_ns = qemu_clock_get_ns(clock_type);
}
qemu_mutex_unlock(&stats->lock);
}
void block_acct_merge_done(BlockAcctStats *stats, enum BlockAcctType type,
int num_requests)
{
assert(type < BLOCK_MAX_IOTYPE);
qemu_mutex_lock(&stats->lock);
stats->merged[type] += num_requests;
qemu_mutex_unlock(&stats->lock);
}
int64_t block_acct_idle_time_ns(BlockAcctStats *stats)
@@ -268,9 +167,7 @@ double block_acct_queue_depth(BlockAcctTimedStats *stats,
assert(type < BLOCK_MAX_IOTYPE);
qemu_mutex_lock(&stats->stats->lock);
sum = timed_average_sum(&stats->latency[type], &elapsed);
qemu_mutex_unlock(&stats->stats->lock);
return (double) sum / elapsed;
}

block/archipelago.c (new file, 1082 lines; diff too large, not shown)

Some files were not shown because too many files have changed in this diff.