Compare commits


153 Commits

Author SHA1 Message Date
Michael Roth
c2b0926634 Update version for v2.1.3 release
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-21 19:16:38 -06:00
Marcel Apfelbaum
b316937d38 vl.c: fix regression when reading machine type from config file
After the 'Machine as QOM' series, the machine type input triggers
the creation of the machine class.
If the machine type is set in the configuration file, the machine
class is not updated accordingly and remains the default.

Fix that by querying the machine options after the configuration
file is loaded.

Cc: qemu-stable@nongnu.org
Reported-by: William Dauchy <william@gandi.net>
Signed-off-by: Marcel Apfelbaum <marcel@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 364c3e6b8d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-14 17:08:44 -06:00
David Gibson
5b5c7bf8e5 PPC: Fix crash on spapr_tce_table_finalize()
spapr_tce_table_finalize() can SEGV if the object was not previously
realized.  In particular this can be triggered by running
         qemu-system-ppc -device spapr-tce-table,?

The basic problem is that we have mismatched initialization versus
finalization: spapr_tce_table_finalize() is attempting to undo things that
are done in spapr_tce_table_realize(), not an instance_init function.

Therefore, replace spapr_tce_table_finalize() with
spapr_tce_table_unrealize().

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Cc: qemu-stable@nongnu.org
Signed-off-by: Alexander Graf <agraf@suse.de>
(cherry picked from commit 5f9490de56)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-14 17:08:44 -06:00
Paolo Bonzini
6df8cd2e27 atomic: fix position of volatile qualifier
What needs to be volatile is not the pointer, but the pointed-to
value!
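
A minimal standalone illustration of the difference (not QEMU's actual atomic macro):

    #include <stdio.h>

    int main(void)
    {
        int x = 0;
        volatile int *p1 = &x;   /* pointed-to value is volatile: every *p1 is a real access */
        int *volatile p2 = &x;   /* only the pointer variable itself is volatile */
        printf("%d %d\n", *p1, *p2);
        return 0;
    }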

Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 2cbcfb281a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-14 17:08:44 -06:00
Vladimir Sementsov-Ogievskiy
ff2fff6211 migration/block: fix pending() return value
Because of a wrong return value of .save_live_pending() in
migration/block.c, migration finishes before the whole disk has been
transferred. Such a situation occurs when the migration process is fast
enough, for example when source and dest are on the same host.

If in the bulk phase we return something < max_size, we will skip
transferring the tail of the device. Currently we have "set pending to
BLOCK_SIZE if it is zero" for the bulk phase, but there is no guarantee
that BLOCK_SIZE will be >= max_size.

The right approach is to return, for example, max_size+1 while we are in
the bulk phase.
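
A minimal sketch of the intended logic; the function and parameter names here are illustrative, not the actual migration/block.c code:

    #include <stdint.h>
    #include <stdbool.h>

    static uint64_t block_save_pending(bool bulk_phase, uint64_t dirty_bytes,
                                       uint64_t max_size)
    {
        if (bulk_phase) {
            /* report more than max_size so migration keeps iterating */
            return max_size + 1;
        }
        return dirty_bytes;
    }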

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@parallels.com>
Message-id: 1419933856-4018-2-git-send-email-vsementsov@parallels.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 04636dc410)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-14 17:08:44 -06:00
Igor Mammedov
83a66746c0 pc: acpi: mark all possible CPUs as enabled in SRAT
If QEMU is started with  -numa ... Windows only notices that a
CPU has been hot-added but will not online such CPUs.

This is caused by the fact that possible CPUs are flagged as
not enabled in the SRAT, and Windows, honoring that information,
doesn't use the corresponding CPU.

The ACPI 5.0 spec says of the flag:
"
Table 5-47 Local APIC Flags
...
Enabled: if zero, this processor is unusable, and the operating system
support will not attempt to use it.
"

Fix QEMU to adhere to the spec and mark possible CPUs as enabled
in the SRAT.

With that, Windows onlines hot-added CPUs as expected.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit dd0247e09a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-14 17:08:43 -06:00
Max Filippov
39639d81e3 target-xtensa: test cross-page opcode
Alter cross-page TB test to also test cross-page opcode.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
(cherry picked from commit 85d36377e4)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-14 17:08:43 -06:00
Max Filippov
6e64c4e6f1 target-xtensa: fix translation for opcodes crossing page boundary
If a TB ends with an opcode that crosses a page boundary and the following
page is not executable, then EPC1 for the code fetch exception wrongly
points at the beginning of the TB. Always treat an instruction that crosses
a page boundary as a separate TB.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
(cherry picked from commit 01673a3401)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-14 17:08:43 -06:00
Peter Maydell
73c1527f96 audio: Don't free hw resources until after hw backend is stopped
When stopping an audio voice, call the audio backend's fini
method before calling audio_pcm_hw_free_resources_ rather than
afterwards. This allows backends which use helper threads (like
pulseaudio) to terminate those threads before the conv_buf or
mix_buf are freed and avoids race conditions where the helper
may access a NULL pointer or freed memory.

Cc: qemu-stable@nongnu.org
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1418406239-9838-1-git-send-email-peter.maydell@linaro.org
(cherry picked from commit b28fb27b5e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-14 17:08:43 -06:00
Paolo Bonzini
b466e1731b linuxboot: fix loading old kernels
Old kernels that used high memory only allowed the initrd to be in the
first 896MB of memory.  If you load the initrd above, they complain
that "initrd extends beyond end of memory".

In order to fix this, while not breaking machines with small amounts
of memory fixed by cdebec5 (linuxboot: compute initrd loading address,
2014-10-06), we need to distinguish two cases.  If pc.c placed the
initrd at end of memory, use the new algorithm based on the e801
memory map.  If instead pc.c placed the initrd at the maximum address
specified by the bzImage, leave it there.

The only interesting part is that the low-memory info block is now
loaded very early, in real mode, and thus the 32-bit address has
to be converted into a real mode segment.  The initrd address is
also patched in the info block before entering real mode; it is
simpler that way.

This fixes booting the RHEL4.8 32-bit installation image with 1GB
of RAM.
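
As a rough illustration (addresses made up), a 32-bit low-memory address maps to a real-mode segment:offset pair by dividing by 16:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t addr = 0x90200;            /* example address below 1 MB */
        uint16_t seg  = addr >> 4;          /* real-mode segments are 16-byte units */
        uint16_t off  = addr & 0xf;
        printf("0x%05x -> %04x:%04x\n", addr, seg, off);
        return 0;
    }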

Cc: qemu-stable@nongnu.org
Cc: mst@redhat.com
Cc: jsnow@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 269e235849)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-14 17:08:43 -06:00
Paolo Bonzini
6a47ae2d41 linuxboot: compute initrd loading address
Even though hw/i386/pc.c tries to compute a valid loading address for the
initrd, close to the top of RAM, this does not take into account other
data that is malloced into that memory by SeaBIOS.

Luckily we can easily look at the memory map to find out how much memory is
used up there.  This patch places the initrd in the first four gigabytes,
below the first hole (as returned by INT 15h, AX=e801h).

Without this patch:
[    0.000000] init_memory_mapping: [mem 0x07000000-0x07fdffff]
[    0.000000] RAMDISK: [mem 0x0710a000-0x07fd7fff]

With this patch:
[    0.000000] init_memory_mapping: [mem 0x07000000-0x07fdffff]
[    0.000000] RAMDISK: [mem 0x07112000-0x07fdffff]

So linuxboot is able to use the 64k that were added as padding for
QEMU <= 2.1.

Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit cdebec5e40)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-14 17:08:43 -06:00
Kevin Wolf
5f0681e1c3 block: Don't probe for unknown backing file format
If a qcow2 image specifies a backing file format that doesn't correspond
to any format driver that qemu knows, we shouldn't fall back to probing,
but simply error out.

Not looking up the backing file driver in bdrv_open_backing_file(), but
just filling in the "driver" option if it isn't there moves us closer to
the goal of having everything in QDict options and gets us the error
handling of bdrv_open(), which correctly refuses unknown drivers.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1416935562-7760-4-git-send-email-kwolf@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit c5f6e493bb)

Conflicts:
	tests/qemu-iotests/group

*removed context from upstream iotest groups

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-14 17:08:43 -06:00
Kevin Wolf
75eb0f5dbb qcow2.py: Add required padding for header extensions
The qcow2 specification requires that the header extension data be
padded to round up the extension size to the next multiple of 8 bytes.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1416935562-7760-3-git-send-email-kwolf@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 8884dd1bbc)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
(cherry picked from commit a163ac3f57b5baa117158f7c0488d276ba3377e2)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-14 17:08:42 -06:00
Kevin Wolf
b495764ae8 qcow2: Fix header extension size check
After reading the extension header, offset is incremented, but not
checked against end_offset any more. This way an integer overflow could
happen when checking whether the extension end is within the allowed
range, effectively disabling the check.

This patch adds the missing check and a test case for it.

Cc: qemu-stable@nongnu.org
Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1416935562-7760-2-git-send-email-kwolf@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 2ebafc854d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-14 17:08:42 -06:00
Gary R Hook
21640bf6e0 block migration: fix return value
Modify block_save_iterate() to return positive/zero/negative
(success/not done/failure) return status. The computation of
the blocks transferred (an int64_t) exceeds the size of an
int return value.

Signed-off-by: Gary R Hook <gary.hook@nimboxx.com>
Reviewed-by: ChenLiang <chenliang88@huawei.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1416958202-15913-1-git-send-email-gary.hook@nimboxx.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit ebd9fbd7e1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-14 17:08:42 -06:00
Max Reitz
6bbb939a80 block/raw-posix: Fix ret in raw_open_common()
The return value must be negative on error; there is one place in
raw_open_common() where errp is set, but ret remains 0. Fix it.
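
The pattern of the fix, sketched with an illustrative helper rather than the actual raw-posix code:

    #include <errno.h>
    #include <sys/stat.h>

    static int check_fd(int fd)
    {
        struct stat st;
        if (fstat(fd, &st) < 0) {
            return -errno;   /* an error path like this previously left ret at 0 */
        }
        return 0;
    }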

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 01212d4ed6)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-14 17:08:42 -06:00
Max Reitz
178ed9aad3 qcow2: Respect bdrv_truncate() error
bdrv_truncate() may fail and qcow2_write_compressed() should return the
error code in that case.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 6a69b9620a)

Conflicts:
	block/qcow2.c

*removed context dependency on 75d3d21

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-14 17:08:42 -06:00
Max Reitz
0505d48c83 qcow2: Flushing the caches in qcow2_close may fail
qcow2_cache_flush() may fail; if one of the caches failed to be flushed
successfully to disk in qcow2_close() the image should not be marked
clean, and we should emit a warning.

This breaks the (qcow2-specific) iotests 026, 071 and 089; change their
output accordingly.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 3b5e14c76a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-14 17:08:42 -06:00
Paolo Bonzini
0073781fea blkdebug: report errors on flush too
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 9e52c53b8c)

*included to maintain parity with unit tests which inject errors
 via blkdebug. needed for:
 "qcow2: Flushing the caches in qcow2_close may fail"

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-14 17:08:42 -06:00
Max Reitz
175117c159 qcow2: Prevent numerical overflow
In qcow2_alloc_cluster_offset(), *num is limited to
INT_MAX >> BDRV_SECTOR_BITS by all callers. However, since remaining is
of type uint64_t, we might as well cast *num to that type before
performing the shift.
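
A small standalone sketch of the cast-before-shift idea (BDRV_SECTOR_BITS defined locally here for illustration):

    #include <stdint.h>

    #define BDRV_SECTOR_BITS 9

    static uint64_t sectors_to_bytes(int num)
    {
        /* widen first so the shift happens in 64 bits */
        return (uint64_t)num << BDRV_SECTOR_BITS;
    }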

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 11c89769dc)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 15:11:53 -06:00
Max Reitz
aa58eedb35 iotests: Add test for unsupported image creation
Add a test for creating and amending images (amendment uses the creation
options) with formats not supporting creation over protocols not
supporting creation.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 2247798d13)

Conflicts:
	tests/qemu-iotests/group

*removed context dependencies from upstream iotest groups

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 15:10:25 -06:00
Max Reitz
e6c172ad9e iotests: Only kill NBD server if it runs
There may be NBD tests which do not create a sample image and simply
test whether wrong usage of the protocol is rejected as expected. In
this case, there will be no NBD server and trying to kill it during
clean-up will fail.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit f798068c56)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 15:08:47 -06:00
Max Reitz
07ede68671 qemu-img: Check create_opts before image amendment
The image options which can be amended are described by the .create_opts
field for every driver. This field must therefore be non-NULL so that
anything can be amended in the first place. Check that this holds true
before going into qemu_opts_create() (because if .create_opts is NULL,
the create_opts pointer in img_amend() will be NULL after
qemu_opts_append()).

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit b2439d26f0)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 15:08:35 -06:00
Max Reitz
2fbad1f944 qemu-img: Check create_opts before image creation
If a driver supports image creation, it needs to set the .create_opts
field. We can use that to make sure .create_opts for both drivers
involved is not NULL for the target image in qemu-img convert, which is
important so that the create_opts pointer in img_convert() is not NULL
after the qemu_opts_append() calls and when going into
qemu_opts_create().

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit f75613cf24)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 15:08:20 -06:00
Max Reitz
dee284885a block: Check create_opts before image creation
If a driver supports image creation, it needs to set the .create_opts
field. We can use that to make sure .create_opts for both drivers
involved is not NULL in bdrv_img_create(), which is important so that
the create_opts pointer in that function is not NULL after the
qemu_opts_append() calls and when going into qemu_opts_create().
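
A generic sketch of the guard, using illustrative types rather than QEMU's BlockDriver/QemuOptsList:

    #include <stdio.h>

    struct driver {
        const char *name;
        const void *create_opts;   /* NULL if the driver cannot create images */
    };

    static int img_create(const struct driver *fmt, const struct driver *proto)
    {
        if (!fmt->create_opts) {
            fprintf(stderr, "Format driver '%s' does not support image creation\n",
                    fmt->name);
            return -1;
        }
        if (!proto->create_opts) {
            fprintf(stderr, "Protocol driver '%s' does not support image creation\n",
                    proto->name);
            return -1;
        }
        /* ... safe to append both option lists and create the image ... */
        return 0;
    }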

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit c614972408)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 15:07:52 -06:00
Max Reitz
ad0983b5d1 block/nfs: Add create_opts
The nfs protocol driver is capable of creating images, but did not
specify any creation options. Fix it.

A way to test this issue is the following:

$ qemu-img create -f nfs nfs://127.0.0.1/foo.qcow2 64M

Without this patch, it segfaults. With this patch, it does not. However,
this is not something that should really work; qemu-img should check
whether the parameter for the -f option (and -O for convert) is indeed a
format, and error out if it is not. Therefore, I am not making it an
iotest.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit fd752801ae)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 15:07:43 -06:00
Max Reitz
b3729b2ec2 block/vvfat: qcow driver may not be found
Although virtually impossible right now, bdrv_find_format("qcow") may
fail. The vvfat block driver should heed that case.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 1bcb15cf77)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 15:07:33 -06:00
Max Reitz
1b9ea8961a block: Omit bdrv_find_format for essential drivers
We can always assume raw, file and qcow2 being available; so do not use
bdrv_find_format() to locate their BlockDriver objects but statically
reference the respective objects.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit ef8104378c)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 15:07:22 -06:00
Max Reitz
cdeb85cf24 block: Make essential BlockDriver objects public
There are some block drivers which are essential to QEMU and may not be
removed: These are raw, file and qcow2 (as the default non-raw format).
Make their BlockDriver objects public so they can be directly referenced
throughout the block layer without needing to call bdrv_find_format()
and having to deal with an error at runtime, while the real problem
occurred during linking (where raw, file or qcow2 were not linked into
qemu).

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 5f535a941e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 15:07:12 -06:00
Jason Wang
b28d7b585a virtio-net: fix unmap leak
virtio_net_handle_ctrl() and other functions that process control vq
requests call iov_discard_front(), which shortens the iov. This leads to
the unmapping in virtqueue_push() leaking mappings.

Fix this by keeping the original iov untouched and using a temp variable
in those functions.
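
The idea of the fix, sketched with plain POSIX iovecs (the real code uses QEMU's iov helpers):

    #include <string.h>
    #include <sys/uio.h>

    #define MAX_IOV 64

    static void handle_ctrl(const struct iovec *iov, unsigned int iov_cnt)
    {
        struct iovec tmp[MAX_IOV];

        if (iov_cnt > MAX_IOV) {
            return;
        }
        /* parse using the copy; the caller's iov keeps its original
         * base/len values, so unmapping later sees the full buffers */
        memcpy(tmp, iov, iov_cnt * sizeof(tmp[0]));
        /* ... iov_discard_front()-style advancing happens on tmp ... */
    }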

Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 1417082643-23907-1-git-send-email-jasowang@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 771b6ed37e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 14:58:53 -06:00
Don Slutz
cd2f44cc3e hw/ide/core.c: Prevent SIGSEGV during migration
The other callers to blk_set_enable_write_cache() in this file
already check for s->blk == NULL.

Signed-off-by: Don Slutz <dslutz@verizon.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1416259239-13281-1-git-send-email-dslutz@verizon.com
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 6b896ab261)

Conflicts:
	hw/ide/core.c

*removed dependency on 4be746345

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 14:57:45 -06:00
Peter Maydell
844470158c exec: Handle multipage ranges in invalidate_and_set_dirty()
The code in invalidate_and_set_dirty() needs to handle addr/length
combinations which cross guest physical page boundaries. This can happen,
for example, when disk I/O reads large blocks into guest RAM which previously
held code that we have cached translations for. Unfortunately we were only
checking the clean/dirty status of the first page in the range, and then
were calling a tb_invalidate function which only handles ranges that don't
cross page boundaries. Fix the function to deal with multipage ranges.

The symptoms of this bug were that guest code would misbehave (eg segfault),
in particular after a guest reboot but potentially any time the guest
reused a page of its physical RAM for new code.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1416167061-13203-1-git-send-email-peter.maydell@linaro.org
(cherry picked from commit f874bf905f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 14:47:42 -06:00
zhanghailiang
05c5febf8c l2tpv3: fix possible double free
freeaddrinfo(result) does not set result to NULL after freeing it.
This leads to a double free in the error path.
Reported by Coverity.
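
A minimal sketch of the safe pattern with the standard API:

    #include <netdb.h>
    #include <stddef.h>

    static void release_result(struct addrinfo **result)
    {
        if (*result) {
            freeaddrinfo(*result);
            *result = NULL;   /* freeaddrinfo() itself never clears the pointer */
        }
    }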

Reviewed-by: Gonglei <arei.gonglei@huawei.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 77374582ab)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 14:45:28 -06:00
zhanghailiang
de98dc9539 libcacard: fix resource leak
In connect_to_qemu(), getaddrinfo() allocates memory that is stored
into server; it should be freed with freeaddrinfo() before
connect_to_qemu() returns.

Cc: qemu-stable@nongnu.org
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 5bbebf6228)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 14:45:17 -06:00
Paolo Bonzini
0c80570170 virtio-scsi: work around bug in old BIOSes
Old BIOSes left some padding by mistake after the req_size/resp_size.
New QEMU does not like it, thinking it is a bidirectional command.

As a workaround, we can check if the ANY_LAYOUT bit is set; if not, we
always consider the first buffer as the virtio-scsi request/response,
because, back when QEMU did not support ANY_LAYOUT, it expected the
payload to start at the second element of the iovec.

This can show up during migration.

Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 55783a5521)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 14:43:28 -06:00
Alexander Graf
14b51b6718 kvm: Fix memory slot page alignment logic
Memory slots have to be page aligned to get entered into KVM. There
is existing logic that tries to ensure that we pad memory slots that
are not page aligned to the biggest region that would still fit in the
alignment requirements.

Unfortunately, that logic is broken. It tries to calculate the start
offset based on the region size.

Fix up the logic to do the thing it was intended to do and document it
properly in the comment above it.

With this patch applied, I can successfully run an e500 guest with more
than 3GB RAM (at which point RAM starts overlapping subpage memory regions).

Cc: qemu-stable@nongnu.org
Signed-off-by: Alexander Graf <agraf@suse.de>
(cherry picked from commit f2a64032a1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 14:42:04 -06:00
Max Filippov
ea227e222b target-xtensa: add missing window check for entry
The entry opcode needs to check whether moving to a new register frame
would cause a register window overflow. An entry used in a function
prologue never overflows because the preceding windowed call* opcode
writes the return address to the target register window frame, causing
overflow exceptions at the point of the call. But when a sequence of
entry opcodes is used for register window spilling, there may not be a
call or other opcode that would cause a window check between entries,
and the entries themselves would not raise an overflow exception,
resulting in data corruption.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
(cherry picked from commit 1b3e71f8ee)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 14:31:19 -06:00
Hannes Reinecke
aae114b7ed esp-pci: fixup deadlock with linux
A linux guest will be issuing messages:

[   32.124042] DC390: Deadlock in DataIn_0: DMA aborted unfinished: 000000 bytes remain!!
[   32.126348] DC390: DataIn_0: DMA State: 0

and the HBA will fail to work properly.
The reason is that the emulation is not setting the 'DMA transfer done'
status correctly.

Signed-off-by: Hannes Reinecke <hare@suse.de>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit c3543fb5fe)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 14:27:24 -06:00
Peter Maydell
cfa86bcb7d hw/ppc/spapr_pci.c: Avoid functions not in glib 2.12 (g_hash_table_iter_*)
The g_hash_table_iter_* functions for iterating through a hash table
are not present in glib 2.12, which is our current minimum requirement.
Rewrite the code to use g_hash_table_foreach() instead.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
(cherry picked from commit f8833a37c0)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-07 14:18:31 -06:00
Zhang Haoyu
b57b7ec340 snapshot: add bdrv_drain_all() to bdrv_snapshot_delete() to avoid concurrency problem
If there is still pending I/O while a snapshot is being deleted, a
concurrency problem can arise between the two operations: snapshot
deletion is done in non-coroutine context, while the pending I/O
read/write (bdrv_co_do_rw) runs in coroutine context.
Add bdrv_drain_all() to bdrv_snapshot_delete() to avoid this problem.

Signed-off-by: Zhang Haoyu <zhanghy@sangfor.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 201410211637596311287@sangfor.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 3432a1929e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 18:40:28 -06:00
Max Filippov
f8c61ebdd2 hw/xtensa/xtfpga: treat uImage load address as virtual
U-boot for xtensa always treats the uImage load address as a virtual address.
This is important when booting a uImage on an xtensa core with MMUv2, because
MMUv2 has a fixed non-identity virtual-to-physical mapping after reset.

Always do virtual-to-physical translation of the uImage load address and
load the uImage at the translated address. This fixes booting uImage kernels
on dc232b and other MMUv2 cores.

Cc: qemu-stable@nongnu.org
Reported-by: Waldemar Brodkorb <mail@waldemar-brodkorb.de>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
(cherry picked from commit 6d2e453053)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 18:39:24 -06:00
Max Filippov
c448fb7651 hw/core/loader: implement address translation in uimage loader
Such address translation is needed when the load address recorded in the
uImage is a virtual address. When the actual load address is requested,
return the untranslated address: a user that needs the translated address
can always apply the translation function to it, and those that need it
untranslated don't need to do the inverse translation.

Add translation function pointer and its parameter to uimage_load
prototype. Update all existing users.

No user-visible functional changes.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Alexander Graf <agraf@suse.de>
(cherry picked from commit 25bda50a0c)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 18:39:10 -06:00
Aurelien Jarno
8239a583c1 tcg/mips: fix store softmmu slow path
Commit 9d8bf2d1 moved the softmmu slow path out of line and introduced a
regression at the same time by always calling tcg_out_tlb_load with
is_load=1. This makes it impossible to run any significant code under
qemu-system-mips*.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-stable@nongnu.org
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit 0a2923f848)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 18:36:20 -06:00
Ting Wang
cb91dce13e virtio-scsi: sense in virtio_scsi_command_complete
If req->resp.cmd.status is not GOOD, the address of sense for
qemu_iovec_from_buf should be modified from &req->resp to sense.

Cc: qemu-stable@nongnu.org
Signed-off-by: Ting Wang <kathy.wangting@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit b7890c40e5)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 18:27:21 -06:00
Petr Matousek
b2f1d90530 vnc: sanitize bits_per_pixel from the client
bits_per_pixel values that are less than 8 could result in accessing
uninitialized buffers later in the code, due to the expectation that
the bytes_per_pixel value used to initialize these buffers is never
zero.

To fix this, check that the bits_per_pixel from the client is one of
the values that the RFB protocol specification allows.

This is CVE-2014-7815.
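
A sketch of the kind of validation involved (the RFB specification only defines 8, 16 and 32 bits per pixel):

    #include <stdbool.h>

    static bool vnc_bits_per_pixel_valid(int bits)
    {
        switch (bits) {
        case 8:
        case 16:
        case 32:
            return true;
        default:
            return false;   /* anything else is a protocol violation */
        }
    }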

Signed-off-by: Petr Matousek <pmatouse@redhat.com>

[ kraxel: apply codestyle fix ]

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit e6908bfe8e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 18:26:44 -06:00
Jan Kiszka
5a6af97243 Make qemu_shutdown_requested signal-safe
qemu_shutdown_requested may be interrupted by qemu_system_killed. If the
latter sets shutdown_requested after qemu_shutdown_requested has read it
but before it was cleared, the shutdown event is lost. Fix this by using
atomic_xchg.
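
The same read-and-clear idea sketched with C11 atomics (QEMU uses its own atomic_xchg wrapper):

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_int shutdown_requested;

    /* called from a signal handler */
    static void request_shutdown(void)
    {
        atomic_store(&shutdown_requested, 1);
    }

    static bool shutdown_was_requested(void)
    {
        /* read and clear in one indivisible step, so a request that lands
         * between a separate read and clear cannot be lost */
        return atomic_exchange(&shutdown_requested, 0) != 0;
    }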

This provides a different fix for the problem which commit 15124e142
attempts to deal with. That commit breaks use of ^C to drop into gdb,
and so this approach is better (and 15124e142 can be reverted).

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
[PMM: commit message tweak]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

(cherry picked from commit 817ef04db2)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 18:25:45 -06:00
Ray Strode
90de7a03bb libcacard: don't free sign buffer while sign op is pending
commit 57f97834ef cleaned up
the cac_applet_pki_process_apdu function to have a single
exit point. Unfortunately, that commit introduced a bug
where the sign buffer can get free'd and nullified while
it's still being used.

This commit corrects the bug by introducing a boolean to
track whether or not the sign buffer should be freed in
the function exit path.

Signed-off-by: Ray Strode <rstrode@redhat.com>
Reviewed-by: Alon Levy <alon@pobox.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 81b49e8f89)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 18:13:49 -06:00
Max Reitz
57248587af qcow2: Do not overflow when writing an L1 sector
While writing an L1 table sector, qcow2_write_l1_entry() copies the
respective range from s->l1_table to the local "buf" array. The size of
s->l1_table does not have to be a multiple of L1_ENTRIES_PER_SECTOR;
thus, limit the index which is used for copying all entries to the L1
size.
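
A simplified, self-contained sketch of the bounded copy (byte-order conversion omitted; constants defined locally for illustration):

    #include <stdint.h>

    #define L1_ENTRIES_PER_SECTOR 64   /* 512-byte sector / 8-byte entries */
    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    static void copy_l1_sector(uint64_t *buf, const uint64_t *l1_table,
                               int l1_size, int l1_index)
    {
        int start = (l1_index / L1_ENTRIES_PER_SECTOR) * L1_ENTRIES_PER_SECTOR;
        int end = MIN(start + L1_ENTRIES_PER_SECTOR, l1_size);

        for (int i = start; i < end; i++) {
            buf[i - start] = l1_table[i];   /* never read past l1_size */
        }
    }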

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit a1391444fe)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 18:12:17 -06:00
Gerd Hoffmann
ff830f9d88 vmware-vga: use vmsvga_verify_rect in vmsvga_fill_rect
Add verification to vmsvga_fill_rect, re-enable HW_FILL_ACCEL.

Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Don Koch <dkoch@verizon.com>
(cherry picked from commit bd9ccd8517)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 17:41:34 -06:00
Gerd Hoffmann
82e8913341 vmware-vga: use vmsvga_verify_rect in vmsvga_copy_rect
Add verification to vmsvga_copy_rect, re-enable HW_RECT_ACCEL.

Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Don Koch <dkoch@verizon.com>
(cherry picked from commit 61b41b4c20)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 17:41:19 -06:00
Gerd Hoffmann
38e6e1c6a3 vmware-vga: use vmsvga_verify_rect in vmsvga_update_rect
Switch vmsvga_update_rect over to use vmsvga_verify_rect.  Slight change
in behavior: we don't try to automatically fix up rectangles any more.
If we find invalid update requests we'll do a full-screen update
instead.

Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Don Koch <dkoch@verizon.com>
(cherry picked from commit 1735fe1edb)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 17:39:22 -06:00
Gerd Hoffmann
4bcf40b288 vmware-vga: add vmsvga_verify_rect
Add verification function for rectangles, returning
true if verification passes and false otherwise.

Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Don Koch <dkoch@verizon.com>
(cherry picked from commit 07258900fd)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 17:38:57 -06:00
Gerd Hoffmann
8bf7738ff2 vmware-vga: CVE-2014-3689: turn off hw accel
Quick & easy stopgap for CVE-2014-3689:  We just compile out the
hardware acceleration functions which lack sanity checks.  Thankfully
we have capability bits for them (SVGA_CAP_RECT_COPY and
SVGA_CAP_RECT_FILL), so guests should deal just fine, in theory.

Subsequent patches will add the missing checks and re-enable the
hardware acceleration emulation.

Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Don Koch <dkoch@verizon.com>
(cherry picked from commit 83afa38eb2)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 17:38:01 -06:00
Jan Kiszka
8100812711 pc: Fix disabling of vapic for compat PC models
We used to be able to address both the QEMU and the KVM APIC via "apic".
This doesn't work anymore. So we need to use their parent class to turn
off the vapic on machines that should not expose them.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

(cherry picked from commit df1fd4b541)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 16:06:25 -06:00
Gonglei
cf0276b7c0 virtio-9p: fix virtio-9p child refcount in transports
object_initialize() leaves the object with a refcount of 1.
object_property_add_child() adds its own reference which is
dropped again when the property is deleted.

The upshot of this is that we always have a refcount >= 1. Upon
unplug the virtio-9p child is not finalized!

Drop our reference after the child property has been added to the
parent.
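
A generic illustration of the reference-counting rule described above (plain C, not the QOM API):

    #include <stdio.h>
    #include <stdlib.h>

    struct obj { int refcount; };

    static void obj_unref(struct obj *o)
    {
        if (--o->refcount == 0) {
            printf("finalized\n");
            free(o);
        }
    }

    int main(void)
    {
        struct obj *child = malloc(sizeof(*child));
        child->refcount = 1;   /* reference held by the creator (init) */
        child->refcount++;     /* reference taken by the parent (add child) */
        obj_unref(child);      /* the fix: the creator drops its reference here */
        obj_unref(child);      /* parent drops its reference on unplug -> finalized */
        return 0;
    }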

Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 8f3d60e568)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 16:03:37 -06:00
Gonglei
b5ad76a709 virtio-9p: use aliases instead of duplicate qdev properties
virtio-9p-pci duplicates the qdev properties of its
V9fsState child. This approach does not work well with
string or pointer properties since we must be careful
about leaking or double-freeing them.

Use the QOM alias property to forward property accesses to the
V9fsState child.  This way no duplication is necessary.

Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 48833071d9)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 16:03:30 -06:00
Gonglei
20dc758b7f virtio-balloon: fix virtio-balloon child refcount in transports
object_initialize() leaves the object with a refcount of 1.
object_property_add_child() adds its own reference which is dropped
again when the property is deleted.

The upshot of this is that we always have a refcount >= 1.  Upon hot
unplug the virtio-balloon child is not finalized!

Drop our reference after the child property has been added to the
parent.

Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 91ba212088)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 16:03:19 -06:00
Gonglei
0077793a00 virtio-rng: fix virtio-rng child refcount in transports
object_initialize() leaves the object with a refcount of 1.
object_property_add_child() adds its own reference which is dropped
again when the property is deleted.

The upshot of this is that we always have a refcount >= 1.  Upon hot
unplug the virtio-rng child is not finalized!

Drop our reference after the child property has been added to the
parent.

Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 352fa88dfb)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 16:02:58 -06:00
Gonglei
c4164eae39 virtio-rng: use aliases instead of duplicate qdev properties
virtio-rng-{pci, s390, ccw} all duplicate the
qdev properties of their VirtIORNG child.
This approach does not work well with string or pointer
properties since we must be careful about leaking or
double-freeing them.

Use the QOM alias property to forward property accesses to the
VirtIORNG child.  This way no duplication is necessary.

Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 8ee486ae33)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 16:02:50 -06:00
Gonglei
8c64b47eeb virtio-serial: fix virtio-serial child refcount in transports
object_initialize() leaves the object with a refcount of 1.
object_property_add_child() adds its own reference which is dropped
again when the property is deleted.

The upshot of this is that we always have a refcount >= 1.  Upon hot
unplug the virtio-serial child is not finalized!

Drop our reference after the child property has been added to the
parent.

Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit e77ca8b92a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 16:02:35 -06:00
Gonglei
aa383e9a83 virtio-serial: use aliases instead of duplicate qdev properties
virtio-serial-{pci, s390, ccw} all duplicate the
qdev properties of their VirtIOSerial child.
This approach does not work well with string or pointer
properties since we must be careful about leaking or
double-freeing them.

Use the QOM alias property to forward property accesses to the
VirtIOSerial child.  This way no duplication is necessary.

Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 4f456d8025)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 16:02:28 -06:00
Gonglei
f06c87b119 virtio/vhost-scsi: fix virtio-scsi/vhost-scsi child refcount in transports
object_initialize() leaves the object with a refcount of 1.
object_property_add_child() adds its own reference which is dropped
again when the property is deleted.

The upshot of this is that we always have a refcount >= 1.  Upon hot
unplug the virtio-scsi/vhost-scsi child is not finalized!

Drop our reference after the child property has been added to the
parent.

Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 1312f12bcc)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 16:02:13 -06:00
Gonglei
eb5388e260 virtio/vhost-scsi: use aliases instead of duplicate qdev properties
{virtio, vhost}-scsi-{pci, s390, ccw} all duplicate the
qdev properties of their VirtIOSCSI/VHostSCSI child.
This approach does not work well with string or pointer
properties since we must be careful about leaking or
double-freeing them.

Use the QOM alias property to forward property accesses to the
VirtIOSCSI/VHostSCSI child. This way no duplication is necessary.

Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit c39343fd81)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 16:01:43 -06:00
Gonglei
83f81f344f virtio-net: fix virtio-net child refcount in transports
object_initialize() leaves the object with a refcount of 1.
object_property_add_child() adds its own reference which is dropped
again when the property is deleted.

The upshot of this is that we always have a refcount >= 1.  Upon hot
unplug the virtio-net child is not finalized!

Drop our reference after the child property has been added to the
parent.

Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 6a0c6b5978)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 16:01:03 -06:00
Gonglei
b6bd501d6a virtio-net: use aliases instead of duplicate qdev properties
virtio-net-pci, virtio-net-s390, and virtio-net-ccw all duplicate the
qdev properties of their VirtIONet child. This approach does not work
well with string or pointer properties since we must be careful about
leaking or double-freeing them.

Use the QOM alias property to forward property accesses to the
VirtIONet child.  This way no duplication is necessary.

Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 7779edfeb1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 16:00:47 -06:00
Paolo Bonzini
0369529b37 vhost-scsi: use virtio_ldl_p
This helps for cross-endian configurations.

Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 7ce0425575)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 16:00:06 -06:00
Eduardo Habkost
c29bf825ee smbios: Fix assertion on socket count calculation
QEMU currently allows the number of VCPUs to not be a multiple of the
number of threads per socket, but the smbios socket count calculation
introduced by commit c97294ec1b doesn't
take that into account, triggering an assertion. e.g.:

  $ ./x86_64-softmmu/qemu-system-x86_64 -smp 4,sockets=2,cores=6,threads=1
  qemu-system-x86_64: /home/ehabkost/rh/proj/virt/qemu/hw/i386/smbios.c:825: smbios_get_tables: Assertion `smbios_smp_sockets >= 1' failed.
  Aborted (core dumped)

Socket count calculation doesn't belong to smbios.c and should
eventually be moved to the main SMP topology configuration code. But
while we don't move the code, at least make it correct by rounding up
the division.
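
A small standalone sketch of the rounded-up calculation for the example above (DIV_ROUND_UP defined locally here for illustration):

    #include <stdio.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
        int smp_cpus = 4, smp_cores = 6, smp_threads = 1;
        /* plain division gives 4 / 6 = 0 sockets and trips the assertion;
         * rounding up gives 1 */
        int sockets = DIV_ROUND_UP(smp_cpus, smp_cores * smp_threads);
        printf("sockets = %d\n", sockets);
        return 0;
    }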

Cc: Gabriel Somlo <somlo@cmu.edu>
Cc: qemu-stable@nongnu.org
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-By: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

(cherry picked from commit 7dfddd7f88)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 15:58:49 -06:00
Zhang Haoyu
e2d402d0a1 snapshot: fix referencing wrong variable in while loop in do_delvm
The while loop variable is "bs1",
but "bs" is always passed to bdrv_snapshot_delete_by_id_or_name.
Broken in commit a89d89d, v1.7.0.

Signed-off-by: Zhang Haoyu <zhanghy@sangfor.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit af95738754)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 15:55:00 -06:00
Michael Roth
4d492e8909 tests: avoid running duplicate qom-tests
Since 3687d532 we've been unconditionally adding qom-test to our qtests
for every arch. However, some archs inherit their tests from Makefile
variables for other archs, such as i386/x86_64,
microblaze/microblazeel, and xtensa/xtensaeb. Since these are evaluated
in a lazy manner, we ultimately end up adding qom-test twice.

In the case of x86_64, where we have a large number of machine types that
we rerun qom-test for, this has led to a fairly noticeable increase
in the overall run-time of `make check` (78s vs. 42s on my machine).
Similar speed-ups are visible for other such archs, but not nearly as
significant.

Fix this by only adding qom-test to an arch's test list if it's not
already present.

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Cc: qemu-stable@nongnu.org
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit 2b8419cb49)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 15:52:23 -06:00
zhanghailiang
45c46f20c6 pc-dimm: Don't check dimm->node when there is non-NUMA config
The memory hotplug feature should not break when there is no NUMA option.

This patch also allows pc-dimm to be used as a replacement for initial memory
in non-NUMA configs.

Note: after this patch, memory hotplug works normally for Linux guests both
with and without a NUMA option. Windows guests still cannot hotplug memory
with a non-NUMA config; that is a Windows limitation.

Reviewed-By: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit fc50ff0666)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 15:49:49 -06:00
Andreas Färber
c4379ce8ef ivshmem: Fix fd leak on error
Reported-by: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 3a31cff112)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 15:43:56 -06:00
Sebastian Krahmer
a95569d24f ivshmem: Fix potential OOB r/w access
Fix OOB access via malformed incoming_posn parameters
and check that requested memory is actually alloc'ed.

Signed-off-by: Sebastian Krahmer <krahmer@suse.de>
[AF: Rebased, cleanups, avoid fd leak]
Cc: qemu-stable@nongnu.org
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

(cherry picked from commit 34bc07c528)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 15:43:42 -06:00
Stefan Hajnoczi
15905fde7b ivshmem: validate incoming_posn value from server
Check incoming_posn to avoid out-of-bounds array accesses if the ivshmem
server on the host sends invalid values.
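
A sketch of the bounds check, with illustrative names:

    #include <stdbool.h>

    static bool incoming_posn_valid(long posn, long nb_peers)
    {
        /* posn is later used as an array index, so it must lie inside
         * the peer table */
        return posn >= 0 && posn < nb_peers;
    }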

Cc: Cam Macdonell <cam@cs.ualberta.ca>
Reported-by: Sebastian Krahmer <krahmer@suse.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
[AF: Tighten upper bound check for posn in close_guest_eventfds()]
Cc: qemu-stable@nongnu.org
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

(cherry picked from commit 363ba1c72f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 15:43:21 -06:00
Stefan Hajnoczi
f1a842948a ivshmem: Check ivshmem_read() size argument
The third argument to the fd_read() callback implemented by
ivshmem_read() is the number of bytes, not a flags field.  Fix this and
check we received enough bytes before accessing the buffer pointer.

Cc: Cam Macdonell <cam@cs.ualberta.ca>
Reported-by: Sebastian Krahmer <krahmer@suse.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
[AF: Handle partial reads via FIFO]
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-stable@nongnu.org
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

(cherry picked from commit a2e9011b41)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-06 15:43:03 -06:00
Damjan Marion
09d552b40f vhost-user: fix VIRTIO_NET_F_MRG_RXBUF negotiation
The header length check should happen only if the backend is the kernel.
For a user backend there is no reason to reset this bit.

The vhost-user code does not define .has_vnet_hdr_len, so
VIRTIO_NET_F_MRG_RXBUF cannot be negotiated even if both sides
support it.

Signed-off-by: Damjan Marion <damarion@cisco.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit d8e80ae37a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-05 20:22:59 -06:00
Luiz Capitulino
d754428b9b virtio-balloon: fix integer overflow in memory stats feature
When a QMP client changes the polling interval time by setting
the guest-stats-polling-interval property, the interval value
is stored and manipulated as an int64_t variable.

However, the balloon_stats_change_timer() function, which is
used to set the actual timer with the interval value, takes
an int instead, causing an overflow for big interval values.

This commit fixes this bug by changing balloon_stats_change_timer()
to take an int64_t, and it also limits the polling interval value
to UINT_MAX to avoid other kinds of overflow.
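
A standalone sketch of the truncation problem (function names illustrative):

    #include <stdint.h>
    #include <stdio.h>

    static void change_timer_int(int secs)       { printf("int:     %d\n", secs); }
    static void change_timer_int64(int64_t secs) { printf("int64_t: %lld\n", (long long)secs); }

    int main(void)
    {
        int64_t interval = 10000000000LL;   /* large value supplied via QMP */
        change_timer_int((int)interval);    /* silently truncated */
        change_timer_int64(interval);       /* preserved */
        return 0;
    }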

Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
(cherry picked from commit 1f9296b51a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-05 20:18:58 -06:00
Stratos Psomadakis
5d350980f6 monitor: Reset HMP mon->rs in CHR_EVENT_OPEN
Commit cdaa86a54 ("Add G_IO_HUP handler for socket chardev") exposed a bug in
the way the HMP monitor handles its command buffer. When a client closes the
connection to the monitor, tcp_chr_read() will detect the G_IO_HUP condition
and call tcp_chr_disconnect() to close the server-side connection too. Due to
the fact that monitor reads 1 byte at a time (for each tcp_chr_read()), the
monitor readline state / buffers might contain junk (i.e. a half-finished
command). Thus, without calling readline_restart() on mon->rs in
CHR_EVENT_OPEN, future HMP commands will fail.

Signed-off-by: Stratos Psomadakis <psomas@grnet.gr>
Signed-off-by: Dimitris Aragiorgis <dimara@grnet.gr>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
(cherry picked from commit e5554e2015)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-05 09:46:07 -06:00
Fam Zheng
ff1f973003 qemu-iotests: Test missing "driver" key for blockdev-add
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
(cherry picked from commit fe509ee237)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-04 13:44:49 -06:00
Michael Roth
0b2d2e094a tests: add QMP input visitor test for unions with no discriminator
This is more of an exercise of the dealloc visitor, where it may
erroneously use an uninitialized discriminator field as indication
that union fields corresponding to that discriminator field/type are
present, which can lead to attempts to free random chunks of heap
memory.

Cc: qemu-stable@nongnu.org
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
(cherry picked from commit cb55111b4e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-04 13:44:20 -06:00
Michael Roth
4a58f3c2d8 qapi: dealloc visitor, implement visit_start_union
If the .data field of a QAPI Union is NULL, we don't need to free
any of the union fields.

Make use of the new visit_start_union interface to access this
information and instruct the generated code to not visit these
fields when this occurs.

Cc: qemu-stable@nongnu.org
Reported-by: Fam Zheng <famz@redhat.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
(cherry picked from commit 146db9f919)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-04 13:44:04 -06:00
Michael Roth
96c6cf6d30 qapi: add visit_start_union and visit_end_union
In some cases an input visitor might bail out on filling out a
struct for various reasons, such as missing fields when running
in strict mode. In the case of a QAPI Union type, this may lead
to cases where the .kind field which encodes the union type
is uninitialized. Subsequently, other visitors, such as the
dealloc visitor, may use this .kind value as if it were
initialized, leading to assumptions about the union type which
in this case may lead to segfaults. For example, freeing an
integer value.

However, we can generally rely on the fact that the always-present
.data void * field that we generate for these union types will
always be NULL in cases where .kind is uninitialized (at least,
there shouldn't be a reason where we'd do this purposefully).

So pass this information on to Visitor implementation via these
optional start_union/end_union interfaces so this information
can be used to guard against the situation above. We will make
use of this information in a subsequent patch for the dealloc
visitor.

Cc: qemu-stable@nongnu.org
Reported-by: Fam Zheng <famz@redhat.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
(cherry picked from commit cee2dedb85)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-04 13:43:48 -06:00
Pavel Dovgalyuk
b5fc105016 gdbstub: init mon_chr through qemu_chr_alloc
This patch initializes the gdbstub monitor with the qemu_chr_alloc function
instead of just allocating the memory. The initialization function call
is required because it also creates the chr_write_lock mutex, which is used
when writing to this character device.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 462efe9e53)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-01-04 13:41:55 -06:00
Peter Maydell
e1cf5a23d1 hw/arm/virt: fix pl011 and pl031 irq flags
The pl011 and pl031 devices both use level triggered interrupts,
but the device tree we construct was incorrectly telling the
kernel to configure the GIC to treat them as edge triggered.
This meant that output from the pl011 would hang after a while.
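
Concretely, the third cell of the "interrupts" property written into the
generated device tree had to switch from edge to level; a sketch, with
macro and variable names quoted from memory of hw/arm/virt.c and
therefore approximate:

    /* the third cell of "interrupts" selects the trigger type for the GIC */
    qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupts",
                           GIC_FDT_IRQ_TYPE_SPI, irq,
                           GIC_FDT_IRQ_FLAGS_LEVEL_HI /* was EDGE_LO_HI */);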

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1410274423-9461-1-git-send-email-peter.maydell@linaro.org
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Cc: qemu-stable@nongnu.org
(cherry picked from commit 0be969a2d9)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-12-24 16:10:19 -06:00
Greg Kurz
490a0f887e spapr_pci: map the MSI window in each PHB
On sPAPR, virtio devices are connected to the PCI bus and use MSI-X.
Commit cc943c36fa has modified MSI-X
so that writes are made using the bus master address space and follow
the IOMMU path.

Unfortunately, the IOMMU address space does not have an
MSI window: the notification is silently dropped in unassigned_mem_write
instead of reaching the guest... The most visible effect is that all
virtio devices are non-functional on sPAPR since then. :(

This patch does the following:
1) map the MSI window into the IOMMU address space for each PHB
   - since each PHB instantiates its own IOMMU address space, we
     can safely map the window at a fixed address (SPAPR_PCI_MSI_WINDOW)
   - no real need to keep the MSI window setup in a separate function,
     the spapr_pci_msi_init() code moves to spapr_phb_realize().

2) kill the global MSI window as it is not needed in the end

Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
(cherry picked from commit 8c46f7ec85)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-12-24 16:08:16 -06:00
Michael S. Tsirkin
e4fb3debc3 virtio-pci: enable bus master for old guests
commit cc943c36fa
    pci: Use bus master address space for delivering MSI/MSI-X messages
breaks virtio-net for rhel6.[56] x86 guests because they don't
enable bus mastering for virtio PCI devices. For the same reason,
rhel6.[56] ppc64 guests cannot boot on a virtio-blk disk anymore.

Old guests forgot to enable bus mastering, so enable it automatically when
the guest sets DRIVER (guests use some devices before DRIVER_OK).
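
A sketch of the workaround in the status write path; field names follow
the VirtIOPCIProxy/PCIDevice structures of this period, and the exact
hunk may differ:

    /* if the guest reports DRIVER but never set PCI_COMMAND_MASTER,
     * enable bus mastering on its behalf so MSI/MSI-X delivery through
     * the bus master address space keeps working */
    if ((val & VIRTIO_CONFIG_S_DRIVER) &&
        !(proxy->pci_dev.config[PCI_COMMAND] & PCI_COMMAND_MASTER)) {
        memory_region_set_enabled(&proxy->pci_dev.bus_master_enable_region,
                                  true);
    }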

Reported-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Reviewed-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Tested-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit e43c0b2ea5)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-12-24 16:08:16 -06:00
Jan Kiszka
7fb768ea30 pci: Use bus master address space for delivering MSI/MSI-X messages
The spec says (and real HW confirms this) that, if the bus master bit
is 0, the device will not generate any PCI accesses. MSI and MSI-X
messages fall among these, so we should use the corresponding address
space to deliver them. This will prevent delivery if bus master support
is disabled.
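
The mechanical change is small; a sketch of the MSI delivery path (the
MSI-X side is analogous, and the helper name is from memory):

    void msi_send_message(PCIDevice *dev, MSIMessage msg)
    {
        /* deliver through the device's bus master address space instead
         * of address_space_memory, so the write is dropped when bus
         * mastering is disabled, matching real hardware */
        stl_le_phys(&dev->bus_master_as, msg.address, msg.data);
    }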

Cc: qemu-stable@nongnu.org
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit cc943c36fa)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-12-24 16:08:15 -06:00
Eduardo Habkost
2151206778 kvmclock: Add comment explaining why we need cpu_clean_all_dirty()
Try to explain why commit 317b0a6d8b
needed a cpu_clean_all_dirty() call just after calling
cpu_synchronize_all_states().

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Cc: Andrey Korolyov <andrey@xdel.ru>
Cc: Marcin Gibuła <m.gibula@beyond.pl>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 1154d84dcc)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-12-24 16:08:15 -06:00
Alexander Graf
c35ba0d9e4 kvmclock: Ensure time in migration never goes backward
When we migrate we ask the kernel about its current belief on what the guest
time would be. However, I've seen cases where the kvmclock guest structure
indicates a time more recent than the kvm returned time.

To make sure we never go backwards, calculate what the guest would have seen
as the time at the point of migration and use that value instead of the
kernel-returned one when it is more recent. This bases the view of the
kvmclock after migration on the same foundation in host as well as guest.
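
A sketch of the clamping logic, close in spirit to the hw/i386/kvm/clock.c
change but with names and surrounding structure treated as approximate:

    /* what the guest itself would read from its kvmclock page right now */
    uint64_t time_at_migration = kvmclock_current_nsec(s);

    /* what KVM_GET_CLOCK reported */
    s->clock = data.clock;

    /* never let the guest-visible clock go backwards across migration */
    if (time_at_migration > s->clock) {
        s->clock = time_at_migration;
    }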

Signed-off-by: Alexander Graf <agraf@suse.de>
Cc: qemu-stable@nongnu.org
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 9a48bcd1b8)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-12-24 15:59:11 -06:00
Marcelo Tosatti
61048e1942 kvmclock: Ensure proper env->tsc value for kvmclock_current_nsec calculation
Ensure proper env->tsc value for kvmclock_current_nsec calculation.

Reported-by: Marcin Gibuła <m.gibula@beyond.pl>
Analyzed-by: Marcin Gibuła <m.gibula@beyond.pl>
Cc: qemu-stable@nongnu.org
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 317b0a6d8b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-12-24 15:59:00 -06:00
Marcelo Tosatti
a9ed61533f Introduce cpu_clean_all_dirty
Introduce cpu_clean_all_dirty, to force subsequent cpu_synchronize_all_states
to read in-kernel register state.

Cc: qemu-stable@nongnu.org
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit de9d61e83d)
Conflicts:
	kvm-all.c

*removed context dependency on kvm_cpu_synchronize_post_init

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-12-24 15:56:18 -06:00
Dr. David Alan Gilbert
3807aeb1d4 xhci PCIe endpoint migration compatibility fix
Add back the PCIe config capabilities on XHCI cards in non-PCIe slots,
but only for machine types before 2.1.

This fixes a migration incompatibility in the XHCI PCI devices
caused by:
   058fdcf52c - xhci: add endpoint cap on express bus only

Note that in fixing it for compatibility with older QEMUs, it breaks
compatibility with existing QEMU 2.1 releases on older machine types.

The status before this patch was (if it used an XHCI adapter):
   machine type | source qemu
     any           pre-2.1     - FAIL
     any           2.1...      - PASS

With this patch:
   machine type | source qemu
     any           pre-2.1    - PASS
     pre-2.1       2.1...     - FAIL
     2.1           2.1...     - PASS

A test to trigger it is to add '-device nec-usb-xhci,id=xhci,addr=0x12'
to the command line.

Cc: qemu-stable@nongnu.org
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit e6043e92c2)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-12-24 15:45:02 -06:00
Luiz Capitulino
ff3bd5e4bb exec: file_ram_alloc(): print error when prealloc fails
If memory allocation fails when using the -mem-prealloc command-line
option, QEMU exits without printing any error information to
the user:

 # qemu [...] -m 1G -mem-prealloc -mem-path /dev/hugepages
 # echo $?
 1

This commit adds an error message, so that we print instead:

 # qemu [...] -m 1G -mem-prealloc -mem-path /dev/hugepages
 qemu: unable to map backing store for hugepages: Cannot allocate memory

Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
(cherry picked from commit e4d9df4fb1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-12-24 15:43:39 -06:00
Gonglei
d6af26d6ce qdev: Add cleanup logic in device_set_realized() to avoid resource leak
At present, this function doesn't have partial cleanup implemented,
which will cause resource leaks in some scenarios.

Example:

1. Assume that "dc->realize(dev, &local_err)" executes successfully
   and local_err == NULL;
2. device hotplug in hotplug_handler_plug() executes but fails
   (which can easily happen). Then local_err != NULL;
3. error_propagate(errp, local_err) and return. But the resources
   which have been allocated in dc->realize() will be leaked.
Simple backtrace:
  dc->realize()
   |->device_realize
            |->pci_qdev_init()
                |->do_pci_register_device()
                |->etc.

Add fuller cleanup logic which ensures that the function jumps to the
appropriate error label as soon as local_err is found to be populated
at each relevant point.
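
The resulting error-handling shape, sketched; labels and lookups are
simplified and this is not the full function:

    static void device_set_realized_sketch(DeviceState *dev, Error **errp)
    {
        DeviceClass *dc = DEVICE_GET_CLASS(dev);
        HotplugHandler *hotplug_ctrl = qdev_get_hotplug_handler(dev);
        Error *local_err = NULL;

        dc->realize(dev, &local_err);
        if (local_err != NULL) {
            goto fail;
        }

        if (hotplug_ctrl) {
            hotplug_handler_plug(hotplug_ctrl, dev, &local_err);
            if (local_err != NULL) {
                /* undo exactly what dc->realize() allocated */
                goto post_realize_fail;
            }
        }
        return;

    post_realize_fail:
        if (dc->unrealize) {
            dc->unrealize(dev, NULL);   /* best-effort cleanup */
        }
    fail:
        error_propagate(errp, local_err);
    }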

Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Andreas Färber <afaerber@suse.de>
(cherry picked from commit 1d45a705fc)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-12-24 15:36:22 -06:00
Gonglei
8bb90ee80a qdev: Use NULL instead of local_err for qbus_child unrealize
Forcefully unrealize all children regardless of errors in earlier
iterations (if any). We should keep going with the cleanup operation
rather than report an error immediately, so store the first
child unrealization failure and propagate it at the end. We also
forcefully unregister the vmsd and unrealize the actual object, too.
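
The idiom described above, sketched; the loop and field names follow
qdev-core.h of this period, but this is not a verbatim hunk:

    Error *first_err = NULL;
    BusChild *kid;

    QTAILQ_FOREACH(kid, &bus->children, sibling) {
        Error *local_err = NULL;

        /* unrealize the child; do not stop the loop on failure */
        object_property_set_bool(OBJECT(kid->child), false, "realized",
                                 &local_err);
        if (local_err && !first_err) {
            first_err = local_err;   /* remember only the first failure */
        } else {
            error_free(local_err);   /* later failures are dropped */
        }
    }
    error_propagate(errp, first_err);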

Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Andreas Färber <afaerber@suse.de>
(cherry picked from commit cd4520adca)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-12-24 15:35:32 -06:00
Michael Roth
562d6b4f7f Update version for v2.1.2 release
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-25 14:52:04 -05:00
Petr Matousek
9a72433843 slirp: udp: fix NULL pointer dereference because of uninitialized socket
When the guest sends a udp packet with source port and source addr 0,
an uninitialized socket is picked up when looking for matching, already
created udp sockets, and is later passed to sosendto(), where a NULL pointer
dereference is hit during the so->slirp->vnetwork_mask.s_addr access.

Fix this by checking that the socket is not just a socket stub.

This is CVE-2014-3640.

Signed-off-by: Petr Matousek <pmatouse@redhat.com>
Reported-by: Xavier Mehrenberger <xavier.mehrenberger@airbus.com>
Reported-by: Stephane Duverger <stephane.duverger@eads.net>
Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Message-id: 20140918063537.GX9321@dhcp-25-225.brq.redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 01f7cecf00)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-24 11:11:52 -05:00
Michael S. Tsirkin
00dd2b22f6 pc: leave more space for BIOS allocations
Since QEMU 2.1, we are allocating more space for ACPI tables, so no
space is left after initrd for the BIOS to allocate memory.

Besides ACPI tables, there are a few other uses of high memory in
SeaBIOS: SMBIOS tables and USB drivers use it in particular.  These uses
allocate a very small amount of memory.  Malloc metadata also lives
there.  So we need _some_ extra padding there to avoid initrd breakage,
but not much.

John Snow found a case where RHEL5 was broken by the recent change to
ACPI_TABLE_SIZE; in his case 4KB of extra padding are fine, but just to
be safe I am adding 32KB, which is roughly the same amount of padding
that was left by QEMU 2.0 and earlier.

Move initrd to leave some space for the BIOS.

Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reported-by: John Snow <jsnow@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 438f92ee9f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-23 10:48:06 -05:00
Michael S. Tsirkin
80f4d021f0 Revert "virtio: don't call device on !vm_running"
This reverts commit a1bc7b827e422e1ff065640d8ec5347c4aadfcd8.
    virtio: don't call device on !vm_running
It turns out that virtio-net assumes that vm_running
is updated before the device status callback in many places,
so this change leads to asserts.
The previous commit fixes the root issue that motivated
a1bc7b827e422e1ff065640d8ec5347c4aadfcd8 differently,
so there's no longer a need for this change.

In the future, we might be able to drop checking vm_running
completely, and check vm state directly.

Reported-by: Dietmar Maurer <dietmar@proxmox.com>
Cc: qemu-stable@nongnu.org
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 9e8e8c4865)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-23 10:48:06 -05:00
Michael S. Tsirkin
074e347138 virtio-net: drop assert on vm stop
On vm stop, the vm_running state is set to stopped
before the device is notified, so callbacks can get invoked with
vm_running = false; this is not an error.

Cc: qemu-stable@nongnu.org
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 131c5221fe)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-23 10:48:06 -05:00
Eduardo Habkost
9e8d994111 Revert "rng-egd: remove redundant free"
This reverts commit 5e490b6a50.

Cc: qemu-stable@nongnu.org
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit abb4d5f2e2)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-23 10:48:06 -05:00
Eduardo Habkost
a56b9cfd86 hw/machine: Free old values of string properties
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Marcel Apfelbaum <marcel.a@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Amos Kong <akong@redhat.com>
Cc: qemu-stable@nongnu.org
(cherry picked from commit 556068eed0)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-23 10:48:06 -05:00
Greg Kurz
07178559a9 Revert "spapr_pci: map the MSI window in each PHB"
This patch is predicated on cc943c, which was dropped from
the stable tree for other reasons.

This reverts commit 0824ca6bd1.

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-23 10:48:06 -05:00
Michael Roth
3cb451edb2 Update version for v2.1.1 release
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 14:30:45 -05:00
Eduardo Habkost
82d80e1f0b target-i386: Support migratable=no properly
When the "migratable" property was implemented, the behavior was tested
by changing the default in the code, but actually using the option on
the command-line (e.g. "-cpu host,migratable=false") doesn't work as
expected. This is a regression for a common use case of "-cpu host",
which is to enable features that are supported by the host CPU + kernel
before feature-specific code is added to QEMU.

Fix this by initializing the feature words for "-cpu host" on
x86_cpu_parse_featurestr(), right after parsing the CPU options.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Andreas Färber <afaerber@suse.de>
(cherry picked from commit 4d1b279b06)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:58 -05:00
Pavel Dovgaluk
5dd076a9f8 exec: Save CPUState::exception_index field
This patch adds a subsection with the exception_index field to the VMState for
correctly saving the CPU state.
Without this patch, the simulator could miss a pending exception in the saved
virtual machine state.

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Cc: qemu-stable@nongnu.org
Signed-off-by: Andreas Färber <afaerber@suse.de>
(cherry picked from commit 6c3bff0ed8)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:58 -05:00
Sebastian Tanase
257e9cfce2 pty: Fix byte loss bug when connecting to pty
When trying to print data to the pty, we first check if it is connected.
If not, we try to reconnect, but we drop the pending data even if we
have successfully reconnected; this makes us lose the first byte of the very
first transmission.
This small fix addresses the issue by checking once more if the pty is connected
after having tried to reconnect.

Signed-off-by: Sebastian Tanase <sebastian.tanase@openwide.fr>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit cf7330c759)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:58 -05:00
Gerd Hoffmann
1aa87d3689 spice: make sure we don't overflow ssd->buf
Related spice-only bug.  We have a fixed 16 MB buffer here, being
presented to the spice-server as qxl video memory in case spice is
used with a non-qxl card.  It's also used with qxl in vga mode.

When using display resolutions requiring more than 16 MB of memory we
are going to overflow that buffer.  In theory the guest can write beyond
the buffer, indirectly via the spice-server.  The spice-server clears the
memory after setting a new video mode though, triggering a segfault in the
overflow case, so qemu crashes before the guest has a chance to do
something evil.

Fix that by switching to dynamic allocation for the buffer.

CVE-2014-3615

Cc: qemu-stable@nongnu.org
Cc: secalert@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
(cherry picked from commit ab9509ccea)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:58 -05:00
Gerd Hoffmann
7fe5418d9f vbe: rework sanity checks
Plug a bunch of holes in the bochs dispi interface parameter checking.
Add a function doing verification on all registers.  Call that
unconditionally on every register write.  That way we should catch
everything, even changing one register affecting the valid range of
another register.

Some of the holes have been added by commit
e9c6149f6a.  Before that commit the
maximum possible framebuffer (VBE_DISPI_MAX_XRES * VBE_DISPI_MAX_YRES *
32 bpp) has been smaller than the qemu vga memory (8MB) and the checking
for VBE_DISPI_MAX_XRES + VBE_DISPI_MAX_YRES + VBE_DISPI_MAX_BPP was ok.

Some of the holes have been there forever, such as
VBE_DISPI_INDEX_X_OFFSET and VBE_DISPI_INDEX_Y_OFFSET register writes
lacking any verification.

Security impact:

(1) Guest can make the ui (gtk/vnc/...) use memory ranges outside the vga
frame buffer as source  ->  host memory leak.  Memory isn't leaked to
the guest but to the vnc client though.

(2) Qemu will segfault in case the memory range happens to include
unmapped areas  ->  Guest can DoS itself.

The guest can not modify host memory, so I don't think this can be used
by the guest to escape.

CVE-2014-3615

Cc: qemu-stable@nongnu.org
Cc: secalert@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
(cherry picked from commit c1b886c45d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:58 -05:00
Gerd Hoffmann
c5042f04f7 vbe: make bochs dispi interface return the correct memory size with qxl
VgaState->vram_size is the size of the pci bar.  In the case of qxl, not the
whole pci bar can be used as the vga framebuffer.  Add a new variable
vbe_size to handle that case.  By default (if unset) it equals
vram_size, but qxl can set vbe_size to something else.

This makes sure VBE_DISPI_INDEX_VIDEO_MEMORY_64K returns correct results
and sanity checks are done with the correct size too.

Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
(cherry picked from commit 54a85d4624)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:58 -05:00
Michael S. Tsirkin
cf29a88391 virtio-net: purge outstanding packets when starting vhost
Whenever we start vhost, virtio could have outstanding packets
queued; when they complete later, we'll modify the ring
while vhost is processing it.

To prevent this, purge outstanding packets on vhost start.

Cc: qemu-stable@nongnu.org
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 086abc1ccd)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:58 -05:00
Michael S. Tsirkin
08743db463 net: complete all queued packets on VM stop
This completes all packets, ensuring that callbacks
will not run when VM is stopped.

Cc: qemu-stable@nongnu.org
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit ca77d85e1d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:58 -05:00
Michael S. Tsirkin
d9c06c0d79 net: invoke callback when purging queue
Devices rely on packet callbacks eventually running,
but we violate this rule whenever we purge the queue.
To fix, invoke callbacks on all packets on purge.
Set the length to 0, so that callers can detect that
this happened and re-queue if necessary.
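
A sketch of the purge loop with the callback invocation added; this
follows net/queue.c closely but should be read as illustrative:

    QTAILQ_FOREACH_SAFE(packet, &queue->packets, entry, next) {
        if (packet->sender == from) {
            QTAILQ_REMOVE(&queue->packets, packet, entry);
            queue->nq_count--;
            if (packet->sent_cb) {
                /* length 0 tells the caller the packet was purged, not sent */
                packet->sent_cb(packet->sender, 0);
            }
            g_free(packet);
        }
    }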

Cc: qemu-stable@nongnu.org
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 07d8084624)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:58 -05:00
Michael S. Tsirkin
f321710cd4 virtio: don't call device on !vm_running
On vm stop, virtio changes the vm_running state
too soon, so callbacks can get invoked with
vm_running = false;

Cc: qemu-stable@nongnu.org
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 269bd822e7)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:58 -05:00
zhanghailiang
ec48bfd57b net: Forbid dealing with packets when VM is not running
For all NICs (except virtio-net) emulated by qemu,
such as e1000, rtl8139, pcnet and ne2k_pci,
qemu can still receive packets when the VM is not running.

If this happens in *migration's* last PAUSE VM stage, but
before the end of the migration, the newly received packets will possibly dirty
parts of RAM which have been cached in *iovec* (to be sent asynchronously) and
dirty parts of new RAM whose changes will be missed.
This will lead to serious network faults in the VM.

To avoid this, we forbid receiving packets in the generic net code when
the VM is not running (a sketch of the check follows the reproduction steps below).

Bug reproduction steps:
(1) Start a VM configured with at least one NIC
(2) In the VM, open several terminals and run *ping IP -i 0.1*
(3) Migrate the VM repeatedly between two hosts
The *ping* command in the VM will very likely fail with the message
'Destination Host Unreachable', and the NIC in the VM will stay unavailable unless you
run 'service network restart'.
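
A sketch of the guard in the generic layer; the function is
qemu_can_send_packet() in net/net.c, with the rest of its body shown only
schematically:

    int qemu_can_send_packet(NetClientState *sender)
    {
        /* refuse all traffic while the VM is paused/stopped */
        if (!runstate_is_running()) {
            return 0;
        }

        /* ... existing peer/receive_disabled checks follow ... */
    }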

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit e1d64c084b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:58 -05:00
zhanghailiang
eb36f79d59 acpi-build: Set FORCE_APIC_CLUSTER_MODEL bit for FADT flags
If we start Windows 2008 R2 DataCenter with fewer than 8 CPUs,
the system will use the APIC flat logical destination mode as its default
configuration, which has an upper limit of 8 CPUs.

The fault is that the VM cannot show all processors within Task Manager if
we hot-add CPUs and the number of CPUs in the VM exceeds the limit of 8.

If we use the cluster destination model, the problem is solved.

Note:
This flag was introduced later than the ACPI v1.0 specification while QEMU
generates v1.0 tables only, but...

the linux kernel ignores this flag, so the patch has no influence on it.

Tested with Win[XPsp3|Srv2003EE|Srv2008DC|Srv2008R2|Srv2012R2]: there
are no BSODs and guests boot just fine. In cases where the guest doesn't support
cpu-hotplug, the cpu becomes visible after reboot, and in cases where the guest
supports cpu-hotplug, it works as expected with this patch.

Cc: qemu-stable@nongnu.org
Signed-off-by: huangzhichao <huangzhichao@huawei.com>
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-By: Igor Mammedov <imammedo@redhat.com>
(cherry picked from commit 07b81ed937)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:58 -05:00
Michael S. Tsirkin
34d41c1a20 vhost-scsi: init backend features earlier
As the vhost core can use backend_features during init, clear it earlier to
avoid using uninitialized memory.
This use would be harmless since vhost-scsi ignores the result
anyway, but initializing earlier will help prevent valgrind errors
and make scsi and net behave similarly.

Cc: qemu-stable@nongnu.org
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 3a1655fc53)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:57 -05:00
Jason Wang
6f8d05a8f8 vhost_net: init acked_features to backend_features
commit 2e6d46d77e (vhost: add
vhost_get_features and vhost_ack_features) removes the step that
initializes the acked_features to backend_features.

As this field is now uninitialized, vhost initialization will sometimes
fail.

To fix, initialize acked_features on each ack.

Tested-by: Andrey Korolyov <andrey@xdel.ru>
Cc: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit b49ae9138d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:57 -05:00
Jason Wang
5e83dae44e vhost_net: start/stop guest notifiers properly
commit a9f98bb5eb "vhost: multiqueue
support" changed the order of stopping the device. Previously
vhost_dev_stop would disable backend and only afterwards, unset guest
notifiers. We now unset guest notifiers while vhost is still
active. This can lose interrupts causing guest networking to fail. In
particular, this has been observed during migration.

To fix this, several other changes are needed:
- remove the hdev->started assertion in vhost.c since we may want to
start the guest notifiers before vhost starts and stop the guest
notifiers after vhost is stopped.
- introduce the vhost_net_set_vq_index() and call it before setting
guest notifiers. This is to guarantee vhost_net has the correct
virtqueue index when setting guest notifiers.

MST: fix up error handling.

Cc: qemu-stable@nongnu.org
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Andrey Korolyov <andrey@xdel.ru>
Reported-by: "Zhangjie (HZ)" <zhangjie14@huawei.com>
Tested-by: William Dauchy <william@gandi.net>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit cd7d1d26b0)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:57 -05:00
Knut Omang
ff34ca00fd pci: avoid losing config updates to MSI/MSIX cap regs
Since
commit 95d6580024
    msi: Invoke msi/msix_write_config from PCI core
msix config writes are lost, the value written is always 0.

Fix pci_default_write_config to avoid this.

Cc: qemu-stable@nongnu.org
Signed-off-by: Knut Omang <knut.omang@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit d7efb7e08e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:57 -05:00
Michael S. Tsirkin
e685d2abf7 virtio-net: don't run bh on vm stopped
commit 783e770693
    virtio-net: stop/start bh when appropriate

is incomplete: BH might execute within the same main loop iteration but
after vmstop, so in theory, we might trigger an assertion.
I was unable to reproduce this in practice,
but it seems clear enough that the potential is there, so worth fixing.

Cc: qemu-stable@nongnu.org
Reported-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit e8bcf84200)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:57 -05:00
Gerd Hoffmann
67cfda8776 qxl-render: add more sanity checks
Damn, the dirty rectangle values are signed integers.  So the checks
added by commit 788fbf042f are not good
enough; we also have to make sure they are not negative.

[ Note: There must be something broken in spice-server so we get
  negative values in the first place.  Bug opened:
  https://bugzilla.redhat.com/show_bug.cgi?id=1135372 ]

Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
(cherry picked from commit 503b3b33fe)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:57 -05:00
Peter Maydell
4fd144f8f5 target-arm: Correct Cortex-A57 ISAR5 and AA64ISAR0 ID register values
We implement the crypto extensions but were incorrectly reporting
ID register values for the Cortex-A57 which did not advertise
crypto. Use the correct values as described in the TRM.
With this fix Linux correctly detects presence of the crypto
features and advertises them in /proc/cpuinfo.

Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1408718660-7295-1-git-send-email-peter.maydell@linaro.org
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit c379621451)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:57 -05:00
Peter Maydell
ea774b8dd0 target-arm: Fix regression that disabled VFP for ARMv5 CPUs
Commit 2c7ffc414 added support for honouring the CPACR coprocessor
access control register bits which may disable access to VFP
and Neon instructions. However it failed to account for the
fact that the CPACR is only present starting from the ARMv6
architecture version, so it accidentally disabled VFP completely
for ARMv5 CPUs like the ARM926. Linux would detect this as
"no VFP present" and probably fall back to its own emulation,
but other guest OSes might crash or misbehave.

This fixes bug LP:1359930.

Reported-by: Jakub Jermar <jakub@jermar.eu>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1408714940-7192-1-git-send-email-peter.maydell@linaro.org
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit ed1f13d607)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:57 -05:00
Alex Williamson
3e8966df02 x86: Clear MTRRs on vCPU reset
The SDM specifies (June 2014 Vol3 11.11.5):

    On a hardware reset, the P6 and more recent processors clear the
    valid flags in variable-range MTRRs and clear the E flag in the
    IA32_MTRR_DEF_TYPE MSR to disable all MTRRs. All other bits in the
    MTRRs are undefined.

We currently do none of that, so whatever MTRR settings you had prior
to reset is what you have after reset.  Usually this doesn't matter
because KVM often ignores the guest mappings and uses write-back
anyway.  However, if you have an assigned device and an IOMMU that
allows NoSnoop for that device, KVM defers to the guest memory
mappings which are now stale after reset.  The result is that OVMF
rebooting on such a configuration takes a full minute to LZMA
decompress the firmware volume, a process that is nearly instant on
the initial boot.
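
A sketch of the reset-time clearing; the CPUX86State field names match
this period but the exact hunk is approximate:

    /* in x86_cpu_reset(): follow the SDM and disable all MTRRs */
    memset(env->mtrr_var, 0, sizeof(env->mtrr_var));
    memset(env->mtrr_fixed, 0, sizeof(env->mtrr_fixed));
    env->mtrr_deftype = 0;   /* clears the E (enable) flag */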

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 9db2efd95e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:57 -05:00
Alex Williamson
ba8576f338 x86: kvm: Add MTRR support for kvm_get|put_msrs()
The MTRR state in KVM currently runs completely independent of the
QEMU state in CPUX86State.mtrr_*.  This means that on migration, the
target loses MTRR state from the source.  Generally that's ok though
because KVM ignores it and maps everything as write-back anyway.  The
exception to this rule is when we have an assigned device and an IOMMU
that doesn't promote NoSnoop transactions from that device to be cache
coherent.  In that case KVM trusts the guest mapping of memory as
configured in the MTRR.

This patch updates kvm_get|put_msrs() so that we retrieve the actual
vCPU MTRR settings and therefore keep CPUX86State synchronized for
migration.  kvm_put_msrs() is also used on vCPU reset and therefore
allows future modifications of MTRR state at reset to be realized.

Note that the entries array used by both functions was already
slightly undersized for holding every possible MSR, so this patch
increases it beyond the 28 new entries necessary for MTRR state.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit d1ae67f626)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:57 -05:00
Alex Williamson
07f8c97f84 x86: Use common variable range MTRR counts
We currently define the number of variable range MTRR registers as 8
in the CPUX86State structure and vmstate, but use MSR_MTRRcap_VCNT
(also 8) to report to guests the number available.  Change this to
use MSR_MTRRcap_VCNT consistently.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit d8b5c67b05)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:57 -05:00
William Grant
72c9c9a05e target-i386: Don't forbid NX bit on PAE PDEs and PTEs
Commit e8f6d00c30 ("target-i386: raise
page fault for reserved physical address bits") added a check that the
NX bit is not set on PAE PDPEs, but it also added it to rsvd_mask for
the rest of the function. This caused any PDEs or PTEs with NX set to be
erroneously rejected, making PAE guests with NX support unusable.

Signed-off-by: William Grant <wgrant@ubuntu.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 1844e68eca)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:57 -05:00
Paolo Bonzini
3d8cc86e4f vl: process -object after other backend options
QOM backends can refer to chardevs, but not vice versa.  So
process -chardev and -fsdev options before -object.

This fixes the rng-egd backend to virtio-rng.

Reported-by: Amos Kong <akong@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 7b71758d79)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:56 -05:00
Greg Kurz
0824ca6bd1 spapr_pci: map the MSI window in each PHB
On sPAPR, virtio devices are connected to the PCI bus and use MSI-X.
Commit cc943c36fa has modified MSI-X
so that writes are made using the bus master address space and follow
the IOMMU path.

Unfortunately, the IOMMU address space does not have an
MSI window: the notification is silently dropped in unassigned_mem_write
instead of reaching the guest... The most visible effect is that all
virtio devices are non-functional on sPAPR since then. :(

This patch does the following:
1) map the MSI window into the IOMMU address space for each PHB
   - since each PHB instantiates its own IOMMU address space, we
     can safely map the window at a fixed address (SPAPR_PCI_MSI_WINDOW)
   - no real need to keep the MSI window setup in a separate function,
     the spapr_pci_msi_init() code moves to spapr_phb_realize().

2) kill the global MSI window as it is not needed in the end

Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
(cherry picked from commit 8c46f7ec85)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-10 09:30:28 -05:00
Stefan Hajnoczi
feb633411f thread-pool: avoid deadlock in nested aio_poll() calls
The thread pool has a race condition if two elements complete before
thread_pool_completion_bh() runs:

  If element A's callback waits for element B using aio_poll() it will
  deadlock since pool->completion_bh is not marked scheduled when the
  nested aio_poll() runs.

Fix this by marking the BH scheduled while thread_pool_completion_bh()
is executing.  This way any nested aio_poll() loops will enter
thread_pool_completion_bh() and complete the remaining elements.
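
The key line of the fix, sketched in context and paraphrased from
thread_pool_completion_bh():

    /* re-arm ourselves before running the callback: if the callback spins
     * in a nested aio_poll(), that loop will re-enter this BH and reap any
     * elements that completed in the meantime */
    qemu_bh_schedule(pool->completion_bh);

    elem->common.cb(elem->common.opaque, elem->ret);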

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 3c80ca158c)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:06 -05:00
Stefan Hajnoczi
75ada6b763 thread-pool: avoid per-thread-pool EventNotifier
EventNotifier is implemented using an eventfd or pipe.  It therefore
consumes file descriptors, which can be limited by rlimits and should
therefore be used sparingly.

Switch from EventNotifier to QEMUBH in thread-pool.c.  Originally
EventNotifier was used because qemu_bh_schedule() was not thread-safe
yet.

Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit c2e50e3d11)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:06 -05:00
Michael S. Tsirkin
be3af755ac pc: reserve more memory for ACPI for new machine types
commit 868270f23d
    acpi-build: tweak acpi migration limits
broke kernel loading with -kernel/-initrd: it doubled
the size of ACPI tables but did not reserve
enough memory.

As a result, issues on boot and halt are observed.

Fix this up by doubling reserved memory for new machine types.

Cc: qemu-stable@nongnu.org
Reported-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 927766c7d3)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:05 -05:00
Gonglei
bfe3e6f5e3 pcihp: fix possible array out of bounds
Prevent out-of-bounds array access on
acpi_pcihp_pci_status.

Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Cc: qemu-stable@nongnu.org
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
(cherry picked from commit fa365d7cd1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:05 -05:00
Michael S. Tsirkin
cd4acff8d0 hostmem: set MPOL_MF_MOVE
When memory is allocated on a wrong node, MPOL_MF_STRICT
doesn't move it - it just fails the allocation.
A simple way to reproduce the failure is with the mlock=on
realtime feature.

The code comment actually says: "ensure policy won't be ignored"
so setting MPOL_MF_MOVE seems like a better way to do this.

Cc: qemu-stable@nongnu.org
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

(cherry picked from commit 288d332202)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:05 -05:00
Ben Draper
4b59161253 vmxnet3: Pad short frames to minimum size (60 bytes)
When running VMware ESXi under qemu-kvm the guest discards frames
that are too short. Short ARP Requests will be dropped, this prevents
guests on the same bridge as VMware ESXi from communicating. This patch
simply adds the padding on the network device itself.
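
A sketch of the padding in the receive path, the same pattern e1000
already used; the buffer name is illustrative:

    #define MIN_BUF_SIZE 60   /* minimum Ethernet frame length, without FCS */
    uint8_t min_buf[MIN_BUF_SIZE];

    if (size < MIN_BUF_SIZE) {
        memcpy(min_buf, buf, size);
        memset(&min_buf[size], 0, MIN_BUF_SIZE - size);
        buf = min_buf;
        size = MIN_BUF_SIZE;
    }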

Signed-off-by: Ben Draper <ben@xrsa.net>
Reviewed-by: Dmitry Fleytman <dmitry@daynix.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit 40a87c6c9b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:05 -05:00
Fam Zheng
fab7560c35 blkdebug: Delete BH in bdrv_aio_cancel
Otherwise error_callback_bh will access the already released acb.

Cc: qemu-stable@nongnu.org
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit cbf95a0b11)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:05 -05:00
Stefan Hajnoczi
16c92cd629 qemu-iotests: add test case 101 for short file I/O
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 8d9eb33ca0)

Conflicts:
	tests/qemu-iotests/group

*fix up context mismatches due to lack of 099 and 103 tests

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:05 -05:00
Stefan Hajnoczi
dea6efe883 raw-posix: fix O_DIRECT short reads
The following O_DIRECT read from a <512 byte file fails:

  $ truncate -s 320 test.img
  $ qemu-io -n -c 'read -P 0 0 512' test.img
  qemu-io: can't open device test.img: Could not read image for determining its format: Invalid argument

Note that qemu-io completes successfully without the -n (O_DIRECT)
option.

This patch fixes qemu-iotests ./check -nocache -vmdk 059.

Cc: qemu-stable@nongnu.org
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Reported-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 61ed73cff4)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:05 -05:00
Peter Lieven
8c4edd743c block/iscsi: fix memory corruption on iscsi resize
bs->total_sectors is not yet updated at this point, resulting
in memory corruption if the volume has grown and data is written
to the newly available areas.

CC: qemu-stable@nongnu.org
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit d832fb4d66)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:05 -05:00
Christoffer Dall
504e2a7139 arm/virt: Use PSCI v0.2 function IDs in the DT when KVM uses PSCI v0.2
The current code supplies the PSCI v0.1 function IDs in the DT even when
KVM uses PSCI v0.2.

This will break guest kernels that only support PSCI v0.1 as they will
use the IDs provided in the DT.  Guest kernels with PSCI v0.2 support
are not affected by this patch, because they ignore the function IDs in
the device tree and rely on the architecture definition.

Define QEMU versions of the constants and check that they correspond to
the Linux defines on Linux build hosts.  After this patch, both guest
kernels with PSCI v0.1 support and guest kernels with PSCI v0.2 should
work.

Tested on TC2 for 32-bit and APM Mustang for 64-bit (aarch64 guest
only).  Both cases tested with 3.14 and linus/master and verified I
could bring up 2 cpus with both guest kernels.  Also tested 32-bit with
a 3.14 host kernel with only PSCI v0.1 and both guests booted here as
well.

Cc: qemu-stable@nongnu.org
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 863714ba6c)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:05 -05:00
Christoffer Dall
2f6d5e1c9c target-arm: Rename QEMU PSCI v0.1 definitions
The function IDs for PSCI v0.1 are exported by KVM and defined as
KVM_PSCI_FN_<something>.  To build using these defines in non-KVM code,
QEMU defines these IDs locally and check their correctness against the
KVM headers when those are available.

However, the naming scheme used for QEMU (almost) clashes with the PSCI
v0.2 definitions from Linux so to avoid unfortunate naming when we
introduce local PSCI v0.2 defines, rename the current local defines with
QEMU_ prependend and clearly identify the PSCI version as v0.1 in the
defines.

Cc: qemu-stable@nongnu.org
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit a65c9c17ce)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:05 -05:00
Peter Maydell
20463dc874 target-arm: Fix return address for A64 BRK instructions
When we take an exception resulting from a BRK instruction,
the architecture requires that the "preferred return address"
reported to the exception handler is the address of the BRK
itself, not the following instruction (like undefined
insns, and in contrast with SVC, HVC and SMC). Follow this,
rather than incorrectly reporting the address of the following
insn.

(We do get this correct for the A32/T32 BKPT insns.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-stable@nongnu.org
(cherry picked from commit 229a138d74)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:05 -05:00
zhanghailiang
2a575c450e virtio-blk: fix reference a pointer which might be freed
The function virtio_blk_handle_request may free the memory pointed to by req,
so do not access members of req after calling this function.

Cc: qemu-stable@nongnu.org
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 1bdb176ac5)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:04 -05:00
Michael S. Tsirkin
1ad9dcec47 acpi: align RSDP
RSDP should be aligned at a 16-byte boundary.
This happens to be the case by chance at the moment; fix up the acpi build
to make it robust.

Cc: qemu-stable@nongnu.org
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
(cherry picked from commit d67aadccfa)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:04 -05:00
Hu Tao
ba1bc81991 numa: show hex number in error message for consistency and prefix them with 0x
The error messages before and after patch are:

before:
qemu-system-x86_64: total memory for NUMA nodes (134217728) should equal RAM size (20000000)

after:
qemu-system-x86_64: total memory for NUMA nodes (0x8000000) should equal RAM size (0x20000000)

Cc: qemu-stable@nongnu.org
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit c68233aee8)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:04 -05:00
Michael S. Tsirkin
948574e0d2 pc-dimm: fix up error message
- int should be printed using %d
- print actual wrong value for property

Cc: qemu-stable@nongnu.org
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 988eba0f68)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:04 -05:00
Hu Tao
044af98ea8 pc-dimm: validate node property
If the user specifies a node number for a pc-dimm device that exceeds the
available numa nodes in the emulated system, the device will report an invalid
_PXM to OSPM. Fix this by checking the node property value.
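
A sketch of the realize-time check; the error text and exact location are
approximate:

    if (dimm->node >= nb_numa_nodes) {
        error_setg(errp, "'node' property value %" PRIu32
                   " exceeds the number of numa nodes %d",
                   dimm->node, nb_numa_nodes);
        return;
    }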

Cc: qemu-stable@nongnu.org
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit cfe0ffd027)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:04 -05:00
Hu Tao
7c68c5402a hw:i386: typo fix: MEMORY_HOPTLUG_DEVICE -> MEMORY_HOTPLUG_DEVICE
Cc: qemu-stable@nongnu.org
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 41d2f71376)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-09-08 11:23:04 -05:00
Michael Tokarev
bd4740621c ide: only constrain read/write requests to drive size, not other types
Commit 58ac321135 introduced a check to ide dma processing which
constrains all requests to the drive size.  However, apparently, some
valid requests (like TRIM) do not fit in this constraint and
fail in 2.1.  So check the range only for reads and writes.
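
A sketch of the narrowed check in the DMA callback; the condition and
error handling are reconstructed from memory and should be treated as
approximate:

    /* only reads and writes are bounded by the drive size; TRIM and
     * other commands must not be rejected by this check */
    if ((s->dma_cmd == IDE_DMA_READ || s->dma_cmd == IDE_DMA_WRITE) &&
        !ide_sect_range_ok(s, sector_num, n)) {
        ide_dma_error(s);   /* error handling shown schematically */
        return;
    }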

Cc: qemu-stable@nongnu.org
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit d66168ed68)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-08-26 16:58:56 -05:00
Michael Tokarev
e22d5dc073 l2tpv3 (configure): it is linux-specific
Some non-linux systems, for example a system with a
FreeBSD kernel and glibc, may declare struct mmsghdr
(in glibc) but may not have the linux-specific header
file linux/ip.h.  The actual implementation in qemu
includes this linux-specific header file unconditionally,
so compilation fails if it is not present.  Include
this header in the configure test too.

Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit bff6cb7296)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-08-26 16:57:28 -05:00
Alex Williamson
dfd4808222 vfio: Fix MSI-X vector expansion
When new MSI-X vectors are enabled we need to disable MSI-X and
re-enable it with the correct number of vectors.  That means we need
to reprogram the eventfd triggers for each vector.  Prior to f4d45d47
vector->use tracked whether a vector was masked or unmasked and we
could always pick the KVM path when available for unmasked vectors.
Now vfio doesn't track mask state itself and vector->use and virq
remains configured even for masked vectors.  Therefore we need to ask
the MSI-X code whether a vector is masked in order to select the
correct signaling path.  As noted in the comment, MSI relies on
hardware to handle masking.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-stable@nongnu.org # QEMU 2.1
(cherry picked from commit c048be5cc9)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-08-26 16:48:12 -05:00
Stefan Hajnoczi
5f26e63b17 qdev-monitor: include QOM properties in -device FOO, help output
Update -device FOO,help to include QOM properties in addition to qdev
properties.  Devices are gradually adding more QOM properties that are
not reflected as qdev properties.

It is important to report all device properties since management tools
like libvirt use this information (and device-list-properties QMP) to
detect the presence of QEMU features.

This patch reuses the device-list-properties QMP machinery to avoid code
duplication.

Reported-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Tested-by: Cole Robinson <crobinso@redhat.com>
(cherry picked from commit ef523587da)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-08-26 16:46:19 -05:00
Stefan Hajnoczi
42f7a13178 qmp: hide "hotplugged" device property from device-list-properties
The "hotplugged" device property was not reported before commit
f4eb32b590 ("qmp: show QOM properties in
device-list-properties").  Fix this difference.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
(cherry picked from commit 4115dd6527)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-08-26 16:46:01 -05:00
5449 changed files with 452163 additions and 1097077 deletions


@@ -1,2 +0,0 @@
((c-mode . ((c-file-style . "stroustrup")
(indent-tabs-mode . nil))))


@@ -1,15 +0,0 @@
# http://editorconfig.org
root = true
[*]
end_of_line = lf
insert_final_newline = true
charset = utf-8
[Makefile*]
indent_style = tab
indent_size = 8
[*.{c,h}]
indent_style = space
indent_size = 4


@@ -1,8 +0,0 @@
# GDB may have ./.gdbinit loading disabled by default. In that case you can
# follow the instructions it prints. They boil down to adding the following to
# your home directory's ~/.gdbinit file:
#
# add-auto-load-safe-path /path/to/qemu/.gdbinit
# Load QEMU-specific sub-commands and settings
source scripts/qemu-gdb.py

.gitignore

@@ -5,106 +5,42 @@
/config-target.*
/config.status
/config-temp
/trace-events-all
/trace/generated-tracers.h
/trace/generated-tracers.c
/trace/generated-tracers-dtrace.h
/trace/generated-tracers.dtrace
/trace/generated-events.h
/trace/generated-events.c
/trace/generated-helpers-wrappers.h
/trace/generated-helpers.h
/trace/generated-helpers.c
/trace/generated-tcg-tracers.h
/ui/shader/texture-blit-frag.h
/ui/shader/texture-blit-vert.h
/ui/shader/texture-blit-flip-vert.h
/ui/input-keymap-*.c
/trace/generated-ust-provider.h
/trace/generated-ust.c
/libcacard/trace/generated-tracers.c
*-timestamp
/*-softmmu
/*-darwin-user
/*-linux-user
/*-bsd-user
/ivshmem-client
/ivshmem-server
/libdis*
/libuser
/linux-headers/asm
/qga/qapi-generated
/qapi-gen-timestamp
/qapi/qapi-builtin-types.[ch]
/qapi/qapi-builtin-visit.[ch]
/qapi/qapi-commands-block-core.[ch]
/qapi/qapi-commands-block.[ch]
/qapi/qapi-commands-char.[ch]
/qapi/qapi-commands-common.[ch]
/qapi/qapi-commands-crypto.[ch]
/qapi/qapi-commands-introspect.[ch]
/qapi/qapi-commands-migration.[ch]
/qapi/qapi-commands-misc.[ch]
/qapi/qapi-commands-net.[ch]
/qapi/qapi-commands-rocker.[ch]
/qapi/qapi-commands-run-state.[ch]
/qapi/qapi-commands-sockets.[ch]
/qapi/qapi-commands-tpm.[ch]
/qapi/qapi-commands-trace.[ch]
/qapi/qapi-commands-transaction.[ch]
/qapi/qapi-commands-ui.[ch]
/qapi/qapi-commands.[ch]
/qapi/qapi-events-block-core.[ch]
/qapi/qapi-events-block.[ch]
/qapi/qapi-events-char.[ch]
/qapi/qapi-events-common.[ch]
/qapi/qapi-events-crypto.[ch]
/qapi/qapi-events-introspect.[ch]
/qapi/qapi-events-migration.[ch]
/qapi/qapi-events-misc.[ch]
/qapi/qapi-events-net.[ch]
/qapi/qapi-events-rocker.[ch]
/qapi/qapi-events-run-state.[ch]
/qapi/qapi-events-sockets.[ch]
/qapi/qapi-events-tpm.[ch]
/qapi/qapi-events-trace.[ch]
/qapi/qapi-events-transaction.[ch]
/qapi/qapi-events-ui.[ch]
/qapi/qapi-events.[ch]
/qapi/qapi-introspect.[ch]
/qapi/qapi-types-block-core.[ch]
/qapi/qapi-types-block.[ch]
/qapi/qapi-types-char.[ch]
/qapi/qapi-types-common.[ch]
/qapi/qapi-types-crypto.[ch]
/qapi/qapi-types-introspect.[ch]
/qapi/qapi-types-migration.[ch]
/qapi/qapi-types-misc.[ch]
/qapi/qapi-types-net.[ch]
/qapi/qapi-types-rocker.[ch]
/qapi/qapi-types-run-state.[ch]
/qapi/qapi-types-sockets.[ch]
/qapi/qapi-types-tpm.[ch]
/qapi/qapi-types-trace.[ch]
/qapi/qapi-types-transaction.[ch]
/qapi/qapi-types-ui.[ch]
/qapi/qapi-types.[ch]
/qapi/qapi-visit-block-core.[ch]
/qapi/qapi-visit-block.[ch]
/qapi/qapi-visit-char.[ch]
/qapi/qapi-visit-common.[ch]
/qapi/qapi-visit-crypto.[ch]
/qapi/qapi-visit-introspect.[ch]
/qapi/qapi-visit-migration.[ch]
/qapi/qapi-visit-misc.[ch]
/qapi/qapi-visit-net.[ch]
/qapi/qapi-visit-rocker.[ch]
/qapi/qapi-visit-run-state.[ch]
/qapi/qapi-visit-sockets.[ch]
/qapi/qapi-visit-tpm.[ch]
/qapi/qapi-visit-trace.[ch]
/qapi/qapi-visit-transaction.[ch]
/qapi/qapi-visit-ui.[ch]
/qapi/qapi-visit.[ch]
/qapi/qapi-doc.texi
/qapi-generated
/qapi-types.[ch]
/qapi-visit.[ch]
/qapi-event.[ch]
/qmp-commands.h
/qmp-marshal.c
/qemu-doc.html
/qemu-tech.html
/qemu-doc.info
/qemu-doc.txt
/qemu-tech.info
/qemu.1
/qemu.pod
/qemu-img.1
/qemu-img.pod
/qemu-img
/qemu-nbd
/qemu-nbd.8
/qemu-nbd.pod
/qemu-options.def
/qemu-options.texi
/qemu-img-cmds.texi
@@ -112,23 +48,17 @@
/qemu-io
/qemu-ga
/qemu-bridge-helper
/qemu-keymap
/qemu-monitor.texi
/qemu-monitor-info.texi
/qemu-version.h
/qemu-version.h.tmp
/module_block.h
/scsi/qemu-pr-helper
/vhost-user-scsi
/vhost-user-blk
/qmp-commands.txt
/vscclient
/fsdev/virtfs-proxy-helper
*.tmp
*.[1-9]
/fsdev/virtfs-proxy-helper.1
/fsdev/virtfs-proxy-helper.pod
*.a
*.aux
*.cp
*.dvi
*.exe
*.msi
*.dll
*.so
*.mo
@@ -136,7 +66,6 @@
*.ky
*.log
*.pdf
*.pod
*.cps
*.fns
*.kys
@@ -148,6 +77,10 @@
*.d
!/scripts/qemu-guest-agent/fsfreeze-hook.d
*.o
*.lo
*.la
*.pc
.libs
.sdk
*.gcda
*.gcno
@@ -157,10 +90,6 @@
/pc-bios/optionrom/linuxboot.bin
/pc-bios/optionrom/linuxboot.raw
/pc-bios/optionrom/linuxboot.img
/pc-bios/optionrom/linuxboot_dma.asm
/pc-bios/optionrom/linuxboot_dma.bin
/pc-bios/optionrom/linuxboot_dma.raw
/pc-bios/optionrom/linuxboot_dma.img
/pc-bios/optionrom/multiboot.asm
/pc-bios/optionrom/multiboot.bin
/pc-bios/optionrom/multiboot.raw
@@ -171,38 +100,8 @@
/pc-bios/optionrom/kvmvapic.img
/pc-bios/s390-ccw/s390-ccw.elf
/pc-bios/s390-ccw/s390-ccw.img
/docs/interop/qemu-ga-qapi.texi
/docs/interop/qemu-ga-ref.html
/docs/interop/qemu-ga-ref.info*
/docs/interop/qemu-ga-ref.txt
/docs/interop/qemu-qmp-qapi.texi
/docs/interop/qemu-qmp-ref.html
/docs/interop/qemu-qmp-ref.info*
/docs/interop/qemu-qmp-ref.txt
/docs/version.texi
*.tps
.stgit-*
.git-submodule-status
cscope.*
tags
TAGS
docker-src.*
*~
*.ast_raw
*.depend_raw
trace.h
trace.c
trace-ust.h
trace-ust.h
trace-dtrace.h
trace-dtrace.dtrace
trace-root.h
trace-root.c
trace-ust-root.h
trace-ust-root.h
trace-ust-all.h
trace-ust-all.c
trace-dtrace-root.h
trace-dtrace-root.dtrace
trace-ust-all.h
trace-ust-all.c

.gitmodules

@@ -22,27 +22,12 @@
[submodule "roms/sgabios"]
path = roms/sgabios
url = git://git.qemu-project.org/sgabios.git
[submodule "pixman"]
path = pixman
url = git://anongit.freedesktop.org/pixman
[submodule "dtc"]
path = dtc
url = git://git.qemu-project.org/dtc.git
[submodule "roms/u-boot"]
path = roms/u-boot
url = git://git.qemu-project.org/u-boot.git
[submodule "roms/skiboot"]
path = roms/skiboot
url = git://git.qemu.org/skiboot.git
[submodule "roms/QemuMacDrivers"]
path = roms/QemuMacDrivers
url = git://git.qemu.org/QemuMacDrivers.git
[submodule "ui/keycodemapdb"]
path = ui/keycodemapdb
url = git://git.qemu.org/keycodemapdb.git
[submodule "capstone"]
path = capstone
url = git://git.qemu.org/capstone.git
[submodule "roms/seabios-hppa"]
path = roms/seabios-hppa
url = git://github.com/hdeller/seabios-hppa.git
[submodule "roms/u-boot-sam460ex"]
path = roms/u-boot-sam460ex
url = git://github.com/zbalaton/u-boot-sam460ex


@@ -1,51 +0,0 @@
#
# Common git-publish profiles that can be used to send patches to QEMU upstream.
#
# See https://github.com/stefanha/git-publish for more information
#
[gitpublishprofile "default"]
base = master
to = qemu-devel@nongnu.org
cccmd = scripts/get_maintainer.pl --noroles --norolestats --nogit --nogit-fallback 2>/dev/null
[gitpublishprofile "rfc"]
base = master
prefix = RFC PATCH
to = qemu-devel@nongnu.org
cccmd = scripts/get_maintainer.pl --noroles --norolestats --nogit --nogit-fallback 2>/dev/null
[gitpublishprofile "stable"]
base = master
to = qemu-devel@nongnu.org
cc = qemu-stable@nongnu.org
cccmd = scripts/get_maintainer.pl --noroles --norolestats --nogit --nogit-fallback 2>/dev/null
[gitpublishprofile "trivial"]
base = master
to = qemu-devel@nongnu.org
cc = qemu-trivial@nongnu.org
cccmd = scripts/get_maintainer.pl --noroles --norolestats --nogit --nogit-fallback 2>/dev/null
[gitpublishprofile "block"]
base = master
to = qemu-devel@nongnu.org
cc = qemu-block@nongnu.org
cccmd = scripts/get_maintainer.pl --noroles --norolestats --nogit --nogit-fallback 2>/dev/null
[gitpublishprofile "arm"]
base = master
to = qemu-devel@nongnu.org
cc = qemu-arm@nongnu.org
cccmd = scripts/get_maintainer.pl --noroles --norolestats --nogit --nogit-fallback 2>/dev/null
[gitpublishprofile "s390"]
base = master
to = qemu-devel@nongnu.org
cc = qemu-s390@nongnu.org
cccmd = scripts/get_maintainer.pl --noroles --norolestats --nogit --nogit-fallback 2>/dev/null
[gitpublishprofile "ppc"]
base = master
to = qemu-devel@nongnu.org
cc = qemu-ppc@nongnu.org
cccmd = scripts/get_maintainer.pl --noroles --norolestats --nogit --nogit-fallback 2>/dev/null


@@ -8,17 +8,10 @@ Aurelien Jarno <aurelien@aurel32.net> aurel32 <aurel32@c046a42c-6fe2-441c-8c8c-7
Blue Swirl <blauwirbel@gmail.com> blueswir1 <blueswir1@c046a42c-6fe2-441c-8c8c-71466251a162>
Edgar E. Iglesias <edgar.iglesias@gmail.com> edgar_igl <edgar_igl@c046a42c-6fe2-441c-8c8c-71466251a162>
Fabrice Bellard <fabrice@bellard.org> bellard <bellard@c046a42c-6fe2-441c-8c8c-71466251a162>
James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
Jocelyn Mayer <l_indien@magic.fr> j_mayer <j_mayer@c046a42c-6fe2-441c-8c8c-71466251a162>
Paul Brook <paul@codesourcery.com> pbrook <pbrook@c046a42c-6fe2-441c-8c8c-71466251a162>
Paul Burton <paul.burton@mips.com> <paul.burton@imgtec.com>
Paul Burton <paul.burton@mips.com> <paul@archlinuxmips.org>
Thiemo Seufer <ths@networkno.de> ths <ths@c046a42c-6fe2-441c-8c8c-71466251a162>
malc <av1474@comtv.ru> malc <malc@c046a42c-6fe2-441c-8c8c-71466251a162>
# There is also a:
# (no author) <(no author)@c046a42c-6fe2-441c-8c8c-71466251a162>
# for the cvs2svn initialization commit e63c3dc74bf.
#
# Also list preferred name forms where people have changed their
# git author config
Daniel P. Berrangé <berrange@redhat.com>


@@ -1,47 +0,0 @@
language: c
git:
submodules: false
env:
global:
- LC_ALL=C
matrix:
- IMAGE=debian-amd64
TARGET_LIST=x86_64-softmmu,x86_64-linux-user
- IMAGE=debian-win32-cross
TARGET_LIST=arm-softmmu,i386-softmmu,lm32-softmmu
- IMAGE=debian-win64-cross
TARGET_LIST=aarch64-softmmu,sparc64-softmmu,x86_64-softmmu
- IMAGE=debian-armel-cross
TARGET_LIST=arm-softmmu,arm-linux-user,armeb-linux-user
- IMAGE=debian-armhf-cross
TARGET_LIST=arm-softmmu,arm-linux-user,armeb-linux-user
- IMAGE=debian-arm64-cross
TARGET_LIST=aarch64-softmmu,aarch64-linux-user
- IMAGE=debian-s390x-cross
TARGET_LIST=s390x-softmmu,s390x-linux-user
- IMAGE=debian-mips-cross
TARGET_LIST=mips-softmmu,mipsel-linux-user
- IMAGE=debian-mips64el-cross
TARGET_LIST=mips64el-softmmu,mips64el-linux-user
- IMAGE=debian-ppc64el-cross
TARGET_LIST=ppc64-softmmu,ppc64-linux-user,ppc64abi32-linux-user
build:
pre_ci:
- make docker-image-${IMAGE} V=1
pre_ci_boot:
image_name: qemu
image_tag: ${IMAGE}
pull: false
options: "-e HOME=/root"
ci:
- unset CC
# some targets require newer up to date packages, for example TARGET_LIST matching
# aarch64*-softmmu|arm*-softmmu|ppc*-softmmu|microblaze*-softmmu|mips64el-softmmu)
# see the configure script:
# error_exit "DTC (libfdt) version >= 1.4.2 not present. Your options:"
# " (1) Preferred: Install the DTC (libfdt) devel package"
# " (2) Fetch the DTC submodule, using:"
# " git submodule update --init dtc"
- dpkg --compare-versions `dpkg-query --showformat='${Version}' --show libfdt-dev` ge 1.4.2 || git submodule update --init dtc
- ./configure ${QEMU_CONFIGURE_OPTS} --target-list=${TARGET_LIST}
- make -j$(($(getconf _NPROCESSORS_ONLN) + 1))


@@ -1,203 +1,81 @@
sudo: false
language: c
python:
- "2.6"
- "2.4"
compiler:
- gcc
cache: ccache
addons:
apt:
packages:
# Build dependencies
- libaio-dev
- libattr1-dev
- libbrlapi-dev
- libcap-ng-dev
- libgcc-4.8-dev
- libgnutls-dev
- libgtk-3-dev
- libiscsi-dev
- liblttng-ust-dev
- libncurses5-dev
- libnfs-dev
- libnss3-dev
- libpixman-1-dev
- libpng12-dev
- librados-dev
- libsdl1.2-dev
- libseccomp-dev
- libspice-protocol-dev
- libspice-server-dev
- libssh2-1-dev
- liburcu-dev
- libusb-1.0-0-dev
- libvte-2.90-dev
- sparse
- uuid-dev
# The channel name "irc.oftc.net#qemu" is encrypted against qemu/qemu
# to prevent IRC notifications from forks. This was created using:
# $ travis encrypt -r "qemu/qemu" "irc.oftc.net#qemu"
- clang
notifications:
irc:
channels:
- secure: "F7GDRgjuOo5IUyRLqSkmDL7kvdU4UcH3Lm/W2db2JnDHTGCqgEdaYEYKciyCLZ57vOTsTsOgesN8iUT7hNHBd1KWKjZe9KDTZWppWRYVwAwQMzVeSOsbbU4tRoJ6Pp+3qhH1Z0eGYR9ZgKYAoTumDFgSAYRp4IscKS8jkoedOqM="
- "irc.oftc.net#qemu"
on_success: change
on_failure: always
env:
global:
- TEST_CMD="make check"
- MAKEFLAGS="-j3"
- EXTRA_CONFIG=""
# Development packages, EXTRA_PKGS saved for additional builds
- CORE_PKGS="libusb-1.0-0-dev libiscsi-dev librados-dev libncurses5-dev"
- NET_PKGS="libseccomp-dev libgnutls-dev libssh2-1-dev libspice-server-dev libspice-protocol-dev libnss3-dev"
- GUI_PKGS="libgtk-3-dev libvte-2.90-dev libsdl1.2-dev libpng12-dev libpixman-1-dev"
- EXTRA_PKGS=""
matrix:
- CONFIG=""
- CONFIG="--enable-debug --enable-debug-tcg --enable-trace-backends=log"
- CONFIG="--disable-linux-aio --disable-cap-ng --disable-attr --disable-brlapi --disable-uuid --disable-libusb"
- CONFIG="--enable-modules --disable-linux-user"
- CONFIG="--with-coroutine=ucontext --disable-linux-user"
- CONFIG="--with-coroutine=sigaltstack --disable-linux-user"
git:
# we want to do this ourselves
submodules: false
- TARGETS=alpha-softmmu,alpha-linux-user
- TARGETS=arm-softmmu,arm-linux-user
- TARGETS=aarch64-softmmu,aarch64-linux-user
- TARGETS=cris-softmmu
- TARGETS=i386-softmmu,x86_64-softmmu
- TARGETS=lm32-softmmu
- TARGETS=m68k-softmmu
- TARGETS=microblaze-softmmu,microblazeel-softmmu
- TARGETS=mips-softmmu,mips64-softmmu,mips64el-softmmu,mipsel-softmmu
- TARGETS=moxie-softmmu
- TARGETS=or32-softmmu,
- TARGETS=ppc-softmmu,ppc64-softmmu,ppcemb-softmmu
- TARGETS=s390x-softmmu
- TARGETS=sh4-softmmu,sh4eb-softmmu
- TARGETS=sparc-softmmu,sparc64-softmmu
- TARGETS=unicore32-softmmu
- TARGETS=xtensa-softmmu,xtensaeb-softmmu
before_install:
- if [ "$TRAVIS_OS_NAME" == "osx" ]; then brew update ; fi
- if [ "$TRAVIS_OS_NAME" == "osx" ]; then brew install libffi gettext glib pixman ; fi
- wget -O - http://people.linaro.org/~alex.bennee/qemu-submodule-git-seed.tar.xz | tar -xvJ
- git submodule update --init --recursive
before_script:
- ./configure ${CONFIG}
script:
- make ${MAKEFLAGS} && ${TEST_CMD}
- sudo apt-get update -qq
- sudo apt-get install -qq ${CORE_PKGS} ${NET_PKGS} ${GUI_PKGS} ${EXTRA_PKGS}
script: "./configure --target-list=${TARGETS} ${EXTRA_CONFIG} && make && ${TEST_CMD}"
matrix:
# We manually include a number of additional builds for non-standard bits
include:
# Test with CLang for compile portability
- env: CONFIG=""
compiler: clang
# gprof/gcov are GCC features
- env: CONFIG="--enable-gprof --enable-gcov --disable-pie"
# Debug related options
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-debug"
compiler: gcc
# We manually include builds which we disable "make check" for
- env: CONFIG="--enable-debug --enable-tcg-interpreter"
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-debug --enable-tcg-interpreter"
compiler: gcc
# All the extra -dev packages
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="libaio-dev libcap-ng-dev libattr1-dev libbrlapi-dev uuid-dev libusb-1.0.0-dev"
compiler: gcc
# Currently configure doesn't force --disable-pie
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-gprof --enable-gcov --disable-pie"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="sparse"
EXTRA_CONFIG="--enable-sparse"
compiler: gcc
# All the trace backends (apart from dtrace)
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-trace-backends=stderr"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-trace-backends=simple"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-trace-backends=ftrace"
TEST_CMD=""
compiler: gcc
- env: CONFIG="--enable-trace-backends=simple"
TEST_CMD=""
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="liblttng-ust-dev liburcu-dev"
EXTRA_CONFIG="--enable-trace-backends=ust"
compiler: gcc
- env: CONFIG="--enable-trace-backends=ftrace"
TEST_CMD=""
compiler: gcc
- env: CONFIG="--enable-trace-backends=ust"
TEST_CMD=""
compiler: gcc
- env: CONFIG="--disable-tcg"
TEST_CMD=""
compiler: gcc
- env: CONFIG=""
os: osx
compiler: clang
# Plain Trusty System Build
- env: CONFIG="--disable-linux-user"
sudo: required
addons:
dist: trusty
compiler: gcc
before_install:
- sudo apt-get update -qq
- sudo apt-get build-dep -qq qemu
- wget -O - http://people.linaro.org/~alex.bennee/qemu-submodule-git-seed.tar.xz | tar -xvJ
- git submodule update --init --recursive
# Plain Trusty Linux User Build
- env: CONFIG="--disable-system"
sudo: required
addons:
dist: trusty
compiler: gcc
before_install:
- sudo apt-get update -qq
- sudo apt-get build-dep -qq qemu
- wget -O - http://people.linaro.org/~alex.bennee/qemu-submodule-git-seed.tar.xz | tar -xvJ
- git submodule update --init --recursive
# Trusty System build with latest stable clang & python 3.0
- sudo: required
addons:
dist: trusty
language: generic
compiler: none
python:
- "3.0"
env:
- COMPILER_NAME=clang CXX=clang++-3.9 CC=clang-3.9
- CONFIG="--disable-linux-user --cc=clang-3.9 --cxx=clang++-3.9 --python=/usr/bin/python3"
before_install:
- wget -nv -O - http://llvm.org/apt/llvm-snapshot.gpg.key | sudo apt-key add -
- sudo apt-add-repository -y 'deb http://llvm.org/apt/trusty llvm-toolchain-trusty-3.9 main'
- sudo apt-get update -qq
- sudo apt-get install -qq -y clang-3.9
- sudo apt-get build-dep -qq qemu
- wget -O - http://people.linaro.org/~alex.bennee/qemu-submodule-git-seed.tar.xz | tar -xvJ
- git submodule update --init --recursive
before_script:
- ./configure ${CONFIG} || cat config.log
# Trusty Linux User build with latest stable clang & python 3.6
- sudo: required
addons:
dist: trusty
language: generic
compiler: none
python:
- "3.6"
env:
- COMPILER_NAME=clang CXX=clang++-3.9 CC=clang-3.9
- CONFIG="--disable-system --cc=clang-3.9 --cxx=clang++-3.9 --python=/usr/bin/python3"
before_install:
- wget -nv -O - http://llvm.org/apt/llvm-snapshot.gpg.key | sudo apt-key add -
- sudo apt-add-repository -y 'deb http://llvm.org/apt/trusty llvm-toolchain-trusty-3.9 main'
- sudo apt-get update -qq
- sudo apt-get install -qq -y clang-3.9
- sudo apt-get build-dep -qq qemu
- wget -O - http://people.linaro.org/~alex.bennee/qemu-submodule-git-seed.tar.xz | tar -xvJ
- git submodule update --init --recursive
before_script:
- ./configure ${CONFIG} || cat config.log
# Using newer GCC with sanitizers
- addons:
apt:
sources:
# PPAs for newer toolchains
- ubuntu-toolchain-r-test
packages:
# Extra toolchains
- gcc-5
- g++-5
# Build dependencies
- libaio-dev
- libattr1-dev
- libbrlapi-dev
- libcap-ng-dev
- libgnutls-dev
- libgtk-3-dev
- libiscsi-dev
- liblttng-ust-dev
- libnfs-dev
- libncurses5-dev
- libnss3-dev
- libpixman-1-dev
- libpng12-dev
- librados-dev
- libsdl1.2-dev
- libseccomp-dev
- libspice-protocol-dev
- libspice-server-dev
- libssh2-1-dev
- liburcu-dev
- libusb-1.0-0-dev
- libvte-2.90-dev
- sparse
- uuid-dev
language: generic
compiler: none
env:
- COMPILER_NAME=gcc CXX=g++-5 CC=gcc-5
- CONFIG="--cc=gcc-5 --cxx=g++-5 --disable-pie --disable-linux-user"
- TEST_CMD=""
before_script:
- ./configure ${CONFIG} --extra-cflags="-g3 -O0 -fsanitize=thread -fuse-ld=gold" || cat config.log


@@ -9,7 +9,7 @@ patches before submitting.
Of course, the most important aspect in any coding style is whitespace.
Crusty old coders who have trouble spotting the glasses on their noses
can tell the difference between a tab and eight spaces from a distance
of approximately fifteen parsecs. Many a flamewar has been fought and
of approximately fifteen parsecs. Many a flamewar have been fought and
lost on this issue.
QEMU indents are four spaces. Tabs are never used, except in Makefiles
@@ -31,11 +31,7 @@ Do not leave whitespace dangling off the ends of lines.
2. Line width
Lines should be 80 characters; try not to make them longer.
Sometimes it is hard to do, especially when dealing with QEMU subsystems
that use long function or symbol names. Even in that case, do not make
lines much longer than 80 characters.
Lines are 80 characters; not longer.
Rationale:
- Some people like to tile their 24" screens with a 6x4 matrix of 80x24
@@ -43,8 +39,6 @@ Rationale:
let them keep doing it.
- Code and especially patches are much more readable if limited to a sane
line length. Eighty is traditional.
- The four-space indentation makes the most common excuse ("But look
at all that white space on the left!") moot.
- It is the QEMU coding style.
3. Naming
@@ -93,68 +87,7 @@ Furthermore, it is the QEMU coding style.
5. Declarations
Mixed declarations (interleaving statements and declarations within
blocks) are generally not allowed; declarations should be at the beginning
of blocks.
Every now and then, an exception is made for declarations inside a
#ifdef or #ifndef block: if the code looks nicer, such declarations can
be placed at the top of the block even if there are statements above.
On the other hand, however, it's often best to move that #ifdef/#ifndef
block to a separate function altogether.
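A minimal sketch of what this rule means in practice (do_something() and use()
are hypothetical helpers, not code from the tree):
void do_something(int n);
void use(int n);
void set_up(int n)
{
    int copy;            /* declared at the beginning of the block */
    do_something(n);
    copy = n;            /* writing 'int copy = n;' here instead would be a
                          * mixed declaration after a statement */
    use(copy);
}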
6. Conditional statements
When comparing a variable for (in)equality with a constant, list the
constant on the right, as in:
if (a == 1) {
/* Reads like: "If a equals 1" */
do_something();
}
Rationale: Yoda conditions (as in 'if (1 == a)') are awkward to read.
Besides, good compilers already warn users when '==' is mis-typed as '=',
even when the constant is on the right.
7. Comment style
We use traditional C-style /* */ comments and avoid // comments.
Rationale: The // form is valid in C99, so this is purely a matter of
consistency of style. The checkpatch script will warn you about this.
8. trace-events style
8.1 0x prefix
In trace-events files, use a '0x' prefix to specify hex numbers, as in:
some_trace(unsigned x, uint64_t y) "x 0x%x y 0x" PRIx64
An exception is made for groups of numbers that are hexadecimal by
convention and separated by the symbols '.', '/', ':', or ' ' (such as
PCI bus id):
another_trace(int cssid, int ssid, int dev_num) "bus id: %x.%x.%04x"
However, you can use '0x' for such groups if you want. In any case, be sure
that it is obvious the numbers are in hex, e.g.:
data_dump(uint8_t c1, uint8_t c2, uint8_t c3) "bytes (in hex): %02x %02x %02x"
Rationale: hex numbers are hard to read in logs when there is no 0x prefix,
especially when (occasionally) the representation doesn't contain any letters,
and especially when they appear on one line with other decimal numbers. Number
groups are allowed to omit '0x' because notations like %x.%x.%x are used for
some things not only in QEMU. Also, dumping raw data bytes with '0x' is less
readable.
8.2 '#' printf flag
Do not use printf flag '#', like '%#x'.
Rationale: there are two ways to add a '0x' prefix to printed number: '0x%...'
and '%#...'. For consistency the only one way should be used. Arguments for
'0x%' are:
- it is more popular
- '%#' omits the 0x for the value 0 which makes output inconsistent
Mixed declarations (interleaving statements and declarations within blocks)
are not allowed; declarations should be at the beginning of blocks. In other
words, the code should not generate warnings if using GCC's
-Wdeclaration-after-statement option.


@@ -1,270 +0,0 @@
A. HISTORY OF THE SOFTWARE
==========================
Python was created in the early 1990s by Guido van Rossum at Stichting
Mathematisch Centrum (CWI, see http://www.cwi.nl) in the Netherlands
as a successor of a language called ABC. Guido remains Python's
principal author, although it includes many contributions from others.
In 1995, Guido continued his work on Python at the Corporation for
National Research Initiatives (CNRI, see http://www.cnri.reston.va.us)
in Reston, Virginia where he released several versions of the
software.
In May 2000, Guido and the Python core development team moved to
BeOpen.com to form the BeOpen PythonLabs team. In October of the same
year, the PythonLabs team moved to Digital Creations (now Zope
Corporation, see http://www.zope.com). In 2001, the Python Software
Foundation (PSF, see http://www.python.org/psf/) was formed, a
non-profit organization created specifically to own Python-related
Intellectual Property. Zope Corporation is a sponsoring member of
the PSF.
All Python releases are Open Source (see http://www.opensource.org for
the Open Source Definition). Historically, most, but not all, Python
releases have also been GPL-compatible; the table below summarizes
the various releases.
    Release         Derived     Year        Owner       GPL-
                    from                                compatible? (1)
    0.9.0 thru 1.2              1991-1995   CWI         yes
    1.3 thru 1.5.2  1.2         1995-1999   CNRI        yes
    1.6             1.5.2       2000        CNRI        no
    2.0             1.6         2000        BeOpen.com  no
    1.6.1           1.6         2001        CNRI        yes (2)
    2.1             2.0+1.6.1   2001        PSF         no
    2.0.1           2.0+1.6.1   2001        PSF         yes
    2.1.1           2.1+2.0.1   2001        PSF         yes
    2.2             2.1.1       2001        PSF         yes
    2.1.2           2.1.1       2002        PSF         yes
    2.1.3           2.1.2       2002        PSF         yes
    2.2.1           2.2         2002        PSF         yes
    2.2.2           2.2.1       2002        PSF         yes
    2.2.3           2.2.2       2003        PSF         yes
    2.3             2.2.2       2002-2003   PSF         yes
    2.3.1           2.3         2002-2003   PSF         yes
    2.3.2           2.3.1       2002-2003   PSF         yes
    2.3.3           2.3.2       2002-2003   PSF         yes
    2.3.4           2.3.3       2004        PSF         yes
    2.3.5           2.3.4       2005        PSF         yes
    2.4             2.3         2004        PSF         yes
    2.4.1           2.4         2005        PSF         yes
    2.4.2           2.4.1       2005        PSF         yes
    2.4.3           2.4.2       2006        PSF         yes
    2.5             2.4         2006        PSF         yes
    2.7             2.6         2010        PSF         yes
Footnotes:
(1) GPL-compatible doesn't mean that we're distributing Python under
the GPL. All Python licenses, unlike the GPL, let you distribute
a modified version without making your changes open source. The
GPL-compatible licenses make it possible to combine Python with
other software that is released under the GPL; the others don't.
(2) According to Richard Stallman, 1.6.1 is not GPL-compatible,
because its license has a choice of law clause. According to
CNRI, however, Stallman's lawyer has told CNRI's lawyer that 1.6.1
is "not incompatible" with the GPL.
Thanks to the many outside volunteers who have worked under Guido's
direction to make these releases possible.
B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING PYTHON
===============================================================
PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
--------------------------------------------
1. This LICENSE AGREEMENT is between the Python Software Foundation
("PSF"), and the Individual or Organization ("Licensee") accessing and
otherwise using this software ("Python") in source or binary form and
its associated documentation.
2. Subject to the terms and conditions of this License Agreement, PSF
hereby grants Licensee a nonexclusive, royalty-free, world-wide
license to reproduce, analyze, test, perform and/or display publicly,
prepare derivative works, distribute, and otherwise use Python
alone or in any derivative version, provided, however, that PSF's
License Agreement and PSF's notice of copyright, i.e., "Copyright (c)
2001, 2002, 2003, 2004, 2005, 2006 Python Software Foundation; All Rights
Reserved" are retained in Python alone or in any derivative version
prepared by Licensee.
3. In the event Licensee prepares a derivative work that is based on
or incorporates Python or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python.
4. PSF is making Python available to Licensee on an "AS IS"
basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.
5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between PSF and
Licensee. This License Agreement does not grant permission to use PSF
trademarks or trade name in a trademark sense to endorse or promote
products or services of Licensee, or any third party.
8. By copying, installing or otherwise using Python, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.
BEOPEN.COM LICENSE AGREEMENT FOR PYTHON 2.0
-------------------------------------------
BEOPEN PYTHON OPEN SOURCE LICENSE AGREEMENT VERSION 1
1. This LICENSE AGREEMENT is between BeOpen.com ("BeOpen"), having an
office at 160 Saratoga Avenue, Santa Clara, CA 95051, and the
Individual or Organization ("Licensee") accessing and otherwise using
this software in source or binary form and its associated
documentation ("the Software").
2. Subject to the terms and conditions of this BeOpen Python License
Agreement, BeOpen hereby grants Licensee a non-exclusive,
royalty-free, world-wide license to reproduce, analyze, test, perform
and/or display publicly, prepare derivative works, distribute, and
otherwise use the Software alone or in any derivative version,
provided, however, that the BeOpen Python License is retained in the
Software, alone or in any derivative version prepared by Licensee.
3. BeOpen is making the Software available to Licensee on an "AS IS"
basis. BEOPEN MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, BEOPEN MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE SOFTWARE WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.
4. BEOPEN SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
AS A RESULT OF USING, MODIFYING OR DISTRIBUTING THE SOFTWARE, OR ANY
DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
5. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
6. This License Agreement shall be governed by and interpreted in all
respects by the law of the State of California, excluding conflict of
law provisions. Nothing in this License Agreement shall be deemed to
create any relationship of agency, partnership, or joint venture
between BeOpen and Licensee. This License Agreement does not grant
permission to use BeOpen trademarks or trade names in a trademark
sense to endorse or promote products or services of Licensee, or any
third party. As an exception, the "BeOpen Python" logos available at
http://www.pythonlabs.com/logos.html may be used according to the
permissions granted on that web page.
7. By copying, installing or otherwise using the software, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.
CNRI LICENSE AGREEMENT FOR PYTHON 1.6.1
---------------------------------------
1. This LICENSE AGREEMENT is between the Corporation for National
Research Initiatives, having an office at 1895 Preston White Drive,
Reston, VA 20191 ("CNRI"), and the Individual or Organization
("Licensee") accessing and otherwise using Python 1.6.1 software in
source or binary form and its associated documentation.
2. Subject to the terms and conditions of this License Agreement, CNRI
hereby grants Licensee a nonexclusive, royalty-free, world-wide
license to reproduce, analyze, test, perform and/or display publicly,
prepare derivative works, distribute, and otherwise use Python 1.6.1
alone or in any derivative version, provided, however, that CNRI's
License Agreement and CNRI's notice of copyright, i.e., "Copyright (c)
1995-2001 Corporation for National Research Initiatives; All Rights
Reserved" are retained in Python 1.6.1 alone or in any derivative
version prepared by Licensee. Alternately, in lieu of CNRI's License
Agreement, Licensee may substitute the following text (omitting the
quotes): "Python 1.6.1 is made available subject to the terms and
conditions in CNRI's License Agreement. This Agreement together with
Python 1.6.1 may be located on the Internet using the following
unique, persistent identifier (known as a handle): 1895.22/1013. This
Agreement may also be obtained from a proxy server on the Internet
using the following URL: http://hdl.handle.net/1895.22/1013".
3. In the event Licensee prepares a derivative work that is based on
or incorporates Python 1.6.1 or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python 1.6.1.
4. CNRI is making Python 1.6.1 available to Licensee on an "AS IS"
basis. CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6.1 WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.
5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
1.6.1 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 1.6.1,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
7. This License Agreement shall be governed by the federal
intellectual property law of the United States, including without
limitation the federal copyright law, and, to the extent such
U.S. federal law does not apply, by the law of the Commonwealth of
Virginia, excluding Virginia's conflict of law provisions.
Notwithstanding the foregoing, with regard to derivative works based
on Python 1.6.1 that incorporate non-separable material that was
previously distributed under the GNU General Public License (GPL), the
law of the Commonwealth of Virginia shall govern this License
Agreement only as to issues arising under or with respect to
Paragraphs 4, 5, and 7 of this License Agreement. Nothing in this
License Agreement shall be deemed to create any relationship of
agency, partnership, or joint venture between CNRI and Licensee. This
License Agreement does not grant permission to use CNRI trademarks or
trade name in a trademark sense to endorse or promote products or
services of Licensee, or any third party.
8. By clicking on the "ACCEPT" button where indicated, or by copying,
installing or otherwise using Python 1.6.1, Licensee agrees to be
bound by the terms and conditions of this License Agreement.
ACCEPT
CWI LICENSE AGREEMENT FOR PYTHON 0.9.0 THROUGH 1.2
--------------------------------------------------
Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam,
The Netherlands. All rights reserved.
Permission to use, copy, modify, and distribute this software and its
documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that
both that copyright notice and this permission notice appear in
supporting documentation, and that the name of Stichting Mathematisch
Centrum or CWI not be used in advertising or publicity pertaining to
distribution of the software without specific, written prior
permission.
STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO
THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE
FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.


@@ -1,6 +1,6 @@
This file documents changes for QEMU releases 0.12 and earlier.
For changelog information for later releases, see
https://wiki.qemu.org/ChangeLog or look at the git history for
http://wiki.qemu-project.org/ChangeLog or look at the git history for
more detailed information.

HACKING

@@ -1,28 +1,10 @@
1. Preprocessor
1.1. Variadic macros
For variadic macros, stick with this C99-like syntax:
#define DPRINTF(fmt, ...) \
do { printf("IRQ: " fmt, ## __VA_ARGS__); } while (0)
1.2. Include directives
Order include directives as follows:
#include "qemu/osdep.h" /* Always first... */
#include <...> /* then system headers... */
#include "..." /* and finally QEMU headers. */
The "qemu/osdep.h" header contains preprocessor macros that affect the behavior
of core system headers like <stdint.h>. It must be the first include so that
core system headers included by external libraries get the preprocessor macros
that QEMU depends on.
Do not include "qemu/osdep.h" from header files since the .c file will have
already included it.
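For instance, a hypothetical block/foo.c following this rule would begin like
this (the header names below are only illustrative):
#include "qemu/osdep.h"           /* always first */
#include <sys/mman.h>             /* then system headers */
#include "qemu/error-report.h"    /* and finally QEMU headers */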
2. C types
It should be common sense to use the right type, but we have collected
@@ -175,62 +157,3 @@ painful. These are:
* you may assume that integers are 2s complement representation
* you may assume that right shift of a signed integer duplicates
the sign bit (ie it is an arithmetic shift, not a logical shift)
In addition, QEMU assumes that the compiler does not use the latitude
given in C99 and C11 to treat aspects of signed '<<' as undefined, as
documented in the GNU Compiler Collection manual starting at version 4.0.
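As a standalone illustration of those assumptions (not code from the tree),
both assertions below hold on every host QEMU supports:
#include <stdint.h>
#include <assert.h>
int main(void)
{
    int32_t x = -8;
    assert((x >> 1) == -4);             /* signed right shift is arithmetic */
    assert((int32_t)0xffffffffu == -1); /* integers are 2s complement */
    return 0;
}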
7. Error handling and reporting
7.1 Reporting errors to the human user
Do not use printf(), fprintf() or monitor_printf(). Instead, use
error_report() or error_vreport() from error-report.h. This ensures the
error is reported in the right place (current monitor or stderr), and in
a uniform format.
Use error_printf() & friends to print additional information.
error_report() prints the current location. In certain common cases
like command line parsing, the current location is tracked
automatically. To manipulate it manually, use the loc_*() from
error-report.h.
7.2 Propagating errors
An error can't always be reported to the user right where it's detected,
but often needs to be propagated up the call chain to a place that can
handle it. This can be done in various ways.
The most flexible one is Error objects. See error.h for usage
information.
Use the simplest suitable method to communicate success / failure to
callers. Stick to common methods: non-negative on success / -1 on
error, non-negative / -errno, non-null / null, or Error objects.
Example: when a function returns a non-null pointer on success, and it
can fail only in one way (as far as the caller is concerned), returning
null on failure is just fine, and certainly simpler and a lot easier on
the eyes than propagating an Error object through an Error ** parameter.
Example: when a function's callers need to report details on failure
only the function really knows, use Error **, and set suitable errors.
Do not report an error to the user when you're also returning an error
for somebody else to handle. Leave the reporting to the place that
consumes the error returned.
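A minimal sketch of the Error ** pattern described above; backend_open() and
frontend_start() are hypothetical, while error_setg() and error_propagate()
are the helpers declared in qapi/error.h:
#include "qemu/osdep.h"
#include "qapi/error.h"
/* Fails in only one way the caller cares about: set an error, return -1. */
static int backend_open(const char *path, Error **errp)
{
    if (!path) {
        error_setg(errp, "no path given");
        return -1;
    }
    return 0;
}
/* Propagates the failure instead of reporting it to the user here. */
static void frontend_start(const char *path, Error **errp)
{
    Error *local_err = NULL;
    if (backend_open(path, &local_err) < 0) {
        error_propagate(errp, local_err);
        return;
    }
}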
7.3 Handling errors
Calling exit() is fine when handling configuration errors during
startup. It's problematic during normal operation. In particular,
monitor commands should never exit().
Do not call exit() or abort() to handle an error that can be triggered
by the guest (e.g., some unimplemented corner case in guest code
translation or device emulation). Guests should not be able to
terminate QEMU.
Note that &error_fatal is just another way to exit(1), and &error_abort
is just another way to abort().


@@ -11,7 +11,7 @@ option) any later version.
As of July 2013, contributions under version 2 of the GNU General Public
License (and no later version) are only accepted for the following files
or directories: bsd-user/, linux-user/, hw/vfio/, hw/xen/xen_pt*.
or directories: bsd-user/, linux-user/, hw/misc/vfio.c, hw/xen/xen_pt*.
3) The Tiny Code Generator (TCG) is released under the BSD license
(see license headers in files).

File diff suppressed because it is too large.

Makefile

File diff suppressed because it is too large.


@@ -1,90 +1,38 @@
#######################################################################
# Common libraries for tools and emulators
stub-obj-y = stubs/ crypto/
util-obj-y = util/ qobject/ qapi/
util-obj-y += qapi/qapi-builtin-types.o
util-obj-y += qapi/qapi-types.o
util-obj-y += qapi/qapi-types-block-core.o
util-obj-y += qapi/qapi-types-block.o
util-obj-y += qapi/qapi-types-char.o
util-obj-y += qapi/qapi-types-common.o
util-obj-y += qapi/qapi-types-crypto.o
util-obj-y += qapi/qapi-types-introspect.o
util-obj-y += qapi/qapi-types-migration.o
util-obj-y += qapi/qapi-types-misc.o
util-obj-y += qapi/qapi-types-net.o
util-obj-y += qapi/qapi-types-rocker.o
util-obj-y += qapi/qapi-types-run-state.o
util-obj-y += qapi/qapi-types-sockets.o
util-obj-y += qapi/qapi-types-tpm.o
util-obj-y += qapi/qapi-types-trace.o
util-obj-y += qapi/qapi-types-transaction.o
util-obj-y += qapi/qapi-types-ui.o
util-obj-y += qapi/qapi-builtin-visit.o
util-obj-y += qapi/qapi-visit.o
util-obj-y += qapi/qapi-visit-block-core.o
util-obj-y += qapi/qapi-visit-block.o
util-obj-y += qapi/qapi-visit-char.o
util-obj-y += qapi/qapi-visit-common.o
util-obj-y += qapi/qapi-visit-crypto.o
util-obj-y += qapi/qapi-visit-introspect.o
util-obj-y += qapi/qapi-visit-migration.o
util-obj-y += qapi/qapi-visit-misc.o
util-obj-y += qapi/qapi-visit-net.o
util-obj-y += qapi/qapi-visit-rocker.o
util-obj-y += qapi/qapi-visit-run-state.o
util-obj-y += qapi/qapi-visit-sockets.o
util-obj-y += qapi/qapi-visit-tpm.o
util-obj-y += qapi/qapi-visit-trace.o
util-obj-y += qapi/qapi-visit-transaction.o
util-obj-y += qapi/qapi-visit-ui.o
util-obj-y += qapi/qapi-events.o
util-obj-y += qapi/qapi-events-block-core.o
util-obj-y += qapi/qapi-events-block.o
util-obj-y += qapi/qapi-events-char.o
util-obj-y += qapi/qapi-events-common.o
util-obj-y += qapi/qapi-events-crypto.o
util-obj-y += qapi/qapi-events-introspect.o
util-obj-y += qapi/qapi-events-migration.o
util-obj-y += qapi/qapi-events-misc.o
util-obj-y += qapi/qapi-events-net.o
util-obj-y += qapi/qapi-events-rocker.o
util-obj-y += qapi/qapi-events-run-state.o
util-obj-y += qapi/qapi-events-sockets.o
util-obj-y += qapi/qapi-events-tpm.o
util-obj-y += qapi/qapi-events-trace.o
util-obj-y += qapi/qapi-events-transaction.o
util-obj-y += qapi/qapi-events-ui.o
util-obj-y += qapi/qapi-introspect.o
chardev-obj-y = chardev/
stub-obj-y = stubs/
util-obj-y = util/ qobject/ qapi/ trace/
#######################################################################
# block-obj-y is code used by both qemu system emulation and qemu-img
block-obj-y += nbd/
block-obj-y += block.o blockjob.o
block-obj-y += block/ scsi/
block-obj-y = async.o thread-pool.o
block-obj-y += nbd.o block.o blockjob.o
block-obj-y += main-loop.o iohandler.o qemu-timer.o
block-obj-$(CONFIG_POSIX) += aio-posix.o
block-obj-$(CONFIG_WIN32) += aio-win32.o
block-obj-y += block/
block-obj-y += qapi-types.o qapi-visit.o qapi-event.o
block-obj-y += qemu-io-cmds.o
block-obj-$(CONFIG_REPLICATION) += replication.o
block-obj-y += qemu-coroutine.o qemu-coroutine-lock.o qemu-coroutine-io.o
block-obj-y += qemu-coroutine-sleep.o
block-obj-y += coroutine-$(CONFIG_COROUTINE_BACKEND).o
block-obj-m = block/
#######################################################################
# crypto-obj-y is code used by both qemu system emulation and qemu-img
crypto-obj-y = crypto/
crypto-aes-obj-y = crypto/
######################################################################
# smartcard
#######################################################################
# qom-obj-y is code used by both qemu system emulation and qemu-img
qom-obj-y = qom/
#######################################################################
# io-obj-y is code used by both qemu system emulation and qemu-img
io-obj-y = io/
libcacard-y += libcacard/cac.o libcacard/event.o
libcacard-y += libcacard/vcard.o libcacard/vreader.o
libcacard-y += libcacard/vcard_emul_nss.o
libcacard-y += libcacard/vcard_emul_type.o
libcacard-y += libcacard/card_7816.o
libcacard-y += libcacard/vcardt.o
libcacard/vcard_emul_nss.o-cflags := $(NSS_CFLAGS)
libcacard/vcard_emul_nss.o-libs := $(NSS_LIBS)
######################################################################
# Target independent part of system emulation. The long term path is to
@@ -93,7 +41,7 @@ io-obj-y = io/
ifeq ($(CONFIG_SOFTMMU),y)
common-obj-y = blockdev.o blockdev-nbd.o block/
common-obj-y += bootdevice.o iothread.o
common-obj-y += iothread.o
common-obj-y += net/
common-obj-y += qdev-monitor.o device-hotplug.o
common-obj-$(CONFIG_WIN32) += os-win32.o
@@ -101,62 +49,54 @@ common-obj-$(CONFIG_POSIX) += os-posix.o
common-obj-$(CONFIG_LINUX) += fsdev/
common-obj-y += migration/
common-obj-y += migration.o migration-tcp.o
common-obj-y += vmstate.o
common-obj-y += qemu-file.o
common-obj-$(CONFIG_RDMA) += migration-rdma.o
common-obj-y += qemu-char.o #aio.o
common-obj-y += block-migration.o
common-obj-y += page_cache.o xbzrle.o
common-obj-$(CONFIG_POSIX) += migration-exec.o migration-unix.o migration-fd.o
common-obj-$(CONFIG_SPICE) += spice-qemu-char.o
common-obj-y += audio/
common-obj-m += audio/
common-obj-y += hw/
common-obj-y += replay/
common-obj-y += ui/
common-obj-m += ui/
common-obj-y += bt-host.o bt-vhci.o
bt-host.o-cflags := $(BLUEZ_CFLAGS)
common-obj-y += dma-helpers.o
common-obj-y += vl.o
vl.o-cflags := $(GPROF_CFLAGS) $(SDL_CFLAGS)
common-obj-$(CONFIG_TPM) += tpm.o
common-obj-y += tpm.o
common-obj-$(CONFIG_SLIRP) += slirp/
common-obj-y += backends/
common-obj-y += chardev/
common-obj-$(CONFIG_SECCOMP) += qemu-seccomp.o
qemu-seccomp.o-cflags := $(SECCOMP_CFLAGS)
qemu-seccomp.o-libs := $(SECCOMP_LIBS)
common-obj-$(CONFIG_FDT) += device_tree.o
common-obj-$(CONFIG_SMARTCARD_NSS) += $(libcacard-y)
######################################################################
# qapi
common-obj-y += qapi/qapi-commands.o
common-obj-y += qapi/qapi-commands-block-core.o
common-obj-y += qapi/qapi-commands-block.o
common-obj-y += qapi/qapi-commands-char.o
common-obj-y += qapi/qapi-commands-common.o
common-obj-y += qapi/qapi-commands-crypto.o
common-obj-y += qapi/qapi-commands-introspect.o
common-obj-y += qapi/qapi-commands-migration.o
common-obj-y += qapi/qapi-commands-misc.o
common-obj-y += qapi/qapi-commands-net.o
common-obj-y += qapi/qapi-commands-rocker.o
common-obj-y += qapi/qapi-commands-run-state.o
common-obj-y += qapi/qapi-commands-sockets.o
common-obj-y += qapi/qapi-commands-tpm.o
common-obj-y += qapi/qapi-commands-trace.o
common-obj-y += qapi/qapi-commands-transaction.o
common-obj-y += qapi/qapi-commands-ui.o
common-obj-y += qapi/qapi-introspect.o
common-obj-y += qmp-marshal.o
common-obj-y += qmp.o hmp.o
endif
######################################################################
# some qapi visitors are used by both system and user emulation:
common-obj-y += qapi-visit.o qapi-types.o
#######################################################################
# Target-independent parts used in system and user emulation
common-obj-y += cpus-common.o
common-obj-y += qemu-log.o
common-obj-y += tcg-runtime.o
common-obj-y += hw/
common-obj-y += qom/
common-obj-y += disas/
@@ -164,98 +104,12 @@ common-obj-y += disas/
######################################################################
# Resource file for Windows executables
version-obj-$(CONFIG_WIN32) += $(BUILD_DIR)/version.o
######################################################################
# tracing
util-obj-y += trace/
target-obj-y += trace/
version-lobj-$(CONFIG_WIN32) += $(BUILD_DIR)/version.lo
######################################################################
# guest agent
# FIXME: a few definitions from qapi/qapi-types.o and
# qapi/qapi-visit.o are needed by libqemuutil.a. These should be
# extracted into a QAPI schema module, or perhaps a separate schema.
qga-obj-y = qga/
# FIXME: a few definitions from qapi-types.o/qapi-visit.o are needed
# by libqemuutil.a. These should be moved to a separate .json schema.
qga-obj-y = qga/ qapi-types.o qapi-visit.o
qga-vss-dll-obj-y = qga/
######################################################################
# contrib
ivshmem-client-obj-$(CONFIG_IVSHMEM) = contrib/ivshmem-client/
ivshmem-server-obj-$(CONFIG_IVSHMEM) = contrib/ivshmem-server/
libvhost-user-obj-y = contrib/libvhost-user/
vhost-user-scsi.o-cflags := $(LIBISCSI_CFLAGS)
vhost-user-scsi.o-libs := $(LIBISCSI_LIBS)
vhost-user-scsi-obj-y = contrib/vhost-user-scsi/
vhost-user-blk-obj-y = contrib/vhost-user-blk/
######################################################################
trace-events-subdirs =
trace-events-subdirs += util
trace-events-subdirs += crypto
trace-events-subdirs += io
trace-events-subdirs += migration
trace-events-subdirs += block
trace-events-subdirs += chardev
trace-events-subdirs += hw/block
trace-events-subdirs += hw/block/dataplane
trace-events-subdirs += hw/char
trace-events-subdirs += hw/intc
trace-events-subdirs += hw/net
trace-events-subdirs += hw/rdma
trace-events-subdirs += hw/rdma/vmw
trace-events-subdirs += hw/virtio
trace-events-subdirs += hw/audio
trace-events-subdirs += hw/misc
trace-events-subdirs += hw/misc/macio
trace-events-subdirs += hw/usb
trace-events-subdirs += hw/scsi
trace-events-subdirs += hw/nvram
trace-events-subdirs += hw/display
trace-events-subdirs += hw/input
trace-events-subdirs += hw/timer
trace-events-subdirs += hw/dma
trace-events-subdirs += hw/sparc
trace-events-subdirs += hw/sparc64
trace-events-subdirs += hw/sd
trace-events-subdirs += hw/isa
trace-events-subdirs += hw/mem
trace-events-subdirs += hw/i386
trace-events-subdirs += hw/i386/xen
trace-events-subdirs += hw/9pfs
trace-events-subdirs += hw/ppc
trace-events-subdirs += hw/pci
trace-events-subdirs += hw/pci-host
trace-events-subdirs += hw/s390x
trace-events-subdirs += hw/vfio
trace-events-subdirs += hw/acpi
trace-events-subdirs += hw/arm
trace-events-subdirs += hw/alpha
trace-events-subdirs += hw/hppa
trace-events-subdirs += hw/xen
trace-events-subdirs += hw/ide
trace-events-subdirs += hw/tpm
trace-events-subdirs += ui
trace-events-subdirs += audio
trace-events-subdirs += net
trace-events-subdirs += target/arm
trace-events-subdirs += target/i386
trace-events-subdirs += target/mips
trace-events-subdirs += target/sparc
trace-events-subdirs += target/s390x
trace-events-subdirs += target/ppc
trace-events-subdirs += qom
trace-events-subdirs += linux-user
trace-events-subdirs += qapi
trace-events-subdirs += accel/tcg
trace-events-subdirs += accel/kvm
trace-events-subdirs += nbd
trace-events-subdirs += scsi
trace-events-files = $(SRC_PATH)/trace-events $(trace-events-subdirs:%=$(SRC_PATH)/%/trace-events)
trace-obj-y = trace-root.o
trace-obj-y += $(trace-events-subdirs:%=%/trace.o)
trace-obj-$(CONFIG_TRACE_UST) += trace-ust-all.o
trace-obj-$(CONFIG_TRACE_DTRACE) += trace-dtrace-root.o
trace-obj-$(CONFIG_TRACE_DTRACE) += $(trace-events-subdirs:%=%/trace-dtrace.o)


@@ -1,17 +1,15 @@
# -*- Mode: makefile -*-
BUILD_DIR?=$(CURDIR)/..
include ../config-host.mak
include config-target.mak
include config-devices.mak
include $(SRC_PATH)/rules.mak
$(call set-vpath, $(SRC_PATH):$(BUILD_DIR))
$(call set-vpath, $(SRC_PATH))
ifdef CONFIG_LINUX
QEMU_CFLAGS += -I../linux-headers
endif
QEMU_CFLAGS += -I.. -I$(SRC_PATH)/target/$(TARGET_BASE_ARCH) -DNEED_CPU_H
QEMU_CFLAGS += -I.. -I$(SRC_PATH)/target-$(TARGET_BASE_ARCH) -DNEED_CPU_H
QEMU_CFLAGS+=-I$(SRC_PATH)/include
@@ -22,11 +20,11 @@ QEMU_PROG_BUILD = $(QEMU_PROG)
else
# system emulator name
QEMU_PROG=qemu-system-$(TARGET_NAME)$(EXESUF)
ifneq (,$(findstring -mwindows,$(SDL_LIBS)))
ifneq (,$(findstring -mwindows,$(libs_softmmu)))
# Terminate program name with a 'w' because the linker builds a windows executable.
QEMU_PROGW=qemu-system-$(TARGET_NAME)w$(EXESUF)
$(QEMU_PROG): $(QEMU_PROGW)
$(call quiet-command,$(OBJCOPY) --subsystem console $(QEMU_PROGW) $(QEMU_PROG),"GEN","$(TARGET_DIR)$(QEMU_PROG)")
$(call quiet-command,$(OBJCOPY) --subsystem console $(QEMU_PROGW) $(QEMU_PROG)," GEN $(TARGET_DIR)$(QEMU_PROG)")
QEMU_PROG_BUILD = $(QEMU_PROGW)
else
QEMU_PROG_BUILD = $(QEMU_PROG)
@@ -40,7 +38,7 @@ config-target.h: config-target.h-timestamp
config-target.h-timestamp: config-target.mak
ifdef CONFIG_TRACE_SYSTEMTAP
stap: $(QEMU_PROG).stp-installed $(QEMU_PROG).stp $(QEMU_PROG)-simpletrace.stp
stap: $(QEMU_PROG).stp-installed $(QEMU_PROG).stp
ifdef CONFIG_USER_ONLY
TARGET_TYPE=user
@@ -48,41 +46,27 @@ else
TARGET_TYPE=system
endif
tracetool-y = $(SRC_PATH)/scripts/tracetool.py
tracetool-y += $(shell find $(SRC_PATH)/scripts/tracetool -name "*.py")
$(QEMU_PROG).stp-installed: $(BUILD_DIR)/trace-events-all $(tracetool-y)
$(QEMU_PROG).stp-installed: $(SRC_PATH)/trace-events
$(call quiet-command,$(TRACETOOL) \
--group=all \
--format=stap \
--backends=$(TRACE_BACKENDS) \
--binary=$(bindir)/$(QEMU_PROG) \
--target-name=$(TARGET_NAME) \
--target-type=$(TARGET_TYPE) \
$< > $@,"GEN","$(TARGET_DIR)$(QEMU_PROG).stp-installed")
< $< > $@," GEN $(TARGET_DIR)$(QEMU_PROG).stp-installed")
$(QEMU_PROG).stp: $(BUILD_DIR)/trace-events-all $(tracetool-y)
$(QEMU_PROG).stp: $(SRC_PATH)/trace-events
$(call quiet-command,$(TRACETOOL) \
--group=all \
--format=stap \
--backends=$(TRACE_BACKENDS) \
--binary=$(realpath .)/$(QEMU_PROG) \
--target-name=$(TARGET_NAME) \
--target-type=$(TARGET_TYPE) \
$< > $@,"GEN","$(TARGET_DIR)$(QEMU_PROG).stp")
$(QEMU_PROG)-simpletrace.stp: $(BUILD_DIR)/trace-events-all $(tracetool-y)
$(call quiet-command,$(TRACETOOL) \
--group=all \
--format=simpletrace-stap \
--backends=$(TRACE_BACKENDS) \
--probe-prefix=qemu.$(TARGET_TYPE).$(TARGET_NAME) \
$< > $@,"GEN","$(TARGET_DIR)$(QEMU_PROG)-simpletrace.stp")
< $< > $@," GEN $(TARGET_DIR)$(QEMU_PROG).stp")
else
stap:
endif
.PHONY: stap
all: $(PROGS) stap
@@ -91,28 +75,31 @@ all: $(PROGS) stap
#########################################################
# cpu emulator library
obj-y += exec.o
obj-y += accel/
obj-$(CONFIG_TCG) += tcg/tcg.o tcg/tcg-op.o tcg/tcg-op-vec.o tcg/tcg-op-gvec.o
obj-$(CONFIG_TCG) += tcg/tcg-common.o tcg/optimize.o
obj-$(CONFIG_TCG_INTERPRETER) += tcg/tci.o
obj-y = exec.o translate-all.o cpu-exec.o
obj-y += tcg/tcg.o tcg/optimize.o
obj-$(CONFIG_TCG_INTERPRETER) += tci.o
obj-$(CONFIG_TCG_INTERPRETER) += disas/tci.o
obj-y += fpu/softfloat.o
obj-y += target/$(TARGET_BASE_ARCH)/
obj-y += target-$(TARGET_BASE_ARCH)/
obj-y += disas.o
obj-$(call notempty,$(TARGET_XML_FILES)) += gdbstub-xml.o
obj-$(call lnot,$(CONFIG_KVM)) += kvm-stub.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/decContext.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/decNumber.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/dpd/decimal32.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/dpd/decimal64.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/dpd/decimal128.o
#########################################################
# Linux user emulator target
ifdef CONFIG_LINUX_USER
QEMU_CFLAGS+=-I$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR) \
-I$(SRC_PATH)/linux-user/host/$(ARCH) \
-I$(SRC_PATH)/linux-user
QEMU_CFLAGS+=-I$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR) -I$(SRC_PATH)/linux-user
obj-y += linux-user/
obj-y += gdbstub.o thunk.o
obj-y += gdbstub.o thunk.o user-exec.o
endif #CONFIG_LINUX_USER
@@ -125,7 +112,7 @@ QEMU_CFLAGS+=-I$(SRC_PATH)/bsd-user -I$(SRC_PATH)/bsd-user/$(TARGET_ABI_DIR) \
-I$(SRC_PATH)/bsd-user/$(HOST_VARIANT_DIR)
obj-y += bsd-user/
obj-y += gdbstub.o
obj-y += gdbstub.o user-exec.o
endif #CONFIG_BSD_USER
@@ -135,11 +122,18 @@ ifdef CONFIG_SOFTMMU
obj-y += arch_init.o cpus.o monitor.o gdbstub.o balloon.o ioport.o numa.o
obj-y += qtest.o
obj-y += hw/
obj-y += memory.o
obj-$(CONFIG_FDT) += device_tree.o
obj-$(CONFIG_KVM) += kvm-all.o
obj-y += memory.o savevm.o cputlb.o
obj-y += memory_mapping.o
obj-y += dump.o
obj-y += migration/ram.o
LIBS := $(libs_softmmu) $(LIBS)
LIBS+=$(libs_softmmu)
# xen support
obj-$(CONFIG_XEN) += xen-common.o
obj-$(CONFIG_XEN_I386) += xen-hvm.o xen-mapcache.o
obj-$(call lnot,$(CONFIG_XEN)) += xen-common-stub.o
obj-$(call lnot,$(CONFIG_XEN_I386)) += xen-hvm-stub.o
# Hardware support
ifeq ($(TARGET_NAME), sparc64)
@@ -148,7 +142,7 @@ else
obj-y += hw/$(TARGET_BASE_ARCH)/
endif
GENERATED_FILES += hmp-commands.h hmp-commands-info.h
GENERATED_HEADERS += hmp-commands.h qmp-commands-old.h
endif # CONFIG_SOFTMMU
@@ -158,57 +152,34 @@ endif # CONFIG_SOFTMMU
dummy := $(call unnest-vars,,obj-y)
all-obj-y := $(obj-y)
target-obj-y :=
block-obj-y :=
common-obj-y :=
chardev-obj-y :=
include $(SRC_PATH)/Makefile.objs
dummy := $(call unnest-vars,,target-obj-y)
target-obj-y-save := $(target-obj-y)
dummy := $(call unnest-vars,.., \
block-obj-y \
block-obj-m \
chardev-obj-y \
crypto-obj-y \
crypto-aes-obj-y \
qom-obj-y \
io-obj-y \
common-obj-y \
common-obj-m)
target-obj-y := $(target-obj-y-save)
all-obj-y += $(common-obj-y)
all-obj-y += $(target-obj-y)
all-obj-y += $(qom-obj-y)
all-obj-$(CONFIG_SOFTMMU) += $(block-obj-y) $(chardev-obj-y)
all-obj-$(CONFIG_USER_ONLY) += $(crypto-aes-obj-y)
all-obj-$(CONFIG_SOFTMMU) += $(crypto-obj-y)
all-obj-$(CONFIG_SOFTMMU) += $(io-obj-y)
$(QEMU_PROG_BUILD): config-devices.mak
COMMON_LDADDS = ../libqemuutil.a
all-obj-$(CONFIG_SOFTMMU) += $(block-obj-y)
# build either PROG or PROGW
$(QEMU_PROG_BUILD): $(all-obj-y) $(COMMON_LDADDS)
$(call LINK, $(filter-out %.mak, $^))
ifdef CONFIG_DARWIN
$(call quiet-command,Rez -append $(SRC_PATH)/pc-bios/qemu.rsrc -o $@,"REZ","$(TARGET_DIR)$@")
$(call quiet-command,SetFile -a C $@,"SETFILE","$(TARGET_DIR)$@")
endif
$(QEMU_PROG_BUILD): $(all-obj-y) ../libqemuutil.a ../libqemustub.a
$(call LINK,$^)
gdbstub-xml.c: $(TARGET_XML_FILES) $(SRC_PATH)/scripts/feature_to_c.sh
$(call quiet-command,rm -f $@ && $(SHELL) $(SRC_PATH)/scripts/feature_to_c.sh $@ $(TARGET_XML_FILES),"GEN","$(TARGET_DIR)$@")
$(call quiet-command,rm -f $@ && $(SHELL) $(SRC_PATH)/scripts/feature_to_c.sh $@ $(TARGET_XML_FILES)," GEN $(TARGET_DIR)$@")
hmp-commands.h: $(SRC_PATH)/hmp-commands.hx $(SRC_PATH)/scripts/hxtool
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@,"GEN","$(TARGET_DIR)$@")
hmp-commands.h: $(SRC_PATH)/hmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")
hmp-commands-info.h: $(SRC_PATH)/hmp-commands-info.hx $(SRC_PATH)/scripts/hxtool
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@,"GEN","$(TARGET_DIR)$@")
qmp-commands-old.h: $(SRC_PATH)/qmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")
clean: clean-target
clean:
rm -f *.a *~ $(PROGS)
rm -f $(shell find . -name '*.[od]')
rm -f hmp-commands.h gdbstub-xml.c
rm -f hmp-commands.h qmp-commands-old.h gdbstub-xml.c
ifdef CONFIG_TRACE_SYSTEMTAP
rm -f *.stp
endif
@@ -220,8 +191,7 @@ endif
ifdef CONFIG_TRACE_SYSTEMTAP
$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset"
$(INSTALL_DATA) $(QEMU_PROG).stp-installed "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset/$(QEMU_PROG).stp"
$(INSTALL_DATA) $(QEMU_PROG)-simpletrace.stp "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset/$(QEMU_PROG)-simpletrace.stp"
endif
GENERATED_FILES += config-target.h
Makefile: $(GENERATED_FILES)
GENERATED_HEADERS += config-target.h
Makefile: $(GENERATED_HEADERS)

README

@@ -1,139 +1,3 @@
QEMU README
===========
Read the documentation in qemu-doc.html or on http://wiki.qemu-project.org
QEMU is a generic and open source machine & userspace emulator and
virtualizer.
QEMU is capable of emulating a complete machine in software without any
need for hardware virtualization support. By using dynamic translation,
it achieves very good performance. QEMU can also integrate with the Xen
and KVM hypervisors to provide emulated hardware while allowing the
hypervisor to manage the CPU. With hypervisor support, QEMU can achieve
near native performance for CPUs. When QEMU emulates CPUs directly it is
capable of running operating systems made for one machine (e.g. an ARMv7
board) on a different machine (e.g. an x86_64 PC board).
QEMU is also capable of providing userspace API virtualization for Linux
and BSD kernel interfaces. This allows binaries compiled against one
architecture ABI (e.g. the Linux PPC64 ABI) to be run on a host using a
different architecture ABI (e.g. the Linux x86_64 ABI). This does not
involve any hardware emulation, simply CPU and syscall emulation.
QEMU aims to fit into a variety of use cases. It can be invoked directly
by users wishing to have full control over its behaviour and settings.
It also aims to facilitate integration into higher level management
layers, by providing a stable command line interface and monitor API.
It is commonly invoked indirectly via the libvirt library when using
open source applications such as oVirt, OpenStack and virt-manager.
QEMU as a whole is released under the GNU General Public License,
version 2. For full licensing details, consult the LICENSE file.
Building
========
QEMU is multi-platform software intended to be buildable on all modern
Linux platforms, OS-X, Win32 (via the Mingw64 toolchain) and a variety
of other UNIX targets. The simple steps to build QEMU are:
mkdir build
cd build
../configure
make
Additional information can also be found online via the QEMU website:
https://qemu.org/Hosts/Linux
https://qemu.org/Hosts/Mac
https://qemu.org/Hosts/W32
Submitting patches
==================
The QEMU source code is maintained under the GIT version control system.
git clone git://git.qemu.org/qemu.git
When submitting patches, one common approach is to use 'git
format-patch' and/or 'git send-email' to format & send the mail to the
qemu-devel@nongnu.org mailing list. All patches submitted must contain
a 'Signed-off-by' line from the author. Patches should follow the
guidelines set out in the HACKING and CODING_STYLE files.
Additional information on submitting patches can be found online via
the QEMU website
https://qemu.org/Contribute/SubmitAPatch
https://qemu.org/Contribute/TrivialPatches
The QEMU website is also maintained under source control.
git clone git://git.qemu.org/qemu-web.git
https://www.qemu.org/2017/02/04/the-new-qemu-website-is-up/
A 'git-publish' utility was created to make the above process less
cumbersome, and is highly recommended for making regular contributions,
or even just for sending consecutive patch series revisions. It also
requires a working 'git send-email' setup, and by default doesn't
automate everything, so you may want to go through the above steps
manually for once.
For installation instructions, please go to
https://github.com/stefanha/git-publish
The workflow with 'git-publish' is:
$ git checkout master -b my-feature
$ # work on new commits, add your 'Signed-off-by' lines to each
$ git publish
Your patch series will be sent and tagged as my-feature-v1 if you need to refer
back to it in the future.
Sending v2:
$ git checkout my-feature # same topic branch
$ # making changes to the commits (using 'git rebase', for example)
$ git publish
Your patch series will be sent with a 'v2' tag in the subject and the git tip
will be tagged as my-feature-v2.
Bug reporting
=============
The QEMU project uses Launchpad as its primary upstream bug tracker. Bugs
found when running code built from QEMU git or upstream released sources
should be reported via:
https://bugs.launchpad.net/qemu/
If using QEMU via an operating system vendor's pre-built binary package, it
is preferable to report bugs to the vendor's own bug tracker first. If
the bug is also known to affect the latest upstream code, it can also be
reported via Launchpad.
For additional information on bug reporting consult:
https://qemu.org/Contribute/ReportABug
Contact
=======
The QEMU community can be contacted in a number of ways, with the two
main methods being email and IRC:
- qemu-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/qemu-devel
- #qemu on irc.oftc.net
Information on additional methods of contacting the community can be
found online via the QEMU website:
https://qemu.org/Contribute/StartHere
-- End
- QEMU team


@@ -1 +1 @@
2.11.50
2.1.3


@@ -1,4 +0,0 @@
obj-$(CONFIG_SOFTMMU) += accel.o
obj-y += kvm/
obj-$(CONFIG_TCG) += tcg/
obj-y += stubs/


@@ -1,134 +0,0 @@
/*
* QEMU System Emulator, accelerator interfaces
*
* Copyright (c) 2003-2008 Fabrice Bellard
* Copyright (c) 2014 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "sysemu/accel.h"
#include "hw/boards.h"
#include "sysemu/arch_init.h"
#include "sysemu/sysemu.h"
#include "sysemu/kvm.h"
#include "sysemu/qtest.h"
#include "hw/xen/xen.h"
#include "qom/object.h"
#include "qemu/error-report.h"
#include "qemu/option.h"
static const TypeInfo accel_type = {
.name = TYPE_ACCEL,
.parent = TYPE_OBJECT,
.class_size = sizeof(AccelClass),
.instance_size = sizeof(AccelState),
};
/* Lookup AccelClass from opt_name. Returns NULL if not found */
static AccelClass *accel_find(const char *opt_name)
{
char *class_name = g_strdup_printf(ACCEL_CLASS_NAME("%s"), opt_name);
AccelClass *ac = ACCEL_CLASS(object_class_by_name(class_name));
g_free(class_name);
return ac;
}
static int accel_init_machine(AccelClass *acc, MachineState *ms)
{
ObjectClass *oc = OBJECT_CLASS(acc);
const char *cname = object_class_get_name(oc);
AccelState *accel = ACCEL(object_new(cname));
int ret;
ms->accelerator = accel;
*(acc->allowed) = true;
ret = acc->init_machine(ms);
if (ret < 0) {
ms->accelerator = NULL;
*(acc->allowed) = false;
object_unref(OBJECT(accel));
}
return ret;
}
void configure_accelerator(MachineState *ms)
{
const char *accel, *p;
char buf[10];
int ret;
bool accel_initialised = false;
bool init_failed = false;
AccelClass *acc = NULL;
accel = qemu_opt_get(qemu_get_machine_opts(), "accel");
if (accel == NULL) {
/* Use the default "accelerator", tcg */
accel = "tcg";
}
p = accel;
while (!accel_initialised && *p != '\0') {
if (*p == ':') {
p++;
}
p = get_opt_name(buf, sizeof(buf), p, ':');
acc = accel_find(buf);
if (!acc) {
continue;
}
if (acc->available && !acc->available()) {
printf("%s not supported for this target\n",
acc->name);
continue;
}
ret = accel_init_machine(acc, ms);
if (ret < 0) {
init_failed = true;
error_report("failed to initialize %s: %s",
acc->name, strerror(-ret));
} else {
accel_initialised = true;
}
}
if (!accel_initialised) {
if (!init_failed) {
error_report("-machine accel=%s: No accelerator found", accel);
}
exit(1);
}
if (init_failed) {
error_report("Back to %s accelerator", acc->name);
}
}
void accel_register_compat_props(AccelState *accel)
{
AccelClass *class = ACCEL_GET_CLASS(accel);
register_compat_props_array(class->global_props);
}
static void register_accel_types(void)
{
type_register_static(&accel_type);
}
type_init(register_accel_types);


@@ -1 +0,0 @@
obj-$(CONFIG_KVM) += kvm-all.o

File diff suppressed because it is too large


@@ -1,16 +0,0 @@
# Trace events for debugging and performance instrumentation
# kvm-all.c
kvm_ioctl(int type, void *arg) "type 0x%x, arg %p"
kvm_vm_ioctl(int type, void *arg) "type 0x%x, arg %p"
kvm_vcpu_ioctl(int cpu_index, int type, void *arg) "cpu_index %d, type 0x%x, arg %p"
kvm_run_exit(int cpu_index, uint32_t reason) "cpu_index %d, reason %d"
kvm_device_ioctl(int fd, int type, void *arg) "dev fd %d, type 0x%x, arg %p"
kvm_failed_reg_get(uint64_t id, const char *msg) "Warning: Unable to retrieve ONEREG %" PRIu64 " from KVM: %s"
kvm_failed_reg_set(uint64_t id, const char *msg) "Warning: Unable to set ONEREG %" PRIu64 " to KVM: %s"
kvm_irqchip_commit_routes(void) ""
kvm_irqchip_add_msi_route(char *name, int vector, int virq) "dev %s vector %d virq %d"
kvm_irqchip_update_msi_route(int virq) "Updating MSI route virq=%d"
kvm_irqchip_release_virq(int virq) "virq %d"
kvm_set_user_memory(uint32_t slot, uint32_t flags, uint64_t guest_phys_addr, uint64_t memory_size, uint64_t userspace_addr, int ret) "Slot#%d flags=0x%x gpa=0x%"PRIx64 " size=0x%"PRIx64 " ua=0x%"PRIx64 " ret=%d"


@@ -1,5 +0,0 @@
obj-$(call lnot,$(CONFIG_HAX)) += hax-stub.o
obj-$(call lnot,$(CONFIG_HVF)) += hvf-stub.o
obj-$(call lnot,$(CONFIG_WHPX)) += whpx-stub.o
obj-$(call lnot,$(CONFIG_KVM)) += kvm-stub.o
obj-$(call lnot,$(CONFIG_TCG)) += tcg-stub.o


@@ -1,34 +0,0 @@
/*
* QEMU HAXM support
*
* Copyright (c) 2015, Intel Corporation
*
* Copyright 2016 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "cpu.h"
#include "sysemu/hax.h"
int hax_sync_vcpus(void)
{
return 0;
}
int hax_init_vcpu(CPUState *cpu)
{
return -ENOSYS;
}
int hax_smp_cpu_exec(CPUState *cpu)
{
return -ENOSYS;
}


@@ -1,31 +0,0 @@
/*
* QEMU HVF support
*
* Copyright 2017 Red Hat, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2 or later, as published by the Free Software Foundation,
* and may be copied, distributed, and modified under those terms.
*
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "cpu.h"
#include "sysemu/hvf.h"
int hvf_init_vcpu(CPUState *cpu)
{
return -ENOSYS;
}
int hvf_vcpu_exec(CPUState *cpu)
{
return -ENOSYS;
}
void hvf_vcpu_destroy(CPUState *cpu)
{
}


@@ -1,163 +0,0 @@
/*
* QEMU KVM stub
*
* Copyright Red Hat, Inc. 2010
*
* Author: Paolo Bonzini <pbonzini@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "cpu.h"
#include "sysemu/kvm.h"
#ifndef CONFIG_USER_ONLY
#include "hw/pci/msi.h"
#endif
KVMState *kvm_state;
bool kvm_kernel_irqchip;
bool kvm_async_interrupts_allowed;
bool kvm_eventfds_allowed;
bool kvm_irqfds_allowed;
bool kvm_resamplefds_allowed;
bool kvm_msi_via_irqfd_allowed;
bool kvm_gsi_routing_allowed;
bool kvm_gsi_direct_mapping;
bool kvm_allowed;
bool kvm_readonly_mem_allowed;
bool kvm_ioeventfd_any_length_allowed;
bool kvm_msi_use_devid;
int kvm_destroy_vcpu(CPUState *cpu)
{
return -ENOSYS;
}
int kvm_init_vcpu(CPUState *cpu)
{
return -ENOSYS;
}
void kvm_flush_coalesced_mmio_buffer(void)
{
}
void kvm_cpu_synchronize_state(CPUState *cpu)
{
}
void kvm_cpu_synchronize_post_reset(CPUState *cpu)
{
}
void kvm_cpu_synchronize_post_init(CPUState *cpu)
{
}
int kvm_cpu_exec(CPUState *cpu)
{
abort();
}
bool kvm_has_sync_mmu(void)
{
return false;
}
int kvm_has_many_ioeventfds(void)
{
return 0;
}
int kvm_update_guest_debug(CPUState *cpu, unsigned long reinject_trap)
{
return -ENOSYS;
}
int kvm_insert_breakpoint(CPUState *cpu, target_ulong addr,
target_ulong len, int type)
{
return -EINVAL;
}
int kvm_remove_breakpoint(CPUState *cpu, target_ulong addr,
target_ulong len, int type)
{
return -EINVAL;
}
void kvm_remove_all_breakpoints(CPUState *cpu)
{
}
int kvm_on_sigbus_vcpu(CPUState *cpu, int code, void *addr)
{
return 1;
}
int kvm_on_sigbus(int code, void *addr)
{
return 1;
}
#ifndef CONFIG_USER_ONLY
int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev)
{
return -ENOSYS;
}
void kvm_init_irq_routing(KVMState *s)
{
}
void kvm_irqchip_release_virq(KVMState *s, int virq)
{
}
int kvm_irqchip_update_msi_route(KVMState *s, int virq, MSIMessage msg,
PCIDevice *dev)
{
return -ENOSYS;
}
void kvm_irqchip_commit_routes(KVMState *s)
{
}
int kvm_irqchip_add_adapter_route(KVMState *s, AdapterInfo *adapter)
{
return -ENOSYS;
}
int kvm_irqchip_add_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
EventNotifier *rn, int virq)
{
return -ENOSYS;
}
int kvm_irqchip_remove_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
int virq)
{
return -ENOSYS;
}
bool kvm_has_free_slot(MachineState *ms)
{
return false;
}
void kvm_init_cpu_signals(CPUState *cpu)
{
abort();
}
bool kvm_arm_supports_user_irq(void)
{
return false;
}
#endif


@@ -1,30 +0,0 @@
/*
* QEMU TCG accelerator stub
*
* Copyright Red Hat, Inc. 2013
*
* Author: Paolo Bonzini <pbonzini@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "cpu.h"
#include "tcg/tcg.h"
#include "exec/cpu-common.h"
#include "exec/exec-all.h"
void tb_flush(CPUState *cpu)
{
}
void tb_unlock(void)
{
}
void tlb_set_dirty(CPUState *cpu, target_ulong vaddr)
{
}


@@ -1,48 +0,0 @@
/*
* QEMU Windows Hypervisor Platform accelerator (WHPX) stub
*
* Copyright Microsoft Corp. 2017
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "cpu.h"
#include "sysemu/whpx.h"
int whpx_init_vcpu(CPUState *cpu)
{
return -1;
}
int whpx_vcpu_exec(CPUState *cpu)
{
return -1;
}
void whpx_destroy_vcpu(CPUState *cpu)
{
}
void whpx_vcpu_kick(CPUState *cpu)
{
}
void whpx_cpu_synchronize_state(CPUState *cpu)
{
}
void whpx_cpu_synchronize_post_reset(CPUState *cpu)
{
}
void whpx_cpu_synchronize_post_init(CPUState *cpu)
{
}
void whpx_cpu_synchronize_pre_loadvm(CPUState *cpu)
{
}


@@ -1,8 +0,0 @@
obj-$(CONFIG_SOFTMMU) += tcg-all.o
obj-$(CONFIG_SOFTMMU) += cputlb.o
obj-y += tcg-runtime.o tcg-runtime-gvec.o
obj-y += cpu-exec.o cpu-exec-common.o translate-all.o
obj-y += translator.o
obj-$(CONFIG_USER_ONLY) += user-exec.o
obj-$(call lnot,$(CONFIG_SOFTMMU)) += user-exec-stub.o


@@ -1,245 +0,0 @@
/*
* Atomic helper templates
* Included from tcg-runtime.c and cputlb.c.
*
* Copyright (c) 2016 Red Hat, Inc
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#if DATA_SIZE == 16
# define SUFFIX o
# define DATA_TYPE Int128
# define BSWAP bswap128
#elif DATA_SIZE == 8
# define SUFFIX q
# define DATA_TYPE uint64_t
# define BSWAP bswap64
#elif DATA_SIZE == 4
# define SUFFIX l
# define DATA_TYPE uint32_t
# define BSWAP bswap32
#elif DATA_SIZE == 2
# define SUFFIX w
# define DATA_TYPE uint16_t
# define BSWAP bswap16
#elif DATA_SIZE == 1
# define SUFFIX b
# define DATA_TYPE uint8_t
# define BSWAP
#else
# error unsupported data size
#endif
#if DATA_SIZE >= 4
# define ABI_TYPE DATA_TYPE
#else
# define ABI_TYPE uint32_t
#endif
/* Define host-endian atomic operations. Note that END is used within
the ATOMIC_NAME macro, and redefined below. */
#if DATA_SIZE == 1
# define END
#elif defined(HOST_WORDS_BIGENDIAN)
# define END _be
#else
# define END _le
#endif
ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
ABI_TYPE cmpv, ABI_TYPE newv EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
DATA_TYPE ret = atomic_cmpxchg__nocheck(haddr, cmpv, newv);
ATOMIC_MMU_CLEANUP;
return ret;
}
#if DATA_SIZE >= 16
ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE val, *haddr = ATOMIC_MMU_LOOKUP;
__atomic_load(haddr, &val, __ATOMIC_RELAXED);
ATOMIC_MMU_CLEANUP;
return val;
}
void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr,
ABI_TYPE val EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
__atomic_store(haddr, &val, __ATOMIC_RELAXED);
ATOMIC_MMU_CLEANUP;
}
#else
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
ABI_TYPE val EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
DATA_TYPE ret = atomic_xchg__nocheck(haddr, val);
ATOMIC_MMU_CLEANUP;
return ret;
}
#define GEN_ATOMIC_HELPER(X) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE val EXTRA_ARGS) \
{ \
ATOMIC_MMU_DECLS; \
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
DATA_TYPE ret = atomic_##X(haddr, val); \
ATOMIC_MMU_CLEANUP; \
return ret; \
}
GEN_ATOMIC_HELPER(fetch_add)
GEN_ATOMIC_HELPER(fetch_and)
GEN_ATOMIC_HELPER(fetch_or)
GEN_ATOMIC_HELPER(fetch_xor)
GEN_ATOMIC_HELPER(add_fetch)
GEN_ATOMIC_HELPER(and_fetch)
GEN_ATOMIC_HELPER(or_fetch)
GEN_ATOMIC_HELPER(xor_fetch)
#undef GEN_ATOMIC_HELPER
#endif /* DATA SIZE >= 16 */
#undef END
#if DATA_SIZE > 1
/* Define reverse-host-endian atomic operations. Note that END is used
within the ATOMIC_NAME macro. */
#ifdef HOST_WORDS_BIGENDIAN
# define END _le
#else
# define END _be
#endif
ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
ABI_TYPE cmpv, ABI_TYPE newv EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
DATA_TYPE ret = atomic_cmpxchg__nocheck(haddr, BSWAP(cmpv), BSWAP(newv));
ATOMIC_MMU_CLEANUP;
return BSWAP(ret);
}
#if DATA_SIZE >= 16
ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE val, *haddr = ATOMIC_MMU_LOOKUP;
__atomic_load(haddr, &val, __ATOMIC_RELAXED);
ATOMIC_MMU_CLEANUP;
return BSWAP(val);
}
void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr,
ABI_TYPE val EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
val = BSWAP(val);
__atomic_store(haddr, &val, __ATOMIC_RELAXED);
ATOMIC_MMU_CLEANUP;
}
#else
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
ABI_TYPE val EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
ABI_TYPE ret = atomic_xchg__nocheck(haddr, BSWAP(val));
ATOMIC_MMU_CLEANUP;
return BSWAP(ret);
}
#define GEN_ATOMIC_HELPER(X) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE val EXTRA_ARGS) \
{ \
ATOMIC_MMU_DECLS; \
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
DATA_TYPE ret = atomic_##X(haddr, BSWAP(val)); \
ATOMIC_MMU_CLEANUP; \
return BSWAP(ret); \
}
GEN_ATOMIC_HELPER(fetch_and)
GEN_ATOMIC_HELPER(fetch_or)
GEN_ATOMIC_HELPER(fetch_xor)
GEN_ATOMIC_HELPER(and_fetch)
GEN_ATOMIC_HELPER(or_fetch)
GEN_ATOMIC_HELPER(xor_fetch)
#undef GEN_ATOMIC_HELPER
/* Note that for addition, we need to use a separate cmpxchg loop instead
of bswaps for the reverse-host-endian helpers. */
ABI_TYPE ATOMIC_NAME(fetch_add)(CPUArchState *env, target_ulong addr,
ABI_TYPE val EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
DATA_TYPE ldo, ldn, ret, sto;
ldo = atomic_read__nocheck(haddr);
while (1) {
ret = BSWAP(ldo);
sto = BSWAP(ret + val);
ldn = atomic_cmpxchg__nocheck(haddr, ldo, sto);
if (ldn == ldo) {
ATOMIC_MMU_CLEANUP;
return ret;
}
ldo = ldn;
}
}
ABI_TYPE ATOMIC_NAME(add_fetch)(CPUArchState *env, target_ulong addr,
ABI_TYPE val EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
DATA_TYPE ldo, ldn, ret, sto;
ldo = atomic_read__nocheck(haddr);
while (1) {
ret = BSWAP(ldo) + val;
sto = BSWAP(ret);
ldn = atomic_cmpxchg__nocheck(haddr, ldo, sto);
if (ldn == ldo) {
ATOMIC_MMU_CLEANUP;
return ret;
}
ldo = ldn;
}
}
#endif /* DATA_SIZE >= 16 */
#undef END
#endif /* DATA_SIZE > 1 */
#undef BSWAP
#undef ABI_TYPE
#undef DATA_TYPE
#undef SUFFIX
#undef DATA_SIZE


@@ -1,83 +0,0 @@
/*
* emulator main execution loop
*
* Copyright (c) 2003-2005 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "cpu.h"
#include "sysemu/cpus.h"
#include "exec/exec-all.h"
bool tcg_allowed;
/* exit the current TB, but without causing any exception to be raised */
void cpu_loop_exit_noexc(CPUState *cpu)
{
/* XXX: restore cpu registers saved in host registers */
cpu->exception_index = -1;
siglongjmp(cpu->jmp_env, 1);
}
#if defined(CONFIG_SOFTMMU)
void cpu_reloading_memory_map(void)
{
if (qemu_in_vcpu_thread() && current_cpu->running) {
/* The guest can in theory prolong the RCU critical section as long
* as it feels like. The major problem with this is that because it
* can do multiple reconfigurations of the memory map within the
* critical section, we could potentially accumulate an unbounded
* collection of memory data structures awaiting reclamation.
*
* Because the only thing we're currently protecting with RCU is the
* memory data structures, it's sufficient to break the critical section
* in this callback, which we know will get called every time the
* memory map is rearranged.
*
* (If we add anything else in the system that uses RCU to protect
* its data structures, we will need to implement some other mechanism
* to force TCG CPUs to exit the critical section, at which point this
* part of this callback might become unnecessary.)
*
* This pair matches cpu_exec's rcu_read_lock()/rcu_read_unlock(), which
* only protects cpu->as->dispatch. Since we know our caller is about
* to reload it, it's safe to split the critical section.
*/
rcu_read_unlock();
rcu_read_lock();
}
}
#endif
void cpu_loop_exit(CPUState *cpu)
{
siglongjmp(cpu->jmp_env, 1);
}
void cpu_loop_exit_restore(CPUState *cpu, uintptr_t pc)
{
if (pc) {
cpu_restore_state(cpu, pc);
}
siglongjmp(cpu->jmp_env, 1);
}
void cpu_loop_exit_atomic(CPUState *cpu, uintptr_t pc)
{
cpu->exception_index = EXCP_ATOMIC;
cpu_loop_exit_restore(cpu, pc);
}


@@ -1,743 +0,0 @@
/*
* emulator main execution loop
*
* Copyright (c) 2003-2005 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "cpu.h"
#include "trace.h"
#include "disas/disas.h"
#include "exec/exec-all.h"
#include "tcg.h"
#include "qemu/atomic.h"
#include "sysemu/qtest.h"
#include "qemu/timer.h"
#include "exec/address-spaces.h"
#include "qemu/rcu.h"
#include "exec/tb-hash.h"
#include "exec/tb-lookup.h"
#include "exec/log.h"
#include "qemu/main-loop.h"
#if defined(TARGET_I386) && !defined(CONFIG_USER_ONLY)
#include "hw/i386/apic.h"
#endif
#include "sysemu/cpus.h"
#include "sysemu/replay.h"
/* -icount align implementation. */
typedef struct SyncClocks {
int64_t diff_clk;
int64_t last_cpu_icount;
int64_t realtime_clock;
} SyncClocks;
#if !defined(CONFIG_USER_ONLY)
/* Allow the guest to have a max 3ms advance.
* The difference between the 2 clocks could therefore
* oscillate around 0.
*/
#define VM_CLOCK_ADVANCE 3000000
#define THRESHOLD_REDUCE 1.5
#define MAX_DELAY_PRINT_RATE 2000000000LL
#define MAX_NB_PRINTS 100
static void align_clocks(SyncClocks *sc, const CPUState *cpu)
{
int64_t cpu_icount;
if (!icount_align_option) {
return;
}
cpu_icount = cpu->icount_extra + cpu->icount_decr.u16.low;
sc->diff_clk += cpu_icount_to_ns(sc->last_cpu_icount - cpu_icount);
sc->last_cpu_icount = cpu_icount;
if (sc->diff_clk > VM_CLOCK_ADVANCE) {
#ifndef _WIN32
struct timespec sleep_delay, rem_delay;
sleep_delay.tv_sec = sc->diff_clk / 1000000000LL;
sleep_delay.tv_nsec = sc->diff_clk % 1000000000LL;
if (nanosleep(&sleep_delay, &rem_delay) < 0) {
sc->diff_clk = rem_delay.tv_sec * 1000000000LL + rem_delay.tv_nsec;
} else {
sc->diff_clk = 0;
}
#else
Sleep(sc->diff_clk / SCALE_MS);
sc->diff_clk = 0;
#endif
}
}
static void print_delay(const SyncClocks *sc)
{
static float threshold_delay;
static int64_t last_realtime_clock;
static int nb_prints;
if (icount_align_option &&
sc->realtime_clock - last_realtime_clock >= MAX_DELAY_PRINT_RATE &&
nb_prints < MAX_NB_PRINTS) {
if ((-sc->diff_clk / (float)1000000000LL > threshold_delay) ||
(-sc->diff_clk / (float)1000000000LL <
(threshold_delay - THRESHOLD_REDUCE))) {
threshold_delay = (-sc->diff_clk / 1000000000LL) + 1;
printf("Warning: The guest is now late by %.1f to %.1f seconds\n",
threshold_delay - 1,
threshold_delay);
nb_prints++;
last_realtime_clock = sc->realtime_clock;
}
}
}
static void init_delay_params(SyncClocks *sc,
const CPUState *cpu)
{
if (!icount_align_option) {
return;
}
sc->realtime_clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT);
sc->diff_clk = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) - sc->realtime_clock;
sc->last_cpu_icount = cpu->icount_extra + cpu->icount_decr.u16.low;
if (sc->diff_clk < max_delay) {
max_delay = sc->diff_clk;
}
if (sc->diff_clk > max_advance) {
max_advance = sc->diff_clk;
}
/* Print every 2s max if the guest is late. We limit the number
of printed messages to NB_PRINT_MAX(currently 100) */
print_delay(sc);
}
#else
static void align_clocks(SyncClocks *sc, const CPUState *cpu)
{
}
static void init_delay_params(SyncClocks *sc, const CPUState *cpu)
{
}
#endif /* CONFIG USER ONLY */
/* Execute a TB, and fix up the CPU state afterwards if necessary */
static inline tcg_target_ulong cpu_tb_exec(CPUState *cpu, TranslationBlock *itb)
{
CPUArchState *env = cpu->env_ptr;
uintptr_t ret;
TranslationBlock *last_tb;
int tb_exit;
uint8_t *tb_ptr = itb->tc.ptr;
qemu_log_mask_and_addr(CPU_LOG_EXEC, itb->pc,
"Trace %d: %p ["
TARGET_FMT_lx "/" TARGET_FMT_lx "/%#x] %s\n",
cpu->cpu_index, itb->tc.ptr,
itb->cs_base, itb->pc, itb->flags,
lookup_symbol(itb->pc));
#if defined(DEBUG_DISAS)
if (qemu_loglevel_mask(CPU_LOG_TB_CPU)
&& qemu_log_in_addr_range(itb->pc)) {
qemu_log_lock();
#if defined(TARGET_I386)
log_cpu_state(cpu, CPU_DUMP_CCOP);
#else
log_cpu_state(cpu, 0);
#endif
qemu_log_unlock();
}
#endif /* DEBUG_DISAS */
cpu->can_do_io = !use_icount;
ret = tcg_qemu_tb_exec(env, tb_ptr);
cpu->can_do_io = 1;
last_tb = (TranslationBlock *)(ret & ~TB_EXIT_MASK);
tb_exit = ret & TB_EXIT_MASK;
trace_exec_tb_exit(last_tb, tb_exit);
if (tb_exit > TB_EXIT_IDX1) {
/* We didn't start executing this TB (eg because the instruction
* counter hit zero); we must restore the guest PC to the address
* of the start of the TB.
*/
CPUClass *cc = CPU_GET_CLASS(cpu);
qemu_log_mask_and_addr(CPU_LOG_EXEC, last_tb->pc,
"Stopped execution of TB chain before %p ["
TARGET_FMT_lx "] %s\n",
last_tb->tc.ptr, last_tb->pc,
lookup_symbol(last_tb->pc));
if (cc->synchronize_from_tb) {
cc->synchronize_from_tb(cpu, last_tb);
} else {
assert(cc->set_pc);
cc->set_pc(cpu, last_tb->pc);
}
}
return ret;
}
#ifndef CONFIG_USER_ONLY
/* Execute the code without caching the generated code. An interpreter
could be used if available. */
static void cpu_exec_nocache(CPUState *cpu, int max_cycles,
TranslationBlock *orig_tb, bool ignore_icount)
{
TranslationBlock *tb;
uint32_t cflags = curr_cflags() | CF_NOCACHE;
if (ignore_icount) {
cflags &= ~CF_USE_ICOUNT;
}
/* Should never happen.
We only end up here when an existing TB is too long. */
cflags |= MIN(max_cycles, CF_COUNT_MASK);
tb_lock();
tb = tb_gen_code(cpu, orig_tb->pc, orig_tb->cs_base,
orig_tb->flags, cflags);
tb->orig_tb = orig_tb;
tb_unlock();
/* execute the generated code */
trace_exec_tb_nocache(tb, tb->pc);
cpu_tb_exec(cpu, tb);
tb_lock();
tb_phys_invalidate(tb, -1);
tb_remove(tb);
tb_unlock();
}
#endif
void cpu_exec_step_atomic(CPUState *cpu)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
TranslationBlock *tb;
target_ulong cs_base, pc;
uint32_t flags;
uint32_t cflags = 1;
uint32_t cf_mask = cflags & CF_HASH_MASK;
/* volatile because we modify it between setjmp and longjmp */
volatile bool in_exclusive_region = false;
if (sigsetjmp(cpu->jmp_env, 0) == 0) {
tb = tb_lookup__cpu_state(cpu, &pc, &cs_base, &flags, cf_mask);
if (tb == NULL) {
mmap_lock();
tb_lock();
tb = tb_htable_lookup(cpu, pc, cs_base, flags, cf_mask);
if (likely(tb == NULL)) {
tb = tb_gen_code(cpu, pc, cs_base, flags, cflags);
}
tb_unlock();
mmap_unlock();
}
start_exclusive();
/* Since we got here, we know that parallel_cpus must be true. */
parallel_cpus = false;
in_exclusive_region = true;
cc->cpu_exec_enter(cpu);
/* execute the generated code */
trace_exec_tb(tb, pc);
cpu_tb_exec(cpu, tb);
cc->cpu_exec_exit(cpu);
} else {
/* We may have exited due to another problem here, so we need
* to reset any tb_locks we may have taken but didn't release.
* The mmap_lock is dropped by tb_gen_code if it runs out of
* memory.
*/
#ifndef CONFIG_SOFTMMU
tcg_debug_assert(!have_mmap_lock());
#endif
tb_lock_reset();
}
if (in_exclusive_region) {
/* We might longjump out of either the codegen or the
* execution, so must make sure we only end the exclusive
* region if we started it.
*/
parallel_cpus = true;
end_exclusive();
}
}
struct tb_desc {
target_ulong pc;
target_ulong cs_base;
CPUArchState *env;
tb_page_addr_t phys_page1;
uint32_t flags;
uint32_t cf_mask;
uint32_t trace_vcpu_dstate;
};
static bool tb_cmp(const void *p, const void *d)
{
const TranslationBlock *tb = p;
const struct tb_desc *desc = d;
if (tb->pc == desc->pc &&
tb->page_addr[0] == desc->phys_page1 &&
tb->cs_base == desc->cs_base &&
tb->flags == desc->flags &&
tb->trace_vcpu_dstate == desc->trace_vcpu_dstate &&
(tb_cflags(tb) & (CF_HASH_MASK | CF_INVALID)) == desc->cf_mask) {
/* check next page if needed */
if (tb->page_addr[1] == -1) {
return true;
} else {
tb_page_addr_t phys_page2;
target_ulong virt_page2;
virt_page2 = (desc->pc & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
phys_page2 = get_page_addr_code(desc->env, virt_page2);
if (tb->page_addr[1] == phys_page2) {
return true;
}
}
}
return false;
}
TranslationBlock *tb_htable_lookup(CPUState *cpu, target_ulong pc,
target_ulong cs_base, uint32_t flags,
uint32_t cf_mask)
{
tb_page_addr_t phys_pc;
struct tb_desc desc;
uint32_t h;
desc.env = (CPUArchState *)cpu->env_ptr;
desc.cs_base = cs_base;
desc.flags = flags;
desc.cf_mask = cf_mask;
desc.trace_vcpu_dstate = *cpu->trace_dstate;
desc.pc = pc;
phys_pc = get_page_addr_code(desc.env, pc);
desc.phys_page1 = phys_pc & TARGET_PAGE_MASK;
h = tb_hash_func(phys_pc, pc, flags, cf_mask, *cpu->trace_dstate);
return qht_lookup(&tb_ctx.htable, tb_cmp, &desc, h);
}
void tb_set_jmp_target(TranslationBlock *tb, int n, uintptr_t addr)
{
if (TCG_TARGET_HAS_direct_jump) {
uintptr_t offset = tb->jmp_target_arg[n];
uintptr_t tc_ptr = (uintptr_t)tb->tc.ptr;
tb_target_set_jmp_target(tc_ptr, tc_ptr + offset, addr);
} else {
tb->jmp_target_arg[n] = addr;
}
}
/* Called with tb_lock held. */
static inline void tb_add_jump(TranslationBlock *tb, int n,
TranslationBlock *tb_next)
{
assert(n < ARRAY_SIZE(tb->jmp_list_next));
if (tb->jmp_list_next[n]) {
/* Another thread has already done this while we were
* outside of the lock; nothing to do in this case */
return;
}
qemu_log_mask_and_addr(CPU_LOG_EXEC, tb->pc,
"Linking TBs %p [" TARGET_FMT_lx
"] index %d -> %p [" TARGET_FMT_lx "]\n",
tb->tc.ptr, tb->pc, n,
tb_next->tc.ptr, tb_next->pc);
/* patch the native jump address */
tb_set_jmp_target(tb, n, (uintptr_t)tb_next->tc.ptr);
/* add in TB jmp circular list */
tb->jmp_list_next[n] = tb_next->jmp_list_first;
tb_next->jmp_list_first = (uintptr_t)tb | n;
}
static inline TranslationBlock *tb_find(CPUState *cpu,
TranslationBlock *last_tb,
int tb_exit, uint32_t cf_mask)
{
TranslationBlock *tb;
target_ulong cs_base, pc;
uint32_t flags;
bool acquired_tb_lock = false;
tb = tb_lookup__cpu_state(cpu, &pc, &cs_base, &flags, cf_mask);
if (tb == NULL) {
/* mmap_lock is needed by tb_gen_code, and mmap_lock must be
* taken outside tb_lock. As system emulation is currently
* single threaded the locks are NOPs.
*/
mmap_lock();
tb_lock();
acquired_tb_lock = true;
/* There's a chance that our desired tb has been translated while
* taking the locks so we check again inside the lock.
*/
tb = tb_htable_lookup(cpu, pc, cs_base, flags, cf_mask);
if (likely(tb == NULL)) {
/* if no translated code available, then translate it now */
tb = tb_gen_code(cpu, pc, cs_base, flags, cf_mask);
}
mmap_unlock();
/* We add the TB in the virtual pc hash table for the fast lookup */
atomic_set(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)], tb);
}
#ifndef CONFIG_USER_ONLY
/* We don't take care of direct jumps when address mapping changes in
* system emulation. So it's not safe to make a direct jump to a TB
* spanning two pages because the mapping for the second page can change.
*/
if (tb->page_addr[1] != -1) {
last_tb = NULL;
}
#endif
/* See if we can patch the calling TB. */
if (last_tb && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
if (!acquired_tb_lock) {
tb_lock();
acquired_tb_lock = true;
}
if (!(tb->cflags & CF_INVALID)) {
tb_add_jump(last_tb, tb_exit, tb);
}
}
if (acquired_tb_lock) {
tb_unlock();
}
return tb;
}
static inline bool cpu_handle_halt(CPUState *cpu)
{
if (cpu->halted) {
#if defined(TARGET_I386) && !defined(CONFIG_USER_ONLY)
if ((cpu->interrupt_request & CPU_INTERRUPT_POLL)
&& replay_interrupt()) {
X86CPU *x86_cpu = X86_CPU(cpu);
qemu_mutex_lock_iothread();
apic_poll_irq(x86_cpu->apic_state);
cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
qemu_mutex_unlock_iothread();
}
#endif
if (!cpu_has_work(cpu)) {
return true;
}
cpu->halted = 0;
}
return false;
}
static inline void cpu_handle_debug_exception(CPUState *cpu)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
CPUWatchpoint *wp;
if (!cpu->watchpoint_hit) {
QTAILQ_FOREACH(wp, &cpu->watchpoints, entry) {
wp->flags &= ~BP_WATCHPOINT_HIT;
}
}
cc->debug_excp_handler(cpu);
}
static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
{
if (cpu->exception_index < 0) {
#ifndef CONFIG_USER_ONLY
if (replay_has_exception()
&& cpu->icount_decr.u16.low + cpu->icount_extra == 0) {
/* try to cause an exception pending in the log */
cpu_exec_nocache(cpu, 1, tb_find(cpu, NULL, 0, curr_cflags()), true);
}
#endif
if (cpu->exception_index < 0) {
return false;
}
}
if (cpu->exception_index >= EXCP_INTERRUPT) {
/* exit request from the cpu execution loop */
*ret = cpu->exception_index;
if (*ret == EXCP_DEBUG) {
cpu_handle_debug_exception(cpu);
}
cpu->exception_index = -1;
return true;
} else {
#if defined(CONFIG_USER_ONLY)
/* if user mode only, we simulate a fake exception
which will be handled outside the cpu execution
loop */
#if defined(TARGET_I386)
CPUClass *cc = CPU_GET_CLASS(cpu);
cc->do_interrupt(cpu);
#endif
*ret = cpu->exception_index;
cpu->exception_index = -1;
return true;
#else
if (replay_exception()) {
CPUClass *cc = CPU_GET_CLASS(cpu);
qemu_mutex_lock_iothread();
cc->do_interrupt(cpu);
qemu_mutex_unlock_iothread();
cpu->exception_index = -1;
} else if (!replay_has_interrupt()) {
/* give a chance to iothread in replay mode */
*ret = EXCP_INTERRUPT;
return true;
}
#endif
}
return false;
}
static inline bool cpu_handle_interrupt(CPUState *cpu,
TranslationBlock **last_tb)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
/* Clear the interrupt flag now since we're processing
* cpu->interrupt_request and cpu->exit_request.
* Ensure zeroing happens before reading cpu->exit_request or
* cpu->interrupt_request (see also smp_wmb in cpu_exit())
*/
atomic_mb_set(&cpu->icount_decr.u16.high, 0);
if (unlikely(atomic_read(&cpu->interrupt_request))) {
int interrupt_request;
qemu_mutex_lock_iothread();
interrupt_request = cpu->interrupt_request;
if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
/* Mask out external interrupts for this step. */
interrupt_request &= ~CPU_INTERRUPT_SSTEP_MASK;
}
if (interrupt_request & CPU_INTERRUPT_DEBUG) {
cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
cpu->exception_index = EXCP_DEBUG;
qemu_mutex_unlock_iothread();
return true;
}
if (replay_mode == REPLAY_MODE_PLAY && !replay_has_interrupt()) {
/* Do nothing */
} else if (interrupt_request & CPU_INTERRUPT_HALT) {
replay_interrupt();
cpu->interrupt_request &= ~CPU_INTERRUPT_HALT;
cpu->halted = 1;
cpu->exception_index = EXCP_HLT;
qemu_mutex_unlock_iothread();
return true;
}
#if defined(TARGET_I386)
else if (interrupt_request & CPU_INTERRUPT_INIT) {
X86CPU *x86_cpu = X86_CPU(cpu);
CPUArchState *env = &x86_cpu->env;
replay_interrupt();
cpu_svm_check_intercept_param(env, SVM_EXIT_INIT, 0, 0);
do_cpu_init(x86_cpu);
cpu->exception_index = EXCP_HALTED;
qemu_mutex_unlock_iothread();
return true;
}
#else
else if (interrupt_request & CPU_INTERRUPT_RESET) {
replay_interrupt();
cpu_reset(cpu);
qemu_mutex_unlock_iothread();
return true;
}
#endif
/* The target hook has 3 exit conditions:
False when the interrupt isn't processed,
True when it is, and we should restart on a new TB,
and via longjmp via cpu_loop_exit. */
else {
if (cc->cpu_exec_interrupt(cpu, interrupt_request)) {
replay_interrupt();
*last_tb = NULL;
}
/* The target hook may have updated the 'cpu->interrupt_request';
* reload the 'interrupt_request' value */
interrupt_request = cpu->interrupt_request;
}
if (interrupt_request & CPU_INTERRUPT_EXITTB) {
cpu->interrupt_request &= ~CPU_INTERRUPT_EXITTB;
/* ensure that no TB jump will be modified as
the program flow was changed */
*last_tb = NULL;
}
/* If we exit via cpu_loop_exit/longjmp it is reset in cpu_exec */
qemu_mutex_unlock_iothread();
}
/* Finally, check if we need to exit to the main loop. */
if (unlikely(atomic_read(&cpu->exit_request)
|| (use_icount && cpu->icount_decr.u16.low + cpu->icount_extra == 0))) {
atomic_set(&cpu->exit_request, 0);
cpu->exception_index = EXCP_INTERRUPT;
return true;
}
return false;
}
static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
TranslationBlock **last_tb, int *tb_exit)
{
uintptr_t ret;
int32_t insns_left;
trace_exec_tb(tb, tb->pc);
ret = cpu_tb_exec(cpu, tb);
tb = (TranslationBlock *)(ret & ~TB_EXIT_MASK);
*tb_exit = ret & TB_EXIT_MASK;
if (*tb_exit != TB_EXIT_REQUESTED) {
*last_tb = tb;
return;
}
*last_tb = NULL;
insns_left = atomic_read(&cpu->icount_decr.u32);
if (insns_left < 0) {
/* Something asked us to stop executing chained TBs; just
* continue round the main loop. Whatever requested the exit
* will also have set something else (eg exit_request or
* interrupt_request) which will be handled by
* cpu_handle_interrupt. cpu_handle_interrupt will also
* clear cpu->icount_decr.u16.high.
*/
return;
}
/* Instruction counter expired. */
assert(use_icount);
#ifndef CONFIG_USER_ONLY
/* Ensure global icount has gone forward */
cpu_update_icount(cpu);
/* Refill decrementer and continue execution. */
insns_left = MIN(0xffff, cpu->icount_budget);
cpu->icount_decr.u16.low = insns_left;
cpu->icount_extra = cpu->icount_budget - insns_left;
if (!cpu->icount_extra) {
/* Execute any remaining instructions, then let the main loop
* handle the next event.
*/
if (insns_left > 0) {
cpu_exec_nocache(cpu, insns_left, tb, false);
}
}
#endif
}
/* main execution loop */
int cpu_exec(CPUState *cpu)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
int ret;
SyncClocks sc = { 0 };
/* replay_interrupt may need current_cpu */
current_cpu = cpu;
if (cpu_handle_halt(cpu)) {
return EXCP_HALTED;
}
rcu_read_lock();
cc->cpu_exec_enter(cpu);
/* Calculate difference between guest clock and host clock.
* This delay includes the delay of the last cycle, so
* what we have to do is sleep until it is 0. As for the
* advance/delay we gain here, we try to fix it next time.
*/
init_delay_params(&sc, cpu);
/* prepare setjmp context for exception handling */
if (sigsetjmp(cpu->jmp_env, 0) != 0) {
#if defined(__clang__) || !QEMU_GNUC_PREREQ(4, 6)
/* Some compilers wrongly smash all local variables after
* siglongjmp. There were bug reports for gcc 4.5.0 and clang.
* Reload essential local variables here for those compilers.
* Newer versions of gcc would complain about this code (-Wclobbered). */
cpu = current_cpu;
cc = CPU_GET_CLASS(cpu);
#else /* buggy compiler */
/* Assert that the compiler does not smash local variables. */
g_assert(cpu == current_cpu);
g_assert(cc == CPU_GET_CLASS(cpu));
#endif /* buggy compiler */
cpu->can_do_io = 1;
tb_lock_reset();
if (qemu_mutex_iothread_locked()) {
qemu_mutex_unlock_iothread();
}
}
/* if an exception is pending, we execute it here */
while (!cpu_handle_exception(cpu, &ret)) {
TranslationBlock *last_tb = NULL;
int tb_exit = 0;
while (!cpu_handle_interrupt(cpu, &last_tb)) {
uint32_t cflags = cpu->cflags_next_tb;
TranslationBlock *tb;
/* When requested, use an exact setting for cflags for the next
execution. This is used for icount, precise smc, and stop-
after-access watchpoints. Since this request should never
have CF_INVALID set, -1 is a convenient invalid value that
does not require tcg headers for cpu_common_reset. */
if (cflags == -1) {
cflags = curr_cflags();
} else {
cpu->cflags_next_tb = -1;
}
tb = tb_find(cpu, last_tb, tb_exit, cflags);
cpu_loop_exec_tb(cpu, tb, &last_tb, &tb_exit);
/* Try to align the host and virtual clocks
if the guest is in advance */
align_clocks(&sc, cpu);
}
}
cc->cpu_exec_exit(cpu);
rcu_read_unlock();
return ret;
}

File diff suppressed because it is too large


@@ -1,435 +0,0 @@
/*
* Software MMU support
*
* Generate helpers used by TCG for qemu_ld/st ops and code load
* functions.
*
* Included from target op helpers and exec.c.
*
* Copyright (c) 2003 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#if DATA_SIZE == 8
#define SUFFIX q
#define LSUFFIX q
#define SDATA_TYPE int64_t
#define DATA_TYPE uint64_t
#elif DATA_SIZE == 4
#define SUFFIX l
#define LSUFFIX l
#define SDATA_TYPE int32_t
#define DATA_TYPE uint32_t
#elif DATA_SIZE == 2
#define SUFFIX w
#define LSUFFIX uw
#define SDATA_TYPE int16_t
#define DATA_TYPE uint16_t
#elif DATA_SIZE == 1
#define SUFFIX b
#define LSUFFIX ub
#define SDATA_TYPE int8_t
#define DATA_TYPE uint8_t
#else
#error unsupported data size
#endif
/* For the benefit of TCG generated code, we want to avoid the complication
of ABI-specific return type promotion and always return a value extended
to the register size of the host. This is tcg_target_long, except in the
case of a 32-bit host and 64-bit data, and for that we always have
uint64_t. Don't bother with this widened value for SOFTMMU_CODE_ACCESS. */
#if defined(SOFTMMU_CODE_ACCESS) || DATA_SIZE == 8
# define WORD_TYPE DATA_TYPE
# define USUFFIX SUFFIX
#else
# define WORD_TYPE tcg_target_ulong
# define USUFFIX glue(u, SUFFIX)
# define SSUFFIX glue(s, SUFFIX)
#endif
#ifdef SOFTMMU_CODE_ACCESS
#define READ_ACCESS_TYPE MMU_INST_FETCH
#define ADDR_READ addr_code
#else
#define READ_ACCESS_TYPE MMU_DATA_LOAD
#define ADDR_READ addr_read
#endif
#if DATA_SIZE == 8
# define BSWAP(X) bswap64(X)
#elif DATA_SIZE == 4
# define BSWAP(X) bswap32(X)
#elif DATA_SIZE == 2
# define BSWAP(X) bswap16(X)
#else
# define BSWAP(X) (X)
#endif
#if DATA_SIZE == 1
# define helper_le_ld_name glue(glue(helper_ret_ld, USUFFIX), MMUSUFFIX)
# define helper_be_ld_name helper_le_ld_name
# define helper_le_lds_name glue(glue(helper_ret_ld, SSUFFIX), MMUSUFFIX)
# define helper_be_lds_name helper_le_lds_name
# define helper_le_st_name glue(glue(helper_ret_st, SUFFIX), MMUSUFFIX)
# define helper_be_st_name helper_le_st_name
#else
# define helper_le_ld_name glue(glue(helper_le_ld, USUFFIX), MMUSUFFIX)
# define helper_be_ld_name glue(glue(helper_be_ld, USUFFIX), MMUSUFFIX)
# define helper_le_lds_name glue(glue(helper_le_ld, SSUFFIX), MMUSUFFIX)
# define helper_be_lds_name glue(glue(helper_be_ld, SSUFFIX), MMUSUFFIX)
# define helper_le_st_name glue(glue(helper_le_st, SUFFIX), MMUSUFFIX)
# define helper_be_st_name glue(glue(helper_be_st, SUFFIX), MMUSUFFIX)
#endif
#ifndef SOFTMMU_CODE_ACCESS
static inline DATA_TYPE glue(io_read, SUFFIX)(CPUArchState *env,
size_t mmu_idx, size_t index,
target_ulong addr,
uintptr_t retaddr)
{
CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, DATA_SIZE);
}
#endif
WORD_TYPE helper_le_ld_name(CPUArchState *env, target_ulong addr,
TCGMemOpIdx oi, uintptr_t retaddr)
{
unsigned mmu_idx = get_mmuidx(oi);
int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
target_ulong tlb_addr = env->tlb_table[mmu_idx][index].ADDR_READ;
unsigned a_bits = get_alignment_bits(get_memop(oi));
uintptr_t haddr;
DATA_TYPE res;
if (addr & ((1 << a_bits) - 1)) {
cpu_unaligned_access(ENV_GET_CPU(env), addr, READ_ACCESS_TYPE,
mmu_idx, retaddr);
}
/* If the TLB entry is for a different page, reload and try again. */
if ((addr & TARGET_PAGE_MASK)
!= (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
if (!VICTIM_TLB_HIT(ADDR_READ, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, READ_ACCESS_TYPE,
mmu_idx, retaddr);
}
tlb_addr = env->tlb_table[mmu_idx][index].ADDR_READ;
}
/* Handle an IO access. */
if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
if ((addr & (DATA_SIZE - 1)) != 0) {
goto do_unaligned_access;
}
/* ??? Note that the io helpers always read data in the target
byte ordering. We should push the LE/BE request down into io. */
res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr);
res = TGT_LE(res);
return res;
}
/* Handle slow unaligned access (it spans two pages or IO). */
if (DATA_SIZE > 1
&& unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
>= TARGET_PAGE_SIZE)) {
target_ulong addr1, addr2;
DATA_TYPE res1, res2;
unsigned shift;
do_unaligned_access:
addr1 = addr & ~(DATA_SIZE - 1);
addr2 = addr1 + DATA_SIZE;
res1 = helper_le_ld_name(env, addr1, oi, retaddr);
res2 = helper_le_ld_name(env, addr2, oi, retaddr);
shift = (addr & (DATA_SIZE - 1)) * 8;
/* Little-endian combine. */
res = (res1 >> shift) | (res2 << ((DATA_SIZE * 8) - shift));
return res;
}
haddr = addr + env->tlb_table[mmu_idx][index].addend;
#if DATA_SIZE == 1
res = glue(glue(ld, LSUFFIX), _p)((uint8_t *)haddr);
#else
res = glue(glue(ld, LSUFFIX), _le_p)((uint8_t *)haddr);
#endif
return res;
}
#if DATA_SIZE > 1
WORD_TYPE helper_be_ld_name(CPUArchState *env, target_ulong addr,
TCGMemOpIdx oi, uintptr_t retaddr)
{
unsigned mmu_idx = get_mmuidx(oi);
int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
target_ulong tlb_addr = env->tlb_table[mmu_idx][index].ADDR_READ;
unsigned a_bits = get_alignment_bits(get_memop(oi));
uintptr_t haddr;
DATA_TYPE res;
if (addr & ((1 << a_bits) - 1)) {
cpu_unaligned_access(ENV_GET_CPU(env), addr, READ_ACCESS_TYPE,
mmu_idx, retaddr);
}
/* If the TLB entry is for a different page, reload and try again. */
if ((addr & TARGET_PAGE_MASK)
!= (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
if (!VICTIM_TLB_HIT(ADDR_READ, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, READ_ACCESS_TYPE,
mmu_idx, retaddr);
}
tlb_addr = env->tlb_table[mmu_idx][index].ADDR_READ;
}
/* Handle an IO access. */
if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
if ((addr & (DATA_SIZE - 1)) != 0) {
goto do_unaligned_access;
}
/* ??? Note that the io helpers always read data in the target
byte ordering. We should push the LE/BE request down into io. */
res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr);
res = TGT_BE(res);
return res;
}
/* Handle slow unaligned access (it spans two pages or IO). */
if (DATA_SIZE > 1
&& unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
>= TARGET_PAGE_SIZE)) {
target_ulong addr1, addr2;
DATA_TYPE res1, res2;
unsigned shift;
do_unaligned_access:
addr1 = addr & ~(DATA_SIZE - 1);
addr2 = addr1 + DATA_SIZE;
res1 = helper_be_ld_name(env, addr1, oi, retaddr);
res2 = helper_be_ld_name(env, addr2, oi, retaddr);
shift = (addr & (DATA_SIZE - 1)) * 8;
/* Big-endian combine. */
res = (res1 << shift) | (res2 >> ((DATA_SIZE * 8) - shift));
return res;
}
haddr = addr + env->tlb_table[mmu_idx][index].addend;
res = glue(glue(ld, LSUFFIX), _be_p)((uint8_t *)haddr);
return res;
}
#endif /* DATA_SIZE > 1 */
#ifndef SOFTMMU_CODE_ACCESS
/* Provide signed versions of the load routines as well. We can of course
avoid this for 64-bit data, or for 32-bit data on 32-bit host. */
#if DATA_SIZE * 8 < TCG_TARGET_REG_BITS
WORD_TYPE helper_le_lds_name(CPUArchState *env, target_ulong addr,
TCGMemOpIdx oi, uintptr_t retaddr)
{
return (SDATA_TYPE)helper_le_ld_name(env, addr, oi, retaddr);
}
# if DATA_SIZE > 1
WORD_TYPE helper_be_lds_name(CPUArchState *env, target_ulong addr,
TCGMemOpIdx oi, uintptr_t retaddr)
{
return (SDATA_TYPE)helper_be_ld_name(env, addr, oi, retaddr);
}
# endif
#endif
static inline void glue(io_write, SUFFIX)(CPUArchState *env,
size_t mmu_idx, size_t index,
DATA_TYPE val,
target_ulong addr,
uintptr_t retaddr)
{
CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
return io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr, DATA_SIZE);
}
void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
TCGMemOpIdx oi, uintptr_t retaddr)
{
unsigned mmu_idx = get_mmuidx(oi);
int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
target_ulong tlb_addr = env->tlb_table[mmu_idx][index].addr_write;
unsigned a_bits = get_alignment_bits(get_memop(oi));
uintptr_t haddr;
if (addr & ((1 << a_bits) - 1)) {
cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
mmu_idx, retaddr);
}
/* If the TLB entry is for a different page, reload and try again. */
if ((addr & TARGET_PAGE_MASK)
!= (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
if (!VICTIM_TLB_HIT(addr_write, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, MMU_DATA_STORE,
mmu_idx, retaddr);
}
tlb_addr = env->tlb_table[mmu_idx][index].addr_write & ~TLB_INVALID_MASK;
}
/* Handle an IO access. */
if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
if ((addr & (DATA_SIZE - 1)) != 0) {
goto do_unaligned_access;
}
/* ??? Note that the io helpers always read data in the target
byte ordering. We should push the LE/BE request down into io. */
val = TGT_LE(val);
glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr, retaddr);
return;
}
/* Handle slow unaligned access (it spans two pages or IO). */
if (DATA_SIZE > 1
&& unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
>= TARGET_PAGE_SIZE)) {
int i, index2;
target_ulong page2, tlb_addr2;
do_unaligned_access:
/* Ensure the second page is in the TLB. Note that the first page
is already guaranteed to be filled, and that the second page
cannot evict the first. */
page2 = (addr + DATA_SIZE) & TARGET_PAGE_MASK;
index2 = (page2 >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
tlb_addr2 = env->tlb_table[mmu_idx][index2].addr_write;
if (page2 != (tlb_addr2 & (TARGET_PAGE_MASK | TLB_INVALID_MASK))
&& !VICTIM_TLB_HIT(addr_write, page2)) {
tlb_fill(ENV_GET_CPU(env), page2, DATA_SIZE, MMU_DATA_STORE,
mmu_idx, retaddr);
}
/* XXX: not efficient, but simple. */
/* This loop must go in the forward direction to avoid issues
with self-modifying code in Windows 64-bit. */
for (i = 0; i < DATA_SIZE; ++i) {
/* Little-endian extract. */
uint8_t val8 = val >> (i * 8);
glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
oi, retaddr);
}
return;
}
haddr = addr + env->tlb_table[mmu_idx][index].addend;
#if DATA_SIZE == 1
glue(glue(st, SUFFIX), _p)((uint8_t *)haddr, val);
#else
glue(glue(st, SUFFIX), _le_p)((uint8_t *)haddr, val);
#endif
}
#if DATA_SIZE > 1
void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
TCGMemOpIdx oi, uintptr_t retaddr)
{
unsigned mmu_idx = get_mmuidx(oi);
int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
target_ulong tlb_addr = env->tlb_table[mmu_idx][index].addr_write;
unsigned a_bits = get_alignment_bits(get_memop(oi));
uintptr_t haddr;
if (addr & ((1 << a_bits) - 1)) {
cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
mmu_idx, retaddr);
}
/* If the TLB entry is for a different page, reload and try again. */
if ((addr & TARGET_PAGE_MASK)
!= (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
if (!VICTIM_TLB_HIT(addr_write, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, MMU_DATA_STORE,
mmu_idx, retaddr);
}
tlb_addr = env->tlb_table[mmu_idx][index].addr_write & ~TLB_INVALID_MASK;
}
/* Handle an IO access. */
if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
if ((addr & (DATA_SIZE - 1)) != 0) {
goto do_unaligned_access;
}
/* ??? Note that the io helpers always read data in the target
byte ordering. We should push the LE/BE request down into io. */
val = TGT_BE(val);
glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr, retaddr);
return;
}
/* Handle slow unaligned access (it spans two pages or IO). */
if (DATA_SIZE > 1
&& unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
>= TARGET_PAGE_SIZE)) {
int i, index2;
target_ulong page2, tlb_addr2;
do_unaligned_access:
/* Ensure the second page is in the TLB. Note that the first page
is already guaranteed to be filled, and that the second page
cannot evict the first. */
page2 = (addr + DATA_SIZE) & TARGET_PAGE_MASK;
index2 = (page2 >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
tlb_addr2 = env->tlb_table[mmu_idx][index2].addr_write;
if (page2 != (tlb_addr2 & (TARGET_PAGE_MASK | TLB_INVALID_MASK))
&& !VICTIM_TLB_HIT(addr_write, page2)) {
tlb_fill(ENV_GET_CPU(env), page2, DATA_SIZE, MMU_DATA_STORE,
mmu_idx, retaddr);
}
/* XXX: not efficient, but simple */
/* This loop must go in the forward direction to avoid issues
with self-modifying code. */
for (i = 0; i < DATA_SIZE; ++i) {
/* Big-endian extract. */
uint8_t val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8));
glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
oi, retaddr);
}
return;
}
haddr = addr + env->tlb_table[mmu_idx][index].addend;
glue(glue(st, SUFFIX), _be_p)((uint8_t *)haddr, val);
}
#endif /* DATA_SIZE > 1 */
#endif /* !defined(SOFTMMU_CODE_ACCESS) */
#undef READ_ACCESS_TYPE
#undef DATA_TYPE
#undef SUFFIX
#undef LSUFFIX
#undef DATA_SIZE
#undef ADDR_READ
#undef WORD_TYPE
#undef SDATA_TYPE
#undef USUFFIX
#undef SSUFFIX
#undef BSWAP
#undef helper_le_ld_name
#undef helper_be_ld_name
#undef helper_le_lds_name
#undef helper_be_lds_name
#undef helper_le_st_name
#undef helper_be_st_name


@@ -1,92 +0,0 @@
/*
* QEMU System Emulator, accelerator interfaces
*
* Copyright (c) 2003-2008 Fabrice Bellard
* Copyright (c) 2014 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "sysemu/accel.h"
#include "sysemu/sysemu.h"
#include "qom/object.h"
#include "qemu-common.h"
#include "qom/cpu.h"
#include "sysemu/cpus.h"
#include "qemu/main-loop.h"
unsigned long tcg_tb_size;
#ifndef CONFIG_USER_ONLY
/* mask must never be zero, except for A20 change call */
static void tcg_handle_interrupt(CPUState *cpu, int mask)
{
int old_mask;
g_assert(qemu_mutex_iothread_locked());
old_mask = cpu->interrupt_request;
cpu->interrupt_request |= mask;
/*
* If called from iothread context, wake the target cpu in
* case its halted.
*/
if (!qemu_cpu_is_self(cpu)) {
qemu_cpu_kick(cpu);
} else {
cpu->icount_decr.u16.high = -1;
if (use_icount &&
!cpu->can_do_io
&& (mask & ~old_mask) != 0) {
cpu_abort(cpu, "Raised interrupt while not in I/O function");
}
}
}
#endif
static int tcg_init(MachineState *ms)
{
tcg_exec_init(tcg_tb_size * 1024 * 1024);
cpu_interrupt_handler = tcg_handle_interrupt;
return 0;
}
static void tcg_accel_class_init(ObjectClass *oc, void *data)
{
AccelClass *ac = ACCEL_CLASS(oc);
ac->name = "tcg";
ac->init_machine = tcg_init;
ac->allowed = &tcg_allowed;
}
#define TYPE_TCG_ACCEL ACCEL_CLASS_NAME("tcg")
static const TypeInfo tcg_accel_type = {
.name = TYPE_TCG_ACCEL,
.parent = TYPE_ACCEL,
.class_init = tcg_accel_class_init,
};
static void register_accel_types(void)
{
type_register_static(&tcg_accel_type);
}
type_init(register_accel_types);


@@ -1,997 +0,0 @@
/*
* Generic vectorized operation runtime
*
* Copyright (c) 2018 Linaro
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "qemu/host-utils.h"
#include "cpu.h"
#include "exec/helper-proto.h"
#include "tcg-gvec-desc.h"
/* Virtually all hosts support 16-byte vectors. Those that don't can emulate
* them via GCC's generic vector extension. This turns out to be simpler and
* more reliable than getting the compiler to autovectorize.
*
* In tcg-op-gvec.c, we asserted that both the size and alignment of the data
* are multiples of 16.
*
* When the compiler does not support all of the operations we require, the
* loops are written so that we can always fall back on the base types.
*/
#ifdef CONFIG_VECTOR16
typedef uint8_t vec8 __attribute__((vector_size(16)));
typedef uint16_t vec16 __attribute__((vector_size(16)));
typedef uint32_t vec32 __attribute__((vector_size(16)));
typedef uint64_t vec64 __attribute__((vector_size(16)));
typedef int8_t svec8 __attribute__((vector_size(16)));
typedef int16_t svec16 __attribute__((vector_size(16)));
typedef int32_t svec32 __attribute__((vector_size(16)));
typedef int64_t svec64 __attribute__((vector_size(16)));
#define DUP16(X) { X, X, X, X, X, X, X, X, X, X, X, X, X, X, X, X }
#define DUP8(X) { X, X, X, X, X, X, X, X }
#define DUP4(X) { X, X, X, X }
#define DUP2(X) { X, X }
#else
typedef uint8_t vec8;
typedef uint16_t vec16;
typedef uint32_t vec32;
typedef uint64_t vec64;
typedef int8_t svec8;
typedef int16_t svec16;
typedef int32_t svec32;
typedef int64_t svec64;
#define DUP16(X) X
#define DUP8(X) X
#define DUP4(X) X
#define DUP2(X) X
#endif /* CONFIG_VECTOR16 */
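/*
 * Example (illustrative only): with CONFIG_VECTOR16, "(vec8)DUP16(x)"
 * builds a compound literal with the byte x replicated into all 16 lanes,
 * so the element-wise loops below compile to one 16-byte vector operation
 * per iteration where the host supports it.  Without CONFIG_VECTOR16 the
 * vecN types degrade to plain integers and DUPn(X) to X, and the very
 * same loops simply step one element at a time.
 */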
static inline void clear_high(void *d, intptr_t oprsz, uint32_t desc)
{
intptr_t maxsz = simd_maxsz(desc);
intptr_t i;
if (unlikely(maxsz > oprsz)) {
for (i = oprsz; i < maxsz; i += sizeof(uint64_t)) {
*(uint64_t *)(d + i) = 0;
}
}
}
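/*
 * Example: for a call with oprsz = 16 and maxsz = 32, a helper operates on
 * the first 16 bytes and clear_high() then zeroes bytes 16..31, matching
 * the gvec convention that the unused tail of an over-sized destination is
 * cleared rather than left untouched.
 */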
void HELPER(gvec_add8)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec8)) {
*(vec8 *)(d + i) = *(vec8 *)(a + i) + *(vec8 *)(b + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_add16)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec16)) {
*(vec16 *)(d + i) = *(vec16 *)(a + i) + *(vec16 *)(b + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_add32)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec32)) {
*(vec32 *)(d + i) = *(vec32 *)(a + i) + *(vec32 *)(b + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_add64)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(vec64 *)(d + i) = *(vec64 *)(a + i) + *(vec64 *)(b + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_adds8)(void *d, void *a, uint64_t b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
vec8 vecb = (vec8)DUP16(b);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec8)) {
*(vec8 *)(d + i) = *(vec8 *)(a + i) + vecb;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_adds16)(void *d, void *a, uint64_t b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
vec16 vecb = (vec16)DUP8(b);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec16)) {
*(vec16 *)(d + i) = *(vec16 *)(a + i) + vecb;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_adds32)(void *d, void *a, uint64_t b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
vec32 vecb = (vec32)DUP4(b);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec32)) {
*(vec32 *)(d + i) = *(vec32 *)(a + i) + vecb;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_adds64)(void *d, void *a, uint64_t b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
vec64 vecb = (vec64)DUP2(b);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(vec64 *)(d + i) = *(vec64 *)(a + i) + vecb;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_sub8)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec8)) {
*(vec8 *)(d + i) = *(vec8 *)(a + i) - *(vec8 *)(b + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_sub16)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec16)) {
*(vec16 *)(d + i) = *(vec16 *)(a + i) - *(vec16 *)(b + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_sub32)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec32)) {
*(vec32 *)(d + i) = *(vec32 *)(a + i) - *(vec32 *)(b + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_sub64)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(vec64 *)(d + i) = *(vec64 *)(a + i) - *(vec64 *)(b + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_subs8)(void *d, void *a, uint64_t b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
vec8 vecb = (vec8)DUP16(b);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec8)) {
*(vec8 *)(d + i) = *(vec8 *)(a + i) - vecb;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_subs16)(void *d, void *a, uint64_t b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
vec16 vecb = (vec16)DUP8(b);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec16)) {
*(vec16 *)(d + i) = *(vec16 *)(a + i) - vecb;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_subs32)(void *d, void *a, uint64_t b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
vec32 vecb = (vec32)DUP4(b);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec32)) {
*(vec32 *)(d + i) = *(vec32 *)(a + i) - vecb;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_subs64)(void *d, void *a, uint64_t b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
vec64 vecb = (vec64)DUP2(b);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(vec64 *)(d + i) = *(vec64 *)(a + i) - vecb;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_mul8)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec8)) {
*(vec8 *)(d + i) = *(vec8 *)(a + i) * *(vec8 *)(b + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_mul16)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec16)) {
*(vec16 *)(d + i) = *(vec16 *)(a + i) * *(vec16 *)(b + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_mul32)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec32)) {
*(vec32 *)(d + i) = *(vec32 *)(a + i) * *(vec32 *)(b + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_mul64)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(vec64 *)(d + i) = *(vec64 *)(a + i) * *(vec64 *)(b + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_muls8)(void *d, void *a, uint64_t b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
vec8 vecb = (vec8)DUP16(b);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec8)) {
*(vec8 *)(d + i) = *(vec8 *)(a + i) * vecb;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_muls16)(void *d, void *a, uint64_t b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
vec16 vecb = (vec16)DUP8(b);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec16)) {
*(vec16 *)(d + i) = *(vec16 *)(a + i) * vecb;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_muls32)(void *d, void *a, uint64_t b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
vec32 vecb = (vec32)DUP4(b);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec32)) {
*(vec32 *)(d + i) = *(vec32 *)(a + i) * vecb;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_muls64)(void *d, void *a, uint64_t b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
vec64 vecb = (vec64)DUP2(b);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(vec64 *)(d + i) = *(vec64 *)(a + i) * vecb;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_neg8)(void *d, void *a, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec8)) {
*(vec8 *)(d + i) = -*(vec8 *)(a + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_neg16)(void *d, void *a, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec16)) {
*(vec16 *)(d + i) = -*(vec16 *)(a + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_neg32)(void *d, void *a, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec32)) {
*(vec32 *)(d + i) = -*(vec32 *)(a + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_neg64)(void *d, void *a, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(vec64 *)(d + i) = -*(vec64 *)(a + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_mov)(void *d, void *a, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
memcpy(d, a, oprsz);
clear_high(d, oprsz, desc);
}
void HELPER(gvec_dup64)(void *d, uint32_t desc, uint64_t c)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
if (c == 0) {
oprsz = 0;
} else {
for (i = 0; i < oprsz; i += sizeof(uint64_t)) {
*(uint64_t *)(d + i) = c;
}
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_dup32)(void *d, uint32_t desc, uint32_t c)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
if (c == 0) {
oprsz = 0;
} else {
for (i = 0; i < oprsz; i += sizeof(uint32_t)) {
*(uint32_t *)(d + i) = c;
}
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_dup16)(void *d, uint32_t desc, uint32_t c)
{
HELPER(gvec_dup32)(d, desc, 0x00010001 * (c & 0xffff));
}
void HELPER(gvec_dup8)(void *d, uint32_t desc, uint32_t c)
{
HELPER(gvec_dup32)(d, desc, 0x01010101 * (c & 0xff));
}
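/*
 * The multiplications above replicate the constant across the word:
 * 0x00010001 * 0xabcd == 0xabcdabcd and 0x01010101 * 0xab == 0xabababab,
 * so the 8- and 16-bit dups reuse gvec_dup32 and store a full 32-bit
 * pattern per iteration instead of looping over narrower elements.
 */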
void HELPER(gvec_not)(void *d, void *a, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(vec64 *)(d + i) = ~*(vec64 *)(a + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_and)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(vec64 *)(d + i) = *(vec64 *)(a + i) & *(vec64 *)(b + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_or)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(vec64 *)(d + i) = *(vec64 *)(a + i) | *(vec64 *)(b + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_xor)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(vec64 *)(d + i) = *(vec64 *)(a + i) ^ *(vec64 *)(b + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_andc)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(vec64 *)(d + i) = *(vec64 *)(a + i) &~ *(vec64 *)(b + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_orc)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(vec64 *)(d + i) = *(vec64 *)(a + i) |~ *(vec64 *)(b + i);
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_ands)(void *d, void *a, uint64_t b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
vec64 vecb = (vec64)DUP2(b);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(vec64 *)(d + i) = *(vec64 *)(a + i) & vecb;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_xors)(void *d, void *a, uint64_t b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
vec64 vecb = (vec64)DUP2(b);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(vec64 *)(d + i) = *(vec64 *)(a + i) ^ vecb;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_ors)(void *d, void *a, uint64_t b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
vec64 vecb = (vec64)DUP2(b);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(vec64 *)(d + i) = *(vec64 *)(a + i) | vecb;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_shl8i)(void *d, void *a, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
int shift = simd_data(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec8)) {
*(vec8 *)(d + i) = *(vec8 *)(a + i) << shift;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_shl16i)(void *d, void *a, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
int shift = simd_data(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec16)) {
*(vec16 *)(d + i) = *(vec16 *)(a + i) << shift;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_shl32i)(void *d, void *a, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
int shift = simd_data(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec32)) {
*(vec32 *)(d + i) = *(vec32 *)(a + i) << shift;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_shl64i)(void *d, void *a, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
int shift = simd_data(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(vec64 *)(d + i) = *(vec64 *)(a + i) << shift;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_shr8i)(void *d, void *a, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
int shift = simd_data(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec8)) {
*(vec8 *)(d + i) = *(vec8 *)(a + i) >> shift;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_shr16i)(void *d, void *a, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
int shift = simd_data(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec16)) {
*(vec16 *)(d + i) = *(vec16 *)(a + i) >> shift;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_shr32i)(void *d, void *a, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
int shift = simd_data(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec32)) {
*(vec32 *)(d + i) = *(vec32 *)(a + i) >> shift;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_shr64i)(void *d, void *a, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
int shift = simd_data(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(vec64 *)(d + i) = *(vec64 *)(a + i) >> shift;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_sar8i)(void *d, void *a, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
int shift = simd_data(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec8)) {
*(svec8 *)(d + i) = *(svec8 *)(a + i) >> shift;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_sar16i)(void *d, void *a, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
int shift = simd_data(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec16)) {
*(svec16 *)(d + i) = *(svec16 *)(a + i) >> shift;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_sar32i)(void *d, void *a, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
int shift = simd_data(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec32)) {
*(svec32 *)(d + i) = *(svec32 *)(a + i) >> shift;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_sar64i)(void *d, void *a, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
int shift = simd_data(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(vec64)) {
*(svec64 *)(d + i) = *(svec64 *)(a + i) >> shift;
}
clear_high(d, oprsz, desc);
}
/* If vectors are enabled, the compiler fills in -1 for true.
Otherwise, we must take care of this by hand. */
#ifdef CONFIG_VECTOR16
# define DO_CMP0(X) X
#else
# define DO_CMP0(X) -(X)
#endif
#define DO_CMP1(NAME, TYPE, OP) \
void HELPER(NAME)(void *d, void *a, void *b, uint32_t desc) \
{ \
intptr_t oprsz = simd_oprsz(desc); \
intptr_t i; \
for (i = 0; i < oprsz; i += sizeof(vec64)) { \
*(TYPE *)(d + i) = DO_CMP0(*(TYPE *)(a + i) OP *(TYPE *)(b + i)); \
} \
clear_high(d, oprsz, desc); \
}
#define DO_CMP2(SZ) \
DO_CMP1(gvec_eq##SZ, vec##SZ, ==) \
DO_CMP1(gvec_ne##SZ, vec##SZ, !=) \
DO_CMP1(gvec_lt##SZ, svec##SZ, <) \
DO_CMP1(gvec_le##SZ, svec##SZ, <=) \
DO_CMP1(gvec_ltu##SZ, vec##SZ, <) \
DO_CMP1(gvec_leu##SZ, vec##SZ, <=)
DO_CMP2(8)
DO_CMP2(16)
DO_CMP2(32)
DO_CMP2(64)
#undef DO_CMP0
#undef DO_CMP1
#undef DO_CMP2
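/*
 * For reference, DO_CMP2(8) above expands into six helpers -- gvec_eq8,
 * gvec_ne8, gvec_lt8, gvec_le8, gvec_ltu8 and gvec_leu8 -- with the signed
 * comparisons using the svec8 element type and the unsigned ones using
 * vec8.  Under CONFIG_VECTOR16 the compiler already produces -1 for true
 * lanes; otherwise DO_CMP0 negates the scalar 0/1 result so both
 * configurations yield the same all-ones/all-zeroes encoding.
 */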
void HELPER(gvec_ssadd8)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(int8_t)) {
int r = *(int8_t *)(a + i) + *(int8_t *)(b + i);
if (r > INT8_MAX) {
r = INT8_MAX;
} else if (r < INT8_MIN) {
r = INT8_MIN;
}
*(int8_t *)(d + i) = r;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_ssadd16)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(int16_t)) {
int r = *(int16_t *)(a + i) + *(int16_t *)(b + i);
if (r > INT16_MAX) {
r = INT16_MAX;
} else if (r < INT16_MIN) {
r = INT16_MIN;
}
*(int16_t *)(d + i) = r;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_ssadd32)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(int32_t)) {
int32_t ai = *(int32_t *)(a + i);
int32_t bi = *(int32_t *)(b + i);
int32_t di = ai + bi;
if (((di ^ ai) &~ (ai ^ bi)) < 0) {
/* Signed overflow. */
di = (di < 0 ? INT32_MAX : INT32_MIN);
}
*(int32_t *)(d + i) = di;
}
clear_high(d, oprsz, desc);
}
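/*
 * The test ((di ^ ai) & ~(ai ^ bi)) < 0 is the branch-free check for
 * signed addition overflow: (ai ^ bi) has its sign bit clear only when
 * both operands share a sign, and in that case a sign flip in the sum
 * (di ^ ai negative) can only mean it wrapped.  E.g. INT32_MAX + 1 gives
 * di = INT32_MIN, so di < 0 and the result is clamped to INT32_MAX.
 * The sssub* helpers below use (di ^ ai) & (ai ^ bi) instead, since
 * subtraction can only overflow when the operands' signs differ.
 */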
void HELPER(gvec_ssadd64)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(int64_t)) {
int64_t ai = *(int64_t *)(a + i);
int64_t bi = *(int64_t *)(b + i);
int64_t di = ai + bi;
if (((di ^ ai) &~ (ai ^ bi)) < 0) {
/* Signed overflow. */
di = (di < 0 ? INT64_MAX : INT64_MIN);
}
*(int64_t *)(d + i) = di;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_sssub8)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(uint8_t)) {
int r = *(int8_t *)(a + i) - *(int8_t *)(b + i);
if (r > INT8_MAX) {
r = INT8_MAX;
} else if (r < INT8_MIN) {
r = INT8_MIN;
}
*(uint8_t *)(d + i) = r;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_sssub16)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(int16_t)) {
int r = *(int16_t *)(a + i) - *(int16_t *)(b + i);
if (r > INT16_MAX) {
r = INT16_MAX;
} else if (r < INT16_MIN) {
r = INT16_MIN;
}
*(int16_t *)(d + i) = r;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_sssub32)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(int32_t)) {
int32_t ai = *(int32_t *)(a + i);
int32_t bi = *(int32_t *)(b + i);
int32_t di = ai - bi;
if (((di ^ ai) & (ai ^ bi)) < 0) {
/* Signed overflow. */
di = (di < 0 ? INT32_MAX : INT32_MIN);
}
*(int32_t *)(d + i) = di;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_sssub64)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(int64_t)) {
int64_t ai = *(int64_t *)(a + i);
int64_t bi = *(int64_t *)(b + i);
int64_t di = ai - bi;
if (((di ^ ai) & (ai ^ bi)) < 0) {
/* Signed overflow. */
di = (di < 0 ? INT64_MAX : INT64_MIN);
}
*(int64_t *)(d + i) = di;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_usadd8)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(uint8_t)) {
unsigned r = *(uint8_t *)(a + i) + *(uint8_t *)(b + i);
if (r > UINT8_MAX) {
r = UINT8_MAX;
}
*(uint8_t *)(d + i) = r;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_usadd16)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(uint16_t)) {
unsigned r = *(uint16_t *)(a + i) + *(uint16_t *)(b + i);
if (r > UINT16_MAX) {
r = UINT16_MAX;
}
*(uint16_t *)(d + i) = r;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_usadd32)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(uint32_t)) {
uint32_t ai = *(uint32_t *)(a + i);
uint32_t bi = *(uint32_t *)(b + i);
uint32_t di = ai + bi;
if (di < ai) {
di = UINT32_MAX;
}
*(uint32_t *)(d + i) = di;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_usadd64)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(uint64_t)) {
uint64_t ai = *(uint64_t *)(a + i);
uint64_t bi = *(uint64_t *)(b + i);
uint64_t di = ai + bi;
if (di < ai) {
di = UINT64_MAX;
}
*(uint64_t *)(d + i) = di;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_ussub8)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(uint8_t)) {
int r = *(uint8_t *)(a + i) - *(uint8_t *)(b + i);
if (r < 0) {
r = 0;
}
*(uint8_t *)(d + i) = r;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_ussub16)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(uint16_t)) {
int r = *(uint16_t *)(a + i) - *(uint16_t *)(b + i);
if (r < 0) {
r = 0;
}
*(uint16_t *)(d + i) = r;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_ussub32)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(uint32_t)) {
uint32_t ai = *(uint32_t *)(a + i);
uint32_t bi = *(uint32_t *)(b + i);
uint32_t di = ai - bi;
if (ai < bi) {
di = 0;
}
*(uint32_t *)(d + i) = di;
}
clear_high(d, oprsz, desc);
}
void HELPER(gvec_ussub64)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);
intptr_t i;
for (i = 0; i < oprsz; i += sizeof(uint64_t)) {
uint64_t ai = *(uint64_t *)(a + i);
uint64_t bi = *(uint64_t *)(b + i);
uint64_t di = ai - bi;
if (ai < bi) {
di = 0;
}
*(uint64_t *)(d + i) = di;
}
clear_high(d, oprsz, desc);
}


@@ -1,169 +0,0 @@
/*
* Tiny Code Generator for QEMU
*
* Copyright (c) 2008 Fabrice Bellard
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu/host-utils.h"
#include "cpu.h"
#include "exec/helper-proto.h"
#include "exec/cpu_ldst.h"
#include "exec/exec-all.h"
#include "exec/tb-lookup.h"
#include "disas/disas.h"
#include "exec/log.h"
/* 32-bit helpers */
int32_t HELPER(div_i32)(int32_t arg1, int32_t arg2)
{
return arg1 / arg2;
}
int32_t HELPER(rem_i32)(int32_t arg1, int32_t arg2)
{
return arg1 % arg2;
}
uint32_t HELPER(divu_i32)(uint32_t arg1, uint32_t arg2)
{
return arg1 / arg2;
}
uint32_t HELPER(remu_i32)(uint32_t arg1, uint32_t arg2)
{
return arg1 % arg2;
}
/* 64-bit helpers */
uint64_t HELPER(shl_i64)(uint64_t arg1, uint64_t arg2)
{
return arg1 << arg2;
}
uint64_t HELPER(shr_i64)(uint64_t arg1, uint64_t arg2)
{
return arg1 >> arg2;
}
int64_t HELPER(sar_i64)(int64_t arg1, int64_t arg2)
{
return arg1 >> arg2;
}
int64_t HELPER(div_i64)(int64_t arg1, int64_t arg2)
{
return arg1 / arg2;
}
int64_t HELPER(rem_i64)(int64_t arg1, int64_t arg2)
{
return arg1 % arg2;
}
uint64_t HELPER(divu_i64)(uint64_t arg1, uint64_t arg2)
{
return arg1 / arg2;
}
uint64_t HELPER(remu_i64)(uint64_t arg1, uint64_t arg2)
{
return arg1 % arg2;
}
uint64_t HELPER(muluh_i64)(uint64_t arg1, uint64_t arg2)
{
uint64_t l, h;
mulu64(&l, &h, arg1, arg2);
return h;
}
int64_t HELPER(mulsh_i64)(int64_t arg1, int64_t arg2)
{
uint64_t l, h;
muls64(&l, &h, arg1, arg2);
return h;
}
uint32_t HELPER(clz_i32)(uint32_t arg, uint32_t zero_val)
{
return arg ? clz32(arg) : zero_val;
}
uint32_t HELPER(ctz_i32)(uint32_t arg, uint32_t zero_val)
{
return arg ? ctz32(arg) : zero_val;
}
uint64_t HELPER(clz_i64)(uint64_t arg, uint64_t zero_val)
{
return arg ? clz64(arg) : zero_val;
}
uint64_t HELPER(ctz_i64)(uint64_t arg, uint64_t zero_val)
{
return arg ? ctz64(arg) : zero_val;
}
uint32_t HELPER(clrsb_i32)(uint32_t arg)
{
return clrsb32(arg);
}
uint64_t HELPER(clrsb_i64)(uint64_t arg)
{
return clrsb64(arg);
}
uint32_t HELPER(ctpop_i32)(uint32_t arg)
{
return ctpop32(arg);
}
uint64_t HELPER(ctpop_i64)(uint64_t arg)
{
return ctpop64(arg);
}
void *HELPER(lookup_tb_ptr)(CPUArchState *env)
{
CPUState *cpu = ENV_GET_CPU(env);
TranslationBlock *tb;
target_ulong cs_base, pc;
uint32_t flags;
tb = tb_lookup__cpu_state(cpu, &pc, &cs_base, &flags, curr_cflags());
if (tb == NULL) {
return tcg_ctx->code_gen_epilogue;
}
qemu_log_mask_and_addr(CPU_LOG_EXEC, pc,
"Chain %d: %p ["
TARGET_FMT_lx "/" TARGET_FMT_lx "/%#x] %s\n",
cpu->cpu_index, tb->tc.ptr, cs_base, pc, flags,
lookup_symbol(pc));
return tb->tc.ptr;
}
void HELPER(exit_atomic)(CPUArchState *env)
{
cpu_loop_exit_atomic(ENV_GET_CPU(env), GETPC());
}


@@ -1,254 +0,0 @@
DEF_HELPER_FLAGS_2(div_i32, TCG_CALL_NO_RWG_SE, s32, s32, s32)
DEF_HELPER_FLAGS_2(rem_i32, TCG_CALL_NO_RWG_SE, s32, s32, s32)
DEF_HELPER_FLAGS_2(divu_i32, TCG_CALL_NO_RWG_SE, i32, i32, i32)
DEF_HELPER_FLAGS_2(remu_i32, TCG_CALL_NO_RWG_SE, i32, i32, i32)
DEF_HELPER_FLAGS_2(div_i64, TCG_CALL_NO_RWG_SE, s64, s64, s64)
DEF_HELPER_FLAGS_2(rem_i64, TCG_CALL_NO_RWG_SE, s64, s64, s64)
DEF_HELPER_FLAGS_2(divu_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(remu_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(shl_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(shr_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(sar_i64, TCG_CALL_NO_RWG_SE, s64, s64, s64)
DEF_HELPER_FLAGS_2(mulsh_i64, TCG_CALL_NO_RWG_SE, s64, s64, s64)
DEF_HELPER_FLAGS_2(muluh_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(clz_i32, TCG_CALL_NO_RWG_SE, i32, i32, i32)
DEF_HELPER_FLAGS_2(ctz_i32, TCG_CALL_NO_RWG_SE, i32, i32, i32)
DEF_HELPER_FLAGS_2(clz_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(ctz_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_1(clrsb_i32, TCG_CALL_NO_RWG_SE, i32, i32)
DEF_HELPER_FLAGS_1(clrsb_i64, TCG_CALL_NO_RWG_SE, i64, i64)
DEF_HELPER_FLAGS_1(ctpop_i32, TCG_CALL_NO_RWG_SE, i32, i32)
DEF_HELPER_FLAGS_1(ctpop_i64, TCG_CALL_NO_RWG_SE, i64, i64)
DEF_HELPER_FLAGS_1(lookup_tb_ptr, TCG_CALL_NO_WG_SE, ptr, env)
DEF_HELPER_FLAGS_1(exit_atomic, TCG_CALL_NO_WG, noreturn, env)
#ifdef CONFIG_SOFTMMU
DEF_HELPER_FLAGS_5(atomic_cmpxchgb, TCG_CALL_NO_WG,
i32, env, tl, i32, i32, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgw_be, TCG_CALL_NO_WG,
i32, env, tl, i32, i32, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgw_le, TCG_CALL_NO_WG,
i32, env, tl, i32, i32, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgl_be, TCG_CALL_NO_WG,
i32, env, tl, i32, i32, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgl_le, TCG_CALL_NO_WG,
i32, env, tl, i32, i32, i32)
#ifdef CONFIG_ATOMIC64
DEF_HELPER_FLAGS_5(atomic_cmpxchgq_be, TCG_CALL_NO_WG,
i64, env, tl, i64, i64, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgq_le, TCG_CALL_NO_WG,
i64, env, tl, i64, i64, i32)
#endif
#ifdef CONFIG_ATOMIC64
#define GEN_ATOMIC_HELPERS(NAME) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), b), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), w_le), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), w_be), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), l_le), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), l_be), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), q_le), \
TCG_CALL_NO_WG, i64, env, tl, i64, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), q_be), \
TCG_CALL_NO_WG, i64, env, tl, i64, i32)
#else
#define GEN_ATOMIC_HELPERS(NAME) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), b), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), w_le), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), w_be), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), l_le), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), l_be), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32)
#endif /* CONFIG_ATOMIC64 */
#else
DEF_HELPER_FLAGS_4(atomic_cmpxchgb, TCG_CALL_NO_WG, i32, env, tl, i32, i32)
DEF_HELPER_FLAGS_4(atomic_cmpxchgw_be, TCG_CALL_NO_WG, i32, env, tl, i32, i32)
DEF_HELPER_FLAGS_4(atomic_cmpxchgw_le, TCG_CALL_NO_WG, i32, env, tl, i32, i32)
DEF_HELPER_FLAGS_4(atomic_cmpxchgl_be, TCG_CALL_NO_WG, i32, env, tl, i32, i32)
DEF_HELPER_FLAGS_4(atomic_cmpxchgl_le, TCG_CALL_NO_WG, i32, env, tl, i32, i32)
#ifdef CONFIG_ATOMIC64
DEF_HELPER_FLAGS_4(atomic_cmpxchgq_be, TCG_CALL_NO_WG, i64, env, tl, i64, i64)
DEF_HELPER_FLAGS_4(atomic_cmpxchgq_le, TCG_CALL_NO_WG, i64, env, tl, i64, i64)
#endif
#ifdef CONFIG_ATOMIC64
#define GEN_ATOMIC_HELPERS(NAME) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), b), \
TCG_CALL_NO_WG, i32, env, tl, i32) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), w_le), \
TCG_CALL_NO_WG, i32, env, tl, i32) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), w_be), \
TCG_CALL_NO_WG, i32, env, tl, i32) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), l_le), \
TCG_CALL_NO_WG, i32, env, tl, i32) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), l_be), \
TCG_CALL_NO_WG, i32, env, tl, i32) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), q_le), \
TCG_CALL_NO_WG, i64, env, tl, i64) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), q_be), \
TCG_CALL_NO_WG, i64, env, tl, i64)
#else
#define GEN_ATOMIC_HELPERS(NAME) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), b), \
TCG_CALL_NO_WG, i32, env, tl, i32) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), w_le), \
TCG_CALL_NO_WG, i32, env, tl, i32) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), w_be), \
TCG_CALL_NO_WG, i32, env, tl, i32) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), l_le), \
TCG_CALL_NO_WG, i32, env, tl, i32) \
DEF_HELPER_FLAGS_3(glue(glue(atomic_, NAME), l_be), \
TCG_CALL_NO_WG, i32, env, tl, i32)
#endif /* CONFIG_ATOMIC64 */
#endif /* CONFIG_SOFTMMU */
GEN_ATOMIC_HELPERS(fetch_add)
GEN_ATOMIC_HELPERS(fetch_and)
GEN_ATOMIC_HELPERS(fetch_or)
GEN_ATOMIC_HELPERS(fetch_xor)
GEN_ATOMIC_HELPERS(add_fetch)
GEN_ATOMIC_HELPERS(and_fetch)
GEN_ATOMIC_HELPERS(or_fetch)
GEN_ATOMIC_HELPERS(xor_fetch)
GEN_ATOMIC_HELPERS(xchg)
#undef GEN_ATOMIC_HELPERS
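/*
 * Rough illustration (helper naming inferred from the glue() macros
 * above): in the softmmu + CONFIG_ATOMIC64 configuration,
 * GEN_ATOMIC_HELPERS(fetch_add) declares atomic_fetch_addb,
 * atomic_fetch_addw_le/_be, atomic_fetch_addl_le/_be and
 * atomic_fetch_addq_le/_be, each taking the CPU env, a target address,
 * the operand value and a final i32 descriptor argument.  The bodies are
 * not defined here; they come from the atomic_template.h expansions in
 * the per-mode sources.
 */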
DEF_HELPER_FLAGS_3(gvec_mov, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_dup8, TCG_CALL_NO_RWG, void, ptr, i32, i32)
DEF_HELPER_FLAGS_3(gvec_dup16, TCG_CALL_NO_RWG, void, ptr, i32, i32)
DEF_HELPER_FLAGS_3(gvec_dup32, TCG_CALL_NO_RWG, void, ptr, i32, i32)
DEF_HELPER_FLAGS_3(gvec_dup64, TCG_CALL_NO_RWG, void, ptr, i32, i64)
DEF_HELPER_FLAGS_4(gvec_add8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_add16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_add32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_add64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_adds8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_adds16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_adds32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_adds64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_sub8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sub16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sub32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sub64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_subs8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_subs16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_subs32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_subs64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_mul8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_mul16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_mul32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_mul64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_muls8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_muls16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_muls32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_muls64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_ssadd8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ssadd16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ssadd32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ssadd64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sssub8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sssub16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sssub32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sssub64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_usadd8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_usadd16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_usadd32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_usadd64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ussub8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ussub16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ussub32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ussub64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_neg8, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_neg16, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_neg32, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_neg64, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_not, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_and, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_or, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_xor, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_andc, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_orc, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ands, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_xors, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_ors, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_3(gvec_shl8i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shl16i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shl32i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shl64i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shr8i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shr16i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shr32i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shr64i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_sar8i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_sar16i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_sar32i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_sar64i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_eq8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_eq16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_eq32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_eq64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ne8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ne16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ne32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ne64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_lt8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_lt16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_lt32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_lt64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_le8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_le16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_le32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_le64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ltu8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ltu16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ltu32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ltu64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_leu8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_leu16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_leu32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_leu64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)


@@ -1,10 +0,0 @@
# Trace events for debugging and performance instrumentation
# TCG related tracing (mostly disabled by default)
# cpu-exec.c
disable exec_tb(void *tb, uintptr_t pc) "tb:%p pc=0x%"PRIxPTR
disable exec_tb_nocache(void *tb, uintptr_t pc) "tb:%p pc=0x%"PRIxPTR
disable exec_tb_exit(void *last_tb, unsigned int flags) "tb:%p flags=0x%x"
# translate-all.c
translate_block(void *tb, uintptr_t pc, uint8_t *tb_code) "tb:%p, pc:0x%"PRIxPTR", tb_code:%p"

File diff suppressed because it is too large.


@@ -1,36 +0,0 @@
/*
* Translated block handling
*
* Copyright (c) 2003 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#ifndef TRANSLATE_ALL_H
#define TRANSLATE_ALL_H
#include "exec/exec-all.h"
/* translate-all.c */
void tb_invalidate_phys_page_fast(tb_page_addr_t start, int len);
void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
int is_cpu_write_access);
void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end);
void tb_check_watchpoint(CPUState *cpu);
#ifdef CONFIG_USER_ONLY
int page_unprotect(target_ulong address, uintptr_t pc);
#endif
#endif /* TRANSLATE_ALL_H */


@@ -1,138 +0,0 @@
/*
* Generic intermediate code generation.
*
* Copyright (C) 2016-2017 Lluís Vilanova <vilanova@ac.upc.edu>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "qemu/error-report.h"
#include "cpu.h"
#include "tcg/tcg.h"
#include "tcg/tcg-op.h"
#include "exec/exec-all.h"
#include "exec/gen-icount.h"
#include "exec/log.h"
#include "exec/translator.h"
/* Pairs with tcg_clear_temp_count.
To be called by #TranslatorOps.{translate_insn,tb_stop} if
(1) the target is sufficiently clean to support reporting,
(2) as and when all temporaries are known to be consumed.
For most targets, (2) is at the end of translate_insn. */
void translator_loop_temp_check(DisasContextBase *db)
{
if (tcg_check_temp_count()) {
qemu_log("warning: TCG temporary leaks before "
TARGET_FMT_lx "\n", db->pc_next);
}
}
void translator_loop(const TranslatorOps *ops, DisasContextBase *db,
CPUState *cpu, TranslationBlock *tb)
{
int max_insns;
/* Initialize DisasContext */
db->tb = tb;
db->pc_first = tb->pc;
db->pc_next = db->pc_first;
db->is_jmp = DISAS_NEXT;
db->num_insns = 0;
db->singlestep_enabled = cpu->singlestep_enabled;
/* Instruction counting */
max_insns = tb_cflags(db->tb) & CF_COUNT_MASK;
if (max_insns == 0) {
max_insns = CF_COUNT_MASK;
}
if (max_insns > TCG_MAX_INSNS) {
max_insns = TCG_MAX_INSNS;
}
if (db->singlestep_enabled || singlestep) {
max_insns = 1;
}
max_insns = ops->init_disas_context(db, cpu, max_insns);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
/* Reset the temp count so that we can identify leaks */
tcg_clear_temp_count();
/* Start translating. */
gen_tb_start(db->tb);
ops->tb_start(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
while (true) {
db->num_insns++;
ops->insn_start(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
/* Pass breakpoint hits to target for further processing */
if (unlikely(!QTAILQ_EMPTY(&cpu->breakpoints))) {
CPUBreakpoint *bp;
QTAILQ_FOREACH(bp, &cpu->breakpoints, entry) {
if (bp->pc == db->pc_next) {
if (ops->breakpoint_check(db, cpu, bp)) {
break;
}
}
}
/* The breakpoint_check hook may use DISAS_TOO_MANY to indicate
that only one more instruction is to be executed. Otherwise
it should use DISAS_NORETURN when generating an exception,
but may use a DISAS_TARGET_* value for Something Else. */
if (db->is_jmp > DISAS_TOO_MANY) {
break;
}
}
/* Disassemble one instruction. The translate_insn hook should
update db->pc_next and db->is_jmp to indicate what should be
done next -- either exiting this loop or locating the start of
the next instruction. */
if (db->num_insns == max_insns && (tb_cflags(db->tb) & CF_LAST_IO)) {
/* Accept I/O on the last instruction. */
gen_io_start();
ops->translate_insn(db, cpu);
gen_io_end();
} else {
ops->translate_insn(db, cpu);
}
/* Stop translation if translate_insn so indicated. */
if (db->is_jmp != DISAS_NEXT) {
break;
}
/* Stop translation if the output buffer is full,
or we have executed all of the allowed instructions. */
if (tcg_op_buf_full() || db->num_insns >= max_insns) {
db->is_jmp = DISAS_TOO_MANY;
break;
}
}
/* Emit code to exit the TB, as indicated by db->is_jmp. */
ops->tb_stop(db, cpu);
gen_tb_end(db->tb, db->num_insns);
/* The disas_log hook may use these values rather than recompute. */
db->tb->size = db->pc_next - db->pc_first;
db->tb->icount = db->num_insns;
#ifdef DEBUG_DISAS
if (qemu_loglevel_mask(CPU_LOG_TB_IN_ASM)
&& qemu_log_in_addr_range(db->pc_first)) {
qemu_log_lock();
qemu_log("----------------\n");
ops->disas_log(db, cpu);
qemu_log("\n");
qemu_log_unlock();
}
#endif
}
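/*
 * Minimal sketch of how a target plugs into translator_loop(), using only
 * the hooks referenced above; the "foo_tr_*" names and the shape of the
 * target's DisasContext are placeholders, not part of this file:
 *
 *     static const TranslatorOps foo_tr_ops = {
 *         .init_disas_context = foo_tr_init_disas_context,
 *         .tb_start           = foo_tr_tb_start,
 *         .insn_start         = foo_tr_insn_start,
 *         .breakpoint_check   = foo_tr_breakpoint_check,
 *         .translate_insn     = foo_tr_translate_insn,
 *         .tb_stop            = foo_tr_tb_stop,
 *         .disas_log          = foo_tr_disas_log,
 *     };
 *
 *     void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb)
 *     {
 *         DisasContext dc;
 *         translator_loop(&foo_tr_ops, &dc.base, cpu, tb);
 *     }
 */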


@@ -1,34 +0,0 @@
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "qom/cpu.h"
#include "sysemu/replay.h"
void cpu_resume(CPUState *cpu)
{
}
void qemu_init_vcpu(CPUState *cpu)
{
}
/* User mode emulation does not support record/replay yet. */
bool replay_exception(void)
{
return true;
}
bool replay_has_exception(void)
{
return false;
}
bool replay_interrupt(void)
{
return true;
}
bool replay_has_interrupt(void)
{
return false;
}


@@ -1,631 +0,0 @@
/*
* User emulator execution
*
* Copyright (c) 2003-2005 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "cpu.h"
#include "disas/disas.h"
#include "exec/exec-all.h"
#include "tcg.h"
#include "qemu/bitops.h"
#include "exec/cpu_ldst.h"
#include "translate-all.h"
#include "exec/helper-proto.h"
#undef EAX
#undef ECX
#undef EDX
#undef EBX
#undef ESP
#undef EBP
#undef ESI
#undef EDI
#undef EIP
#ifdef __linux__
#include <sys/ucontext.h>
#endif
__thread uintptr_t helper_retaddr;
//#define DEBUG_SIGNAL
/* exit the current TB from a signal handler. The host registers are
restored in a state compatible with the CPU emulator
*/
static void cpu_exit_tb_from_sighandler(CPUState *cpu, sigset_t *old_set)
{
/* XXX: use siglongjmp ? */
sigprocmask(SIG_SETMASK, old_set, NULL);
cpu_loop_exit_noexc(cpu);
}
/* 'pc' is the host PC at which the exception was raised. 'address' is
the effective address of the memory exception. 'is_write' is 1 if a
write caused the exception and 0 otherwise. 'old_set' is the
signal set which should be restored */
static inline int handle_cpu_signal(uintptr_t pc, siginfo_t *info,
int is_write, sigset_t *old_set)
{
CPUState *cpu = current_cpu;
CPUClass *cc;
int ret;
unsigned long address = (unsigned long)info->si_addr;
/* We must handle PC addresses from two different sources:
* a call return address and a signal frame address.
*
* Within cpu_restore_state_from_tb we assume the former and adjust
* the address by -GETPC_ADJ so that the address is within the call
* insn so that addr does not accidentally match the beginning of the
* next guest insn.
*
* However, when the PC comes from the signal frame, it points to
* the actual faulting host insn and not a call insn. Subtracting
* GETPC_ADJ in that case may accidentally match the previous guest insn.
*
* So for the latter case, adjust forward to compensate for what
* will be done later by cpu_restore_state_from_tb.
*/
if (helper_retaddr) {
pc = helper_retaddr;
} else {
pc += GETPC_ADJ;
}
/* For synchronous signals we expect to be coming from the vCPU
* thread (so current_cpu should be valid) and either from running
* code or during translation which can fault as we cross pages.
*
* If neither is true then something has gone wrong and we should
* abort rather than try and restart the vCPU execution.
*/
if (!cpu || !cpu->running) {
printf("qemu:%s received signal outside vCPU context @ pc=0x%"
PRIxPTR "\n", __func__, pc);
abort();
}
#if defined(DEBUG_SIGNAL)
printf("qemu: SIGSEGV pc=0x%08lx address=%08lx w=%d oldset=0x%08lx\n",
pc, address, is_write, *(unsigned long *)old_set);
#endif
/* XXX: locking issue */
/* Note that it is important that we don't call page_unprotect() unless
* this is really a "write to nonwriteable page" fault, because
* page_unprotect() assumes that if it is called for an access to
* a page that's writeable this means we had two threads racing and
* another thread got there first and already made the page writeable;
* so we will retry the access. If we were to call page_unprotect()
* for some other kind of fault that should really be passed to the
* guest, we'd end up in an infinite loop of retrying the faulting
* access.
*/
if (is_write && info->si_signo == SIGSEGV && info->si_code == SEGV_ACCERR &&
h2g_valid(address)) {
switch (page_unprotect(h2g(address), pc)) {
case 0:
/* Fault not caused by a page marked unwritable to protect
* cached translations, must be the guest binary's problem.
*/
break;
case 1:
/* Fault caused by protection of cached translation; TBs
* invalidated, so resume execution. Retain helper_retaddr
* for a possible second fault.
*/
return 1;
case 2:
/* Fault caused by protection of cached translation, and the
* currently executing TB was modified and must be exited
* immediately. Clear helper_retaddr for next execution.
*/
helper_retaddr = 0;
cpu_exit_tb_from_sighandler(cpu, old_set);
/* NORETURN */
default:
g_assert_not_reached();
}
}
/* Convert forcefully to guest address space, invalid addresses
are still valid segv ones */
address = h2g_nocheck(address);
cc = CPU_GET_CLASS(cpu);
/* see if it is an MMU fault */
g_assert(cc->handle_mmu_fault);
ret = cc->handle_mmu_fault(cpu, address, 0, is_write, MMU_USER_IDX);
if (ret == 0) {
/* The MMU fault was handled without causing real CPU fault.
* Retain helper_retaddr for a possible second fault.
*/
return 1;
}
/* All other paths lead to cpu_exit; clear helper_retaddr
* for next execution.
*/
helper_retaddr = 0;
if (ret < 0) {
return 0; /* not an MMU fault */
}
/* Now we have a real cpu fault. */
cpu_restore_state(cpu, pc);
sigprocmask(SIG_SETMASK, old_set, NULL);
cpu_loop_exit(cpu);
/* never comes here */
return 1;
}
#if defined(__i386__)
#if defined(__NetBSD__)
#include <ucontext.h>
#define EIP_sig(context) ((context)->uc_mcontext.__gregs[_REG_EIP])
#define TRAP_sig(context) ((context)->uc_mcontext.__gregs[_REG_TRAPNO])
#define ERROR_sig(context) ((context)->uc_mcontext.__gregs[_REG_ERR])
#define MASK_sig(context) ((context)->uc_sigmask)
#elif defined(__FreeBSD__) || defined(__DragonFly__)
#include <ucontext.h>
#define EIP_sig(context) (*((unsigned long *)&(context)->uc_mcontext.mc_eip))
#define TRAP_sig(context) ((context)->uc_mcontext.mc_trapno)
#define ERROR_sig(context) ((context)->uc_mcontext.mc_err)
#define MASK_sig(context) ((context)->uc_sigmask)
#elif defined(__OpenBSD__)
#define EIP_sig(context) ((context)->sc_eip)
#define TRAP_sig(context) ((context)->sc_trapno)
#define ERROR_sig(context) ((context)->sc_err)
#define MASK_sig(context) ((context)->sc_mask)
#else
#define EIP_sig(context) ((context)->uc_mcontext.gregs[REG_EIP])
#define TRAP_sig(context) ((context)->uc_mcontext.gregs[REG_TRAPNO])
#define ERROR_sig(context) ((context)->uc_mcontext.gregs[REG_ERR])
#define MASK_sig(context) ((context)->uc_sigmask)
#endif
int cpu_signal_handler(int host_signum, void *pinfo,
void *puc)
{
siginfo_t *info = pinfo;
#if defined(__NetBSD__) || defined(__FreeBSD__) || defined(__DragonFly__)
ucontext_t *uc = puc;
#elif defined(__OpenBSD__)
struct sigcontext *uc = puc;
#else
ucontext_t *uc = puc;
#endif
unsigned long pc;
int trapno;
#ifndef REG_EIP
/* for glibc 2.1 */
#define REG_EIP EIP
#define REG_ERR ERR
#define REG_TRAPNO TRAPNO
#endif
pc = EIP_sig(uc);
trapno = TRAP_sig(uc);
return handle_cpu_signal(pc, info,
trapno == 0xe ? (ERROR_sig(uc) >> 1) & 1 : 0,
&MASK_sig(uc));
}
#elif defined(__x86_64__)
#ifdef __NetBSD__
#define PC_sig(context) _UC_MACHINE_PC(context)
#define TRAP_sig(context) ((context)->uc_mcontext.__gregs[_REG_TRAPNO])
#define ERROR_sig(context) ((context)->uc_mcontext.__gregs[_REG_ERR])
#define MASK_sig(context) ((context)->uc_sigmask)
#elif defined(__OpenBSD__)
#define PC_sig(context) ((context)->sc_rip)
#define TRAP_sig(context) ((context)->sc_trapno)
#define ERROR_sig(context) ((context)->sc_err)
#define MASK_sig(context) ((context)->sc_mask)
#elif defined(__FreeBSD__) || defined(__DragonFly__)
#include <ucontext.h>
#define PC_sig(context) (*((unsigned long *)&(context)->uc_mcontext.mc_rip))
#define TRAP_sig(context) ((context)->uc_mcontext.mc_trapno)
#define ERROR_sig(context) ((context)->uc_mcontext.mc_err)
#define MASK_sig(context) ((context)->uc_sigmask)
#else
#define PC_sig(context) ((context)->uc_mcontext.gregs[REG_RIP])
#define TRAP_sig(context) ((context)->uc_mcontext.gregs[REG_TRAPNO])
#define ERROR_sig(context) ((context)->uc_mcontext.gregs[REG_ERR])
#define MASK_sig(context) ((context)->uc_sigmask)
#endif
int cpu_signal_handler(int host_signum, void *pinfo,
void *puc)
{
siginfo_t *info = pinfo;
unsigned long pc;
#if defined(__NetBSD__) || defined(__FreeBSD__) || defined(__DragonFly__)
ucontext_t *uc = puc;
#elif defined(__OpenBSD__)
struct sigcontext *uc = puc;
#else
ucontext_t *uc = puc;
#endif
pc = PC_sig(uc);
return handle_cpu_signal(pc, info,
TRAP_sig(uc) == 0xe ? (ERROR_sig(uc) >> 1) & 1 : 0,
&MASK_sig(uc));
}
#elif defined(_ARCH_PPC)
/***********************************************************************
* signal context platform-specific definitions
* From Wine
*/
#ifdef linux
/* All Registers access - only for local access */
#define REG_sig(reg_name, context) \
((context)->uc_mcontext.regs->reg_name)
/* Gpr Registers access */
#define GPR_sig(reg_num, context) REG_sig(gpr[reg_num], context)
/* Program counter */
#define IAR_sig(context) REG_sig(nip, context)
/* Machine State Register (Supervisor) */
#define MSR_sig(context) REG_sig(msr, context)
/* Count register */
#define CTR_sig(context) REG_sig(ctr, context)
/* User's integer exception register */
#define XER_sig(context) REG_sig(xer, context)
/* Link register */
#define LR_sig(context) REG_sig(link, context)
/* Condition register */
#define CR_sig(context) REG_sig(ccr, context)
/* Float Registers access */
#define FLOAT_sig(reg_num, context) \
(((double *)((char *)((context)->uc_mcontext.regs + 48 * 4)))[reg_num])
#define FPSCR_sig(context) \
(*(int *)((char *)((context)->uc_mcontext.regs + (48 + 32 * 2) * 4)))
/* Exception Registers access */
#define DAR_sig(context) REG_sig(dar, context)
#define DSISR_sig(context) REG_sig(dsisr, context)
#define TRAP_sig(context) REG_sig(trap, context)
#endif /* linux */
#if defined(__FreeBSD__) || defined(__FreeBSD_kernel__)
#include <ucontext.h>
#define IAR_sig(context) ((context)->uc_mcontext.mc_srr0)
#define MSR_sig(context) ((context)->uc_mcontext.mc_srr1)
#define CTR_sig(context) ((context)->uc_mcontext.mc_ctr)
#define XER_sig(context) ((context)->uc_mcontext.mc_xer)
#define LR_sig(context) ((context)->uc_mcontext.mc_lr)
#define CR_sig(context) ((context)->uc_mcontext.mc_cr)
/* Exception Registers access */
#define DAR_sig(context) ((context)->uc_mcontext.mc_dar)
#define DSISR_sig(context) ((context)->uc_mcontext.mc_dsisr)
#define TRAP_sig(context) ((context)->uc_mcontext.mc_exc)
#endif /* __FreeBSD__|| __FreeBSD_kernel__ */
int cpu_signal_handler(int host_signum, void *pinfo,
void *puc)
{
siginfo_t *info = pinfo;
#if defined(__FreeBSD__) || defined(__FreeBSD_kernel__)
ucontext_t *uc = puc;
#else
ucontext_t *uc = puc;
#endif
unsigned long pc;
int is_write;
pc = IAR_sig(uc);
is_write = 0;
#if 0
/* ppc 4xx case */
if (DSISR_sig(uc) & 0x00800000) {
is_write = 1;
}
#else
if (TRAP_sig(uc) != 0x400 && (DSISR_sig(uc) & 0x02000000)) {
is_write = 1;
}
#endif
return handle_cpu_signal(pc, info, is_write, &uc->uc_sigmask);
}
#elif defined(__alpha__)
int cpu_signal_handler(int host_signum, void *pinfo,
void *puc)
{
siginfo_t *info = pinfo;
ucontext_t *uc = puc;
uint32_t *pc = uc->uc_mcontext.sc_pc;
uint32_t insn = *pc;
int is_write = 0;
/* XXX: need kernel patch to get write flag faster */
switch (insn >> 26) {
case 0x0d: /* stw */
case 0x0e: /* stb */
case 0x0f: /* stq_u */
case 0x24: /* stf */
case 0x25: /* stg */
case 0x26: /* sts */
case 0x27: /* stt */
case 0x2c: /* stl */
case 0x2d: /* stq */
case 0x2e: /* stl_c */
case 0x2f: /* stq_c */
is_write = 1;
}
return handle_cpu_signal(pc, info, is_write, &uc->uc_sigmask);
}
#elif defined(__sparc__)
int cpu_signal_handler(int host_signum, void *pinfo,
void *puc)
{
siginfo_t *info = pinfo;
int is_write;
uint32_t insn;
#if !defined(__arch64__) || defined(CONFIG_SOLARIS)
uint32_t *regs = (uint32_t *)(info + 1);
void *sigmask = (regs + 20);
/* XXX: is there a standard glibc define ? */
unsigned long pc = regs[1];
#else
#ifdef __linux__
struct sigcontext *sc = puc;
unsigned long pc = sc->sigc_regs.tpc;
void *sigmask = (void *)sc->sigc_mask;
#elif defined(__OpenBSD__)
struct sigcontext *uc = puc;
unsigned long pc = uc->sc_pc;
void *sigmask = (void *)(long)uc->sc_mask;
#elif defined(__NetBSD__)
ucontext_t *uc = puc;
unsigned long pc = _UC_MACHINE_PC(uc);
void *sigmask = (void *)&uc->uc_sigmask;
#endif
#endif
/* XXX: need kernel patch to get write flag faster */
is_write = 0;
insn = *(uint32_t *)pc;
if ((insn >> 30) == 3) {
switch ((insn >> 19) & 0x3f) {
case 0x05: /* stb */
case 0x15: /* stba */
case 0x06: /* sth */
case 0x16: /* stha */
case 0x04: /* st */
case 0x14: /* sta */
case 0x07: /* std */
case 0x17: /* stda */
case 0x0e: /* stx */
case 0x1e: /* stxa */
case 0x24: /* stf */
case 0x34: /* stfa */
case 0x27: /* stdf */
case 0x37: /* stdfa */
case 0x26: /* stqf */
case 0x36: /* stqfa */
case 0x25: /* stfsr */
case 0x3c: /* casa */
case 0x3e: /* casxa */
is_write = 1;
break;
}
}
return handle_cpu_signal(pc, info, is_write, sigmask);
}
#elif defined(__arm__)
#if defined(__NetBSD__)
#include <ucontext.h>
#endif
int cpu_signal_handler(int host_signum, void *pinfo,
void *puc)
{
siginfo_t *info = pinfo;
#if defined(__NetBSD__)
ucontext_t *uc = puc;
#else
ucontext_t *uc = puc;
#endif
unsigned long pc;
int is_write;
#if defined(__NetBSD__)
pc = uc->uc_mcontext.__gregs[_REG_R15];
#elif defined(__GLIBC__) && (__GLIBC__ < 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ <= 3))
pc = uc->uc_mcontext.gregs[R15];
#else
pc = uc->uc_mcontext.arm_pc;
#endif
/* error_code is the FSR value, in which bit 11 is WnR (assuming a v6 or
* later processor; on v5 we will always report this as a read).
*/
is_write = extract32(uc->uc_mcontext.error_code, 11, 1);
return handle_cpu_signal(pc, info, is_write, &uc->uc_sigmask);
}
#elif defined(__aarch64__)
int cpu_signal_handler(int host_signum, void *pinfo, void *puc)
{
siginfo_t *info = pinfo;
ucontext_t *uc = puc;
uintptr_t pc = uc->uc_mcontext.pc;
uint32_t insn = *(uint32_t *)pc;
bool is_write;
/* XXX: need kernel patch to get write flag faster. */
is_write = ( (insn & 0xbfff0000) == 0x0c000000 /* C3.3.1 */
|| (insn & 0xbfe00000) == 0x0c800000 /* C3.3.2 */
|| (insn & 0xbfdf0000) == 0x0d000000 /* C3.3.3 */
|| (insn & 0xbfc00000) == 0x0d800000 /* C3.3.4 */
|| (insn & 0x3f400000) == 0x08000000 /* C3.3.6 */
|| (insn & 0x3bc00000) == 0x39000000 /* C3.3.13 */
|| (insn & 0x3fc00000) == 0x3d800000 /* ... 128bit */
/* Ignore bits 10, 11 & 21, controlling indexing. */
|| (insn & 0x3bc00000) == 0x38000000 /* C3.3.8-12 */
|| (insn & 0x3fe00000) == 0x3c800000 /* ... 128bit */
/* Ignore bits 23 & 24, controlling indexing. */
|| (insn & 0x3a400000) == 0x28000000); /* C3.3.7,14-16 */
return handle_cpu_signal(pc, info, is_write, &uc->uc_sigmask);
}
#elif defined(__s390__)
int cpu_signal_handler(int host_signum, void *pinfo,
void *puc)
{
siginfo_t *info = pinfo;
ucontext_t *uc = puc;
unsigned long pc;
uint16_t *pinsn;
int is_write = 0;
pc = uc->uc_mcontext.psw.addr;
/* ??? On linux, the non-rt signal handler has 4 (!) arguments instead
of the normal 2 arguments. The 3rd argument contains the "int_code"
from the hardware which does in fact contain the is_write value.
The rt signal handler, as far as I can tell, does not give this value
at all. Not that we could get to it from here even if it were. */
/* ??? This is not even close to complete, since it ignores all
of the read-modify-write instructions. */
pinsn = (uint16_t *)pc;
switch (pinsn[0] >> 8) {
case 0x50: /* ST */
case 0x42: /* STC */
case 0x40: /* STH */
is_write = 1;
break;
case 0xc4: /* RIL format insns */
switch (pinsn[0] & 0xf) {
case 0xf: /* STRL */
case 0xb: /* STGRL */
case 0x7: /* STHRL */
is_write = 1;
}
break;
case 0xe3: /* RXY format insns */
switch (pinsn[2] & 0xff) {
case 0x50: /* STY */
case 0x24: /* STG */
case 0x72: /* STCY */
case 0x70: /* STHY */
case 0x8e: /* STPQ */
case 0x3f: /* STRVH */
case 0x3e: /* STRV */
case 0x2f: /* STRVG */
is_write = 1;
}
break;
}
return handle_cpu_signal(pc, info, is_write, &uc->uc_sigmask);
}
#elif defined(__mips__)
int cpu_signal_handler(int host_signum, void *pinfo,
void *puc)
{
siginfo_t *info = pinfo;
ucontext_t *uc = puc;
greg_t pc = uc->uc_mcontext.pc;
int is_write;
/* XXX: compute is_write */
is_write = 0;
return handle_cpu_signal(pc, info, is_write, &uc->uc_sigmask);
}
#else
#error host CPU specific signal handler needed
#endif
/* The softmmu versions of these helpers are in cputlb.c. */
/* Do not allow unaligned operations to proceed. Return the host address. */
static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
int size, uintptr_t retaddr)
{
/* Enforce qemu required alignment. */
if (unlikely(addr & (size - 1))) {
cpu_loop_exit_atomic(ENV_GET_CPU(env), retaddr);
}
helper_retaddr = retaddr;
return g2h(addr);
}
/* Macro to call the above, with local variables from the use context. */
#define ATOMIC_MMU_DECLS do {} while (0)
#define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, DATA_SIZE, GETPC())
#define ATOMIC_MMU_CLEANUP do { helper_retaddr = 0; } while (0)
#define ATOMIC_NAME(X) HELPER(glue(glue(atomic_ ## X, SUFFIX), END))
#define EXTRA_ARGS
#define DATA_SIZE 1
#include "atomic_template.h"
#define DATA_SIZE 2
#include "atomic_template.h"
#define DATA_SIZE 4
#include "atomic_template.h"
#ifdef CONFIG_ATOMIC64
#define DATA_SIZE 8
#include "atomic_template.h"
#endif
/* The following is only callable from other helpers, and matches up
with the softmmu version. */
#ifdef CONFIG_ATOMIC128
#undef EXTRA_ARGS
#undef ATOMIC_NAME
#undef ATOMIC_MMU_LOOKUP
#define EXTRA_ARGS , TCGMemOpIdx oi, uintptr_t retaddr
#define ATOMIC_NAME(X) \
HELPER(glue(glue(glue(atomic_ ## X, SUFFIX), END), _mmu))
#define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, DATA_SIZE, retaddr)
#define DATA_SIZE 16
#include "atomic_template.h"
#endif /* CONFIG_ATOMIC128 */
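The DATA_SIZE/atomic_template.h includes above generate one set of atomic helpers per operand width. As a rough, self-contained illustration of that define-include-undef pattern (this is not QEMU's actual template; every name below is invented for the example), the same effect can be produced with a token-pasting macro:

#include <stdint.h>
#include <stdio.h>

#define GLUE_(a, b) a##b
#define GLUE(a, b)  GLUE_(a, b)

/* "Template": expands to a fetch-and-add helper for the given type/width. */
#define DEFINE_ATOMIC_ADD(T, BITS)                               \
    static T GLUE(atomic_fetch_add_, BITS)(T *p, T v)            \
    {                                                            \
        return __atomic_fetch_add(p, v, __ATOMIC_SEQ_CST);       \
    }

DEFINE_ATOMIC_ADD(uint8_t, 8)    /* defines atomic_fetch_add_8()  */
DEFINE_ATOMIC_ADD(uint16_t, 16)  /* defines atomic_fetch_add_16() */
DEFINE_ATOMIC_ADD(uint32_t, 32)  /* defines atomic_fetch_add_32() */

int main(void)
{
    uint8_t a = 1; uint16_t b = 2; uint32_t c = 40;
    atomic_fetch_add_8(&a, 1);
    atomic_fetch_add_16(&b, 2);
    atomic_fetch_add_32(&c, 2);
    printf("%u %u %u\n", a, b, c);   /* prints 2 4 42 */
    return 0;
}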

aio-posix.c (new file)

@@ -0,0 +1,274 @@
/*
* QEMU aio implementation
*
* Copyright IBM, Corp. 2008
*
* Authors:
* Anthony Liguori <aliguori@us.ibm.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "qemu-common.h"
#include "block/block.h"
#include "qemu/queue.h"
#include "qemu/sockets.h"
struct AioHandler
{
GPollFD pfd;
IOHandler *io_read;
IOHandler *io_write;
int deleted;
int pollfds_idx;
void *opaque;
QLIST_ENTRY(AioHandler) node;
};
static AioHandler *find_aio_handler(AioContext *ctx, int fd)
{
AioHandler *node;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->pfd.fd == fd)
if (!node->deleted)
return node;
}
return NULL;
}
void aio_set_fd_handler(AioContext *ctx,
int fd,
IOHandler *io_read,
IOHandler *io_write,
void *opaque)
{
AioHandler *node;
node = find_aio_handler(ctx, fd);
/* Are we deleting the fd handler? */
if (!io_read && !io_write) {
if (node) {
g_source_remove_poll(&ctx->source, &node->pfd);
/* If the lock is held, just mark the node as deleted */
if (ctx->walking_handlers) {
node->deleted = 1;
node->pfd.revents = 0;
} else {
/* Otherwise, delete it for real. We can't just mark it as
* deleted because deleted nodes are only cleaned up after
* releasing the walking_handlers lock.
*/
QLIST_REMOVE(node, node);
g_free(node);
}
}
} else {
if (node == NULL) {
/* Alloc and insert if it's not already there */
node = g_malloc0(sizeof(AioHandler));
node->pfd.fd = fd;
QLIST_INSERT_HEAD(&ctx->aio_handlers, node, node);
g_source_add_poll(&ctx->source, &node->pfd);
}
/* Update handler with latest information */
node->io_read = io_read;
node->io_write = io_write;
node->opaque = opaque;
node->pollfds_idx = -1;
node->pfd.events = (io_read ? G_IO_IN | G_IO_HUP | G_IO_ERR : 0);
node->pfd.events |= (io_write ? G_IO_OUT | G_IO_ERR : 0);
}
aio_notify(ctx);
}
void aio_set_event_notifier(AioContext *ctx,
EventNotifier *notifier,
EventNotifierHandler *io_read)
{
aio_set_fd_handler(ctx, event_notifier_get_fd(notifier),
(IOHandler *)io_read, NULL, notifier);
}
bool aio_pending(AioContext *ctx)
{
AioHandler *node;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
int revents;
revents = node->pfd.revents & node->pfd.events;
if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read) {
return true;
}
if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write) {
return true;
}
}
return false;
}
static bool aio_dispatch(AioContext *ctx)
{
AioHandler *node;
bool progress = false;
/*
* We have to walk very carefully in case aio_set_fd_handler is
* called while we're walking.
*/
node = QLIST_FIRST(&ctx->aio_handlers);
while (node) {
AioHandler *tmp;
int revents;
ctx->walking_handlers++;
revents = node->pfd.revents & node->pfd.events;
node->pfd.revents = 0;
if (!node->deleted &&
(revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
node->io_read) {
node->io_read(node->opaque);
/* aio_notify() does not count as progress */
if (node->opaque != &ctx->notifier) {
progress = true;
}
}
if (!node->deleted &&
(revents & (G_IO_OUT | G_IO_ERR)) &&
node->io_write) {
node->io_write(node->opaque);
progress = true;
}
tmp = node;
node = QLIST_NEXT(node, node);
ctx->walking_handlers--;
if (!ctx->walking_handlers && tmp->deleted) {
QLIST_REMOVE(tmp, node);
g_free(tmp);
}
}
/* Run our timers */
progress |= timerlistgroup_run_timers(&ctx->tlg);
return progress;
}
bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
bool was_dispatching;
int ret;
bool progress;
was_dispatching = ctx->dispatching;
progress = false;
/* aio_notify can avoid the expensive event_notifier_set if
* everything (file descriptors, bottom halves, timers) will
* be re-evaluated before the next blocking poll(). This happens
* in two cases:
*
* 1) when aio_poll is called with blocking == false
*
* 2) when we are called after poll(). If we are called before
* poll(), bottom halves will not be re-evaluated and we need
* aio_notify() if blocking == true.
*
* The first aio_dispatch() only does something when AioContext is
* running as a GSource, and in that case aio_poll is used only
* with blocking == false, so this optimization is already quite
* effective. However, the code is ugly and should be restructured
* to have a single aio_dispatch() call. To do this, we need to
* reorganize aio_poll into a prepare/poll/dispatch model like
* glib's.
*
* If we're in a nested event loop, ctx->dispatching might be true.
* In that case we can restore it just before returning, but we
* have to clear it now.
*/
aio_set_dispatching(ctx, !blocking);
/*
* If there are callbacks left that have been queued, we need to call them.
* Do not call select in this case, because it is possible that the caller
* does not need a complete flush (as is the case for aio_poll loops).
*/
if (aio_bh_poll(ctx)) {
blocking = false;
progress = true;
}
/* Re-evaluate condition (1) above. */
aio_set_dispatching(ctx, !blocking);
if (aio_dispatch(ctx)) {
progress = true;
}
if (progress && !blocking) {
goto out;
}
ctx->walking_handlers++;
g_array_set_size(ctx->pollfds, 0);
/* fill pollfds */
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
node->pollfds_idx = -1;
if (!node->deleted && node->pfd.events) {
GPollFD pfd = {
.fd = node->pfd.fd,
.events = node->pfd.events,
};
node->pollfds_idx = ctx->pollfds->len;
g_array_append_val(ctx->pollfds, pfd);
}
}
ctx->walking_handlers--;
/* wait until next event */
ret = qemu_poll_ns((GPollFD *)ctx->pollfds->data,
ctx->pollfds->len,
blocking ? timerlistgroup_deadline_ns(&ctx->tlg) : 0);
/* if we have any readable fds, dispatch event */
if (ret > 0) {
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->pollfds_idx != -1) {
GPollFD *pfd = &g_array_index(ctx->pollfds, GPollFD,
node->pollfds_idx);
node->pfd.revents = pfd->revents;
}
}
}
/* Run dispatch even if there were no readable fds to run timers */
aio_set_dispatching(ctx, true);
if (aio_dispatch(ctx)) {
progress = true;
}
out:
aio_set_dispatching(ctx, was_dispatching);
return progress;
}
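A hedged usage sketch for the fd-handler API above, based only on the signatures visible in this file (aio_context_new() and aio_context_unref() appear later in async.c): register a read handler for one end of a pipe, make it readable, and let a single blocking aio_poll() dispatch it. Error handling and QEMU's usual main-loop setup are omitted; this is not code from the QEMU tree.

#include <unistd.h>
#include "block/aio.h"

static void read_ready(void *opaque)
{
    int *fd = opaque;
    char buf[64];
    read(*fd, buf, sizeof(buf));   /* consume whatever became readable */
}

static void poll_pipe_once(void)
{
    int fds[2];
    AioContext *ctx = aio_context_new();            /* defined in async.c */

    pipe(fds);
    aio_set_fd_handler(ctx, fds[0], read_ready, NULL, &fds[0]);

    write(fds[1], "x", 1);                          /* make fds[0] readable */
    aio_poll(ctx, true);        /* blocks until fds[0] is ready, dispatches */

    aio_set_fd_handler(ctx, fds[0], NULL, NULL, NULL);   /* unregister */
    aio_context_unref(ctx);
    close(fds[0]);
    close(fds[1]);
}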

aio-win32.c (new file)

@@ -0,0 +1,223 @@
/*
* QEMU aio implementation
*
* Copyright IBM Corp., 2008
* Copyright Red Hat Inc., 2012
*
* Authors:
* Anthony Liguori <aliguori@us.ibm.com>
* Paolo Bonzini <pbonzini@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "qemu-common.h"
#include "block/block.h"
#include "qemu/queue.h"
#include "qemu/sockets.h"
struct AioHandler {
EventNotifier *e;
EventNotifierHandler *io_notify;
GPollFD pfd;
int deleted;
QLIST_ENTRY(AioHandler) node;
};
void aio_set_event_notifier(AioContext *ctx,
EventNotifier *e,
EventNotifierHandler *io_notify)
{
AioHandler *node;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->e == e && !node->deleted) {
break;
}
}
/* Are we deleting the fd handler? */
if (!io_notify) {
if (node) {
g_source_remove_poll(&ctx->source, &node->pfd);
/* If the lock is held, just mark the node as deleted */
if (ctx->walking_handlers) {
node->deleted = 1;
node->pfd.revents = 0;
} else {
/* Otherwise, delete it for real. We can't just mark it as
* deleted because deleted nodes are only cleaned up after
* releasing the walking_handlers lock.
*/
QLIST_REMOVE(node, node);
g_free(node);
}
}
} else {
if (node == NULL) {
/* Alloc and insert if it's not already there */
node = g_malloc0(sizeof(AioHandler));
node->e = e;
node->pfd.fd = (uintptr_t)event_notifier_get_handle(e);
node->pfd.events = G_IO_IN;
QLIST_INSERT_HEAD(&ctx->aio_handlers, node, node);
g_source_add_poll(&ctx->source, &node->pfd);
}
/* Update handler with latest information */
node->io_notify = io_notify;
}
aio_notify(ctx);
}
bool aio_pending(AioContext *ctx)
{
AioHandler *node;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->pfd.revents && node->io_notify) {
return true;
}
}
return false;
}
bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
bool progress;
int count;
int timeout;
progress = false;
/*
* If there are callbacks left that have been queued, we need to call them.
* Do not call select in this case, because it is possible that the caller
* does not need a complete flush (as is the case for aio_poll loops).
*/
if (aio_bh_poll(ctx)) {
blocking = false;
progress = true;
}
/* Run timers */
progress |= timerlistgroup_run_timers(&ctx->tlg);
/*
* Then dispatch any pending callbacks from the GSource.
*
* We have to walk very carefully in case aio_set_fd_handler is
* called while we're walking.
*/
node = QLIST_FIRST(&ctx->aio_handlers);
while (node) {
AioHandler *tmp;
ctx->walking_handlers++;
if (node->pfd.revents && node->io_notify) {
node->pfd.revents = 0;
node->io_notify(node->e);
/* aio_notify() does not count as progress */
if (node->e != &ctx->notifier) {
progress = true;
}
}
tmp = node;
node = QLIST_NEXT(node, node);
ctx->walking_handlers--;
if (!ctx->walking_handlers && tmp->deleted) {
QLIST_REMOVE(tmp, node);
g_free(tmp);
}
}
if (progress && !blocking) {
return true;
}
ctx->walking_handlers++;
/* fill fd sets */
count = 0;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (!node->deleted && node->io_notify) {
events[count++] = event_notifier_get_handle(node->e);
}
}
ctx->walking_handlers--;
/* wait until next event */
while (count > 0) {
int ret;
timeout = blocking ?
qemu_timeout_ns_to_ms(timerlistgroup_deadline_ns(&ctx->tlg)) : 0;
ret = WaitForMultipleObjects(count, events, FALSE, timeout);
/* if we have any signaled events, dispatch event */
if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
break;
}
blocking = false;
/* we have to walk very carefully in case
* aio_set_fd_handler is called while we're walking */
node = QLIST_FIRST(&ctx->aio_handlers);
while (node) {
AioHandler *tmp;
ctx->walking_handlers++;
if (!node->deleted &&
event_notifier_get_handle(node->e) == events[ret - WAIT_OBJECT_0] &&
node->io_notify) {
node->io_notify(node->e);
/* aio_notify() does not count as progress */
if (node->e != &ctx->notifier) {
progress = true;
}
}
tmp = node;
node = QLIST_NEXT(node, node);
ctx->walking_handlers--;
if (!ctx->walking_handlers && tmp->deleted) {
QLIST_REMOVE(tmp, node);
g_free(tmp);
}
}
/* Try again, but only call each handler once. */
events[ret - WAIT_OBJECT_0] = events[--count];
}
if (blocking) {
/* Run the timers a second time. We do this because otherwise aio_wait
* will not note progress - and will stop a drain early - if we have
* a timer that was not ready to run entering g_poll but is ready
* after g_poll. This will only do anything if a timer has expired.
*/
progress |= timerlistgroup_run_timers(&ctx->tlg);
}
return progress;
}
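The Windows aio_poll() above services each signalled handle at most once per call by overwriting the serviced slot with the last element and shrinking the array. A standalone sketch of that compaction idiom (plain Win32, not QEMU code; handle cleanup omitted for brevity):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE events[3];
    DWORD count = 3, ret;

    for (DWORD i = 0; i < count; i++) {
        events[i] = CreateEvent(NULL, FALSE, FALSE, NULL);  /* auto-reset */
    }
    SetEvent(events[0]);
    SetEvent(events[2]);

    while (count > 0) {
        ret = WaitForMultipleObjects(count, events, FALSE, 0 /* poll */);
        if (ret - WAIT_OBJECT_0 >= count) {
            break;                      /* WAIT_TIMEOUT: nothing signalled */
        }
        printf("dispatched event in slot %lu\n",
               (unsigned long)(ret - WAIT_OBJECT_0));
        events[ret - WAIT_OBJECT_0] = events[--count];      /* compact */
    }
    return 0;
}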

File diff suppressed because it is too large.

async.c (new file)

@@ -0,0 +1,318 @@
/*
* QEMU System Emulator
*
* Copyright (c) 2003-2008 Fabrice Bellard
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "block/aio.h"
#include "block/thread-pool.h"
#include "qemu/main-loop.h"
#include "qemu/atomic.h"
/***********************************************************/
/* bottom halves (can be seen as timers which expire ASAP) */
struct QEMUBH {
AioContext *ctx;
QEMUBHFunc *cb;
void *opaque;
QEMUBH *next;
bool scheduled;
bool idle;
bool deleted;
};
QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
{
QEMUBH *bh;
bh = g_malloc0(sizeof(QEMUBH));
bh->ctx = ctx;
bh->cb = cb;
bh->opaque = opaque;
qemu_mutex_lock(&ctx->bh_lock);
bh->next = ctx->first_bh;
/* Make sure that the members are ready before putting bh into list */
smp_wmb();
ctx->first_bh = bh;
qemu_mutex_unlock(&ctx->bh_lock);
return bh;
}
/* Multiple occurrences of aio_bh_poll cannot be called concurrently */
int aio_bh_poll(AioContext *ctx)
{
QEMUBH *bh, **bhp, *next;
int ret;
ctx->walking_bh++;
ret = 0;
for (bh = ctx->first_bh; bh; bh = next) {
/* Make sure that fetching bh happens before accessing its members */
smp_read_barrier_depends();
next = bh->next;
if (!bh->deleted && bh->scheduled) {
bh->scheduled = 0;
/* Paired with write barrier in bh schedule to ensure reading for
* idle & callbacks coming after bh's scheduling.
*/
smp_rmb();
if (!bh->idle)
ret = 1;
bh->idle = 0;
bh->cb(bh->opaque);
}
}
ctx->walking_bh--;
/* remove deleted bhs */
if (!ctx->walking_bh) {
qemu_mutex_lock(&ctx->bh_lock);
bhp = &ctx->first_bh;
while (*bhp) {
bh = *bhp;
if (bh->deleted) {
*bhp = bh->next;
g_free(bh);
} else {
bhp = &bh->next;
}
}
qemu_mutex_unlock(&ctx->bh_lock);
}
return ret;
}
void qemu_bh_schedule_idle(QEMUBH *bh)
{
if (bh->scheduled)
return;
bh->idle = 1;
/* Make sure that idle & any writes needed by the callback are done
* before the locations are read in the aio_bh_poll.
*/
smp_wmb();
bh->scheduled = 1;
}
void qemu_bh_schedule(QEMUBH *bh)
{
AioContext *ctx;
if (bh->scheduled)
return;
ctx = bh->ctx;
bh->idle = 0;
/* Make sure that:
* 1. idle & any writes needed by the callback are done before the
* locations are read in the aio_bh_poll.
* 2. ctx is loaded before scheduled is set and the callback has a chance
* to execute.
*/
smp_mb();
bh->scheduled = 1;
aio_notify(ctx);
}
/* This func is async.
*/
void qemu_bh_cancel(QEMUBH *bh)
{
bh->scheduled = 0;
}
/* This func is async. The bottom half will do the delete action at the final
 * end.
*/
void qemu_bh_delete(QEMUBH *bh)
{
bh->scheduled = 0;
bh->deleted = 1;
}
static gboolean
aio_ctx_prepare(GSource *source, gint *timeout)
{
AioContext *ctx = (AioContext *) source;
QEMUBH *bh;
int deadline;
/* We assume there is no timeout already supplied */
*timeout = -1;
for (bh = ctx->first_bh; bh; bh = bh->next) {
if (!bh->deleted && bh->scheduled) {
if (bh->idle) {
/* idle bottom halves will be polled at least
* every 10ms */
*timeout = 10;
} else {
/* non-idle bottom halves will be executed
* immediately */
*timeout = 0;
return true;
}
}
}
deadline = qemu_timeout_ns_to_ms(timerlistgroup_deadline_ns(&ctx->tlg));
if (deadline == 0) {
*timeout = 0;
return true;
} else {
*timeout = qemu_soonest_timeout(*timeout, deadline);
}
return false;
}
static gboolean
aio_ctx_check(GSource *source)
{
AioContext *ctx = (AioContext *) source;
QEMUBH *bh;
for (bh = ctx->first_bh; bh; bh = bh->next) {
if (!bh->deleted && bh->scheduled) {
return true;
}
}
return aio_pending(ctx) || (timerlistgroup_deadline_ns(&ctx->tlg) == 0);
}
static gboolean
aio_ctx_dispatch(GSource *source,
GSourceFunc callback,
gpointer user_data)
{
AioContext *ctx = (AioContext *) source;
assert(callback == NULL);
aio_poll(ctx, false);
return true;
}
static void
aio_ctx_finalize(GSource *source)
{
AioContext *ctx = (AioContext *) source;
thread_pool_free(ctx->thread_pool);
aio_set_event_notifier(ctx, &ctx->notifier, NULL);
event_notifier_cleanup(&ctx->notifier);
rfifolock_destroy(&ctx->lock);
qemu_mutex_destroy(&ctx->bh_lock);
g_array_free(ctx->pollfds, TRUE);
timerlistgroup_deinit(&ctx->tlg);
}
static GSourceFuncs aio_source_funcs = {
aio_ctx_prepare,
aio_ctx_check,
aio_ctx_dispatch,
aio_ctx_finalize
};
GSource *aio_get_g_source(AioContext *ctx)
{
g_source_ref(&ctx->source);
return &ctx->source;
}
ThreadPool *aio_get_thread_pool(AioContext *ctx)
{
if (!ctx->thread_pool) {
ctx->thread_pool = thread_pool_new(ctx);
}
return ctx->thread_pool;
}
void aio_set_dispatching(AioContext *ctx, bool dispatching)
{
ctx->dispatching = dispatching;
if (!dispatching) {
/* Write ctx->dispatching before reading e.g. bh->scheduled.
* Optimization: this is only needed when we're entering the "unsafe"
* phase where other threads must call event_notifier_set.
*/
smp_mb();
}
}
void aio_notify(AioContext *ctx)
{
/* Write e.g. bh->scheduled before reading ctx->dispatching. */
smp_mb();
if (!ctx->dispatching) {
event_notifier_set(&ctx->notifier);
}
}
static void aio_timerlist_notify(void *opaque)
{
aio_notify(opaque);
}
static void aio_rfifolock_cb(void *opaque)
{
/* Kick owner thread in case they are blocked in aio_poll() */
aio_notify(opaque);
}
AioContext *aio_context_new(void)
{
AioContext *ctx;
ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
ctx->pollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
ctx->thread_pool = NULL;
qemu_mutex_init(&ctx->bh_lock);
rfifolock_init(&ctx->lock, aio_rfifolock_cb, ctx);
event_notifier_init(&ctx->notifier, false);
aio_set_event_notifier(ctx, &ctx->notifier,
(EventNotifierHandler *)
event_notifier_test_and_clear);
timerlistgroup_init(&ctx->tlg, aio_timerlist_notify, ctx);
return ctx;
}
void aio_context_ref(AioContext *ctx)
{
g_source_ref(&ctx->source);
}
void aio_context_unref(AioContext *ctx)
{
g_source_unref(&ctx->source);
}
void aio_context_acquire(AioContext *ctx)
{
rfifolock_lock(&ctx->lock);
}
void aio_context_release(AioContext *ctx)
{
rfifolock_unlock(&ctx->lock);
}
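A hedged usage sketch for the bottom-half API defined above: create a BH, schedule it, and let one non-blocking aio_poll() pass run it via aio_bh_poll(). The surrounding program is made up for illustration and is not QEMU code.

#include <stdio.h>
#include "block/aio.h"

static void my_bh_cb(void *opaque)
{
    printf("bottom half ran: %s\n", (const char *)opaque);
}

static void run_one_bh(void)
{
    AioContext *ctx = aio_context_new();
    QEMUBH *bh = aio_bh_new(ctx, my_bh_cb, (void *)"hello");

    qemu_bh_schedule(bh);      /* marks it runnable and calls aio_notify() */
    aio_poll(ctx, false);      /* aio_bh_poll() inside dispatches it once */

    qemu_bh_delete(bh);        /* freed on a later aio_bh_poll() pass */
    aio_context_unref(ctx);
}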


@@ -1,31 +1,17 @@
common-obj-y = audio.o noaudio.o wavaudio.o mixeng.o
common-obj-$(CONFIG_SDL) += sdlaudio.o
common-obj-$(CONFIG_OSS) += ossaudio.o
common-obj-$(CONFIG_SPICE) += spiceaudio.o
common-obj-$(CONFIG_AUDIO_COREAUDIO) += coreaudio.o
common-obj-$(CONFIG_AUDIO_DSOUND) += dsoundaudio.o
common-obj-$(CONFIG_COREAUDIO) += coreaudio.o
common-obj-$(CONFIG_ALSA) += alsaaudio.o
common-obj-$(CONFIG_DSOUND) += dsoundaudio.o
common-obj-$(CONFIG_FMOD) += fmodaudio.o
common-obj-$(CONFIG_ESD) += esdaudio.o
common-obj-$(CONFIG_PA) += paaudio.o
common-obj-$(CONFIG_WINWAVE) += winwaveaudio.o
common-obj-$(CONFIG_AUDIO_PT_INT) += audio_pt_int.o
common-obj-$(CONFIG_AUDIO_WIN_INT) += audio_win_int.o
common-obj-y += wavcapture.o
coreaudio.o-libs := $(COREAUDIO_LIBS)
dsoundaudio.o-libs := $(DSOUND_LIBS)
# alsa module
common-obj-$(CONFIG_AUDIO_ALSA) += alsa.mo
alsa.mo-objs = alsaaudio.o
alsa.mo-libs := $(ALSA_LIBS)
# oss module
common-obj-$(CONFIG_AUDIO_OSS) += oss.mo
oss.mo-objs = ossaudio.o
oss.mo-libs := $(OSS_LIBS)
# pulseaudio module
common-obj-$(CONFIG_AUDIO_PA) += pa.mo
pa.mo-objs = paaudio.o
pa.mo-libs := $(PULSE_LIBS)
# sdl module
common-obj-$(CONFIG_AUDIO_SDL) += sdl.mo
sdl.mo-objs = sdlaudio.o
sdl.mo-cflags := $(SDL_CFLAGS)
sdl.mo-libs := $(SDL_LIBS)
$(obj)/audio.o $(obj)/fmodaudio.o: QEMU_CFLAGS += $(FMOD_CFLAGS)
sdlaudio.o-cflags := $(SDL_CFLAGS)


@@ -21,12 +21,10 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include <alsa/asoundlib.h>
#include "qemu-common.h"
#include "qemu/main-loop.h"
#include "audio.h"
#include "trace.h"
#if QEMU_GNUC_PREREQ(4, 3)
#pragma GCC diagnostic ignored "-Waddress"
@@ -35,28 +33,9 @@
#define AUDIO_CAP "alsa"
#include "audio_int.h"
typedef struct ALSAConf {
int size_in_usec_in;
int size_in_usec_out;
const char *pcm_name_in;
const char *pcm_name_out;
unsigned int buffer_size_in;
unsigned int period_size_in;
unsigned int buffer_size_out;
unsigned int period_size_out;
unsigned int threshold;
int buffer_size_in_overridden;
int period_size_in_overridden;
int buffer_size_out_overridden;
int period_size_out_overridden;
} ALSAConf;
struct pollhlp {
snd_pcm_t *handle;
struct pollfd *pfds;
ALSAConf *conf;
int count;
int mask;
};
@@ -77,6 +56,30 @@ typedef struct ALSAVoiceIn {
struct pollhlp pollhlp;
} ALSAVoiceIn;
static struct {
int size_in_usec_in;
int size_in_usec_out;
const char *pcm_name_in;
const char *pcm_name_out;
unsigned int buffer_size_in;
unsigned int period_size_in;
unsigned int buffer_size_out;
unsigned int period_size_out;
unsigned int threshold;
int buffer_size_in_overridden;
int period_size_in_overridden;
int buffer_size_out_overridden;
int period_size_out_overridden;
int verbose;
} conf = {
.buffer_size_out = 4096,
.period_size_out = 1024,
.pcm_name_out = "default",
.pcm_name_in = "default",
};
struct alsa_params_req {
int freq;
snd_pcm_format_t fmt;
@@ -202,7 +205,9 @@ static void alsa_poll_handler (void *opaque)
}
if (!(revents & hlp->mask)) {
trace_alsa_revents(revents);
if (conf.verbose) {
dolog ("revents = %d\n", revents);
}
return;
}
@@ -261,14 +266,31 @@ static int alsa_poll_helper (snd_pcm_t *handle, struct pollhlp *hlp, int mask)
for (i = 0; i < count; ++i) {
if (pfds[i].events & POLLIN) {
qemu_set_fd_handler (pfds[i].fd, alsa_poll_handler, NULL, hlp);
err = qemu_set_fd_handler (pfds[i].fd, alsa_poll_handler,
NULL, hlp);
}
if (pfds[i].events & POLLOUT) {
trace_alsa_pollout(i, pfds[i].fd);
qemu_set_fd_handler (pfds[i].fd, NULL, alsa_poll_handler, hlp);
if (conf.verbose) {
dolog ("POLLOUT %d %d\n", i, pfds[i].fd);
}
err = qemu_set_fd_handler (pfds[i].fd, NULL,
alsa_poll_handler, hlp);
}
if (conf.verbose) {
dolog ("Set handler events=%#x index=%d fd=%d err=%d\n",
pfds[i].events, i, pfds[i].fd, err);
}
trace_alsa_set_handler(pfds[i].events, i, pfds[i].fd, err);
if (err) {
dolog ("Failed to set handler events=%#x index=%d fd=%d err=%d\n",
pfds[i].events, i, pfds[i].fd, err);
while (i--) {
qemu_set_fd_handler (pfds[i].fd, NULL, NULL, NULL);
}
g_free (pfds);
return -1;
}
}
hlp->pfds = pfds;
hlp->count = count;
@@ -454,15 +476,14 @@ static void alsa_set_threshold (snd_pcm_t *handle, snd_pcm_uframes_t threshold)
}
static int alsa_open (int in, struct alsa_params_req *req,
struct alsa_params_obt *obt, snd_pcm_t **handlep,
ALSAConf *conf)
struct alsa_params_obt *obt, snd_pcm_t **handlep)
{
snd_pcm_t *handle;
snd_pcm_hw_params_t *hw_params;
int err;
int size_in_usec;
unsigned int freq, nchannels;
const char *pcm_name = in ? conf->pcm_name_in : conf->pcm_name_out;
const char *pcm_name = in ? conf.pcm_name_in : conf.pcm_name_out;
snd_pcm_uframes_t obt_buffer_size;
const char *typ = in ? "ADC" : "DAC";
snd_pcm_format_t obtfmt;
@@ -501,7 +522,7 @@ static int alsa_open (int in, struct alsa_params_req *req,
}
err = snd_pcm_hw_params_set_format (handle, hw_params, req->fmt);
if (err < 0) {
if (err < 0 && conf.verbose) {
alsa_logerr2 (err, typ, "Failed to set format %d\n", req->fmt);
}
@@ -633,7 +654,7 @@ static int alsa_open (int in, struct alsa_params_req *req,
goto err;
}
if (!in && conf->threshold) {
if (!in && conf.threshold) {
snd_pcm_uframes_t threshold;
int bytes_per_sec;
@@ -655,7 +676,7 @@ static int alsa_open (int in, struct alsa_params_req *req,
break;
}
threshold = (conf->threshold * bytes_per_sec) / 1000;
threshold = (conf.threshold * bytes_per_sec) / 1000;
alsa_set_threshold (handle, threshold);
}
@@ -665,9 +686,10 @@ static int alsa_open (int in, struct alsa_params_req *req,
*handlep = handle;
if (obtfmt != req->fmt ||
if (conf.verbose &&
(obtfmt != req->fmt ||
obt->nchannels != req->nchannels ||
obt->freq != req->freq) {
obt->freq != req->freq)) {
dolog ("Audio parameters for %s\n", typ);
alsa_dump_info (req, obt, obtfmt);
}
@@ -721,7 +743,9 @@ static void alsa_write_pending (ALSAVoiceOut *alsa)
if (written <= 0) {
switch (written) {
case 0:
trace_alsa_wrote_zero(len);
if (conf.verbose) {
dolog ("Failed to write %d frames (wrote zero)\n", len);
}
return;
case -EPIPE:
@@ -730,7 +754,9 @@ static void alsa_write_pending (ALSAVoiceOut *alsa)
len);
return;
}
trace_alsa_xrun_out();
if (conf.verbose) {
dolog ("Recovering from playback xrun\n");
}
continue;
case -ESTRPIPE:
@@ -741,7 +767,9 @@ static void alsa_write_pending (ALSAVoiceOut *alsa)
len);
return;
}
trace_alsa_resume_out();
if (conf.verbose) {
dolog ("Resuming suspended output stream\n");
}
continue;
case -EAGAIN:
@@ -791,27 +819,25 @@ static void alsa_fini_out (HWVoiceOut *hw)
alsa->pcm_buf = NULL;
}
static int alsa_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int alsa_init_out (HWVoiceOut *hw, struct audsettings *as)
{
ALSAVoiceOut *alsa = (ALSAVoiceOut *) hw;
struct alsa_params_req req;
struct alsa_params_obt obt;
snd_pcm_t *handle;
struct audsettings obt_as;
ALSAConf *conf = drv_opaque;
req.fmt = aud_to_alsafmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.period_size = conf->period_size_out;
req.buffer_size = conf->buffer_size_out;
req.size_in_usec = conf->size_in_usec_out;
req.period_size = conf.period_size_out;
req.buffer_size = conf.buffer_size_out;
req.size_in_usec = conf.size_in_usec_out;
req.override_mask =
(conf->period_size_out_overridden ? 1 : 0) |
(conf->buffer_size_out_overridden ? 2 : 0);
(conf.period_size_out_overridden ? 1 : 0) |
(conf.buffer_size_out_overridden ? 2 : 0);
if (alsa_open (0, &req, &obt, &handle, conf)) {
if (alsa_open (0, &req, &obt, &handle)) {
return -1;
}
@@ -823,7 +849,7 @@ static int alsa_init_out(HWVoiceOut *hw, struct audsettings *as,
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = obt.samples;
alsa->pcm_buf = audio_calloc(__func__, obt.samples, 1 << hw->info.shift);
alsa->pcm_buf = audio_calloc (AUDIO_FUNC, obt.samples, 1 << hw->info.shift);
if (!alsa->pcm_buf) {
dolog ("Could not allocate DAC buffer (%d samples, each %d bytes)\n",
hw->samples, 1 << hw->info.shift);
@@ -832,7 +858,6 @@ static int alsa_init_out(HWVoiceOut *hw, struct audsettings *as,
}
alsa->handle = handle;
alsa->pollhlp.conf = conf;
return 0;
}
@@ -903,26 +928,25 @@ static int alsa_ctl_out (HWVoiceOut *hw, int cmd, ...)
return -1;
}
static int alsa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int alsa_init_in (HWVoiceIn *hw, struct audsettings *as)
{
ALSAVoiceIn *alsa = (ALSAVoiceIn *) hw;
struct alsa_params_req req;
struct alsa_params_obt obt;
snd_pcm_t *handle;
struct audsettings obt_as;
ALSAConf *conf = drv_opaque;
req.fmt = aud_to_alsafmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.period_size = conf->period_size_in;
req.buffer_size = conf->buffer_size_in;
req.size_in_usec = conf->size_in_usec_in;
req.period_size = conf.period_size_in;
req.buffer_size = conf.buffer_size_in;
req.size_in_usec = conf.size_in_usec_in;
req.override_mask =
(conf->period_size_in_overridden ? 1 : 0) |
(conf->buffer_size_in_overridden ? 2 : 0);
(conf.period_size_in_overridden ? 1 : 0) |
(conf.buffer_size_in_overridden ? 2 : 0);
if (alsa_open (1, &req, &obt, &handle, conf)) {
if (alsa_open (1, &req, &obt, &handle)) {
return -1;
}
@@ -934,7 +958,7 @@ static int alsa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = obt.samples;
alsa->pcm_buf = audio_calloc(__func__, hw->samples, 1 << hw->info.shift);
alsa->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
if (!alsa->pcm_buf) {
dolog ("Could not allocate ADC buffer (%d samples, each %d bytes)\n",
hw->samples, 1 << hw->info.shift);
@@ -943,7 +967,6 @@ static int alsa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
}
alsa->handle = handle;
alsa->pollhlp.conf = conf;
return 0;
}
@@ -999,10 +1022,14 @@ static int alsa_run_in (HWVoiceIn *hw)
dolog ("Failed to resume suspended input stream\n");
return 0;
}
trace_alsa_resume_in();
if (conf.verbose) {
dolog ("Resuming suspended input stream\n");
}
break;
default:
trace_alsa_no_frames(state);
if (conf.verbose) {
dolog ("No frames available and ALSA state is %d\n", state);
}
return 0;
}
}
@@ -1037,7 +1064,9 @@ static int alsa_run_in (HWVoiceIn *hw)
if (nread <= 0) {
switch (nread) {
case 0:
trace_alsa_read_zero(len);
if (conf.verbose) {
dolog ("Failed to read %ld frames (read zero)\n", len);
}
goto exit;
case -EPIPE:
@@ -1045,7 +1074,9 @@ static int alsa_run_in (HWVoiceIn *hw)
alsa_logerr (nread, "Failed to read %ld frames\n", len);
goto exit;
}
trace_alsa_xrun_in();
if (conf.verbose) {
dolog ("Recovering from capture xrun\n");
}
continue;
case -EAGAIN:
@@ -1117,85 +1148,82 @@ static int alsa_ctl_in (HWVoiceIn *hw, int cmd, ...)
return -1;
}
static ALSAConf glob_conf = {
.buffer_size_out = 4096,
.period_size_out = 1024,
.pcm_name_out = "default",
.pcm_name_in = "default",
};
static void *alsa_audio_init (void)
{
ALSAConf *conf = g_malloc(sizeof(ALSAConf));
*conf = glob_conf;
return conf;
return &conf;
}
static void alsa_audio_fini (void *opaque)
{
g_free(opaque);
(void) opaque;
}
static struct audio_option alsa_options[] = {
{
.name = "DAC_SIZE_IN_USEC",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.size_in_usec_out,
.valp = &conf.size_in_usec_out,
.descr = "DAC period/buffer size in microseconds (otherwise in frames)"
},
{
.name = "DAC_PERIOD_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.period_size_out,
.valp = &conf.period_size_out,
.descr = "DAC period size (0 to go with system default)",
.overriddenp = &glob_conf.period_size_out_overridden
.overriddenp = &conf.period_size_out_overridden
},
{
.name = "DAC_BUFFER_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.buffer_size_out,
.valp = &conf.buffer_size_out,
.descr = "DAC buffer size (0 to go with system default)",
.overriddenp = &glob_conf.buffer_size_out_overridden
.overriddenp = &conf.buffer_size_out_overridden
},
{
.name = "ADC_SIZE_IN_USEC",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.size_in_usec_in,
.valp = &conf.size_in_usec_in,
.descr =
"ADC period/buffer size in microseconds (otherwise in frames)"
},
{
.name = "ADC_PERIOD_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.period_size_in,
.valp = &conf.period_size_in,
.descr = "ADC period size (0 to go with system default)",
.overriddenp = &glob_conf.period_size_in_overridden
.overriddenp = &conf.period_size_in_overridden
},
{
.name = "ADC_BUFFER_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.buffer_size_in,
.valp = &conf.buffer_size_in,
.descr = "ADC buffer size (0 to go with system default)",
.overriddenp = &glob_conf.buffer_size_in_overridden
.overriddenp = &conf.buffer_size_in_overridden
},
{
.name = "THRESHOLD",
.tag = AUD_OPT_INT,
.valp = &glob_conf.threshold,
.valp = &conf.threshold,
.descr = "(undocumented)"
},
{
.name = "DAC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.pcm_name_out,
.valp = &conf.pcm_name_out,
.descr = "DAC device name (for instance dmix)"
},
{
.name = "ADC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.pcm_name_in,
.valp = &conf.pcm_name_in,
.descr = "ADC device name"
},
{
.name = "VERBOSE",
.tag = AUD_OPT_BOOL,
.valp = &conf.verbose,
.descr = "Behave in a more verbose way"
},
{ /* End of list */ }
};
@@ -1213,7 +1241,7 @@ static struct audio_pcm_ops alsa_pcm_ops = {
.ctl_in = alsa_ctl_in,
};
static struct audio_driver alsa_audio_driver = {
struct audio_driver alsa_audio_driver = {
.name = "alsa",
.descr = "ALSA http://www.alsa-project.org",
.options = alsa_options,
@@ -1226,9 +1254,3 @@ static struct audio_driver alsa_audio_driver = {
.voice_size_out = sizeof (ALSAVoiceOut),
.voice_size_in = sizeof (ALSAVoiceIn)
};
static void register_audio_alsa(void)
{
audio_driver_register(&alsa_audio_driver);
}
type_init(register_audio_alsa);
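The alsa_options table above ties each tunable to a field of the driver configuration through .valp. A standalone sketch of that option-table pattern (not QEMU code: the struct layout is simplified to the fields visible above, and the QEMU_<PREFIX>_<NAME> environment-variable naming is an assumption made for the example):

#include <stdio.h>
#include <stdlib.h>

struct opt {
    const char *name;   /* e.g. "DAC_DEV" -> QEMU_ALSA_DAC_DEV */
    const char **valp;  /* where the string value is stored    */
    const char *descr;
};

static struct { const char *pcm_name_out; } conf = { .pcm_name_out = "default" };

static struct opt alsa_opts[] = {
    { "DAC_DEV", &conf.pcm_name_out, "DAC device name (for instance dmix)" },
    { NULL, NULL, NULL }
};

static void process_options(const char *prefix, struct opt *opts)
{
    char key[128];
    for (; opts->name; opts++) {
        snprintf(key, sizeof(key), "QEMU_%s_%s", prefix, opts->name);
        const char *v = getenv(key);
        if (v) {
            *opts->valp = v;   /* override the built-in default */
        }
    }
}

int main(void)
{
    process_options("ALSA", alsa_opts);
    printf("pcm_name_out = %s\n", conf.pcm_name_out);
    return 0;
}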


@@ -21,18 +21,16 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "hw/hw.h"
#include "audio.h"
#include "monitor/monitor.h"
#include "qemu/timer.h"
#include "sysemu/sysemu.h"
#include "qemu/cutils.h"
#include "sysemu/replay.h"
#define AUDIO_CAP "audio"
#include "audio_int.h"
/* #define DEBUG_PLIVE */
/* #define DEBUG_LIVE */
/* #define DEBUG_OUT */
/* #define DEBUG_CAPTURE */
@@ -45,49 +43,15 @@
The 1st one is the one used by default, that is the reason
that we generate the list.
*/
static const char *audio_prio_list[] = {
"spice",
static struct audio_driver *drvtab[] = {
#ifdef CONFIG_SPICE
&spice_audio_driver,
#endif
CONFIG_AUDIO_DRIVERS
"none",
"wav",
&no_audio_driver,
&wav_audio_driver
};
static QLIST_HEAD(, audio_driver) audio_drivers;
void audio_driver_register(audio_driver *drv)
{
QLIST_INSERT_HEAD(&audio_drivers, drv, next);
}
audio_driver *audio_driver_lookup(const char *name)
{
struct audio_driver *d;
QLIST_FOREACH(d, &audio_drivers, next) {
if (strcmp(name, d->name) == 0) {
return d;
}
}
audio_module_load_one(name);
QLIST_FOREACH(d, &audio_drivers, next) {
if (strcmp(name, d->name) == 0) {
return d;
}
}
return NULL;
}
static void audio_module_load_all(void)
{
int i;
for (i = 0; i < ARRAY_SIZE(audio_prio_list); i++) {
audio_driver_lookup(audio_prio_list[i]);
}
}
struct fixed_settings {
int enabled;
int nb_voices;
@@ -102,6 +66,8 @@ static struct {
int hertz;
int64_t ticks;
} period;
int plive;
int log_to_monitor;
int try_poll_in;
int try_poll_out;
} conf = {
@@ -130,6 +96,8 @@ static struct {
},
.period = { .hertz = 100 },
.plive = 0,
.log_to_monitor = 0,
.try_poll_in = 1,
.try_poll_out = 1,
};
@@ -363,11 +331,20 @@ static const char *audio_get_conf_str (const char *key,
void AUD_vlog (const char *cap, const char *fmt, va_list ap)
{
if (cap) {
fprintf(stderr, "%s: ", cap);
}
if (conf.log_to_monitor) {
if (cap) {
monitor_printf(default_mon, "%s: ", cap);
}
vfprintf(stderr, fmt, ap);
monitor_vprintf(default_mon, fmt, ap);
}
else {
if (cap) {
fprintf (stderr, "%s: ", cap);
}
vfprintf (stderr, fmt, ap);
}
}
void AUD_log (const char *cap, const char *fmt, ...)
@@ -458,12 +435,12 @@ static void audio_process_options (const char *prefix,
const char qemu_prefix[] = "QEMU_";
size_t preflen, optlen;
if (audio_bug(__func__, !prefix)) {
if (audio_bug (AUDIO_FUNC, !prefix)) {
dolog ("prefix = NULL\n");
return;
}
if (audio_bug(__func__, !opt)) {
if (audio_bug (AUDIO_FUNC, !opt)) {
dolog ("opt = NULL\n");
return;
}
@@ -826,7 +803,7 @@ static int audio_attach_capture (HWVoiceOut *hw)
SWVoiceOut *sw;
HWVoiceOut *hw_cap = &cap->hw;
sc = audio_calloc(__func__, 1, sizeof(*sc));
sc = audio_calloc (AUDIO_FUNC, 1, sizeof (*sc));
if (!sc) {
dolog ("Could not allocate soft capture voice (%zu bytes)\n",
sizeof (*sc));
@@ -882,7 +859,7 @@ static int audio_pcm_hw_find_min_in (HWVoiceIn *hw)
int audio_pcm_hw_get_live_in (HWVoiceIn *hw)
{
int live = hw->total_samples_captured - audio_pcm_hw_find_min_in (hw);
if (audio_bug(__func__, live < 0 || live > hw->samples)) {
if (audio_bug (AUDIO_FUNC, live < 0 || live > hw->samples)) {
dolog ("live=%d hw->samples=%d\n", live, hw->samples);
return 0;
}
@@ -920,7 +897,7 @@ static int audio_pcm_sw_get_rpos_in (SWVoiceIn *sw)
int live = hw->total_samples_captured - sw->total_hw_samples_acquired;
int rpos;
if (audio_bug(__func__, live < 0 || live > hw->samples)) {
if (audio_bug (AUDIO_FUNC, live < 0 || live > hw->samples)) {
dolog ("live=%d hw->samples=%d\n", live, hw->samples);
return 0;
}
@@ -943,7 +920,7 @@ int audio_pcm_sw_read (SWVoiceIn *sw, void *buf, int size)
rpos = audio_pcm_sw_get_rpos_in (sw) % hw->samples;
live = hw->total_samples_captured - sw->total_hw_samples_acquired;
if (audio_bug(__func__, live < 0 || live > hw->samples)) {
if (audio_bug (AUDIO_FUNC, live < 0 || live > hw->samples)) {
dolog ("live_in=%d hw->samples=%d\n", live, hw->samples);
return 0;
}
@@ -969,7 +946,7 @@ int audio_pcm_sw_read (SWVoiceIn *sw, void *buf, int size)
}
osamp = swlim;
if (audio_bug(__func__, osamp < 0)) {
if (audio_bug (AUDIO_FUNC, osamp < 0)) {
dolog ("osamp=%d\n", osamp);
return 0;
}
@@ -1024,7 +1001,7 @@ static int audio_pcm_hw_get_live_out (HWVoiceOut *hw, int *nb_live)
if (nb_live1) {
int live = smin;
if (audio_bug(__func__, live < 0 || live > hw->samples)) {
if (audio_bug (AUDIO_FUNC, live < 0 || live > hw->samples)) {
dolog ("live=%d hw->samples=%d\n", live, hw->samples);
return 0;
}
@@ -1048,7 +1025,7 @@ int audio_pcm_sw_write (SWVoiceOut *sw, void *buf, int size)
hwsamples = sw->hw->samples;
live = sw->total_hw_samples_mixed;
if (audio_bug(__func__, live < 0 || live > hwsamples)) {
if (audio_bug (AUDIO_FUNC, live < 0 || live > hwsamples)){
dolog ("live=%d hw->samples=%d\n", live, hwsamples);
return 0;
}
@@ -1147,7 +1124,7 @@ static int audio_is_timer_needed (void)
static void audio_reset_timer (AudioState *s)
{
if (audio_is_timer_needed ()) {
timer_mod_anticipate_ns(s->ts,
timer_mod (s->ts,
qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + conf.period.ticks);
}
else {
@@ -1166,6 +1143,8 @@ static void audio_timer (void *opaque)
*/
int AUD_write (SWVoiceOut *sw, void *buf, int size)
{
int bytes;
if (!sw) {
/* XXX: Consider options */
return size;
@@ -1176,11 +1155,14 @@ int AUD_write (SWVoiceOut *sw, void *buf, int size)
return 0;
}
return sw->hw->pcm_ops->write(sw, buf, size);
bytes = sw->hw->pcm_ops->write (sw, buf, size);
return bytes;
}
int AUD_read (SWVoiceIn *sw, void *buf, int size)
{
int bytes;
if (!sw) {
/* XXX: Consider options */
return size;
@@ -1191,7 +1173,8 @@ int AUD_read (SWVoiceIn *sw, void *buf, int size)
return 0;
}
return sw->hw->pcm_ops->read(sw, buf, size);
bytes = sw->hw->pcm_ops->read (sw, buf, size);
return bytes;
}
int AUD_get_buffer_size_out (SWVoiceOut *sw)
@@ -1297,7 +1280,7 @@ static int audio_get_avail (SWVoiceIn *sw)
}
live = sw->hw->total_samples_captured - sw->total_hw_samples_acquired;
if (audio_bug(__func__, live < 0 || live > sw->hw->samples)) {
if (audio_bug (AUDIO_FUNC, live < 0 || live > sw->hw->samples)) {
dolog ("live=%d sw->hw->samples=%d\n", live, sw->hw->samples);
return 0;
}
@@ -1321,7 +1304,7 @@ static int audio_get_free (SWVoiceOut *sw)
live = sw->total_hw_samples_mixed;
if (audio_bug(__func__, live < 0 || live > sw->hw->samples)) {
if (audio_bug (AUDIO_FUNC, live < 0 || live > sw->hw->samples)) {
dolog ("live=%d sw->hw->samples=%d\n", live, sw->hw->samples);
return 0;
}
@@ -1388,7 +1371,7 @@ static void audio_run_out (AudioState *s)
live = 0;
}
if (audio_bug(__func__, live < 0 || live > hw->samples)) {
if (audio_bug (AUDIO_FUNC, live < 0 || live > hw->samples)) {
dolog ("live=%d hw->samples=%d\n", live, hw->samples);
continue;
}
@@ -1422,8 +1405,7 @@ static void audio_run_out (AudioState *s)
prev_rpos = hw->rpos;
played = hw->pcm_ops->run_out (hw, live);
replay_audio_out(&played);
if (audio_bug(__func__, hw->rpos >= hw->samples)) {
if (audio_bug (AUDIO_FUNC, hw->rpos >= hw->samples)) {
dolog ("hw->rpos=%d hw->samples=%d played=%d\n",
hw->rpos, hw->samples, played);
hw->rpos = 0;
@@ -1444,7 +1426,7 @@ static void audio_run_out (AudioState *s)
continue;
}
if (audio_bug(__func__, played > sw->total_hw_samples_mixed)) {
if (audio_bug (AUDIO_FUNC, played > sw->total_hw_samples_mixed)) {
dolog ("played=%d sw->total_hw_samples_mixed=%d\n",
played, sw->total_hw_samples_mixed);
played = sw->total_hw_samples_mixed;
@@ -1472,6 +1454,9 @@ static void audio_run_out (AudioState *s)
while (sw) {
sw1 = sw->entries.le_next;
if (!sw->active && !sw->callback.fn) {
#ifdef DEBUG_PLIVE
dolog ("Finishing with old voice\n");
#endif
audio_close_out (sw);
}
sw = sw1;
@@ -1486,12 +1471,9 @@ static void audio_run_in (AudioState *s)
while ((hw = audio_pcm_hw_find_any_enabled_in (hw))) {
SWVoiceIn *sw;
int captured = 0, min;
int captured, min;
if (replay_mode != REPLAY_MODE_PLAY) {
captured = hw->pcm_ops->run_in(hw);
}
replay_audio_in(&captured, hw->conv_buf, &hw->wpos, hw->samples);
captured = hw->pcm_ops->run_in (hw);
min = audio_pcm_hw_find_min_in (hw);
hw->total_samples_captured += captured - min;
@@ -1547,7 +1529,7 @@ static void audio_run_capture (AudioState *s)
continue;
}
if (audio_bug(__func__, captured > sw->total_hw_samples_mixed)) {
if (audio_bug (AUDIO_FUNC, captured > sw->total_hw_samples_mixed)) {
dolog ("captured=%d sw->total_hw_samples_mixed=%d\n",
captured, sw->total_hw_samples_mixed);
captured = sw->total_hw_samples_mixed;
@@ -1666,6 +1648,18 @@ static struct audio_option audio_options[] = {
.valp = &conf.period.hertz,
.descr = "Timer period in HZ (0 - use lowest possible)"
},
{
.name = "PLIVE",
.tag = AUD_OPT_BOOL,
.valp = &conf.plive,
.descr = "(undocumented)"
},
{
.name = "LOG_TO_MONITOR",
.tag = AUD_OPT_BOOL,
.valp = &conf.log_to_monitor,
.descr = "Print logging messages to monitor instead of stderr"
},
{ /* End of list */ }
};
@@ -1690,13 +1684,11 @@ static void audio_pp_nb_voices (const char *typ, int nb)
void AUD_help (void)
{
struct audio_driver *d;
/* make sure we print the help text for modular drivers too */
audio_module_load_all();
size_t i;
audio_process_options ("AUDIO", audio_options);
QLIST_FOREACH(d, &audio_drivers, next) {
for (i = 0; i < ARRAY_SIZE (drvtab); i++) {
struct audio_driver *d = drvtab[i];
if (d->options) {
audio_process_options (d->name, d->options);
}
@@ -1708,7 +1700,8 @@ void AUD_help (void)
printf ("Available drivers:\n");
QLIST_FOREACH(d, &audio_drivers, next) {
for (i = 0; i < ARRAY_SIZE (drvtab); i++) {
struct audio_driver *d = drvtab[i];
printf ("Name: %s\n", d->name);
printf ("Description: %s\n", d->descr);
@@ -1779,21 +1772,13 @@ static void audio_vm_change_state_handler (void *opaque, int running,
audio_reset_timer (s);
}
static bool is_cleaning_up;
bool audio_is_cleaning_up(void)
{
return is_cleaning_up;
}
void audio_cleanup(void)
static void audio_atexit (void)
{
AudioState *s = &glob_audio_state;
HWVoiceOut *hwo, *hwon;
HWVoiceIn *hwi, *hwin;
HWVoiceOut *hwo = NULL;
HWVoiceIn *hwi = NULL;
is_cleaning_up = true;
QLIST_FOREACH_SAFE(hwo, &glob_audio_state.hw_head_out, entries, hwon) {
while ((hwo = audio_pcm_hw_find_any_out (hwo))) {
SWVoiceCap *sc;
if (hwo->enabled) {
@@ -1809,20 +1794,17 @@ void audio_cleanup(void)
cb->ops.destroy (cb->opaque);
}
}
QLIST_REMOVE(hwo, entries);
}
QLIST_FOREACH_SAFE(hwi, &glob_audio_state.hw_head_in, entries, hwin) {
while ((hwi = audio_pcm_hw_find_any_in (hwi))) {
if (hwi->enabled) {
hwi->pcm_ops->ctl_in (hwi, VOICE_DISABLE);
}
hwi->pcm_ops->fini_in (hwi);
QLIST_REMOVE(hwi, entries);
}
if (s->drv) {
s->drv->fini (s->drv_opaque);
s->drv = NULL;
}
}
@@ -1842,7 +1824,6 @@ static void audio_init (void)
const char *drvname;
VMChangeStateEntry *e;
AudioState *s = &glob_audio_state;
struct audio_driver *driver;
if (s->drv) {
return;
@@ -1851,9 +1832,12 @@ static void audio_init (void)
QLIST_INIT (&s->hw_head_out);
QLIST_INIT (&s->hw_head_in);
QLIST_INIT (&s->cap_head);
atexit(audio_cleanup);
atexit (audio_atexit);
s->ts = timer_new_ns(QEMU_CLOCK_VIRTUAL, audio_timer, s);
if (!s->ts) {
hw_error("Could not create audio timer\n");
}
audio_process_options ("AUDIO", audio_options);
@@ -1878,29 +1862,38 @@ static void audio_init (void)
}
if (drvname) {
driver = audio_driver_lookup(drvname);
if (driver) {
done = !audio_driver_init(s, driver);
} else {
int found = 0;
for (i = 0; i < ARRAY_SIZE (drvtab); i++) {
if (!strcmp (drvname, drvtab[i]->name)) {
done = !audio_driver_init (s, drvtab[i]);
found = 1;
break;
}
}
if (!found) {
dolog ("Unknown audio driver `%s'\n", drvname);
dolog ("Run with -audio-help to list available drivers\n");
}
}
if (!done) {
for (i = 0; !done && i < ARRAY_SIZE(audio_prio_list); i++) {
driver = audio_driver_lookup(audio_prio_list[i]);
if (driver && driver->can_be_default) {
done = !audio_driver_init(s, driver);
for (i = 0; !done && i < ARRAY_SIZE (drvtab); i++) {
if (drvtab[i]->can_be_default) {
done = !audio_driver_init (s, drvtab[i]);
}
}
}
if (!done) {
driver = audio_driver_lookup("none");
done = !audio_driver_init(s, driver);
assert(done);
dolog("warning: Using timer based audio emulation\n");
done = !audio_driver_init (s, &no_audio_driver);
if (!done) {
hw_error("Could not initialize audio subsystem\n");
}
else {
dolog ("warning: Using timer based audio emulation\n");
}
}
if (conf.period.hertz <= 0) {
@@ -1911,7 +1904,8 @@ static void audio_init (void)
}
conf.period.ticks = 1;
} else {
conf.period.ticks = NANOSECONDS_PER_SECOND / conf.period.hertz;
conf.period.ticks =
muldiv64 (1, get_ticks_per_sec (), conf.period.hertz);
}
e = qemu_add_vm_change_state_handler (audio_vm_change_state_handler, s);
@@ -1955,7 +1949,7 @@ CaptureVoiceOut *AUD_add_capture (
goto err0;
}
cb = audio_calloc(__func__, 1, sizeof(*cb));
cb = audio_calloc (AUDIO_FUNC, 1, sizeof (*cb));
if (!cb) {
dolog ("Could not allocate capture callback information, size %zu\n",
sizeof (*cb));
@@ -1973,7 +1967,7 @@ CaptureVoiceOut *AUD_add_capture (
HWVoiceOut *hw;
CaptureVoiceOut *cap;
cap = audio_calloc(__func__, 1, sizeof(*cap));
cap = audio_calloc (AUDIO_FUNC, 1, sizeof (*cap));
if (!cap) {
dolog ("Could not allocate capture voice, size %zu\n",
sizeof (*cap));
@@ -1986,8 +1980,8 @@ CaptureVoiceOut *AUD_add_capture (
/* XXX find a more elegant way */
hw->samples = 4096 * 4;
hw->mix_buf = audio_calloc(__func__, hw->samples,
sizeof(struct st_sample));
hw->mix_buf = audio_calloc (AUDIO_FUNC, hw->samples,
sizeof (struct st_sample));
if (!hw->mix_buf) {
dolog ("Could not allocate capture mix buffer (%d samples)\n",
hw->samples);
@@ -1996,7 +1990,7 @@ CaptureVoiceOut *AUD_add_capture (
audio_pcm_init_info (&hw->info, as);
cap->buf = audio_calloc(__func__, hw->samples, 1 << hw->info.shift);
cap->buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
if (!cap->buf) {
dolog ("Could not allocate capture buffer "
"(%d samples, each %d bytes)\n",
@@ -2013,7 +2007,8 @@ CaptureVoiceOut *AUD_add_capture (
QLIST_INSERT_HEAD (&s->cap_head, cap, entries);
QLIST_INSERT_HEAD (&cap->cb_head, cb, entries);
QLIST_FOREACH(hw, &glob_audio_state.hw_head_out, entries) {
hw = NULL;
while ((hw = audio_pcm_hw_find_any_out (hw))) {
audio_attach_capture (hw);
}
return cap;
@@ -2059,8 +2054,6 @@ void AUD_del_capture (CaptureVoiceOut *cap, void *cb_opaque)
sw = sw1;
}
QLIST_REMOVE (cap, entries);
g_free (cap->hw.mix_buf);
g_free (cap->buf);
g_free (cap);
}
return;


@@ -21,10 +21,10 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#ifndef QEMU_AUDIO_H
#define QEMU_AUDIO_H
#include "config-host.h"
#include "qemu/queue.h"
typedef void (*audio_callback_fn) (void *opaque, int avail);
@@ -163,12 +163,4 @@ static inline void *advance (void *p, int incr)
int wav_start_capture (CaptureState *s, const char *path, int freq,
int bits, int nchannels);
bool audio_is_cleaning_up(void);
void audio_cleanup(void);
void audio_sample_to_uint64(void *samples, int pos,
uint64_t *left, uint64_t *right);
void audio_sample_from_uint64(void *samples, int pos,
uint64_t left, uint64_t right);
#endif /* QEMU_AUDIO_H */
#endif /* audio.h */


@@ -21,11 +21,10 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#ifndef QEMU_AUDIO_INT_H
#define QEMU_AUDIO_INT_H
#ifdef CONFIG_AUDIO_COREAUDIO
#ifdef CONFIG_COREAUDIO
#define FLOAT_MIXENG
/* #define RECIPROCAL */
#endif
@@ -141,7 +140,6 @@ struct SWVoiceIn {
QLIST_ENTRY (SWVoiceIn) entries;
};
typedef struct audio_driver audio_driver;
struct audio_driver {
const char *name;
const char *descr;
@@ -155,17 +153,16 @@ struct audio_driver {
int voice_size_out;
int voice_size_in;
int ctl_caps;
QLIST_ENTRY(audio_driver) next;
};
struct audio_pcm_ops {
int (*init_out)(HWVoiceOut *hw, struct audsettings *as, void *drv_opaque);
int (*init_out)(HWVoiceOut *hw, struct audsettings *as);
void (*fini_out)(HWVoiceOut *hw);
int (*run_out) (HWVoiceOut *hw, int live);
int (*write) (SWVoiceOut *sw, void *buf, int size);
int (*ctl_out) (HWVoiceOut *hw, int cmd, ...);
int (*init_in) (HWVoiceIn *hw, struct audsettings *as, void *drv_opaque);
int (*init_in) (HWVoiceIn *hw, struct audsettings *as);
void (*fini_in) (HWVoiceIn *hw);
int (*run_in) (HWVoiceIn *hw);
int (*read) (SWVoiceIn *sw, void *buf, int size);
@@ -205,11 +202,20 @@ struct AudioState {
int vm_running;
};
extern struct audio_driver no_audio_driver;
extern struct audio_driver oss_audio_driver;
extern struct audio_driver sdl_audio_driver;
extern struct audio_driver wav_audio_driver;
extern struct audio_driver fmod_audio_driver;
extern struct audio_driver alsa_audio_driver;
extern struct audio_driver coreaudio_audio_driver;
extern struct audio_driver dsound_audio_driver;
extern struct audio_driver esd_audio_driver;
extern struct audio_driver pa_audio_driver;
extern struct audio_driver spice_audio_driver;
extern struct audio_driver winwave_audio_driver;
extern const struct mixeng_volume nominal_volume;
void audio_driver_register(audio_driver *drv);
audio_driver *audio_driver_lookup(const char *name);
void audio_pcm_init_info (struct audio_pcm_info *info, struct audsettings *as);
void audio_pcm_info_clear_buf (struct audio_pcm_info *info, void *buf, int len);
@@ -248,4 +254,10 @@ static inline int audio_ring_dist (int dst, int src, int len)
#define AUDIO_STRINGIFY_(n) #n
#define AUDIO_STRINGIFY(n) AUDIO_STRINGIFY_(n)
#endif /* QEMU_AUDIO_INT_H */
#if defined _MSC_VER || defined __GNUC__
#define AUDIO_FUNC __FUNCTION__
#else
#define AUDIO_FUNC __FILE__ ":" AUDIO_STRINGIFY (__LINE__)
#endif
#endif /* audio_int.h */
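The AUDIO_STRINGIFY pair above uses a two-level macro so that __LINE__ expands to its numeric value before the # operator turns it into a string. A minimal standalone sketch of that behaviour (not QEMU code; names are invented for the example):

#include <stdio.h>

#define STR_(n) #n
#define STR(n)  STR_(n)

#define WHERE __FILE__ ":" STR(__LINE__)   /* file:line at the use site */

int main(void)
{
    puts(STR_(__LINE__));  /* prints "__LINE__": # blocks macro expansion */
    puts(STR(__LINE__));   /* prints the actual line number as a string   */
    puts(WHERE);
    return 0;
}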


@@ -1,4 +1,3 @@
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "audio.h"
@@ -31,7 +30,7 @@ int audio_pt_init (struct audio_pt *p, void *(*func) (void *),
err = sigfillset (&set);
if (err) {
logerr(p, errno, "%s(%s): sigfillset failed", cap, __func__);
logerr (p, errno, "%s(%s): sigfillset failed", cap, AUDIO_FUNC);
return -1;
}
@@ -57,8 +56,8 @@ int audio_pt_init (struct audio_pt *p, void *(*func) (void *),
err2 = pthread_sigmask (SIG_SETMASK, &old_set, NULL);
if (err2) {
logerr(p, err2, "%s(%s): pthread_sigmask (restore) failed",
cap, __func__);
logerr (p, err2, "%s(%s): pthread_sigmask (restore) failed",
cap, AUDIO_FUNC);
/* We have failed to restore original signal mask, all bets are off,
so terminate the process */
exit (EXIT_FAILURE);
@@ -74,17 +73,17 @@ int audio_pt_init (struct audio_pt *p, void *(*func) (void *),
err2:
err2 = pthread_cond_destroy (&p->cond);
if (err2) {
logerr(p, err2, "%s(%s): pthread_cond_destroy failed", cap, __func__);
logerr (p, err2, "%s(%s): pthread_cond_destroy failed", cap, AUDIO_FUNC);
}
err1:
err2 = pthread_mutex_destroy (&p->mutex);
if (err2) {
logerr(p, err2, "%s(%s): pthread_mutex_destroy failed", cap, __func__);
logerr (p, err2, "%s(%s): pthread_mutex_destroy failed", cap, AUDIO_FUNC);
}
err0:
logerr(p, err, "%s(%s): %s failed", cap, __func__, efunc);
logerr (p, err, "%s(%s): %s failed", cap, AUDIO_FUNC, efunc);
return -1;
}
@@ -94,13 +93,13 @@ int audio_pt_fini (struct audio_pt *p, const char *cap)
err = pthread_cond_destroy (&p->cond);
if (err) {
logerr(p, err, "%s(%s): pthread_cond_destroy failed", cap, __func__);
logerr (p, err, "%s(%s): pthread_cond_destroy failed", cap, AUDIO_FUNC);
ret = -1;
}
err = pthread_mutex_destroy (&p->mutex);
if (err) {
logerr(p, err, "%s(%s): pthread_mutex_destroy failed", cap, __func__);
logerr (p, err, "%s(%s): pthread_mutex_destroy failed", cap, AUDIO_FUNC);
ret = -1;
}
return ret;
@@ -112,7 +111,7 @@ int audio_pt_lock (struct audio_pt *p, const char *cap)
err = pthread_mutex_lock (&p->mutex);
if (err) {
logerr(p, err, "%s(%s): pthread_mutex_lock failed", cap, __func__);
logerr (p, err, "%s(%s): pthread_mutex_lock failed", cap, AUDIO_FUNC);
return -1;
}
return 0;
@@ -124,7 +123,7 @@ int audio_pt_unlock (struct audio_pt *p, const char *cap)
err = pthread_mutex_unlock (&p->mutex);
if (err) {
logerr(p, err, "%s(%s): pthread_mutex_unlock failed", cap, __func__);
logerr (p, err, "%s(%s): pthread_mutex_unlock failed", cap, AUDIO_FUNC);
return -1;
}
return 0;
@@ -136,7 +135,7 @@ int audio_pt_wait (struct audio_pt *p, const char *cap)
err = pthread_cond_wait (&p->cond, &p->mutex);
if (err) {
logerr(p, err, "%s(%s): pthread_cond_wait failed", cap, __func__);
logerr (p, err, "%s(%s): pthread_cond_wait failed", cap, AUDIO_FUNC);
return -1;
}
return 0;
@@ -148,12 +147,12 @@ int audio_pt_unlock_and_signal (struct audio_pt *p, const char *cap)
err = pthread_mutex_unlock (&p->mutex);
if (err) {
logerr(p, err, "%s(%s): pthread_mutex_unlock failed", cap, __func__);
logerr (p, err, "%s(%s): pthread_mutex_unlock failed", cap, AUDIO_FUNC);
return -1;
}
err = pthread_cond_signal (&p->cond);
if (err) {
logerr(p, err, "%s(%s): pthread_cond_signal failed", cap, __func__);
logerr (p, err, "%s(%s): pthread_cond_signal failed", cap, AUDIO_FUNC);
return -1;
}
return 0;
@@ -166,7 +165,7 @@ int audio_pt_join (struct audio_pt *p, void **arg, const char *cap)
err = pthread_join (p->thread, &ret);
if (err) {
logerr(p, err, "%s(%s): pthread_join failed", cap, __func__);
logerr (p, err, "%s(%s): pthread_join failed", cap, AUDIO_FUNC);
return -1;
}
*arg = ret;


@@ -19,4 +19,4 @@ int audio_pt_wait (struct audio_pt *, const char *);
int audio_pt_unlock_and_signal (struct audio_pt *, const char *);
int audio_pt_join (struct audio_pt *, void **, const char *);
#endif /* QEMU_AUDIO_PT_INT_H */
#endif /* audio_pt_int.h */


@@ -57,13 +57,13 @@ static void glue (audio_init_nb_voices_, TYPE) (struct audio_driver *drv)
glue (s->nb_hw_voices_, TYPE) = max_voices;
}
if (audio_bug(__func__, !voice_size && max_voices)) {
if (audio_bug (AUDIO_FUNC, !voice_size && max_voices)) {
dolog ("drv=`%s' voice_size=0 max_voices=%d\n",
drv->name, max_voices);
glue (s->nb_hw_voices_, TYPE) = 0;
}
if (audio_bug(__func__, voice_size && !max_voices)) {
if (audio_bug (AUDIO_FUNC, voice_size && !max_voices)) {
dolog ("drv=`%s' voice_size=%d max_voices=0\n",
drv->name, voice_size);
}
@@ -77,7 +77,7 @@ static void glue (audio_pcm_hw_free_resources_, TYPE) (HW *hw)
static int glue (audio_pcm_hw_alloc_resources_, TYPE) (HW *hw)
{
HWBUF = audio_calloc(__func__, hw->samples, sizeof(struct st_sample));
HWBUF = audio_calloc (AUDIO_FUNC, hw->samples, sizeof (struct st_sample));
if (!HWBUF) {
dolog ("Could not allocate " NAME " buffer (%d samples)\n",
hw->samples);
@@ -105,7 +105,7 @@ static int glue (audio_pcm_sw_alloc_resources_, TYPE) (SW *sw)
samples = ((int64_t) sw->hw->samples << 32) / sw->ratio;
sw->buf = audio_calloc(__func__, samples, sizeof(struct st_sample));
sw->buf = audio_calloc (AUDIO_FUNC, samples, sizeof (struct st_sample));
if (!sw->buf) {
dolog ("Could not allocate buffer for `%s' (%d samples)\n",
SW_NAME (sw), samples);
@@ -238,17 +238,17 @@ static HW *glue (audio_pcm_hw_add_new_, TYPE) (struct audsettings *as)
return NULL;
}
if (audio_bug(__func__, !drv)) {
if (audio_bug (AUDIO_FUNC, !drv)) {
dolog ("No host audio driver\n");
return NULL;
}
if (audio_bug(__func__, !drv->pcm_ops)) {
if (audio_bug (AUDIO_FUNC, !drv->pcm_ops)) {
dolog ("Host audio driver without pcm_ops\n");
return NULL;
}
hw = audio_calloc(__func__, 1, glue(drv->voice_size_, TYPE));
hw = audio_calloc (AUDIO_FUNC, 1, glue (drv->voice_size_, TYPE));
if (!hw) {
dolog ("Can not allocate voice `%s' size %d\n",
drv->name, glue (drv->voice_size_, TYPE));
@@ -262,11 +262,11 @@ static HW *glue (audio_pcm_hw_add_new_, TYPE) (struct audsettings *as)
#ifdef DAC
QLIST_INIT (&hw->cap_head);
#endif
if (glue (hw->pcm_ops->init_, TYPE) (hw, as, s->drv_opaque)) {
if (glue (hw->pcm_ops->init_, TYPE) (hw, as)) {
goto err0;
}
if (audio_bug(__func__, hw->samples <= 0)) {
if (audio_bug (AUDIO_FUNC, hw->samples <= 0)) {
dolog ("hw->samples=%d\n", hw->samples);
goto err1;
}
@@ -339,7 +339,7 @@ static SW *glue (audio_pcm_create_voice_pair_, TYPE) (
hw_as = *as;
}
sw = audio_calloc(__func__, 1, sizeof(*sw));
sw = audio_calloc (AUDIO_FUNC, 1, sizeof (*sw));
if (!sw) {
dolog ("Could not allocate soft voice `%s' (%zu bytes)\n",
sw_name ? sw_name : "unknown", sizeof (*sw));
@@ -379,7 +379,7 @@ static void glue (audio_close_, TYPE) (SW *sw)
void glue (AUD_close_, TYPE) (QEMUSoundCard *card, SW *sw)
{
if (sw) {
if (audio_bug(__func__, !card)) {
if (audio_bug (AUDIO_FUNC, !card)) {
dolog ("card=%p\n", card);
return;
}
@@ -398,8 +398,12 @@ SW *glue (AUD_open_, TYPE) (
)
{
AudioState *s = &glob_audio_state;
#ifdef DAC
int live = 0;
SW *old_sw = NULL;
#endif
if (audio_bug(__func__, !card || !name || !callback_fn || !as)) {
if (audio_bug (AUDIO_FUNC, !card || !name || !callback_fn || !as)) {
dolog ("card=%p name=%p callback_fn=%p as=%p\n",
card, name, callback_fn, as);
goto fail;
@@ -408,12 +412,12 @@ SW *glue (AUD_open_, TYPE) (
ldebug ("open %s, freq %d, nchannels %d, fmt %d\n",
name, as->freq, as->nchannels, as->fmt);
if (audio_bug(__func__, audio_validate_settings(as))) {
if (audio_bug (AUDIO_FUNC, audio_validate_settings (as))) {
audio_print_settings (as);
goto fail;
}
if (audio_bug(__func__, !s->drv)) {
if (audio_bug (AUDIO_FUNC, !s->drv)) {
dolog ("Can not open `%s' (no host audio driver)\n", name);
goto fail;
}
@@ -422,6 +426,29 @@ SW *glue (AUD_open_, TYPE) (
return sw;
}
#ifdef DAC
if (conf.plive && sw && (!sw->active && !sw->empty)) {
live = sw->total_hw_samples_mixed;
#ifdef DEBUG_PLIVE
dolog ("Replacing voice %s with %d live samples\n", SW_NAME (sw), live);
dolog ("Old %s freq %d, bits %d, channels %d\n",
SW_NAME (sw), sw->info.freq, sw->info.bits, sw->info.nchannels);
dolog ("New %s freq %d, bits %d, channels %d\n",
name,
as->freq,
(as->fmt == AUD_FMT_S16 || as->fmt == AUD_FMT_U16) ? 16 : 8,
as->nchannels);
#endif
if (live) {
old_sw = sw;
old_sw->callback.fn = NULL;
sw = NULL;
}
}
#endif
if (!glue (conf.fixed_, TYPE).enabled && sw) {
glue (AUD_close_, TYPE) (card, sw);
sw = NULL;
@@ -454,6 +481,20 @@ SW *glue (AUD_open_, TYPE) (
sw->callback.fn = callback_fn;
sw->callback.opaque = callback_opaque;
#ifdef DAC
if (live) {
int mixed =
(live << old_sw->info.shift)
* old_sw->info.bytes_per_second
/ sw->info.bytes_per_second;
#ifdef DEBUG_PLIVE
dolog ("Silence will be mixed %d\n", mixed);
#endif
sw->total_hw_samples_mixed += mixed;
}
#endif
#ifdef DEBUG_AUDIO
dolog ("%s\n", name);
audio_pcm_print_info ("hw", &sw->hw->info);


@@ -1,6 +1,5 @@
/* public domain */
#include "qemu/osdep.h"
#include "qemu-common.h"
#define AUDIO_CAP "win-int"


@@ -22,8 +22,8 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include <CoreAudio/CoreAudio.h>
#include <string.h> /* strerror */
#include <pthread.h> /* pthread_X */
#include "qemu-common.h"
@@ -32,248 +32,28 @@
#define AUDIO_CAP "coreaudio"
#include "audio_int.h"
#ifndef MAC_OS_X_VERSION_10_6
#define MAC_OS_X_VERSION_10_6 1060
#endif
typedef struct {
struct {
int buffer_frames;
int nbuffers;
} CoreaudioConf;
int isAtexit;
} conf = {
.buffer_frames = 512,
.nbuffers = 4,
.isAtexit = 0
};
typedef struct coreaudioVoiceOut {
HWVoiceOut hw;
pthread_mutex_t mutex;
int isAtexit;
AudioDeviceID outputDeviceID;
UInt32 audioDevicePropertyBufferFrameSize;
AudioStreamBasicDescription outputStreamBasicDescription;
AudioDeviceIOProcID ioprocid;
int live;
int decr;
int rpos;
} coreaudioVoiceOut;
#if MAC_OS_X_VERSION_MAX_ALLOWED >= MAC_OS_X_VERSION_10_6
/* The APIs used here only become available from 10.6 */
static OSStatus coreaudio_get_voice(AudioDeviceID *id)
{
UInt32 size = sizeof(*id);
AudioObjectPropertyAddress addr = {
kAudioHardwarePropertyDefaultOutputDevice,
kAudioObjectPropertyScopeGlobal,
kAudioObjectPropertyElementMaster
};
return AudioObjectGetPropertyData(kAudioObjectSystemObject,
&addr,
0,
NULL,
&size,
id);
}
static OSStatus coreaudio_get_framesizerange(AudioDeviceID id,
AudioValueRange *framerange)
{
UInt32 size = sizeof(*framerange);
AudioObjectPropertyAddress addr = {
kAudioDevicePropertyBufferFrameSizeRange,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
return AudioObjectGetPropertyData(id,
&addr,
0,
NULL,
&size,
framerange);
}
static OSStatus coreaudio_get_framesize(AudioDeviceID id, UInt32 *framesize)
{
UInt32 size = sizeof(*framesize);
AudioObjectPropertyAddress addr = {
kAudioDevicePropertyBufferFrameSize,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
return AudioObjectGetPropertyData(id,
&addr,
0,
NULL,
&size,
framesize);
}
static OSStatus coreaudio_set_framesize(AudioDeviceID id, UInt32 *framesize)
{
UInt32 size = sizeof(*framesize);
AudioObjectPropertyAddress addr = {
kAudioDevicePropertyBufferFrameSize,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
return AudioObjectSetPropertyData(id,
&addr,
0,
NULL,
size,
framesize);
}
static OSStatus coreaudio_get_streamformat(AudioDeviceID id,
AudioStreamBasicDescription *d)
{
UInt32 size = sizeof(*d);
AudioObjectPropertyAddress addr = {
kAudioDevicePropertyStreamFormat,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
return AudioObjectGetPropertyData(id,
&addr,
0,
NULL,
&size,
d);
}
static OSStatus coreaudio_set_streamformat(AudioDeviceID id,
AudioStreamBasicDescription *d)
{
UInt32 size = sizeof(*d);
AudioObjectPropertyAddress addr = {
kAudioDevicePropertyStreamFormat,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
return AudioObjectSetPropertyData(id,
&addr,
0,
NULL,
size,
d);
}
static OSStatus coreaudio_get_isrunning(AudioDeviceID id, UInt32 *result)
{
UInt32 size = sizeof(*result);
AudioObjectPropertyAddress addr = {
kAudioDevicePropertyDeviceIsRunning,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
return AudioObjectGetPropertyData(id,
&addr,
0,
NULL,
&size,
result);
}
#else
/* Legacy versions of functions using deprecated APIs */
static OSStatus coreaudio_get_voice(AudioDeviceID *id)
{
UInt32 size = sizeof(*id);
return AudioHardwareGetProperty(
kAudioHardwarePropertyDefaultOutputDevice,
&size,
id);
}
static OSStatus coreaudio_get_framesizerange(AudioDeviceID id,
AudioValueRange *framerange)
{
UInt32 size = sizeof(*framerange);
return AudioDeviceGetProperty(
id,
0,
0,
kAudioDevicePropertyBufferFrameSizeRange,
&size,
framerange);
}
static OSStatus coreaudio_get_framesize(AudioDeviceID id, UInt32 *framesize)
{
UInt32 size = sizeof(*framesize);
return AudioDeviceGetProperty(
id,
0,
false,
kAudioDevicePropertyBufferFrameSize,
&size,
framesize);
}
static OSStatus coreaudio_set_framesize(AudioDeviceID id, UInt32 *framesize)
{
UInt32 size = sizeof(*framesize);
return AudioDeviceSetProperty(
id,
NULL,
0,
false,
kAudioDevicePropertyBufferFrameSize,
size,
framesize);
}
static OSStatus coreaudio_get_streamformat(AudioDeviceID id,
AudioStreamBasicDescription *d)
{
UInt32 size = sizeof(*d);
return AudioDeviceGetProperty(
id,
0,
false,
kAudioDevicePropertyStreamFormat,
&size,
d);
}
static OSStatus coreaudio_set_streamformat(AudioDeviceID id,
AudioStreamBasicDescription *d)
{
UInt32 size = sizeof(*d);
return AudioDeviceSetProperty(
id,
0,
0,
0,
kAudioDevicePropertyStreamFormat,
size,
d);
}
static OSStatus coreaudio_get_isrunning(AudioDeviceID id, UInt32 *result)
{
UInt32 size = sizeof(*result);
return AudioDeviceGetProperty(
id,
0,
0,
kAudioDevicePropertyDeviceIsRunning,
&size,
result);
}
#endif
static void coreaudio_logstatus (OSStatus status)
{
const char *str = "BUG";
@@ -368,7 +148,10 @@ static inline UInt32 isPlaying (AudioDeviceID outputDeviceID)
{
OSStatus status;
UInt32 result = 0;
status = coreaudio_get_isrunning(outputDeviceID, &result);
UInt32 propertySize = sizeof(outputDeviceID);
status = AudioDeviceGetProperty(
outputDeviceID, 0, 0,
kAudioDevicePropertyDeviceIsRunning, &propertySize, &result);
if (status != kAudioHardwareNoError) {
coreaudio_logerr(status,
"Could not determine whether Device is playing\n");
@@ -376,6 +159,11 @@ static inline UInt32 isPlaying (AudioDeviceID outputDeviceID)
return result;
}
static void coreaudio_atexit (void)
{
conf.isAtexit = 1;
}
static int coreaudio_lock (coreaudioVoiceOut *core, const char *fn_name)
{
int err;
@@ -499,15 +287,14 @@ static int coreaudio_write (SWVoiceOut *sw, void *buf, int len)
return audio_pcm_sw_write (sw, buf, len);
}
static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int coreaudio_init_out (HWVoiceOut *hw, struct audsettings *as)
{
OSStatus status;
coreaudioVoiceOut *core = (coreaudioVoiceOut *) hw;
UInt32 propertySize;
int err;
const char *typ = "playback";
AudioValueRange frameRange;
CoreaudioConf *conf = drv_opaque;
/* create mutex */
err = pthread_mutex_init(&core->mutex, NULL);
@@ -518,7 +305,12 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
audio_pcm_init_info (&hw->info, as);
status = coreaudio_get_voice(&core->outputDeviceID);
/* open default output device */
propertySize = sizeof(core->outputDeviceID);
status = AudioHardwareGetProperty(
kAudioHardwarePropertyDefaultOutputDevice,
&propertySize,
&core->outputDeviceID);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ,
"Could not get default output Device\n");
@@ -530,29 +322,42 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
}
/* get minimum and maximum buffer frame sizes */
status = coreaudio_get_framesizerange(core->outputDeviceID,
&frameRange);
propertySize = sizeof(frameRange);
status = AudioDeviceGetProperty(
core->outputDeviceID,
0,
0,
kAudioDevicePropertyBufferFrameSizeRange,
&propertySize,
&frameRange);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ,
"Could not get device buffer frame range\n");
return -1;
}
if (frameRange.mMinimum > conf->buffer_frames) {
if (frameRange.mMinimum > conf.buffer_frames) {
core->audioDevicePropertyBufferFrameSize = (UInt32) frameRange.mMinimum;
dolog ("warning: Upsizing Buffer Frames to %f\n", frameRange.mMinimum);
}
else if (frameRange.mMaximum < conf->buffer_frames) {
else if (frameRange.mMaximum < conf.buffer_frames) {
core->audioDevicePropertyBufferFrameSize = (UInt32) frameRange.mMaximum;
dolog ("warning: Downsizing Buffer Frames to %f\n", frameRange.mMaximum);
}
else {
core->audioDevicePropertyBufferFrameSize = conf->buffer_frames;
core->audioDevicePropertyBufferFrameSize = conf.buffer_frames;
}
/* set Buffer Frame Size */
status = coreaudio_set_framesize(core->outputDeviceID,
&core->audioDevicePropertyBufferFrameSize);
propertySize = sizeof(core->audioDevicePropertyBufferFrameSize);
status = AudioDeviceSetProperty(
core->outputDeviceID,
NULL,
0,
false,
kAudioDevicePropertyBufferFrameSize,
propertySize,
&core->audioDevicePropertyBufferFrameSize);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ,
"Could not set device buffer frame size %" PRIu32 "\n",
@@ -561,18 +366,30 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
}
/* get Buffer Frame Size */
status = coreaudio_get_framesize(core->outputDeviceID,
&core->audioDevicePropertyBufferFrameSize);
propertySize = sizeof(core->audioDevicePropertyBufferFrameSize);
status = AudioDeviceGetProperty(
core->outputDeviceID,
0,
false,
kAudioDevicePropertyBufferFrameSize,
&propertySize,
&core->audioDevicePropertyBufferFrameSize);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ,
"Could not get device buffer frame size\n");
return -1;
}
hw->samples = conf->nbuffers * core->audioDevicePropertyBufferFrameSize;
hw->samples = conf.nbuffers * core->audioDevicePropertyBufferFrameSize;
/* get StreamFormat */
status = coreaudio_get_streamformat(core->outputDeviceID,
&core->outputStreamBasicDescription);
propertySize = sizeof(core->outputStreamBasicDescription);
status = AudioDeviceGetProperty(
core->outputDeviceID,
0,
false,
kAudioDevicePropertyStreamFormat,
&propertySize,
&core->outputStreamBasicDescription);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ,
"Could not get Device Stream properties\n");
@@ -582,8 +399,15 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
/* set Samplerate */
core->outputStreamBasicDescription.mSampleRate = (Float64) as->freq;
status = coreaudio_set_streamformat(core->outputDeviceID,
&core->outputStreamBasicDescription);
propertySize = sizeof(core->outputStreamBasicDescription);
status = AudioDeviceSetProperty(
core->outputDeviceID,
0,
0,
0,
kAudioDevicePropertyStreamFormat,
propertySize,
&core->outputStreamBasicDescription);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ, "Could not set samplerate %d\n",
as->freq);
@@ -592,12 +416,8 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
}
/* set Callback */
core->ioprocid = NULL;
status = AudioDeviceCreateIOProcID(core->outputDeviceID,
audioDeviceIOProc,
hw,
&core->ioprocid);
if (status != kAudioHardwareNoError || core->ioprocid == NULL) {
status = AudioDeviceAddIOProc(core->outputDeviceID, audioDeviceIOProc, hw);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ, "Could not set IOProc\n");
core->outputDeviceID = kAudioDeviceUnknown;
return -1;
@@ -605,10 +425,10 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
/* start Playback */
if (!isPlaying(core->outputDeviceID)) {
status = AudioDeviceStart(core->outputDeviceID, core->ioprocid);
status = AudioDeviceStart(core->outputDeviceID, audioDeviceIOProc);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ, "Could not start playback\n");
AudioDeviceDestroyIOProcID(core->outputDeviceID, core->ioprocid);
AudioDeviceRemoveIOProc(core->outputDeviceID, audioDeviceIOProc);
core->outputDeviceID = kAudioDeviceUnknown;
return -1;
}
@@ -623,18 +443,18 @@ static void coreaudio_fini_out (HWVoiceOut *hw)
int err;
coreaudioVoiceOut *core = (coreaudioVoiceOut *) hw;
if (!audio_is_cleaning_up()) {
if (!conf.isAtexit) {
/* stop playback */
if (isPlaying(core->outputDeviceID)) {
status = AudioDeviceStop(core->outputDeviceID, core->ioprocid);
status = AudioDeviceStop(core->outputDeviceID, audioDeviceIOProc);
if (status != kAudioHardwareNoError) {
coreaudio_logerr (status, "Could not stop playback\n");
}
}
/* remove callback */
status = AudioDeviceDestroyIOProcID(core->outputDeviceID,
core->ioprocid);
status = AudioDeviceRemoveIOProc(core->outputDeviceID,
audioDeviceIOProc);
if (status != kAudioHardwareNoError) {
coreaudio_logerr (status, "Could not remove IOProc\n");
}
@@ -657,7 +477,7 @@ static int coreaudio_ctl_out (HWVoiceOut *hw, int cmd, ...)
case VOICE_ENABLE:
/* start playback */
if (!isPlaying(core->outputDeviceID)) {
status = AudioDeviceStart(core->outputDeviceID, core->ioprocid);
status = AudioDeviceStart(core->outputDeviceID, audioDeviceIOProc);
if (status != kAudioHardwareNoError) {
coreaudio_logerr (status, "Could not resume playback\n");
}
@@ -666,10 +486,9 @@ static int coreaudio_ctl_out (HWVoiceOut *hw, int cmd, ...)
case VOICE_DISABLE:
/* stop playback */
if (!audio_is_cleaning_up()) {
if (!conf.isAtexit) {
if (isPlaying(core->outputDeviceID)) {
status = AudioDeviceStop(core->outputDeviceID,
core->ioprocid);
status = AudioDeviceStop(core->outputDeviceID, audioDeviceIOProc);
if (status != kAudioHardwareNoError) {
coreaudio_logerr (status, "Could not pause playback\n");
}
@@ -680,35 +499,28 @@ static int coreaudio_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static CoreaudioConf glob_conf = {
.buffer_frames = 512,
.nbuffers = 4,
};
static void *coreaudio_audio_init (void)
{
CoreaudioConf *conf = g_malloc(sizeof(CoreaudioConf));
*conf = glob_conf;
return conf;
atexit(coreaudio_atexit);
return &coreaudio_audio_init;
}
static void coreaudio_audio_fini (void *opaque)
{
g_free(opaque);
(void) opaque;
}
static struct audio_option coreaudio_options[] = {
{
.name = "BUFFER_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.buffer_frames,
.valp = &conf.buffer_frames,
.descr = "Size of the buffer in frames"
},
{
.name = "BUFFER_COUNT",
.tag = AUD_OPT_INT,
.valp = &glob_conf.nbuffers,
.valp = &conf.nbuffers,
.descr = "Number of buffers"
},
{ /* End of list */ }
@@ -722,7 +534,7 @@ static struct audio_pcm_ops coreaudio_pcm_ops = {
.ctl_out = coreaudio_ctl_out
};
static struct audio_driver coreaudio_audio_driver = {
struct audio_driver coreaudio_audio_driver = {
.name = "coreaudio",
.descr = "CoreAudio http://developer.apple.com/audio/coreaudio.html",
.options = coreaudio_options,
@@ -735,9 +547,3 @@ static struct audio_driver coreaudio_audio_driver = {
.voice_size_out = sizeof (coreaudioVoiceOut),
.voice_size_in = 0
};
static void register_audio_coreaudio(void)
{
audio_driver_register(&coreaudio_audio_driver);
}
type_init(register_audio_coreaudio);


@@ -67,11 +67,11 @@ static int glue (dsound_lock_, TYPE) (
LPVOID *p2p,
DWORD *blen1p,
DWORD *blen2p,
int entire,
dsound *s
int entire
)
{
HRESULT hr;
int i;
LPVOID p1 = NULL, p2 = NULL;
DWORD blen1 = 0, blen2 = 0;
DWORD flag;
@@ -81,18 +81,37 @@ static int glue (dsound_lock_, TYPE) (
#else
flag = entire ? DSBLOCK_ENTIREBUFFER : 0;
#endif
hr = glue(IFACE, _Lock)(buf, pos, len, &p1, &blen1, &p2, &blen2, flag);
for (i = 0; i < conf.lock_retries; ++i) {
hr = glue (IFACE, _Lock) (
buf,
pos,
len,
&p1,
&blen1,
&p2,
&blen2,
flag
);
if (FAILED (hr)) {
if (FAILED (hr)) {
#ifndef DSBTYPE_IN
if (hr == DSERR_BUFFERLOST) {
if (glue (dsound_restore_, TYPE) (buf, s)) {
dsound_logerr (hr, "Could not lock " NAME "\n");
if (hr == DSERR_BUFFERLOST) {
if (glue (dsound_restore_, TYPE) (buf)) {
dsound_logerr (hr, "Could not lock " NAME "\n");
goto fail;
}
continue;
}
#endif
dsound_logerr (hr, "Could not lock " NAME "\n");
goto fail;
}
#endif
dsound_logerr (hr, "Could not lock " NAME "\n");
break;
}
if (i == conf.lock_retries) {
dolog ("%d attempts to lock " NAME " failed\n", i);
goto fail;
}
@@ -155,19 +174,16 @@ static void dsound_fini_out (HWVoiceOut *hw)
}
#ifdef DSBTYPE_IN
static int dsound_init_in(HWVoiceIn *hw, struct audsettings *as,
void *drv_opaque)
static int dsound_init_in (HWVoiceIn *hw, struct audsettings *as)
#else
static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int dsound_init_out (HWVoiceOut *hw, struct audsettings *as)
#endif
{
int err;
HRESULT hr;
dsound *s = drv_opaque;
dsound *s = &glob_dsound;
WAVEFORMATEX wfx;
struct audsettings obt_as;
DSoundConf *conf = &s->conf;
#ifdef DSBTYPE_IN
const char *typ = "ADC";
DSoundVoiceIn *ds = (DSoundVoiceIn *) hw;
@@ -194,7 +210,7 @@ static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
bd.dwSize = sizeof (bd);
bd.lpwfxFormat = &wfx;
#ifdef DSBTYPE_IN
bd.dwBufferBytes = conf->bufsize_in;
bd.dwBufferBytes = conf.bufsize_in;
hr = IDirectSoundCapture_CreateCaptureBuffer (
s->dsound_capture,
&bd,
@@ -203,7 +219,7 @@ static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
);
#else
bd.dwFlags = DSBCAPS_STICKYFOCUS | DSBCAPS_GETCURRENTPOSITION2;
bd.dwBufferBytes = conf->bufsize_out;
bd.dwBufferBytes = conf.bufsize_out;
hr = IDirectSound_CreateSoundBuffer (
s->dsound,
&bd,
@@ -253,7 +269,6 @@ static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
);
}
hw->samples = bc.dwBufferBytes >> hw->info.shift;
ds->s = s;
#ifdef DEBUG_DSOUND
dolog ("caps %ld, desc %ld\n",


@@ -26,7 +26,6 @@
* SEAL 1.07 by Carlos 'pel' Hasan was used as documentation
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "audio.h"
@@ -42,25 +41,42 @@
/* #define DEBUG_DSOUND */
typedef struct {
static struct {
int lock_retries;
int restore_retries;
int getstatus_retries;
int set_primary;
int bufsize_in;
int bufsize_out;
struct audsettings settings;
int latency_millis;
} DSoundConf;
} conf = {
.lock_retries = 1,
.restore_retries = 1,
.getstatus_retries = 1,
.set_primary = 0,
.bufsize_in = 16384,
.bufsize_out = 16384,
.settings.freq = 44100,
.settings.nchannels = 2,
.settings.fmt = AUD_FMT_S16,
.latency_millis = 10
};
typedef struct {
LPDIRECTSOUND dsound;
LPDIRECTSOUNDCAPTURE dsound_capture;
LPDIRECTSOUNDBUFFER dsound_primary_buffer;
struct audsettings settings;
DSoundConf conf;
} dsound;
static dsound glob_dsound;
typedef struct {
HWVoiceOut hw;
LPDIRECTSOUNDBUFFER dsound_buffer;
DWORD old_pos;
int first_time;
dsound *s;
#ifdef DEBUG_DSOUND
DWORD old_ppos;
DWORD played;
@@ -72,7 +88,6 @@ typedef struct {
HWVoiceIn hw;
int first_time;
LPDIRECTSOUNDCAPTUREBUFFER dsound_capture_buffer;
dsound *s;
} DSoundVoiceIn;
static void dsound_log_hresult (HRESULT hr)
@@ -266,17 +281,29 @@ static void print_wave_format (WAVEFORMATEX *wfx)
}
#endif
static int dsound_restore_out (LPDIRECTSOUNDBUFFER dsb, dsound *s)
static int dsound_restore_out (LPDIRECTSOUNDBUFFER dsb)
{
HRESULT hr;
int i;
hr = IDirectSoundBuffer_Restore (dsb);
for (i = 0; i < conf.restore_retries; ++i) {
hr = IDirectSoundBuffer_Restore (dsb);
if (hr != DS_OK) {
dsound_logerr (hr, "Could not restore playback buffer\n");
return -1;
switch (hr) {
case DS_OK:
return 0;
case DSERR_BUFFERLOST:
continue;
default:
dsound_logerr (hr, "Could not restore playback buffer\n");
return -1;
}
}
return 0;
dolog ("%d attempts to restore playback buffer failed\n", i);
return -1;
}
#include "dsound_template.h"
@@ -284,20 +311,25 @@ static int dsound_restore_out (LPDIRECTSOUNDBUFFER dsb, dsound *s)
#include "dsound_template.h"
#undef DSBTYPE_IN
static int dsound_get_status_out (LPDIRECTSOUNDBUFFER dsb, DWORD *statusp,
dsound *s)
static int dsound_get_status_out (LPDIRECTSOUNDBUFFER dsb, DWORD *statusp)
{
HRESULT hr;
int i;
hr = IDirectSoundBuffer_GetStatus (dsb, statusp);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not get playback buffer status\n");
return -1;
}
for (i = 0; i < conf.getstatus_retries; ++i) {
hr = IDirectSoundBuffer_GetStatus (dsb, statusp);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not get playback buffer status\n");
return -1;
}
if (*statusp & DSERR_BUFFERLOST) {
dsound_restore_out(dsb, s);
return -1;
if (*statusp & DSERR_BUFFERLOST) {
if (dsound_restore_out (dsb)) {
return -1;
}
continue;
}
break;
}
return 0;
@@ -344,8 +376,7 @@ static void dsound_write_sample (HWVoiceOut *hw, uint8_t *dst, int dst_len)
hw->rpos = pos % hw->samples;
}
static void dsound_clear_sample (HWVoiceOut *hw, LPDIRECTSOUNDBUFFER dsb,
dsound *s)
static void dsound_clear_sample (HWVoiceOut *hw, LPDIRECTSOUNDBUFFER dsb)
{
int err;
LPVOID p1, p2;
@@ -358,8 +389,7 @@ static void dsound_clear_sample (HWVoiceOut *hw, LPDIRECTSOUNDBUFFER dsb,
hw->samples << hw->info.shift,
&p1, &p2,
&blen1, &blen2,
1,
s
1
);
if (err) {
return;
@@ -385,9 +415,25 @@ static void dsound_clear_sample (HWVoiceOut *hw, LPDIRECTSOUNDBUFFER dsb,
dsound_unlock_out (dsb, p1, p2, blen1, blen2);
}
static int dsound_open (dsound *s)
static void dsound_close (dsound *s)
{
HRESULT hr;
if (s->dsound_primary_buffer) {
hr = IDirectSoundBuffer_Release (s->dsound_primary_buffer);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not release primary buffer\n");
}
s->dsound_primary_buffer = NULL;
}
}
static int dsound_open (dsound *s)
{
int err;
HRESULT hr;
WAVEFORMATEX wfx;
DSBUFFERDESC dsbd;
HWND hwnd;
hwnd = GetForegroundWindow ();
@@ -403,7 +449,63 @@ static int dsound_open (dsound *s)
return -1;
}
if (!conf.set_primary) {
return 0;
}
err = waveformat_from_audio_settings (&wfx, &conf.settings);
if (err) {
return -1;
}
memset (&dsbd, 0, sizeof (dsbd));
dsbd.dwSize = sizeof (dsbd);
dsbd.dwFlags = DSBCAPS_PRIMARYBUFFER;
dsbd.dwBufferBytes = 0;
dsbd.lpwfxFormat = NULL;
hr = IDirectSound_CreateSoundBuffer (
s->dsound,
&dsbd,
&s->dsound_primary_buffer,
NULL
);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not create primary playback buffer\n");
return -1;
}
hr = IDirectSoundBuffer_SetFormat (s->dsound_primary_buffer, &wfx);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not set primary playback buffer format\n");
}
hr = IDirectSoundBuffer_GetFormat (
s->dsound_primary_buffer,
&wfx,
sizeof (wfx),
NULL
);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not get primary playback buffer format\n");
goto fail0;
}
#ifdef DEBUG_DSOUND
dolog ("Primary\n");
print_wave_format (&wfx);
#endif
err = waveformat_to_audio_settings (&wfx, &s->settings);
if (err) {
goto fail0;
}
return 0;
fail0:
dsound_close (s);
return -1;
}
static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
@@ -412,7 +514,6 @@ static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
DWORD status;
DSoundVoiceOut *ds = (DSoundVoiceOut *) hw;
LPDIRECTSOUNDBUFFER dsb = ds->dsound_buffer;
dsound *s = ds->s;
if (!dsb) {
dolog ("Attempt to control voice without a buffer\n");
@@ -421,7 +522,7 @@ static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
switch (cmd) {
case VOICE_ENABLE:
if (dsound_get_status_out (dsb, &status, s)) {
if (dsound_get_status_out (dsb, &status)) {
return -1;
}
@@ -430,7 +531,7 @@ static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
dsound_clear_sample (hw, dsb, s);
dsound_clear_sample (hw, dsb);
hr = IDirectSoundBuffer_Play (dsb, 0, 0, DSBPLAY_LOOPING);
if (FAILED (hr)) {
@@ -440,7 +541,7 @@ static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
break;
case VOICE_DISABLE:
if (dsound_get_status_out (dsb, &status, s)) {
if (dsound_get_status_out (dsb, &status)) {
return -1;
}
@@ -477,8 +578,6 @@ static int dsound_run_out (HWVoiceOut *hw, int live)
DWORD wpos, ppos, old_pos;
LPVOID p1, p2;
int bufsize;
dsound *s = ds->s;
DSoundConf *conf = &s->conf;
if (!dsb) {
dolog ("Attempt to run empty with playback buffer\n");
@@ -501,14 +600,14 @@ static int dsound_run_out (HWVoiceOut *hw, int live)
len = live << hwshift;
if (ds->first_time) {
if (conf->latency_millis) {
if (conf.latency_millis) {
DWORD cur_blat;
cur_blat = audio_ring_dist (wpos, ppos, bufsize);
ds->first_time = 0;
old_pos = wpos;
old_pos +=
millis_to_bytes (&hw->info, conf->latency_millis) - cur_blat;
millis_to_bytes (&hw->info, conf.latency_millis) - cur_blat;
old_pos %= bufsize;
old_pos &= ~hw->info.align;
}
@@ -543,7 +642,7 @@ static int dsound_run_out (HWVoiceOut *hw, int live)
}
}
if (audio_bug(__func__, len < 0 || len > bufsize)) {
if (audio_bug (AUDIO_FUNC, len < 0 || len > bufsize)) {
dolog ("len=%d bufsize=%d old_pos=%ld ppos=%ld\n",
len, bufsize, old_pos, ppos);
return 0;
@@ -564,8 +663,7 @@ static int dsound_run_out (HWVoiceOut *hw, int live)
len,
&p1, &p2,
&blen1, &blen2,
0,
s
0
);
if (err) {
return 0;
@@ -668,7 +766,6 @@ static int dsound_run_in (HWVoiceIn *hw)
DWORD cpos, rpos;
LPVOID p1, p2;
int hwshift;
dsound *s = ds->s;
if (!dscb) {
dolog ("Attempt to run without capture buffer\n");
@@ -723,8 +820,7 @@ static int dsound_run_in (HWVoiceIn *hw)
&p2,
&blen1,
&blen2,
0,
s
0
);
if (err) {
return 0;
@@ -747,19 +843,12 @@ static int dsound_run_in (HWVoiceIn *hw)
return decr;
}
static DSoundConf glob_conf = {
.bufsize_in = 16384,
.bufsize_out = 16384,
.latency_millis = 10
};
static void dsound_audio_fini (void *opaque)
{
HRESULT hr;
dsound *s = opaque;
if (!s->dsound) {
g_free(s);
return;
}
@@ -770,7 +859,6 @@ static void dsound_audio_fini (void *opaque)
s->dsound = NULL;
if (!s->dsound_capture) {
g_free(s);
return;
}
@@ -779,21 +867,17 @@ static void dsound_audio_fini (void *opaque)
dsound_logerr (hr, "Could not release DirectSoundCapture\n");
}
s->dsound_capture = NULL;
g_free(s);
}
static void *dsound_audio_init (void)
{
int err;
HRESULT hr;
dsound *s = g_malloc0(sizeof(dsound));
dsound *s = &glob_dsound;
s->conf = glob_conf;
hr = CoInitialize (NULL);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not initialize COM\n");
g_free(s);
return NULL;
}
@@ -806,7 +890,6 @@ static void *dsound_audio_init (void)
);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not create DirectSound instance\n");
g_free(s);
return NULL;
}
@@ -818,7 +901,7 @@ static void *dsound_audio_init (void)
if (FAILED (hr)) {
dsound_logerr (hr, "Could not release DirectSound\n");
}
g_free(s);
s->dsound = NULL;
return NULL;
}
@@ -855,22 +938,64 @@ static void *dsound_audio_init (void)
}
static struct audio_option dsound_options[] = {
{
.name = "LOCK_RETRIES",
.tag = AUD_OPT_INT,
.valp = &conf.lock_retries,
.descr = "Number of times to attempt locking the buffer"
},
{
.name = "RESTOURE_RETRIES",
.tag = AUD_OPT_INT,
.valp = &conf.restore_retries,
.descr = "Number of times to attempt restoring the buffer"
},
{
.name = "GETSTATUS_RETRIES",
.tag = AUD_OPT_INT,
.valp = &conf.getstatus_retries,
.descr = "Number of times to attempt getting status of the buffer"
},
{
.name = "SET_PRIMARY",
.tag = AUD_OPT_BOOL,
.valp = &conf.set_primary,
.descr = "Set the parameters of primary buffer"
},
{
.name = "LATENCY_MILLIS",
.tag = AUD_OPT_INT,
.valp = &glob_conf.latency_millis,
.valp = &conf.latency_millis,
.descr = "(undocumented)"
},
{
.name = "PRIMARY_FREQ",
.tag = AUD_OPT_INT,
.valp = &conf.settings.freq,
.descr = "Primary buffer frequency"
},
{
.name = "PRIMARY_CHANNELS",
.tag = AUD_OPT_INT,
.valp = &conf.settings.nchannels,
.descr = "Primary buffer number of channels (1 - mono, 2 - stereo)"
},
{
.name = "PRIMARY_FMT",
.tag = AUD_OPT_FMT,
.valp = &conf.settings.fmt,
.descr = "Primary buffer format"
},
{
.name = "BUFSIZE_OUT",
.tag = AUD_OPT_INT,
.valp = &glob_conf.bufsize_out,
.valp = &conf.bufsize_out,
.descr = "(undocumented)"
},
{
.name = "BUFSIZE_IN",
.tag = AUD_OPT_INT,
.valp = &glob_conf.bufsize_in,
.valp = &conf.bufsize_in,
.descr = "(undocumented)"
},
{ /* End of list */ }
@@ -890,7 +1015,7 @@ static struct audio_pcm_ops dsound_pcm_ops = {
.ctl_in = dsound_ctl_in
};
static struct audio_driver dsound_audio_driver = {
struct audio_driver dsound_audio_driver = {
.name = "dsound",
.descr = "DirectSound http://wikipedia.org/wiki/DirectSound",
.options = dsound_options,
@@ -903,9 +1028,3 @@ static struct audio_driver dsound_audio_driver = {
.voice_size_out = sizeof (DSoundVoiceOut),
.voice_size_in = sizeof (DSoundVoiceIn)
};
static void register_audio_dsound(void)
{
audio_driver_register(&dsound_audio_driver);
}
type_init(register_audio_dsound);

audio/esdaudio.c (new file, 557 lines)

@@ -0,0 +1,557 @@
/*
* QEMU ESD audio driver
*
* Copyright (c) 2006 Frederick Reeve (brushed up by malc)
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <esd.h>
#include "qemu-common.h"
#include "audio.h"
#define AUDIO_CAP "esd"
#include "audio_int.h"
#include "audio_pt_int.h"
typedef struct {
HWVoiceOut hw;
int done;
int live;
int decr;
int rpos;
void *pcm_buf;
int fd;
struct audio_pt pt;
} ESDVoiceOut;
typedef struct {
HWVoiceIn hw;
int done;
int dead;
int incr;
int wpos;
void *pcm_buf;
int fd;
struct audio_pt pt;
} ESDVoiceIn;
static struct {
int samples;
int divisor;
char *dac_host;
char *adc_host;
} conf = {
.samples = 1024,
.divisor = 2,
};
static void GCC_FMT_ATTR (2, 3) qesd_logerr (int err, const char *fmt, ...)
{
va_list ap;
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
AUD_log (AUDIO_CAP, "Reason: %s\n", strerror (err));
}
/* playback */
static void *qesd_thread_out (void *arg)
{
ESDVoiceOut *esd = arg;
HWVoiceOut *hw = &esd->hw;
int threshold;
threshold = conf.divisor ? hw->samples / conf.divisor : 0;
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
for (;;) {
int decr, to_mix, rpos;
for (;;) {
if (esd->done) {
goto exit;
}
if (esd->live > threshold) {
break;
}
if (audio_pt_wait (&esd->pt, AUDIO_FUNC)) {
goto exit;
}
}
decr = to_mix = esd->live;
rpos = hw->rpos;
if (audio_pt_unlock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
while (to_mix) {
ssize_t written;
int chunk = audio_MIN (to_mix, hw->samples - rpos);
struct st_sample *src = hw->mix_buf + rpos;
hw->clip (esd->pcm_buf, src, chunk);
again:
written = write (esd->fd, esd->pcm_buf, chunk << hw->info.shift);
if (written == -1) {
if (errno == EINTR || errno == EAGAIN) {
goto again;
}
qesd_logerr (errno, "write failed\n");
return NULL;
}
if (written != chunk << hw->info.shift) {
int wsamples = written >> hw->info.shift;
int wbytes = wsamples << hw->info.shift;
if (wbytes != written) {
dolog ("warning: Misaligned write %d (requested %zd), "
"alignment %d\n",
wbytes, written, hw->info.align + 1);
}
to_mix -= wsamples;
rpos = (rpos + wsamples) % hw->samples;
break;
}
rpos = (rpos + chunk) % hw->samples;
to_mix -= chunk;
}
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
esd->rpos = rpos;
esd->live -= decr;
esd->decr += decr;
}
exit:
audio_pt_unlock (&esd->pt, AUDIO_FUNC);
return NULL;
}
static int qesd_run_out (HWVoiceOut *hw, int live)
{
int decr;
ESDVoiceOut *esd = (ESDVoiceOut *) hw;
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return 0;
}
decr = audio_MIN (live, esd->decr);
esd->decr -= decr;
esd->live = live - decr;
hw->rpos = esd->rpos;
if (esd->live > 0) {
audio_pt_unlock_and_signal (&esd->pt, AUDIO_FUNC);
}
else {
audio_pt_unlock (&esd->pt, AUDIO_FUNC);
}
return decr;
}
static int qesd_write (SWVoiceOut *sw, void *buf, int len)
{
return audio_pcm_sw_write (sw, buf, len);
}
static int qesd_init_out (HWVoiceOut *hw, struct audsettings *as)
{
ESDVoiceOut *esd = (ESDVoiceOut *) hw;
struct audsettings obt_as = *as;
int esdfmt = ESD_STREAM | ESD_PLAY;
esdfmt |= (as->nchannels == 2) ? ESD_STEREO : ESD_MONO;
switch (as->fmt) {
case AUD_FMT_S8:
case AUD_FMT_U8:
esdfmt |= ESD_BITS8;
obt_as.fmt = AUD_FMT_U8;
break;
case AUD_FMT_S32:
case AUD_FMT_U32:
dolog ("Will use 16 instead of 32 bit samples\n");
/* fall through */
case AUD_FMT_S16:
case AUD_FMT_U16:
deffmt:
esdfmt |= ESD_BITS16;
obt_as.fmt = AUD_FMT_S16;
break;
default:
dolog ("Internal logic error: Bad audio format %d\n", as->fmt);
goto deffmt;
}
obt_as.endianness = AUDIO_HOST_ENDIANNESS;
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = conf.samples;
esd->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
if (!esd->pcm_buf) {
dolog ("Could not allocate buffer (%d bytes)\n",
hw->samples << hw->info.shift);
return -1;
}
esd->fd = esd_play_stream (esdfmt, as->freq, conf.dac_host, NULL);
if (esd->fd < 0) {
qesd_logerr (errno, "esd_play_stream failed\n");
goto fail1;
}
if (audio_pt_init (&esd->pt, qesd_thread_out, esd, AUDIO_CAP, AUDIO_FUNC)) {
goto fail2;
}
return 0;
fail2:
if (close (esd->fd)) {
qesd_logerr (errno, "%s: close on esd socket(%d) failed\n",
AUDIO_FUNC, esd->fd);
}
esd->fd = -1;
fail1:
g_free (esd->pcm_buf);
esd->pcm_buf = NULL;
return -1;
}
static void qesd_fini_out (HWVoiceOut *hw)
{
void *ret;
ESDVoiceOut *esd = (ESDVoiceOut *) hw;
audio_pt_lock (&esd->pt, AUDIO_FUNC);
esd->done = 1;
audio_pt_unlock_and_signal (&esd->pt, AUDIO_FUNC);
audio_pt_join (&esd->pt, &ret, AUDIO_FUNC);
if (esd->fd >= 0) {
if (close (esd->fd)) {
qesd_logerr (errno, "failed to close esd socket\n");
}
esd->fd = -1;
}
audio_pt_fini (&esd->pt, AUDIO_FUNC);
g_free (esd->pcm_buf);
esd->pcm_buf = NULL;
}
static int qesd_ctl_out (HWVoiceOut *hw, int cmd, ...)
{
(void) hw;
(void) cmd;
return 0;
}
/* capture */
static void *qesd_thread_in (void *arg)
{
ESDVoiceIn *esd = arg;
HWVoiceIn *hw = &esd->hw;
int threshold;
threshold = conf.divisor ? hw->samples / conf.divisor : 0;
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
for (;;) {
int incr, to_grab, wpos;
for (;;) {
if (esd->done) {
goto exit;
}
if (esd->dead > threshold) {
break;
}
if (audio_pt_wait (&esd->pt, AUDIO_FUNC)) {
goto exit;
}
}
incr = to_grab = esd->dead;
wpos = hw->wpos;
if (audio_pt_unlock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
while (to_grab) {
ssize_t nread;
int chunk = audio_MIN (to_grab, hw->samples - wpos);
void *buf = advance (esd->pcm_buf, wpos);
again:
nread = read (esd->fd, buf, chunk << hw->info.shift);
if (nread == -1) {
if (errno == EINTR || errno == EAGAIN) {
goto again;
}
qesd_logerr (errno, "read failed\n");
return NULL;
}
if (nread != chunk << hw->info.shift) {
int rsamples = nread >> hw->info.shift;
int rbytes = rsamples << hw->info.shift;
if (rbytes != nread) {
dolog ("warning: Misaligned write %d (requested %zd), "
"alignment %d\n",
rbytes, nread, hw->info.align + 1);
}
to_grab -= rsamples;
wpos = (wpos + rsamples) % hw->samples;
break;
}
hw->conv (hw->conv_buf + wpos, buf, nread >> hw->info.shift);
wpos = (wpos + chunk) % hw->samples;
to_grab -= chunk;
}
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
esd->wpos = wpos;
esd->dead -= incr;
esd->incr += incr;
}
exit:
audio_pt_unlock (&esd->pt, AUDIO_FUNC);
return NULL;
}
static int qesd_run_in (HWVoiceIn *hw)
{
int live, incr, dead;
ESDVoiceIn *esd = (ESDVoiceIn *) hw;
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return 0;
}
live = audio_pcm_hw_get_live_in (hw);
dead = hw->samples - live;
incr = audio_MIN (dead, esd->incr);
esd->incr -= incr;
esd->dead = dead - incr;
hw->wpos = esd->wpos;
if (esd->dead > 0) {
audio_pt_unlock_and_signal (&esd->pt, AUDIO_FUNC);
}
else {
audio_pt_unlock (&esd->pt, AUDIO_FUNC);
}
return incr;
}
static int qesd_read (SWVoiceIn *sw, void *buf, int len)
{
return audio_pcm_sw_read (sw, buf, len);
}
static int qesd_init_in (HWVoiceIn *hw, struct audsettings *as)
{
ESDVoiceIn *esd = (ESDVoiceIn *) hw;
struct audsettings obt_as = *as;
int esdfmt = ESD_STREAM | ESD_RECORD;
esdfmt |= (as->nchannels == 2) ? ESD_STEREO : ESD_MONO;
switch (as->fmt) {
case AUD_FMT_S8:
case AUD_FMT_U8:
esdfmt |= ESD_BITS8;
obt_as.fmt = AUD_FMT_U8;
break;
case AUD_FMT_S16:
case AUD_FMT_U16:
esdfmt |= ESD_BITS16;
obt_as.fmt = AUD_FMT_S16;
break;
case AUD_FMT_S32:
case AUD_FMT_U32:
dolog ("Will use 16 instead of 32 bit samples\n");
esdfmt |= ESD_BITS16;
obt_as.fmt = AUD_FMT_S16;
break;
}
obt_as.endianness = AUDIO_HOST_ENDIANNESS;
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = conf.samples;
esd->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
if (!esd->pcm_buf) {
dolog ("Could not allocate buffer (%d bytes)\n",
hw->samples << hw->info.shift);
return -1;
}
esd->fd = esd_record_stream (esdfmt, as->freq, conf.adc_host, NULL);
if (esd->fd < 0) {
qesd_logerr (errno, "esd_record_stream failed\n");
goto fail1;
}
if (audio_pt_init (&esd->pt, qesd_thread_in, esd, AUDIO_CAP, AUDIO_FUNC)) {
goto fail2;
}
return 0;
fail2:
if (close (esd->fd)) {
qesd_logerr (errno, "%s: close on esd socket(%d) failed\n",
AUDIO_FUNC, esd->fd);
}
esd->fd = -1;
fail1:
g_free (esd->pcm_buf);
esd->pcm_buf = NULL;
return -1;
}
static void qesd_fini_in (HWVoiceIn *hw)
{
void *ret;
ESDVoiceIn *esd = (ESDVoiceIn *) hw;
audio_pt_lock (&esd->pt, AUDIO_FUNC);
esd->done = 1;
audio_pt_unlock_and_signal (&esd->pt, AUDIO_FUNC);
audio_pt_join (&esd->pt, &ret, AUDIO_FUNC);
if (esd->fd >= 0) {
if (close (esd->fd)) {
qesd_logerr (errno, "failed to close esd socket\n");
}
esd->fd = -1;
}
audio_pt_fini (&esd->pt, AUDIO_FUNC);
g_free (esd->pcm_buf);
esd->pcm_buf = NULL;
}
static int qesd_ctl_in (HWVoiceIn *hw, int cmd, ...)
{
(void) hw;
(void) cmd;
return 0;
}
/* common */
static void *qesd_audio_init (void)
{
return &conf;
}
static void qesd_audio_fini (void *opaque)
{
(void) opaque;
ldebug ("esd_fini");
}
struct audio_option qesd_options[] = {
{
.name = "SAMPLES",
.tag = AUD_OPT_INT,
.valp = &conf.samples,
.descr = "buffer size in samples"
},
{
.name = "DIVISOR",
.tag = AUD_OPT_INT,
.valp = &conf.divisor,
.descr = "threshold divisor"
},
{
.name = "DAC_HOST",
.tag = AUD_OPT_STR,
.valp = &conf.dac_host,
.descr = "playback host"
},
{
.name = "ADC_HOST",
.tag = AUD_OPT_STR,
.valp = &conf.adc_host,
.descr = "capture host"
},
{ /* End of list */ }
};
static struct audio_pcm_ops qesd_pcm_ops = {
.init_out = qesd_init_out,
.fini_out = qesd_fini_out,
.run_out = qesd_run_out,
.write = qesd_write,
.ctl_out = qesd_ctl_out,
.init_in = qesd_init_in,
.fini_in = qesd_fini_in,
.run_in = qesd_run_in,
.read = qesd_read,
.ctl_in = qesd_ctl_in,
};
struct audio_driver esd_audio_driver = {
.name = "esd",
.descr = "http://en.wikipedia.org/wiki/Esound",
.options = qesd_options,
.init = qesd_audio_init,
.fini = qesd_audio_fini,
.pcm_ops = &qesd_pcm_ops,
.can_be_default = 0,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (ESDVoiceOut),
.voice_size_in = sizeof (ESDVoiceIn)
};

audio/fmodaudio.c (new file, 685 lines)

@@ -0,0 +1,685 @@
/*
* QEMU FMOD audio driver
*
* Copyright (c) 2004-2005 Vassili Karpov (malc)
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <fmod.h>
#include <fmod_errors.h>
#include "qemu-common.h"
#include "audio.h"
#define AUDIO_CAP "fmod"
#include "audio_int.h"
typedef struct FMODVoiceOut {
HWVoiceOut hw;
unsigned int old_pos;
FSOUND_SAMPLE *fmod_sample;
int channel;
} FMODVoiceOut;
typedef struct FMODVoiceIn {
HWVoiceIn hw;
FSOUND_SAMPLE *fmod_sample;
} FMODVoiceIn;
static struct {
const char *drvname;
int nb_samples;
int freq;
int nb_channels;
int bufsize;
int broken_adc;
} conf = {
.nb_samples = 2048 * 2,
.freq = 44100,
.nb_channels = 2,
};
static void GCC_FMT_ATTR (1, 2) fmod_logerr (const char *fmt, ...)
{
va_list ap;
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
AUD_log (AUDIO_CAP, "Reason: %s\n",
FMOD_ErrorString (FSOUND_GetError ()));
}
static void GCC_FMT_ATTR (2, 3) fmod_logerr2 (
const char *typ,
const char *fmt,
...
)
{
va_list ap;
AUD_log (AUDIO_CAP, "Could not initialize %s\n", typ);
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
AUD_log (AUDIO_CAP, "Reason: %s\n",
FMOD_ErrorString (FSOUND_GetError ()));
}
static int fmod_write (SWVoiceOut *sw, void *buf, int len)
{
return audio_pcm_sw_write (sw, buf, len);
}
static void fmod_clear_sample (FMODVoiceOut *fmd)
{
HWVoiceOut *hw = &fmd->hw;
int status;
void *p1 = 0, *p2 = 0;
unsigned int len1 = 0, len2 = 0;
status = FSOUND_Sample_Lock (
fmd->fmod_sample,
0,
hw->samples << hw->info.shift,
&p1,
&p2,
&len1,
&len2
);
if (!status) {
fmod_logerr ("Failed to lock sample\n");
return;
}
if ((len1 & hw->info.align) || (len2 & hw->info.align)) {
dolog ("Lock returned misaligned length %d, %d, alignment %d\n",
len1, len2, hw->info.align + 1);
goto fail;
}
if ((len1 + len2) - (hw->samples << hw->info.shift)) {
dolog ("Lock returned incomplete length %d, %d\n",
len1 + len2, hw->samples << hw->info.shift);
goto fail;
}
audio_pcm_info_clear_buf (&hw->info, p1, hw->samples);
fail:
status = FSOUND_Sample_Unlock (fmd->fmod_sample, p1, p2, len1, len2);
if (!status) {
fmod_logerr ("Failed to unlock sample\n");
}
}
static void fmod_write_sample (HWVoiceOut *hw, uint8_t *dst, int dst_len)
{
int src_len1 = dst_len;
int src_len2 = 0;
int pos = hw->rpos + dst_len;
struct st_sample *src1 = hw->mix_buf + hw->rpos;
struct st_sample *src2 = NULL;
if (pos > hw->samples) {
src_len1 = hw->samples - hw->rpos;
src2 = hw->mix_buf;
src_len2 = dst_len - src_len1;
pos = src_len2;
}
if (src_len1) {
hw->clip (dst, src1, src_len1);
}
if (src_len2) {
dst = advance (dst, src_len1 << hw->info.shift);
hw->clip (dst, src2, src_len2);
}
hw->rpos = pos % hw->samples;
}
static int fmod_unlock_sample (FSOUND_SAMPLE *sample, void *p1, void *p2,
unsigned int blen1, unsigned int blen2)
{
int status = FSOUND_Sample_Unlock (sample, p1, p2, blen1, blen2);
if (!status) {
fmod_logerr ("Failed to unlock sample\n");
return -1;
}
return 0;
}
static int fmod_lock_sample (
FSOUND_SAMPLE *sample,
struct audio_pcm_info *info,
int pos,
int len,
void **p1,
void **p2,
unsigned int *blen1,
unsigned int *blen2
)
{
int status;
status = FSOUND_Sample_Lock (
sample,
pos << info->shift,
len << info->shift,
p1,
p2,
blen1,
blen2
);
if (!status) {
fmod_logerr ("Failed to lock sample\n");
return -1;
}
if ((*blen1 & info->align) || (*blen2 & info->align)) {
dolog ("Lock returned misaligned length %d, %d, alignment %d\n",
*blen1, *blen2, info->align + 1);
fmod_unlock_sample (sample, *p1, *p2, *blen1, *blen2);
*p1 = NULL - 1;
*p2 = NULL - 1;
*blen1 = ~0U;
*blen2 = ~0U;
return -1;
}
if (!*p1 && *blen1) {
dolog ("warning: !p1 && blen1=%d\n", *blen1);
*blen1 = 0;
}
if (!p2 && *blen2) {
dolog ("warning: !p2 && blen2=%d\n", *blen2);
*blen2 = 0;
}
return 0;
}
static int fmod_run_out (HWVoiceOut *hw, int live)
{
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
int decr;
void *p1 = 0, *p2 = 0;
unsigned int blen1 = 0, blen2 = 0;
unsigned int len1 = 0, len2 = 0;
if (!hw->pending_disable) {
return 0;
}
decr = live;
if (fmd->channel >= 0) {
int len = decr;
int old_pos = fmd->old_pos;
int ppos = FSOUND_GetCurrentPosition (fmd->channel);
if (ppos == old_pos || !ppos) {
return 0;
}
if ((old_pos < ppos) && ((old_pos + len) > ppos)) {
len = ppos - old_pos;
}
else {
if ((old_pos > ppos) && ((old_pos + len) > (ppos + hw->samples))) {
len = hw->samples - old_pos + ppos;
}
}
decr = len;
if (audio_bug (AUDIO_FUNC, decr < 0)) {
dolog ("decr=%d live=%d ppos=%d old_pos=%d len=%d\n",
decr, live, ppos, old_pos, len);
return 0;
}
}
if (!decr) {
return 0;
}
if (fmod_lock_sample (fmd->fmod_sample, &fmd->hw.info,
fmd->old_pos, decr,
&p1, &p2,
&blen1, &blen2)) {
return 0;
}
len1 = blen1 >> hw->info.shift;
len2 = blen2 >> hw->info.shift;
ldebug ("%p %p %d %d %d %d\n", p1, p2, len1, len2, blen1, blen2);
decr = len1 + len2;
if (p1 && len1) {
fmod_write_sample (hw, p1, len1);
}
if (p2 && len2) {
fmod_write_sample (hw, p2, len2);
}
fmod_unlock_sample (fmd->fmod_sample, p1, p2, blen1, blen2);
fmd->old_pos = (fmd->old_pos + decr) % hw->samples;
return decr;
}
static int aud_to_fmodfmt (audfmt_e fmt, int stereo)
{
int mode = FSOUND_LOOP_NORMAL;
switch (fmt) {
case AUD_FMT_S8:
mode |= FSOUND_SIGNED | FSOUND_8BITS;
break;
case AUD_FMT_U8:
mode |= FSOUND_UNSIGNED | FSOUND_8BITS;
break;
case AUD_FMT_S16:
mode |= FSOUND_SIGNED | FSOUND_16BITS;
break;
case AUD_FMT_U16:
mode |= FSOUND_UNSIGNED | FSOUND_16BITS;
break;
default:
dolog ("Internal logic error: Bad audio format %d\n", fmt);
#ifdef DEBUG_FMOD
abort ();
#endif
mode |= FSOUND_8BITS;
}
mode |= stereo ? FSOUND_STEREO : FSOUND_MONO;
return mode;
}
static void fmod_fini_out (HWVoiceOut *hw)
{
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
if (fmd->fmod_sample) {
FSOUND_Sample_Free (fmd->fmod_sample);
fmd->fmod_sample = 0;
if (fmd->channel >= 0) {
FSOUND_StopSound (fmd->channel);
}
}
}
static int fmod_init_out (HWVoiceOut *hw, struct audsettings *as)
{
int mode, channel;
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
struct audsettings obt_as = *as;
mode = aud_to_fmodfmt (as->fmt, as->nchannels == 2 ? 1 : 0);
fmd->fmod_sample = FSOUND_Sample_Alloc (
FSOUND_FREE, /* index */
conf.nb_samples, /* length */
mode, /* mode */
as->freq, /* freq */
255, /* volume */
128, /* pan */
255 /* priority */
);
if (!fmd->fmod_sample) {
fmod_logerr2 ("DAC", "Failed to allocate FMOD sample\n");
return -1;
}
channel = FSOUND_PlaySoundEx (FSOUND_FREE, fmd->fmod_sample, 0, 1);
if (channel < 0) {
fmod_logerr2 ("DAC", "Failed to start playing sound\n");
FSOUND_Sample_Free (fmd->fmod_sample);
return -1;
}
fmd->channel = channel;
/* FMOD always operates on little endian frames? */
obt_as.endianness = 0;
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = conf.nb_samples;
return 0;
}
static int fmod_ctl_out (HWVoiceOut *hw, int cmd, ...)
{
int status;
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
switch (cmd) {
case VOICE_ENABLE:
fmod_clear_sample (fmd);
status = FSOUND_SetPaused (fmd->channel, 0);
if (!status) {
fmod_logerr ("Failed to resume channel %d\n", fmd->channel);
}
break;
case VOICE_DISABLE:
status = FSOUND_SetPaused (fmd->channel, 1);
if (!status) {
fmod_logerr ("Failed to pause channel %d\n", fmd->channel);
}
break;
}
return 0;
}
static int fmod_init_in (HWVoiceIn *hw, struct audsettings *as)
{
int mode;
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
struct audsettings obt_as = *as;
if (conf.broken_adc) {
return -1;
}
mode = aud_to_fmodfmt (as->fmt, as->nchannels == 2 ? 1 : 0);
fmd->fmod_sample = FSOUND_Sample_Alloc (
FSOUND_FREE, /* index */
conf.nb_samples, /* length */
mode, /* mode */
as->freq, /* freq */
255, /* volume */
128, /* pan */
255 /* priority */
);
if (!fmd->fmod_sample) {
fmod_logerr2 ("ADC", "Failed to allocate FMOD sample\n");
return -1;
}
/* FMOD always operates on little endian frames? */
obt_as.endianness = 0;
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = conf.nb_samples;
return 0;
}
static void fmod_fini_in (HWVoiceIn *hw)
{
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
if (fmd->fmod_sample) {
FSOUND_Record_Stop ();
FSOUND_Sample_Free (fmd->fmod_sample);
fmd->fmod_sample = 0;
}
}
static int fmod_run_in (HWVoiceIn *hw)
{
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
int hwshift = hw->info.shift;
int live, dead, new_pos, len;
unsigned int blen1 = 0, blen2 = 0;
unsigned int len1, len2;
unsigned int decr;
void *p1, *p2;
live = audio_pcm_hw_get_live_in (hw);
dead = hw->samples - live;
if (!dead) {
return 0;
}
new_pos = FSOUND_Record_GetPosition ();
if (new_pos < 0) {
fmod_logerr ("Could not get recording position\n");
return 0;
}
len = audio_ring_dist (new_pos, hw->wpos, hw->samples);
if (!len) {
return 0;
}
len = audio_MIN (len, dead);
if (fmod_lock_sample (fmd->fmod_sample, &fmd->hw.info,
hw->wpos, len,
&p1, &p2,
&blen1, &blen2)) {
return 0;
}
len1 = blen1 >> hwshift;
len2 = blen2 >> hwshift;
decr = len1 + len2;
if (p1 && blen1) {
hw->conv (hw->conv_buf + hw->wpos, p1, len1);
}
if (p2 && len2) {
hw->conv (hw->conv_buf, p2, len2);
}
fmod_unlock_sample (fmd->fmod_sample, p1, p2, blen1, blen2);
hw->wpos = (hw->wpos + decr) % hw->samples;
return decr;
}
static struct {
const char *name;
int type;
} drvtab[] = {
{ .name = "none", .type = FSOUND_OUTPUT_NOSOUND },
#ifdef _WIN32
{ .name = "winmm", .type = FSOUND_OUTPUT_WINMM },
{ .name = "dsound", .type = FSOUND_OUTPUT_DSOUND },
{ .name = "a3d", .type = FSOUND_OUTPUT_A3D },
{ .name = "asio", .type = FSOUND_OUTPUT_ASIO },
#endif
#ifdef __linux__
{ .name = "oss", .type = FSOUND_OUTPUT_OSS },
{ .name = "alsa", .type = FSOUND_OUTPUT_ALSA },
{ .name = "esd", .type = FSOUND_OUTPUT_ESD },
#endif
#ifdef __APPLE__
{ .name = "mac", .type = FSOUND_OUTPUT_MAC },
#endif
#if 0
{ .name = "xbox", .type = FSOUND_OUTPUT_XBOX },
{ .name = "ps2", .type = FSOUND_OUTPUT_PS2 },
{ .name = "gcube", .type = FSOUND_OUTPUT_GC },
#endif
{ .name = "none-realtime", .type = FSOUND_OUTPUT_NOSOUND_NONREALTIME }
};
static void *fmod_audio_init (void)
{
size_t i;
double ver;
int status;
int output_type = -1;
const char *drv = conf.drvname;
ver = FSOUND_GetVersion ();
if (ver < FMOD_VERSION) {
dolog ("Wrong FMOD version %f, need at least %f\n", ver, FMOD_VERSION);
return NULL;
}
#ifdef __linux__
if (ver < 3.75) {
dolog ("FMOD before 3.75 has bug preventing ADC from working\n"
"ADC will be disabled.\n");
conf.broken_adc = 1;
}
#endif
if (drv) {
int found = 0;
for (i = 0; i < ARRAY_SIZE (drvtab); i++) {
if (!strcmp (drv, drvtab[i].name)) {
output_type = drvtab[i].type;
found = 1;
break;
}
}
if (!found) {
dolog ("Unknown FMOD driver `%s'\n", drv);
dolog ("Valid drivers:\n");
for (i = 0; i < ARRAY_SIZE (drvtab); i++) {
dolog (" %s\n", drvtab[i].name);
}
}
}
if (output_type != -1) {
status = FSOUND_SetOutput (output_type);
if (!status) {
fmod_logerr ("FSOUND_SetOutput(%d) failed\n", output_type);
return NULL;
}
}
if (conf.bufsize) {
status = FSOUND_SetBufferSize (conf.bufsize);
if (!status) {
fmod_logerr ("FSOUND_SetBufferSize (%d) failed\n", conf.bufsize);
}
}
status = FSOUND_Init (conf.freq, conf.nb_channels, 0);
if (!status) {
fmod_logerr ("FSOUND_Init failed\n");
return NULL;
}
return &conf;
}
static int fmod_read (SWVoiceIn *sw, void *buf, int size)
{
return audio_pcm_sw_read (sw, buf, size);
}
static int fmod_ctl_in (HWVoiceIn *hw, int cmd, ...)
{
int status;
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
switch (cmd) {
case VOICE_ENABLE:
status = FSOUND_Record_StartSample (fmd->fmod_sample, 1);
if (!status) {
fmod_logerr ("Failed to start recording\n");
}
break;
case VOICE_DISABLE:
status = FSOUND_Record_Stop ();
if (!status) {
fmod_logerr ("Failed to stop recording\n");
}
break;
}
return 0;
}
static void fmod_audio_fini (void *opaque)
{
(void) opaque;
FSOUND_Close ();
}
static struct audio_option fmod_options[] = {
{
.name = "DRV",
.tag = AUD_OPT_STR,
.valp = &conf.drvname,
.descr = "FMOD driver"
},
{
.name = "FREQ",
.tag = AUD_OPT_INT,
.valp = &conf.freq,
.descr = "Default frequency"
},
{
.name = "SAMPLES",
.tag = AUD_OPT_INT,
.valp = &conf.nb_samples,
.descr = "Buffer size in samples"
},
{
.name = "CHANNELS",
.tag = AUD_OPT_INT,
.valp = &conf.nb_channels,
.descr = "Number of default channels (1 - mono, 2 - stereo)"
},
{
.name = "BUFSIZE",
.tag = AUD_OPT_INT,
.valp = &conf.bufsize,
.descr = "(undocumented)"
},
{ /* End of list */ }
};
static struct audio_pcm_ops fmod_pcm_ops = {
.init_out = fmod_init_out,
.fini_out = fmod_fini_out,
.run_out = fmod_run_out,
.write = fmod_write,
.ctl_out = fmod_ctl_out,
.init_in = fmod_init_in,
.fini_in = fmod_fini_in,
.run_in = fmod_run_in,
.read = fmod_read,
.ctl_in = fmod_ctl_in
};
struct audio_driver fmod_audio_driver = {
.name = "fmod",
.descr = "FMOD 3.xx http://www.fmod.org",
.options = fmod_options,
.init = fmod_audio_init,
.fini = fmod_audio_fini,
.pcm_ops = &fmod_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (FMODVoiceOut),
.voice_size_in = sizeof (FMODVoiceIn)
};


@@ -22,10 +22,7 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "qemu/bswap.h"
#include "qemu/error-report.h"
#include "audio.h"
#define AUDIO_CAP "mixeng"
@@ -268,42 +265,11 @@ f_sample *mixeng_clip[2][2][2][3] = {
}
};
void audio_sample_to_uint64(void *samples, int pos,
uint64_t *left, uint64_t *right)
{
struct st_sample *sample = samples;
sample += pos;
#ifdef FLOAT_MIXENG
error_report(
"Coreaudio and floating point samples are not supported by replay yet");
abort();
#else
*left = sample->l;
*right = sample->r;
#endif
}
void audio_sample_from_uint64(void *samples, int pos,
uint64_t left, uint64_t right)
{
struct st_sample *sample = samples;
sample += pos;
#ifdef FLOAT_MIXENG
error_report(
"Coreaudio and floating point samples are not supported by replay yet");
abort();
#else
sample->l = left;
sample->r = right;
#endif
}
/*
* August 21, 1998
* Copyright 1998 Fabrice Bellard.
*
* [Rewrote completely the code of Lance Norskog And Sundry
* [Rewrote completly the code of Lance Norskog And Sundry
* Contributors with a more efficient algorithm.]
*
* This source code is freely redistributable and may be used for
@@ -344,7 +310,7 @@ struct rate {
*/
void *st_rate_start (int inrate, int outrate)
{
struct rate *rate = audio_calloc(__func__, 1, sizeof(*rate));
struct rate *rate = audio_calloc (AUDIO_FUNC, 1, sizeof (*rate));
if (!rate) {
dolog ("Could not allocate resampler (%zu bytes)\n", sizeof (*rate));

@@ -21,7 +21,6 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#ifndef QEMU_MIXENG_H
#define QEMU_MIXENG_H
@@ -49,4 +48,4 @@ void st_rate_stop (void *opaque);
void mixeng_clear (struct st_sample *buf, int len);
void mixeng_volume (struct st_sample *buf, int len, struct mixeng_volume *vol);
#endif /* QEMU_MIXENG_H */
#endif /* mixeng.h */

@@ -21,9 +21,7 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "qemu/host-utils.h"
#include "audio.h"
#include "qemu/timer.h"
@@ -50,8 +48,8 @@ static int no_run_out (HWVoiceOut *hw, int live)
now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
ticks = now - no->old_ticks;
bytes = muldiv64(ticks, hw->info.bytes_per_second, NANOSECONDS_PER_SECOND);
bytes = audio_MIN(bytes, INT_MAX);
bytes = muldiv64 (ticks, hw->info.bytes_per_second, get_ticks_per_sec ());
bytes = audio_MIN (bytes, INT_MAX);
samples = bytes >> hw->info.shift;
no->old_ticks = now;
@@ -62,10 +60,10 @@ static int no_run_out (HWVoiceOut *hw, int live)
static int no_write (SWVoiceOut *sw, void *buf, int len)
{
return audio_pcm_sw_write(sw, buf, len);
return audio_pcm_sw_write (sw, buf, len);
}
static int no_init_out(HWVoiceOut *hw, struct audsettings *as, void *drv_opaque)
static int no_init_out (HWVoiceOut *hw, struct audsettings *as)
{
audio_pcm_init_info (&hw->info, as);
hw->samples = 1024;
@@ -84,7 +82,7 @@ static int no_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static int no_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int no_init_in (HWVoiceIn *hw, struct audsettings *as)
{
audio_pcm_init_info (&hw->info, as);
hw->samples = 1024;
@@ -107,7 +105,7 @@ static int no_run_in (HWVoiceIn *hw)
int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
int64_t ticks = now - no->old_ticks;
int64_t bytes =
muldiv64(ticks, hw->info.bytes_per_second, NANOSECONDS_PER_SECOND);
muldiv64 (ticks, hw->info.bytes_per_second, get_ticks_per_sec ());
no->old_ticks = now;
bytes = audio_MIN (bytes, INT_MAX);
@@ -160,7 +158,7 @@ static struct audio_pcm_ops no_pcm_ops = {
.ctl_in = no_ctl_in
};
static struct audio_driver no_audio_driver = {
struct audio_driver no_audio_driver = {
.name = "none",
.descr = "Timer based audio emulation",
.options = NULL,
@@ -173,9 +171,3 @@ static struct audio_driver no_audio_driver = {
.voice_size_out = sizeof (NoVoiceOut),
.voice_size_in = sizeof (NoVoiceIn)
};
static void register_audio_none(void)
{
audio_driver_register(&no_audio_driver);
}
type_init(register_audio_none);

@@ -21,14 +21,15 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>
#include "qemu-common.h"
#include "qemu/main-loop.h"
#include "qemu/host-utils.h"
#include "audio.h"
#include "trace.h"
#define AUDIO_CAP "oss"
#include "audio_int.h"
@@ -37,16 +38,6 @@
#define USE_DSP_POLICY
#endif
typedef struct OSSConf {
int try_mmap;
int nfrags;
int fragsize;
const char *devpath_out;
const char *devpath_in;
int exclusive;
int policy;
} OSSConf;
typedef struct OSSVoiceOut {
HWVoiceOut hw;
void *pcm_buf;
@@ -56,7 +47,6 @@ typedef struct OSSVoiceOut {
int fragsize;
int mmapped;
int pending;
OSSConf *conf;
} OSSVoiceOut;
typedef struct OSSVoiceIn {
@@ -65,9 +55,28 @@ typedef struct OSSVoiceIn {
int fd;
int nfrags;
int fragsize;
OSSConf *conf;
} OSSVoiceIn;
static struct {
int try_mmap;
int nfrags;
int fragsize;
const char *devpath_out;
const char *devpath_in;
int debug;
int exclusive;
int policy;
} conf = {
.try_mmap = 0,
.nfrags = 4,
.fragsize = 4096,
.devpath_out = "/dev/dsp",
.devpath_in = "/dev/dsp",
.debug = 0,
.exclusive = 0,
.policy = 5
};
struct oss_params {
int freq;
audfmt_e fmt;
@@ -129,18 +138,18 @@ static void oss_helper_poll_in (void *opaque)
audio_run ("oss_poll_in");
}
static void oss_poll_out (HWVoiceOut *hw)
static int oss_poll_out (HWVoiceOut *hw)
{
OSSVoiceOut *oss = (OSSVoiceOut *) hw;
qemu_set_fd_handler (oss->fd, NULL, oss_helper_poll_out, NULL);
return qemu_set_fd_handler (oss->fd, NULL, oss_helper_poll_out, NULL);
}
static void oss_poll_in (HWVoiceIn *hw)
static int oss_poll_in (HWVoiceIn *hw)
{
OSSVoiceIn *oss = (OSSVoiceIn *) hw;
qemu_set_fd_handler (oss->fd, oss_helper_poll_in, NULL, NULL);
return qemu_set_fd_handler (oss->fd, oss_helper_poll_in, NULL, NULL);
}
static int oss_write (SWVoiceOut *sw, void *buf, int len)
@@ -263,18 +272,18 @@ static int oss_get_version (int fd, int *version, const char *typ)
#endif
static int oss_open (int in, struct oss_params *req,
struct oss_params *obt, int *pfd, OSSConf* conf)
struct oss_params *obt, int *pfd)
{
int fd;
int oflags = conf->exclusive ? O_EXCL : 0;
int oflags = conf.exclusive ? O_EXCL : 0;
audio_buf_info abinfo;
int fmt, freq, nchannels;
int setfragment = 1;
const char *dspname = in ? conf->devpath_in : conf->devpath_out;
const char *dspname = in ? conf.devpath_in : conf.devpath_out;
const char *typ = in ? "ADC" : "DAC";
/* Kludge needed to have working mmap on Linux */
oflags |= conf->try_mmap ? O_RDWR : (in ? O_RDONLY : O_WRONLY);
oflags |= conf.try_mmap ? O_RDWR : (in ? O_RDONLY : O_WRONLY);
fd = open (dspname, oflags | O_NONBLOCK);
if (-1 == fd) {
@@ -308,18 +317,20 @@ static int oss_open (int in, struct oss_params *req,
}
#ifdef USE_DSP_POLICY
if (conf->policy >= 0) {
if (conf.policy >= 0) {
int version;
if (!oss_get_version (fd, &version, typ)) {
trace_oss_version(version);
if (conf.debug) {
dolog ("OSS version = %#x\n", version);
}
if (version >= 0x040000) {
int policy = conf->policy;
int policy = conf.policy;
if (ioctl (fd, SNDCTL_DSP_POLICY, &policy)) {
oss_logerr2 (errno, typ,
"Failed to set timing policy to %d\n",
conf->policy);
conf.policy);
goto err;
}
setfragment = 0;
@@ -447,12 +458,19 @@ static int oss_run_out (HWVoiceOut *hw, int live)
}
if (abinfo.bytes > bufsize) {
trace_oss_invalid_available_size(abinfo.bytes, bufsize);
if (conf.debug) {
dolog ("warning: Invalid available size, size=%d bufsize=%d\n"
"please report your OS/audio hw to av1474@comtv.ru\n",
abinfo.bytes, bufsize);
}
abinfo.bytes = bufsize;
}
if (abinfo.bytes < 0) {
trace_oss_invalid_available_size(abinfo.bytes, bufsize);
if (conf.debug) {
dolog ("warning: Invalid available size, size=%d bufsize=%d\n",
abinfo.bytes, bufsize);
}
return 0;
}
@@ -492,8 +510,7 @@ static void oss_fini_out (HWVoiceOut *hw)
}
}
static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int oss_init_out (HWVoiceOut *hw, struct audsettings *as)
{
OSSVoiceOut *oss = (OSSVoiceOut *) hw;
struct oss_params req, obt;
@@ -502,17 +519,16 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
int fd;
audfmt_e effective_fmt;
struct audsettings obt_as;
OSSConf *conf = drv_opaque;
oss->fd = -1;
req.fmt = aud_to_ossfmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.fragsize = conf->fragsize;
req.nfrags = conf->nfrags;
req.fragsize = conf.fragsize;
req.nfrags = conf.nfrags;
if (oss_open (0, &req, &obt, &fd, conf)) {
if (oss_open (0, &req, &obt, &fd)) {
return -1;
}
@@ -539,7 +555,7 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
hw->samples = (obt.nfrags * obt.fragsize) >> hw->info.shift;
oss->mmapped = 0;
if (conf->try_mmap) {
if (conf.try_mmap) {
oss->pcm_buf = mmap (
NULL,
hw->samples << hw->info.shift,
@@ -582,9 +598,11 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
}
if (!oss->mmapped) {
oss->pcm_buf = audio_calloc(__func__,
hw->samples,
1 << hw->info.shift);
oss->pcm_buf = audio_calloc (
AUDIO_FUNC,
hw->samples,
1 << hw->info.shift
);
if (!oss->pcm_buf) {
dolog (
"Could not allocate DAC buffer (%d samples, each %d bytes)\n",
@@ -597,7 +615,6 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
}
oss->fd = fd;
oss->conf = conf;
return 0;
}
@@ -617,8 +634,7 @@ static int oss_ctl_out (HWVoiceOut *hw, int cmd, ...)
va_end (ap);
ldebug ("enabling voice\n");
if (poll_mode) {
oss_poll_out (hw);
if (poll_mode && oss_poll_out (hw)) {
poll_mode = 0;
}
hw->poll_mode = poll_mode;
@@ -660,7 +676,7 @@ static int oss_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static int oss_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int oss_init_in (HWVoiceIn *hw, struct audsettings *as)
{
OSSVoiceIn *oss = (OSSVoiceIn *) hw;
struct oss_params req, obt;
@@ -669,16 +685,15 @@ static int oss_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
int fd;
audfmt_e effective_fmt;
struct audsettings obt_as;
OSSConf *conf = drv_opaque;
oss->fd = -1;
req.fmt = aud_to_ossfmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.fragsize = conf->fragsize;
req.nfrags = conf->nfrags;
if (oss_open (1, &req, &obt, &fd, conf)) {
req.fragsize = conf.fragsize;
req.nfrags = conf.nfrags;
if (oss_open (1, &req, &obt, &fd)) {
return -1;
}
@@ -703,7 +718,7 @@ static int oss_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
}
hw->samples = (obt.nfrags * obt.fragsize) >> hw->info.shift;
oss->pcm_buf = audio_calloc(__func__, hw->samples, 1 << hw->info.shift);
oss->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
if (!oss->pcm_buf) {
dolog ("Could not allocate ADC buffer (%d samples, each %d bytes)\n",
hw->samples, 1 << hw->info.shift);
@@ -712,7 +727,6 @@ static int oss_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
}
oss->fd = fd;
oss->conf = conf;
return 0;
}
@@ -814,8 +828,7 @@ static int oss_ctl_in (HWVoiceIn *hw, int cmd, ...)
poll_mode = va_arg (ap, int);
va_end (ap);
if (poll_mode) {
oss_poll_in (hw);
if (poll_mode && oss_poll_in (hw)) {
poll_mode = 0;
}
hw->poll_mode = poll_mode;
@@ -832,79 +845,71 @@ static int oss_ctl_in (HWVoiceIn *hw, int cmd, ...)
return 0;
}
static OSSConf glob_conf = {
.try_mmap = 0,
.nfrags = 4,
.fragsize = 4096,
.devpath_out = "/dev/dsp",
.devpath_in = "/dev/dsp",
.exclusive = 0,
.policy = 5
};
static void *oss_audio_init (void)
{
OSSConf *conf = g_malloc(sizeof(OSSConf));
*conf = glob_conf;
if (access(conf->devpath_in, R_OK | W_OK) < 0 ||
access(conf->devpath_out, R_OK | W_OK) < 0) {
g_free(conf);
if (access(conf.devpath_in, R_OK | W_OK) < 0 ||
access(conf.devpath_out, R_OK | W_OK) < 0) {
return NULL;
}
return conf;
return &conf;
}
static void oss_audio_fini (void *opaque)
{
g_free(opaque);
(void) opaque;
}
static struct audio_option oss_options[] = {
{
.name = "FRAGSIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.fragsize,
.valp = &conf.fragsize,
.descr = "Fragment size in bytes"
},
{
.name = "NFRAGS",
.tag = AUD_OPT_INT,
.valp = &glob_conf.nfrags,
.valp = &conf.nfrags,
.descr = "Number of fragments"
},
{
.name = "MMAP",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.try_mmap,
.valp = &conf.try_mmap,
.descr = "Try using memory mapped access"
},
{
.name = "DAC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.devpath_out,
.valp = &conf.devpath_out,
.descr = "Path to DAC device"
},
{
.name = "ADC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.devpath_in,
.valp = &conf.devpath_in,
.descr = "Path to ADC device"
},
{
.name = "EXCLUSIVE",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.exclusive,
.descr = "Open device in exclusive mode (vmix won't work)"
.valp = &conf.exclusive,
.descr = "Open device in exclusive mode (vmix wont work)"
},
#ifdef USE_DSP_POLICY
{
.name = "POLICY",
.tag = AUD_OPT_INT,
.valp = &glob_conf.policy,
.valp = &conf.policy,
.descr = "Set the timing policy of the device, -1 to use fragment mode",
},
#endif
{
.name = "DEBUG",
.tag = AUD_OPT_BOOL,
.valp = &conf.debug,
.descr = "Turn on some debugging messages"
},
{ /* End of list */ }
};
@@ -922,7 +927,7 @@ static struct audio_pcm_ops oss_pcm_ops = {
.ctl_in = oss_ctl_in
};
static struct audio_driver oss_audio_driver = {
struct audio_driver oss_audio_driver = {
.name = "oss",
.descr = "OSS http://www.opensound.com",
.options = oss_options,
@@ -935,9 +940,3 @@ static struct audio_driver oss_audio_driver = {
.voice_size_out = sizeof (OSSVoiceOut),
.voice_size_in = sizeof (OSSVoiceIn)
};
static void register_audio_oss(void)
{
audio_driver_register(&oss_audio_driver);
}
type_init(register_audio_oss);

@@ -1,5 +1,4 @@
/* public domain */
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "audio.h"
@@ -9,19 +8,6 @@
#include "audio_int.h"
#include "audio_pt_int.h"
typedef struct {
int samples;
char *server;
char *sink;
char *source;
} PAConf;
typedef struct {
PAConf conf;
pa_threaded_mainloop *mainloop;
pa_context *context;
} paaudio;
typedef struct {
HWVoiceOut hw;
int done;
@@ -31,7 +17,6 @@ typedef struct {
pa_stream *stream;
void *pcm_buf;
struct audio_pt pt;
paaudio *g;
} PAVoiceOut;
typedef struct {
@@ -45,10 +30,20 @@ typedef struct {
struct audio_pt pt;
const void *read_data;
size_t read_index, read_length;
paaudio *g;
} PAVoiceIn;
static void qpa_audio_fini(void *opaque);
typedef struct {
int samples;
char *server;
char *sink;
char *source;
pa_threaded_mainloop *mainloop;
pa_context *context;
} paaudio;
static paaudio glob_paaudio = {
.samples = 4096,
};
static void GCC_FMT_ATTR (2, 3) qpa_logerr (int err, const char *fmt, ...)
{
@@ -89,7 +84,7 @@ static inline int PA_STREAM_IS_GOOD(pa_stream_state_t x)
} \
goto label; \
} \
} while (0)
} while (0);
#define CHECK_DEAD_GOTO(c, stream, rerror, label) \
do { \
@@ -107,11 +102,11 @@ static inline int PA_STREAM_IS_GOOD(pa_stream_state_t x)
} \
goto label; \
} \
} while (0)
} while (0);
static int qpa_simple_read (PAVoiceIn *p, void *data, size_t length, int *rerror)
{
paaudio *g = p->g;
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_lock (g->mainloop);
@@ -165,7 +160,7 @@ unlock_and_fail:
static int qpa_simple_write (PAVoiceOut *p, const void *data, size_t length, int *rerror)
{
paaudio *g = p->g;
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_lock (g->mainloop);
@@ -206,7 +201,7 @@ static void *qpa_thread_out (void *arg)
PAVoiceOut *pa = arg;
HWVoiceOut *hw = &pa->hw;
if (audio_pt_lock(&pa->pt, __func__)) {
if (audio_pt_lock (&pa->pt, AUDIO_FUNC)) {
return NULL;
}
@@ -222,15 +217,15 @@ static void *qpa_thread_out (void *arg)
break;
}
if (audio_pt_wait(&pa->pt, __func__)) {
if (audio_pt_wait (&pa->pt, AUDIO_FUNC)) {
goto exit;
}
}
decr = to_mix = audio_MIN (pa->live, pa->g->conf.samples >> 2);
decr = to_mix = audio_MIN (pa->live, glob_paaudio.samples >> 2);
rpos = pa->rpos;
if (audio_pt_unlock(&pa->pt, __func__)) {
if (audio_pt_unlock (&pa->pt, AUDIO_FUNC)) {
return NULL;
}
@@ -251,7 +246,7 @@ static void *qpa_thread_out (void *arg)
to_mix -= chunk;
}
if (audio_pt_lock(&pa->pt, __func__)) {
if (audio_pt_lock (&pa->pt, AUDIO_FUNC)) {
return NULL;
}
@@ -261,7 +256,7 @@ static void *qpa_thread_out (void *arg)
}
exit:
audio_pt_unlock(&pa->pt, __func__);
audio_pt_unlock (&pa->pt, AUDIO_FUNC);
return NULL;
}
@@ -270,7 +265,7 @@ static int qpa_run_out (HWVoiceOut *hw, int live)
int decr;
PAVoiceOut *pa = (PAVoiceOut *) hw;
if (audio_pt_lock(&pa->pt, __func__)) {
if (audio_pt_lock (&pa->pt, AUDIO_FUNC)) {
return 0;
}
@@ -279,10 +274,10 @@ static int qpa_run_out (HWVoiceOut *hw, int live)
pa->live = live - decr;
hw->rpos = pa->rpos;
if (pa->live > 0) {
audio_pt_unlock_and_signal(&pa->pt, __func__);
audio_pt_unlock_and_signal (&pa->pt, AUDIO_FUNC);
}
else {
audio_pt_unlock(&pa->pt, __func__);
audio_pt_unlock (&pa->pt, AUDIO_FUNC);
}
return decr;
}
@@ -298,7 +293,7 @@ static void *qpa_thread_in (void *arg)
PAVoiceIn *pa = arg;
HWVoiceIn *hw = &pa->hw;
if (audio_pt_lock(&pa->pt, __func__)) {
if (audio_pt_lock (&pa->pt, AUDIO_FUNC)) {
return NULL;
}
@@ -314,15 +309,15 @@ static void *qpa_thread_in (void *arg)
break;
}
if (audio_pt_wait(&pa->pt, __func__)) {
if (audio_pt_wait (&pa->pt, AUDIO_FUNC)) {
goto exit;
}
}
incr = to_grab = audio_MIN (pa->dead, pa->g->conf.samples >> 2);
incr = to_grab = audio_MIN (pa->dead, glob_paaudio.samples >> 2);
wpos = pa->wpos;
if (audio_pt_unlock(&pa->pt, __func__)) {
if (audio_pt_unlock (&pa->pt, AUDIO_FUNC)) {
return NULL;
}
@@ -342,7 +337,7 @@ static void *qpa_thread_in (void *arg)
to_grab -= chunk;
}
if (audio_pt_lock(&pa->pt, __func__)) {
if (audio_pt_lock (&pa->pt, AUDIO_FUNC)) {
return NULL;
}
@@ -352,7 +347,7 @@ static void *qpa_thread_in (void *arg)
}
exit:
audio_pt_unlock(&pa->pt, __func__);
audio_pt_unlock (&pa->pt, AUDIO_FUNC);
return NULL;
}
@@ -361,7 +356,7 @@ static int qpa_run_in (HWVoiceIn *hw)
int live, incr, dead;
PAVoiceIn *pa = (PAVoiceIn *) hw;
if (audio_pt_lock(&pa->pt, __func__)) {
if (audio_pt_lock (&pa->pt, AUDIO_FUNC)) {
return 0;
}
@@ -372,10 +367,10 @@ static int qpa_run_in (HWVoiceIn *hw)
pa->dead = dead - incr;
hw->wpos = pa->wpos;
if (pa->dead > 0) {
audio_pt_unlock_and_signal(&pa->pt, __func__);
audio_pt_unlock_and_signal (&pa->pt, AUDIO_FUNC);
}
else {
audio_pt_unlock(&pa->pt, __func__);
audio_pt_unlock (&pa->pt, AUDIO_FUNC);
}
return incr;
}
@@ -435,7 +430,7 @@ static audfmt_e pa_to_audfmt (pa_sample_format_t fmt, int *endianness)
static void context_state_cb (pa_context *c, void *userdata)
{
paaudio *g = userdata;
paaudio *g = &glob_paaudio;
switch (pa_context_get_state(c)) {
case PA_CONTEXT_READY:
@@ -454,7 +449,7 @@ static void context_state_cb (pa_context *c, void *userdata)
static void stream_state_cb (pa_stream *s, void * userdata)
{
paaudio *g = userdata;
paaudio *g = &glob_paaudio;
switch (pa_stream_get_state (s)) {
@@ -472,21 +467,23 @@ static void stream_state_cb (pa_stream *s, void * userdata)
static void stream_request_cb (pa_stream *s, size_t length, void *userdata)
{
paaudio *g = userdata;
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_signal (g->mainloop, 0);
}
static pa_stream *qpa_simple_new (
paaudio *g,
const char *server,
const char *name,
pa_stream_direction_t dir,
const char *dev,
const char *stream_name,
const pa_sample_spec *ss,
const pa_channel_map *map,
const pa_buffer_attr *attr,
int *rerror)
{
paaudio *g = &glob_paaudio;
int r;
pa_stream *stream;
@@ -537,15 +534,13 @@ fail:
return NULL;
}
static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int qpa_init_out (HWVoiceOut *hw, struct audsettings *as)
{
int error;
pa_sample_spec ss;
pa_buffer_attr ba;
static pa_sample_spec ss;
static pa_buffer_attr ba;
struct audsettings obt_as = *as;
PAVoiceOut *pa = (PAVoiceOut *) hw;
paaudio *g = pa->g = drv_opaque;
ss.format = audfmt_to_pa (as->fmt, as->endianness);
ss.channels = as->nchannels;
@@ -563,10 +558,11 @@ static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
obt_as.fmt = pa_to_audfmt (ss.format, &obt_as.endianness);
pa->stream = qpa_simple_new (
g,
glob_paaudio.server,
"qemu",
PA_STREAM_PLAYBACK,
g->conf.sink,
glob_paaudio.sink,
"pcm.playback",
&ss,
NULL, /* channel map */
&ba, /* buffering attributes */
@@ -578,8 +574,8 @@ static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
}
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = g->conf.samples;
pa->pcm_buf = audio_calloc(__func__, hw->samples, 1 << hw->info.shift);
hw->samples = glob_paaudio.samples;
pa->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
pa->rpos = hw->rpos;
if (!pa->pcm_buf) {
dolog ("Could not allocate buffer (%d bytes)\n",
@@ -587,7 +583,7 @@ static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
goto fail2;
}
if (audio_pt_init(&pa->pt, qpa_thread_out, hw, AUDIO_CAP, __func__)) {
if (audio_pt_init (&pa->pt, qpa_thread_out, hw, AUDIO_CAP, AUDIO_FUNC)) {
goto fail3;
}
@@ -605,13 +601,12 @@ static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
return -1;
}
static int qpa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int qpa_init_in (HWVoiceIn *hw, struct audsettings *as)
{
int error;
pa_sample_spec ss;
static pa_sample_spec ss;
struct audsettings obt_as = *as;
PAVoiceIn *pa = (PAVoiceIn *) hw;
paaudio *g = pa->g = drv_opaque;
ss.format = audfmt_to_pa (as->fmt, as->endianness);
ss.channels = as->nchannels;
@@ -620,10 +615,11 @@ static int qpa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
obt_as.fmt = pa_to_audfmt (ss.format, &obt_as.endianness);
pa->stream = qpa_simple_new (
g,
glob_paaudio.server,
"qemu",
PA_STREAM_RECORD,
g->conf.source,
glob_paaudio.source,
"pcm.capture",
&ss,
NULL, /* channel map */
NULL, /* buffering attributes */
@@ -635,8 +631,8 @@ static int qpa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
}
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = g->conf.samples;
pa->pcm_buf = audio_calloc(__func__, hw->samples, 1 << hw->info.shift);
hw->samples = glob_paaudio.samples;
pa->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
pa->wpos = hw->wpos;
if (!pa->pcm_buf) {
dolog ("Could not allocate buffer (%d bytes)\n",
@@ -644,7 +640,7 @@ static int qpa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
goto fail2;
}
if (audio_pt_init(&pa->pt, qpa_thread_in, hw, AUDIO_CAP, __func__)) {
if (audio_pt_init (&pa->pt, qpa_thread_in, hw, AUDIO_CAP, AUDIO_FUNC)) {
goto fail3;
}
@@ -667,17 +663,17 @@ static void qpa_fini_out (HWVoiceOut *hw)
void *ret;
PAVoiceOut *pa = (PAVoiceOut *) hw;
audio_pt_lock(&pa->pt, __func__);
audio_pt_lock (&pa->pt, AUDIO_FUNC);
pa->done = 1;
audio_pt_unlock_and_signal(&pa->pt, __func__);
audio_pt_join(&pa->pt, &ret, __func__);
audio_pt_unlock_and_signal (&pa->pt, AUDIO_FUNC);
audio_pt_join (&pa->pt, &ret, AUDIO_FUNC);
if (pa->stream) {
pa_stream_unref (pa->stream);
pa->stream = NULL;
}
audio_pt_fini(&pa->pt, __func__);
audio_pt_fini (&pa->pt, AUDIO_FUNC);
g_free (pa->pcm_buf);
pa->pcm_buf = NULL;
}
@@ -687,17 +683,17 @@ static void qpa_fini_in (HWVoiceIn *hw)
void *ret;
PAVoiceIn *pa = (PAVoiceIn *) hw;
audio_pt_lock(&pa->pt, __func__);
audio_pt_lock (&pa->pt, AUDIO_FUNC);
pa->done = 1;
audio_pt_unlock_and_signal(&pa->pt, __func__);
audio_pt_join(&pa->pt, &ret, __func__);
audio_pt_unlock_and_signal (&pa->pt, AUDIO_FUNC);
audio_pt_join (&pa->pt, &ret, AUDIO_FUNC);
if (pa->stream) {
pa_stream_unref (pa->stream);
pa->stream = NULL;
}
audio_pt_fini(&pa->pt, __func__);
audio_pt_fini (&pa->pt, AUDIO_FUNC);
g_free (pa->pcm_buf);
pa->pcm_buf = NULL;
}
@@ -707,7 +703,7 @@ static int qpa_ctl_out (HWVoiceOut *hw, int cmd, ...)
PAVoiceOut *pa = (PAVoiceOut *) hw;
pa_operation *op;
pa_cvolume v;
paaudio *g = pa->g;
paaudio *g = &glob_paaudio;
#ifdef PA_CHECK_VERSION /* macro is present in 0.9.16+ */
pa_cvolume_init (&v); /* function is present in 0.9.13+ */
@@ -759,7 +755,7 @@ static int qpa_ctl_in (HWVoiceIn *hw, int cmd, ...)
PAVoiceIn *pa = (PAVoiceIn *) hw;
pa_operation *op;
pa_cvolume v;
paaudio *g = pa->g;
paaudio *g = &glob_paaudio;
#ifdef PA_CHECK_VERSION
pa_cvolume_init (&v);
@@ -781,22 +777,23 @@ static int qpa_ctl_in (HWVoiceIn *hw, int cmd, ...)
pa_threaded_mainloop_lock (g->mainloop);
op = pa_context_set_source_output_volume (g->context,
pa_stream_get_index (pa->stream),
/* FIXME: use the upcoming "set_source_output_{volume,mute}" */
op = pa_context_set_source_volume_by_index (g->context,
pa_stream_get_device_index (pa->stream),
&v, NULL, NULL);
if (!op) {
qpa_logerr (pa_context_errno (g->context),
"set_source_output_volume() failed\n");
"set_source_volume() failed\n");
} else {
pa_operation_unref(op);
}
op = pa_context_set_source_output_mute (g->context,
op = pa_context_set_source_mute_by_index (g->context,
pa_stream_get_index (pa->stream),
sw->vol.mute, NULL, NULL);
if (!op) {
qpa_logerr (pa_context_errno (g->context),
"set_source_output_mute() failed\n");
"set_source_mute() failed\n");
} else {
pa_operation_unref (op);
}
@@ -808,31 +805,23 @@ static int qpa_ctl_in (HWVoiceIn *hw, int cmd, ...)
}
/* common */
static PAConf glob_conf = {
.samples = 4096,
};
static void *qpa_audio_init (void)
{
paaudio *g = g_malloc(sizeof(paaudio));
g->conf = glob_conf;
g->mainloop = NULL;
g->context = NULL;
paaudio *g = &glob_paaudio;
g->mainloop = pa_threaded_mainloop_new ();
if (!g->mainloop) {
goto fail;
}
g->context = pa_context_new (pa_threaded_mainloop_get_api (g->mainloop),
g->conf.server);
g->context = pa_context_new (pa_threaded_mainloop_get_api (g->mainloop), glob_paaudio.server);
if (!g->context) {
goto fail;
}
pa_context_set_state_callback (g->context, context_state_cb, g);
if (pa_context_connect (g->context, g->conf.server, 0, NULL) < 0) {
if (pa_context_connect (g->context, glob_paaudio.server, 0, NULL) < 0) {
qpa_logerr (pa_context_errno (g->context),
"pa_context_connect() failed\n");
goto fail;
@@ -865,13 +854,12 @@ static void *qpa_audio_init (void)
pa_threaded_mainloop_unlock (g->mainloop);
return g;
return &glob_paaudio;
unlock_and_fail:
pa_threaded_mainloop_unlock (g->mainloop);
fail:
AUD_log (AUDIO_CAP, "Failed to initialize PA context");
qpa_audio_fini(g);
return NULL;
}
@@ -886,38 +874,39 @@ static void qpa_audio_fini (void *opaque)
if (g->context) {
pa_context_disconnect (g->context);
pa_context_unref (g->context);
g->context = NULL;
}
if (g->mainloop) {
pa_threaded_mainloop_free (g->mainloop);
}
g_free(g);
g->mainloop = NULL;
}
struct audio_option qpa_options[] = {
{
.name = "SAMPLES",
.tag = AUD_OPT_INT,
.valp = &glob_conf.samples,
.valp = &glob_paaudio.samples,
.descr = "buffer size in samples"
},
{
.name = "SERVER",
.tag = AUD_OPT_STR,
.valp = &glob_conf.server,
.valp = &glob_paaudio.server,
.descr = "server address"
},
{
.name = "SINK",
.tag = AUD_OPT_STR,
.valp = &glob_conf.sink,
.valp = &glob_paaudio.sink,
.descr = "sink device name"
},
{
.name = "SOURCE",
.tag = AUD_OPT_STR,
.valp = &glob_conf.source,
.valp = &glob_paaudio.source,
.descr = "source device name"
},
{ /* End of list */ }
@@ -937,7 +926,7 @@ static struct audio_pcm_ops qpa_pcm_ops = {
.ctl_in = qpa_ctl_in
};
static struct audio_driver pa_audio_driver = {
struct audio_driver pa_audio_driver = {
.name = "pa",
.descr = "http://www.pulseaudio.org/",
.options = qpa_options,
@@ -951,9 +940,3 @@ static struct audio_driver pa_audio_driver = {
.voice_size_in = sizeof (PAVoiceIn),
.ctl_caps = VOICE_VOLUME_CAP
};
static void register_audio_pa(void)
{
audio_driver_register(&pa_audio_driver);
}
type_init(register_audio_pa);

@@ -71,12 +71,6 @@ void NAME (void *opaque, struct st_sample *ibuf, struct st_sample *obuf,
while (rate->ipos <= (rate->opos >> 32)) {
ilast = *ibuf++;
rate->ipos++;
/* if ipos overflow, there is a infinite loop */
if (rate->ipos == 0xffffffff) {
rate->ipos = 1;
rate->opos = rate->opos & 0xffffffff;
}
/* See if we finished the input buffer yet */
if (ibuf >= iend) {
goto the_end;

@@ -21,7 +21,6 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include <SDL.h>
#include <SDL_thread.h>
#include "qemu-common.h"
@@ -38,14 +37,10 @@
#define AUDIO_CAP "sdl"
#include "audio_int.h"
#define USE_SEMAPHORE (SDL_MAJOR_VERSION < 2)
typedef struct SDLVoiceOut {
HWVoiceOut hw;
int live;
#if USE_SEMAPHORE
int rpos;
#endif
int decr;
} SDLVoiceOut;
@@ -57,12 +52,9 @@ static struct {
static struct SDLAudioState {
int exit;
#if USE_SEMAPHORE
SDL_mutex *mutex;
SDL_sem *sem;
#endif
int initialized;
bool driver_created;
} glob_sdl;
typedef struct SDLAudioState SDLAudioState;
@@ -79,45 +71,31 @@ static void GCC_FMT_ATTR (1, 2) sdl_logerr (const char *fmt, ...)
static int sdl_lock (SDLAudioState *s, const char *forfn)
{
#if USE_SEMAPHORE
if (SDL_LockMutex (s->mutex)) {
sdl_logerr ("SDL_LockMutex for %s failed\n", forfn);
return -1;
}
#else
SDL_LockAudio();
#endif
return 0;
}
static int sdl_unlock (SDLAudioState *s, const char *forfn)
{
#if USE_SEMAPHORE
if (SDL_UnlockMutex (s->mutex)) {
sdl_logerr ("SDL_UnlockMutex for %s failed\n", forfn);
return -1;
}
#else
SDL_UnlockAudio();
#endif
return 0;
}
static int sdl_post (SDLAudioState *s, const char *forfn)
{
#if USE_SEMAPHORE
if (SDL_SemPost (s->sem)) {
sdl_logerr ("SDL_SemPost for %s failed\n", forfn);
return -1;
}
#endif
return 0;
}
#if USE_SEMAPHORE
static int sdl_wait (SDLAudioState *s, const char *forfn)
{
if (SDL_SemWait (s->sem)) {
@@ -126,7 +104,6 @@ static int sdl_wait (SDLAudioState *s, const char *forfn)
}
return 0;
}
#endif
static int sdl_unlock_and_post (SDLAudioState *s, const char *forfn)
{
@@ -267,7 +244,6 @@ static void sdl_callback (void *opaque, Uint8 *buf, int len)
int to_mix, decr;
/* dolog ("in callback samples=%d\n", samples); */
#if USE_SEMAPHORE
sdl_wait (s, "sdl_callback");
if (s->exit) {
return;
@@ -277,7 +253,7 @@ static void sdl_callback (void *opaque, Uint8 *buf, int len)
return;
}
if (audio_bug(__func__, sdl->live < 0 || sdl->live > hw->samples)) {
if (audio_bug (AUDIO_FUNC, sdl->live < 0 || sdl->live > hw->samples)) {
dolog ("sdl->live=%d hw->samples=%d\n",
sdl->live, hw->samples);
return;
@@ -286,11 +262,6 @@ static void sdl_callback (void *opaque, Uint8 *buf, int len)
if (!sdl->live) {
goto again;
}
#else
if (s->exit || !sdl->live) {
break;
}
#endif
/* dolog ("in callback live=%d\n", live); */
to_mix = audio_MIN (samples, sdl->live);
@@ -301,11 +272,7 @@ static void sdl_callback (void *opaque, Uint8 *buf, int len)
/* dolog ("in callback to_mix %d, chunk %d\n", to_mix, chunk); */
hw->clip (buf, src, chunk);
#if USE_SEMAPHORE
sdl->rpos = (sdl->rpos + chunk) % hw->samples;
#else
hw->rpos = (hw->rpos + chunk) % hw->samples;
#endif
to_mix -= chunk;
buf += chunk << hw->info.shift;
}
@@ -313,21 +280,12 @@ static void sdl_callback (void *opaque, Uint8 *buf, int len)
sdl->live -= decr;
sdl->decr += decr;
#if USE_SEMAPHORE
again:
if (sdl_unlock (s, "sdl_callback")) {
return;
}
#endif
}
/* dolog ("done len=%d\n", len); */
#if (SDL_MAJOR_VERSION >= 2)
/* SDL2 does not clear the remaining buffer for us, so do it on our own */
if (samples) {
memset(buf, 0, samples << hw->info.shift);
}
#endif
}
static int sdl_write_out (SWVoiceOut *sw, void *buf, int len)
@@ -355,12 +313,8 @@ static int sdl_run_out (HWVoiceOut *hw, int live)
decr = audio_MIN (sdl->decr, live);
sdl->decr -= decr;
#if USE_SEMAPHORE
sdl->live = live - decr;
hw->rpos = sdl->rpos;
#else
sdl->live = live;
#endif
if (sdl->live > 0) {
sdl_unlock_and_post (s, "sdl_run_out");
@@ -378,8 +332,7 @@ static void sdl_fini_out (HWVoiceOut *hw)
sdl_close (&glob_sdl);
}
static int sdl_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int sdl_init_out (HWVoiceOut *hw, struct audsettings *as)
{
SDLVoiceOut *sdl = (SDLVoiceOut *) hw;
SDLAudioState *s = &glob_sdl;
@@ -439,17 +392,12 @@ static int sdl_ctl_out (HWVoiceOut *hw, int cmd, ...)
static void *sdl_audio_init (void)
{
SDLAudioState *s = &glob_sdl;
if (s->driver_created) {
sdl_logerr("Can't create multiple sdl backends\n");
return NULL;
}
if (SDL_InitSubSystem (SDL_INIT_AUDIO)) {
sdl_logerr ("SDL failed to initialize audio subsystem\n");
return NULL;
}
#if USE_SEMAPHORE
s->mutex = SDL_CreateMutex ();
if (!s->mutex) {
sdl_logerr ("Failed to create SDL mutex\n");
@@ -464,9 +412,7 @@ static void *sdl_audio_init (void)
SDL_QuitSubSystem (SDL_INIT_AUDIO);
return NULL;
}
#endif
s->driver_created = true;
return s;
}
@@ -474,12 +420,9 @@ static void sdl_audio_fini (void *opaque)
{
SDLAudioState *s = opaque;
sdl_close (s);
#if USE_SEMAPHORE
SDL_DestroySemaphore (s->sem);
SDL_DestroyMutex (s->mutex);
#endif
SDL_QuitSubSystem (SDL_INIT_AUDIO);
s->driver_created = false;
}
static struct audio_option sdl_options[] = {
@@ -500,7 +443,7 @@ static struct audio_pcm_ops sdl_pcm_ops = {
.ctl_out = sdl_ctl_out,
};
static struct audio_driver sdl_audio_driver = {
struct audio_driver sdl_audio_driver = {
.name = "sdl",
.descr = "SDL http://www.libsdl.org",
.options = sdl_options,
@@ -513,9 +456,3 @@ static struct audio_driver sdl_audio_driver = {
.voice_size_out = sizeof (SDLVoiceOut),
.voice_size_in = 0
};
static void register_audio_sdl(void)
{
audio_driver_register(&sdl_audio_driver);
}
type_init(register_audio_sdl);

@@ -17,10 +17,7 @@
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "hw/hw.h"
#include "qemu/host-utils.h"
#include "qemu/error-report.h"
#include "qemu/timer.h"
#include "ui/qemu-spice.h"
@@ -105,11 +102,11 @@ static int rate_get_samples (struct audio_pcm_info *info, SpiceRateCtl *rate)
now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
ticks = now - rate->start_ticks;
bytes = muldiv64(ticks, info->bytes_per_second, NANOSECONDS_PER_SECOND);
bytes = muldiv64 (ticks, info->bytes_per_second, get_ticks_per_sec ());
samples = (bytes - rate->bytes_sent) >> info->shift;
if (samples < 0 || samples > 65536) {
error_report("Resetting rate control (%" PRId64 " samples)", samples);
rate_start(rate);
rate_start (rate);
samples = 0;
}
rate->bytes_sent += samples << info->shift;
@@ -118,8 +115,7 @@ static int rate_get_samples (struct audio_pcm_info *info, SpiceRateCtl *rate)
/* playback */
static int line_out_init(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int line_out_init (HWVoiceOut *hw, struct audsettings *as)
{
SpiceVoiceOut *out = container_of (hw, SpiceVoiceOut, hw);
struct audsettings settings;
@@ -247,7 +243,7 @@ static int line_out_ctl (HWVoiceOut *hw, int cmd, ...)
/* record */
static int line_in_init(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int line_in_init (HWVoiceIn *hw, struct audsettings *as)
{
SpiceVoiceIn *in = container_of (hw, SpiceVoiceIn, hw);
struct audsettings settings;
@@ -391,7 +387,7 @@ static struct audio_pcm_ops audio_callbacks = {
.ctl_in = line_in_ctl,
};
static struct audio_driver spice_audio_driver = {
struct audio_driver spice_audio_driver = {
.name = "spice",
.descr = "spice audio driver",
.options = audio_options,
@@ -411,9 +407,3 @@ void qemu_spice_audio_init (void)
{
spice_audio_driver.can_be_default = 1;
}
static void register_audio_spice(void)
{
audio_driver_register(&spice_audio_driver);
}
type_init(register_audio_spice);

@@ -1,17 +0,0 @@
# See docs/devel/tracing.txt for syntax documentation.
# audio/alsaaudio.c
alsa_revents(int revents) "revents = %d"
alsa_pollout(int i, int fd) "i = %d fd = %d"
alsa_set_handler(int events, int index, int fd, int err) "events=0x%x index=%d fd=%d err=%d"
alsa_wrote_zero(int len) "Failed to write %d frames (wrote zero)"
alsa_read_zero(long len) "Failed to read %ld frames (read zero)"
alsa_xrun_out(void) "Recovering from playback xrun"
alsa_xrun_in(void) "Recovering from capture xrun"
alsa_resume_out(void) "Resuming suspended output stream"
alsa_resume_in(void) "Resuming suspended input stream"
alsa_no_frames(int state) "No frames available and ALSA state is %d"
# audio/ossaudio.c
oss_version(int version) "OSS version = 0x%x"
oss_invalid_available_size(int size, int bufsize) "Invalid available size, size=%d bufsize=%d"

@@ -21,8 +21,7 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu/host-utils.h"
#include "hw/hw.h"
#include "qemu/timer.h"
#include "audio.h"
@@ -37,10 +36,15 @@ typedef struct WAVVoiceOut {
int total_samples;
} WAVVoiceOut;
typedef struct {
static struct {
struct audsettings settings;
const char *wav_path;
} WAVConf;
} conf = {
.settings.freq = 44100,
.settings.nchannels = 2,
.settings.fmt = AUD_FMT_S16,
.wav_path = "qemu.wav"
};
static int wav_run_out (HWVoiceOut *hw, int live)
{
@@ -51,7 +55,7 @@ static int wav_run_out (HWVoiceOut *hw, int live)
int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
int64_t ticks = now - wav->old_ticks;
int64_t bytes =
muldiv64(ticks, hw->info.bytes_per_second, NANOSECONDS_PER_SECOND);
muldiv64 (ticks, hw->info.bytes_per_second, get_ticks_per_sec ());
if (bytes > INT_MAX) {
samples = INT_MAX >> hw->info.shift;
@@ -101,8 +105,7 @@ static void le_store (uint8_t *buf, uint32_t val, int len)
}
}
static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int wav_init_out (HWVoiceOut *hw, struct audsettings *as)
{
WAVVoiceOut *wav = (WAVVoiceOut *) hw;
int bits16 = 0, stereo = 0;
@@ -112,8 +115,9 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
0x02, 0x00, 0x44, 0xac, 0x00, 0x00, 0x10, 0xb1, 0x02, 0x00, 0x04,
0x00, 0x10, 0x00, 0x64, 0x61, 0x74, 0x61, 0x00, 0x00, 0x00, 0x00
};
WAVConf *conf = drv_opaque;
struct audsettings wav_as = conf->settings;
struct audsettings wav_as = conf.settings;
(void) as;
stereo = wav_as.nchannels == 2;
switch (wav_as.fmt) {
@@ -139,7 +143,7 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
audio_pcm_init_info (&hw->info, &wav_as);
hw->samples = 1024;
wav->pcm_buf = audio_calloc(__func__, hw->samples, 1 << hw->info.shift);
wav->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
if (!wav->pcm_buf) {
dolog ("Could not allocate buffer (%d bytes)\n",
hw->samples << hw->info.shift);
@@ -151,10 +155,10 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
le_store (hdr + 28, hw->info.freq << (bits16 + stereo), 4);
le_store (hdr + 32, 1 << (bits16 + stereo), 2);
wav->f = fopen (conf->wav_path, "wb");
wav->f = fopen (conf.wav_path, "wb");
if (!wav->f) {
dolog ("Failed to open wave file `%s'\nReason: %s\n",
conf->wav_path, strerror (errno));
conf.wav_path, strerror (errno));
g_free (wav->pcm_buf);
wav->pcm_buf = NULL;
return -1;
@@ -222,49 +226,40 @@ static int wav_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static WAVConf glob_conf = {
.settings.freq = 44100,
.settings.nchannels = 2,
.settings.fmt = AUD_FMT_S16,
.wav_path = "qemu.wav"
};
static void *wav_audio_init (void)
{
WAVConf *conf = g_malloc(sizeof(WAVConf));
*conf = glob_conf;
return conf;
return &conf;
}
static void wav_audio_fini (void *opaque)
{
(void) opaque;
ldebug ("wav_fini");
g_free(opaque);
}
static struct audio_option wav_options[] = {
{
.name = "FREQUENCY",
.tag = AUD_OPT_INT,
.valp = &glob_conf.settings.freq,
.valp = &conf.settings.freq,
.descr = "Frequency"
},
{
.name = "FORMAT",
.tag = AUD_OPT_FMT,
.valp = &glob_conf.settings.fmt,
.valp = &conf.settings.fmt,
.descr = "Format"
},
{
.name = "DAC_FIXED_CHANNELS",
.tag = AUD_OPT_INT,
.valp = &glob_conf.settings.nchannels,
.valp = &conf.settings.nchannels,
.descr = "Number of channels (1 - mono, 2 - stereo)"
},
{
.name = "PATH",
.tag = AUD_OPT_STR,
.valp = &glob_conf.wav_path,
.valp = &conf.wav_path,
.descr = "Path to wave file"
},
{ /* End of list */ }
@@ -278,7 +273,7 @@ static struct audio_pcm_ops wav_pcm_ops = {
.ctl_out = wav_ctl_out,
};
static struct audio_driver wav_audio_driver = {
struct audio_driver wav_audio_driver = {
.name = "wav",
.descr = "WAV renderer http://wikipedia.org/wiki/WAV",
.options = wav_options,
@@ -291,9 +286,3 @@ static struct audio_driver wav_audio_driver = {
.voice_size_out = sizeof (WAVVoiceOut),
.voice_size_in = 0
};
static void register_audio_wav(void)
{
audio_driver_register(&wav_audio_driver);
}
type_init(register_audio_wav);

@@ -1,8 +1,5 @@
#include "qemu/osdep.h"
#include "hw/hw.h"
#include "monitor/monitor.h"
#include "qapi/error.h"
#include "qemu/error-report.h"
#include "audio.h"
typedef struct {
@@ -89,7 +86,6 @@ static void wav_capture_destroy (void *opaque)
WAVState *wav = opaque;
AUD_del_capture (wav->cap, wav);
g_free (wav);
}
static void wav_capture_info (void *opaque)

audio/winwaveaudio.c Normal file (717 lines)

@@ -0,0 +1,717 @@
/* public domain */
#include "qemu-common.h"
#include "sysemu/sysemu.h"
#include "audio.h"
#define AUDIO_CAP "winwave"
#include "audio_int.h"
#include <windows.h>
#include <mmsystem.h>
#include "audio_win_int.h"
static struct {
int dac_headers;
int dac_samples;
int adc_headers;
int adc_samples;
} conf = {
.dac_headers = 4,
.dac_samples = 1024,
.adc_headers = 4,
.adc_samples = 1024
};
typedef struct {
HWVoiceOut hw;
HWAVEOUT hwo;
WAVEHDR *hdrs;
HANDLE event;
void *pcm_buf;
int avail;
int pending;
int curhdr;
int paused;
CRITICAL_SECTION crit_sect;
} WaveVoiceOut;
typedef struct {
HWVoiceIn hw;
HWAVEIN hwi;
WAVEHDR *hdrs;
HANDLE event;
void *pcm_buf;
int curhdr;
int paused;
int rpos;
int avail;
CRITICAL_SECTION crit_sect;
} WaveVoiceIn;
static void winwave_log_mmresult (MMRESULT mr)
{
const char *str = "BUG";
switch (mr) {
case MMSYSERR_NOERROR:
str = "Success";
break;
case MMSYSERR_INVALHANDLE:
str = "Specified device handle is invalid";
break;
case MMSYSERR_BADDEVICEID:
str = "Specified device id is out of range";
break;
case MMSYSERR_NODRIVER:
str = "No device driver is present";
break;
case MMSYSERR_NOMEM:
str = "Unable to allocate or lock memory";
break;
case WAVERR_SYNC:
str = "Device is synchronous but waveOutOpen was called "
"without using the WINWAVE_ALLOWSYNC flag";
break;
case WAVERR_UNPREPARED:
str = "The data block pointed to by the pwh parameter "
"hasn't been prepared";
break;
case WAVERR_STILLPLAYING:
str = "There are still buffers in the queue";
break;
default:
dolog ("Reason: Unknown (MMRESULT %#x)\n", mr);
return;
}
dolog ("Reason: %s\n", str);
}
static void GCC_FMT_ATTR (2, 3) winwave_logerr (
MMRESULT mr,
const char *fmt,
...
)
{
va_list ap;
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
AUD_log (NULL, " failed\n");
winwave_log_mmresult (mr);
}
static void winwave_anal_close_out (WaveVoiceOut *wave)
{
MMRESULT mr;
mr = waveOutClose (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutClose");
}
wave->hwo = NULL;
}
static void CALLBACK winwave_callback_out (
HWAVEOUT hwo,
UINT msg,
DWORD_PTR dwInstance,
DWORD_PTR dwParam1,
DWORD_PTR dwParam2
)
{
WaveVoiceOut *wave = (WaveVoiceOut *) dwInstance;
switch (msg) {
case WOM_DONE:
{
WAVEHDR *h = (WAVEHDR *) dwParam1;
if (!h->dwUser) {
h->dwUser = 1;
EnterCriticalSection (&wave->crit_sect);
{
wave->avail += conf.dac_samples;
}
LeaveCriticalSection (&wave->crit_sect);
if (wave->hw.poll_mode) {
if (!SetEvent (wave->event)) {
dolog ("DAC SetEvent failed %lx\n", GetLastError ());
}
}
}
}
break;
case WOM_CLOSE:
case WOM_OPEN:
break;
default:
dolog ("unknown wave out callback msg %x\n", msg);
}
}
static int winwave_init_out (HWVoiceOut *hw, struct audsettings *as)
{
int i;
int err;
MMRESULT mr;
WAVEFORMATEX wfx;
WaveVoiceOut *wave;
wave = (WaveVoiceOut *) hw;
InitializeCriticalSection (&wave->crit_sect);
err = waveformat_from_audio_settings (&wfx, as);
if (err) {
goto err0;
}
mr = waveOutOpen (&wave->hwo, WAVE_MAPPER, &wfx,
(DWORD_PTR) winwave_callback_out,
(DWORD_PTR) wave, CALLBACK_FUNCTION);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutOpen");
goto err1;
}
wave->hdrs = audio_calloc (AUDIO_FUNC, conf.dac_headers,
sizeof (*wave->hdrs));
if (!wave->hdrs) {
goto err2;
}
audio_pcm_init_info (&hw->info, as);
hw->samples = conf.dac_samples * conf.dac_headers;
wave->avail = hw->samples;
wave->pcm_buf = audio_calloc (AUDIO_FUNC, conf.dac_samples,
conf.dac_headers << hw->info.shift);
if (!wave->pcm_buf) {
goto err3;
}
for (i = 0; i < conf.dac_headers; ++i) {
WAVEHDR *h = &wave->hdrs[i];
h->dwUser = 0;
h->dwBufferLength = conf.dac_samples << hw->info.shift;
h->lpData = advance (wave->pcm_buf, i * h->dwBufferLength);
h->dwFlags = 0;
mr = waveOutPrepareHeader (wave->hwo, h, sizeof (*h));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutPrepareHeader(%d)", i);
goto err4;
}
}
return 0;
err4:
g_free (wave->pcm_buf);
err3:
g_free (wave->hdrs);
err2:
winwave_anal_close_out (wave);
err1:
err0:
return -1;
}
static int winwave_write (SWVoiceOut *sw, void *buf, int len)
{
return audio_pcm_sw_write (sw, buf, len);
}
static int winwave_run_out (HWVoiceOut *hw, int live)
{
WaveVoiceOut *wave = (WaveVoiceOut *) hw;
int decr;
int doreset;
EnterCriticalSection (&wave->crit_sect);
{
decr = audio_MIN (live, wave->avail);
decr = audio_pcm_hw_clip_out (hw, wave->pcm_buf, decr, wave->pending);
wave->pending += decr;
wave->avail -= decr;
}
LeaveCriticalSection (&wave->crit_sect);
doreset = hw->poll_mode && (wave->pending >= conf.dac_samples);
if (doreset && !ResetEvent (wave->event)) {
dolog ("DAC ResetEvent failed %lx\n", GetLastError ());
}
while (wave->pending >= conf.dac_samples) {
MMRESULT mr;
WAVEHDR *h = &wave->hdrs[wave->curhdr];
h->dwUser = 0;
mr = waveOutWrite (wave->hwo, h, sizeof (*h));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutWrite(%d)", wave->curhdr);
break;
}
wave->pending -= conf.dac_samples;
wave->curhdr = (wave->curhdr + 1) % conf.dac_headers;
}
return decr;
}
static void winwave_poll (void *opaque)
{
(void) opaque;
audio_run ("winwave_poll");
}
static void winwave_fini_out (HWVoiceOut *hw)
{
int i;
MMRESULT mr;
WaveVoiceOut *wave = (WaveVoiceOut *) hw;
mr = waveOutReset (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutReset");
}
for (i = 0; i < conf.dac_headers; ++i) {
mr = waveOutUnprepareHeader (wave->hwo, &wave->hdrs[i],
sizeof (wave->hdrs[i]));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutUnprepareHeader(%d)", i);
}
}
winwave_anal_close_out (wave);
if (wave->event) {
qemu_del_wait_object (wave->event, winwave_poll, wave);
if (!CloseHandle (wave->event)) {
dolog ("DAC CloseHandle failed %lx\n", GetLastError ());
}
wave->event = NULL;
}
g_free (wave->pcm_buf);
wave->pcm_buf = NULL;
g_free (wave->hdrs);
wave->hdrs = NULL;
}
static int winwave_ctl_out (HWVoiceOut *hw, int cmd, ...)
{
MMRESULT mr;
WaveVoiceOut *wave = (WaveVoiceOut *) hw;
switch (cmd) {
case VOICE_ENABLE:
{
va_list ap;
int poll_mode;
va_start (ap, cmd);
poll_mode = va_arg (ap, int);
va_end (ap);
if (poll_mode && !wave->event) {
wave->event = CreateEvent (NULL, TRUE, TRUE, NULL);
if (!wave->event) {
dolog ("DAC CreateEvent: %lx, poll mode will be disabled\n",
GetLastError ());
}
}
if (wave->event) {
int ret;
ret = qemu_add_wait_object (wave->event, winwave_poll, wave);
hw->poll_mode = (ret == 0);
}
else {
hw->poll_mode = 0;
}
wave->paused = 0;
}
return 0;
case VOICE_DISABLE:
if (!wave->paused) {
mr = waveOutReset (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutReset");
}
else {
wave->paused = 1;
}
}
if (wave->event) {
qemu_del_wait_object (wave->event, winwave_poll, wave);
}
return 0;
}
return -1;
}
static void winwave_anal_close_in (WaveVoiceIn *wave)
{
MMRESULT mr;
mr = waveInClose (wave->hwi);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInClose");
}
wave->hwi = NULL;
}
static void CALLBACK winwave_callback_in (
HWAVEIN *hwi,
UINT msg,
DWORD_PTR dwInstance,
DWORD_PTR dwParam1,
DWORD_PTR dwParam2
)
{
WaveVoiceIn *wave = (WaveVoiceIn *) dwInstance;
switch (msg) {
case WIM_DATA:
{
WAVEHDR *h = (WAVEHDR *) dwParam1;
if (!h->dwUser) {
h->dwUser = 1;
EnterCriticalSection (&wave->crit_sect);
{
wave->avail += conf.adc_samples;
}
LeaveCriticalSection (&wave->crit_sect);
if (wave->hw.poll_mode) {
if (!SetEvent (wave->event)) {
dolog ("ADC SetEvent failed %lx\n", GetLastError ());
}
}
}
}
break;
case WIM_CLOSE:
case WIM_OPEN:
break;
default:
dolog ("unknown wave in callback msg %x\n", msg);
}
}
static void winwave_add_buffers (WaveVoiceIn *wave, int samples)
{
int doreset;
doreset = wave->hw.poll_mode && (samples >= conf.adc_samples);
if (doreset && !ResetEvent (wave->event)) {
dolog ("ADC ResetEvent failed %lx\n", GetLastError ());
}
while (samples >= conf.adc_samples) {
MMRESULT mr;
WAVEHDR *h = &wave->hdrs[wave->curhdr];
h->dwUser = 0;
mr = waveInAddBuffer (wave->hwi, h, sizeof (*h));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInAddBuffer(%d)", wave->curhdr);
}
wave->curhdr = (wave->curhdr + 1) % conf.adc_headers;
samples -= conf.adc_samples;
}
}
static int winwave_init_in (HWVoiceIn *hw, struct audsettings *as)
{
int i;
int err;
MMRESULT mr;
WAVEFORMATEX wfx;
WaveVoiceIn *wave;
wave = (WaveVoiceIn *) hw;
InitializeCriticalSection (&wave->crit_sect);
err = waveformat_from_audio_settings (&wfx, as);
if (err) {
goto err0;
}
mr = waveInOpen (&wave->hwi, WAVE_MAPPER, &wfx,
(DWORD_PTR) winwave_callback_in,
(DWORD_PTR) wave, CALLBACK_FUNCTION);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInOpen");
goto err1;
}
wave->hdrs = audio_calloc (AUDIO_FUNC, conf.dac_headers,
sizeof (*wave->hdrs));
if (!wave->hdrs) {
goto err2;
}
audio_pcm_init_info (&hw->info, as);
hw->samples = conf.adc_samples * conf.adc_headers;
wave->avail = 0;
wave->pcm_buf = audio_calloc (AUDIO_FUNC, conf.adc_samples,
conf.adc_headers << hw->info.shift);
if (!wave->pcm_buf) {
goto err3;
}
for (i = 0; i < conf.adc_headers; ++i) {
WAVEHDR *h = &wave->hdrs[i];
h->dwUser = 0;
h->dwBufferLength = conf.adc_samples << hw->info.shift;
h->lpData = advance (wave->pcm_buf, i * h->dwBufferLength);
h->dwFlags = 0;
mr = waveInPrepareHeader (wave->hwi, h, sizeof (*h));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInPrepareHeader(%d)", i);
goto err4;
}
}
wave->paused = 1;
winwave_add_buffers (wave, hw->samples);
return 0;
err4:
g_free (wave->pcm_buf);
err3:
g_free (wave->hdrs);
err2:
winwave_anal_close_in (wave);
err1:
err0:
return -1;
}
static void winwave_fini_in (HWVoiceIn *hw)
{
int i;
MMRESULT mr;
WaveVoiceIn *wave = (WaveVoiceIn *) hw;
mr = waveInReset (wave->hwi);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInReset");
}
for (i = 0; i < conf.adc_headers; ++i) {
mr = waveInUnprepareHeader (wave->hwi, &wave->hdrs[i],
sizeof (wave->hdrs[i]));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInUnprepareHeader(%d)", i);
}
}
winwave_anal_close_in (wave);
if (wave->event) {
qemu_del_wait_object (wave->event, winwave_poll, wave);
if (!CloseHandle (wave->event)) {
dolog ("ADC CloseHandle failed %lx\n", GetLastError ());
}
wave->event = NULL;
}
g_free (wave->pcm_buf);
wave->pcm_buf = NULL;
g_free (wave->hdrs);
wave->hdrs = NULL;
}
static int winwave_run_in (HWVoiceIn *hw)
{
WaveVoiceIn *wave = (WaveVoiceIn *) hw;
int live = audio_pcm_hw_get_live_in (hw);
int dead = hw->samples - live;
int decr, ret;
if (!dead) {
return 0;
}
EnterCriticalSection (&wave->crit_sect);
{
decr = audio_MIN (dead, wave->avail);
wave->avail -= decr;
}
LeaveCriticalSection (&wave->crit_sect);
ret = decr;
while (decr) {
int left = hw->samples - hw->wpos;
int conv = audio_MIN (left, decr);
hw->conv (hw->conv_buf + hw->wpos,
advance (wave->pcm_buf, wave->rpos << hw->info.shift),
conv);
wave->rpos = (wave->rpos + conv) % hw->samples;
hw->wpos = (hw->wpos + conv) % hw->samples;
decr -= conv;
}
winwave_add_buffers (wave, ret);
return ret;
}
static int winwave_read (SWVoiceIn *sw, void *buf, int size)
{
return audio_pcm_sw_read (sw, buf, size);
}
static int winwave_ctl_in (HWVoiceIn *hw, int cmd, ...)
{
MMRESULT mr;
WaveVoiceIn *wave = (WaveVoiceIn *) hw;
switch (cmd) {
case VOICE_ENABLE:
{
va_list ap;
int poll_mode;
va_start (ap, cmd);
poll_mode = va_arg (ap, int);
va_end (ap);
if (poll_mode && !wave->event) {
wave->event = CreateEvent (NULL, TRUE, TRUE, NULL);
if (!wave->event) {
dolog ("ADC CreateEvent: %lx, poll mode will be disabled\n",
GetLastError ());
}
}
if (wave->event) {
int ret;
ret = qemu_add_wait_object (wave->event, winwave_poll, wave);
hw->poll_mode = (ret == 0);
}
else {
hw->poll_mode = 0;
}
if (wave->paused) {
mr = waveInStart (wave->hwi);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInStart");
}
wave->paused = 0;
}
}
return 0;
case VOICE_DISABLE:
if (!wave->paused) {
mr = waveInStop (wave->hwi);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInStop");
}
else {
wave->paused = 1;
}
}
if (wave->event) {
qemu_del_wait_object (wave->event, winwave_poll, wave);
}
return 0;
}
return 0;
}
static void *winwave_audio_init (void)
{
return &conf;
}
static void winwave_audio_fini (void *opaque)
{
(void) opaque;
}
static struct audio_option winwave_options[] = {
{
.name = "DAC_HEADERS",
.tag = AUD_OPT_INT,
.valp = &conf.dac_headers,
.descr = "DAC number of headers",
},
{
.name = "DAC_SAMPLES",
.tag = AUD_OPT_INT,
.valp = &conf.dac_samples,
.descr = "DAC number of samples per header",
},
{
.name = "ADC_HEADERS",
.tag = AUD_OPT_INT,
.valp = &conf.adc_headers,
.descr = "ADC number of headers",
},
{
.name = "ADC_SAMPLES",
.tag = AUD_OPT_INT,
.valp = &conf.adc_samples,
.descr = "ADC number of samples per header",
},
{ /* End of list */ }
};
static struct audio_pcm_ops winwave_pcm_ops = {
.init_out = winwave_init_out,
.fini_out = winwave_fini_out,
.run_out = winwave_run_out,
.write = winwave_write,
.ctl_out = winwave_ctl_out,
.init_in = winwave_init_in,
.fini_in = winwave_fini_in,
.run_in = winwave_run_in,
.read = winwave_read,
.ctl_in = winwave_ctl_in
};
struct audio_driver winwave_audio_driver = {
.name = "winwave",
.descr = "Windows Waveform Audio http://msdn.microsoft.com",
.options = winwave_options,
.init = winwave_audio_init,
.fini = winwave_audio_fini,
.pcm_ops = &winwave_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (WaveVoiceOut),
.voice_size_in = sizeof (WaveVoiceIn)
};

@@ -1,18 +1,11 @@
common-obj-y += rng.o rng-egd.o
common-obj-$(CONFIG_POSIX) += rng-random.o
common-obj-y += msmouse.o
common-obj-$(CONFIG_BRLAPI) += baum.o
baum.o-cflags := $(SDL_CFLAGS)
common-obj-$(CONFIG_TPM) += tpm.o
common-obj-y += hostmem.o hostmem-ram.o
common-obj-$(CONFIG_LINUX) += hostmem-file.o
common-obj-y += cryptodev.o
common-obj-y += cryptodev-builtin.o
ifeq ($(CONFIG_VIRTIO),y)
common-obj-y += cryptodev-vhost.o
common-obj-$(call land,$(CONFIG_VHOST_USER),$(CONFIG_LINUX)) += \
cryptodev-vhost-user.o
endif
common-obj-$(CONFIG_LINUX) += hostmem-memfd.o

backends/baum.c Normal file (635 lines)

@@ -0,0 +1,635 @@
/*
* QEMU Baum Braille Device
*
* Copyright (c) 2008 Samuel Thibault
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "sysemu/char.h"
#include "qemu/timer.h"
#include "hw/usb.h"
#include <brlapi.h>
#include <brlapi_constants.h>
#include <brlapi_keycodes.h>
#ifdef CONFIG_SDL
#include <SDL_syswm.h>
#endif
#if 0
#define DPRINTF(fmt, ...) \
printf(fmt, ## __VA_ARGS__)
#else
#define DPRINTF(fmt, ...)
#endif
#define ESC 0x1B
#define BAUM_REQ_DisplayData 0x01
#define BAUM_REQ_GetVersionNumber 0x05
#define BAUM_REQ_GetKeys 0x08
#define BAUM_REQ_SetMode 0x12
#define BAUM_REQ_SetProtocol 0x15
#define BAUM_REQ_GetDeviceIdentity 0x84
#define BAUM_REQ_GetSerialNumber 0x8A
#define BAUM_RSP_CellCount 0x01
#define BAUM_RSP_VersionNumber 0x05
#define BAUM_RSP_ModeSetting 0x11
#define BAUM_RSP_CommunicationChannel 0x16
#define BAUM_RSP_PowerdownSignal 0x17
#define BAUM_RSP_HorizontalSensors 0x20
#define BAUM_RSP_VerticalSensors 0x21
#define BAUM_RSP_RoutingKeys 0x22
#define BAUM_RSP_Switches 0x23
#define BAUM_RSP_TopKeys 0x24
#define BAUM_RSP_HorizontalSensor 0x25
#define BAUM_RSP_VerticalSensor 0x26
#define BAUM_RSP_RoutingKey 0x27
#define BAUM_RSP_FrontKeys6 0x28
#define BAUM_RSP_BackKeys6 0x29
#define BAUM_RSP_CommandKeys 0x2B
#define BAUM_RSP_FrontKeys10 0x2C
#define BAUM_RSP_BackKeys10 0x2D
#define BAUM_RSP_EntryKeys 0x33
#define BAUM_RSP_JoyStick 0x34
#define BAUM_RSP_ErrorCode 0x40
#define BAUM_RSP_InfoBlock 0x42
#define BAUM_RSP_DeviceIdentity 0x84
#define BAUM_RSP_SerialNumber 0x8A
#define BAUM_RSP_BluetoothName 0x8C
#define BAUM_TL1 0x01
#define BAUM_TL2 0x02
#define BAUM_TL3 0x04
#define BAUM_TR1 0x08
#define BAUM_TR2 0x10
#define BAUM_TR3 0x20
#define BUF_SIZE 256
typedef struct {
CharDriverState *chr;
brlapi_handle_t *brlapi;
int brlapi_fd;
unsigned int x, y;
uint8_t in_buf[BUF_SIZE];
uint8_t in_buf_used;
uint8_t out_buf[BUF_SIZE];
uint8_t out_buf_used, out_buf_ptr;
QEMUTimer *cellCount_timer;
} BaumDriverState;
/* Let's assume NABCC by default */
static const uint8_t nabcc_translation[256] = {
[0] = ' ',
#ifndef BRLAPI_DOTS
#define BRLAPI_DOTS(d1,d2,d3,d4,d5,d6,d7,d8) \
((d1?BRLAPI_DOT1:0)|\
(d2?BRLAPI_DOT2:0)|\
(d3?BRLAPI_DOT3:0)|\
(d4?BRLAPI_DOT4:0)|\
(d5?BRLAPI_DOT5:0)|\
(d6?BRLAPI_DOT6:0)|\
(d7?BRLAPI_DOT7:0)|\
(d8?BRLAPI_DOT8:0))
#endif
[BRLAPI_DOTS(1,0,0,0,0,0,0,0)] = 'a',
[BRLAPI_DOTS(1,1,0,0,0,0,0,0)] = 'b',
[BRLAPI_DOTS(1,0,0,1,0,0,0,0)] = 'c',
[BRLAPI_DOTS(1,0,0,1,1,0,0,0)] = 'd',
[BRLAPI_DOTS(1,0,0,0,1,0,0,0)] = 'e',
[BRLAPI_DOTS(1,1,0,1,0,0,0,0)] = 'f',
[BRLAPI_DOTS(1,1,0,1,1,0,0,0)] = 'g',
[BRLAPI_DOTS(1,1,0,0,1,0,0,0)] = 'h',
[BRLAPI_DOTS(0,1,0,1,0,0,0,0)] = 'i',
[BRLAPI_DOTS(0,1,0,1,1,0,0,0)] = 'j',
[BRLAPI_DOTS(1,0,1,0,0,0,0,0)] = 'k',
[BRLAPI_DOTS(1,1,1,0,0,0,0,0)] = 'l',
[BRLAPI_DOTS(1,0,1,1,0,0,0,0)] = 'm',
[BRLAPI_DOTS(1,0,1,1,1,0,0,0)] = 'n',
[BRLAPI_DOTS(1,0,1,0,1,0,0,0)] = 'o',
[BRLAPI_DOTS(1,1,1,1,0,0,0,0)] = 'p',
[BRLAPI_DOTS(1,1,1,1,1,0,0,0)] = 'q',
[BRLAPI_DOTS(1,1,1,0,1,0,0,0)] = 'r',
[BRLAPI_DOTS(0,1,1,1,0,0,0,0)] = 's',
[BRLAPI_DOTS(0,1,1,1,1,0,0,0)] = 't',
[BRLAPI_DOTS(1,0,1,0,0,1,0,0)] = 'u',
[BRLAPI_DOTS(1,1,1,0,0,1,0,0)] = 'v',
[BRLAPI_DOTS(0,1,0,1,1,1,0,0)] = 'w',
[BRLAPI_DOTS(1,0,1,1,0,1,0,0)] = 'x',
[BRLAPI_DOTS(1,0,1,1,1,1,0,0)] = 'y',
[BRLAPI_DOTS(1,0,1,0,1,1,0,0)] = 'z',
[BRLAPI_DOTS(1,0,0,0,0,0,1,0)] = 'A',
[BRLAPI_DOTS(1,1,0,0,0,0,1,0)] = 'B',
[BRLAPI_DOTS(1,0,0,1,0,0,1,0)] = 'C',
[BRLAPI_DOTS(1,0,0,1,1,0,1,0)] = 'D',
[BRLAPI_DOTS(1,0,0,0,1,0,1,0)] = 'E',
[BRLAPI_DOTS(1,1,0,1,0,0,1,0)] = 'F',
[BRLAPI_DOTS(1,1,0,1,1,0,1,0)] = 'G',
[BRLAPI_DOTS(1,1,0,0,1,0,1,0)] = 'H',
[BRLAPI_DOTS(0,1,0,1,0,0,1,0)] = 'I',
[BRLAPI_DOTS(0,1,0,1,1,0,1,0)] = 'J',
[BRLAPI_DOTS(1,0,1,0,0,0,1,0)] = 'K',
[BRLAPI_DOTS(1,1,1,0,0,0,1,0)] = 'L',
[BRLAPI_DOTS(1,0,1,1,0,0,1,0)] = 'M',
[BRLAPI_DOTS(1,0,1,1,1,0,1,0)] = 'N',
[BRLAPI_DOTS(1,0,1,0,1,0,1,0)] = 'O',
[BRLAPI_DOTS(1,1,1,1,0,0,1,0)] = 'P',
[BRLAPI_DOTS(1,1,1,1,1,0,1,0)] = 'Q',
[BRLAPI_DOTS(1,1,1,0,1,0,1,0)] = 'R',
[BRLAPI_DOTS(0,1,1,1,0,0,1,0)] = 'S',
[BRLAPI_DOTS(0,1,1,1,1,0,1,0)] = 'T',
[BRLAPI_DOTS(1,0,1,0,0,1,1,0)] = 'U',
[BRLAPI_DOTS(1,1,1,0,0,1,1,0)] = 'V',
[BRLAPI_DOTS(0,1,0,1,1,1,1,0)] = 'W',
[BRLAPI_DOTS(1,0,1,1,0,1,1,0)] = 'X',
[BRLAPI_DOTS(1,0,1,1,1,1,1,0)] = 'Y',
[BRLAPI_DOTS(1,0,1,0,1,1,1,0)] = 'Z',
[BRLAPI_DOTS(0,0,1,0,1,1,0,0)] = '0',
[BRLAPI_DOTS(0,1,0,0,0,0,0,0)] = '1',
[BRLAPI_DOTS(0,1,1,0,0,0,0,0)] = '2',
[BRLAPI_DOTS(0,1,0,0,1,0,0,0)] = '3',
[BRLAPI_DOTS(0,1,0,0,1,1,0,0)] = '4',
[BRLAPI_DOTS(0,1,0,0,0,1,0,0)] = '5',
[BRLAPI_DOTS(0,1,1,0,1,0,0,0)] = '6',
[BRLAPI_DOTS(0,1,1,0,1,1,0,0)] = '7',
[BRLAPI_DOTS(0,1,1,0,0,1,0,0)] = '8',
[BRLAPI_DOTS(0,0,1,0,1,0,0,0)] = '9',
[BRLAPI_DOTS(0,0,0,1,0,1,0,0)] = '.',
[BRLAPI_DOTS(0,0,1,1,0,1,0,0)] = '+',
[BRLAPI_DOTS(0,0,1,0,0,1,0,0)] = '-',
[BRLAPI_DOTS(1,0,0,0,0,1,0,0)] = '*',
[BRLAPI_DOTS(0,0,1,1,0,0,0,0)] = '/',
[BRLAPI_DOTS(1,1,1,0,1,1,0,0)] = '(',
[BRLAPI_DOTS(0,1,1,1,1,1,0,0)] = ')',
[BRLAPI_DOTS(1,1,1,1,0,1,0,0)] = '&',
[BRLAPI_DOTS(0,0,1,1,1,1,0,0)] = '#',
[BRLAPI_DOTS(0,0,0,0,0,1,0,0)] = ',',
[BRLAPI_DOTS(0,0,0,0,1,1,0,0)] = ';',
[BRLAPI_DOTS(1,0,0,0,1,1,0,0)] = ':',
[BRLAPI_DOTS(0,1,1,1,0,1,0,0)] = '!',
[BRLAPI_DOTS(1,0,0,1,1,1,0,0)] = '?',
[BRLAPI_DOTS(0,0,0,0,1,0,0,0)] = '"',
[BRLAPI_DOTS(0,0,1,0,0,0,0,0)] ='\'',
[BRLAPI_DOTS(0,0,0,1,0,0,0,0)] = '`',
[BRLAPI_DOTS(0,0,0,1,1,0,1,0)] = '^',
[BRLAPI_DOTS(0,0,0,1,1,0,0,0)] = '~',
[BRLAPI_DOTS(0,1,0,1,0,1,1,0)] = '[',
[BRLAPI_DOTS(1,1,0,1,1,1,1,0)] = ']',
[BRLAPI_DOTS(0,1,0,1,0,1,0,0)] = '{',
[BRLAPI_DOTS(1,1,0,1,1,1,0,0)] = '}',
[BRLAPI_DOTS(1,1,1,1,1,1,0,0)] = '=',
[BRLAPI_DOTS(1,1,0,0,0,1,0,0)] = '<',
[BRLAPI_DOTS(0,0,1,1,1,0,0,0)] = '>',
[BRLAPI_DOTS(1,1,0,1,0,1,0,0)] = '$',
[BRLAPI_DOTS(1,0,0,1,0,1,0,0)] = '%',
[BRLAPI_DOTS(0,0,0,1,0,0,1,0)] = '@',
[BRLAPI_DOTS(1,1,0,0,1,1,0,0)] = '|',
[BRLAPI_DOTS(1,1,0,0,1,1,1,0)] ='\\',
[BRLAPI_DOTS(0,0,0,1,1,1,0,0)] = '_',
};
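
To make the table above concrete, here is a minimal standalone sketch of the BRLAPI_DOTS() packing. The BRLAPI_DOT1..BRLAPI_DOT8 values below are an assumption based on BrlAPI's usual convention (dot n maps to bit n-1), which is what the macro relies on; with those values, the pattern for 'm' (dots 1, 3 and 4) works out to 0x0D.
/* Standalone sketch of the BRLAPI_DOTS() packing used by the table above.
 * The BRLAPI_DOTn values are assumed to follow BrlAPI's convention. */
#include <stdio.h>
#define BRLAPI_DOT1 0x01
#define BRLAPI_DOT2 0x02
#define BRLAPI_DOT3 0x04
#define BRLAPI_DOT4 0x08
#define BRLAPI_DOT5 0x10
#define BRLAPI_DOT6 0x20
#define BRLAPI_DOT7 0x40
#define BRLAPI_DOT8 0x80
#define BRLAPI_DOTS(d1,d2,d3,d4,d5,d6,d7,d8) \
    ((d1?BRLAPI_DOT1:0)|(d2?BRLAPI_DOT2:0)|(d3?BRLAPI_DOT3:0)|(d4?BRLAPI_DOT4:0)|\
     (d5?BRLAPI_DOT5:0)|(d6?BRLAPI_DOT6:0)|(d7?BRLAPI_DOT7:0)|(d8?BRLAPI_DOT8:0))
int main(void)
{
    /* 'm' is dots 1, 3 and 4: 0x01 | 0x04 | 0x08 = 0x0d */
    printf("index for 'm': 0x%02x\n", BRLAPI_DOTS(1,0,1,1,0,0,0,0));
    /* 'M' adds dot 7, the capital marker used in this table: 0x4d */
    printf("index for 'M': 0x%02x\n", BRLAPI_DOTS(1,0,1,1,0,0,1,0));
    return 0;
}
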
/* The serial port can receive more of our data */
static void baum_accept_input(struct CharDriverState *chr)
{
BaumDriverState *baum = chr->opaque;
int room, first;
if (!baum->out_buf_used)
return;
room = qemu_chr_be_can_write(chr);
if (!room)
return;
if (room > baum->out_buf_used)
room = baum->out_buf_used;
first = BUF_SIZE - baum->out_buf_ptr;
if (room > first) {
qemu_chr_be_write(chr, baum->out_buf + baum->out_buf_ptr, first);
baum->out_buf_ptr = 0;
baum->out_buf_used -= first;
room -= first;
}
qemu_chr_be_write(chr, baum->out_buf + baum->out_buf_ptr, room);
baum->out_buf_ptr += room;
baum->out_buf_used -= room;
}
/* We want to send a packet */
static void baum_write_packet(BaumDriverState *baum, const uint8_t *buf, int len)
{
uint8_t io_buf[1 + 2 * len], *cur = io_buf;
int room;
*cur++ = ESC;
while (len--)
if ((*cur++ = *buf++) == ESC)
*cur++ = ESC;
room = qemu_chr_be_can_write(baum->chr);
len = cur - io_buf;
if (len <= room) {
/* Fits */
qemu_chr_be_write(baum->chr, io_buf, len);
} else {
int first;
uint8_t out;
/* Can't fit all, send what can be, and store the rest. */
qemu_chr_be_write(baum->chr, io_buf, room);
len -= room;
cur = io_buf + room;
if (len > BUF_SIZE - baum->out_buf_used) {
/* Can't even store it, drop the previous data... */
assert(len <= BUF_SIZE);
baum->out_buf_used = 0;
baum->out_buf_ptr = 0;
}
out = baum->out_buf_ptr;
baum->out_buf_used += len;
first = BUF_SIZE - baum->out_buf_ptr;
if (len > first) {
memcpy(baum->out_buf + out, cur, first);
out = 0;
len -= first;
cur += first;
}
memcpy(baum->out_buf + out, cur, len);
}
}
/* Called when the other end seems to have a wrong idea of our display size */
static void baum_cellCount_timer_cb(void *opaque)
{
BaumDriverState *baum = opaque;
uint8_t cell_count[] = { BAUM_RSP_CellCount, baum->x * baum->y };
DPRINTF("Timeout waiting for DisplayData, sending cell count\n");
baum_write_packet(baum, cell_count, sizeof(cell_count));
}
/* Try to interpret a whole incoming packet */
static int baum_eat_packet(BaumDriverState *baum, const uint8_t *buf, int len)
{
const uint8_t *cur = buf;
uint8_t req = 0;
if (!len--)
return 0;
if (*cur++ != ESC) {
while (*cur != ESC) {
if (!len--)
return 0;
cur++;
}
DPRINTF("Dropped %d bytes!\n", cur - buf);
}
#define EAT(c) do {\
if (!len--) \
return 0; \
if ((c = *cur++) == ESC) { \
if (!len--) \
return 0; \
if (*cur++ != ESC) { \
DPRINTF("Broken packet %#2x, tossing\n", req); \
if (timer_pending(baum->cellCount_timer)) { \
timer_del(baum->cellCount_timer); \
baum_cellCount_timer_cb(baum); \
} \
return (cur - 2 - buf); \
} \
} \
} while (0)
EAT(req);
switch (req) {
case BAUM_REQ_DisplayData:
{
uint8_t cells[baum->x * baum->y], c;
uint8_t text[baum->x * baum->y];
uint8_t zero[baum->x * baum->y];
int cursor = BRLAPI_CURSOR_OFF;
int i;
/* Allow 100ms to complete the DisplayData packet */
timer_mod(baum->cellCount_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
get_ticks_per_sec() / 10);
for (i = 0; i < baum->x * baum->y ; i++) {
EAT(c);
cells[i] = c;
if ((c & (BRLAPI_DOT7|BRLAPI_DOT8))
== (BRLAPI_DOT7|BRLAPI_DOT8)) {
cursor = i + 1;
c &= ~(BRLAPI_DOT7|BRLAPI_DOT8);
}
if (!(c = nabcc_translation[c]))
c = '?';
text[i] = c;
}
timer_del(baum->cellCount_timer);
memset(zero, 0, sizeof(zero));
brlapi_writeArguments_t wa = {
.displayNumber = BRLAPI_DISPLAY_DEFAULT,
.regionBegin = 1,
.regionSize = baum->x * baum->y,
.text = (char *)text,
.textSize = baum->x * baum->y,
.andMask = zero,
.orMask = cells,
.cursor = cursor,
.charset = (char *)"ISO-8859-1",
};
if (brlapi__write(baum->brlapi, &wa) == -1)
brlapi_perror("baum brlapi_write");
break;
}
case BAUM_REQ_SetMode:
{
uint8_t mode, setting;
DPRINTF("SetMode\n");
EAT(mode);
EAT(setting);
/* ignore */
break;
}
case BAUM_REQ_SetProtocol:
{
uint8_t protocol;
DPRINTF("SetProtocol\n");
EAT(protocol);
/* ignore */
break;
}
case BAUM_REQ_GetDeviceIdentity:
{
uint8_t identity[17] = { BAUM_RSP_DeviceIdentity,
'B','a','u','m',' ','V','a','r','i','o' };
DPRINTF("GetDeviceIdentity\n");
identity[11] = '0' + baum->x / 10;
identity[12] = '0' + baum->x % 10;
baum_write_packet(baum, identity, sizeof(identity));
break;
}
case BAUM_REQ_GetVersionNumber:
{
uint8_t version[] = { BAUM_RSP_VersionNumber, 1 }; /* ? */
DPRINTF("GetVersionNumber\n");
baum_write_packet(baum, version, sizeof(version));
break;
}
case BAUM_REQ_GetSerialNumber:
{
uint8_t serial[] = { BAUM_RSP_SerialNumber,
'0','0','0','0','0','0','0','0' };
DPRINTF("GetSerialNumber\n");
baum_write_packet(baum, serial, sizeof(serial));
break;
}
case BAUM_REQ_GetKeys:
{
DPRINTF("Get%0#2x\n", req);
/* ignore */
break;
}
default:
DPRINTF("unrecognized request %0#2x\n", req);
do
if (!len--)
return 0;
while (*cur++ != ESC);
cur--;
break;
}
return cur - buf;
}
/* The other end is writing some data. Store it and try to interpret */
static int baum_write(CharDriverState *chr, const uint8_t *buf, int len)
{
BaumDriverState *baum = chr->opaque;
int tocopy, cur, eaten, orig_len = len;
if (!len)
return 0;
if (!baum->brlapi)
return len;
while (len) {
/* Complete our buffer as much as possible */
tocopy = len;
if (tocopy > BUF_SIZE - baum->in_buf_used)
tocopy = BUF_SIZE - baum->in_buf_used;
memcpy(baum->in_buf + baum->in_buf_used, buf, tocopy);
baum->in_buf_used += tocopy;
buf += tocopy;
len -= tocopy;
/* Interpret it as much as possible */
cur = 0;
while (cur < baum->in_buf_used &&
(eaten = baum_eat_packet(baum, baum->in_buf + cur, baum->in_buf_used - cur)))
cur += eaten;
/* Shift the remainder */
if (cur) {
memmove(baum->in_buf, baum->in_buf + cur, baum->in_buf_used - cur);
baum->in_buf_used -= cur;
}
/* And continue if any data left */
}
return orig_len;
}
/* Send the key code to the other end */
static void baum_send_key(BaumDriverState *baum, uint8_t type, uint8_t value) {
uint8_t packet[] = { type, value };
DPRINTF("writing key %x %x\n", type, value);
baum_write_packet(baum, packet, sizeof(packet));
}
/* We got some data on the BrlAPI socket */
static void baum_chr_read(void *opaque)
{
BaumDriverState *baum = opaque;
brlapi_keyCode_t code;
int ret;
if (!baum->brlapi)
return;
while ((ret = brlapi__readKey(baum->brlapi, 0, &code)) == 1) {
DPRINTF("got key %"BRLAPI_PRIxKEYCODE"\n", code);
/* Emulate */
switch (code & BRLAPI_KEY_TYPE_MASK) {
case BRLAPI_KEY_TYPE_CMD:
switch (code & BRLAPI_KEY_CMD_BLK_MASK) {
case BRLAPI_KEY_CMD_ROUTE:
baum_send_key(baum, BAUM_RSP_RoutingKey, (code & BRLAPI_KEY_CMD_ARG_MASK)+1);
baum_send_key(baum, BAUM_RSP_RoutingKey, 0);
break;
case 0:
switch (code & BRLAPI_KEY_CMD_ARG_MASK) {
case BRLAPI_KEY_CMD_FWINLT:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TL2);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
case BRLAPI_KEY_CMD_FWINRT:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TR2);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
case BRLAPI_KEY_CMD_LNUP:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TR1);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
case BRLAPI_KEY_CMD_LNDN:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TR3);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
case BRLAPI_KEY_CMD_TOP:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TL1|BAUM_TR1);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
case BRLAPI_KEY_CMD_BOT:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TL3|BAUM_TR3);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
case BRLAPI_KEY_CMD_TOP_LEFT:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TL2|BAUM_TR1);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
case BRLAPI_KEY_CMD_BOT_LEFT:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TL2|BAUM_TR3);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
case BRLAPI_KEY_CMD_HOME:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TL2|BAUM_TR1|BAUM_TR3);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
case BRLAPI_KEY_CMD_PREFMENU:
baum_send_key(baum, BAUM_RSP_TopKeys, BAUM_TL1|BAUM_TL3|BAUM_TR1);
baum_send_key(baum, BAUM_RSP_TopKeys, 0);
break;
}
}
break;
case BRLAPI_KEY_TYPE_SYM:
break;
}
}
if (ret == -1 && (brlapi_errno != BRLAPI_ERROR_LIBCERR || errno != EINTR)) {
brlapi_perror("baum: brlapi_readKey");
brlapi__closeConnection(baum->brlapi);
g_free(baum->brlapi);
baum->brlapi = NULL;
}
}
static void baum_close(struct CharDriverState *chr)
{
BaumDriverState *baum = chr->opaque;
timer_free(baum->cellCount_timer);
if (baum->brlapi) {
brlapi__closeConnection(baum->brlapi);
g_free(baum->brlapi);
}
g_free(baum);
}
CharDriverState *chr_baum_init(void)
{
BaumDriverState *baum;
CharDriverState *chr;
brlapi_handle_t *handle;
#if defined(CONFIG_SDL)
#if SDL_COMPILEDVERSION < SDL_VERSIONNUM(2, 0, 0)
SDL_SysWMinfo info;
#endif
#endif
int tty;
baum = g_malloc0(sizeof(BaumDriverState));
baum->chr = chr = qemu_chr_alloc();
chr->opaque = baum;
chr->chr_write = baum_write;
chr->chr_accept_input = baum_accept_input;
chr->chr_close = baum_close;
handle = g_malloc0(brlapi_getHandleSize());
baum->brlapi = handle;
baum->brlapi_fd = brlapi__openConnection(handle, NULL, NULL);
if (baum->brlapi_fd == -1) {
brlapi_perror("baum_init: brlapi_openConnection");
goto fail_handle;
}
baum->cellCount_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, baum_cellCount_timer_cb, baum);
if (brlapi__getDisplaySize(handle, &baum->x, &baum->y) == -1) {
brlapi_perror("baum_init: brlapi_getDisplaySize");
goto fail;
}
#if defined(CONFIG_SDL)
#if SDL_COMPILEDVERSION < SDL_VERSIONNUM(2, 0, 0)
memset(&info, 0, sizeof(info));
SDL_VERSION(&info.version);
if (SDL_GetWMInfo(&info))
tty = info.info.x11.wmwindow;
else
#endif
#endif
tty = BRLAPI_TTY_DEFAULT;
if (brlapi__enterTtyMode(handle, tty, NULL) == -1) {
brlapi_perror("baum_init: brlapi_enterTtyMode");
goto fail;
}
qemu_set_fd_handler(baum->brlapi_fd, baum_chr_read, NULL, baum);
return chr;
fail:
timer_free(baum->cellCount_timer);
brlapi__closeConnection(handle);
fail_handle:
g_free(handle);
g_free(chr);
g_free(baum);
return NULL;
}
static void register_types(void)
{
register_char_driver_qapi("braille", CHARDEV_BACKEND_KIND_BRAILLE, NULL);
}
type_init(register_types);
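
A side note on the wire format implemented above: baum_write_packet() starts each frame with ESC and doubles any ESC byte that occurs inside the payload, so the worst-case frame size is 1 + 2 * len. Below is a minimal standalone sketch of that stuffing step; the helper name and the sample payload are illustrative, not part of the driver.
#include <stdint.h>
#include <stdio.h>
#define ESC 0x1B
/* Copy 'len' payload bytes into 'out', doubling ESC bytes and prefixing
 * the frame with a single ESC, as baum_write_packet() does above. */
static size_t baum_stuff(uint8_t *out, const uint8_t *in, size_t len)
{
    uint8_t *cur = out;
    *cur++ = ESC;                       /* frame start */
    while (len--) {
        if ((*cur++ = *in++) == ESC) {
            *cur++ = ESC;               /* escape payload ESC bytes */
        }
    }
    return cur - out;                   /* at most 1 + 2 * len */
}
int main(void)
{
    uint8_t payload[] = { 0x01, ESC, 0x42 };    /* arbitrary sample data */
    uint8_t frame[1 + 2 * sizeof(payload)];
    size_t n = baum_stuff(frame, payload, sizeof(payload));
    for (size_t i = 0; i < n; i++) {
        printf("%02x ", frame[i]);              /* prints: 1b 01 1b 1b 42 */
    }
    printf("\n");
    return 0;
}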

@@ -1,401 +0,0 @@
/*
* QEMU Cryptodev backend for QEMU cipher APIs
*
* Copyright (c) 2016 HUAWEI TECHNOLOGIES CO., LTD.
*
* Authors:
* Gonglei <arei.gonglei@huawei.com>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "qemu/osdep.h"
#include "sysemu/cryptodev.h"
#include "hw/boards.h"
#include "qapi/error.h"
#include "standard-headers/linux/virtio_crypto.h"
#include "crypto/cipher.h"
/**
* @TYPE_CRYPTODEV_BACKEND_BUILTIN:
* name of backend that uses QEMU cipher API
*/
#define TYPE_CRYPTODEV_BACKEND_BUILTIN "cryptodev-backend-builtin"
#define CRYPTODEV_BACKEND_BUILTIN(obj) \
OBJECT_CHECK(CryptoDevBackendBuiltin, \
(obj), TYPE_CRYPTODEV_BACKEND_BUILTIN)
typedef struct CryptoDevBackendBuiltin
CryptoDevBackendBuiltin;
typedef struct CryptoDevBackendBuiltinSession {
QCryptoCipher *cipher;
uint8_t direction; /* encryption or decryption */
uint8_t type; /* cipher? hash? aead? */
QTAILQ_ENTRY(CryptoDevBackendBuiltinSession) next;
} CryptoDevBackendBuiltinSession;
/* Max number of symmetric sessions */
#define MAX_NUM_SESSIONS 256
#define CRYPTODEV_BUITLIN_MAX_AUTH_KEY_LEN 512
#define CRYPTODEV_BUITLIN_MAX_CIPHER_KEY_LEN 64
struct CryptoDevBackendBuiltin {
CryptoDevBackend parent_obj;
CryptoDevBackendBuiltinSession *sessions[MAX_NUM_SESSIONS];
};
static void cryptodev_builtin_init(
CryptoDevBackend *backend, Error **errp)
{
/* Only support one queue */
int queues = backend->conf.peers.queues;
CryptoDevBackendClient *cc;
if (queues != 1) {
error_setg(errp,
"Only support one queue in cryptdov-builtin backend");
return;
}
cc = cryptodev_backend_new_client(
"cryptodev-builtin", NULL);
cc->info_str = g_strdup_printf("cryptodev-builtin0");
cc->queue_index = 0;
cc->type = CRYPTODEV_BACKEND_TYPE_BUILTIN;
backend->conf.peers.ccs[0] = cc;
backend->conf.crypto_services =
1u << VIRTIO_CRYPTO_SERVICE_CIPHER |
1u << VIRTIO_CRYPTO_SERVICE_HASH |
1u << VIRTIO_CRYPTO_SERVICE_MAC;
backend->conf.cipher_algo_l = 1u << VIRTIO_CRYPTO_CIPHER_AES_CBC;
backend->conf.hash_algo = 1u << VIRTIO_CRYPTO_HASH_SHA1;
/*
* Set the maximum length of a crypto request.
* Why this value? It simply avoids overflow when allocating
* memory for each crypto request.
*/
backend->conf.max_size = LONG_MAX - sizeof(CryptoDevBackendSymOpInfo);
backend->conf.max_cipher_key_len = CRYPTODEV_BUITLIN_MAX_CIPHER_KEY_LEN;
backend->conf.max_auth_key_len = CRYPTODEV_BUITLIN_MAX_AUTH_KEY_LEN;
cryptodev_backend_set_ready(backend, true);
}
static int
cryptodev_builtin_get_unused_session_index(
CryptoDevBackendBuiltin *builtin)
{
size_t i;
for (i = 0; i < MAX_NUM_SESSIONS; i++) {
if (builtin->sessions[i] == NULL) {
return i;
}
}
return -1;
}
#define AES_KEYSIZE_128 16
#define AES_KEYSIZE_192 24
#define AES_KEYSIZE_256 32
#define AES_KEYSIZE_128_XTS AES_KEYSIZE_256
#define AES_KEYSIZE_256_XTS 64
static int
cryptodev_builtin_get_aes_algo(uint32_t key_len, int mode, Error **errp)
{
int algo;
if (key_len == AES_KEYSIZE_128) {
algo = QCRYPTO_CIPHER_ALG_AES_128;
} else if (key_len == AES_KEYSIZE_192) {
algo = QCRYPTO_CIPHER_ALG_AES_192;
} else if (key_len == AES_KEYSIZE_256) { /* equals AES_KEYSIZE_128_XTS */
if (mode == QCRYPTO_CIPHER_MODE_XTS) {
algo = QCRYPTO_CIPHER_ALG_AES_128;
} else {
algo = QCRYPTO_CIPHER_ALG_AES_256;
}
} else if (key_len == AES_KEYSIZE_256_XTS) {
if (mode == QCRYPTO_CIPHER_MODE_XTS) {
algo = QCRYPTO_CIPHER_ALG_AES_256;
} else {
goto err;
}
} else {
goto err;
}
return algo;
err:
error_setg(errp, "Unsupported key length :%u", key_len);
return -1;
}
static int cryptodev_builtin_create_cipher_session(
CryptoDevBackendBuiltin *builtin,
CryptoDevBackendSymSessionInfo *sess_info,
Error **errp)
{
int algo;
int mode;
QCryptoCipher *cipher;
int index;
CryptoDevBackendBuiltinSession *sess;
if (sess_info->op_type != VIRTIO_CRYPTO_SYM_OP_CIPHER) {
error_setg(errp, "Unsupported optype :%u", sess_info->op_type);
return -1;
}
index = cryptodev_builtin_get_unused_session_index(builtin);
if (index < 0) {
error_setg(errp, "Total number of sessions created exceeds %u",
MAX_NUM_SESSIONS);
return -1;
}
switch (sess_info->cipher_alg) {
case VIRTIO_CRYPTO_CIPHER_AES_ECB:
mode = QCRYPTO_CIPHER_MODE_ECB;
algo = cryptodev_builtin_get_aes_algo(sess_info->key_len,
mode, errp);
if (algo < 0) {
return -1;
}
break;
case VIRTIO_CRYPTO_CIPHER_AES_CBC:
mode = QCRYPTO_CIPHER_MODE_CBC;
algo = cryptodev_builtin_get_aes_algo(sess_info->key_len,
mode, errp);
if (algo < 0) {
return -1;
}
break;
case VIRTIO_CRYPTO_CIPHER_AES_CTR:
mode = QCRYPTO_CIPHER_MODE_CTR;
algo = cryptodev_builtin_get_aes_algo(sess_info->key_len,
mode, errp);
if (algo < 0) {
return -1;
}
break;
case VIRTIO_CRYPTO_CIPHER_AES_XTS:
mode = QCRYPTO_CIPHER_MODE_XTS;
algo = cryptodev_builtin_get_aes_algo(sess_info->key_len,
mode, errp);
if (algo < 0) {
return -1;
}
break;
case VIRTIO_CRYPTO_CIPHER_3DES_ECB:
mode = QCRYPTO_CIPHER_MODE_ECB;
algo = QCRYPTO_CIPHER_ALG_3DES;
break;
case VIRTIO_CRYPTO_CIPHER_3DES_CBC:
mode = QCRYPTO_CIPHER_MODE_CBC;
algo = QCRYPTO_CIPHER_ALG_3DES;
break;
case VIRTIO_CRYPTO_CIPHER_3DES_CTR:
mode = QCRYPTO_CIPHER_MODE_CTR;
algo = QCRYPTO_CIPHER_ALG_3DES;
break;
default:
error_setg(errp, "Unsupported cipher alg :%u",
sess_info->cipher_alg);
return -1;
}
cipher = qcrypto_cipher_new(algo, mode,
sess_info->cipher_key,
sess_info->key_len,
errp);
if (!cipher) {
return -1;
}
sess = g_new0(CryptoDevBackendBuiltinSession, 1);
sess->cipher = cipher;
sess->direction = sess_info->direction;
sess->type = sess_info->op_type;
builtin->sessions[index] = sess;
return index;
}
static int64_t cryptodev_builtin_sym_create_session(
CryptoDevBackend *backend,
CryptoDevBackendSymSessionInfo *sess_info,
uint32_t queue_index, Error **errp)
{
CryptoDevBackendBuiltin *builtin =
CRYPTODEV_BACKEND_BUILTIN(backend);
int64_t session_id = -1;
int ret;
switch (sess_info->op_code) {
case VIRTIO_CRYPTO_CIPHER_CREATE_SESSION:
ret = cryptodev_builtin_create_cipher_session(
builtin, sess_info, errp);
if (ret < 0) {
return ret;
} else {
session_id = ret;
}
break;
case VIRTIO_CRYPTO_HASH_CREATE_SESSION:
case VIRTIO_CRYPTO_MAC_CREATE_SESSION:
default:
error_setg(errp, "Unsupported opcode :%" PRIu32 "",
sess_info->op_code);
return -1;
}
return session_id;
}
static int cryptodev_builtin_sym_close_session(
CryptoDevBackend *backend,
uint64_t session_id,
uint32_t queue_index, Error **errp)
{
CryptoDevBackendBuiltin *builtin =
CRYPTODEV_BACKEND_BUILTIN(backend);
if (session_id >= MAX_NUM_SESSIONS ||
builtin->sessions[session_id] == NULL) {
error_setg(errp, "Cannot find a valid session id: %" PRIu64 "",
session_id);
return -1;
}
qcrypto_cipher_free(builtin->sessions[session_id]->cipher);
g_free(builtin->sessions[session_id]);
builtin->sessions[session_id] = NULL;
return 0;
}
static int cryptodev_builtin_sym_operation(
CryptoDevBackend *backend,
CryptoDevBackendSymOpInfo *op_info,
uint32_t queue_index, Error **errp)
{
CryptoDevBackendBuiltin *builtin =
CRYPTODEV_BACKEND_BUILTIN(backend);
CryptoDevBackendBuiltinSession *sess;
int ret;
if (op_info->session_id >= MAX_NUM_SESSIONS ||
builtin->sessions[op_info->session_id] == NULL) {
error_setg(errp, "Cannot find a valid session id: %" PRIu64 "",
op_info->session_id);
return -VIRTIO_CRYPTO_INVSESS;
}
if (op_info->op_type == VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING) {
error_setg(errp,
"Algorithm chain is unsupported for cryptdoev-builtin");
return -VIRTIO_CRYPTO_NOTSUPP;
}
sess = builtin->sessions[op_info->session_id];
if (op_info->iv_len > 0) {
ret = qcrypto_cipher_setiv(sess->cipher, op_info->iv,
op_info->iv_len, errp);
if (ret < 0) {
return -VIRTIO_CRYPTO_ERR;
}
}
if (sess->direction == VIRTIO_CRYPTO_OP_ENCRYPT) {
ret = qcrypto_cipher_encrypt(sess->cipher, op_info->src,
op_info->dst, op_info->src_len, errp);
if (ret < 0) {
return -VIRTIO_CRYPTO_ERR;
}
} else {
ret = qcrypto_cipher_decrypt(sess->cipher, op_info->src,
op_info->dst, op_info->src_len, errp);
if (ret < 0) {
return -VIRTIO_CRYPTO_ERR;
}
}
return VIRTIO_CRYPTO_OK;
}
static void cryptodev_builtin_cleanup(
CryptoDevBackend *backend,
Error **errp)
{
CryptoDevBackendBuiltin *builtin =
CRYPTODEV_BACKEND_BUILTIN(backend);
size_t i;
int queues = backend->conf.peers.queues;
CryptoDevBackendClient *cc;
for (i = 0; i < MAX_NUM_SESSIONS; i++) {
if (builtin->sessions[i] != NULL) {
cryptodev_builtin_sym_close_session(
backend, i, 0, errp);
}
}
for (i = 0; i < queues; i++) {
cc = backend->conf.peers.ccs[i];
if (cc) {
cryptodev_backend_free_client(cc);
backend->conf.peers.ccs[i] = NULL;
}
}
cryptodev_backend_set_ready(backend, false);
}
static void
cryptodev_builtin_class_init(ObjectClass *oc, void *data)
{
CryptoDevBackendClass *bc = CRYPTODEV_BACKEND_CLASS(oc);
bc->init = cryptodev_builtin_init;
bc->cleanup = cryptodev_builtin_cleanup;
bc->create_session = cryptodev_builtin_sym_create_session;
bc->close_session = cryptodev_builtin_sym_close_session;
bc->do_sym_op = cryptodev_builtin_sym_operation;
}
static const TypeInfo cryptodev_builtin_info = {
.name = TYPE_CRYPTODEV_BACKEND_BUILTIN,
.parent = TYPE_CRYPTODEV_BACKEND,
.class_init = cryptodev_builtin_class_init,
.instance_size = sizeof(CryptoDevBackendBuiltin),
};
static void
cryptodev_builtin_register_types(void)
{
type_register_static(&cryptodev_builtin_info);
}
type_init(cryptodev_builtin_register_types);
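
One detail of cryptodev_builtin_get_aes_algo() above that is easy to miss: in XTS mode the supplied key is really two keys concatenated, so a 32-byte XTS key selects AES-128 and a 64-byte XTS key selects AES-256. The standalone sketch below reproduces that mapping; the local enums and helper name are stand-ins for illustration, not QEMU's identifiers.
#include <assert.h>
#include <stdio.h>
/* Local stand-ins for the QEMU enums used above. */
enum { ALG_AES_128, ALG_AES_192, ALG_AES_256, ALG_INVALID = -1 };
enum { MODE_ECB, MODE_CBC, MODE_CTR, MODE_XTS };
/* Same decision tree as cryptodev_builtin_get_aes_algo(): an XTS key is
 * two keys back to back, so its length is effectively halved. */
static int aes_algo_for_key(unsigned key_len, int mode)
{
    if (key_len == 16) return ALG_AES_128;
    if (key_len == 24) return ALG_AES_192;
    if (key_len == 32) return mode == MODE_XTS ? ALG_AES_128 : ALG_AES_256;
    if (key_len == 64 && mode == MODE_XTS) return ALG_AES_256;
    return ALG_INVALID;
}
int main(void)
{
    assert(aes_algo_for_key(32, MODE_CBC) == ALG_AES_256);
    assert(aes_algo_for_key(32, MODE_XTS) == ALG_AES_128); /* 2 x 16-byte keys */
    assert(aes_algo_for_key(64, MODE_XTS) == ALG_AES_256); /* 2 x 32-byte keys */
    assert(aes_algo_for_key(64, MODE_CBC) == ALG_INVALID);
    printf("key-length mapping checks passed\n");
    return 0;
}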

@@ -1,377 +0,0 @@
/*
* QEMU Cryptodev backend for QEMU cipher APIs
*
* Copyright (c) 2016 HUAWEI TECHNOLOGIES CO., LTD.
*
* Authors:
* Gonglei <arei.gonglei@huawei.com>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "qemu/osdep.h"
#include "hw/boards.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "qemu/error-report.h"
#include "standard-headers/linux/virtio_crypto.h"
#include "sysemu/cryptodev-vhost.h"
#include "chardev/char-fe.h"
#include "sysemu/cryptodev-vhost-user.h"
/**
* @TYPE_CRYPTODEV_BACKEND_VHOST_USER:
* name of backend that uses vhost user server
*/
#define TYPE_CRYPTODEV_BACKEND_VHOST_USER "cryptodev-vhost-user"
#define CRYPTODEV_BACKEND_VHOST_USER(obj) \
OBJECT_CHECK(CryptoDevBackendVhostUser, \
(obj), TYPE_CRYPTODEV_BACKEND_VHOST_USER)
typedef struct CryptoDevBackendVhostUser {
CryptoDevBackend parent_obj;
CharBackend chr;
char *chr_name;
bool opened;
CryptoDevBackendVhost *vhost_crypto[MAX_CRYPTO_QUEUE_NUM];
} CryptoDevBackendVhostUser;
static int
cryptodev_vhost_user_running(
CryptoDevBackendVhost *crypto)
{
return crypto ? 1 : 0;
}
CryptoDevBackendVhost *
cryptodev_vhost_user_get_vhost(
CryptoDevBackendClient *cc,
CryptoDevBackend *b,
uint16_t queue)
{
CryptoDevBackendVhostUser *s =
CRYPTODEV_BACKEND_VHOST_USER(b);
assert(cc->type == CRYPTODEV_BACKEND_TYPE_VHOST_USER);
assert(queue < MAX_CRYPTO_QUEUE_NUM);
return s->vhost_crypto[queue];
}
static void cryptodev_vhost_user_stop(int queues,
CryptoDevBackendVhostUser *s)
{
size_t i;
for (i = 0; i < queues; i++) {
if (!cryptodev_vhost_user_running(s->vhost_crypto[i])) {
continue;
}
cryptodev_vhost_cleanup(s->vhost_crypto[i]);
s->vhost_crypto[i] = NULL;
}
}
static int
cryptodev_vhost_user_start(int queues,
CryptoDevBackendVhostUser *s)
{
CryptoDevBackendVhostOptions options;
CryptoDevBackend *b = CRYPTODEV_BACKEND(s);
int max_queues;
size_t i;
for (i = 0; i < queues; i++) {
if (cryptodev_vhost_user_running(s->vhost_crypto[i])) {
continue;
}
options.opaque = &s->chr;
options.backend_type = VHOST_BACKEND_TYPE_USER;
options.cc = b->conf.peers.ccs[i];
s->vhost_crypto[i] = cryptodev_vhost_init(&options);
if (!s->vhost_crypto[i]) {
error_report("failed to init vhost_crypto for queue %zu", i);
goto err;
}
if (i == 0) {
max_queues =
cryptodev_vhost_get_max_queues(s->vhost_crypto[i]);
if (queues > max_queues) {
error_report("you are asking more queues than supported: %d",
max_queues);
goto err;
}
}
}
return 0;
err:
cryptodev_vhost_user_stop(i + 1, s);
return -1;
}
static Chardev *
cryptodev_vhost_claim_chardev(CryptoDevBackendVhostUser *s,
Error **errp)
{
Chardev *chr;
if (s->chr_name == NULL) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
"chardev", "a valid character device");
return NULL;
}
chr = qemu_chr_find(s->chr_name);
if (chr == NULL) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", s->chr_name);
return NULL;
}
return chr;
}
static void cryptodev_vhost_user_event(void *opaque, int event)
{
CryptoDevBackendVhostUser *s = opaque;
CryptoDevBackend *b = CRYPTODEV_BACKEND(s);
Error *err = NULL;
int queues = b->conf.peers.queues;
assert(queues < MAX_CRYPTO_QUEUE_NUM);
switch (event) {
case CHR_EVENT_OPENED:
if (cryptodev_vhost_user_start(queues, s) < 0) {
exit(1);
}
b->ready = true;
break;
case CHR_EVENT_CLOSED:
b->ready = false;
cryptodev_vhost_user_stop(queues, s);
break;
}
if (err) {
error_report_err(err);
}
}
static void cryptodev_vhost_user_init(
CryptoDevBackend *backend, Error **errp)
{
int queues = backend->conf.peers.queues;
size_t i;
Error *local_err = NULL;
Chardev *chr;
CryptoDevBackendClient *cc;
CryptoDevBackendVhostUser *s =
CRYPTODEV_BACKEND_VHOST_USER(backend);
chr = cryptodev_vhost_claim_chardev(s, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
s->opened = true;
for (i = 0; i < queues; i++) {
cc = cryptodev_backend_new_client(
"cryptodev-vhost-user", NULL);
cc->info_str = g_strdup_printf("cryptodev-vhost-user%zu to %s ",
i, chr->label);
cc->queue_index = i;
cc->type = CRYPTODEV_BACKEND_TYPE_VHOST_USER;
backend->conf.peers.ccs[i] = cc;
if (i == 0) {
if (!qemu_chr_fe_init(&s->chr, chr, &local_err)) {
error_propagate(errp, local_err);
return;
}
}
}
qemu_chr_fe_set_handlers(&s->chr, NULL, NULL,
cryptodev_vhost_user_event, NULL, s, NULL, true);
backend->conf.crypto_services =
1u << VIRTIO_CRYPTO_SERVICE_CIPHER |
1u << VIRTIO_CRYPTO_SERVICE_HASH |
1u << VIRTIO_CRYPTO_SERVICE_MAC;
backend->conf.cipher_algo_l = 1u << VIRTIO_CRYPTO_CIPHER_AES_CBC;
backend->conf.hash_algo = 1u << VIRTIO_CRYPTO_HASH_SHA1;
backend->conf.max_size = UINT64_MAX;
backend->conf.max_cipher_key_len = VHOST_USER_MAX_CIPHER_KEY_LEN;
backend->conf.max_auth_key_len = VHOST_USER_MAX_AUTH_KEY_LEN;
}
static int64_t cryptodev_vhost_user_sym_create_session(
CryptoDevBackend *backend,
CryptoDevBackendSymSessionInfo *sess_info,
uint32_t queue_index, Error **errp)
{
CryptoDevBackendClient *cc =
backend->conf.peers.ccs[queue_index];
CryptoDevBackendVhost *vhost_crypto;
uint64_t session_id = 0;
int ret;
vhost_crypto = cryptodev_vhost_user_get_vhost(cc, backend, queue_index);
if (vhost_crypto) {
struct vhost_dev *dev = &(vhost_crypto->dev);
ret = dev->vhost_ops->vhost_crypto_create_session(dev,
sess_info,
&session_id);
if (ret < 0) {
return -1;
} else {
return session_id;
}
}
return -1;
}
static int cryptodev_vhost_user_sym_close_session(
CryptoDevBackend *backend,
uint64_t session_id,
uint32_t queue_index, Error **errp)
{
CryptoDevBackendClient *cc =
backend->conf.peers.ccs[queue_index];
CryptoDevBackendVhost *vhost_crypto;
int ret;
vhost_crypto = cryptodev_vhost_user_get_vhost(cc, backend, queue_index);
if (vhost_crypto) {
struct vhost_dev *dev = &(vhost_crypto->dev);
ret = dev->vhost_ops->vhost_crypto_close_session(dev,
session_id);
if (ret < 0) {
return -1;
} else {
return 0;
}
}
return -1;
}
static void cryptodev_vhost_user_cleanup(
CryptoDevBackend *backend,
Error **errp)
{
CryptoDevBackendVhostUser *s =
CRYPTODEV_BACKEND_VHOST_USER(backend);
size_t i;
int queues = backend->conf.peers.queues;
CryptoDevBackendClient *cc;
cryptodev_vhost_user_stop(queues, s);
for (i = 0; i < queues; i++) {
cc = backend->conf.peers.ccs[i];
if (cc) {
cryptodev_backend_free_client(cc);
backend->conf.peers.ccs[i] = NULL;
}
}
}
static void cryptodev_vhost_user_set_chardev(Object *obj,
const char *value, Error **errp)
{
CryptoDevBackendVhostUser *s =
CRYPTODEV_BACKEND_VHOST_USER(obj);
if (s->opened) {
error_setg(errp, QERR_PERMISSION_DENIED);
} else {
g_free(s->chr_name);
s->chr_name = g_strdup(value);
}
}
static char *
cryptodev_vhost_user_get_chardev(Object *obj, Error **errp)
{
CryptoDevBackendVhostUser *s =
CRYPTODEV_BACKEND_VHOST_USER(obj);
Chardev *chr = qemu_chr_fe_get_driver(&s->chr);
if (chr && chr->label) {
return g_strdup(chr->label);
}
return NULL;
}
static void cryptodev_vhost_user_instance_int(Object *obj)
{
object_property_add_str(obj, "chardev",
cryptodev_vhost_user_get_chardev,
cryptodev_vhost_user_set_chardev,
NULL);
}
static void cryptodev_vhost_user_finalize(Object *obj)
{
CryptoDevBackendVhostUser *s =
CRYPTODEV_BACKEND_VHOST_USER(obj);
qemu_chr_fe_deinit(&s->chr, false);
g_free(s->chr_name);
}
static void
cryptodev_vhost_user_class_init(ObjectClass *oc, void *data)
{
CryptoDevBackendClass *bc = CRYPTODEV_BACKEND_CLASS(oc);
bc->init = cryptodev_vhost_user_init;
bc->cleanup = cryptodev_vhost_user_cleanup;
bc->create_session = cryptodev_vhost_user_sym_create_session;
bc->close_session = cryptodev_vhost_user_sym_close_session;
bc->do_sym_op = NULL;
}
static const TypeInfo cryptodev_vhost_user_info = {
.name = TYPE_CRYPTODEV_BACKEND_VHOST_USER,
.parent = TYPE_CRYPTODEV_BACKEND,
.class_init = cryptodev_vhost_user_class_init,
.instance_init = cryptodev_vhost_user_instance_int,
.instance_finalize = cryptodev_vhost_user_finalize,
.instance_size = sizeof(CryptoDevBackendVhostUser),
};
static void
cryptodev_vhost_user_register_types(void)
{
type_register_static(&cryptodev_vhost_user_info);
}
type_init(cryptodev_vhost_user_register_types);
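
The error path in cryptodev_vhost_user_start() above relies on cryptodev_vhost_user_stop() skipping queues that never came up, which is why calling it with i + 1 after a failure at index i is safe. The generic standalone sketch below illustrates that partial-initialisation rollback idiom under the same assumption (the failed slot stays NULL and the teardown loop skips NULL entries); the resource type and helper names are made up for illustration.
#include <stdio.h>
#include <stdlib.h>
#define NUM_SLOTS 4
/* Made-up helpers standing in for per-queue init and cleanup. */
static int *slot_init(int i)
{
    if (i == 2) {                 /* simulate a failure on the third slot */
        return NULL;
    }
    int *r = malloc(sizeof(*r));
    *r = i;
    return r;
}
static void slot_cleanup(int *r)
{
    free(r);
}
int main(void)
{
    int *slots[NUM_SLOTS] = { NULL };
    int i;
    for (i = 0; i < NUM_SLOTS; i++) {
        slots[i] = slot_init(i);
        if (!slots[i]) {
            goto err;
        }
    }
    return 0;
err:
    /* Tear down slots 0..i; the failed slot is NULL and is skipped,
     * mirroring cryptodev_vhost_user_stop(i + 1, s) above. */
    for (int j = 0; j <= i; j++) {
        if (slots[j]) {
            slot_cleanup(slots[j]);
            slots[j] = NULL;
        }
    }
    fprintf(stderr, "initialisation failed at slot %d, rolled back\n", i);
    return 1;
}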

@@ -1,347 +0,0 @@
/*
* QEMU Cryptodev backend for QEMU cipher APIs
*
* Copyright (c) 2016 HUAWEI TECHNOLOGIES CO., LTD.
*
* Authors:
* Gonglei <arei.gonglei@huawei.com>
* Jay Zhou <jianjay.zhou@huawei.com>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "qemu/osdep.h"
#include "hw/virtio/virtio-bus.h"
#include "sysemu/cryptodev-vhost.h"
#ifdef CONFIG_VHOST_CRYPTO
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "qemu/error-report.h"
#include "hw/virtio/virtio-crypto.h"
#include "sysemu/cryptodev-vhost-user.h"
uint64_t
cryptodev_vhost_get_max_queues(
CryptoDevBackendVhost *crypto)
{
return crypto->dev.max_queues;
}
void cryptodev_vhost_cleanup(CryptoDevBackendVhost *crypto)
{
vhost_dev_cleanup(&crypto->dev);
g_free(crypto);
}
struct CryptoDevBackendVhost *
cryptodev_vhost_init(
CryptoDevBackendVhostOptions *options)
{
int r;
CryptoDevBackendVhost *crypto;
crypto = g_new(CryptoDevBackendVhost, 1);
crypto->dev.max_queues = 1;
crypto->dev.nvqs = 1;
crypto->dev.vqs = crypto->vqs;
crypto->cc = options->cc;
crypto->dev.protocol_features = 0;
crypto->backend = -1;
/* vhost-user needs vq_index to initiate a specific queue pair */
crypto->dev.vq_index = crypto->cc->queue_index * crypto->dev.nvqs;
r = vhost_dev_init(&crypto->dev, options->opaque, options->backend_type, 0);
if (r < 0) {
goto fail;
}
return crypto;
fail:
g_free(crypto);
return NULL;
}
static int
cryptodev_vhost_start_one(CryptoDevBackendVhost *crypto,
VirtIODevice *dev)
{
int r;
crypto->dev.nvqs = 1;
crypto->dev.vqs = crypto->vqs;
r = vhost_dev_enable_notifiers(&crypto->dev, dev);
if (r < 0) {
goto fail_notifiers;
}
r = vhost_dev_start(&crypto->dev, dev);
if (r < 0) {
goto fail_start;
}
return 0;
fail_start:
vhost_dev_disable_notifiers(&crypto->dev, dev);
fail_notifiers:
return r;
}
static void
cryptodev_vhost_stop_one(CryptoDevBackendVhost *crypto,
VirtIODevice *dev)
{
vhost_dev_stop(&crypto->dev, dev);
vhost_dev_disable_notifiers(&crypto->dev, dev);
}
CryptoDevBackendVhost *
cryptodev_get_vhost(CryptoDevBackendClient *cc,
CryptoDevBackend *b,
uint16_t queue)
{
CryptoDevBackendVhost *vhost_crypto = NULL;
if (!cc) {
return NULL;
}
switch (cc->type) {
#if defined(CONFIG_VHOST_USER) && defined(CONFIG_LINUX)
case CRYPTODEV_BACKEND_TYPE_VHOST_USER:
vhost_crypto = cryptodev_vhost_user_get_vhost(cc, b, queue);
break;
#endif
default:
break;
}
return vhost_crypto;
}
static void
cryptodev_vhost_set_vq_index(CryptoDevBackendVhost *crypto,
int vq_index)
{
crypto->dev.vq_index = vq_index;
}
static int
vhost_set_vring_enable(CryptoDevBackendClient *cc,
CryptoDevBackend *b,
uint16_t queue, int enable)
{
CryptoDevBackendVhost *crypto =
cryptodev_get_vhost(cc, b, queue);
const VhostOps *vhost_ops;
cc->vring_enable = enable;
if (!crypto) {
return 0;
}
vhost_ops = crypto->dev.vhost_ops;
if (vhost_ops->vhost_set_vring_enable) {
return vhost_ops->vhost_set_vring_enable(&crypto->dev, enable);
}
return 0;
}
int cryptodev_vhost_start(VirtIODevice *dev, int total_queues)
{
VirtIOCrypto *vcrypto = VIRTIO_CRYPTO(dev);
BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
VirtioBusState *vbus = VIRTIO_BUS(qbus);
VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
int r, e;
int i;
CryptoDevBackend *b = vcrypto->cryptodev;
CryptoDevBackendVhost *vhost_crypto;
CryptoDevBackendClient *cc;
if (!k->set_guest_notifiers) {
error_report("binding does not support guest notifiers");
return -ENOSYS;
}
for (i = 0; i < total_queues; i++) {
cc = b->conf.peers.ccs[i];
vhost_crypto = cryptodev_get_vhost(cc, b, i);
cryptodev_vhost_set_vq_index(vhost_crypto, i);
/* Suppress guest notifier masking on vhost-user because
* vhost-user does not handle interrupt masking and unmasking
* properly.
*/
if (cc->type == CRYPTODEV_BACKEND_TYPE_VHOST_USER) {
dev->use_guest_notifier_mask = false;
}
}
r = k->set_guest_notifiers(qbus->parent, total_queues, true);
if (r < 0) {
error_report("error binding guest notifier: %d", -r);
goto err;
}
for (i = 0; i < total_queues; i++) {
cc = b->conf.peers.ccs[i];
vhost_crypto = cryptodev_get_vhost(cc, b, i);
r = cryptodev_vhost_start_one(vhost_crypto, dev);
if (r < 0) {
goto err_start;
}
if (cc->vring_enable) {
/* restore vring enable state */
r = vhost_set_vring_enable(cc, b, i, cc->vring_enable);
if (r < 0) {
goto err_start;
}
}
}
return 0;
err_start:
while (--i >= 0) {
cc = b->conf.peers.ccs[i];
vhost_crypto = cryptodev_get_vhost(cc, b, i);
cryptodev_vhost_stop_one(vhost_crypto, dev);
}
e = k->set_guest_notifiers(qbus->parent, total_queues, false);
if (e < 0) {
error_report("vhost guest notifier cleanup failed: %d", e);
}
err:
return r;
}
void cryptodev_vhost_stop(VirtIODevice *dev, int total_queues)
{
BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
VirtioBusState *vbus = VIRTIO_BUS(qbus);
VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
VirtIOCrypto *vcrypto = VIRTIO_CRYPTO(dev);
CryptoDevBackend *b = vcrypto->cryptodev;
CryptoDevBackendVhost *vhost_crypto;
CryptoDevBackendClient *cc;
size_t i;
int r;
for (i = 0; i < total_queues; i++) {
cc = b->conf.peers.ccs[i];
vhost_crypto = cryptodev_get_vhost(cc, b, i);
cryptodev_vhost_stop_one(vhost_crypto, dev);
}
r = k->set_guest_notifiers(qbus->parent, total_queues, false);
if (r < 0) {
error_report("vhost guest notifier cleanup failed: %d", r);
}
assert(r >= 0);
}
void cryptodev_vhost_virtqueue_mask(VirtIODevice *dev,
int queue,
int idx, bool mask)
{
VirtIOCrypto *vcrypto = VIRTIO_CRYPTO(dev);
CryptoDevBackend *b = vcrypto->cryptodev;
CryptoDevBackendVhost *vhost_crypto;
CryptoDevBackendClient *cc;
assert(queue < MAX_CRYPTO_QUEUE_NUM);
cc = b->conf.peers.ccs[queue];
vhost_crypto = cryptodev_get_vhost(cc, b, queue);
vhost_virtqueue_mask(&vhost_crypto->dev, dev, idx, mask);
}
bool cryptodev_vhost_virtqueue_pending(VirtIODevice *dev,
int queue, int idx)
{
VirtIOCrypto *vcrypto = VIRTIO_CRYPTO(dev);
CryptoDevBackend *b = vcrypto->cryptodev;
CryptoDevBackendVhost *vhost_crypto;
CryptoDevBackendClient *cc;
assert(queue < MAX_CRYPTO_QUEUE_NUM);
cc = b->conf.peers.ccs[queue];
vhost_crypto = cryptodev_get_vhost(cc, b, queue);
return vhost_virtqueue_pending(&vhost_crypto->dev, idx);
}
#else
uint64_t
cryptodev_vhost_get_max_queues(CryptoDevBackendVhost *crypto)
{
return 0;
}
void cryptodev_vhost_cleanup(CryptoDevBackendVhost *crypto)
{
}
struct CryptoDevBackendVhost *
cryptodev_vhost_init(CryptoDevBackendVhostOptions *options)
{
return NULL;
}
CryptoDevBackendVhost *
cryptodev_get_vhost(CryptoDevBackendClient *cc,
CryptoDevBackend *b,
uint16_t queue)
{
return NULL;
}
int cryptodev_vhost_start(VirtIODevice *dev, int total_queues)
{
return -1;
}
void cryptodev_vhost_stop(VirtIODevice *dev, int total_queues)
{
}
void cryptodev_vhost_virtqueue_mask(VirtIODevice *dev,
int queue,
int idx, bool mask)
{
}
bool cryptodev_vhost_virtqueue_pending(VirtIODevice *dev,
int queue, int idx)
{
return false;
}
#endif

@@ -1,269 +0,0 @@
/*
* QEMU Crypto Device Implementation
*
* Copyright (c) 2016 HUAWEI TECHNOLOGIES CO., LTD.
*
* Authors:
* Gonglei <arei.gonglei@huawei.com>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "qemu/osdep.h"
#include "sysemu/cryptodev.h"
#include "hw/boards.h"
#include "qapi/error.h"
#include "qapi/visitor.h"
#include "qemu/config-file.h"
#include "qom/object_interfaces.h"
#include "hw/virtio/virtio-crypto.h"
static QTAILQ_HEAD(, CryptoDevBackendClient) crypto_clients;
CryptoDevBackendClient *
cryptodev_backend_new_client(const char *model,
const char *name)
{
CryptoDevBackendClient *cc;
cc = g_malloc0(sizeof(CryptoDevBackendClient));
cc->model = g_strdup(model);
if (name) {
cc->name = g_strdup(name);
}
QTAILQ_INSERT_TAIL(&crypto_clients, cc, next);
return cc;
}
void cryptodev_backend_free_client(
CryptoDevBackendClient *cc)
{
QTAILQ_REMOVE(&crypto_clients, cc, next);
g_free(cc->name);
g_free(cc->model);
g_free(cc->info_str);
g_free(cc);
}
void cryptodev_backend_cleanup(
CryptoDevBackend *backend,
Error **errp)
{
CryptoDevBackendClass *bc =
CRYPTODEV_BACKEND_GET_CLASS(backend);
if (bc->cleanup) {
bc->cleanup(backend, errp);
}
}
int64_t cryptodev_backend_sym_create_session(
CryptoDevBackend *backend,
CryptoDevBackendSymSessionInfo *sess_info,
uint32_t queue_index, Error **errp)
{
CryptoDevBackendClass *bc =
CRYPTODEV_BACKEND_GET_CLASS(backend);
if (bc->create_session) {
return bc->create_session(backend, sess_info, queue_index, errp);
}
return -1;
}
int cryptodev_backend_sym_close_session(
CryptoDevBackend *backend,
uint64_t session_id,
uint32_t queue_index, Error **errp)
{
CryptoDevBackendClass *bc =
CRYPTODEV_BACKEND_GET_CLASS(backend);
if (bc->close_session) {
return bc->close_session(backend, session_id, queue_index, errp);
}
return -1;
}
static int cryptodev_backend_sym_operation(
CryptoDevBackend *backend,
CryptoDevBackendSymOpInfo *op_info,
uint32_t queue_index, Error **errp)
{
CryptoDevBackendClass *bc =
CRYPTODEV_BACKEND_GET_CLASS(backend);
if (bc->do_sym_op) {
return bc->do_sym_op(backend, op_info, queue_index, errp);
}
return -VIRTIO_CRYPTO_ERR;
}
int cryptodev_backend_crypto_operation(
CryptoDevBackend *backend,
void *opaque,
uint32_t queue_index, Error **errp)
{
VirtIOCryptoReq *req = opaque;
if (req->flags == CRYPTODEV_BACKEND_ALG_SYM) {
CryptoDevBackendSymOpInfo *op_info;
op_info = req->u.sym_op_info;
return cryptodev_backend_sym_operation(backend,
op_info, queue_index, errp);
} else {
error_setg(errp, "Unsupported cryptodev alg type: %" PRIu32 "",
req->flags);
return -VIRTIO_CRYPTO_NOTSUPP;
}
return -VIRTIO_CRYPTO_ERR;
}
static void
cryptodev_backend_get_queues(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
CryptoDevBackend *backend = CRYPTODEV_BACKEND(obj);
uint32_t value = backend->conf.peers.queues;
visit_type_uint32(v, name, &value, errp);
}
static void
cryptodev_backend_set_queues(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
CryptoDevBackend *backend = CRYPTODEV_BACKEND(obj);
Error *local_err = NULL;
uint32_t value;
visit_type_uint32(v, name, &value, &local_err);
if (local_err) {
goto out;
}
if (!value) {
error_setg(&local_err, "Property '%s.%s' doesn't take value '%"
PRIu32 "'", object_get_typename(obj), name, value);
goto out;
}
backend->conf.peers.queues = value;
out:
error_propagate(errp, local_err);
}
static void
cryptodev_backend_complete(UserCreatable *uc, Error **errp)
{
CryptoDevBackend *backend = CRYPTODEV_BACKEND(uc);
CryptoDevBackendClass *bc = CRYPTODEV_BACKEND_GET_CLASS(uc);
Error *local_err = NULL;
if (bc->init) {
bc->init(backend, &local_err);
if (local_err) {
goto out;
}
}
return;
out:
error_propagate(errp, local_err);
}
void cryptodev_backend_set_used(CryptoDevBackend *backend, bool used)
{
backend->is_used = used;
}
bool cryptodev_backend_is_used(CryptoDevBackend *backend)
{
return backend->is_used;
}
void cryptodev_backend_set_ready(CryptoDevBackend *backend, bool ready)
{
backend->ready = ready;
}
bool cryptodev_backend_is_ready(CryptoDevBackend *backend)
{
return backend->ready;
}
static bool
cryptodev_backend_can_be_deleted(UserCreatable *uc)
{
return !cryptodev_backend_is_used(CRYPTODEV_BACKEND(uc));
}
static void cryptodev_backend_instance_init(Object *obj)
{
object_property_add(obj, "queues", "uint32",
cryptodev_backend_get_queues,
cryptodev_backend_set_queues,
NULL, NULL, NULL);
/* Initialize devices' queues property to 1 */
object_property_set_int(obj, 1, "queues", NULL);
}
static void cryptodev_backend_finalize(Object *obj)
{
CryptoDevBackend *backend = CRYPTODEV_BACKEND(obj);
cryptodev_backend_cleanup(backend, NULL);
}
static void
cryptodev_backend_class_init(ObjectClass *oc, void *data)
{
UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
ucc->complete = cryptodev_backend_complete;
ucc->can_be_deleted = cryptodev_backend_can_be_deleted;
QTAILQ_INIT(&crypto_clients);
}
static const TypeInfo cryptodev_backend_info = {
.name = TYPE_CRYPTODEV_BACKEND,
.parent = TYPE_OBJECT,
.instance_size = sizeof(CryptoDevBackend),
.instance_init = cryptodev_backend_instance_init,
.instance_finalize = cryptodev_backend_finalize,
.class_size = sizeof(CryptoDevBackendClass),
.class_init = cryptodev_backend_class_init,
.interfaces = (InterfaceInfo[]) {
{ TYPE_USER_CREATABLE },
{ }
}
};
static void
cryptodev_backend_register_types(void)
{
type_register_static(&cryptodev_backend_info);
}
type_init(cryptodev_backend_register_types);

@@ -9,8 +9,6 @@
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "sysemu/hostmem.h"
#include "sysemu/sysemu.h"
@@ -31,9 +29,8 @@ typedef struct HostMemoryBackendFile HostMemoryBackendFile;
struct HostMemoryBackendFile {
HostMemoryBackend parent_obj;
bool discard_data;
bool share;
char *mem_path;
uint64_t align;
};
static void
@@ -46,25 +43,30 @@ file_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
return;
}
if (!fb->mem_path) {
error_setg(errp, "mem-path property not set");
error_setg(errp, "mem_path property not set");
return;
}
#ifndef CONFIG_LINUX
error_setg(errp, "-mem-path not supported on this host");
#else
if (!host_memory_backend_mr_inited(backend)) {
gchar *path;
if (!memory_region_size(&backend->mr)) {
backend->force_prealloc = mem_prealloc;
path = object_get_canonical_path(OBJECT(backend));
memory_region_init_ram_from_file(&backend->mr, OBJECT(backend),
path,
backend->size, fb->align, backend->share,
object_get_canonical_path(OBJECT(backend)),
backend->size, fb->share,
fb->mem_path, errp);
g_free(path);
}
#endif
}
static void
file_backend_class_init(ObjectClass *oc, void *data)
{
HostMemoryBackendClass *bc = MEMORY_BACKEND_CLASS(oc);
bc->alloc = file_backend_memory_alloc;
}
static char *get_mem_path(Object *o, Error **errp)
{
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
@@ -77,104 +79,50 @@ static void set_mem_path(Object *o, const char *str, Error **errp)
HostMemoryBackend *backend = MEMORY_BACKEND(o);
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
if (host_memory_backend_mr_inited(backend)) {
if (memory_region_size(&backend->mr)) {
error_setg(errp, "cannot change property value");
return;
}
g_free(fb->mem_path);
if (fb->mem_path) {
g_free(fb->mem_path);
}
fb->mem_path = g_strdup(str);
}
static bool file_memory_backend_get_discard_data(Object *o, Error **errp)
{
return MEMORY_BACKEND_FILE(o)->discard_data;
}
static void file_memory_backend_set_discard_data(Object *o, bool value,
Error **errp)
{
MEMORY_BACKEND_FILE(o)->discard_data = value;
}
static void file_memory_backend_get_align(Object *o, Visitor *v,
const char *name, void *opaque,
Error **errp)
static bool file_memory_backend_get_share(Object *o, Error **errp)
{
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
uint64_t val = fb->align;
visit_type_size(v, name, &val, errp);
return fb->share;
}
static void file_memory_backend_set_align(Object *o, Visitor *v,
const char *name, void *opaque,
Error **errp)
static void file_memory_backend_set_share(Object *o, bool value, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(o);
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
Error *local_err = NULL;
uint64_t val;
if (host_memory_backend_mr_inited(backend)) {
error_setg(&local_err, "cannot change property value");
goto out;
}
visit_type_size(v, name, &val, &local_err);
if (local_err) {
goto out;
}
fb->align = val;
out:
error_propagate(errp, local_err);
}
static void file_backend_unparent(Object *obj)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(obj);
if (host_memory_backend_mr_inited(backend) && fb->discard_data) {
void *ptr = memory_region_get_ram_ptr(&backend->mr);
uint64_t sz = memory_region_size(&backend->mr);
qemu_madvise(ptr, sz, QEMU_MADV_REMOVE);
if (memory_region_size(&backend->mr)) {
error_setg(errp, "cannot change property value");
return;
}
fb->share = value;
}
static void
file_backend_class_init(ObjectClass *oc, void *data)
file_backend_instance_init(Object *o)
{
HostMemoryBackendClass *bc = MEMORY_BACKEND_CLASS(oc);
bc->alloc = file_backend_memory_alloc;
oc->unparent = file_backend_unparent;
object_class_property_add_bool(oc, "discard-data",
file_memory_backend_get_discard_data, file_memory_backend_set_discard_data,
&error_abort);
object_class_property_add_str(oc, "mem-path",
get_mem_path, set_mem_path,
&error_abort);
object_class_property_add(oc, "align", "int",
file_memory_backend_get_align,
file_memory_backend_set_align,
NULL, NULL, &error_abort);
}
static void file_backend_instance_finalize(Object *o)
{
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
g_free(fb->mem_path);
object_property_add_bool(o, "share",
file_memory_backend_get_share,
file_memory_backend_set_share, NULL);
object_property_add_str(o, "mem-path", get_mem_path,
set_mem_path, NULL);
}
static const TypeInfo file_backend_info = {
.name = TYPE_MEMORY_BACKEND_FILE,
.parent = TYPE_MEMORY_BACKEND,
.class_init = file_backend_class_init,
.instance_finalize = file_backend_instance_finalize,
.instance_init = file_backend_instance_init,
.instance_size = sizeof(HostMemoryBackendFile),
};

@@ -1,170 +0,0 @@
/*
* QEMU host memfd memory backend
*
* Copyright (C) 2018 Red Hat Inc
*
* Authors:
* Marc-André Lureau <marcandre.lureau@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "sysemu/hostmem.h"
#include "sysemu/sysemu.h"
#include "qom/object_interfaces.h"
#include "qemu/memfd.h"
#include "qapi/error.h"
#define TYPE_MEMORY_BACKEND_MEMFD "memory-backend-memfd"
#define MEMORY_BACKEND_MEMFD(obj) \
OBJECT_CHECK(HostMemoryBackendMemfd, (obj), TYPE_MEMORY_BACKEND_MEMFD)
typedef struct HostMemoryBackendMemfd HostMemoryBackendMemfd;
struct HostMemoryBackendMemfd {
HostMemoryBackend parent_obj;
bool hugetlb;
uint64_t hugetlbsize;
bool seal;
};
static void
memfd_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
{
HostMemoryBackendMemfd *m = MEMORY_BACKEND_MEMFD(backend);
char *name;
int fd;
if (!backend->size) {
error_setg(errp, "can't create backend with size 0");
return;
}
if (host_memory_backend_mr_inited(backend)) {
return;
}
backend->force_prealloc = mem_prealloc;
fd = qemu_memfd_create(TYPE_MEMORY_BACKEND_MEMFD, backend->size,
m->hugetlb, m->hugetlbsize, m->seal ?
F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_SEAL : 0,
errp);
if (fd == -1) {
return;
}
name = object_get_canonical_path(OBJECT(backend));
memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend),
name, backend->size, true, fd, errp);
g_free(name);
}
static bool
memfd_backend_get_hugetlb(Object *o, Error **errp)
{
return MEMORY_BACKEND_MEMFD(o)->hugetlb;
}
static void
memfd_backend_set_hugetlb(Object *o, bool value, Error **errp)
{
MEMORY_BACKEND_MEMFD(o)->hugetlb = value;
}
static void
memfd_backend_set_hugetlbsize(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
HostMemoryBackendMemfd *m = MEMORY_BACKEND_MEMFD(obj);
Error *local_err = NULL;
uint64_t value;
if (host_memory_backend_mr_inited(MEMORY_BACKEND(obj))) {
error_setg(&local_err, "cannot change property value");
goto out;
}
visit_type_size(v, name, &value, &local_err);
if (local_err) {
goto out;
}
if (!value) {
error_setg(&local_err, "Property '%s.%s' doesn't take value '%"
PRIu64 "'", object_get_typename(obj), name, value);
goto out;
}
m->hugetlbsize = value;
out:
error_propagate(errp, local_err);
}
static void
memfd_backend_get_hugetlbsize(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
HostMemoryBackendMemfd *m = MEMORY_BACKEND_MEMFD(obj);
uint64_t value = m->hugetlbsize;
visit_type_size(v, name, &value, errp);
}
static bool
memfd_backend_get_seal(Object *o, Error **errp)
{
return MEMORY_BACKEND_MEMFD(o)->seal;
}
static void
memfd_backend_set_seal(Object *o, bool value, Error **errp)
{
MEMORY_BACKEND_MEMFD(o)->seal = value;
}
static void
memfd_backend_instance_init(Object *obj)
{
HostMemoryBackendMemfd *m = MEMORY_BACKEND_MEMFD(obj);
/* default to sealed file */
m->seal = true;
}
static void
memfd_backend_class_init(ObjectClass *oc, void *data)
{
HostMemoryBackendClass *bc = MEMORY_BACKEND_CLASS(oc);
bc->alloc = memfd_backend_memory_alloc;
object_class_property_add_bool(oc, "hugetlb",
memfd_backend_get_hugetlb,
memfd_backend_set_hugetlb,
&error_abort);
object_class_property_add(oc, "hugetlbsize", "int",
memfd_backend_get_hugetlbsize,
memfd_backend_set_hugetlbsize,
NULL, NULL, &error_abort);
object_class_property_add_bool(oc, "seal",
memfd_backend_get_seal,
memfd_backend_set_seal,
&error_abort);
}
static const TypeInfo memfd_backend_info = {
.name = TYPE_MEMORY_BACKEND_MEMFD,
.parent = TYPE_MEMORY_BACKEND,
.instance_init = memfd_backend_instance_init,
.class_init = memfd_backend_class_init,
.instance_size = sizeof(HostMemoryBackendMemfd),
};
static void register_types(void)
{
type_register_static(&memfd_backend_info);
}
type_init(register_types);
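
The memfd backend above ultimately rests on the Linux memfd_create() and file-sealing mechanism. The sketch below shows that underlying mechanism directly rather than QEMU's qemu_memfd_create() helper; it assumes Linux with a reasonably recent glibc, and the seal set matches what the backend's "seal" property requests (grow, shrink, and further sealing are forbidden, while mapping and writing remain allowed).
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>
int main(void)
{
    const size_t size = 4096;
    /* Create an anonymous, fd-backed memory file and size it. */
    int fd = memfd_create("demo-backend", MFD_CLOEXEC | MFD_ALLOW_SEALING);
    if (fd < 0 || ftruncate(fd, size) < 0) {
        perror("memfd_create/ftruncate");
        return 1;
    }
    /* Seal it so it can no longer grow, shrink, or be resealed. */
    if (fcntl(fd, F_ADD_SEALS, F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_SEAL) < 0) {
        perror("F_ADD_SEALS");
        return 1;
    }
    /* Map it shared, much as guest RAM would be mapped. */
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    memcpy(p, "hello", 6);
    printf("sealed memfd mapped at %p: %s\n", p, (char *)p);
    munmap(p, size);
    close(fd);
    return 0;
}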

@@ -9,9 +9,7 @@
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/hostmem.h"
#include "qapi/error.h"
#include "qom/object_interfaces.h"
#define TYPE_MEMORY_BACKEND_RAM "memory-backend-ram"
@@ -28,8 +26,8 @@ ram_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
}
path = object_get_canonical_path_component(OBJECT(backend));
memory_region_init_ram_shared_nomigrate(&backend->mr, OBJECT(backend), path,
backend->size, backend->share, errp);
memory_region_init_ram(&backend->mr, OBJECT(backend), path,
backend->size);
g_free(path);
}

@@ -9,13 +9,11 @@
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/hostmem.h"
#include "hw/boards.h"
#include "qapi/error.h"
#include "qapi/qapi-builtin-visit.h"
#include "qapi/visitor.h"
#include "qapi-types.h"
#include "qapi-visit.h"
#include "qapi/qmp/qerror.h"
#include "qemu/config-file.h"
#include "qom/object_interfaces.h"
@@ -28,29 +26,29 @@ QEMU_BUILD_BUG_ON(HOST_MEM_POLICY_INTERLEAVE != MPOL_INTERLEAVE);
#endif
static void
host_memory_backend_get_size(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
host_memory_backend_get_size(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
uint64_t value = backend->size;
visit_type_size(v, name, &value, errp);
visit_type_size(v, &value, name, errp);
}
static void
host_memory_backend_set_size(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
host_memory_backend_set_size(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
Error *local_err = NULL;
uint64_t value;
if (host_memory_backend_mr_inited(backend)) {
if (memory_region_size(&backend->mr)) {
error_setg(&local_err, "cannot change property value");
goto out;
}
visit_type_size(v, name, &value, &local_err);
visit_type_size(v, &value, name, &local_err);
if (local_err) {
goto out;
}
@@ -65,8 +63,8 @@ out:
}
static void
host_memory_backend_get_host_nodes(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
host_memory_backend_get_host_nodes(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
uint16List *host_nodes = NULL;
@@ -93,18 +91,18 @@ host_memory_backend_get_host_nodes(Object *obj, Visitor *v, const char *name,
node = &(*node)->next;
} while (true);
visit_type_uint16List(v, name, &host_nodes, errp);
visit_type_uint16List(v, &host_nodes, name, errp);
}
static void
host_memory_backend_set_host_nodes(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
host_memory_backend_set_host_nodes(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
#ifdef CONFIG_NUMA
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
uint16List *l = NULL;
visit_type_uint16List(v, name, &l, errp);
visit_type_uint16List(v, &l, name, errp);
while (l) {
bitmap_set(backend->host_nodes, l->value, 1);
@@ -115,17 +113,24 @@ host_memory_backend_set_host_nodes(Object *obj, Visitor *v, const char *name,
#endif
}
static int
host_memory_backend_get_policy(Object *obj, Error **errp G_GNUC_UNUSED)
static void
host_memory_backend_get_policy(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
return backend->policy;
int policy = backend->policy;
visit_type_enum(v, &policy, HostMemPolicy_lookup, NULL, name, errp);
}
static void
host_memory_backend_set_policy(Object *obj, int policy, Error **errp)
host_memory_backend_set_policy(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
int policy;
visit_type_enum(v, &policy, HostMemPolicy_lookup, NULL, name, errp);
backend->policy = policy;
#ifndef CONFIG_NUMA
@@ -146,7 +151,7 @@ static void host_memory_backend_set_merge(Object *obj, bool value, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
if (!host_memory_backend_mr_inited(backend)) {
if (!memory_region_size(&backend->mr)) {
backend->merge = value;
return;
}
@@ -172,7 +177,7 @@ static void host_memory_backend_set_dump(Object *obj, bool value, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
if (!host_memory_backend_mr_inited(backend)) {
if (!memory_region_size(&backend->mr)) {
backend->dump = value;
return;
}
@@ -197,7 +202,6 @@ static bool host_memory_backend_get_prealloc(Object *obj, Error **errp)
static void host_memory_backend_set_prealloc(Object *obj, bool value,
Error **errp)
{
Error *local_err = NULL;
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
if (backend->force_prealloc) {
@@ -208,7 +212,7 @@ static void host_memory_backend_set_prealloc(Object *obj, bool value,
}
}
if (!host_memory_backend_mr_inited(backend)) {
if (!memory_region_size(&backend->mr)) {
backend->prealloc = value;
return;
}
@@ -218,11 +222,7 @@ static void host_memory_backend_set_prealloc(Object *obj, bool value,
void *ptr = memory_region_get_ram_ptr(&backend->mr);
uint64_t sz = memory_region_size(&backend->mr);
os_mem_prealloc(fd, ptr, sz, smp_cpus, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
os_mem_prealloc(fd, ptr, sz);
backend->prealloc = true;
}
}
@@ -230,36 +230,46 @@ static void host_memory_backend_set_prealloc(Object *obj, bool value,
static void host_memory_backend_init(Object *obj)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
MachineState *machine = MACHINE(qdev_get_machine());
backend->merge = machine_mem_merge(machine);
backend->dump = machine_dump_guest_core(machine);
backend->merge = qemu_opt_get_bool(qemu_get_machine_opts(),
"mem-merge", true);
backend->dump = qemu_opt_get_bool(qemu_get_machine_opts(),
"dump-guest-core", true);
backend->prealloc = mem_prealloc;
object_property_add_bool(obj, "merge",
host_memory_backend_get_merge,
host_memory_backend_set_merge, NULL);
object_property_add_bool(obj, "dump",
host_memory_backend_get_dump,
host_memory_backend_set_dump, NULL);
object_property_add_bool(obj, "prealloc",
host_memory_backend_get_prealloc,
host_memory_backend_set_prealloc, NULL);
object_property_add(obj, "size", "int",
host_memory_backend_get_size,
host_memory_backend_set_size, NULL, NULL, NULL);
object_property_add(obj, "host-nodes", "int",
host_memory_backend_get_host_nodes,
host_memory_backend_set_host_nodes, NULL, NULL, NULL);
object_property_add(obj, "policy", "str",
host_memory_backend_get_policy,
host_memory_backend_set_policy, NULL, NULL, NULL);
}
bool host_memory_backend_mr_inited(HostMemoryBackend *backend)
static void host_memory_backend_finalize(Object *obj)
{
/*
* NOTE: We forbid zero-length memory backend, so here zero means
* "we haven't inited the backend memory region yet".
*/
return memory_region_size(&backend->mr) != 0;
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
if (memory_region_size(&backend->mr)) {
memory_region_destroy(&backend->mr);
}
}
MemoryRegion *
host_memory_backend_get_memory(HostMemoryBackend *backend, Error **errp)
{
return host_memory_backend_mr_inited(backend) ? &backend->mr : NULL;
}
void host_memory_backend_set_mapped(HostMemoryBackend *backend, bool mapped)
{
backend->is_mapped = mapped;
}
bool host_memory_backend_is_mapped(HostMemoryBackend *backend)
{
return backend->is_mapped;
return memory_region_size(&backend->mr) ? &backend->mr : NULL;
}
static void
@@ -274,7 +284,8 @@ host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
if (bc->alloc) {
bc->alloc(backend, &local_err);
if (local_err) {
goto out;
error_propagate(errp, local_err);
return;
}
ptr = memory_region_get_ram_ptr(&backend->mr);
@@ -304,7 +315,7 @@ host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
return;
} else if (maxnode == 0 && backend->policy != MPOL_DEFAULT) {
error_setg(errp, "host-nodes must be set for policy %s",
HostMemPolicy_str(backend->policy));
HostMemPolicy_lookup[backend->policy]);
return;
}
@@ -318,11 +329,9 @@ host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
assert(maxnode <= MAX_NODES);
if (mbind(ptr, sz, backend->policy,
maxnode ? backend->host_nodes : NULL, maxnode + 1, flags)) {
if (backend->policy != MPOL_DEFAULT || errno != ENOSYS) {
error_setg_errno(errp, errno,
"cannot bind memory to host NUMA nodes");
return;
}
error_setg_errno(errp, errno,
"cannot bind memory to host NUMA nodes");
return;
}
#endif
/* Preallocate memory after the NUMA policy has been instantiated.
@@ -330,61 +339,9 @@ host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
* specified NUMA policy in place.
*/
if (backend->prealloc) {
os_mem_prealloc(memory_region_get_fd(&backend->mr), ptr, sz,
smp_cpus, &local_err);
if (local_err) {
goto out;
}
os_mem_prealloc(memory_region_get_fd(&backend->mr), ptr, sz);
}
}
out:
error_propagate(errp, local_err);
}
static bool
host_memory_backend_can_be_deleted(UserCreatable *uc)
{
if (host_memory_backend_is_mapped(MEMORY_BACKEND(uc))) {
return false;
} else {
return true;
}
}
static char *get_id(Object *o, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(o);
return g_strdup(backend->id);
}
static void set_id(Object *o, const char *str, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(o);
if (backend->id) {
error_setg(errp, "cannot change property value");
return;
}
backend->id = g_strdup(str);
}
static bool host_memory_backend_get_share(Object *o, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(o);
return backend->share;
}
static void host_memory_backend_set_share(Object *o, bool value, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(o);
if (host_memory_backend_mr_inited(backend)) {
error_setg(errp, "cannot change property value");
return;
}
backend->share = value;
}
static void
@@ -393,39 +350,6 @@ host_memory_backend_class_init(ObjectClass *oc, void *data)
UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
ucc->complete = host_memory_backend_memory_complete;
ucc->can_be_deleted = host_memory_backend_can_be_deleted;
object_class_property_add_bool(oc, "merge",
host_memory_backend_get_merge,
host_memory_backend_set_merge, &error_abort);
object_class_property_add_bool(oc, "dump",
host_memory_backend_get_dump,
host_memory_backend_set_dump, &error_abort);
object_class_property_add_bool(oc, "prealloc",
host_memory_backend_get_prealloc,
host_memory_backend_set_prealloc, &error_abort);
object_class_property_add(oc, "size", "int",
host_memory_backend_get_size,
host_memory_backend_set_size,
NULL, NULL, &error_abort);
object_class_property_add(oc, "host-nodes", "int",
host_memory_backend_get_host_nodes,
host_memory_backend_set_host_nodes,
NULL, NULL, &error_abort);
object_class_property_add_enum(oc, "policy", "HostMemPolicy",
&HostMemPolicy_lookup,
host_memory_backend_get_policy,
host_memory_backend_set_policy, &error_abort);
object_class_property_add_str(oc, "id", get_id, set_id, &error_abort);
object_class_property_add_bool(oc, "share",
host_memory_backend_get_share, host_memory_backend_set_share,
&error_abort);
}
static void host_memory_backend_finalize(Object *o)
{
HostMemoryBackend *backend = MEMORY_BACKEND(o);
g_free(backend->id);
}
static const TypeInfo host_memory_backend_info = {

backends/msmouse.c (new file, 85 lines)

@@ -0,0 +1,85 @@
/*
* QEMU Microsoft serial mouse emulation
*
* Copyright (c) 2008 Lubomir Rintel
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <stdlib.h>
#include "qemu-common.h"
#include "sysemu/char.h"
#include "ui/console.h"
#define MSMOUSE_LO6(n) ((n) & 0x3f)
#define MSMOUSE_HI2(n) (((n) & 0xc0) >> 6)
static void msmouse_event(void *opaque,
int dx, int dy, int dz, int buttons_state)
{
CharDriverState *chr = (CharDriverState *)opaque;
unsigned char bytes[4] = { 0x40, 0x00, 0x00, 0x00 };
/* Movement deltas */
bytes[0] |= (MSMOUSE_HI2(dy) << 2) | MSMOUSE_HI2(dx);
bytes[1] |= MSMOUSE_LO6(dx);
bytes[2] |= MSMOUSE_LO6(dy);
/* Buttons */
bytes[0] |= (buttons_state & 0x01 ? 0x20 : 0x00);
bytes[0] |= (buttons_state & 0x02 ? 0x10 : 0x00);
bytes[3] |= (buttons_state & 0x04 ? 0x20 : 0x00);
/* We always send the full packet, so that we do not have to keep track
of the previous state of the middle button. This can potentially confuse
some very old drivers for two-button mice, though. */
qemu_chr_be_write(chr, bytes, 4);
}
static int msmouse_chr_write (struct CharDriverState *s, const uint8_t *buf, int len)
{
/* Ignore writes to mouse port */
return len;
}
static void msmouse_chr_close (struct CharDriverState *chr)
{
g_free (chr);
}
CharDriverState *qemu_chr_open_msmouse(void)
{
CharDriverState *chr;
chr = qemu_chr_alloc();
chr->chr_write = msmouse_chr_write;
chr->chr_close = msmouse_chr_close;
chr->explicit_be_open = true;
qemu_add_mouse_event_handler(msmouse_event, chr, 0, "QEMU Microsoft Mouse");
return chr;
}
static void register_types(void)
{
register_char_driver_qapi("msmouse", CHARDEV_BACKEND_KIND_MSMOUSE, NULL);
}
type_init(register_types);
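The comment and macros above fully determine the wire format: byte 0 carries the two high bits of each delta plus the left and right buttons, bytes 1 and 2 carry the low six bits of dx and dy, and byte 3 carries the middle button. A small stand-alone sketch of the same encoding (the macros are copied from the file above, the sample values are arbitrary):

#include <stdio.h>

#define MSMOUSE_LO6(n) ((n) & 0x3f)
#define MSMOUSE_HI2(n) (((n) & 0xc0) >> 6)

/* Encode one Microsoft serial mouse packet the same way msmouse_event() does. */
static void encode_packet(int dx, int dy, int buttons_state, unsigned char bytes[4])
{
    bytes[0] = 0x40;
    bytes[1] = bytes[2] = bytes[3] = 0x00;

    /* Movement deltas */
    bytes[0] |= (MSMOUSE_HI2(dy) << 2) | MSMOUSE_HI2(dx);
    bytes[1] |= MSMOUSE_LO6(dx);
    bytes[2] |= MSMOUSE_LO6(dy);

    /* Buttons: left (0x01), right (0x02), middle (0x04) */
    bytes[0] |= (buttons_state & 0x01 ? 0x20 : 0x00);
    bytes[0] |= (buttons_state & 0x02 ? 0x10 : 0x00);
    bytes[3] |= (buttons_state & 0x04 ? 0x20 : 0x00);
}

int main(void)
{
    unsigned char pkt[4];

    encode_packet(10, -3, 0x01, pkt);   /* small movement, left button held */
    printf("%02x %02x %02x %02x\n", pkt[0], pkt[1], pkt[2], pkt[3]);
    return 0;
}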


@@ -10,11 +10,10 @@
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/rng.h"
#include "chardev/char-fe.h"
#include "qapi/error.h"
#include "sysemu/char.h"
#include "qapi/qmp/qerror.h"
#include "hw/qdev.h" /* just for DEFINE_PROP_CHR */
#define TYPE_RNG_EGD "rng-egd"
#define RNG_EGD(obj) OBJECT_CHECK(RngEgd, (obj), TYPE_RNG_EGD)
@@ -23,14 +22,35 @@ typedef struct RngEgd
{
RngBackend parent;
CharBackend chr;
CharDriverState *chr;
char *chr_name;
GSList *requests;
} RngEgd;
static void rng_egd_request_entropy(RngBackend *b, RngRequest *req)
typedef struct RngRequest
{
EntropyReceiveFunc *receive_entropy;
uint8_t *data;
void *opaque;
size_t offset;
size_t size;
} RngRequest;
static void rng_egd_request_entropy(RngBackend *b, size_t size,
EntropyReceiveFunc *receive_entropy,
void *opaque)
{
RngEgd *s = RNG_EGD(b);
size_t size = req->size;
RngRequest *req;
req = g_malloc(sizeof(*req));
req->offset = 0;
req->size = size;
req->receive_entropy = receive_entropy;
req->opaque = opaque;
req->data = g_malloc(req->size);
while (size > 0) {
uint8_t header[2];
@@ -40,21 +60,28 @@ static void rng_egd_request_entropy(RngBackend *b, RngRequest *req)
header[0] = 0x02;
header[1] = len;
/* XXX this blocks entire thread. Rewrite to use
* qemu_chr_fe_write and background I/O callbacks */
qemu_chr_fe_write_all(&s->chr, header, sizeof(header));
qemu_chr_fe_write(s->chr, header, sizeof(header));
size -= len;
}
s->requests = g_slist_append(s->requests, req);
}
static void rng_egd_free_request(RngRequest *req)
{
g_free(req->data);
g_free(req);
}
static int rng_egd_chr_can_read(void *opaque)
{
RngEgd *s = RNG_EGD(opaque);
RngRequest *req;
GSList *i;
int size = 0;
QSIMPLEQ_FOREACH(req, &s->parent.requests, next) {
for (i = s->requests; i; i = i->next) {
RngRequest *req = i->data;
size += req->size - req->offset;
}
@@ -66,8 +93,8 @@ static void rng_egd_chr_read(void *opaque, const uint8_t *buf, int size)
RngEgd *s = RNG_EGD(opaque);
size_t buf_offset = 0;
while (size > 0 && !QSIMPLEQ_EMPTY(&s->parent.requests)) {
RngRequest *req = QSIMPLEQ_FIRST(&s->parent.requests);
while (size > 0 && s->requests) {
RngRequest *req = s->requests->data;
int len = MIN(size, req->size - req->offset);
memcpy(req->data + req->offset, buf + buf_offset, len);
@@ -76,37 +103,62 @@ static void rng_egd_chr_read(void *opaque, const uint8_t *buf, int size)
size -= len;
if (req->offset == req->size) {
s->requests = g_slist_remove_link(s->requests, s->requests);
req->receive_entropy(req->opaque, req->data, req->size);
rng_backend_finalize_request(&s->parent, req);
rng_egd_free_request(req);
}
}
}
static void rng_egd_free_requests(RngEgd *s)
{
GSList *i;
for (i = s->requests; i; i = i->next) {
rng_egd_free_request(i->data);
}
g_slist_free(s->requests);
s->requests = NULL;
}
static void rng_egd_cancel_requests(RngBackend *b)
{
RngEgd *s = RNG_EGD(b);
/* We simply delete the list of pending requests. If there is data in the
* queue waiting to be read, this is okay, because there will always be
* more data than we requested originally
*/
rng_egd_free_requests(s);
}
static void rng_egd_opened(RngBackend *b, Error **errp)
{
RngEgd *s = RNG_EGD(b);
Chardev *chr;
if (s->chr_name == NULL) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
"chardev", "a valid character device");
error_set(errp, QERR_INVALID_PARAMETER_VALUE,
"chardev", "a valid character device");
return;
}
chr = qemu_chr_find(s->chr_name);
if (chr == NULL) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", s->chr_name);
s->chr = qemu_chr_find(s->chr_name);
if (s->chr == NULL) {
error_set(errp, QERR_DEVICE_NOT_FOUND, s->chr_name);
return;
}
if (!qemu_chr_fe_init(&s->chr, chr, errp)) {
if (qemu_chr_fe_claim(s->chr) != 0) {
error_set(errp, QERR_DEVICE_IN_USE, s->chr_name);
return;
}
/* FIXME we should resubmit pending requests when the CDS reconnects. */
qemu_chr_fe_set_handlers(&s->chr, rng_egd_chr_can_read,
rng_egd_chr_read, NULL, NULL, s, NULL, true);
qemu_chr_add_handlers(s->chr, rng_egd_chr_can_read, rng_egd_chr_read,
NULL, s);
}
static void rng_egd_set_chardev(Object *obj, const char *value, Error **errp)
@@ -115,7 +167,7 @@ static void rng_egd_set_chardev(Object *obj, const char *value, Error **errp)
RngEgd *s = RNG_EGD(b);
if (b->opened) {
error_setg(errp, QERR_PERMISSION_DENIED);
error_set(errp, QERR_PERMISSION_DENIED);
} else {
g_free(s->chr_name);
s->chr_name = g_strdup(value);
@@ -125,10 +177,9 @@ static void rng_egd_set_chardev(Object *obj, const char *value, Error **errp)
static char *rng_egd_get_chardev(Object *obj, Error **errp)
{
RngEgd *s = RNG_EGD(obj);
Chardev *chr = qemu_chr_fe_get_driver(&s->chr);
if (chr && chr->label) {
return g_strdup(chr->label);
if (s->chr && s->chr->label) {
return g_strdup(s->chr->label);
}
return NULL;
@@ -145,8 +196,14 @@ static void rng_egd_finalize(Object *obj)
{
RngEgd *s = RNG_EGD(obj);
qemu_chr_fe_deinit(&s->chr, false);
if (s->chr) {
qemu_chr_add_handlers(s->chr, NULL, NULL, NULL, NULL);
qemu_chr_fe_release(s->chr);
}
g_free(s->chr_name);
rng_egd_free_requests(s);
}
static void rng_egd_class_init(ObjectClass *klass, void *data)
@@ -154,6 +211,7 @@ static void rng_egd_class_init(ObjectClass *klass, void *data)
RngBackendClass *rbc = RNG_BACKEND_CLASS(klass);
rbc->request_entropy = rng_egd_request_entropy;
rbc->cancel_requests = rng_egd_cancel_requests;
rbc->opened = rng_egd_opened;
}
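The loop in rng_egd_request_entropy() above frames a request for size bytes as a series of two-byte headers written to the character device, each asking for at most 255 bytes: header[0] is the command byte 0x02 this backend uses and header[1] is the chunk length. A stand-alone sketch of that framing (function and buffer names are illustrative):

#include <stdint.h>
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Build the sequence of 2-byte request headers that rng_egd_request_entropy()
 * writes for a request of 'size' bytes.  Returns the number of header bytes
 * written into 'out', which must hold 2 bytes per 255-byte chunk. */
static size_t egd_frame_request(size_t size, uint8_t *out)
{
    size_t n = 0;

    while (size > 0) {
        uint8_t len = MIN(size, 255);

        out[n++] = 0x02;   /* request command, as in the backend above */
        out[n++] = len;    /* how many bytes of entropy to send back */
        size -= len;
    }
    return n;
}

int main(void)
{
    uint8_t hdrs[16];
    size_t n = egd_frame_request(300, hdrs);   /* 300 bytes -> chunks of 255 + 45 */

    for (size_t i = 0; i < n; i++) {
        printf("%02x ", hdrs[i]);
    }
    printf("\n");
    return 0;
}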


@@ -10,19 +10,21 @@
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/rng-random.h"
#include "sysemu/rng.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "qemu/main-loop.h"
struct RngRandom
struct RndRandom
{
RngBackend parent;
int fd;
char *filename;
EntropyReceiveFunc *receive_func;
void *opaque;
size_t size;
};
/**
@@ -34,45 +36,46 @@ struct RngRandom
static void entropy_available(void *opaque)
{
RngRandom *s = RNG_RANDOM(opaque);
RndRandom *s = RNG_RANDOM(opaque);
uint8_t buffer[s->size];
ssize_t len;
while (!QSIMPLEQ_EMPTY(&s->parent.requests)) {
RngRequest *req = QSIMPLEQ_FIRST(&s->parent.requests);
ssize_t len;
len = read(s->fd, req->data, req->size);
if (len < 0 && errno == EAGAIN) {
return;
}
g_assert(len != -1);
req->receive_entropy(req->opaque, req->data, len);
rng_backend_finalize_request(&s->parent, req);
len = read(s->fd, buffer, s->size);
if (len < 0 && errno == EAGAIN) {
return;
}
g_assert(len != -1);
s->receive_func(s->opaque, buffer, len);
s->receive_func = NULL;
/* We've drained all requests, the fd handler can be reset. */
qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
}
static void rng_random_request_entropy(RngBackend *b, RngRequest *req)
static void rng_random_request_entropy(RngBackend *b, size_t size,
EntropyReceiveFunc *receive_entropy,
void *opaque)
{
RngRandom *s = RNG_RANDOM(b);
RndRandom *s = RNG_RANDOM(b);
if (QSIMPLEQ_EMPTY(&s->parent.requests)) {
/* If there are no pending requests yet, we need to
* install our fd handler. */
qemu_set_fd_handler(s->fd, entropy_available, NULL, s);
if (s->receive_func) {
s->receive_func(s->opaque, NULL, 0);
}
s->receive_func = receive_entropy;
s->opaque = opaque;
s->size = size;
qemu_set_fd_handler(s->fd, entropy_available, NULL, s);
}
static void rng_random_opened(RngBackend *b, Error **errp)
{
RngRandom *s = RNG_RANDOM(b);
RndRandom *s = RNG_RANDOM(b);
if (s->filename == NULL) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
"filename", "a valid filename");
error_set(errp, QERR_INVALID_PARAMETER_VALUE,
"filename", "a valid filename");
} else {
s->fd = qemu_open(s->filename, O_RDONLY | O_NONBLOCK);
if (s->fd == -1) {
@@ -83,19 +86,23 @@ static void rng_random_opened(RngBackend *b, Error **errp)
static char *rng_random_get_filename(Object *obj, Error **errp)
{
RngRandom *s = RNG_RANDOM(obj);
RndRandom *s = RNG_RANDOM(obj);
return g_strdup(s->filename);
if (s->filename) {
return g_strdup(s->filename);
}
return NULL;
}
static void rng_random_set_filename(Object *obj, const char *filename,
Error **errp)
{
RngBackend *b = RNG_BACKEND(obj);
RngRandom *s = RNG_RANDOM(obj);
RndRandom *s = RNG_RANDOM(obj);
if (b->opened) {
error_setg(errp, QERR_PERMISSION_DENIED);
error_set(errp, QERR_PERMISSION_DENIED);
return;
}
@@ -105,7 +112,7 @@ static void rng_random_set_filename(Object *obj, const char *filename,
static void rng_random_init(Object *obj)
{
RngRandom *s = RNG_RANDOM(obj);
RndRandom *s = RNG_RANDOM(obj);
object_property_add_str(obj, "filename",
rng_random_get_filename,
@@ -118,7 +125,7 @@ static void rng_random_init(Object *obj)
static void rng_random_finalize(Object *obj)
{
RngRandom *s = RNG_RANDOM(obj);
RndRandom *s = RNG_RANDOM(obj);
if (s->fd != -1) {
qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
@@ -139,7 +146,7 @@ static void rng_random_class_init(ObjectClass *klass, void *data)
static const TypeInfo rng_random_info = {
.name = TYPE_RNG_RANDOM,
.parent = TYPE_RNG_BACKEND,
.instance_size = sizeof(RngRandom),
.instance_size = sizeof(RndRandom),
.class_init = rng_random_class_init,
.instance_init = rng_random_init,
.instance_finalize = rng_random_finalize,


@@ -10,9 +10,7 @@
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/rng.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "qom/object_interfaces.h"
@@ -21,20 +19,18 @@ void rng_backend_request_entropy(RngBackend *s, size_t size,
void *opaque)
{
RngBackendClass *k = RNG_BACKEND_GET_CLASS(s);
RngRequest *req;
if (k->request_entropy) {
req = g_malloc(sizeof(*req));
k->request_entropy(s, size, receive_entropy, opaque);
}
}
req->offset = 0;
req->size = size;
req->receive_entropy = receive_entropy;
req->opaque = opaque;
req->data = g_malloc(req->size);
void rng_backend_cancel_requests(RngBackend *s)
{
RngBackendClass *k = RNG_BACKEND_GET_CLASS(s);
k->request_entropy(s, req);
QSIMPLEQ_INSERT_TAIL(&s->requests, req, next);
if (k->cancel_requests) {
k->cancel_requests(s);
}
}
@@ -61,7 +57,7 @@ static void rng_backend_prop_set_opened(Object *obj, bool value, Error **errp)
}
if (!value && s->opened) {
error_setg(errp, QERR_PERMISSION_DENIED);
error_set(errp, QERR_PERMISSION_DENIED);
return;
}
@@ -76,48 +72,14 @@ static void rng_backend_prop_set_opened(Object *obj, bool value, Error **errp)
s->opened = true;
}
static void rng_backend_free_request(RngRequest *req)
{
g_free(req->data);
g_free(req);
}
static void rng_backend_free_requests(RngBackend *s)
{
RngRequest *req, *next;
QSIMPLEQ_FOREACH_SAFE(req, &s->requests, next, next) {
rng_backend_free_request(req);
}
QSIMPLEQ_INIT(&s->requests);
}
void rng_backend_finalize_request(RngBackend *s, RngRequest *req)
{
QSIMPLEQ_REMOVE(&s->requests, req, RngRequest, next);
rng_backend_free_request(req);
}
static void rng_backend_init(Object *obj)
{
RngBackend *s = RNG_BACKEND(obj);
QSIMPLEQ_INIT(&s->requests);
object_property_add_bool(obj, "opened",
rng_backend_prop_get_opened,
rng_backend_prop_set_opened,
NULL);
}
static void rng_backend_finalize(Object *obj)
{
RngBackend *s = RNG_BACKEND(obj);
rng_backend_free_requests(s);
}
static void rng_backend_class_init(ObjectClass *oc, void *data)
{
UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
@@ -130,7 +92,6 @@ static const TypeInfo rng_backend_info = {
.parent = TYPE_OBJECT,
.instance_size = sizeof(RngBackend),
.instance_init = rng_backend_init,
.instance_finalize = rng_backend_finalize,
.class_size = sizeof(RngBackendClass),
.class_init = rng_backend_class_init,
.abstract = true,
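Seen from a front-end (for example a virtio-rng style device), the API above is used by registering a receive callback and asking the backend core for bytes; the core queues an RngRequest and hands it to the concrete backend (rng-egd, rng-random, ...). A minimal sketch, assuming EntropyReceiveFunc takes the (opaque, data, size) arguments implied by the calls above; the callback body and names are illustrative:

#include "qemu/osdep.h"
#include "sysemu/rng.h"

/* Called by the backend once entropy is available for a queued request. */
static void example_entropy_received(void *opaque, const void *data, size_t size)
{
    memcpy(opaque, data, size);   /* 'opaque' is the caller's destination buffer */
}

static void example_ask_for_entropy(RngBackend *rng, uint8_t *buf, size_t want)
{
    /* Queues an RngRequest and dispatches it to the concrete backend; the
     * request is finalized via rng_backend_finalize_request() on completion. */
    rng_backend_request_entropy(rng, want, example_entropy_received, buf);
}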


@@ -12,196 +12,182 @@
* Based on backends/rng.c by Anthony Liguori
*/
#include "qemu/osdep.h"
#include "sysemu/tpm_backend.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "sysemu/tpm.h"
#include "qemu/thread.h"
#include "qemu/main-loop.h"
#include "block/thread-pool.h"
#include "qemu/error-report.h"
static void tpm_backend_request_completed(void *opaque, int ret)
{
TPMBackend *s = TPM_BACKEND(opaque);
TPMIfClass *tic = TPM_IF_GET_CLASS(s->tpmif);
tic->request_completed(s->tpmif, ret);
/* no need for atomic, as long the BQL is taken */
s->cmd = NULL;
object_unref(OBJECT(s));
}
static int tpm_backend_worker_thread(gpointer data)
{
TPMBackend *s = TPM_BACKEND(data);
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
Error *err = NULL;
k->handle_request(s, s->cmd, &err);
if (err) {
error_report_err(err);
return -1;
}
return 0;
}
void tpm_backend_finish_sync(TPMBackend *s)
{
while (s->cmd) {
aio_poll(qemu_get_aio_context(), true);
}
}
#include "sysemu/tpm_backend_int.h"
enum TpmType tpm_backend_get_type(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->type;
return k->ops->type;
}
int tpm_backend_init(TPMBackend *s, TPMIf *tpmif, Error **errp)
const char *tpm_backend_get_desc(TPMBackend *s)
{
if (s->tpmif) {
error_setg(errp, "TPM backend '%s' is already initialized", s->id);
return -1;
}
s->tpmif = tpmif;
object_ref(OBJECT(tpmif));
s->had_startup_error = false;
return 0;
}
int tpm_backend_startup_tpm(TPMBackend *s, size_t buffersize)
{
int res = 0;
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
/* terminate a running TPM */
tpm_backend_finish_sync(s);
return k->ops->desc();
}
res = k->startup_tpm ? k->startup_tpm(s, buffersize) : 0;
void tpm_backend_destroy(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
s->had_startup_error = (res != 0);
return k->ops->destroy(s);
}
return res;
int tpm_backend_init(TPMBackend *s, TPMState *state,
TPMRecvDataCB *datacb)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->init(s, state, datacb);
}
int tpm_backend_startup_tpm(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->startup_tpm(s);
}
bool tpm_backend_had_startup_error(TPMBackend *s)
{
return s->had_startup_error;
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->had_startup_error(s);
}
void tpm_backend_deliver_request(TPMBackend *s, TPMBackendCmd *cmd)
size_t tpm_backend_realloc_buffer(TPMBackend *s, TPMSizedBuffer *sb)
{
ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
if (s->cmd != NULL) {
error_report("There is a TPM request pending");
return;
}
return k->ops->realloc_buffer(sb);
}
s->cmd = cmd;
object_ref(OBJECT(s));
thread_pool_submit_aio(pool, tpm_backend_worker_thread, s,
tpm_backend_request_completed, s);
void tpm_backend_deliver_request(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
k->ops->deliver_request(s);
}
void tpm_backend_reset(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
if (k->reset) {
k->reset(s);
}
tpm_backend_finish_sync(s);
s->had_startup_error = false;
k->ops->reset(s);
}
void tpm_backend_cancel_cmd(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
k->cancel_cmd(s);
k->ops->cancel_cmd(s);
}
bool tpm_backend_get_tpm_established_flag(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->get_tpm_established_flag ?
k->get_tpm_established_flag(s) : false;
return k->ops->get_tpm_established_flag(s);
}
int tpm_backend_reset_tpm_established_flag(TPMBackend *s, uint8_t locty)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->reset_tpm_established_flag ?
k->reset_tpm_established_flag(s, locty) : 0;
}
TPMVersion tpm_backend_get_tpm_version(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->get_tpm_version(s);
}
size_t tpm_backend_get_buffer_size(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->get_buffer_size(s);
}
TPMInfo *tpm_backend_query_tpm(TPMBackend *s)
{
TPMInfo *info = g_new0(TPMInfo, 1);
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
TPMIfClass *tic = TPM_IF_GET_CLASS(s->tpmif);
info->id = g_strdup(s->id);
info->model = tic->model;
info->options = k->get_tpm_options(s);
return info;
}
static void tpm_backend_instance_finalize(Object *obj)
static bool tpm_backend_prop_get_opened(Object *obj, Error **errp)
{
TPMBackend *s = TPM_BACKEND(obj);
object_unref(OBJECT(s->tpmif));
g_free(s->id);
return s->opened;
}
void tpm_backend_open(TPMBackend *s, Error **errp)
{
object_property_set_bool(OBJECT(s), true, "opened", errp);
}
static void tpm_backend_prop_set_opened(Object *obj, bool value, Error **errp)
{
TPMBackend *s = TPM_BACKEND(obj);
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
Error *local_err = NULL;
if (value == s->opened) {
return;
}
if (!value && s->opened) {
error_set(errp, QERR_PERMISSION_DENIED);
return;
}
if (k->opened) {
k->opened(s, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
}
s->opened = true;
}
static void tpm_backend_instance_init(Object *obj)
{
object_property_add_bool(obj, "opened",
tpm_backend_prop_get_opened,
tpm_backend_prop_set_opened,
NULL);
}
void tpm_backend_thread_deliver_request(TPMBackendThread *tbt)
{
g_thread_pool_push(tbt->pool, (gpointer)TPM_BACKEND_CMD_PROCESS_CMD, NULL);
}
void tpm_backend_thread_create(TPMBackendThread *tbt,
GFunc func, gpointer user_data)
{
if (!tbt->pool) {
tbt->pool = g_thread_pool_new(func, user_data, 1, TRUE, NULL);
g_thread_pool_push(tbt->pool, (gpointer)TPM_BACKEND_CMD_INIT, NULL);
}
}
void tpm_backend_thread_end(TPMBackendThread *tbt)
{
if (tbt->pool) {
g_thread_pool_push(tbt->pool, (gpointer)TPM_BACKEND_CMD_END, NULL);
g_thread_pool_free(tbt->pool, FALSE, TRUE);
tbt->pool = NULL;
}
}
void tpm_backend_thread_tpm_reset(TPMBackendThread *tbt,
GFunc func, gpointer user_data)
{
if (!tbt->pool) {
tpm_backend_thread_create(tbt, func, user_data);
} else {
g_thread_pool_push(tbt->pool, (gpointer)TPM_BACKEND_CMD_TPM_RESET,
NULL);
}
}
static const TypeInfo tpm_backend_info = {
.name = TYPE_TPM_BACKEND,
.parent = TYPE_OBJECT,
.instance_size = sizeof(TPMBackend),
.instance_finalize = tpm_backend_instance_finalize,
.instance_init = tpm_backend_instance_init,
.class_size = sizeof(TPMBackendClass),
.abstract = true,
};
static const TypeInfo tpm_if_info = {
.name = TYPE_TPM_IF,
.parent = TYPE_INTERFACE,
.class_size = sizeof(TPMIfClass),
};
static void register_types(void)
{
type_register_static(&tpm_backend_info);
type_register_static(&tpm_if_info);
}
type_init(register_types);
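On the newer side of this diff the backend is driven directly through class hooks rather than the ops table, and commands run on the AIO thread pool. A rough sketch of the call order a TPM front-end (a TPMIf implementation) would follow, with error handling trimmed and the command setup left out as hypothetical:

#include "qemu/osdep.h"
#include "sysemu/tpm_backend.h"

static void example_tpm_roundtrip(TPMBackend *be, TPMIf *frontend, TPMBackendCmd *cmd)
{
    if (tpm_backend_init(be, frontend, NULL) < 0) {
        return;   /* backend is already bound to a front-end */
    }
    if (tpm_backend_startup_tpm(be, 4096) < 0) {   /* buffer size is illustrative */
        return;
    }

    /* handle_request() runs on the thread pool; the front-end's
     * request_completed() hook fires from tpm_backend_request_completed(). */
    tpm_backend_deliver_request(be, cmd);
    tpm_backend_finish_sync(be);   /* poll until the command has completed */
}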


@@ -24,45 +24,17 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "monitor/monitor.h"
#include "exec/cpu-common.h"
#include "sysemu/kvm.h"
#include "sysemu/balloon.h"
#include "trace-root.h"
#include "qapi/error.h"
#include "qapi/qapi-commands-misc.h"
#include "qapi/qmp/qerror.h"
#include "trace.h"
#include "qmp-commands.h"
#include "qapi/qmp/qjson.h"
static QEMUBalloonEvent *balloon_event_fn;
static QEMUBalloonStatus *balloon_stat_fn;
static void *balloon_opaque;
static bool balloon_inhibited;
bool qemu_balloon_is_inhibited(void)
{
return balloon_inhibited;
}
void qemu_balloon_inhibit(bool state)
{
balloon_inhibited = state;
}
static bool have_balloon(Error **errp)
{
if (kvm_enabled() && !kvm_has_sync_mmu()) {
error_set(errp, ERROR_CLASS_KVM_MISSING_CAP,
"Using KVM without synchronous MMU, balloon unavailable");
return false;
}
if (!balloon_event_fn) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_ACTIVE,
"No balloon device has been activated");
return false;
}
return true;
}
int qemu_add_balloon_handler(QEMUBalloonEvent *event_func,
QEMUBalloonStatus *stat_func, void *opaque)
@@ -71,6 +43,7 @@ int qemu_add_balloon_handler(QEMUBalloonEvent *event_func,
/* We've already registered one balloon handler. How many can
* a guest really have?
*/
error_report("Another balloon device already registered");
return -1;
}
balloon_event_fn = event_func;
@@ -89,30 +62,58 @@ void qemu_remove_balloon_handler(void *opaque)
balloon_opaque = NULL;
}
static int qemu_balloon(ram_addr_t target)
{
if (!balloon_event_fn) {
return 0;
}
trace_balloon_event(balloon_opaque, target);
balloon_event_fn(balloon_opaque, target);
return 1;
}
static int qemu_balloon_status(BalloonInfo *info)
{
if (!balloon_stat_fn) {
return 0;
}
balloon_stat_fn(balloon_opaque, info);
return 1;
}
BalloonInfo *qmp_query_balloon(Error **errp)
{
BalloonInfo *info;
if (!have_balloon(errp)) {
if (kvm_enabled() && !kvm_has_sync_mmu()) {
error_set(errp, QERR_KVM_MISSING_CAP, "synchronous MMU", "balloon");
return NULL;
}
info = g_malloc0(sizeof(*info));
balloon_stat_fn(balloon_opaque, info);
if (qemu_balloon_status(info) == 0) {
error_set(errp, QERR_DEVICE_NOT_ACTIVE, "balloon");
qapi_free_BalloonInfo(info);
return NULL;
}
return info;
}
void qmp_balloon(int64_t target, Error **errp)
void qmp_balloon(int64_t value, Error **errp)
{
if (!have_balloon(errp)) {
if (kvm_enabled() && !kvm_has_sync_mmu()) {
error_set(errp, QERR_KVM_MISSING_CAP, "synchronous MMU", "balloon");
return;
}
if (target <= 0) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "target", "a size");
if (value <= 0) {
error_set(errp, QERR_INVALID_PARAMETER_VALUE, "target", "a size");
return;
}
trace_balloon_event(balloon_opaque, target);
balloon_event_fn(balloon_opaque, target);
if (qemu_balloon(value) == 0) {
error_set(errp, QERR_DEVICE_NOT_ACTIVE, "balloon");
}
}
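On the older side shown here, the balloon core simply forwards QMP requests to whichever device registered itself via qemu_add_balloon_handler(); the event callback receives the requested target size in bytes and the status callback fills in a BalloonInfo. A rough sketch of how a balloon device would hook in; the MyBalloonDev structure and its fields are hypothetical:

#include "qemu/osdep.h"
#include "sysemu/balloon.h"

typedef struct MyBalloonDev {
    ram_addr_t target;   /* hypothetical device state */
    int64_t actual;
} MyBalloonDev;

static void my_balloon_to_target(void *opaque, ram_addr_t target)
{
    MyBalloonDev *dev = opaque;

    dev->target = target;   /* start inflating/deflating toward this size */
}

static void my_balloon_stat(void *opaque, BalloonInfo *info)
{
    MyBalloonDev *dev = opaque;

    info->actual = dev->actual;   /* current guest memory, in bytes */
}

static void my_balloon_register(MyBalloonDev *dev)
{
    /* Fails if another balloon device has already registered its handlers. */
    if (qemu_add_balloon_handler(my_balloon_to_target, my_balloon_stat, dev) < 0) {
        /* only one balloon device is supported at a time */
    }
}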

block-migration.c (new file, 891 lines)

@@ -0,0 +1,891 @@
/*
* QEMU live block migration
*
* Copyright IBM, Corp. 2009
*
* Authors:
* Liran Schour <lirans@il.ibm.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include "hw/hw.h"
#include "qemu/queue.h"
#include "qemu/timer.h"
#include "migration/block.h"
#include "migration/migration.h"
#include "sysemu/blockdev.h"
#include <assert.h>
#define BLOCK_SIZE (1 << 20)
#define BDRV_SECTORS_PER_DIRTY_CHUNK (BLOCK_SIZE >> BDRV_SECTOR_BITS)
#define BLK_MIG_FLAG_DEVICE_BLOCK 0x01
#define BLK_MIG_FLAG_EOS 0x02
#define BLK_MIG_FLAG_PROGRESS 0x04
#define BLK_MIG_FLAG_ZERO_BLOCK 0x08
#define MAX_IS_ALLOCATED_SEARCH 65536
//#define DEBUG_BLK_MIGRATION
#ifdef DEBUG_BLK_MIGRATION
#define DPRINTF(fmt, ...) \
do { printf("blk_migration: " fmt, ## __VA_ARGS__); } while (0)
#else
#define DPRINTF(fmt, ...) \
do { } while (0)
#endif
typedef struct BlkMigDevState {
/* Written during setup phase. Can be read without a lock. */
BlockDriverState *bs;
int shared_base;
int64_t total_sectors;
QSIMPLEQ_ENTRY(BlkMigDevState) entry;
/* Only used by migration thread. Does not need a lock. */
int bulk_completed;
int64_t cur_sector;
int64_t cur_dirty;
/* Protected by block migration lock. */
unsigned long *aio_bitmap;
int64_t completed_sectors;
BdrvDirtyBitmap *dirty_bitmap;
Error *blocker;
} BlkMigDevState;
typedef struct BlkMigBlock {
/* Only used by migration thread. */
uint8_t *buf;
BlkMigDevState *bmds;
int64_t sector;
int nr_sectors;
struct iovec iov;
QEMUIOVector qiov;
BlockDriverAIOCB *aiocb;
/* Protected by block migration lock. */
int ret;
QSIMPLEQ_ENTRY(BlkMigBlock) entry;
} BlkMigBlock;
typedef struct BlkMigState {
/* Written during setup phase. Can be read without a lock. */
int blk_enable;
int shared_base;
QSIMPLEQ_HEAD(bmds_list, BlkMigDevState) bmds_list;
int64_t total_sector_sum;
bool zero_blocks;
/* Protected by lock. */
QSIMPLEQ_HEAD(blk_list, BlkMigBlock) blk_list;
int submitted;
int read_done;
/* Only used by migration thread. Does not need a lock. */
int transferred;
int prev_progress;
int bulk_completed;
/* Lock must be taken _inside_ the iothread lock. */
QemuMutex lock;
} BlkMigState;
static BlkMigState block_mig_state;
static void blk_mig_lock(void)
{
qemu_mutex_lock(&block_mig_state.lock);
}
static void blk_mig_unlock(void)
{
qemu_mutex_unlock(&block_mig_state.lock);
}
/* Must run outside of the iothread lock during the bulk phase,
* or the VM will stall.
*/
static void blk_send(QEMUFile *f, BlkMigBlock * blk)
{
int len;
uint64_t flags = BLK_MIG_FLAG_DEVICE_BLOCK;
if (block_mig_state.zero_blocks &&
buffer_is_zero(blk->buf, BLOCK_SIZE)) {
flags |= BLK_MIG_FLAG_ZERO_BLOCK;
}
/* sector number and flags */
qemu_put_be64(f, (blk->sector << BDRV_SECTOR_BITS)
| flags);
/* device name */
len = strlen(blk->bmds->bs->device_name);
qemu_put_byte(f, len);
qemu_put_buffer(f, (uint8_t *)blk->bmds->bs->device_name, len);
/* If a block is zero we need to flush here, since the network
* bandwidth is now a lot higher than the storage device bandwidth;
* thus, if we queue zero blocks we slow down the migration. */
if (flags & BLK_MIG_FLAG_ZERO_BLOCK) {
qemu_fflush(f);
return;
}
qemu_put_buffer(f, blk->buf, BLOCK_SIZE);
}
int blk_mig_active(void)
{
return !QSIMPLEQ_EMPTY(&block_mig_state.bmds_list);
}
uint64_t blk_mig_bytes_transferred(void)
{
BlkMigDevState *bmds;
uint64_t sum = 0;
blk_mig_lock();
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
sum += bmds->completed_sectors;
}
blk_mig_unlock();
return sum << BDRV_SECTOR_BITS;
}
uint64_t blk_mig_bytes_remaining(void)
{
return blk_mig_bytes_total() - blk_mig_bytes_transferred();
}
uint64_t blk_mig_bytes_total(void)
{
BlkMigDevState *bmds;
uint64_t sum = 0;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
sum += bmds->total_sectors;
}
return sum << BDRV_SECTOR_BITS;
}
/* Called with migration lock held. */
static int bmds_aio_inflight(BlkMigDevState *bmds, int64_t sector)
{
int64_t chunk = sector / (int64_t)BDRV_SECTORS_PER_DIRTY_CHUNK;
if ((sector << BDRV_SECTOR_BITS) < bdrv_getlength(bmds->bs)) {
return !!(bmds->aio_bitmap[chunk / (sizeof(unsigned long) * 8)] &
(1UL << (chunk % (sizeof(unsigned long) * 8))));
} else {
return 0;
}
}
/* Called with migration lock held. */
static void bmds_set_aio_inflight(BlkMigDevState *bmds, int64_t sector_num,
int nb_sectors, int set)
{
int64_t start, end;
unsigned long val, idx, bit;
start = sector_num / BDRV_SECTORS_PER_DIRTY_CHUNK;
end = (sector_num + nb_sectors - 1) / BDRV_SECTORS_PER_DIRTY_CHUNK;
for (; start <= end; start++) {
idx = start / (sizeof(unsigned long) * 8);
bit = start % (sizeof(unsigned long) * 8);
val = bmds->aio_bitmap[idx];
if (set) {
val |= 1UL << bit;
} else {
val &= ~(1UL << bit);
}
bmds->aio_bitmap[idx] = val;
}
}
static void alloc_aio_bitmap(BlkMigDevState *bmds)
{
BlockDriverState *bs = bmds->bs;
int64_t bitmap_size;
bitmap_size = (bdrv_getlength(bs) >> BDRV_SECTOR_BITS) +
BDRV_SECTORS_PER_DIRTY_CHUNK * 8 - 1;
bitmap_size /= BDRV_SECTORS_PER_DIRTY_CHUNK * 8;
bmds->aio_bitmap = g_malloc0(bitmap_size);
}
/* Never hold migration lock when yielding to the main loop! */
static void blk_mig_read_cb(void *opaque, int ret)
{
BlkMigBlock *blk = opaque;
blk_mig_lock();
blk->ret = ret;
QSIMPLEQ_INSERT_TAIL(&block_mig_state.blk_list, blk, entry);
bmds_set_aio_inflight(blk->bmds, blk->sector, blk->nr_sectors, 0);
block_mig_state.submitted--;
block_mig_state.read_done++;
assert(block_mig_state.submitted >= 0);
blk_mig_unlock();
}
/* Called with no lock taken. */
static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
{
int64_t total_sectors = bmds->total_sectors;
int64_t cur_sector = bmds->cur_sector;
BlockDriverState *bs = bmds->bs;
BlkMigBlock *blk;
int nr_sectors;
if (bmds->shared_base) {
qemu_mutex_lock_iothread();
while (cur_sector < total_sectors &&
!bdrv_is_allocated(bs, cur_sector, MAX_IS_ALLOCATED_SEARCH,
&nr_sectors)) {
cur_sector += nr_sectors;
}
qemu_mutex_unlock_iothread();
}
if (cur_sector >= total_sectors) {
bmds->cur_sector = bmds->completed_sectors = total_sectors;
return 1;
}
bmds->completed_sectors = cur_sector;
cur_sector &= ~((int64_t)BDRV_SECTORS_PER_DIRTY_CHUNK - 1);
/* we are going to transfer a full block even if it is not allocated */
nr_sectors = BDRV_SECTORS_PER_DIRTY_CHUNK;
if (total_sectors - cur_sector < BDRV_SECTORS_PER_DIRTY_CHUNK) {
nr_sectors = total_sectors - cur_sector;
}
blk = g_malloc(sizeof(BlkMigBlock));
blk->buf = g_malloc(BLOCK_SIZE);
blk->bmds = bmds;
blk->sector = cur_sector;
blk->nr_sectors = nr_sectors;
blk->iov.iov_base = blk->buf;
blk->iov.iov_len = nr_sectors * BDRV_SECTOR_SIZE;
qemu_iovec_init_external(&blk->qiov, &blk->iov, 1);
blk_mig_lock();
block_mig_state.submitted++;
blk_mig_unlock();
qemu_mutex_lock_iothread();
blk->aiocb = bdrv_aio_readv(bs, cur_sector, &blk->qiov,
nr_sectors, blk_mig_read_cb, blk);
bdrv_reset_dirty(bs, cur_sector, nr_sectors);
qemu_mutex_unlock_iothread();
bmds->cur_sector = cur_sector + nr_sectors;
return (bmds->cur_sector >= total_sectors);
}
/* Called with iothread lock taken. */
static int set_dirty_tracking(void)
{
BlkMigDevState *bmds;
int ret;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
bmds->dirty_bitmap = bdrv_create_dirty_bitmap(bmds->bs, BLOCK_SIZE,
NULL);
if (!bmds->dirty_bitmap) {
ret = -errno;
goto fail;
}
}
return 0;
fail:
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
if (bmds->dirty_bitmap) {
bdrv_release_dirty_bitmap(bmds->bs, bmds->dirty_bitmap);
}
}
return ret;
}
static void unset_dirty_tracking(void)
{
BlkMigDevState *bmds;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
bdrv_release_dirty_bitmap(bmds->bs, bmds->dirty_bitmap);
}
}
static void init_blk_migration_it(void *opaque, BlockDriverState *bs)
{
BlkMigDevState *bmds;
int64_t sectors;
if (!bdrv_is_read_only(bs)) {
sectors = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
if (sectors <= 0) {
return;
}
bmds = g_malloc0(sizeof(BlkMigDevState));
bmds->bs = bs;
bmds->bulk_completed = 0;
bmds->total_sectors = sectors;
bmds->completed_sectors = 0;
bmds->shared_base = block_mig_state.shared_base;
alloc_aio_bitmap(bmds);
error_setg(&bmds->blocker, "block device is in use by migration");
bdrv_op_block_all(bs, bmds->blocker);
bdrv_ref(bs);
block_mig_state.total_sector_sum += sectors;
if (bmds->shared_base) {
DPRINTF("Start migration for %s with shared base image\n",
bs->device_name);
} else {
DPRINTF("Start full migration for %s\n", bs->device_name);
}
QSIMPLEQ_INSERT_TAIL(&block_mig_state.bmds_list, bmds, entry);
}
}
static void init_blk_migration(QEMUFile *f)
{
block_mig_state.submitted = 0;
block_mig_state.read_done = 0;
block_mig_state.transferred = 0;
block_mig_state.total_sector_sum = 0;
block_mig_state.prev_progress = -1;
block_mig_state.bulk_completed = 0;
block_mig_state.zero_blocks = migrate_zero_blocks();
bdrv_iterate(init_blk_migration_it, NULL);
}
/* Called with no lock taken. */
static int blk_mig_save_bulked_block(QEMUFile *f)
{
int64_t completed_sector_sum = 0;
BlkMigDevState *bmds;
int progress;
int ret = 0;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
if (bmds->bulk_completed == 0) {
if (mig_save_device_bulk(f, bmds) == 1) {
/* completed bulk section for this device */
bmds->bulk_completed = 1;
}
completed_sector_sum += bmds->completed_sectors;
ret = 1;
break;
} else {
completed_sector_sum += bmds->completed_sectors;
}
}
if (block_mig_state.total_sector_sum != 0) {
progress = completed_sector_sum * 100 /
block_mig_state.total_sector_sum;
} else {
progress = 100;
}
if (progress != block_mig_state.prev_progress) {
block_mig_state.prev_progress = progress;
qemu_put_be64(f, (progress << BDRV_SECTOR_BITS)
| BLK_MIG_FLAG_PROGRESS);
DPRINTF("Completed %d %%\r", progress);
}
return ret;
}
static void blk_mig_reset_dirty_cursor(void)
{
BlkMigDevState *bmds;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
bmds->cur_dirty = 0;
}
}
/* Called with iothread lock taken. */
static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
int is_async)
{
BlkMigBlock *blk;
int64_t total_sectors = bmds->total_sectors;
int64_t sector;
int nr_sectors;
int ret = -EIO;
for (sector = bmds->cur_dirty; sector < bmds->total_sectors;) {
blk_mig_lock();
if (bmds_aio_inflight(bmds, sector)) {
blk_mig_unlock();
bdrv_drain_all();
} else {
blk_mig_unlock();
}
if (bdrv_get_dirty(bmds->bs, bmds->dirty_bitmap, sector)) {
if (total_sectors - sector < BDRV_SECTORS_PER_DIRTY_CHUNK) {
nr_sectors = total_sectors - sector;
} else {
nr_sectors = BDRV_SECTORS_PER_DIRTY_CHUNK;
}
blk = g_malloc(sizeof(BlkMigBlock));
blk->buf = g_malloc(BLOCK_SIZE);
blk->bmds = bmds;
blk->sector = sector;
blk->nr_sectors = nr_sectors;
if (is_async) {
blk->iov.iov_base = blk->buf;
blk->iov.iov_len = nr_sectors * BDRV_SECTOR_SIZE;
qemu_iovec_init_external(&blk->qiov, &blk->iov, 1);
blk->aiocb = bdrv_aio_readv(bmds->bs, sector, &blk->qiov,
nr_sectors, blk_mig_read_cb, blk);
blk_mig_lock();
block_mig_state.submitted++;
bmds_set_aio_inflight(bmds, sector, nr_sectors, 1);
blk_mig_unlock();
} else {
ret = bdrv_read(bmds->bs, sector, blk->buf, nr_sectors);
if (ret < 0) {
goto error;
}
blk_send(f, blk);
g_free(blk->buf);
g_free(blk);
}
bdrv_reset_dirty(bmds->bs, sector, nr_sectors);
break;
}
sector += BDRV_SECTORS_PER_DIRTY_CHUNK;
bmds->cur_dirty = sector;
}
return (bmds->cur_dirty >= bmds->total_sectors);
error:
DPRINTF("Error reading sector %" PRId64 "\n", sector);
g_free(blk->buf);
g_free(blk);
return ret;
}
/* Called with iothread lock taken.
*
* return value:
* 0: too much data for max_downtime
* 1: little enough data for max_downtime
*/
static int blk_mig_save_dirty_block(QEMUFile *f, int is_async)
{
BlkMigDevState *bmds;
int ret = 1;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
ret = mig_save_device_dirty(f, bmds, is_async);
if (ret <= 0) {
break;
}
}
return ret;
}
/* Called with no locks taken. */
static int flush_blks(QEMUFile *f)
{
BlkMigBlock *blk;
int ret = 0;
DPRINTF("%s Enter submitted %d read_done %d transferred %d\n",
__FUNCTION__, block_mig_state.submitted, block_mig_state.read_done,
block_mig_state.transferred);
blk_mig_lock();
while ((blk = QSIMPLEQ_FIRST(&block_mig_state.blk_list)) != NULL) {
if (qemu_file_rate_limit(f)) {
break;
}
if (blk->ret < 0) {
ret = blk->ret;
break;
}
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.blk_list, entry);
blk_mig_unlock();
blk_send(f, blk);
blk_mig_lock();
g_free(blk->buf);
g_free(blk);
block_mig_state.read_done--;
block_mig_state.transferred++;
assert(block_mig_state.read_done >= 0);
}
blk_mig_unlock();
DPRINTF("%s Exit submitted %d read_done %d transferred %d\n", __FUNCTION__,
block_mig_state.submitted, block_mig_state.read_done,
block_mig_state.transferred);
return ret;
}
/* Called with iothread lock taken. */
static int64_t get_remaining_dirty(void)
{
BlkMigDevState *bmds;
int64_t dirty = 0;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
dirty += bdrv_get_dirty_count(bmds->bs, bmds->dirty_bitmap);
}
return dirty << BDRV_SECTOR_BITS;
}
/* Called with iothread lock taken. */
static void blk_mig_cleanup(void)
{
BlkMigDevState *bmds;
BlkMigBlock *blk;
bdrv_drain_all();
unset_dirty_tracking();
blk_mig_lock();
while ((bmds = QSIMPLEQ_FIRST(&block_mig_state.bmds_list)) != NULL) {
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.bmds_list, entry);
bdrv_op_unblock_all(bmds->bs, bmds->blocker);
error_free(bmds->blocker);
bdrv_unref(bmds->bs);
g_free(bmds->aio_bitmap);
g_free(bmds);
}
while ((blk = QSIMPLEQ_FIRST(&block_mig_state.blk_list)) != NULL) {
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.blk_list, entry);
g_free(blk->buf);
g_free(blk);
}
blk_mig_unlock();
}
static void block_migration_cancel(void *opaque)
{
blk_mig_cleanup();
}
static int block_save_setup(QEMUFile *f, void *opaque)
{
int ret;
DPRINTF("Enter save live setup submitted %d transferred %d\n",
block_mig_state.submitted, block_mig_state.transferred);
qemu_mutex_lock_iothread();
init_blk_migration(f);
/* start tracking dirty blocks */
ret = set_dirty_tracking();
if (ret) {
qemu_mutex_unlock_iothread();
return ret;
}
qemu_mutex_unlock_iothread();
ret = flush_blks(f);
blk_mig_reset_dirty_cursor();
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
return ret;
}
static int block_save_iterate(QEMUFile *f, void *opaque)
{
int ret;
int64_t last_ftell = qemu_ftell(f);
int64_t delta_ftell;
DPRINTF("Enter save live iterate submitted %d transferred %d\n",
block_mig_state.submitted, block_mig_state.transferred);
ret = flush_blks(f);
if (ret) {
return ret;
}
blk_mig_reset_dirty_cursor();
/* control the rate of transfer */
blk_mig_lock();
while ((block_mig_state.submitted +
block_mig_state.read_done) * BLOCK_SIZE <
qemu_file_get_rate_limit(f)) {
blk_mig_unlock();
if (block_mig_state.bulk_completed == 0) {
/* first finish the bulk phase */
if (blk_mig_save_bulked_block(f) == 0) {
/* finished saving bulk on all devices */
block_mig_state.bulk_completed = 1;
}
ret = 0;
} else {
/* Always called with iothread lock taken for
* simplicity, block_save_complete also calls it.
*/
qemu_mutex_lock_iothread();
ret = blk_mig_save_dirty_block(f, 1);
qemu_mutex_unlock_iothread();
}
if (ret < 0) {
return ret;
}
blk_mig_lock();
if (ret != 0) {
/* no more dirty blocks */
break;
}
}
blk_mig_unlock();
ret = flush_blks(f);
if (ret) {
return ret;
}
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
delta_ftell = qemu_ftell(f) - last_ftell;
if (delta_ftell > 0) {
return 1;
} else if (delta_ftell < 0) {
return -1;
} else {
return 0;
}
}
/* Called with iothread lock taken. */
static int block_save_complete(QEMUFile *f, void *opaque)
{
int ret;
DPRINTF("Enter save live complete submitted %d transferred %d\n",
block_mig_state.submitted, block_mig_state.transferred);
ret = flush_blks(f);
if (ret) {
return ret;
}
blk_mig_reset_dirty_cursor();
/* we know for sure that the bulk save is completed and
all async reads have completed */
blk_mig_lock();
assert(block_mig_state.submitted == 0);
blk_mig_unlock();
do {
ret = blk_mig_save_dirty_block(f, 0);
if (ret < 0) {
return ret;
}
} while (ret == 0);
/* report completion */
qemu_put_be64(f, (100 << BDRV_SECTOR_BITS) | BLK_MIG_FLAG_PROGRESS);
DPRINTF("Block migration completed\n");
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
blk_mig_cleanup();
return 0;
}
static uint64_t block_save_pending(QEMUFile *f, void *opaque, uint64_t max_size)
{
/* Estimate pending number of bytes to send */
uint64_t pending;
qemu_mutex_lock_iothread();
blk_mig_lock();
pending = get_remaining_dirty() +
block_mig_state.submitted * BLOCK_SIZE +
block_mig_state.read_done * BLOCK_SIZE;
/* Report at least one block pending during bulk phase */
if (pending <= max_size && !block_mig_state.bulk_completed) {
pending = max_size + BLOCK_SIZE;
}
blk_mig_unlock();
qemu_mutex_unlock_iothread();
DPRINTF("Enter save live pending %" PRIu64 "\n", pending);
return pending;
}
static int block_load(QEMUFile *f, void *opaque, int version_id)
{
static int banner_printed;
int len, flags;
char device_name[256];
int64_t addr;
BlockDriverState *bs, *bs_prev = NULL;
uint8_t *buf;
int64_t total_sectors = 0;
int nr_sectors;
int ret;
do {
addr = qemu_get_be64(f);
flags = addr & ~BDRV_SECTOR_MASK;
addr >>= BDRV_SECTOR_BITS;
if (flags & BLK_MIG_FLAG_DEVICE_BLOCK) {
/* get device name */
len = qemu_get_byte(f);
qemu_get_buffer(f, (uint8_t *)device_name, len);
device_name[len] = '\0';
bs = bdrv_find(device_name);
if (!bs) {
fprintf(stderr, "Error unknown block device %s\n",
device_name);
return -EINVAL;
}
if (bs != bs_prev) {
bs_prev = bs;
total_sectors = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
if (total_sectors <= 0) {
error_report("Error getting length of block device %s",
device_name);
return -EINVAL;
}
}
if (total_sectors - addr < BDRV_SECTORS_PER_DIRTY_CHUNK) {
nr_sectors = total_sectors - addr;
} else {
nr_sectors = BDRV_SECTORS_PER_DIRTY_CHUNK;
}
if (flags & BLK_MIG_FLAG_ZERO_BLOCK) {
ret = bdrv_write_zeroes(bs, addr, nr_sectors,
BDRV_REQ_MAY_UNMAP);
} else {
buf = g_malloc(BLOCK_SIZE);
qemu_get_buffer(f, buf, BLOCK_SIZE);
ret = bdrv_write(bs, addr, buf, nr_sectors);
g_free(buf);
}
if (ret < 0) {
return ret;
}
} else if (flags & BLK_MIG_FLAG_PROGRESS) {
if (!banner_printed) {
printf("Receiving block device images\n");
banner_printed = 1;
}
printf("Completed %d %%%c", (int)addr,
(addr == 100) ? '\n' : '\r');
fflush(stdout);
} else if (!(flags & BLK_MIG_FLAG_EOS)) {
fprintf(stderr, "Unknown block migration flags: %#x\n", flags);
return -EINVAL;
}
ret = qemu_file_get_error(f);
if (ret != 0) {
return ret;
}
} while (!(flags & BLK_MIG_FLAG_EOS));
return 0;
}
static void block_set_params(const MigrationParams *params, void *opaque)
{
block_mig_state.blk_enable = params->blk;
block_mig_state.shared_base = params->shared;
/* shared base means that blk_enable = 1 */
block_mig_state.blk_enable |= params->shared;
}
static bool block_is_active(void *opaque)
{
return block_mig_state.blk_enable == 1;
}
static SaveVMHandlers savevm_block_handlers = {
.set_params = block_set_params,
.save_live_setup = block_save_setup,
.save_live_iterate = block_save_iterate,
.save_live_complete = block_save_complete,
.save_live_pending = block_save_pending,
.load_state = block_load,
.cancel = block_migration_cancel,
.is_active = block_is_active,
};
void blk_mig_init(void)
{
QSIMPLEQ_INIT(&block_mig_state.bmds_list);
QSIMPLEQ_INIT(&block_mig_state.blk_list);
qemu_mutex_init(&block_mig_state.lock);
register_savevm_live(NULL, "block", 0, 1, &savevm_block_handlers,
&block_mig_state);
}
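One detail of block_save_pending() above is worth a worked example: with BLOCK_SIZE of 1 MiB, 10 MiB of dirty sectors remaining, two reads in flight and three read-but-unsent blocks, the estimate is 15 MiB once the bulk phase has finished; while the bulk phase is still running, any estimate at or below max_size is bumped to max_size + BLOCK_SIZE, so the generic migration code cannot declare convergence before the whole disk has been copied. A stand-alone sketch of the same arithmetic (the ex_/EX_ names are illustrative):

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

#define EX_BLOCK_SIZE (1 << 20)   /* mirrors BLOCK_SIZE above */

/* Same estimate as block_save_pending(): remaining dirty bytes plus data
 * sitting in the submitted/read_done queues, never allowed to look "done"
 * while the bulk phase is still in progress. */
static uint64_t ex_pending_estimate(uint64_t remaining_dirty, int submitted,
                                    int read_done, bool bulk_completed,
                                    uint64_t max_size)
{
    uint64_t pending = remaining_dirty
                       + (uint64_t)submitted * EX_BLOCK_SIZE
                       + (uint64_t)read_done * EX_BLOCK_SIZE;

    if (pending <= max_size && !bulk_completed) {
        pending = max_size + EX_BLOCK_SIZE;
    }
    return pending;
}

int main(void)
{
    uint64_t dirty = 10 * (uint64_t)EX_BLOCK_SIZE;
    uint64_t max_size = 32 * (uint64_t)EX_BLOCK_SIZE;

    printf("after bulk:  %" PRIu64 "\n",
           ex_pending_estimate(dirty, 2, 3, true, max_size));   /* 15 MiB */
    printf("during bulk: %" PRIu64 "\n",
           ex_pending_estimate(dirty, 2, 3, false, max_size));  /* 33 MiB */
    return 0;
}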

block.c (6929 changed lines; diff suppressed because it is too large)

@@ -1,38 +1,30 @@
block-obj-y += raw-format.o qcow.o vdi.o vmdk.o cloop.o bochs.o vpc.o vvfat.o dmg.o
block-obj-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-cache.o qcow2-bitmap.o
block-obj-y += qed.o qed-l2-cache.o qed-table.o qed-cluster.o
block-obj-y += raw_bsd.o cow.o qcow.o vdi.o vmdk.o cloop.o dmg.o bochs.o vpc.o vvfat.o
block-obj-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-cache.o
block-obj-y += qed.o qed-gencb.o qed-l2-cache.o qed-table.o qed-cluster.o
block-obj-y += qed-check.o
block-obj-y += vhdx.o vhdx-endian.o vhdx-log.o
block-obj-y += quorum.o
block-obj-y += parallels.o blkdebug.o blkverify.o blkreplay.o
block-obj-y += block-backend.o snapshot.o qapi.o
block-obj-$(CONFIG_WIN32) += file-win32.o win32-aio.o
block-obj-$(CONFIG_POSIX) += file-posix.o
block-obj-$(CONFIG_VHDX) += vhdx.o vhdx-endian.o vhdx-log.o
block-obj-$(CONFIG_QUORUM) += quorum.o
block-obj-y += parallels.o blkdebug.o blkverify.o
block-obj-y += snapshot.o qapi.o
block-obj-$(CONFIG_WIN32) += raw-win32.o win32-aio.o
block-obj-$(CONFIG_POSIX) += raw-posix.o
block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
block-obj-y += null.o mirror.o commit.o io.o create.o
block-obj-y += throttle-groups.o
block-obj-$(CONFIG_LINUX) += nvme.o
ifeq ($(CONFIG_POSIX),y)
block-obj-y += nbd.o nbd-client.o sheepdog.o
block-obj-$(CONFIG_LIBISCSI) += iscsi.o
block-obj-$(if $(CONFIG_LIBISCSI),y,n) += iscsi-opts.o
block-obj-$(CONFIG_LIBNFS) += nfs.o
block-obj-$(CONFIG_CURL) += curl.o
block-obj-$(CONFIG_RBD) += rbd.o
block-obj-$(CONFIG_GLUSTERFS) += gluster.o
block-obj-$(CONFIG_VXHS) += vxhs.o
block-obj-$(CONFIG_LIBSSH2) += ssh.o
block-obj-y += accounting.o dirty-bitmap.o
block-obj-y += write-threshold.o
block-obj-y += backup.o
block-obj-$(CONFIG_REPLICATION) += replication.o
block-obj-y += throttle.o
block-obj-y += crypto.o
endif
common-obj-y += stream.o
common-obj-y += commit.o
common-obj-y += mirror.o
common-obj-y += backup.o
nfs.o-libs := $(LIBNFS_LIBS)
iscsi.o-cflags := $(LIBISCSI_CFLAGS)
iscsi.o-libs := $(LIBISCSI_LIBS)
curl.o-cflags := $(CURL_CFLAGS)
@@ -41,12 +33,7 @@ rbd.o-cflags := $(RBD_CFLAGS)
rbd.o-libs := $(RBD_LIBS)
gluster.o-cflags := $(GLUSTERFS_CFLAGS)
gluster.o-libs := $(GLUSTERFS_LIBS)
vxhs.o-libs := $(VXHS_LIBS)
ssh.o-cflags := $(LIBSSH2_CFLAGS)
ssh.o-libs := $(LIBSSH2_LIBS)
block-obj-$(if $(CONFIG_BZIP2),m,n) += dmg-bz2.o
dmg-bz2.o-libs := $(BZIP2_LIBS)
qcow.o-libs := -lz
linux-aio.o-libs := -laio
parallels.o-cflags := $(LIBXML2_CFLAGS)
parallels.o-libs := $(LIBXML2_LIBS)


@@ -1,185 +0,0 @@
/*
* QEMU System Emulator block accounting
*
* Copyright (c) 2011 Christoph Hellwig
* Copyright (c) 2015 Igalia, S.L.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "block/accounting.h"
#include "block/block_int.h"
#include "qemu/timer.h"
#include "sysemu/qtest.h"
static QEMUClockType clock_type = QEMU_CLOCK_REALTIME;
static const int qtest_latency_ns = NANOSECONDS_PER_SECOND / 1000;
void block_acct_init(BlockAcctStats *stats)
{
qemu_mutex_init(&stats->lock);
if (qtest_enabled()) {
clock_type = QEMU_CLOCK_VIRTUAL;
}
}
void block_acct_setup(BlockAcctStats *stats, bool account_invalid,
bool account_failed)
{
stats->account_invalid = account_invalid;
stats->account_failed = account_failed;
}
void block_acct_cleanup(BlockAcctStats *stats)
{
BlockAcctTimedStats *s, *next;
QSLIST_FOREACH_SAFE(s, &stats->intervals, entries, next) {
g_free(s);
}
qemu_mutex_destroy(&stats->lock);
}
void block_acct_add_interval(BlockAcctStats *stats, unsigned interval_length)
{
BlockAcctTimedStats *s;
unsigned i;
s = g_new0(BlockAcctTimedStats, 1);
s->interval_length = interval_length;
s->stats = stats;
qemu_mutex_lock(&stats->lock);
QSLIST_INSERT_HEAD(&stats->intervals, s, entries);
for (i = 0; i < BLOCK_MAX_IOTYPE; i++) {
timed_average_init(&s->latency[i], clock_type,
(uint64_t) interval_length * NANOSECONDS_PER_SECOND);
}
qemu_mutex_unlock(&stats->lock);
}
BlockAcctTimedStats *block_acct_interval_next(BlockAcctStats *stats,
BlockAcctTimedStats *s)
{
if (s == NULL) {
return QSLIST_FIRST(&stats->intervals);
} else {
return QSLIST_NEXT(s, entries);
}
}
void block_acct_start(BlockAcctStats *stats, BlockAcctCookie *cookie,
int64_t bytes, enum BlockAcctType type)
{
assert(type < BLOCK_MAX_IOTYPE);
cookie->bytes = bytes;
cookie->start_time_ns = qemu_clock_get_ns(clock_type);
cookie->type = type;
}
static void block_account_one_io(BlockAcctStats *stats, BlockAcctCookie *cookie,
bool failed)
{
BlockAcctTimedStats *s;
int64_t time_ns = qemu_clock_get_ns(clock_type);
int64_t latency_ns = time_ns - cookie->start_time_ns;
if (qtest_enabled()) {
latency_ns = qtest_latency_ns;
}
assert(cookie->type < BLOCK_MAX_IOTYPE);
qemu_mutex_lock(&stats->lock);
if (failed) {
stats->failed_ops[cookie->type]++;
} else {
stats->nr_bytes[cookie->type] += cookie->bytes;
stats->nr_ops[cookie->type]++;
}
if (!failed || stats->account_failed) {
stats->total_time_ns[cookie->type] += latency_ns;
stats->last_access_time_ns = time_ns;
QSLIST_FOREACH(s, &stats->intervals, entries) {
timed_average_account(&s->latency[cookie->type], latency_ns);
}
}
qemu_mutex_unlock(&stats->lock);
}
void block_acct_done(BlockAcctStats *stats, BlockAcctCookie *cookie)
{
block_account_one_io(stats, cookie, false);
}
void block_acct_failed(BlockAcctStats *stats, BlockAcctCookie *cookie)
{
block_account_one_io(stats, cookie, true);
}
void block_acct_invalid(BlockAcctStats *stats, enum BlockAcctType type)
{
assert(type < BLOCK_MAX_IOTYPE);
/* block_account_one_io() updates total_time_ns[], but this one does
* not. The reason is that invalid requests are accounted during their
* submission, therefore there's no actual I/O involved.
*/
qemu_mutex_lock(&stats->lock);
stats->invalid_ops[type]++;
if (stats->account_invalid) {
stats->last_access_time_ns = qemu_clock_get_ns(clock_type);
}
qemu_mutex_unlock(&stats->lock);
}
void block_acct_merge_done(BlockAcctStats *stats, enum BlockAcctType type,
int num_requests)
{
assert(type < BLOCK_MAX_IOTYPE);
qemu_mutex_lock(&stats->lock);
stats->merged[type] += num_requests;
qemu_mutex_unlock(&stats->lock);
}
int64_t block_acct_idle_time_ns(BlockAcctStats *stats)
{
return qemu_clock_get_ns(clock_type) - stats->last_access_time_ns;
}
double block_acct_queue_depth(BlockAcctTimedStats *stats,
enum BlockAcctType type)
{
uint64_t sum, elapsed;
assert(type < BLOCK_MAX_IOTYPE);
qemu_mutex_lock(&stats->stats->lock);
sum = timed_average_sum(&stats->latency[type], &elapsed);
qemu_mutex_unlock(&stats->stats->lock);
return (double) sum / elapsed;
}
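For orientation, a minimal usage sketch of the accounting API shown above, assuming a caller that has already initialized a BlockAcctStats; perform_read() and the BLOCK_ACCT_READ enumerator are used purely for illustration:

/* Sketch only: stats setup and the actual I/O are assumed elsewhere. */
BlockAcctCookie cookie;
int ret;

block_acct_start(stats, &cookie, len, BLOCK_ACCT_READ); /* record start time and size */
ret = perform_read(buf, len);                           /* hypothetical I/O helper */
if (ret < 0) {
    block_acct_failed(stats, &cookie);  /* count the failure; latency only if account_failed */
} else {
    block_acct_done(stats, &cookie);    /* add bytes, op count and latency */
}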


@@ -11,41 +11,40 @@
*
*/
#include "qemu/osdep.h"
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include "trace.h"
#include "block/block.h"
#include "block/block_int.h"
#include "block/blockjob_int.h"
#include "block/block_backup.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "block/blockjob.h"
#include "qemu/ratelimit.h"
#include "qemu/cutils.h"
#include "sysemu/block-backend.h"
#include "qemu/bitmap.h"
#include "qemu/error-report.h"
#define BACKUP_CLUSTER_SIZE_DEFAULT (1 << 16)
#define BACKUP_CLUSTER_BITS 16
#define BACKUP_CLUSTER_SIZE (1 << BACKUP_CLUSTER_BITS)
#define BACKUP_SECTORS_PER_CLUSTER (BACKUP_CLUSTER_SIZE / BDRV_SECTOR_SIZE)
#define SLICE_TIME 100000000ULL /* ns */
typedef struct CowRequest {
int64_t start;
int64_t end;
QLIST_ENTRY(CowRequest) list;
CoQueue wait_queue; /* coroutines blocked on this request */
} CowRequest;
typedef struct BackupBlockJob {
BlockJob common;
BlockBackend *target;
/* bitmap for sync=incremental */
BdrvDirtyBitmap *sync_bitmap;
BlockDriverState *target;
MirrorSyncMode sync_mode;
RateLimit limit;
BlockdevOnError on_source_error;
BlockdevOnError on_target_error;
CoRwlock flush_rwlock;
uint64_t bytes_read;
int64_t cluster_size;
bool compress;
NotifierWithReturn before_write;
uint64_t sectors_read;
HBitmap *bitmap;
QLIST_HEAD(, CowRequest) inflight_reqs;
HBitmap *copy_bitmap;
} BackupBlockJob;
/* See if in-flight requests overlap and wait for them to complete */
@@ -59,8 +58,8 @@ static void coroutine_fn wait_for_overlapping_requests(BackupBlockJob *job,
do {
retry = false;
QLIST_FOREACH(req, &job->inflight_reqs, list) {
if (end > req->start_byte && start < req->end_byte) {
qemu_co_queue_wait(&req->wait_queue, NULL);
if (end > req->start && start < req->end) {
qemu_co_queue_wait(&req->wait_queue);
retry = true;
break;
}
@@ -70,10 +69,10 @@ static void coroutine_fn wait_for_overlapping_requests(BackupBlockJob *job,
/* Keep track of an in-flight request */
static void cow_request_begin(CowRequest *req, BackupBlockJob *job,
int64_t start, int64_t end)
int64_t start, int64_t end)
{
req->start_byte = start;
req->end_byte = end;
req->start = start;
req->end = end;
qemu_co_queue_init(&req->wait_queue);
QLIST_INSERT_HEAD(&job->inflight_reqs, req, list);
}
@@ -85,81 +84,82 @@ static void cow_request_end(CowRequest *req)
qemu_co_queue_restart_all(&req->wait_queue);
}
static int coroutine_fn backup_do_cow(BackupBlockJob *job,
int64_t offset, uint64_t bytes,
bool *error_is_read,
bool is_write_notifier)
static int coroutine_fn backup_do_cow(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
bool *error_is_read)
{
BlockBackend *blk = job->common.blk;
BackupBlockJob *job = (BackupBlockJob *)bs->job;
CowRequest cow_request;
struct iovec iov;
QEMUIOVector bounce_qiov;
void *bounce_buffer = NULL;
int ret = 0;
int64_t start, end; /* bytes */
int n; /* bytes */
int64_t start, end;
int n;
qemu_co_rwlock_rdlock(&job->flush_rwlock);
start = QEMU_ALIGN_DOWN(offset, job->cluster_size);
end = QEMU_ALIGN_UP(bytes + offset, job->cluster_size);
start = sector_num / BACKUP_SECTORS_PER_CLUSTER;
end = DIV_ROUND_UP(sector_num + nb_sectors, BACKUP_SECTORS_PER_CLUSTER);
trace_backup_do_cow_enter(job, start, offset, bytes);
trace_backup_do_cow_enter(job, start, sector_num, nb_sectors);
wait_for_overlapping_requests(job, start, end);
cow_request_begin(&cow_request, job, start, end);
for (; start < end; start += job->cluster_size) {
if (!hbitmap_get(job->copy_bitmap, start / job->cluster_size)) {
for (; start < end; start++) {
if (hbitmap_get(job->bitmap, start)) {
trace_backup_do_cow_skip(job, start);
continue; /* already copied */
}
hbitmap_reset(job->copy_bitmap, start / job->cluster_size, 1);
trace_backup_do_cow_process(job, start);
n = MIN(job->cluster_size, job->common.len - start);
n = MIN(BACKUP_SECTORS_PER_CLUSTER,
job->common.len / BDRV_SECTOR_SIZE -
start * BACKUP_SECTORS_PER_CLUSTER);
if (!bounce_buffer) {
bounce_buffer = blk_blockalign(blk, job->cluster_size);
bounce_buffer = qemu_blockalign(bs, BACKUP_CLUSTER_SIZE);
}
iov.iov_base = bounce_buffer;
iov.iov_len = n;
iov.iov_len = n * BDRV_SECTOR_SIZE;
qemu_iovec_init_external(&bounce_qiov, &iov, 1);
ret = blk_co_preadv(blk, start, bounce_qiov.size, &bounce_qiov,
is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0);
ret = bdrv_co_readv(bs, start * BACKUP_SECTORS_PER_CLUSTER, n,
&bounce_qiov);
if (ret < 0) {
trace_backup_do_cow_read_fail(job, start, ret);
if (error_is_read) {
*error_is_read = true;
}
hbitmap_set(job->copy_bitmap, start / job->cluster_size, 1);
goto out;
}
if (buffer_is_zero(iov.iov_base, iov.iov_len)) {
ret = blk_co_pwrite_zeroes(job->target, start,
bounce_qiov.size, BDRV_REQ_MAY_UNMAP);
ret = bdrv_co_write_zeroes(job->target,
start * BACKUP_SECTORS_PER_CLUSTER,
n, BDRV_REQ_MAY_UNMAP);
} else {
ret = blk_co_pwritev(job->target, start,
bounce_qiov.size, &bounce_qiov,
job->compress ? BDRV_REQ_WRITE_COMPRESSED : 0);
ret = bdrv_co_writev(job->target,
start * BACKUP_SECTORS_PER_CLUSTER, n,
&bounce_qiov);
}
if (ret < 0) {
trace_backup_do_cow_write_fail(job, start, ret);
if (error_is_read) {
*error_is_read = false;
}
hbitmap_set(job->copy_bitmap, start / job->cluster_size, 1);
goto out;
}
hbitmap_set(job->bitmap, start, 1);
/* Publish progress, guest I/O counts as progress too. Note that the
* offset field is an opaque progress value, it is not a disk offset.
*/
job->bytes_read += n;
job->common.offset += n;
job->sectors_read += n;
job->common.offset += n * BDRV_SECTOR_SIZE;
}
out:
@@ -169,7 +169,7 @@ out:
cow_request_end(&cow_request);
trace_backup_do_cow_return(job, offset, bytes, ret);
trace_backup_do_cow_return(job, sector_num, nb_sectors, ret);
qemu_co_rwlock_unlock(&job->flush_rwlock);
@@ -180,14 +180,14 @@ static int coroutine_fn backup_before_write_notify(
NotifierWithReturn *notifier,
void *opaque)
{
BackupBlockJob *job = container_of(notifier, BackupBlockJob, before_write);
BdrvTrackedRequest *req = opaque;
int64_t sector_num = req->offset >> BDRV_SECTOR_BITS;
int nb_sectors = req->bytes >> BDRV_SECTOR_BITS;
assert(req->bs == blk_bs(job->common.blk));
assert(QEMU_IS_ALIGNED(req->offset, BDRV_SECTOR_SIZE));
assert(QEMU_IS_ALIGNED(req->bytes, BDRV_SECTOR_SIZE));
assert((req->offset & (BDRV_SECTOR_SIZE - 1)) == 0);
assert((req->bytes & (BDRV_SECTOR_SIZE - 1)) == 0);
return backup_do_cow(job, req->offset, req->bytes, NULL, true);
return backup_do_cow(req->bs, sector_num, nb_sectors, NULL);
}
static void backup_set_speed(BlockJob *job, int64_t speed, Error **errp)
@@ -195,289 +195,106 @@ static void backup_set_speed(BlockJob *job, int64_t speed, Error **errp)
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
if (speed < 0) {
error_setg(errp, QERR_INVALID_PARAMETER, "speed");
error_set(errp, QERR_INVALID_PARAMETER, "speed");
return;
}
ratelimit_set_speed(&s->limit, speed, SLICE_TIME);
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
}
static void backup_cleanup_sync_bitmap(BackupBlockJob *job, int ret)
{
BdrvDirtyBitmap *bm;
BlockDriverState *bs = blk_bs(job->common.blk);
if (ret < 0 || block_job_is_cancelled(&job->common)) {
/* Merge the successor back into the parent, delete nothing. */
bm = bdrv_reclaim_dirty_bitmap(bs, job->sync_bitmap, NULL);
assert(bm);
} else {
/* Everything is fine, delete this bitmap and install the backup. */
bm = bdrv_dirty_bitmap_abdicate(bs, job->sync_bitmap, NULL);
assert(bm);
}
}
static void backup_commit(BlockJob *job)
{
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
if (s->sync_bitmap) {
backup_cleanup_sync_bitmap(s, 0);
}
}
static void backup_abort(BlockJob *job)
{
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
if (s->sync_bitmap) {
backup_cleanup_sync_bitmap(s, -1);
}
}
static void backup_clean(BlockJob *job)
{
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
assert(s->target);
blk_unref(s->target);
s->target = NULL;
}
static void backup_attached_aio_context(BlockJob *job, AioContext *aio_context)
static void backup_iostatus_reset(BlockJob *job)
{
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
blk_set_aio_context(s->target, aio_context);
bdrv_iostatus_reset(s->target);
}
void backup_do_checkpoint(BlockJob *job, Error **errp)
{
BackupBlockJob *backup_job = container_of(job, BackupBlockJob, common);
int64_t len;
assert(job->driver->job_type == BLOCK_JOB_TYPE_BACKUP);
if (backup_job->sync_mode != MIRROR_SYNC_MODE_NONE) {
error_setg(errp, "The backup job only supports block checkpoint in"
" sync=none mode");
return;
}
len = DIV_ROUND_UP(backup_job->common.len, backup_job->cluster_size);
hbitmap_set(backup_job->copy_bitmap, 0, len);
}
void backup_wait_for_overlapping_requests(BlockJob *job, int64_t offset,
uint64_t bytes)
{
BackupBlockJob *backup_job = container_of(job, BackupBlockJob, common);
int64_t start, end;
assert(job->driver->job_type == BLOCK_JOB_TYPE_BACKUP);
start = QEMU_ALIGN_DOWN(offset, backup_job->cluster_size);
end = QEMU_ALIGN_UP(offset + bytes, backup_job->cluster_size);
wait_for_overlapping_requests(backup_job, start, end);
}
void backup_cow_request_begin(CowRequest *req, BlockJob *job,
int64_t offset, uint64_t bytes)
{
BackupBlockJob *backup_job = container_of(job, BackupBlockJob, common);
int64_t start, end;
assert(job->driver->job_type == BLOCK_JOB_TYPE_BACKUP);
start = QEMU_ALIGN_DOWN(offset, backup_job->cluster_size);
end = QEMU_ALIGN_UP(offset + bytes, backup_job->cluster_size);
cow_request_begin(req, backup_job, start, end);
}
void backup_cow_request_end(CowRequest *req)
{
cow_request_end(req);
}
static void backup_drain(BlockJob *job)
{
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
/* Need to keep a reference in case blk_drain triggers execution
* of backup_complete...
*/
if (s->target) {
BlockBackend *target = s->target;
blk_ref(target);
blk_drain(target);
blk_unref(target);
}
}
static const BlockJobDriver backup_job_driver = {
.instance_size = sizeof(BackupBlockJob),
.job_type = BLOCK_JOB_TYPE_BACKUP,
.set_speed = backup_set_speed,
.iostatus_reset = backup_iostatus_reset,
};
static BlockErrorAction backup_error_action(BackupBlockJob *job,
bool read, int error)
{
if (read) {
return block_job_error_action(&job->common, job->on_source_error,
true, error);
return block_job_error_action(&job->common, job->common.bs,
job->on_source_error, true, error);
} else {
return block_job_error_action(&job->common, job->on_target_error,
false, error);
return block_job_error_action(&job->common, job->target,
job->on_target_error, false, error);
}
}
typedef struct {
int ret;
} BackupCompleteData;
static void backup_complete(BlockJob *job, void *opaque)
{
BackupCompleteData *data = opaque;
block_job_completed(job, data->ret);
g_free(data);
}
static bool coroutine_fn yield_and_check(BackupBlockJob *job)
{
if (block_job_is_cancelled(&job->common)) {
return true;
}
/* we need to yield so that bdrv_drain_all() returns.
* (without it, the VM does not reboot)
*/
if (job->common.speed) {
uint64_t delay_ns = ratelimit_calculate_delay(&job->limit,
job->bytes_read);
job->bytes_read = 0;
block_job_sleep_ns(&job->common, delay_ns);
} else {
block_job_sleep_ns(&job->common, 0);
}
if (block_job_is_cancelled(&job->common)) {
return true;
}
return false;
}
static int coroutine_fn backup_run_incremental(BackupBlockJob *job)
{
int ret;
bool error_is_read;
int64_t cluster;
HBitmapIter hbi;
hbitmap_iter_init(&hbi, job->copy_bitmap, 0);
while ((cluster = hbitmap_iter_next(&hbi)) != -1) {
do {
if (yield_and_check(job)) {
return 0;
}
ret = backup_do_cow(job, cluster * job->cluster_size,
job->cluster_size, &error_is_read, false);
if (ret < 0 && backup_error_action(job, error_is_read, -ret) ==
BLOCK_ERROR_ACTION_REPORT)
{
return ret;
}
} while (ret < 0);
}
return 0;
}
/* init copy_bitmap from sync_bitmap */
static void backup_incremental_init_copy_bitmap(BackupBlockJob *job)
{
BdrvDirtyBitmapIter *dbi;
int64_t offset;
int64_t end = DIV_ROUND_UP(bdrv_dirty_bitmap_size(job->sync_bitmap),
job->cluster_size);
dbi = bdrv_dirty_iter_new(job->sync_bitmap);
while ((offset = bdrv_dirty_iter_next(dbi)) != -1) {
int64_t cluster = offset / job->cluster_size;
int64_t next_cluster;
offset += bdrv_dirty_bitmap_granularity(job->sync_bitmap);
if (offset >= bdrv_dirty_bitmap_size(job->sync_bitmap)) {
hbitmap_set(job->copy_bitmap, cluster, end - cluster);
break;
}
offset = bdrv_dirty_bitmap_next_zero(job->sync_bitmap, offset);
if (offset == -1) {
hbitmap_set(job->copy_bitmap, cluster, end - cluster);
break;
}
next_cluster = DIV_ROUND_UP(offset, job->cluster_size);
hbitmap_set(job->copy_bitmap, cluster, next_cluster - cluster);
if (next_cluster >= end) {
break;
}
bdrv_set_dirty_iter(dbi, next_cluster * job->cluster_size);
}
job->common.offset = job->common.len -
hbitmap_count(job->copy_bitmap) * job->cluster_size;
bdrv_dirty_iter_free(dbi);
}
static void coroutine_fn backup_run(void *opaque)
{
BackupBlockJob *job = opaque;
BackupCompleteData *data;
BlockDriverState *bs = blk_bs(job->common.blk);
int64_t offset, nb_clusters;
BlockDriverState *bs = job->common.bs;
BlockDriverState *target = job->target;
BlockdevOnError on_target_error = job->on_target_error;
NotifierWithReturn before_write = {
.notify = backup_before_write_notify,
};
int64_t start, end;
int ret = 0;
QLIST_INIT(&job->inflight_reqs);
qemu_co_rwlock_init(&job->flush_rwlock);
nb_clusters = DIV_ROUND_UP(job->common.len, job->cluster_size);
job->copy_bitmap = hbitmap_alloc(nb_clusters, 0);
if (job->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
backup_incremental_init_copy_bitmap(job);
} else {
hbitmap_set(job->copy_bitmap, 0, nb_clusters);
}
start = 0;
end = DIV_ROUND_UP(job->common.len / BDRV_SECTOR_SIZE,
BACKUP_SECTORS_PER_CLUSTER);
job->bitmap = hbitmap_alloc(end, 0);
job->before_write.notify = backup_before_write_notify;
bdrv_add_before_write_notifier(bs, &job->before_write);
bdrv_set_enable_write_cache(target, true);
bdrv_set_on_error(target, on_target_error, on_target_error);
bdrv_iostatus_enable(target);
bdrv_add_before_write_notifier(bs, &before_write);
if (job->sync_mode == MIRROR_SYNC_MODE_NONE) {
/* All bits are set in copy_bitmap to allow any cluster to be copied.
* This does not actually require them to be copied. */
while (!block_job_is_cancelled(&job->common)) {
/* Yield until the job is cancelled. We just let our before_write
* notify callback service CoW requests. */
block_job_yield(&job->common);
job->common.busy = false;
qemu_coroutine_yield();
job->common.busy = true;
}
} else if (job->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
ret = backup_run_incremental(job);
} else {
/* Both FULL and TOP sync modes require copying. */
for (offset = 0; offset < job->common.len;
offset += job->cluster_size) {
for (; start < end; start++) {
bool error_is_read;
int alloced = 0;
if (yield_and_check(job)) {
if (block_job_is_cancelled(&job->common)) {
break;
}
/* we need to yield so that qemu_aio_flush() returns.
* (without it, the VM does not reboot)
*/
if (job->common.speed) {
uint64_t delay_ns = ratelimit_calculate_delay(
&job->limit, job->sectors_read);
job->sectors_read = 0;
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, delay_ns);
} else {
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, 0);
}
if (block_job_is_cancelled(&job->common)) {
break;
}
if (job->sync_mode == MIRROR_SYNC_MODE_TOP) {
int i;
int64_t n;
int i, n;
int alloced = 0;
/* Check to see if these blocks are already in the
* backing file. */
for (i = 0; i < job->cluster_size;) {
for (i = 0; i < BACKUP_SECTORS_PER_CLUSTER;) {
/* bdrv_is_allocated() only returns true/false based
* on the first set of sectors it comes across that
* are all in the same state.
@@ -485,11 +302,12 @@ static void coroutine_fn backup_run(void *opaque)
* backup cluster length. We end up copying more than
* needed but at some point that is always the case. */
alloced =
bdrv_is_allocated(bs, offset + i,
job->cluster_size - i, &n);
bdrv_is_allocated(bs,
start * BACKUP_SECTORS_PER_CLUSTER + i,
BACKUP_SECTORS_PER_CLUSTER - i, &n);
i += n;
if (alloced || n == 0) {
if (alloced == 1 || n == 0) {
break;
}
}
@@ -501,12 +319,8 @@ static void coroutine_fn backup_run(void *opaque)
}
}
/* FULL sync mode we copy the whole drive. */
if (alloced < 0) {
ret = alloced;
} else {
ret = backup_do_cow(job, offset, job->cluster_size,
&error_is_read, false);
}
ret = backup_do_cow(bs, start * BACKUP_SECTORS_PER_CLUSTER,
BACKUP_SECTORS_PER_CLUSTER, &error_is_read);
if (ret < 0) {
/* Depending on error action, fail now or retry cluster */
BlockErrorAction action =
@@ -514,181 +328,65 @@ static void coroutine_fn backup_run(void *opaque)
if (action == BLOCK_ERROR_ACTION_REPORT) {
break;
} else {
offset -= job->cluster_size;
start--;
continue;
}
}
}
}
notifier_with_return_remove(&job->before_write);
notifier_with_return_remove(&before_write);
/* wait until pending backup_do_cow() calls have completed */
qemu_co_rwlock_wrlock(&job->flush_rwlock);
qemu_co_rwlock_unlock(&job->flush_rwlock);
hbitmap_free(job->copy_bitmap);
data = g_malloc(sizeof(*data));
data->ret = ret;
block_job_defer_to_main_loop(&job->common, backup_complete, data);
hbitmap_free(job->bitmap);
bdrv_iostatus_disable(target);
bdrv_unref(target);
block_job_completed(&job->common, ret);
}
static const BlockJobDriver backup_job_driver = {
.instance_size = sizeof(BackupBlockJob),
.job_type = BLOCK_JOB_TYPE_BACKUP,
.start = backup_run,
.set_speed = backup_set_speed,
.commit = backup_commit,
.abort = backup_abort,
.clean = backup_clean,
.attached_aio_context = backup_attached_aio_context,
.drain = backup_drain,
};
BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
BlockDriverState *target, int64_t speed,
MirrorSyncMode sync_mode, BdrvDirtyBitmap *sync_bitmap,
bool compress,
void backup_start(BlockDriverState *bs, BlockDriverState *target,
int64_t speed, MirrorSyncMode sync_mode,
BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
int creation_flags,
BlockCompletionFunc *cb, void *opaque,
BlockJobTxn *txn, Error **errp)
BlockDriverCompletionFunc *cb, void *opaque,
Error **errp)
{
int64_t len;
BlockDriverInfo bdi;
BackupBlockJob *job = NULL;
int ret;
assert(bs);
assert(target);
assert(cb);
if (bs == target) {
error_setg(errp, "Source and target cannot be the same");
return NULL;
}
if (!bdrv_is_inserted(bs)) {
error_setg(errp, "Device is not inserted: %s",
bdrv_get_device_name(bs));
return NULL;
}
if (!bdrv_is_inserted(target)) {
error_setg(errp, "Device is not inserted: %s",
bdrv_get_device_name(target));
return NULL;
}
if (compress && target->drv->bdrv_co_pwritev_compressed == NULL) {
error_setg(errp, "Compression is not supported for this drive %s",
bdrv_get_device_name(target));
return NULL;
}
if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) {
return NULL;
}
if (bdrv_op_is_blocked(target, BLOCK_OP_TYPE_BACKUP_TARGET, errp)) {
return NULL;
}
if (sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
if (!sync_bitmap) {
error_setg(errp, "must provide a valid bitmap name for "
"\"incremental\" sync mode");
return NULL;
}
/* Create a new bitmap, and freeze/disable this one. */
if (bdrv_dirty_bitmap_create_successor(bs, sync_bitmap, errp) < 0) {
return NULL;
}
} else if (sync_bitmap) {
error_setg(errp,
"a sync_bitmap was provided to backup_run, "
"but received an incompatible sync_mode (%s)",
MirrorSyncMode_str(sync_mode));
return NULL;
if ((on_source_error == BLOCKDEV_ON_ERROR_STOP ||
on_source_error == BLOCKDEV_ON_ERROR_ENOSPC) &&
!bdrv_iostatus_is_enabled(bs)) {
error_set(errp, QERR_INVALID_PARAMETER, "on-source-error");
return;
}
len = bdrv_getlength(bs);
if (len < 0) {
error_setg_errno(errp, -len, "unable to get length for '%s'",
bdrv_get_device_name(bs));
goto error;
return;
}
/* job->common.len is fixed, so we can't allow resize */
job = block_job_create(job_id, &backup_job_driver, bs,
BLK_PERM_CONSISTENT_READ,
BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE |
BLK_PERM_WRITE_UNCHANGED | BLK_PERM_GRAPH_MOD,
speed, creation_flags, cb, opaque, errp);
BackupBlockJob *job = block_job_create(&backup_job_driver, bs, speed,
cb, opaque, errp);
if (!job) {
goto error;
}
/* The target must match the source in size, so no resize here either */
job->target = blk_new(BLK_PERM_WRITE,
BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE |
BLK_PERM_WRITE_UNCHANGED | BLK_PERM_GRAPH_MOD);
ret = blk_insert_bs(job->target, target, errp);
if (ret < 0) {
goto error;
return;
}
job->on_source_error = on_source_error;
job->on_target_error = on_target_error;
job->target = target;
job->sync_mode = sync_mode;
job->sync_bitmap = sync_mode == MIRROR_SYNC_MODE_INCREMENTAL ?
sync_bitmap : NULL;
job->compress = compress;
/* If there is no backing file on the target, we cannot rely on COW if our
* backup cluster size is smaller than the target cluster size. Even for
* targets with a backing file, try to avoid COW if possible. */
ret = bdrv_get_info(target, &bdi);
if (ret == -ENOTSUP && !target->backing) {
/* Cluster size is not defined */
warn_report("The target block device doesn't provide "
"information about the block size and it doesn't have a "
"backing file. The default block size of %u bytes is "
"used. If the actual block size of the target exceeds "
"this default, the backup may be unusable",
BACKUP_CLUSTER_SIZE_DEFAULT);
job->cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
} else if (ret < 0 && !target->backing) {
error_setg_errno(errp, -ret,
"Couldn't determine the cluster size of the target image, "
"which has no backing file");
error_append_hint(errp,
"Aborting, since this may create an unusable destination image\n");
goto error;
} else if (ret < 0 && target->backing) {
/* Not fatal; just trudge on ahead. */
job->cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
} else {
job->cluster_size = MAX(BACKUP_CLUSTER_SIZE_DEFAULT, bdi.cluster_size);
}
/* Required permissions are already taken with target's blk_new() */
block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
&error_abort);
job->common.len = len;
block_job_txn_add_job(txn, &job->common);
return &job->common;
error:
if (sync_bitmap) {
bdrv_reclaim_dirty_bitmap(bs, sync_bitmap, NULL);
}
if (job) {
backup_clean(&job->common);
block_job_early_fail(&job->common);
}
return NULL;
job->common.co = qemu_coroutine_create(backup_run);
qemu_coroutine_enter(job->common.co, job);
}
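As a worked example of the cluster arithmetic used by the two variants of backup_do_cow() above (the numbers are illustrative):

/*
 * With BACKUP_CLUSTER_SIZE = 1 << 16 = 64 KiB and BDRV_SECTOR_SIZE = 512,
 * BACKUP_SECTORS_PER_CLUSTER = 128.  A guest write of 4 KiB at byte offset
 * 130 KiB covers sectors 260..267, so the sector-based code computes
 *     start = 260 / 128                = 2
 *     end   = DIV_ROUND_UP(268, 128)   = 3
 * and copies only cluster 2 to the target before letting the write through.
 * The byte-based code reaches the same range via
 *     QEMU_ALIGN_DOWN(offset, cluster_size)       = 128 KiB
 *     QEMU_ALIGN_UP(offset + bytes, cluster_size) = 192 KiB
 */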


@@ -1,7 +1,6 @@
/*
* Block protocol for I/O error injection
*
* Copyright (C) 2016-2017 Red Hat, Inc.
* Copyright (c) 2010 Kevin Wolf <kwolf@redhat.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
@@ -23,37 +22,23 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu/cutils.h"
#include "qemu-common.h"
#include "qemu/config-file.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "qemu/option.h"
#include "qapi/qmp/qdict.h"
#include "qapi/qmp/qstring.h"
#include "sysemu/qtest.h"
typedef struct BDRVBlkdebugState {
int state;
int new_state;
uint64_t align;
uint64_t max_transfer;
uint64_t opt_write_zero;
uint64_t max_write_zero;
uint64_t opt_discard;
uint64_t max_discard;
/* For blkdebug_refresh_filename() */
char *config_file;
QLIST_HEAD(, BlkdebugRule) rules[BLKDBG__MAX];
QLIST_HEAD(, BlkdebugRule) rules[BLKDBG_EVENT_MAX];
QSIMPLEQ_HEAD(, BlkdebugRule) active_rules;
QLIST_HEAD(, BlkdebugSuspendedReq) suspended_reqs;
} BDRVBlkdebugState;
typedef struct BlkdebugAIOCB {
BlockAIOCB common;
BlockDriverAIOCB common;
QEMUBH *bh;
int ret;
} BlkdebugAIOCB;
@@ -63,6 +48,13 @@ typedef struct BlkdebugSuspendedReq {
QLIST_ENTRY(BlkdebugSuspendedReq) next;
} BlkdebugSuspendedReq;
static void blkdebug_aio_cancel(BlockDriverAIOCB *blockacb);
static const AIOCBInfo blkdebug_aiocb_info = {
.aiocb_size = sizeof(BlkdebugAIOCB),
.cancel = blkdebug_aio_cancel,
};
enum {
ACTION_INJECT_ERROR,
ACTION_SET_STATE,
@@ -70,7 +62,7 @@ enum {
};
typedef struct BlkdebugRule {
BlkdebugEvent event;
BlkDebugEvent event;
int action;
int state;
union {
@@ -78,7 +70,7 @@ typedef struct BlkdebugRule {
int error;
int immediately;
int once;
int64_t offset;
int64_t sector;
} inject;
struct {
int new_state;
@@ -149,28 +141,91 @@ static QemuOptsList *config_groups[] = {
NULL
};
static const char *event_names[BLKDBG_EVENT_MAX] = {
[BLKDBG_L1_UPDATE] = "l1_update",
[BLKDBG_L1_GROW_ALLOC_TABLE] = "l1_grow.alloc_table",
[BLKDBG_L1_GROW_WRITE_TABLE] = "l1_grow.write_table",
[BLKDBG_L1_GROW_ACTIVATE_TABLE] = "l1_grow.activate_table",
[BLKDBG_L2_LOAD] = "l2_load",
[BLKDBG_L2_UPDATE] = "l2_update",
[BLKDBG_L2_UPDATE_COMPRESSED] = "l2_update_compressed",
[BLKDBG_L2_ALLOC_COW_READ] = "l2_alloc.cow_read",
[BLKDBG_L2_ALLOC_WRITE] = "l2_alloc.write",
[BLKDBG_READ_AIO] = "read_aio",
[BLKDBG_READ_BACKING_AIO] = "read_backing_aio",
[BLKDBG_READ_COMPRESSED] = "read_compressed",
[BLKDBG_WRITE_AIO] = "write_aio",
[BLKDBG_WRITE_COMPRESSED] = "write_compressed",
[BLKDBG_VMSTATE_LOAD] = "vmstate_load",
[BLKDBG_VMSTATE_SAVE] = "vmstate_save",
[BLKDBG_COW_READ] = "cow_read",
[BLKDBG_COW_WRITE] = "cow_write",
[BLKDBG_REFTABLE_LOAD] = "reftable_load",
[BLKDBG_REFTABLE_GROW] = "reftable_grow",
[BLKDBG_REFTABLE_UPDATE] = "reftable_update",
[BLKDBG_REFBLOCK_LOAD] = "refblock_load",
[BLKDBG_REFBLOCK_UPDATE] = "refblock_update",
[BLKDBG_REFBLOCK_UPDATE_PART] = "refblock_update_part",
[BLKDBG_REFBLOCK_ALLOC] = "refblock_alloc",
[BLKDBG_REFBLOCK_ALLOC_HOOKUP] = "refblock_alloc.hookup",
[BLKDBG_REFBLOCK_ALLOC_WRITE] = "refblock_alloc.write",
[BLKDBG_REFBLOCK_ALLOC_WRITE_BLOCKS] = "refblock_alloc.write_blocks",
[BLKDBG_REFBLOCK_ALLOC_WRITE_TABLE] = "refblock_alloc.write_table",
[BLKDBG_REFBLOCK_ALLOC_SWITCH_TABLE] = "refblock_alloc.switch_table",
[BLKDBG_CLUSTER_ALLOC] = "cluster_alloc",
[BLKDBG_CLUSTER_ALLOC_BYTES] = "cluster_alloc_bytes",
[BLKDBG_CLUSTER_FREE] = "cluster_free",
[BLKDBG_FLUSH_TO_OS] = "flush_to_os",
[BLKDBG_FLUSH_TO_DISK] = "flush_to_disk",
[BLKDBG_PWRITEV_RMW_HEAD] = "pwritev_rmw.head",
[BLKDBG_PWRITEV_RMW_AFTER_HEAD] = "pwritev_rmw.after_head",
[BLKDBG_PWRITEV_RMW_TAIL] = "pwritev_rmw.tail",
[BLKDBG_PWRITEV_RMW_AFTER_TAIL] = "pwritev_rmw.after_tail",
[BLKDBG_PWRITEV] = "pwritev",
[BLKDBG_PWRITEV_ZERO] = "pwritev_zero",
[BLKDBG_PWRITEV_DONE] = "pwritev_done",
};
static int get_event_by_name(const char *name, BlkDebugEvent *event)
{
int i;
for (i = 0; i < BLKDBG_EVENT_MAX; i++) {
if (!strcmp(event_names[i], name)) {
*event = i;
return 0;
}
}
return -1;
}
struct add_rule_data {
BDRVBlkdebugState *s;
int action;
};
static int add_rule(void *opaque, QemuOpts *opts, Error **errp)
static int add_rule(QemuOpts *opts, void *opaque)
{
struct add_rule_data *d = opaque;
BDRVBlkdebugState *s = d->s;
const char* event_name;
int event;
BlkDebugEvent event;
struct BlkdebugRule *rule;
int64_t sector;
/* Find the right event for the rule */
event_name = qemu_opt_get(opts, "event");
if (!event_name) {
error_setg(errp, "Missing event name for rule");
return -1;
}
event = qapi_enum_parse(&BlkdebugEvent_lookup, event_name, -1, errp);
if (event < 0) {
if (!event_name || get_event_by_name(event_name, &event) < 0) {
return -1;
}
@@ -189,9 +244,7 @@ static int add_rule(void *opaque, QemuOpts *opts, Error **errp)
rule->options.inject.once = qemu_opt_get_bool(opts, "once", 0);
rule->options.inject.immediately =
qemu_opt_get_bool(opts, "immediately", 0);
sector = qemu_opt_get_number(opts, "sector", -1);
rule->options.inject.offset =
sector == -1 ? -1 : sector * BDRV_SECTOR_SIZE;
rule->options.inject.sector = qemu_opt_get_number(opts, "sector", -1);
break;
case ACTION_SET_STATE:
@@ -244,6 +297,7 @@ static int read_config(BDRVBlkdebugState *s, const char *filename,
ret = qemu_config_parse(f, config_groups, filename);
if (ret < 0) {
error_setg(errp, "Could not parse blkdebug config file");
ret = -EINVAL;
goto fail;
}
}
@@ -257,20 +311,10 @@ static int read_config(BDRVBlkdebugState *s, const char *filename,
d.s = s;
d.action = ACTION_INJECT_ERROR;
qemu_opts_foreach(&inject_error_opts, add_rule, &d, &local_err);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto fail;
}
qemu_opts_foreach(&inject_error_opts, add_rule, &d, 0);
d.action = ACTION_SET_STATE;
qemu_opts_foreach(&set_state_opts, add_rule, &d, &local_err);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto fail;
}
qemu_opts_foreach(&set_state_opts, add_rule, &d, 0);
ret = 0;
fail:
@@ -292,7 +336,7 @@ static void blkdebug_parse_filename(const char *filename, QDict *options,
if (!strstart(filename, "blkdebug:", &filename)) {
/* There was no prefix; therefore, all options have to be already
present in the QDict (except for the filename) */
qdict_put_str(options, "x-image", filename);
qdict_put(options, "x-image", qstring_from_str(filename));
return;
}
@@ -311,7 +355,7 @@ static void blkdebug_parse_filename(const char *filename, QDict *options,
/* TODO Allow multi-level nesting and set file.filename here */
filename = c + 1;
qdict_put_str(options, "x-image", filename);
qdict_put(options, "x-image", qstring_from_str(filename));
}
static QemuOptsList runtime_opts = {
@@ -333,31 +377,6 @@ static QemuOptsList runtime_opts = {
.type = QEMU_OPT_SIZE,
.help = "Required alignment in bytes",
},
{
.name = "max-transfer",
.type = QEMU_OPT_SIZE,
.help = "Maximum transfer size in bytes",
},
{
.name = "opt-write-zero",
.type = QEMU_OPT_SIZE,
.help = "Optimum write zero alignment in bytes",
},
{
.name = "max-write-zero",
.type = QEMU_OPT_SIZE,
.help = "Maximum write zero size in bytes",
},
{
.name = "opt-discard",
.type = QEMU_OPT_SIZE,
.help = "Optimum discard alignment in bytes",
},
{
.name = "max-discard",
.type = QEMU_OPT_SIZE,
.help = "Maximum discard size in bytes",
},
{ /* end of list */ }
},
};
@@ -368,8 +387,9 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
BDRVBlkdebugState *s = bs->opaque;
QemuOpts *opts;
Error *local_err = NULL;
int ret;
const char *config;
uint64_t align;
int ret;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
@@ -380,8 +400,8 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
}
/* Read rules from config file or command line options */
s->config_file = g_strdup(qemu_opt_get(opts, "config"));
ret = read_config(s, s->config_file, options, errp);
config = qemu_opt_get(opts, "config");
ret = read_config(s, config, options, errp);
if (ret) {
goto out;
}
@@ -389,256 +409,142 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
/* Set initial state */
s->state = 1;
/* Open the image file */
bs->file = bdrv_open_child(qemu_opt_get(opts, "x-image"), options, "image",
bs, &child_file, false, &local_err);
if (local_err) {
ret = -EINVAL;
/* Open the backing file */
assert(bs->file == NULL);
ret = bdrv_open_image(&bs->file, qemu_opt_get(opts, "x-image"), options, "image",
flags | BDRV_O_PROTOCOL, false, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
goto out;
}
bs->supported_write_flags = BDRV_REQ_FUA &
bs->file->bs->supported_write_flags;
bs->supported_zero_flags = (BDRV_REQ_FUA | BDRV_REQ_MAY_UNMAP) &
bs->file->bs->supported_zero_flags;
ret = -EINVAL;
/* Set alignment overrides */
s->align = qemu_opt_get_size(opts, "align", 0);
if (s->align && (s->align >= INT_MAX || !is_power_of_2(s->align))) {
error_setg(errp, "Cannot meet constraints with align %" PRIu64,
s->align);
goto out;
}
align = MAX(s->align, bs->file->bs->bl.request_alignment);
s->max_transfer = qemu_opt_get_size(opts, "max-transfer", 0);
if (s->max_transfer &&
(s->max_transfer >= INT_MAX ||
!QEMU_IS_ALIGNED(s->max_transfer, align))) {
error_setg(errp, "Cannot meet constraints with max-transfer %" PRIu64,
s->max_transfer);
goto out;
}
s->opt_write_zero = qemu_opt_get_size(opts, "opt-write-zero", 0);
if (s->opt_write_zero &&
(s->opt_write_zero >= INT_MAX ||
!QEMU_IS_ALIGNED(s->opt_write_zero, align))) {
error_setg(errp, "Cannot meet constraints with opt-write-zero %" PRIu64,
s->opt_write_zero);
goto out;
}
s->max_write_zero = qemu_opt_get_size(opts, "max-write-zero", 0);
if (s->max_write_zero &&
(s->max_write_zero >= INT_MAX ||
!QEMU_IS_ALIGNED(s->max_write_zero,
MAX(s->opt_write_zero, align)))) {
error_setg(errp, "Cannot meet constraints with max-write-zero %" PRIu64,
s->max_write_zero);
goto out;
}
s->opt_discard = qemu_opt_get_size(opts, "opt-discard", 0);
if (s->opt_discard &&
(s->opt_discard >= INT_MAX ||
!QEMU_IS_ALIGNED(s->opt_discard, align))) {
error_setg(errp, "Cannot meet constraints with opt-discard %" PRIu64,
s->opt_discard);
goto out;
}
s->max_discard = qemu_opt_get_size(opts, "max-discard", 0);
if (s->max_discard &&
(s->max_discard >= INT_MAX ||
!QEMU_IS_ALIGNED(s->max_discard,
MAX(s->opt_discard, align)))) {
error_setg(errp, "Cannot meet constraints with max-discard %" PRIu64,
s->max_discard);
goto out;
/* Set request alignment */
align = qemu_opt_get_size(opts, "align", bs->request_alignment);
if (align > 0 && align < INT_MAX && !(align & (align - 1))) {
bs->request_alignment = align;
} else {
error_setg(errp, "Invalid alignment");
ret = -EINVAL;
goto fail_unref;
}
ret = 0;
goto out;
fail_unref:
bdrv_unref(bs->file);
out:
if (ret < 0) {
g_free(s->config_file);
}
qemu_opts_del(opts);
return ret;
}
static int rule_check(BlockDriverState *bs, uint64_t offset, uint64_t bytes)
static void error_callback_bh(void *opaque)
{
struct BlkdebugAIOCB *acb = opaque;
qemu_bh_delete(acb->bh);
acb->common.cb(acb->common.opaque, acb->ret);
qemu_aio_release(acb);
}
static void blkdebug_aio_cancel(BlockDriverAIOCB *blockacb)
{
BlkdebugAIOCB *acb = container_of(blockacb, BlkdebugAIOCB, common);
if (acb->bh) {
qemu_bh_delete(acb->bh);
acb->bh = NULL;
}
qemu_aio_release(acb);
}
static BlockDriverAIOCB *inject_error(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque, BlkdebugRule *rule)
{
BDRVBlkdebugState *s = bs->opaque;
int error = rule->options.inject.error;
struct BlkdebugAIOCB *acb;
QEMUBH *bh;
if (rule->options.inject.once) {
QSIMPLEQ_INIT(&s->active_rules);
}
if (rule->options.inject.immediately) {
return NULL;
}
acb = qemu_aio_get(&blkdebug_aiocb_info, bs, cb, opaque);
acb->ret = -error;
bh = aio_bh_new(bdrv_get_aio_context(bs), error_callback_bh, acb);
acb->bh = bh;
qemu_bh_schedule(bh);
return &acb->common;
}
static BlockDriverAIOCB *blkdebug_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugRule *rule = NULL;
int error;
bool immediately;
QSIMPLEQ_FOREACH(rule, &s->active_rules, active_next) {
uint64_t inject_offset = rule->options.inject.offset;
if (inject_offset == -1 ||
(bytes && inject_offset >= offset &&
inject_offset < offset + bytes))
{
if (rule->options.inject.sector == -1 ||
(rule->options.inject.sector >= sector_num &&
rule->options.inject.sector < sector_num + nb_sectors)) {
break;
}
}
if (!rule || !rule->options.inject.error) {
return 0;
if (rule && rule->options.inject.error) {
return inject_error(bs, cb, opaque, rule);
}
immediately = rule->options.inject.immediately;
error = rule->options.inject.error;
if (rule->options.inject.once) {
QSIMPLEQ_REMOVE(&s->active_rules, rule, BlkdebugRule, active_next);
remove_rule(rule);
}
if (!immediately) {
aio_co_schedule(qemu_get_current_aio_context(), qemu_coroutine_self());
qemu_coroutine_yield();
}
return -error;
return bdrv_aio_readv(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
}
static int coroutine_fn
blkdebug_co_preadv(BlockDriverState *bs, uint64_t offset, uint64_t bytes,
QEMUIOVector *qiov, int flags)
static BlockDriverAIOCB *blkdebug_aio_writev(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
int err;
BDRVBlkdebugState *s = bs->opaque;
BlkdebugRule *rule = NULL;
/* Sanity check block layer guarantees */
assert(QEMU_IS_ALIGNED(offset, bs->bl.request_alignment));
assert(QEMU_IS_ALIGNED(bytes, bs->bl.request_alignment));
if (bs->bl.max_transfer) {
assert(bytes <= bs->bl.max_transfer);
QSIMPLEQ_FOREACH(rule, &s->active_rules, active_next) {
if (rule->options.inject.sector == -1 ||
(rule->options.inject.sector >= sector_num &&
rule->options.inject.sector < sector_num + nb_sectors)) {
break;
}
}
err = rule_check(bs, offset, bytes);
if (err) {
return err;
if (rule && rule->options.inject.error) {
return inject_error(bs, cb, opaque, rule);
}
return bdrv_co_preadv(bs->file, offset, bytes, qiov, flags);
return bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
}
static int coroutine_fn
blkdebug_co_pwritev(BlockDriverState *bs, uint64_t offset, uint64_t bytes,
QEMUIOVector *qiov, int flags)
static BlockDriverAIOCB *blkdebug_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque)
{
int err;
BDRVBlkdebugState *s = bs->opaque;
BlkdebugRule *rule = NULL;
/* Sanity check block layer guarantees */
assert(QEMU_IS_ALIGNED(offset, bs->bl.request_alignment));
assert(QEMU_IS_ALIGNED(bytes, bs->bl.request_alignment));
if (bs->bl.max_transfer) {
assert(bytes <= bs->bl.max_transfer);
QSIMPLEQ_FOREACH(rule, &s->active_rules, active_next) {
if (rule->options.inject.sector == -1) {
break;
}
}
err = rule_check(bs, offset, bytes);
if (err) {
return err;
if (rule && rule->options.inject.error) {
return inject_error(bs, cb, opaque, rule);
}
return bdrv_co_pwritev(bs->file, offset, bytes, qiov, flags);
return bdrv_aio_flush(bs->file, cb, opaque);
}
static int blkdebug_co_flush(BlockDriverState *bs)
{
int err = rule_check(bs, 0, 0);
if (err) {
return err;
}
return bdrv_co_flush(bs->file->bs);
}
static int coroutine_fn blkdebug_co_pwrite_zeroes(BlockDriverState *bs,
int64_t offset, int bytes,
BdrvRequestFlags flags)
{
uint32_t align = MAX(bs->bl.request_alignment,
bs->bl.pwrite_zeroes_alignment);
int err;
/* Only pass through requests that are larger than requested
* preferred alignment (so that we test the fallback to writes on
* unaligned portions), and check that the block layer never hands
* us anything unaligned that crosses an alignment boundary. */
if (bytes < align) {
assert(QEMU_IS_ALIGNED(offset, align) ||
QEMU_IS_ALIGNED(offset + bytes, align) ||
DIV_ROUND_UP(offset, align) ==
DIV_ROUND_UP(offset + bytes, align));
return -ENOTSUP;
}
assert(QEMU_IS_ALIGNED(offset, align));
assert(QEMU_IS_ALIGNED(bytes, align));
if (bs->bl.max_pwrite_zeroes) {
assert(bytes <= bs->bl.max_pwrite_zeroes);
}
err = rule_check(bs, offset, bytes);
if (err) {
return err;
}
return bdrv_co_pwrite_zeroes(bs->file, offset, bytes, flags);
}
static int coroutine_fn blkdebug_co_pdiscard(BlockDriverState *bs,
int64_t offset, int bytes)
{
uint32_t align = bs->bl.pdiscard_alignment;
int err;
/* Only pass through requests that are larger than requested
* minimum alignment, and ensure that unaligned requests do not
* cross optimum discard boundaries. */
if (bytes < bs->bl.request_alignment) {
assert(QEMU_IS_ALIGNED(offset, align) ||
QEMU_IS_ALIGNED(offset + bytes, align) ||
DIV_ROUND_UP(offset, align) ==
DIV_ROUND_UP(offset + bytes, align));
return -ENOTSUP;
}
assert(QEMU_IS_ALIGNED(offset, bs->bl.request_alignment));
assert(QEMU_IS_ALIGNED(bytes, bs->bl.request_alignment));
if (align && bytes >= align) {
assert(QEMU_IS_ALIGNED(offset, align));
assert(QEMU_IS_ALIGNED(bytes, align));
}
if (bs->bl.max_pdiscard) {
assert(bytes <= bs->bl.max_pdiscard);
}
err = rule_check(bs, offset, bytes);
if (err) {
return err;
}
return bdrv_co_pdiscard(bs->file->bs, offset, bytes);
}
static int coroutine_fn blkdebug_co_block_status(BlockDriverState *bs,
bool want_zero,
int64_t offset,
int64_t bytes,
int64_t *pnum,
int64_t *map,
BlockDriverState **file)
{
assert(QEMU_IS_ALIGNED(offset | bytes, bs->bl.request_alignment));
return bdrv_co_block_status_from_file(bs, want_zero, offset, bytes,
pnum, map, file);
}
static void blkdebug_close(BlockDriverState *bs)
{
@@ -646,13 +552,11 @@ static void blkdebug_close(BlockDriverState *bs)
BlkdebugRule *rule, *next;
int i;
for (i = 0; i < BLKDBG__MAX; i++) {
for (i = 0; i < BLKDBG_EVENT_MAX; i++) {
QLIST_FOREACH_SAFE(rule, &s->rules[i], next, next) {
remove_rule(rule);
}
}
g_free(s->config_file);
}
static void suspend_request(BlockDriverState *bs, BlkdebugRule *rule)
@@ -668,13 +572,9 @@ static void suspend_request(BlockDriverState *bs, BlkdebugRule *rule)
remove_rule(rule);
QLIST_INSERT_HEAD(&s->suspended_reqs, &r, next);
if (!qtest_enabled()) {
printf("blkdebug: Suspended request '%s'\n", r.tag);
}
printf("blkdebug: Suspended request '%s'\n", r.tag);
qemu_coroutine_yield();
if (!qtest_enabled()) {
printf("blkdebug: Resuming request '%s'\n", r.tag);
}
printf("blkdebug: Resuming request '%s'\n", r.tag);
QLIST_REMOVE(&r, next);
g_free(r.tag);
@@ -711,13 +611,13 @@ static bool process_rule(BlockDriverState *bs, struct BlkdebugRule *rule,
return injected;
}
static void blkdebug_debug_event(BlockDriverState *bs, BlkdebugEvent event)
static void blkdebug_debug_event(BlockDriverState *bs, BlkDebugEvent event)
{
BDRVBlkdebugState *s = bs->opaque;
struct BlkdebugRule *rule, *next;
bool injected;
assert((int)event >= 0 && event < BLKDBG__MAX);
assert((int)event >= 0 && event < BLKDBG_EVENT_MAX);
injected = false;
s->new_state = s->state;
@@ -732,13 +632,13 @@ static int blkdebug_debug_breakpoint(BlockDriverState *bs, const char *event,
{
BDRVBlkdebugState *s = bs->opaque;
struct BlkdebugRule *rule;
int blkdebug_event;
BlkDebugEvent blkdebug_event;
blkdebug_event = qapi_enum_parse(&BlkdebugEvent_lookup, event, -1, NULL);
if (blkdebug_event < 0) {
if (get_event_by_name(event, &blkdebug_event) < 0) {
return -ENOENT;
}
rule = g_malloc(sizeof(*rule));
*rule = (struct BlkdebugRule) {
.event = blkdebug_event,
@@ -759,7 +659,7 @@ static int blkdebug_debug_resume(BlockDriverState *bs, const char *tag)
QLIST_FOREACH_SAFE(r, &s->suspended_reqs, next, next) {
if (!strcmp(r->tag, tag)) {
qemu_coroutine_enter(r->co);
qemu_coroutine_enter(r->co, NULL);
return 0;
}
}
@@ -774,7 +674,7 @@ static int blkdebug_debug_remove_breakpoint(BlockDriverState *bs,
BlkdebugRule *rule, *next;
int i, ret = -ENOENT;
for (i = 0; i < BLKDBG__MAX; i++) {
for (i = 0; i < BLKDBG_EVENT_MAX; i++) {
QLIST_FOREACH_SAFE(rule, &s->rules[i], next, next) {
if (rule->action == ACTION_SUSPEND &&
!strcmp(rule->options.suspend.tag, tag)) {
@@ -785,7 +685,7 @@ static int blkdebug_debug_remove_breakpoint(BlockDriverState *bs,
}
QLIST_FOREACH_SAFE(r, &s->suspended_reqs, next, r_next) {
if (!strcmp(r->tag, tag)) {
qemu_coroutine_enter(r->co);
qemu_coroutine_enter(r->co, NULL);
ret = 0;
}
}
@@ -807,109 +707,22 @@ static bool blkdebug_debug_is_suspended(BlockDriverState *bs, const char *tag)
static int64_t blkdebug_getlength(BlockDriverState *bs)
{
return bdrv_getlength(bs->file->bs);
}
static void blkdebug_refresh_filename(BlockDriverState *bs, QDict *options)
{
BDRVBlkdebugState *s = bs->opaque;
QDict *opts;
const QDictEntry *e;
bool force_json = false;
for (e = qdict_first(options); e; e = qdict_next(options, e)) {
if (strcmp(qdict_entry_key(e), "config") &&
strcmp(qdict_entry_key(e), "x-image"))
{
force_json = true;
break;
}
}
if (force_json && !bs->file->bs->full_open_options) {
/* The config file cannot be recreated, so creating a plain filename
* is impossible */
return;
}
if (!force_json && bs->file->bs->exact_filename[0]) {
int ret = snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"blkdebug:%s:%s", s->config_file ?: "",
bs->file->bs->exact_filename);
if (ret >= sizeof(bs->exact_filename)) {
/* An overflow makes the filename unusable, so do not report any */
bs->exact_filename[0] = 0;
}
}
opts = qdict_new();
qdict_put_str(opts, "driver", "blkdebug");
QINCREF(bs->file->bs->full_open_options);
qdict_put(opts, "image", bs->file->bs->full_open_options);
for (e = qdict_first(options); e; e = qdict_next(options, e)) {
if (strcmp(qdict_entry_key(e), "x-image")) {
qobject_incref(qdict_entry_value(e));
qdict_put_obj(opts, qdict_entry_key(e), qdict_entry_value(e));
}
}
bs->full_open_options = opts;
}
static void blkdebug_refresh_limits(BlockDriverState *bs, Error **errp)
{
BDRVBlkdebugState *s = bs->opaque;
if (s->align) {
bs->bl.request_alignment = s->align;
}
if (s->max_transfer) {
bs->bl.max_transfer = s->max_transfer;
}
if (s->opt_write_zero) {
bs->bl.pwrite_zeroes_alignment = s->opt_write_zero;
}
if (s->max_write_zero) {
bs->bl.max_pwrite_zeroes = s->max_write_zero;
}
if (s->opt_discard) {
bs->bl.pdiscard_alignment = s->opt_discard;
}
if (s->max_discard) {
bs->bl.max_pdiscard = s->max_discard;
}
}
static int blkdebug_reopen_prepare(BDRVReopenState *reopen_state,
BlockReopenQueue *queue, Error **errp)
{
return 0;
return bdrv_getlength(bs->file);
}
static BlockDriver bdrv_blkdebug = {
.format_name = "blkdebug",
.protocol_name = "blkdebug",
.instance_size = sizeof(BDRVBlkdebugState),
.is_filter = true,
.bdrv_parse_filename = blkdebug_parse_filename,
.bdrv_file_open = blkdebug_open,
.bdrv_close = blkdebug_close,
.bdrv_reopen_prepare = blkdebug_reopen_prepare,
.bdrv_child_perm = bdrv_filter_default_perms,
.bdrv_getlength = blkdebug_getlength,
.bdrv_refresh_filename = blkdebug_refresh_filename,
.bdrv_refresh_limits = blkdebug_refresh_limits,
.bdrv_co_preadv = blkdebug_co_preadv,
.bdrv_co_pwritev = blkdebug_co_pwritev,
.bdrv_co_flush_to_disk = blkdebug_co_flush,
.bdrv_co_pwrite_zeroes = blkdebug_co_pwrite_zeroes,
.bdrv_co_pdiscard = blkdebug_co_pdiscard,
.bdrv_co_block_status = blkdebug_co_block_status,
.bdrv_aio_readv = blkdebug_aio_readv,
.bdrv_aio_writev = blkdebug_aio_writev,
.bdrv_aio_flush = blkdebug_aio_flush,
.bdrv_debug_event = blkdebug_debug_event,
.bdrv_debug_breakpoint = blkdebug_debug_breakpoint,
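A minimal usage sketch of the inject-error rules parsed above. The "read_aio" event name and the sector/once/immediately keys appear in the code in this diff; the errno key, the config file path and the guest image are illustrative assumptions:

# /tmp/blkdebug.conf -- fail the first asynchronous read with EIO
[inject-error]
event = "read_aio"
errno = "5"
once = "on"

qemu-system-x86_64 -drive file=blkdebug:/tmp/blkdebug.conf:test.qcow2,if=virtio

The filename uses the blkdebug:<config>:<image> form handled by blkdebug_parse_filename().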


@@ -1,152 +0,0 @@
/*
* Block protocol for record/replay
*
* Copyright (c) 2010-2016 Institute for System Programming
* of the Russian Academy of Sciences.
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "block/block_int.h"
#include "sysemu/replay.h"
#include "qapi/error.h"
typedef struct Request {
Coroutine *co;
QEMUBH *bh;
} Request;
static int blkreplay_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
Error *local_err = NULL;
int ret;
/* Open the image file */
bs->file = bdrv_open_child(NULL, options, "image",
bs, &child_file, false, &local_err);
if (local_err) {
ret = -EINVAL;
error_propagate(errp, local_err);
goto fail;
}
ret = 0;
fail:
return ret;
}
static void blkreplay_close(BlockDriverState *bs)
{
}
static int64_t blkreplay_getlength(BlockDriverState *bs)
{
return bdrv_getlength(bs->file->bs);
}
/* This BH is used to synchronize returning from coroutines.
It resumes the yielded coroutine, which then finishes its execution.
The BH is invoked at a matching replay checkpoint, so record and replay
always finish coroutines deterministically.
*/
static void blkreplay_bh_cb(void *opaque)
{
Request *req = opaque;
aio_co_wake(req->co);
qemu_bh_delete(req->bh);
g_free(req);
}
static void block_request_create(uint64_t reqid, BlockDriverState *bs,
Coroutine *co)
{
Request *req = g_new(Request, 1);
*req = (Request) {
.co = co,
.bh = aio_bh_new(bdrv_get_aio_context(bs), blkreplay_bh_cb, req),
};
replay_block_event(req->bh, reqid);
}
static int coroutine_fn blkreplay_co_preadv(BlockDriverState *bs,
uint64_t offset, uint64_t bytes, QEMUIOVector *qiov, int flags)
{
uint64_t reqid = blkreplay_next_id();
int ret = bdrv_co_preadv(bs->file, offset, bytes, qiov, flags);
block_request_create(reqid, bs, qemu_coroutine_self());
qemu_coroutine_yield();
return ret;
}
static int coroutine_fn blkreplay_co_pwritev(BlockDriverState *bs,
uint64_t offset, uint64_t bytes, QEMUIOVector *qiov, int flags)
{
uint64_t reqid = blkreplay_next_id();
int ret = bdrv_co_pwritev(bs->file, offset, bytes, qiov, flags);
block_request_create(reqid, bs, qemu_coroutine_self());
qemu_coroutine_yield();
return ret;
}
static int coroutine_fn blkreplay_co_pwrite_zeroes(BlockDriverState *bs,
int64_t offset, int bytes, BdrvRequestFlags flags)
{
uint64_t reqid = blkreplay_next_id();
int ret = bdrv_co_pwrite_zeroes(bs->file, offset, bytes, flags);
block_request_create(reqid, bs, qemu_coroutine_self());
qemu_coroutine_yield();
return ret;
}
static int coroutine_fn blkreplay_co_pdiscard(BlockDriverState *bs,
int64_t offset, int bytes)
{
uint64_t reqid = blkreplay_next_id();
int ret = bdrv_co_pdiscard(bs->file->bs, offset, bytes);
block_request_create(reqid, bs, qemu_coroutine_self());
qemu_coroutine_yield();
return ret;
}
static int coroutine_fn blkreplay_co_flush(BlockDriverState *bs)
{
uint64_t reqid = blkreplay_next_id();
int ret = bdrv_co_flush(bs->file->bs);
block_request_create(reqid, bs, qemu_coroutine_self());
qemu_coroutine_yield();
return ret;
}
static BlockDriver bdrv_blkreplay = {
.format_name = "blkreplay",
.instance_size = 0,
.bdrv_open = blkreplay_open,
.bdrv_close = blkreplay_close,
.bdrv_child_perm = bdrv_filter_default_perms,
.bdrv_getlength = blkreplay_getlength,
.bdrv_co_preadv = blkreplay_co_preadv,
.bdrv_co_pwritev = blkreplay_co_pwritev,
.bdrv_co_pwrite_zeroes = blkreplay_co_pwrite_zeroes,
.bdrv_co_pdiscard = blkreplay_co_pdiscard,
.bdrv_co_flush = blkreplay_co_flush,
};
static void bdrv_blkreplay_init(void)
{
bdrv_register(&bdrv_blkreplay);
}
block_init(bdrv_blkreplay_init);
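For context, a hedged sketch of how the blkreplay filter is typically stacked on top of a real image. Only the "image" child option is visible in blkreplay_open() above; the icount/record flags and the other drive options are assumptions based on QEMU's record/replay documentation rather than this diff:

qemu-system-x86_64 \
  -icount shift=7,rr=record,rrfile=replay.bin \
  -drive file=disk.qcow2,if=none,snapshot,id=img-direct \
  -drive driver=blkreplay,if=none,image=img-direct,id=img-blkreplay \
  -device ide-hd,drive=img-blkreplay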

Some files were not shown because too many files have changed in this diff.