Compare commits

..

21 Commits

Author SHA1 Message Date
João Silva
fe33d8e072 block: Add a thread-pool version of fstat
The fstat call can take a long time to finish when running over
NFS. Add a version of it that runs in the thread pool.

Adapt one of its users, raw_co_get_allocated_file_size, to use the new
version. That function is called via QMP under the qemu_global_mutex,
so it has a high chance of blocking VCPU threads if it takes too long
to finish.

Reviewed-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: João Silva <jsilva@suse.de>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-06-09 16:35:41 -03:00
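
As an illustration of the pattern described above (handing a potentially slow syscall to a worker thread so the submitting thread never blocks inside it), here is a minimal standalone sketch in plain C with POSIX threads. It deliberately avoids QEMU's ThreadPool and coroutine APIs; the names fstat_job and fstat_worker are invented for the example.

/* Hypothetical illustration only; QEMU's real code goes through its
 * ThreadPool and coroutine infrastructure instead of raw pthreads. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

struct fstat_job {
    int fd;              /* file descriptor to stat */
    struct stat st;      /* result buffer */
    int ret;             /* return value of fstat() */
};

/* Runs in the worker thread: the potentially slow syscall (e.g. on NFS). */
static void *fstat_worker(void *opaque)
{
    struct fstat_job *job = opaque;
    job->ret = fstat(job->fd, &job->st);
    return NULL;
}

int main(void)
{
    struct fstat_job job = { .fd = open("/etc/hosts", O_RDONLY) };
    pthread_t worker;

    if (job.fd < 0) {
        perror("open");
        return 1;
    }
    /* Submit the blocking call to a worker instead of doing it inline;
     * the submitting thread stays free to service other work. */
    pthread_create(&worker, NULL, fstat_worker, &job);
    pthread_join(worker, NULL);

    if (job.ret == 0) {
        printf("size: %lld bytes\n", (long long)job.st.st_size);
    }
    close(job.fd);
    return 0;
}
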
Fabiano Rosas
a800b3d01e block: Convert qmp_query_block() to coroutine_fn
This is another caller of bdrv_get_allocated_file_size() that needs to
be converted to a coroutine because that function will be made
asynchronous when called (indirectly) from the QMP dispatcher.

This QMP command is a candidate because it calls bdrv_do_query_node_info(),
which in turn calls bdrv_get_allocated_file_size().

We've determined bdrv_do_query_node_info() to be coroutine-safe (see
previous commits), so we can just put this QMP command in a coroutine.

Since qmp_query_block() now expects to run in a coroutine, its callers
need to be converted as well. Convert hmp_info_block(), which calls
only coroutine-safe code, including qmp_query_named_block_nodes(),
which was converted to a coroutine in the previous patches.

Now that all callers of bdrv_[co_]block_device_info() are using the
coroutine version, a few things happen:

 - we can return to using bdrv_block_device_info() without a wrapper;

 - bdrv_get_allocated_file_size() can stop being mixed;

 - bdrv_co_get_allocated_file_size() needs to be put under the graph
   lock because it is being called without the wrapper;

 - bdrv_do_query_node_info() doesn't need to acquire the AioContext
   because it doesn't call aio_poll() anymore.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-06-09 16:35:41 -03:00
Fabiano Rosas
5c2003c4dc block: Don't query all block devices at hmp_nbd_server_start
We're currently doing a full query-block just to enumerate the devices
for qmp_nbd_server_add and then discarding the BlockInfoList
afterwards. Alter hmp_nbd_server_start to instead iterate explicitly
over the block_backends list.

This allows the removal of the dependency on qmp_query_block from
hmp_nbd_server_start. This is desirable because we're about to move
qmp_query_block into a coroutine and don't need to change the NBD code
at the same time.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-06-09 16:35:41 -03:00
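
A minimal sketch of the shape of this change, in plain C rather than QEMU's actual BlockBackend API: walk the existing list of backends directly instead of materializing a full query result only to throw it away. The struct and list below are invented for the example.

/* Hypothetical illustration only; not QEMU's BlockBackend types. */
#include <stdio.h>

struct blk { const char *name; struct blk *next; };

static struct blk drive2 = { "drive2", NULL };
static struct blk drive1 = { "drive1", &drive2 };
static struct blk *block_backends = &drive1;   /* the pre-existing list */

int main(void)
{
    /* Before: allocate a full BlockInfoList via query-block, read only
     * the names, then free everything. After: just iterate the list. */
    for (struct blk *blk = block_backends; blk; blk = blk->next) {
        printf("export %s\n", blk->name);
    }
    return 0;
}
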
4b8fe49d54 block: Convert qmp_query_named_block_nodes to coroutine
We're converting callers of bdrv_get_allocated_file_size() to run in
coroutines because that function will be made asynchronous when called
(indirectly) from the QMP dispatcher.

This QMP command is a candidate because it indirectly calls
bdrv_get_allocated_file_size() through bdrv_block_device_info() ->
bdrv_query_image_info() -> bdrv_do_query_node_info().

The previous patches have determined that bdrv_query_image_info() and
bdrv_do_query_node_info() are coroutine-safe so we can just make the
QMP command run in a coroutine.

Signed-off-by: Lin Ma <lma@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-06-09 16:35:41 -03:00
Fabiano Rosas
c78cc95092 block: Convert bdrv_block_device_info into co_wrapper
We're converting callers of bdrv_get_allocated_file_size() to run in
coroutines because that function will be made asynchronous when called
(indirectly) from the QMP dispatcher.

This function is a candidate because it calls bdrv_query_image_info()
-> bdrv_do_query_node_info() -> bdrv_get_allocated_file_size().

It is safe to turn this into a coroutine because the code it calls is
made up of either simple accessors and string manipulation functions
[1] or code that has already been determined to be safe [2].

1) bdrv_refresh_filename(), bdrv_is_read_only(),
   blk_enable_write_cache(), bdrv_cow_bs(), blk_get_public(),
   throttle_group_get_name(), bdrv_write_threshold_get(),
   bdrv_query_dirty_bitmaps(), throttle_group_get_config(),
   bdrv_filter_or_cow_bs(), bdrv_skip_implicit_filters()

2) bdrv_do_query_node_info() (see previous commit);

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-06-09 16:35:41 -03:00
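
The co_wrapper pattern itself can be sketched outside QEMU: a plain entry point calls the coroutine version directly when already in coroutine context, and otherwise creates a coroutine and drives it to completion. The standalone C sketch below uses ucontext as a stand-in coroutine backend (QEMU generates its wrappers with block-coroutine-wrapper.py on top of its own coroutine implementation); in_coroutine stands in for qemu_in_coroutine(), and every name is invented for the example.

/* Hypothetical illustration of the co_wrapper dispatch, not QEMU code. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, co_ctx;
static int in_coroutine;   /* stand-in for qemu_in_coroutine() */
static int co_done;

/* The coroutine_fn version of the work. */
static void device_info_co(void)
{
    in_coroutine = 1;
    printf("gathering block device info in coroutine context\n");
    in_coroutine = 0;
    co_done = 1;
}

/* Equivalent of the generated wrapper: safe to call from anywhere. */
static void device_info(void)
{
    if (in_coroutine) {
        device_info_co();            /* already a coroutine: call directly */
        return;
    }
    char stack[64 * 1024];
    co_done = 0;
    getcontext(&co_ctx);
    co_ctx.uc_stack.ss_sp = stack;
    co_ctx.uc_stack.ss_size = sizeof(stack);
    co_ctx.uc_link = &main_ctx;      /* resume here when the coroutine ends */
    makecontext(&co_ctx, device_info_co, 0);
    while (!co_done) {
        swapcontext(&main_ctx, &co_ctx);   /* "poll" until it finishes */
    }
}

int main(void)
{
    device_info();
    return 0;
}
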
Fabiano Rosas
cb0889a971 block: Convert bdrv_query_block_graph_info to coroutine
We're converting callers of bdrv_get_allocated_file_size() to run in
coroutines because that function will be made asynchronous when called
(indirectly) from the QMP dispatcher.

This function is a candidate because it calls bdrv_do_query_node_info(),
which in turn calls bdrv_get_allocated_file_size().

All the functions called from bdrv_do_query_node_info() onwards are
coroutine-safe: they either have a coroutine version themselves [1] or
are mostly simple code/string manipulation [2].

1) bdrv_getlength(), bdrv_get_allocated_file_size(), bdrv_get_info(),
   bdrv_get_specific_info();

2) bdrv_refresh_filename(), bdrv_get_format_name(),
   bdrv_get_full_backing_filename(), bdrv_query_snapshot_info_list();

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-06-09 16:35:41 -03:00
Fabiano Rosas
13372539a9 block: Temporarily mark bdrv_co_get_allocated_file_size as mixed
Some callers of this function are about to be converted to run in
coroutines, so allow it to be executed both inside and outside a
coroutine while we convert all the callers.

This will be reverted once all callers of bdrv_do_query_node_info run
in a coroutine.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
2023-06-09 16:35:31 -03:00
Fabiano Rosas
de7e85364a block: Allow the wrapper script to see functions declared in qapi.h
The following patches will add co_wrapper annotations to functions
declared in qapi.h. Add that header to the set of files used by
block-coroutine-wrapper.py.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-06-09 16:26:21 -03:00
Fabiano Rosas
7d0fad7309 block: Remove unnecessary variable in bdrv_block_device_info
Commit 5d8813593f ("block/qapi: Let bdrv_query_image_info()
recurse") removed the loop where we set the 'bs0' variable, so now it
is just the same as 'bs'.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-06-09 16:26:21 -03:00
Fabiano Rosas
be4827d4b3 block: Remove bdrv_query_block_node_info
The last call site of this function has been removed by commit
c04d0ab026 ("qemu-img: Let info print block graph").

Reviewed-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-06-09 16:26:21 -03:00
Kevin Wolf
83979a2599 Revert "graph-lock: Disable locking for now"
Now that bdrv_graph_wrlock() temporarily drops the AioContext lock that
its caller holds, it can poll without causing deadlocks. We can now
re-enable graph locking.

This reverts commit ad128dff0bf4b6f971d05eb4335a627883a19c1d.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-06-09 16:26:20 -03:00
Kevin Wolf
23cbf06c46 graph-lock: Unlock the AioContext while polling
If the caller keeps the AioContext lock for a block node in an iothread,
polling in bdrv_graph_wrlock() deadlocks if the condition isn't
fulfilled immediately.

Now that all callers make sure to actually have the AioContext locked
when they call bdrv_replace_child_noperm() like they should, we can
change bdrv_graph_wrlock() to take a BlockDriverState whose AioContext
lock the caller holds (NULL if it doesn't) and unlock it temporarily
while polling.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-06-09 16:26:20 -03:00
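
The deadlock being fixed here has a classic shape: a waiter polls for a condition while holding a lock that the thread responsible for satisfying that condition also needs. The standalone sketch below shows the cure with a pthread mutex and condition variable, whose wait primitive atomically drops the lock while blocked; this is the same idea as bdrv_graph_wrlock() temporarily releasing the AioContext lock while it polls. Every name is invented for the example.

/* Hypothetical illustration only; pthread primitives stand in for the
 * AioContext lock and bdrv_graph_wrlock()'s polling. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int readers_gone;

/* Stand-in for an iothread that needs ctx_lock to make progress. */
static void *iothread(void *opaque)
{
    (void)opaque;
    sleep(1);                        /* simulate in-flight work */
    pthread_mutex_lock(&ctx_lock);   /* deadlocks if the waiter kept it */
    readers_gone = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&ctx_lock);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, iothread, NULL);

    pthread_mutex_lock(&ctx_lock);
    /* Spinning here with ctx_lock held would hang forever; cond_wait
     * releases ctx_lock while blocked and retakes it before returning. */
    while (!readers_gone) {
        pthread_cond_wait(&cond, &ctx_lock);
    }
    pthread_mutex_unlock(&ctx_lock);
    pthread_join(t, NULL);
    printf("write lock acquired without deadlock\n");
    return 0;
}
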
Kevin Wolf
c7470ec453 blockjob: Fix AioContext locking in block_job_add_bdrv()
bdrv_root_attach_child() requires callers to hold the AioContext lock
for child_bs. Take it in block_job_add_bdrv() before calling the
function.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-06-09 16:26:20 -03:00
Kevin Wolf
75754c77a1 block: Fix AioContext locking in bdrv_open_backing_file()
bdrv_set_backing() requires the caller to hold the AioContext lock for
@backing_hd. Take it in bdrv_open_backing_file() before calling the
function.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-06-09 16:26:20 -03:00
Kevin Wolf
983dc8c938 block: Fix AioContext locking in bdrv_open_inherit()
bdrv_open_inherit() calls several functions for which it needs to hold
the AioContext lock, but currently doesn't. This includes calls in
bdrv_append_temp_snapshot(), for which bdrv_open_inherit() is the only
caller. Fix the locking in these places.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-06-09 16:26:20 -03:00
Kevin Wolf
65ff652e2b block: Fix AioContext locking in bdrv_reopen_parse_file_or_backing()
bdrv_set_file_or_backing_noperm() requires the caller to hold the
AioContext lock for the child node, but we hold the one for the parent
node in bdrv_reopen_parse_file_or_backing(). Take the other one
temporarily.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-06-09 16:26:20 -03:00
Kevin Wolf
5baee68874 block: Fix AioContext locking in bdrv_attach_child_common()
The function can move the child node to a different AioContext. In this
case, it also must take the AioContext lock for the new context before
calling functions that require the caller to hold the AioContext lock
for the child node.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-06-09 16:26:20 -03:00
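
A minimal sketch of the locking rule this commit enforces, with pthread mutexes standing in for AioContext locks: when the object moves to a different context, drop the old context's lock and take the new one before calling anything whose precondition is that the caller holds the lock of the object's current context. Every name is invented for the example.

/* Hypothetical illustration only; not QEMU's AioContext API. */
#include <pthread.h>
#include <stdio.h>

struct ctx { pthread_mutex_t lock; const char *name; };

static struct ctx main_ctx = { PTHREAD_MUTEX_INITIALIZER, "main" };
static struct ctx io_ctx   = { PTHREAD_MUTEX_INITIALIZER, "iothread" };

struct node { struct ctx *ctx; };

/* Like bdrv_root_attach_child(): caller must hold node->ctx->lock. */
static void attach(struct node *n)
{
    printf("attached under the %s context lock\n", n->ctx->name);
}

static void move_and_attach(struct node *n, struct ctx *new_ctx)
{
    pthread_mutex_lock(&n->ctx->lock);       /* lock for the old context */
    if (new_ctx != n->ctx) {
        pthread_mutex_unlock(&n->ctx->lock); /* drop the old lock... */
        pthread_mutex_lock(&new_ctx->lock);  /* ...take the new one */
        n->ctx = new_ctx;                    /* the node has moved */
    }
    attach(n);                               /* precondition now holds */
    pthread_mutex_unlock(&n->ctx->lock);
}

int main(void)
{
    struct node n = { &main_ctx };
    move_and_attach(&n, &io_ctx);
    return 0;
}
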
Kevin Wolf
4e781916d2 block: Fix AioContext locking in bdrv_open_child()
bdrv_attach_child() requires that the caller holds the AioContext lock
for the new child node. Take it in bdrv_open_child() and document that
the caller must not hold the lock of any AioContext apart from the
main AioContext.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-06-09 16:26:20 -03:00
Kevin Wolf
931516de7c test-block-iothread: Lock AioContext for blk_insert_bs()
blk_insert_bs() requires that callers hold the AioContext lock for the
node that should be inserted. Take it.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-06-09 16:26:20 -03:00
Kevin Wolf
85739ecf0c qdev-properties-system: Lock AioContext for blk_insert_bs()
blk_insert_bs() requires that callers hold the AioContext lock for the
node that should be inserted. Take it.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-06-09 16:26:20 -03:00
Kevin Wolf
b1d28fadec iotests: Test active commit with iothread and background I/O
This is a better regression test for the bugs hidden by commit 80fc5d26
('graph-lock: Disable locking for now'). With that commit reverted, it
hangs instantaneously and reliably for me.

It is important to have a reliable test like this, because the following
commits will set out to fix the actual root cause of the deadlocks and
then finally revert commit 80fc5d26, which was only a stopgap solution.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-06-09 16:26:20 -03:00
2975 changed files with 73260 additions and 161664 deletions

View File

@@ -1,24 +1,10 @@
variables:
# On stable branches this is changed by later rules. Should also
# be overridden per pipeline if running pipelines concurrently
# for different branches in contributor forks.
QEMU_CI_CONTAINER_TAG: latest
# For purposes of CI rules, upstream is the gitlab.com/qemu-project
# namespace. When testing CI, it might be useful to override this
# to point to a fork repo
QEMU_CI_UPSTREAM: qemu-project
# The order of rules defined here is critically important.
# They are evaluated in order and first match wins.
#
# Thus we group them into a number of stages, ordered from
# most restrictive to least restrictive
#
# For pipelines running for stable "staging-X.Y" branches
# we must override QEMU_CI_CONTAINER_TAG
#
.base_job_template:
variables:
# Each script line from will be in a collapsible section in the job output
@@ -33,68 +19,48 @@ variables:
# want jobs to run
#############################################################
# Never run jobs upstream on stable branch, staging branch jobs already ran
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_BRANCH =~ /^stable-/'
when: never
# Never run jobs upstream on tags, staging branch jobs already ran
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_TAG'
when: never
# Cirrus jobs can't run unless the creds / target repo are set
- if: '$QEMU_JOB_CIRRUS && ($CIRRUS_GITHUB_REPO == null || $CIRRUS_API_TOKEN == null)'
when: never
# Publishing jobs should only run on the default branch in upstream
- if: '$QEMU_JOB_PUBLISH == "1" && $CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH'
- if: '$QEMU_JOB_PUBLISH == "1" && $CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH'
when: never
# Non-publishing jobs should only run on staging branches in upstream
- if: '$QEMU_JOB_PUBLISH != "1" && $CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_BRANCH !~ /staging/'
- if: '$QEMU_JOB_PUBLISH != "1" && $CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH !~ /staging/'
when: never
# Jobs only intended for forks should always be skipped on upstream
- if: '$QEMU_JOB_ONLY_FORKS == "1" && $CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM'
- if: '$QEMU_JOB_ONLY_FORKS == "1" && $CI_PROJECT_NAMESPACE == "qemu-project"'
when: never
# Forks don't get pipelines unless QEMU_CI=1 or QEMU_CI=2 is set
- if: '$QEMU_CI != "1" && $QEMU_CI != "2" && $CI_PROJECT_NAMESPACE != $QEMU_CI_UPSTREAM'
- if: '$QEMU_CI != "1" && $QEMU_CI != "2" && $CI_PROJECT_NAMESPACE != "qemu-project"'
when: never
# Avocado jobs don't run in forks unless $QEMU_CI_AVOCADO_TESTING is set
- if: '$QEMU_JOB_AVOCADO && $QEMU_CI_AVOCADO_TESTING != "1" && $CI_PROJECT_NAMESPACE != $QEMU_CI_UPSTREAM'
- if: '$QEMU_JOB_AVOCADO && $QEMU_CI_AVOCADO_TESTING != "1" && $CI_PROJECT_NAMESPACE != "qemu-project"'
when: never
#############################################################
# Stage 2: fine tune execution of jobs in specific scenarios
# where the catch all logic is inappropriate
# where the catch all logic is inapprorpaite
#############################################################
# Optional jobs should not be run unless manually triggered
- if: '$QEMU_JOB_OPTIONAL && $CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_BRANCH =~ /staging-[[:digit:]]+\.[[:digit:]]/'
when: manual
allow_failure: true
variables:
QEMU_CI_CONTAINER_TAG: $CI_COMMIT_REF_SLUG
- if: '$QEMU_JOB_OPTIONAL'
when: manual
allow_failure: true
# Skipped jobs should not be run unless manually triggered
- if: '$QEMU_JOB_SKIPPED && $CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_BRANCH =~ /staging-[[:digit:]]+\.[[:digit:]]/'
when: manual
allow_failure: true
variables:
QEMU_CI_CONTAINER_TAG: $CI_COMMIT_REF_SLUG
- if: '$QEMU_JOB_SKIPPED'
when: manual
allow_failure: true
# Avocado jobs can be manually started in forks if $QEMU_CI_AVOCADO_TESTING is unset
- if: '$QEMU_JOB_AVOCADO && $CI_PROJECT_NAMESPACE != $QEMU_CI_UPSTREAM'
- if: '$QEMU_JOB_AVOCADO && $CI_PROJECT_NAMESPACE != "qemu-project"'
when: manual
allow_failure: true
@@ -106,23 +72,8 @@ variables:
# Forks pipeline jobs don't start automatically unless
# QEMU_CI=2 is set
- if: '$QEMU_CI != "2" && $CI_PROJECT_NAMESPACE != $QEMU_CI_UPSTREAM'
when: manual
# Upstream pipeline jobs start automatically unless told not to
# by setting QEMU_CI=1
- if: '$QEMU_CI == "1" && $CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_BRANCH =~ /staging-[[:digit:]]+\.[[:digit:]]/'
when: manual
variables:
QEMU_CI_CONTAINER_TAG: $CI_COMMIT_REF_SLUG
- if: '$QEMU_CI == "1" && $CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM'
- if: '$QEMU_CI != "2" && $CI_PROJECT_NAMESPACE != "qemu-project"'
when: manual
# Jobs can run if any jobs they depend on were successful
- if: '$QEMU_JOB_SKIPPED && $CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_BRANCH =~ /staging-[[:digit:]]+\.[[:digit:]]/'
when: on_success
variables:
QEMU_CI_CONTAINER_TAG: $CI_COMMIT_REF_SLUG
- when: on_success

View File

@@ -1,22 +1,12 @@
.native_build_job_template:
extends: .base_job_template
stage: build
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
cache:
paths:
- ccache
key: "$CI_JOB_NAME"
when: always
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
before_script:
- JOBS=$(expr $(nproc) + 1)
script:
- export CCACHE_BASEDIR="$(pwd)"
- export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
- export CCACHE_MAXSIZE="500M"
- export PATH="$CCACHE_WRAPPERSDIR:$PATH"
- mkdir build
- cd build
- ccache --zero-stats
- ../configure --enable-werror --disable-docs --enable-fdt=system
${TARGETS:+--target-list="$TARGETS"}
$CONFIGURE_ARGS ||
@@ -30,13 +20,11 @@
then
make -j"$JOBS" $MAKE_CHECK_ARGS ;
fi
- ccache --show-stats
# We jump some hoops in common_test_job_template to avoid
# rebuilding all the object files we skip in the artifacts
.native_build_artifact_template:
artifacts:
when: on_success
expire_in: 2 days
paths:
- build
@@ -52,7 +40,7 @@
.common_test_job_template:
extends: .base_job_template
stage: test
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
script:
- scripts/git-submodule.sh update roms/SLOF
- meson subprojects download $(cd build/subprojects && echo *)
@@ -65,7 +53,6 @@
extends: .common_test_job_template
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
when: always
expire_in: 7 days
paths:
- build/meson-logs/testlog.txt
@@ -81,7 +68,7 @@
policy: pull-push
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
when: always
when: on_failure
expire_in: 7 days
paths:
- build/tests/results/latest/results.xml

View File

@@ -30,7 +30,6 @@ avocado-system-alpine:
variables:
IMAGE: alpine
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:avr arch:loongarch64 arch:mips64 arch:mipsel
build-system-ubuntu:
extends:
@@ -41,7 +40,8 @@ build-system-ubuntu:
variables:
IMAGE: ubuntu2204
CONFIGURE_ARGS: --enable-docs
TARGETS: alpha-softmmu microblazeel-softmmu mips64el-softmmu
TARGETS: alpha-softmmu cris-softmmu hppa-softmmu
microblazeel-softmmu mips64el-softmmu
MAKE_CHECK_ARGS: check-build
check-system-ubuntu:
@@ -61,7 +61,6 @@ avocado-system-ubuntu:
variables:
IMAGE: ubuntu2204
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:alpha arch:microblaze arch:mips64el
build-system-debian:
extends:
@@ -73,7 +72,7 @@ build-system-debian:
IMAGE: debian-amd64
CONFIGURE_ARGS: --with-coroutine=sigaltstack
TARGETS: arm-softmmu i386-softmmu riscv64-softmmu sh4eb-softmmu
sparc-softmmu xtensa-softmmu
sparc-softmmu xtensaeb-softmmu
MAKE_CHECK_ARGS: check-build
check-system-debian:
@@ -93,7 +92,6 @@ avocado-system-debian:
variables:
IMAGE: debian-amd64
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:arm arch:i386 arch:riscv64 arch:sh4 arch:sparc arch:xtensa
crash-test-debian:
extends: .native_test_job_template
@@ -105,7 +103,7 @@ crash-test-debian:
script:
- cd build
- make NINJA=":" check-venv
- pyvenv/bin/python3 scripts/device-crash-test -q --tcg-only ./qemu-system-i386
- tests/venv/bin/python3 scripts/device-crash-test -q --tcg-only ./qemu-system-i386
build-system-fedora:
extends:
@@ -116,7 +114,7 @@ build-system-fedora:
variables:
IMAGE: fedora
CONFIGURE_ARGS: --disable-gcrypt --enable-nettle --enable-docs
TARGETS: microblaze-softmmu mips-softmmu
TARGETS: tricore-softmmu microblaze-softmmu mips-softmmu
xtensa-softmmu m68k-softmmu riscv32-softmmu ppc-softmmu sparc64-softmmu
MAKE_CHECK_ARGS: check-build
@@ -137,8 +135,6 @@ avocado-system-fedora:
variables:
IMAGE: fedora
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:microblaze arch:mips arch:xtensa arch:m68k
arch:riscv32 arch:ppc arch:sparc64
crash-test-fedora:
extends: .native_test_job_template
@@ -150,8 +146,8 @@ crash-test-fedora:
script:
- cd build
- make NINJA=":" check-venv
- pyvenv/bin/python3 scripts/device-crash-test -q ./qemu-system-ppc
- pyvenv/bin/python3 scripts/device-crash-test -q ./qemu-system-riscv32
- tests/venv/bin/python3 scripts/device-crash-test -q ./qemu-system-ppc
- tests/venv/bin/python3 scripts/device-crash-test -q ./qemu-system-riscv32
build-system-centos:
extends:
@@ -184,8 +180,6 @@ avocado-system-centos:
variables:
IMAGE: centos8
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:ppc64 arch:or1k arch:390x arch:x86_64 arch:rx
arch:sh4 arch:nios2
build-system-opensuse:
extends:
@@ -215,7 +209,6 @@ avocado-system-opensuse:
variables:
IMAGE: opensuse-leap
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:s390x arch:x86_64 arch:aarch64
# This job explicitly disables TCG (--disable-tcg), KVM is detected by
@@ -256,7 +249,6 @@ build-user:
variables:
IMAGE: debian-all-test-cross
CONFIGURE_ARGS: --disable-tools --disable-system
--target-list-exclude=alpha-linux-user,sh4-linux-user
MAKE_CHECK_ARGS: check-tcg
build-user-static:
@@ -266,18 +258,6 @@ build-user-static:
variables:
IMAGE: debian-all-test-cross
CONFIGURE_ARGS: --disable-tools --disable-system --static
--target-list-exclude=alpha-linux-user,sh4-linux-user
MAKE_CHECK_ARGS: check-tcg
# targets stuck on older compilers
build-legacy:
extends: .native_build_job_template
needs:
job: amd64-debian-legacy-cross-container
variables:
IMAGE: debian-legacy-test-cross
TARGETS: alpha-linux-user alpha-softmmu sh4-linux-user
CONFIGURE_ARGS: --disable-tools
MAKE_CHECK_ARGS: check-tcg
build-user-hexagon:
@@ -290,9 +270,7 @@ build-user-hexagon:
CONFIGURE_ARGS: --disable-tools --disable-docs --enable-debug-tcg
MAKE_CHECK_ARGS: check-tcg
# Build the softmmu targets we have check-tcg tests and compilers in
# our omnibus all-test-cross container. Those targets that haven't got
# Debian cross compiler support need to use special containers.
# Only build the softmmu targets we have check-tcg tests for
build-some-softmmu:
extends: .native_build_job_template
needs:
@@ -300,18 +278,7 @@ build-some-softmmu:
variables:
IMAGE: debian-all-test-cross
CONFIGURE_ARGS: --disable-tools --enable-debug
TARGETS: arm-softmmu aarch64-softmmu i386-softmmu riscv64-softmmu
s390x-softmmu x86_64-softmmu
MAKE_CHECK_ARGS: check-tcg
build-loongarch64:
extends: .native_build_job_template
needs:
job: loongarch-debian-cross-container
variables:
IMAGE: debian-loongarch-cross
CONFIGURE_ARGS: --disable-tools --enable-debug
TARGETS: loongarch64-linux-user loongarch64-softmmu
TARGETS: xtensa-softmmu arm-softmmu aarch64-softmmu alpha-softmmu
MAKE_CHECK_ARGS: check-tcg
# We build tricore in a very minimal tricore only container
@@ -344,7 +311,7 @@ clang-user:
variables:
IMAGE: debian-all-test-cross
CONFIGURE_ARGS: --cc=clang --cxx=clang++ --disable-system
--target-list-exclude=alpha-linux-user,microblazeel-linux-user,aarch64_be-linux-user,i386-linux-user,m68k-linux-user,mipsn32el-linux-user,xtensaeb-linux-user
--target-list-exclude=microblazeel-linux-user,aarch64_be-linux-user,i386-linux-user,m68k-linux-user,mipsn32el-linux-user,xtensaeb-linux-user
--extra-cflags=-fsanitize=undefined --extra-cflags=-fno-sanitize-recover=undefined
MAKE_CHECK_ARGS: check-unit check-tcg
@@ -487,7 +454,7 @@ gcov:
IMAGE: ubuntu2204
CONFIGURE_ARGS: --enable-gcov
TARGETS: aarch64-softmmu ppc64-softmmu s390x-softmmu x86_64-softmmu
MAKE_CHECK_ARGS: check-unit check-softfloat
MAKE_CHECK_ARGS: check
after_script:
- cd build
- gcovr --xml-pretty --exclude-unreachable-branches --print-summary
@@ -495,12 +462,8 @@ gcov:
coverage: /^\s*lines:\s*\d+.\d+\%/
artifacts:
name: ${CI_JOB_NAME}-${CI_COMMIT_REF_NAME}-${CI_COMMIT_SHA}
when: always
expire_in: 2 days
paths:
- build/meson-logs/testlog.txt
reports:
junit: build/meson-logs/testlog.junit.xml
coverage_report:
coverage_format: cobertura
path: build/coverage.xml
@@ -531,7 +494,7 @@ build-tci:
variables:
IMAGE: debian-all-test-cross
script:
- TARGETS="aarch64 arm hppa m68k microblaze ppc64 s390x x86_64"
- TARGETS="aarch64 alpha arm hppa m68k microblaze ppc64 s390x x86_64"
- mkdir build
- cd build
- ../configure --enable-tcg-interpreter --disable-docs --disable-gtk --disable-vnc
@@ -569,7 +532,7 @@ build-without-defaults:
build-libvhost-user:
extends: .base_job_template
stage: build
image: $CI_REGISTRY_IMAGE/qemu/fedora:$QEMU_CI_CONTAINER_TAG
image: $CI_REGISTRY_IMAGE/qemu/fedora:latest
needs:
job: amd64-fedora-container
script:
@@ -609,7 +572,7 @@ build-tools-and-docs-debian:
# of what topic branch they're currently using
pages:
extends: .base_job_template
image: $CI_REGISTRY_IMAGE/qemu/debian-amd64:$QEMU_CI_CONTAINER_TAG
image: $CI_REGISTRY_IMAGE/qemu/debian-amd64:latest
stage: test
needs:
- job: build-tools-and-docs-debian
@@ -624,7 +587,6 @@ pages:
- make -C build install DESTDIR=$(pwd)/temp-install
- mv temp-install/usr/local/share/doc/qemu/* public/
artifacts:
when: on_success
paths:
- public
variables:

View File

@@ -15,10 +15,8 @@
stage: build
image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:master
needs: []
# 20 mins larger than "timeout_in" in cirrus/build.yml
# as there's often a 5-10 minute delay before Cirrus CI
# actually starts the task
timeout: 80m
allow_failure: true
script:
- source .gitlab-ci.d/cirrus/$NAME.vars
- sed -e "s|[@]CI_REPOSITORY_URL@|$CI_REPOSITORY_URL|g"
@@ -52,7 +50,7 @@ x64-freebsd-13-build:
NAME: freebsd-13
CIRRUS_VM_INSTANCE_TYPE: freebsd_instance
CIRRUS_VM_IMAGE_SELECTOR: image_family
CIRRUS_VM_IMAGE_NAME: freebsd-13-2
CIRRUS_VM_IMAGE_NAME: freebsd-13-1
CIRRUS_VM_CPUS: 8
CIRRUS_VM_RAM: 8G
UPDATE_COMMAND: pkg update; pkg upgrade -y

View File

@@ -16,8 +16,6 @@ env:
TEST_TARGETS: "@TEST_TARGETS@"
build_task:
# A little shorter than GitLab timeout in ../cirrus.yml
timeout_in: 60m
install_script:
- @UPDATE_COMMAND@
- @INSTALL_COMMAND@ @PKGS@

View File

@@ -11,6 +11,6 @@ MAKE='/usr/local/bin/gmake'
NINJA='/usr/local/bin/ninja'
PACKAGING_COMMAND='pkg'
PIP3='/usr/local/bin/pip-3.8'
PKGS='alsa-lib bash bison bzip2 ca_root_nss capstone4 ccache cmocka ctags curl cyrus-sasl dbus diffutils dtc flex fusefs-libs3 gettext git glib gmake gnutls gsed gtk3 json-c libepoxy libffi libgcrypt libjpeg-turbo libnfs libslirp libspice-server libssh libtasn1 llvm lzo2 meson mtools ncurses nettle ninja opencv pixman pkgconf png py39-numpy py39-pillow py39-pip py39-sphinx py39-sphinx_rtd_theme py39-tomli py39-yaml python3 rpm2cpio sdl2 sdl2_image snappy sndio socat spice-protocol tesseract usbredir virglrenderer vte3 xorriso zstd'
PKGS='alsa-lib bash bison bzip2 ca_root_nss capstone4 ccache cmocka ctags curl cyrus-sasl dbus diffutils dtc flex fusefs-libs3 gettext git glib gmake gnutls gsed gtk3 json-c libepoxy libffi libgcrypt libjpeg-turbo libnfs libslirp libspice-server libssh libtasn1 llvm lzo2 meson mtools ncurses nettle ninja opencv pixman pkgconf png py39-numpy py39-pillow py39-pip py39-sphinx py39-sphinx_rtd_theme py39-yaml python3 rpm2cpio sdl2 sdl2_image snappy sndio socat spice-protocol tesseract usbredir virglrenderer vte3 xorriso zstd'
PYPI_PKGS=''
PYTHON='/usr/local/bin/python3'

View File

@@ -15,7 +15,7 @@ env:
folder: $HOME/.cache/qemu-vm
install_script:
- dnf update -y
- dnf install -y git make openssh-clients qemu-img qemu-system-x86 wget meson
- dnf install -y git make openssh-clients qemu-img qemu-system-x86 wget
clone_script:
- git clone --depth 100 "$CI_REPOSITORY_URL" .
- git fetch origin "$CI_COMMIT_REF_NAME"

View File

@@ -11,6 +11,6 @@ MAKE='/opt/homebrew/bin/gmake'
NINJA='/opt/homebrew/bin/ninja'
PACKAGING_COMMAND='brew'
PIP3='/opt/homebrew/bin/pip3'
PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy socat sparse spice-protocol swtpm tesseract usbredir vde vte3 xorriso zlib zstd'
PYPI_PKGS='PyYAML numpy pillow sphinx sphinx-rtd-theme tomli'
PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy socat sparse spice-protocol tesseract usbredir vde vte3 xorriso zlib zstd'
PYPI_PKGS='PyYAML numpy pillow sphinx sphinx-rtd-theme'
PYTHON='/opt/homebrew/bin/python3'

View File

@@ -1,3 +1,9 @@
alpha-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-alpha-cross
amd64-debian-cross-container:
extends: .container_job_template
stage: containers
@@ -10,12 +16,6 @@ amd64-debian-user-cross-container:
variables:
NAME: debian-all-test-cross
amd64-debian-legacy-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-legacy-test-cross
arm64-debian-cross-container:
extends: .container_job_template
stage: containers
@@ -40,11 +40,23 @@ hexagon-cross-container:
variables:
NAME: debian-hexagon-cross
loongarch-debian-cross-container:
hppa-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-loongarch-cross
NAME: debian-hppa-cross
m68k-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-m68k-cross
mips64-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-mips64-cross
mips64el-debian-cross-container:
extends: .container_job_template
@@ -52,12 +64,24 @@ mips64el-debian-cross-container:
variables:
NAME: debian-mips64el-cross
mips-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-mips-cross
mipsel-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-mipsel-cross
powerpc-test-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-powerpc-test-cross
ppc64el-debian-cross-container:
extends: .container_job_template
stage: containers
@@ -71,7 +95,13 @@ riscv64-debian-cross-container:
allow_failure: true
variables:
NAME: debian-riscv64-cross
QEMU_JOB_OPTIONAL: 1
# we can however build TCG tests using a non-sid base
riscv64-debian-test-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-riscv64-test-cross
s390x-debian-cross-container:
extends: .container_job_template
@@ -79,6 +109,18 @@ s390x-debian-cross-container:
variables:
NAME: debian-s390x-cross
sh4-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-sh4-cross
sparc64-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-sparc64-cross
tricore-debian-cross-container:
extends: .container_job_template
stage: containers

View File

@@ -5,8 +5,7 @@
services:
- docker:dind
before_script:
- export TAG="$CI_REGISTRY_IMAGE/qemu/$NAME:$QEMU_CI_CONTAINER_TAG"
# Always ':latest' because we always use upstream as a common cache source
- export TAG="$CI_REGISTRY_IMAGE/qemu/$NAME:latest"
- export COMMON_TAG="$CI_REGISTRY/qemu-project/qemu/qemu/$NAME:latest"
- docker login $CI_REGISTRY -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD"
- until docker info; do sleep 1; done

View File

@@ -1,21 +1,11 @@
.cross_system_build_job:
extends: .base_job_template
stage: build
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
cache:
paths:
- ccache
key: "$CI_JOB_NAME"
when: always
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
timeout: 80m
script:
- export CCACHE_BASEDIR="$(pwd)"
- export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
- export CCACHE_MAXSIZE="500M"
- export PATH="$CCACHE_WRAPPERSDIR:$PATH"
- mkdir build
- cd build
- ccache --zero-stats
- ../configure --enable-werror --disable-docs --enable-fdt=system
--disable-user $QEMU_CONFIGURE_OPTS $EXTRA_CONFIGURE_OPTS
--target-list-exclude="arm-softmmu cris-softmmu
@@ -28,7 +18,6 @@
version="$(git describe --match v[0-9]* 2>/dev/null || git rev-parse --short HEAD)";
mv -v qemu-setup*.exe qemu-setup-${version}.exe;
fi
- ccache --show-stats
# Job to cross-build specific accelerators.
#
@@ -38,17 +27,9 @@
.cross_accel_build_job:
extends: .base_job_template
stage: build
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
timeout: 30m
cache:
paths:
- ccache/
key: "$CI_JOB_NAME"
script:
- export CCACHE_BASEDIR="$(pwd)"
- export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
- export CCACHE_MAXSIZE="500M"
- export PATH="$CCACHE_WRAPPERSDIR:$PATH"
- mkdir build
- cd build
- ../configure --enable-werror --disable-docs $QEMU_CONFIGURE_OPTS
@@ -58,15 +39,8 @@
.cross_user_build_job:
extends: .base_job_template
stage: build
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
cache:
paths:
- ccache/
key: "$CI_JOB_NAME"
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
script:
- export CCACHE_BASEDIR="$(pwd)"
- export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
- export CCACHE_MAXSIZE="500M"
- mkdir build
- cd build
- ../configure --enable-werror --disable-docs $QEMU_CONFIGURE_OPTS
@@ -81,7 +55,6 @@
.cross_test_artifacts:
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
when: always
expire_in: 7 days
paths:
- build/meson-logs/testlog.txt

View File

@@ -57,7 +57,7 @@ cross-i386-tci:
variables:
IMAGE: fedora-i386-cross
ACCEL: tcg-interpreter
EXTRA_CONFIGURE_OPTS: --target-list=i386-softmmu,i386-linux-user,aarch64-softmmu,aarch64-linux-user,ppc-softmmu,ppc-linux-user --disable-plugins
EXTRA_CONFIGURE_OPTS: --target-list=i386-softmmu,i386-linux-user,aarch64-softmmu,aarch64-linux-user,ppc-softmmu,ppc-linux-user
MAKE_CHECK_ARGS: check check-tcg
cross-mipsel-system:
@@ -165,11 +165,10 @@ cross-win32-system:
job: win32-fedora-cross-container
variables:
IMAGE: fedora-win32-cross
EXTRA_CONFIGURE_OPTS: --enable-fdt=internal --disable-plugins
EXTRA_CONFIGURE_OPTS: --enable-fdt=internal
CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu m68k-softmmu
microblazeel-softmmu mips64el-softmmu nios2-softmmu
artifacts:
when: on_success
paths:
- build/qemu-setup*.exe
@@ -179,13 +178,12 @@ cross-win64-system:
job: win64-fedora-cross-container
variables:
IMAGE: fedora-win64-cross
EXTRA_CONFIGURE_OPTS: --enable-fdt=internal --disable-plugins
EXTRA_CONFIGURE_OPTS: --enable-fdt=internal
CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu
m68k-softmmu microblazeel-softmmu nios2-softmmu
or1k-softmmu rx-softmmu sh4eb-softmmu sparc64-softmmu
tricore-softmmu xtensaeb-softmmu
artifacts:
when: on_success
paths:
- build/qemu-setup*.exe

View File

@@ -63,7 +63,6 @@ build-opensbi:
stage: build
needs: ['docker-opensbi']
artifacts:
when: on_success
paths: # 'artifacts.zip' will contains the following files:
- pc-bios/opensbi-riscv32-generic-fw_dynamic.bin
- pc-bios/opensbi-riscv64-generic-fw_dynamic.bin

View File

@@ -26,7 +26,7 @@ check-dco:
check-python-minreqs:
extends: .base_job_template
stage: test
image: $CI_REGISTRY_IMAGE/qemu/python:$QEMU_CI_CONTAINER_TAG
image: $CI_REGISTRY_IMAGE/qemu/python:latest
script:
- make -C python check-minreqs
variables:
@@ -37,7 +37,7 @@ check-python-minreqs:
check-python-tox:
extends: .base_job_template
stage: test
image: $CI_REGISTRY_IMAGE/qemu/python:$QEMU_CI_CONTAINER_TAG
image: $CI_REGISTRY_IMAGE/qemu/python:latest
script:
- make -C python check-tox
variables:

View File

@@ -5,60 +5,21 @@
- windows
- windows-1809
cache:
key: "$CI_JOB_NAME"
key: "${CI_JOB_NAME}-cache"
paths:
- msys64/var/cache
- ccache
when: always
- ${CI_PROJECT_DIR}/msys64/var/cache
needs: []
stage: build
timeout: 100m
variables:
# This feature doesn't (currently) work with PowerShell, it stops
# the echo'ing of commands being run and doesn't show any timing
FF_SCRIPT_SECTIONS: 0
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
expire_in: 7 days
paths:
- build/meson-logs/testlog.txt
reports:
junit: "build/meson-logs/testlog.junit.xml"
timeout: 80m
before_script:
- Write-Output "Acquiring msys2.exe installer at $(Get-Date -Format u)"
- If ( !(Test-Path -Path msys64\var\cache ) ) {
mkdir msys64\var\cache
}
- Invoke-WebRequest
"https://repo.msys2.org/distrib/msys2-x86_64-latest.sfx.exe.sig"
-outfile "msys2.exe.sig"
- if ( Test-Path -Path msys64\var\cache\msys2.exe.sig ) {
Write-Output "Cached installer sig" ;
if ( ((Get-FileHash msys2.exe.sig).Hash -ne (Get-FileHash msys64\var\cache\msys2.exe.sig).Hash) ) {
Write-Output "Mis-matched installer sig, new installer download required" ;
Remove-Item -Path msys64\var\cache\msys2.exe.sig ;
if ( Test-Path -Path msys64\var\cache\msys2.exe ) {
Remove-Item -Path msys64\var\cache\msys2.exe
}
} else {
Write-Output "Matched installer sig, cached installer still valid"
}
} else {
Write-Output "No cached installer sig, new installer download required" ;
if ( Test-Path -Path msys64\var\cache\msys2.exe ) {
Remove-Item -Path msys64\var\cache\msys2.exe
}
}
- if ( !(Test-Path -Path msys64\var\cache\msys2.exe ) ) {
Write-Output "Fetching latest installer" ;
- If ( !(Test-Path -Path msys64\var\cache\msys2.exe ) ) {
Invoke-WebRequest
"https://repo.msys2.org/distrib/msys2-x86_64-latest.sfx.exe"
-outfile "msys64\var\cache\msys2.exe" ;
Copy-Item -Path msys2.exe.sig -Destination msys64\var\cache\msys2.exe.sig
} else {
Write-Output "Using cached installer"
"https://github.com/msys2/msys2-installer/releases/download/2022-06-03/msys2-base-x86_64-20220603.sfx.exe"
-outfile "msys64\var\cache\msys2.exe"
}
- Write-Output "Invoking msys2.exe installer at $(Get-Date -Format u)"
- msys64\var\cache\msys2.exe -y
- ((Get-Content -path .\msys64\etc\\post-install\\07-pacman-key.post -Raw)
-replace '--refresh-keys', '--version') |
@@ -67,75 +28,97 @@
- .\msys64\usr\bin\bash -lc 'pacman --noconfirm -Syuu' # Core update
- .\msys64\usr\bin\bash -lc 'pacman --noconfirm -Syuu' # Normal update
- taskkill /F /FI "MODULES eq msys-2.0.dll"
script:
- Write-Output "Installing mingw packages at $(Get-Date -Format u)"
- .\msys64\usr\bin\bash -lc "pacman -Sy --noconfirm --needed
bison diffutils flex
git grep make sed
$MINGW_TARGET-binutils
$MINGW_TARGET-capstone
$MINGW_TARGET-ccache
$MINGW_TARGET-curl
$MINGW_TARGET-cyrus-sasl
$MINGW_TARGET-dtc
$MINGW_TARGET-gcc
$MINGW_TARGET-glib2
$MINGW_TARGET-gnutls
$MINGW_TARGET-gtk3
$MINGW_TARGET-libgcrypt
$MINGW_TARGET-libjpeg-turbo
$MINGW_TARGET-libnfs
$MINGW_TARGET-libpng
$MINGW_TARGET-libssh
$MINGW_TARGET-libtasn1
$MINGW_TARGET-libusb
$MINGW_TARGET-lzo2
$MINGW_TARGET-nettle
$MINGW_TARGET-ninja
$MINGW_TARGET-pixman
$MINGW_TARGET-pkgconf
$MINGW_TARGET-python
$MINGW_TARGET-SDL2
$MINGW_TARGET-SDL2_image
$MINGW_TARGET-snappy
$MINGW_TARGET-spice
$MINGW_TARGET-usbredir
$MINGW_TARGET-zstd "
- Write-Output "Running build at $(Get-Date -Format u)"
- $env:CHERE_INVOKING = 'yes' # Preserve the current working directory
- $env:MSYS = 'winsymlinks:native' # Enable native Windows symlink
- $env:CCACHE_BASEDIR = "$env:CI_PROJECT_DIR"
- $env:CCACHE_DIR = "$env:CCACHE_BASEDIR/ccache"
- $env:CCACHE_MAXSIZE = "500M"
- $env:CCACHE_DEPEND = 1 # cache misses are too expensive with preprocessor mode
- $env:CC = "ccache gcc"
- mkdir build
- cd build
- ..\msys64\usr\bin\bash -lc "ccache --zero-stats"
- ..\msys64\usr\bin\bash -lc "../configure --enable-fdt=system $CONFIGURE_ARGS"
- ..\msys64\usr\bin\bash -lc "make"
- ..\msys64\usr\bin\bash -lc "make check MTESTARGS='$TEST_ARGS' || { cat meson-logs/testlog.txt; exit 1; } ;"
- ..\msys64\usr\bin\bash -lc "ccache --show-stats"
- Write-Output "Finished build at $(Get-Date -Format u)"
msys2-64bit:
extends: .shared_msys2_builder
variables:
MINGW_TARGET: mingw-w64-x86_64
MSYSTEM: MINGW64
# do not remove "--without-default-devices"!
# commit 9f8e6cad65a6 ("gitlab-ci: Speed up the msys2-64bit job by using --without-default-devices"
# changed to compile QEMU with the --without-default-devices switch
# for the msys2 64-bit job, due to the build could not complete within
CONFIGURE_ARGS: --target-list=x86_64-softmmu --without-default-devices -Ddebug=false -Doptimization=0
# qTests don't run successfully with "--without-default-devices",
# so let's exclude the qtests from CI for now.
TEST_ARGS: --no-suite qtest
script:
- .\msys64\usr\bin\bash -lc "pacman -Sy --noconfirm --needed
bison diffutils flex
git grep make sed
mingw-w64-x86_64-capstone
mingw-w64-x86_64-curl
mingw-w64-x86_64-cyrus-sasl
mingw-w64-x86_64-dtc
mingw-w64-x86_64-gcc
mingw-w64-x86_64-glib2
mingw-w64-x86_64-gnutls
mingw-w64-x86_64-gtk3
mingw-w64-x86_64-libgcrypt
mingw-w64-x86_64-libjpeg-turbo
mingw-w64-x86_64-libnfs
mingw-w64-x86_64-libpng
mingw-w64-x86_64-libssh
mingw-w64-x86_64-libtasn1
mingw-w64-x86_64-libusb
mingw-w64-x86_64-lzo2
mingw-w64-x86_64-nettle
mingw-w64-x86_64-ninja
mingw-w64-x86_64-pixman
mingw-w64-x86_64-pkgconf
mingw-w64-x86_64-python
mingw-w64-x86_64-SDL2
mingw-w64-x86_64-SDL2_image
mingw-w64-x86_64-snappy
mingw-w64-x86_64-spice
mingw-w64-x86_64-usbredir
mingw-w64-x86_64-zstd "
- $env:CHERE_INVOKING = 'yes' # Preserve the current working directory
- $env:MSYSTEM = 'MINGW64' # Start a 64-bit MinGW environment
- $env:MSYS = 'winsymlinks:native' # Enable native Windows symlink
- mkdir output
- cd output
# Note: do not remove "--without-default-devices"!
# commit 9f8e6cad65a6 ("gitlab-ci: Speed up the msys2-64bit job by using --without-default-devices"
# changed to compile QEMU with the --without-default-devices switch
# for the msys2 64-bit job, due to the build could not complete within
# the project timeout.
- ..\msys64\usr\bin\bash -lc '../configure --target-list=x86_64-softmmu
--without-default-devices --enable-fdt=system'
- ..\msys64\usr\bin\bash -lc 'make'
# qTests don't run successfully with "--without-default-devices",
# so let's exclude the qtests from CI for now.
- ..\msys64\usr\bin\bash -lc 'make check MTESTARGS=\"--no-suite qtest\" || { cat meson-logs/testlog.txt; exit 1; } ;'
msys2-32bit:
extends: .shared_msys2_builder
variables:
MINGW_TARGET: mingw-w64-i686
MSYSTEM: MINGW32
CONFIGURE_ARGS: --target-list=ppc64-softmmu -Ddebug=false -Doptimization=0
TEST_ARGS: --no-suite qtest
script:
- .\msys64\usr\bin\bash -lc "pacman -Sy --noconfirm --needed
bison diffutils flex
git grep make sed
mingw-w64-i686-capstone
mingw-w64-i686-curl
mingw-w64-i686-cyrus-sasl
mingw-w64-i686-dtc
mingw-w64-i686-gcc
mingw-w64-i686-glib2
mingw-w64-i686-gnutls
mingw-w64-i686-gtk3
mingw-w64-i686-libgcrypt
mingw-w64-i686-libjpeg-turbo
mingw-w64-i686-libnfs
mingw-w64-i686-libpng
mingw-w64-i686-libssh
mingw-w64-i686-libtasn1
mingw-w64-i686-libusb
mingw-w64-i686-lzo2
mingw-w64-i686-nettle
mingw-w64-i686-ninja
mingw-w64-i686-pixman
mingw-w64-i686-pkgconf
mingw-w64-i686-python
mingw-w64-i686-SDL2
mingw-w64-i686-SDL2_image
mingw-w64-i686-snappy
mingw-w64-i686-spice
mingw-w64-i686-usbredir
mingw-w64-i686-zstd "
- $env:CHERE_INVOKING = 'yes' # Preserve the current working directory
- $env:MSYSTEM = 'MINGW32' # Start a 32-bit MinGW environment
- $env:MSYS = 'winsymlinks:native' # Enable native Windows symlink
- mkdir output
- cd output
- ..\msys64\usr\bin\bash -lc '../configure --target-list=ppc64-softmmu
--enable-fdt=system'
- ..\msys64\usr\bin\bash -lc 'make'
- ..\msys64\usr\bin\bash -lc 'make check MTESTARGS=\"--no-suite qtest\" ||
{ cat meson-logs/testlog.txt; exit 1; }'

View File

@@ -30,38 +30,22 @@ malc <av1474@comtv.ru> malc <malc@c046a42c-6fe2-441c-8c8c-71466251a162>
# Corrupted Author fields
Aaron Larson <alarson@ddci.com> alarson@ddci.com
Andreas Färber <andreas.faerber@web.de> Andreas Färber <andreas.faerber>
fanwenjie <fanwj@mail.ustc.edu.cn> fanwj@mail.ustc.edu.cn <fanwj@mail.ustc.edu.cn>
Jason Wang <jasowang@redhat.com> Jason Wang <jasowang>
Marek Dolata <mkdolata@us.ibm.com> mkdolata@us.ibm.com <mkdolata@us.ibm.com>
Michael Ellerman <mpe@ellerman.id.au> michael@ozlabs.org <michael@ozlabs.org>
Nick Hudson <hnick@vmware.com> hnick@vmware.com <hnick@vmware.com>
Timothée Cocault <timothee.cocault@gmail.com> timothee.cocault@gmail.com <timothee.cocault@gmail.com>
# There is also a:
# (no author) <(no author)@c046a42c-6fe2-441c-8c8c-71466251a162>
# for the cvs2svn initialization commit e63c3dc74bf.
# Next, translate a few commits where mailman rewrote the From: line due
# to strict SPF and DMARC. Usually, our build process should be flagging
# commits like these before maintainer merges; if you find the need to add
# a line here, please also report a bug against the part of the build
# process that let the mis-attribution slip through in the first place.
#
# If the mailing list munges your emails, use:
# git config sendemail.from '"Your Name" <your.email@example.com>'
# the use of "" in that line will differ from the typically unquoted
# 'git config user.name', which in turn is sufficient for 'git send-email'
# to add an extra From: line in the body of your email that takes
# precedence over any munged From: in the mail's headers.
# See https://lists.openembedded.org/g/openembedded-core/message/166515
# and https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg06784.html
# to strict SPF, although we prefer to avoid adding more entries like that.
Ed Swierk <eswierk@skyportsystems.com> Ed Swierk via Qemu-devel <qemu-devel@nongnu.org>
Ian McKellar <ianloic@google.com> Ian McKellar via Qemu-devel <qemu-devel@nongnu.org>
Julia Suvorova <jusual@mail.ru> Julia Suvorova via Qemu-devel <qemu-devel@nongnu.org>
Justin Terry (VM) <juterry@microsoft.com> Justin Terry (VM) via Qemu-devel <qemu-devel@nongnu.org>
Stefan Weil <sw@weilnetz.de> Stefan Weil via <qemu-devel@nongnu.org>
Andrey Drobyshev <andrey.drobyshev@virtuozzo.com> Andrey Drobyshev via <qemu-block@nongnu.org>
BALATON Zoltan <balaton@eik.bme.hu> BALATON Zoltan via <qemu-ppc@nongnu.org>
# Next, replace old addresses by a more recent one.
Aleksandar Markovic <aleksandar.qemu.devel@gmail.com> <aleksandar.markovic@mips.com>
@@ -83,9 +67,6 @@ Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
Leif Lindholm <quic_llindhol@quicinc.com> <leif.lindholm@linaro.org>
Leif Lindholm <quic_llindhol@quicinc.com> <leif@nuviainc.com>
Luc Michel <luc@lmichel.fr> <luc.michel@git.antfield.fr>
Luc Michel <luc@lmichel.fr> <luc.michel@greensocs.com>
Luc Michel <luc@lmichel.fr> <lmichel@kalray.eu>
Radoslaw Biernacki <rad@semihalf.com> <radoslaw.biernacki@linaro.org>
Paul Brook <paul@nowt.org> <paul@codesourcery.com>
Paul Burton <paulburton@kernel.org> <paul.burton@mips.com>
@@ -95,10 +76,9 @@ Paul Burton <paulburton@kernel.org> <pburton@wavecomp.com>
Philippe Mathieu-Daudé <philmd@linaro.org> <f4bug@amsat.org>
Philippe Mathieu-Daudé <philmd@linaro.org> <philmd@redhat.com>
Philippe Mathieu-Daudé <philmd@linaro.org> <philmd@fungible.com>
Roman Bolshakov <rbolshakov@ddn.com> <r.bolshakov@yadro.com>
Stefan Brankovic <stefan.brankovic@syrmia.com> <stefan.brankovic@rt-rk.com.com>
Taylor Simpson <ltaylorsimpson@gmail.com> <tsimpson@quicinc.com>
Yongbok Kim <yongbok.kim@mips.com> <yongbok.kim@imgtec.com>
Taylor Simpson <ltaylorsimpson@gmail.com> <tsimpson@quicinc.com>
# Also list preferred name forms where people have changed their
# git author config, or had utf8/latin1 encoding issues.

View File

@@ -34,7 +34,7 @@ env:
- BASE_CONFIG="--disable-docs --disable-tools"
- TEST_BUILD_CMD=""
- TEST_CMD="make check V=1"
# This is broadly a list of "mainline" system targets which have support across the major distros
# This is broadly a list of "mainline" softmmu targets which have support across the major distros
- MAIN_SOFTMMU_TARGETS="aarch64-softmmu,mips64-softmmu,ppc64-softmmu,riscv64-softmmu,s390x-softmmu,x86_64-softmmu"
- CCACHE_SLOPPINESS="include_file_ctime,include_file_mtime"
- CCACHE_MAXSIZE=1G
@@ -197,7 +197,7 @@ jobs:
$(exit $BUILD_RC);
fi
- name: "[s390x] GCC (other-system)"
- name: "[s390x] GCC (other-softmmu)"
arch: s390x
dist: focal
addons:

View File

@@ -11,9 +11,6 @@ config OPENGL
config X11
bool
config PIXMAN
bool
config SPICE
bool
@@ -49,6 +46,3 @@ config FUZZ
config VFIO_USER_SERVER_ALLOWED
bool
imply VFIO_USER_SERVER
config HV_BALLOON_POSSIBLE
bool

File diff suppressed because it is too large

View File

@@ -28,7 +28,7 @@ quiet-command = $(quiet-@)$(call quiet-command-run,$1,$2,$3)
UNCHECKED_GOALS := TAGS gtags cscope ctags dist \
help check-help print-% \
docker docker-% lcitool-refresh vm-help vm-test vm-build-%
docker docker-% vm-help vm-test vm-build-%
all:
.PHONY: all clean distclean recurse-all dist msi FORCE
@@ -83,17 +83,16 @@ config-host.mak: $(SRC_PATH)/configure $(SRC_PATH)/scripts/meson-buildoptions.sh
@if test -f meson-private/coredata.dat; then \
./config.status --skip-meson; \
else \
./config.status; \
./config.status && touch build.ninja.stamp; \
fi
# 2. meson.stamp exists if meson has run at least once (so ninja reconfigure
# works), but otherwise never needs to be updated
meson-private/coredata.dat: meson.stamp
meson.stamp: config-host.mak
@touch meson.stamp
# 3. ensure meson-generated build files are up-to-date
# 3. ensure generated build files are up-to-date
ifneq ($(NINJA),)
Makefile.ninja: build.ninja
@@ -107,19 +106,11 @@ Makefile.ninja: build.ninja
endif
ifneq ($(MESON),)
# The path to meson always points to pyvenv/bin/meson, but the absolute
# paths could change. In that case, force a regeneration of build.ninja.
# Note that this invocation of $(NINJA), just like when Make rebuilds
# Makefiles, does not include -n.
# A separate rule is needed for Makefile dependencies to avoid -n
build.ninja: build.ninja.stamp
$(build-files):
build.ninja.stamp: meson.stamp $(build-files)
@if test "$$(cat build.ninja.stamp)" = "$(MESON)" && test -n "$(NINJA)"; then \
$(NINJA) build.ninja; \
else \
echo "$(MESON) setup --reconfigure $(SRC_PATH)"; \
$(MESON) setup --reconfigure $(SRC_PATH); \
fi && echo "$(MESON)" > $@
$(MESON) setup --reconfigure $(SRC_PATH) && touch $@
Makefile.mtest: build.ninja scripts/mtest2make.py
$(MESON) introspect --targets --tests --benchmarks | $(PYTHON) scripts/mtest2make.py > $@
@@ -164,6 +155,14 @@ ifneq ($(filter $(ninja-targets), $(ninja-cmd-goals)),)
endif
endif
ifeq ($(CONFIG_PLUGIN),y)
.PHONY: plugins
plugins:
$(call quiet-command,\
$(MAKE) $(SUBDIR_MAKEFLAGS) -C contrib/plugins V="$(V)", \
"BUILD", "example plugins")
endif # $(CONFIG_PLUGIN)
else # config-host.mak does not exist
ifneq ($(filter-out $(UNCHECKED_GOALS),$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
$(error Please call configure before running make)
@@ -176,20 +175,15 @@ include $(SRC_PATH)/tests/Makefile.include
all: recurse-all
SUBDIR_RULES=$(foreach t, all clean distclean, $(addsuffix /$(t), $(SUBDIRS)))
.PHONY: $(SUBDIR_RULES)
$(SUBDIR_RULES):
ROMS_RULES=$(foreach t, all clean distclean, $(addsuffix /$(t), $(ROMS)))
.PHONY: $(ROMS_RULES)
$(ROMS_RULES):
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C $(dir $@) V="$(V)" TARGET_DIR="$(dir $@)" $(notdir $@),)
ifneq ($(filter contrib/plugins, $(SUBDIRS)),)
.PHONY: plugins
plugins: contrib/plugins/all
endif
.PHONY: recurse-all recurse-clean
recurse-all: $(addsuffix /all, $(SUBDIRS))
recurse-clean: $(addsuffix /clean, $(SUBDIRS))
recurse-distclean: $(addsuffix /distclean, $(SUBDIRS))
recurse-all: $(addsuffix /all, $(ROMS))
recurse-clean: $(addsuffix /clean, $(ROMS))
recurse-distclean: $(addsuffix /distclean, $(ROMS))
######################################################################
@@ -283,13 +277,6 @@ include $(SRC_PATH)/tests/vm/Makefile.include
print-help-run = printf " %-30s - %s\\n" "$1" "$2"
print-help = @$(call print-help-run,$1,$2)
.PHONY: update-linux-vdso
update-linux-vdso:
@for m in $(SRC_PATH)/linux-user/*/Makefile.vdso; do \
$(MAKE) $(SUBDIR_MAKEFLAGS) -C $$(dirname $$m) -f Makefile.vdso \
SRC_PATH=$(SRC_PATH) BUILD_DIR=$(BUILD_DIR); \
done
.PHONY: help
help:
@echo 'Generic targets:'
@@ -300,7 +287,7 @@ help:
$(call print-help,cscope,Generate cscope index)
$(call print-help,sparse,Run sparse on the QEMU source)
@echo ''
ifneq ($(filter contrib/plugins, $(SUBDIRS)),)
ifeq ($(CONFIG_PLUGIN),y)
@echo 'Plugin targets:'
$(call print-help,plugins,Build the example TCG plugins)
@echo ''
@@ -310,9 +297,6 @@ endif
$(call print-help,distclean,Remove all generated files)
$(call print-help,dist,Build a distributable tarball)
@echo ''
@echo 'Linux-user targets:'
$(call print-help,update-linux-vdso,Build linux-user vdso images)
@echo ''
@echo 'Test targets:'
$(call print-help,check,Run all tests (check-help for details))
$(call print-help,bench,Run all benchmarks)
@@ -323,7 +307,7 @@ endif
@echo 'Documentation targets:'
$(call print-help,html man,Build documentation in specified format)
@echo ''
ifneq ($(filter msi, $(ninja-targets)),)
ifdef CONFIG_WIN32
@echo 'Windows targets:'
$(call print-help,installer,Build NSIS-based installer for QEMU)
$(call print-help,msi,Build MSI-based installer for qemu-ga)

View File

@@ -1 +1 @@
8.1.50
8.0.50

View File

@@ -4,6 +4,9 @@ config WHPX
config NVMM
bool
config HAX
bool
config HVF
bool

View File

@@ -30,7 +30,7 @@
#include "hw/core/accel-cpu.h"
#ifndef CONFIG_USER_ONLY
#include "accel-system.h"
#include "accel-softmmu.h"
#endif /* !CONFIG_USER_ONLY */
static const TypeInfo accel_type = {
@@ -119,37 +119,16 @@ void accel_cpu_instance_init(CPUState *cpu)
}
}
bool accel_cpu_common_realize(CPUState *cpu, Error **errp)
bool accel_cpu_realizefn(CPUState *cpu, Error **errp)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
AccelState *accel = current_accel();
AccelClass *acc = ACCEL_GET_CLASS(accel);
/* target specific realization */
if (cc->accel_cpu && cc->accel_cpu->cpu_target_realize
&& !cc->accel_cpu->cpu_target_realize(cpu, errp)) {
return false;
if (cc->accel_cpu && cc->accel_cpu->cpu_realizefn) {
return cc->accel_cpu->cpu_realizefn(cpu, errp);
}
/* generic realization */
if (acc->cpu_common_realize && !acc->cpu_common_realize(cpu, errp)) {
return false;
}
return true;
}
void accel_cpu_common_unrealize(CPUState *cpu)
{
AccelState *accel = current_accel();
AccelClass *acc = ACCEL_GET_CLASS(accel);
/* generic unrealization */
if (acc->cpu_common_unrealize) {
acc->cpu_common_unrealize(cpu);
}
}
int accel_supported_gdbstub_sstep_flags(void)
{
AccelState *accel = current_accel();

View File

@@ -28,7 +28,7 @@
#include "hw/boards.h"
#include "sysemu/cpus.h"
#include "qemu/error-report.h"
#include "accel-system.h"
#include "accel-softmmu.h"
int accel_init_machine(AccelState *accel, MachineState *ms)
{
@@ -99,8 +99,8 @@ static const TypeInfo accel_ops_type_info = {
.class_size = sizeof(AccelOpsClass),
};
static void accel_system_register_types(void)
static void accel_softmmu_register_types(void)
{
type_register_static(&accel_ops_type_info);
}
type_init(accel_system_register_types);
type_init(accel_softmmu_register_types);

View File

@@ -7,9 +7,9 @@
* See the COPYING file in the top-level directory.
*/
#ifndef ACCEL_SYSTEM_H
#define ACCEL_SYSTEM_H
#ifndef ACCEL_SOFTMMU_H
#define ACCEL_SOFTMMU_H
void accel_init_ops_interfaces(AccelClass *ac);
#endif /* ACCEL_SYSTEM_H */
#endif /* ACCEL_SOFTMMU_H */

View File

@@ -27,7 +27,7 @@ static void *dummy_cpu_thread_fn(void *arg)
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->neg.can_do_io = true;
cpu->can_do_io = 1;
current_cpu = cpu;
#ifndef _WIN32

View File

@@ -304,7 +304,7 @@ static void hvf_region_del(MemoryListener *listener,
static MemoryListener hvf_memory_listener = {
.name = "hvf",
.priority = MEMORY_LISTENER_PRIORITY_ACCEL,
.priority = 10,
.region_add = hvf_region_add,
.region_del = hvf_region_del,
.log_start = hvf_log_start,
@@ -372,19 +372,19 @@ type_init(hvf_type_init);
static void hvf_vcpu_destroy(CPUState *cpu)
{
hv_return_t ret = hv_vcpu_destroy(cpu->accel->fd);
hv_return_t ret = hv_vcpu_destroy(cpu->hvf->fd);
assert_hvf_ok(ret);
hvf_arch_vcpu_destroy(cpu);
g_free(cpu->accel);
cpu->accel = NULL;
g_free(cpu->hvf);
cpu->hvf = NULL;
}
static int hvf_init_vcpu(CPUState *cpu)
{
int r;
cpu->accel = g_new0(AccelCPUState, 1);
cpu->hvf = g_malloc0(sizeof(*cpu->hvf));
/* init cpu signals */
struct sigaction sigact;
@@ -393,19 +393,18 @@ static int hvf_init_vcpu(CPUState *cpu)
sigact.sa_handler = dummy_signal;
sigaction(SIG_IPI, &sigact, NULL);
pthread_sigmask(SIG_BLOCK, NULL, &cpu->accel->unblock_ipi_mask);
sigdelset(&cpu->accel->unblock_ipi_mask, SIG_IPI);
pthread_sigmask(SIG_BLOCK, NULL, &cpu->hvf->unblock_ipi_mask);
sigdelset(&cpu->hvf->unblock_ipi_mask, SIG_IPI);
#ifdef __aarch64__
r = hv_vcpu_create(&cpu->accel->fd,
(hv_vcpu_exit_t **)&cpu->accel->exit, NULL);
r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL);
#else
r = hv_vcpu_create((hv_vcpuid_t *)&cpu->accel->fd, HV_VCPU_DEFAULT);
r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT);
#endif
cpu->vcpu_dirty = 1;
assert_hvf_ok(r);
cpu->accel->guest_debug_enabled = false;
cpu->hvf->guest_debug_enabled = false;
return hvf_arch_init_vcpu(cpu);
}
@@ -428,7 +427,7 @@ static void *hvf_cpu_thread_fn(void *arg)
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->neg.can_do_io = true;
cpu->can_do_io = 1;
current_cpu = cpu;
hvf_init_vcpu(cpu);
@@ -474,7 +473,7 @@ static void hvf_start_vcpu_thread(CPUState *cpu)
cpu, QEMU_THREAD_JOINABLE);
}
static int hvf_insert_breakpoint(CPUState *cpu, int type, vaddr addr, vaddr len)
static int hvf_insert_breakpoint(CPUState *cpu, int type, hwaddr addr, hwaddr len)
{
struct hvf_sw_breakpoint *bp;
int err;
@@ -512,7 +511,7 @@ static int hvf_insert_breakpoint(CPUState *cpu, int type, vaddr addr, vaddr len)
return 0;
}
static int hvf_remove_breakpoint(CPUState *cpu, int type, vaddr addr, vaddr len)
static int hvf_remove_breakpoint(CPUState *cpu, int type, hwaddr addr, hwaddr len)
{
struct hvf_sw_breakpoint *bp;
int err;

View File

@@ -38,12 +38,6 @@ void assert_hvf_ok(hv_return_t ret)
case HV_UNSUPPORTED:
error_report("Error: HV_UNSUPPORTED");
break;
#if defined(MAC_OS_VERSION_11_0) && \
MAC_OS_X_VERSION_MIN_REQUIRED >= MAC_OS_VERSION_11_0
case HV_DENIED:
error_report("Error: HV_DENIED");
break;
#endif
default:
error_report("Unknown Error");
}
@@ -51,7 +45,7 @@ void assert_hvf_ok(hv_return_t ret)
abort();
}
struct hvf_sw_breakpoint *hvf_find_sw_breakpoint(CPUState *cpu, vaddr pc)
struct hvf_sw_breakpoint *hvf_find_sw_breakpoint(CPUState *cpu, target_ulong pc)
{
struct hvf_sw_breakpoint *bp;

View File

@@ -36,7 +36,7 @@ static void *kvm_vcpu_thread_fn(void *arg)
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->neg.can_do_io = true;
cpu->can_do_io = 1;
current_cpu = cpu;
r = kvm_init_vcpu(cpu, &error_fatal);

View File

@@ -90,6 +90,8 @@ bool kvm_kernel_irqchip;
bool kvm_split_irqchip;
bool kvm_async_interrupts_allowed;
bool kvm_halt_in_kernel_allowed;
bool kvm_eventfds_allowed;
bool kvm_irqfds_allowed;
bool kvm_resamplefds_allowed;
bool kvm_msi_via_irqfd_allowed;
bool kvm_gsi_routing_allowed;
@@ -97,6 +99,8 @@ bool kvm_gsi_direct_mapping;
bool kvm_allowed;
bool kvm_readonly_mem_allowed;
bool kvm_vm_attributes_allowed;
bool kvm_direct_msi_allowed;
bool kvm_ioeventfd_any_length_allowed;
bool kvm_msi_use_devid;
bool kvm_has_guest_debug;
static int kvm_sstep_flags;
@@ -107,9 +111,6 @@ static const KVMCapabilityInfo kvm_required_capabilites[] = {
KVM_CAP_INFO(USER_MEMORY),
KVM_CAP_INFO(DESTROY_MEMORY_REGION_WORKS),
KVM_CAP_INFO(JOIN_MEMORY_REGIONS_WORKS),
KVM_CAP_INFO(INTERNAL_ERROR_DATA),
KVM_CAP_INFO(IOEVENTFD),
KVM_CAP_INFO(IOEVENTFD_ANY_LENGTH),
KVM_CAP_LAST_INFO
};
@@ -173,31 +174,13 @@ void kvm_resample_fd_notify(int gsi)
}
}
unsigned int kvm_get_max_memslots(void)
int kvm_get_max_memslots(void)
{
KVMState *s = KVM_STATE(current_accel());
return s->nr_slots;
}
unsigned int kvm_get_free_memslots(void)
{
unsigned int used_slots = 0;
KVMState *s = kvm_state;
int i;
kvm_slots_lock();
for (i = 0; i < s->nr_as; i++) {
if (!s->as[i].ml) {
continue;
}
used_slots = MAX(used_slots, s->as[i].ml->nr_used_slots);
}
kvm_slots_unlock();
return s->nr_slots - used_slots;
}
/* Called with KVMMemoryListener.slots_lock held */
static KVMSlot *kvm_get_free_slot(KVMMemoryListener *kml)
{
@@ -213,6 +196,19 @@ static KVMSlot *kvm_get_free_slot(KVMMemoryListener *kml)
return NULL;
}
bool kvm_has_free_slot(MachineState *ms)
{
KVMState *s = KVM_STATE(ms->accelerator);
bool result;
KVMMemoryListener *kml = &s->memory_listener;
kvm_slots_lock();
result = !!kvm_get_free_slot(kml);
kvm_slots_unlock();
return result;
}
/* Called with KVMMemoryListener.slots_lock held */
static KVMSlot *kvm_alloc_slot(KVMMemoryListener *kml)
{
@@ -454,8 +450,6 @@ int kvm_init_vcpu(CPUState *cpu, Error **errp)
"kvm_init_vcpu: kvm_arch_init_vcpu failed (%lu)",
kvm_arch_vcpu_id(cpu));
}
cpu->kvm_vcpu_stats_fd = kvm_vcpu_ioctl(cpu, KVM_GET_STATS_FD, NULL);
err:
return ret;
}
@@ -1105,6 +1099,12 @@ static void kvm_coalesce_pio_del(MemoryListener *listener,
}
}
static MemoryListener kvm_coalesced_pio_listener = {
.name = "kvm-coalesced-pio",
.coalesced_io_add = kvm_coalesce_pio_add,
.coalesced_io_del = kvm_coalesce_pio_del,
};
int kvm_check_extension(KVMState *s, unsigned int extension)
{
int ret;
@@ -1246,6 +1246,43 @@ static int kvm_set_ioeventfd_pio(int fd, uint16_t addr, uint16_t val,
}
static int kvm_check_many_ioeventfds(void)
{
/* Userspace can use ioeventfd for io notification. This requires a host
* that supports eventfd(2) and an I/O thread; since eventfd does not
* support SIGIO it cannot interrupt the vcpu.
*
* Older kernels have a 6 device limit on the KVM io bus. Find out so we
* can avoid creating too many ioeventfds.
*/
#if defined(CONFIG_EVENTFD)
int ioeventfds[7];
int i, ret = 0;
for (i = 0; i < ARRAY_SIZE(ioeventfds); i++) {
ioeventfds[i] = eventfd(0, EFD_CLOEXEC);
if (ioeventfds[i] < 0) {
break;
}
ret = kvm_set_ioeventfd_pio(ioeventfds[i], 0, i, true, 2, true);
if (ret < 0) {
close(ioeventfds[i]);
break;
}
}
/* Decide whether many devices are supported or not */
ret = i == ARRAY_SIZE(ioeventfds);
while (i-- > 0) {
kvm_set_ioeventfd_pio(ioeventfds[i], 0, i, false, 2, true);
close(ioeventfds[i]);
}
return ret;
#else
return 0;
#endif
}
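The function above is a create-test-rollback probe: open eventfds until one fails, record whether the whole set succeeded, then tear everything down so the probe leaves no state behind. A self-contained userspace sketch of just the eventfd half, assuming Linux with eventfd(2) and leaving out the KVM ioctl side:

#include <stdio.h>
#include <unistd.h>
#include <sys/eventfd.h>

#define PROBE_COUNT 7

int main(void)
{
    int fds[PROBE_COUNT];
    int i, all_ok;

    /* Create eventfds until allocation fails or the quota is met. */
    for (i = 0; i < PROBE_COUNT; i++) {
        fds[i] = eventfd(0, EFD_CLOEXEC);
        if (fds[i] < 0) {
            break;
        }
    }
    all_ok = (i == PROBE_COUNT);

    /* Roll back: a probe must not leak descriptors. */
    while (i-- > 0) {
        close(fds[i]);
    }
    printf("all %d eventfds available: %s\n", PROBE_COUNT, all_ok ? "yes" : "no");
    return 0;
}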
static const KVMCapabilityInfo *
kvm_check_extension_list(KVMState *s, const KVMCapabilityInfo *list)
{
@@ -1347,7 +1384,6 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
}
start_addr += slot_size;
size -= slot_size;
kml->nr_used_slots--;
} while (size);
return;
}
@@ -1373,7 +1409,6 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
ram_start_offset += slot_size;
ram += slot_size;
size -= slot_size;
kml->nr_used_slots++;
} while (size);
}
@@ -1416,13 +1451,15 @@ static void *kvm_dirty_ring_reaper_thread(void *data)
return NULL;
}
static void kvm_dirty_ring_reaper_init(KVMState *s)
static int kvm_dirty_ring_reaper_init(KVMState *s)
{
struct KVMDirtyRingReaper *r = &s->reaper;
qemu_thread_create(&r->reaper_thr, "kvm-reaper",
kvm_dirty_ring_reaper_thread,
s, QEMU_THREAD_JOINABLE);
return 0;
}
static int kvm_dirty_ring_init(KVMState *s)
@@ -1738,7 +1775,7 @@ void kvm_memory_listener_register(KVMState *s, KVMMemoryListener *kml,
kml->listener.commit = kvm_region_commit;
kml->listener.log_start = kvm_log_start;
kml->listener.log_stop = kvm_log_stop;
kml->listener.priority = MEMORY_LISTENER_PRIORITY_ACCEL;
kml->listener.priority = 10;
kml->listener.name = name;
if (s->kvm_dirty_ring_size) {
@@ -1761,11 +1798,9 @@ void kvm_memory_listener_register(KVMState *s, KVMMemoryListener *kml,
static MemoryListener kvm_io_listener = {
.name = "kvm-io",
.coalesced_io_add = kvm_coalesce_pio_add,
.coalesced_io_del = kvm_coalesce_pio_del,
.eventfd_add = kvm_io_ioeventfd_add,
.eventfd_del = kvm_io_ioeventfd_del,
.priority = MEMORY_LISTENER_PRIORITY_DEV_BACKEND,
.priority = 10,
};
int kvm_set_irq(KVMState *s, int irq, int level)
@@ -1804,7 +1839,7 @@ static void clear_gsi(KVMState *s, unsigned int gsi)
void kvm_init_irq_routing(KVMState *s)
{
int gsi_count;
int gsi_count, i;
gsi_count = kvm_check_extension(s, KVM_CAP_IRQ_ROUTING) - 1;
if (gsi_count > 0) {
@@ -1816,6 +1851,12 @@ void kvm_init_irq_routing(KVMState *s)
s->irq_routes = g_malloc0(sizeof(*s->irq_routes));
s->nr_allocated_irq_routes = 0;
if (!kvm_direct_msi_allowed) {
for (i = 0; i < KVM_MSI_HASHTAB_SIZE; i++) {
QTAILQ_INIT(&s->msi_hashtab[i]);
}
}
kvm_arch_init_irq_routing(s);
}
@@ -1935,10 +1976,41 @@ void kvm_irqchip_change_notify(void)
notifier_list_notify(&kvm_irqchip_change_notifiers, NULL);
}
static unsigned int kvm_hash_msi(uint32_t data)
{
/* This is optimized for IA32 MSI layout. However, no other arch shall
* repeat the mistake of not providing a direct MSI injection API. */
return data & 0xff;
}
static void kvm_flush_dynamic_msi_routes(KVMState *s)
{
KVMMSIRoute *route, *next;
unsigned int hash;
for (hash = 0; hash < KVM_MSI_HASHTAB_SIZE; hash++) {
QTAILQ_FOREACH_SAFE(route, &s->msi_hashtab[hash], entry, next) {
kvm_irqchip_release_virq(s, route->kroute.gsi);
QTAILQ_REMOVE(&s->msi_hashtab[hash], route, entry);
g_free(route);
}
}
}
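kvm_flush_dynamic_msi_routes() above deletes entries while it walks each bucket, which is why it uses QTAILQ_FOREACH_SAFE rather than the plain iterator. A tiny standalone sketch of the same safe-deletion idiom on a hand-rolled singly linked list (no QEMU or glib dependencies):

#include <stdio.h>
#include <stdlib.h>

typedef struct Node { int gsi; struct Node *next; } Node;

static void flush(Node **head)
{
    Node *n = *head, *next;
    while (n) {
        next = n->next;   /* stash the successor before freeing */
        printf("releasing gsi %d\n", n->gsi);
        free(n);
        n = next;
    }
    *head = NULL;
}

int main(void)
{
    Node *head = NULL;
    for (int i = 0; i < 3; i++) {
        Node *n = calloc(1, sizeof(*n));
        n->gsi = 24 + i;
        n->next = head;
        head = n;
    }
    flush(&head);
    return 0;
}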
static int kvm_irqchip_get_virq(KVMState *s)
{
int next_virq;
/*
* PIC and IOAPIC share the first 16 GSI numbers, thus the available
* GSI numbers are more than the number of IRQ route. Allocating a GSI
* number can succeed even though a new route entry cannot be added.
* When this happens, flush dynamic MSI entries to free IRQ route entries.
*/
if (!kvm_direct_msi_allowed && s->irq_routes->nr == s->gsi_count) {
kvm_flush_dynamic_msi_routes(s);
}
/* Return the lowest unused GSI in the bitmap */
next_virq = find_first_zero_bit(s->used_gsi_bitmap, s->gsi_count);
if (next_virq >= s->gsi_count) {
@@ -1948,17 +2020,63 @@ static int kvm_irqchip_get_virq(KVMState *s)
}
}
static KVMMSIRoute *kvm_lookup_msi_route(KVMState *s, MSIMessage msg)
{
unsigned int hash = kvm_hash_msi(msg.data);
KVMMSIRoute *route;
QTAILQ_FOREACH(route, &s->msi_hashtab[hash], entry) {
if (route->kroute.u.msi.address_lo == (uint32_t)msg.address &&
route->kroute.u.msi.address_hi == (msg.address >> 32) &&
route->kroute.u.msi.data == le32_to_cpu(msg.data)) {
return route;
}
}
return NULL;
}
int kvm_irqchip_send_msi(KVMState *s, MSIMessage msg)
{
struct kvm_msi msi;
KVMMSIRoute *route;
msi.address_lo = (uint32_t)msg.address;
msi.address_hi = msg.address >> 32;
msi.data = le32_to_cpu(msg.data);
msi.flags = 0;
memset(msi.pad, 0, sizeof(msi.pad));
if (kvm_direct_msi_allowed) {
msi.address_lo = (uint32_t)msg.address;
msi.address_hi = msg.address >> 32;
msi.data = le32_to_cpu(msg.data);
msi.flags = 0;
memset(msi.pad, 0, sizeof(msi.pad));
return kvm_vm_ioctl(s, KVM_SIGNAL_MSI, &msi);
return kvm_vm_ioctl(s, KVM_SIGNAL_MSI, &msi);
}
route = kvm_lookup_msi_route(s, msg);
if (!route) {
int virq;
virq = kvm_irqchip_get_virq(s);
if (virq < 0) {
return virq;
}
route = g_new0(KVMMSIRoute, 1);
route->kroute.gsi = virq;
route->kroute.type = KVM_IRQ_ROUTING_MSI;
route->kroute.flags = 0;
route->kroute.u.msi.address_lo = (uint32_t)msg.address;
route->kroute.u.msi.address_hi = msg.address >> 32;
route->kroute.u.msi.data = le32_to_cpu(msg.data);
kvm_add_routing_entry(s, &route->kroute);
kvm_irqchip_commit_routes(s);
QTAILQ_INSERT_TAIL(&s->msi_hashtab[kvm_hash_msi(msg.data)], route,
entry);
}
assert(route->kroute.type == KVM_IRQ_ROUTING_MSI);
return kvm_set_irq(s, route->kroute.gsi, 1);
}
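When direct MSI injection is unavailable, the code above memoizes one GSI route per distinct message, keyed by a one-byte hash of the MSI data. A standalone sketch of that lookup-or-insert shape (plain singly linked buckets instead of QEMU's QTAILQ; the 256-bucket table mirrors the data & 0xff hash):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NBUCKETS 256

typedef struct Route {
    uint64_t addr;
    uint32_t data;
    int gsi;
    struct Route *next;
} Route;

static Route *buckets[NBUCKETS];
static int next_gsi;

static unsigned hash(uint32_t data) { return data & 0xff; }

static Route *lookup_or_insert(uint64_t addr, uint32_t data)
{
    unsigned h = hash(data);
    Route *r;

    for (r = buckets[h]; r; r = r->next) {
        if (r->addr == addr && r->data == data) {
            return r;               /* hit: reuse the cached route */
        }
    }
    r = calloc(1, sizeof(*r));      /* miss: allocate and memoize */
    if (!r) {
        abort();
    }
    r->addr = addr;
    r->data = data;
    r->gsi = next_gsi++;
    r->next = buckets[h];
    buckets[h] = r;
    return r;
}

int main(void)
{
    Route *a = lookup_or_insert(0xfee00000, 0x4041);
    Route *b = lookup_or_insert(0xfee00000, 0x4041);
    printf("route reused: %s (gsi %d)\n", a == b ? "yes" : "no", a->gsi);
    return 0;
}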
int kvm_irqchip_add_msi_route(KVMRouteChange *c, int vector, PCIDevice *dev)
@@ -2085,6 +2203,10 @@ static int kvm_irqchip_assign_irqfd(KVMState *s, EventNotifier *event,
}
}
if (!kvm_irqfds_enabled()) {
return -ENOSYS;
}
return kvm_vm_ioctl(s, KVM_IRQFD, &irqfd);
}
@@ -2245,11 +2367,6 @@ static void kvm_irqchip_create(KVMState *s)
return;
}
if (kvm_check_extension(s, KVM_CAP_IRQFD) <= 0) {
fprintf(stderr, "kvm: irqfd not implemented\n");
exit(1);
}
/* First probe and see if there's an arch-specific hook to create the
* in-kernel irqchip for us */
ret = kvm_arch_irqchip_create(s);
@@ -2338,7 +2455,7 @@ static int kvm_init(MachineState *ms)
KVMState *s;
const KVMCapabilityInfo *missing_cap;
int ret;
int type;
int type = 0;
uint64_t dirty_log_manual_caps;
qemu_mutex_init(&kml_slots_lock);
@@ -2403,13 +2520,6 @@ static int kvm_init(MachineState *ms)
type = mc->kvm_type(ms, kvm_type);
} else if (mc->kvm_type) {
type = mc->kvm_type(ms, NULL);
} else {
type = kvm_arch_get_default_type(ms);
}
if (type < 0) {
ret = -EINVAL;
goto err;
}
do {
@@ -2524,8 +2634,22 @@ static int kvm_init(MachineState *ms)
#ifdef KVM_CAP_VCPU_EVENTS
s->vcpu_events = kvm_check_extension(s, KVM_CAP_VCPU_EVENTS);
#endif
s->robust_singlestep =
kvm_check_extension(s, KVM_CAP_X86_ROBUST_SINGLESTEP);
#ifdef KVM_CAP_DEBUGREGS
s->debugregs = kvm_check_extension(s, KVM_CAP_DEBUGREGS);
#endif
s->max_nested_state_len = kvm_check_extension(s, KVM_CAP_NESTED_STATE);
#ifdef KVM_CAP_IRQ_ROUTING
kvm_direct_msi_allowed = (kvm_check_extension(s, KVM_CAP_SIGNAL_MSI) > 0);
#endif
s->intx_set_mask = kvm_check_extension(s, KVM_CAP_PCI_2_3);
s->irq_set_ioctl = KVM_IRQ_LINE;
if (kvm_check_extension(s, KVM_CAP_IRQ_INJECT_STATUS)) {
s->irq_set_ioctl = KVM_IRQ_LINE_STATUS;
@@ -2534,12 +2658,21 @@ static int kvm_init(MachineState *ms)
kvm_readonly_mem_allowed =
(kvm_check_extension(s, KVM_CAP_READONLY_MEM) > 0);
kvm_eventfds_allowed =
(kvm_check_extension(s, KVM_CAP_IOEVENTFD) > 0);
kvm_irqfds_allowed =
(kvm_check_extension(s, KVM_CAP_IRQFD) > 0);
kvm_resamplefds_allowed =
(kvm_check_extension(s, KVM_CAP_IRQFD_RESAMPLE) > 0);
kvm_vm_attributes_allowed =
(kvm_check_extension(s, KVM_CAP_VM_ATTRIBUTES) > 0);
kvm_ioeventfd_any_length_allowed =
(kvm_check_extension(s, KVM_CAP_IOEVENTFD_ANY_LENGTH) > 0);
#ifdef KVM_CAP_SET_GUEST_DEBUG
kvm_has_guest_debug =
(kvm_check_extension(s, KVM_CAP_SET_GUEST_DEBUG) > 0);
@@ -2576,16 +2709,24 @@ static int kvm_init(MachineState *ms)
kvm_irqchip_create(s);
}
s->memory_listener.listener.eventfd_add = kvm_mem_ioeventfd_add;
s->memory_listener.listener.eventfd_del = kvm_mem_ioeventfd_del;
if (kvm_eventfds_allowed) {
s->memory_listener.listener.eventfd_add = kvm_mem_ioeventfd_add;
s->memory_listener.listener.eventfd_del = kvm_mem_ioeventfd_del;
}
s->memory_listener.listener.coalesced_io_add = kvm_coalesce_mmio_region;
s->memory_listener.listener.coalesced_io_del = kvm_uncoalesce_mmio_region;
kvm_memory_listener_register(s, &s->memory_listener,
&address_space_memory, 0, "kvm-memory");
memory_listener_register(&kvm_io_listener,
if (kvm_eventfds_allowed) {
memory_listener_register(&kvm_io_listener,
&address_space_io);
}
memory_listener_register(&kvm_coalesced_pio_listener,
&address_space_io);
s->many_ioeventfds = kvm_check_many_ioeventfds();
s->sync_mmu = !!kvm_vm_check_extension(kvm_state, KVM_CAP_SYNC_MMU);
if (!s->sync_mmu) {
ret = ram_block_discard_disable(true);
@@ -2593,7 +2734,10 @@ static int kvm_init(MachineState *ms)
}
if (s->kvm_dirty_ring_size) {
kvm_dirty_ring_reaper_init(s);
ret = kvm_dirty_ring_reaper_init(s);
if (ret) {
goto err;
}
}
if (kvm_check_extension(kvm_state, KVM_CAP_BINARY_STATS_FD)) {
@@ -2611,7 +2755,6 @@ err:
if (s->fd != -1) {
close(s->fd);
}
g_free(s->as);
g_free(s->memory_listener.slots);
return ret;
@@ -2638,14 +2781,16 @@ static void kvm_handle_io(uint16_t port, MemTxAttrs attrs, void *data, int direc
static int kvm_handle_internal_error(CPUState *cpu, struct kvm_run *run)
{
int i;
fprintf(stderr, "KVM internal error. Suberror: %d\n",
run->internal.suberror);
for (i = 0; i < run->internal.ndata; ++i) {
fprintf(stderr, "extra data[%d]: 0x%016"PRIx64"\n",
i, (uint64_t)run->internal.data[i]);
if (kvm_check_extension(kvm_state, KVM_CAP_INTERNAL_ERROR_DATA)) {
int i;
for (i = 0; i < run->internal.ndata; ++i) {
fprintf(stderr, "extra data[%d]: 0x%016"PRIx64"\n",
i, (uint64_t)run->internal.data[i]);
}
}
if (run->internal.suberror == KVM_INTERNAL_ERROR_EMULATION) {
fprintf(stderr, "emulation failure\n");
@@ -2664,7 +2809,7 @@ void kvm_flush_coalesced_mmio_buffer(void)
{
KVMState *s = kvm_state;
if (!s || s->coalesced_flush_in_progress) {
if (s->coalesced_flush_in_progress) {
return;
}
@@ -2700,13 +2845,7 @@ bool kvm_cpu_check_are_resettable(void)
static void do_kvm_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
{
if (!cpu->vcpu_dirty) {
int ret = kvm_arch_get_registers(cpu);
if (ret) {
error_report("Failed to get registers: %s", strerror(-ret));
cpu_dump_state(cpu, stderr, CPU_DUMP_CODE);
vm_stop(RUN_STATE_INTERNAL_ERROR);
}
kvm_arch_get_registers(cpu);
cpu->vcpu_dirty = true;
}
}
@@ -2720,13 +2859,7 @@ void kvm_cpu_synchronize_state(CPUState *cpu)
static void do_kvm_cpu_synchronize_post_reset(CPUState *cpu, run_on_cpu_data arg)
{
int ret = kvm_arch_put_registers(cpu, KVM_PUT_RESET_STATE);
if (ret) {
error_report("Failed to put registers after reset: %s", strerror(-ret));
cpu_dump_state(cpu, stderr, CPU_DUMP_CODE);
vm_stop(RUN_STATE_INTERNAL_ERROR);
}
kvm_arch_put_registers(cpu, KVM_PUT_RESET_STATE);
cpu->vcpu_dirty = false;
}
@@ -2737,12 +2870,7 @@ void kvm_cpu_synchronize_post_reset(CPUState *cpu)
static void do_kvm_cpu_synchronize_post_init(CPUState *cpu, run_on_cpu_data arg)
{
int ret = kvm_arch_put_registers(cpu, KVM_PUT_FULL_STATE);
if (ret) {
error_report("Failed to put registers after init: %s", strerror(-ret));
exit(1);
}
kvm_arch_put_registers(cpu, KVM_PUT_FULL_STATE);
cpu->vcpu_dirty = false;
}
@@ -2835,14 +2963,7 @@ int kvm_cpu_exec(CPUState *cpu)
MemTxAttrs attrs;
if (cpu->vcpu_dirty) {
ret = kvm_arch_put_registers(cpu, KVM_PUT_RUNTIME_STATE);
if (ret) {
error_report("Failed to put registers after init: %s",
strerror(-ret));
ret = -1;
break;
}
kvm_arch_put_registers(cpu, KVM_PUT_RUNTIME_STATE);
cpu->vcpu_dirty = false;
}
@@ -3139,11 +3260,29 @@ int kvm_has_vcpu_events(void)
return kvm_state->vcpu_events;
}
int kvm_has_robust_singlestep(void)
{
return kvm_state->robust_singlestep;
}
int kvm_has_debugregs(void)
{
return kvm_state->debugregs;
}
int kvm_max_nested_state_length(void)
{
return kvm_state->max_nested_state_len;
}
int kvm_has_many_ioeventfds(void)
{
if (!kvm_enabled()) {
return 0;
}
return kvm_state->many_ioeventfds;
}
int kvm_has_gsi_routing(void)
{
#ifdef KVM_CAP_IRQ_ROUTING
@@ -3153,13 +3292,19 @@ int kvm_has_gsi_routing(void)
#endif
}
int kvm_has_intx_set_mask(void)
{
return kvm_state->intx_set_mask;
}
bool kvm_arm_supports_user_irq(void)
{
return kvm_check_extension(kvm_state, KVM_CAP_ARM_USER_IRQ);
}
#ifdef KVM_CAP_SET_GUEST_DEBUG
struct kvm_sw_breakpoint *kvm_find_sw_breakpoint(CPUState *cpu, vaddr pc)
struct kvm_sw_breakpoint *kvm_find_sw_breakpoint(CPUState *cpu,
target_ulong pc)
{
struct kvm_sw_breakpoint *bp;
@@ -3613,7 +3758,6 @@ static void kvm_accel_instance_init(Object *obj)
/* KVM dirty ring is by default off */
s->kvm_dirty_ring_size = 0;
s->kvm_dirty_ring_with_bitmap = false;
s->kvm_eager_split_size = 0;
s->notify_vmexit = NOTIFY_VMEXIT_OPTION_RUN;
s->notify_window = 0;
s->xen_version = 0;
@@ -3863,7 +4007,7 @@ static StatsDescriptors *find_stats_descriptors(StatsTarget target, int stats_fd
/* Read stats header */
kvm_stats_header = &descriptors->kvm_stats_header;
ret = pread(stats_fd, kvm_stats_header, sizeof(*kvm_stats_header), 0);
ret = read(stats_fd, kvm_stats_header, sizeof(*kvm_stats_header));
if (ret != sizeof(*kvm_stats_header)) {
error_setg(errp, "KVM stats: failed to read stats header: "
"expected %zu actual %zu",
@@ -3894,8 +4038,7 @@ static StatsDescriptors *find_stats_descriptors(StatsTarget target, int stats_fd
}
static void query_stats(StatsResultList **result, StatsTarget target,
strList *names, int stats_fd, CPUState *cpu,
Error **errp)
strList *names, int stats_fd, Error **errp)
{
struct kvm_stats_desc *kvm_stats_desc;
struct kvm_stats_header *kvm_stats_header;
@@ -3953,7 +4096,7 @@ static void query_stats(StatsResultList **result, StatsTarget target,
break;
case STATS_TARGET_VCPU:
add_stats_entry(result, STATS_PROVIDER_KVM,
cpu->parent_obj.canonical_path,
current_cpu->parent_obj.canonical_path,
stats_list);
break;
default:
@@ -3990,9 +4133,10 @@ static void query_stats_schema(StatsSchemaList **result, StatsTarget target,
add_stats_schema(result, STATS_PROVIDER_KVM, target, stats_list);
}
static void query_stats_vcpu(CPUState *cpu, StatsArgs *kvm_stats_args)
static void query_stats_vcpu(CPUState *cpu, run_on_cpu_data data)
{
int stats_fd = cpu->kvm_vcpu_stats_fd;
StatsArgs *kvm_stats_args = (StatsArgs *) data.host_ptr;
int stats_fd = kvm_vcpu_ioctl(cpu, KVM_GET_STATS_FD, NULL);
Error *local_err = NULL;
if (stats_fd == -1) {
@@ -4001,13 +4145,14 @@ static void query_stats_vcpu(CPUState *cpu, StatsArgs *kvm_stats_args)
return;
}
query_stats(kvm_stats_args->result.stats, STATS_TARGET_VCPU,
kvm_stats_args->names, stats_fd, cpu,
kvm_stats_args->errp);
kvm_stats_args->names, stats_fd, kvm_stats_args->errp);
close(stats_fd);
}
static void query_stats_schema_vcpu(CPUState *cpu, StatsArgs *kvm_stats_args)
static void query_stats_schema_vcpu(CPUState *cpu, run_on_cpu_data data)
{
int stats_fd = cpu->kvm_vcpu_stats_fd;
StatsArgs *kvm_stats_args = (StatsArgs *) data.host_ptr;
int stats_fd = kvm_vcpu_ioctl(cpu, KVM_GET_STATS_FD, NULL);
Error *local_err = NULL;
if (stats_fd == -1) {
@@ -4017,6 +4162,7 @@ static void query_stats_schema_vcpu(CPUState *cpu, StatsArgs *kvm_stats_args)
}
query_stats_schema(kvm_stats_args->result.schema, STATS_TARGET_VCPU, stats_fd,
kvm_stats_args->errp);
close(stats_fd);
}
static void query_stats_cb(StatsResultList **result, StatsTarget target,
@@ -4034,7 +4180,7 @@ static void query_stats_cb(StatsResultList **result, StatsTarget target,
error_setg_errno(errp, errno, "KVM stats: ioctl failed");
return;
}
query_stats(result, target, names, stats_fd, NULL, errp);
query_stats(result, target, names, stats_fd, errp);
close(stats_fd);
break;
}
@@ -4048,7 +4194,7 @@ static void query_stats_cb(StatsResultList **result, StatsTarget target,
if (!apply_str_list_filter(cpu->parent_obj.canonical_path, targets)) {
continue;
}
query_stats_vcpu(cpu, &stats_args);
run_on_cpu(cpu, query_stats_vcpu, RUN_ON_CPU_HOST_PTR(&stats_args));
}
break;
}
@@ -4074,6 +4220,6 @@ void query_stats_schemas_cb(StatsSchemaList **result, Error **errp)
if (first_cpu) {
stats_args.result.schema = result;
stats_args.errp = errp;
query_stats_schema_vcpu(first_cpu, &stats_args);
run_on_cpu(first_cpu, query_stats_schema_vcpu, RUN_ON_CPU_HOST_PTR(&stats_args));
}
}
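One hunk above swaps read(2) for pread(2) when parsing the stats header. The difference matters once a descriptor is shared: pread takes an explicit offset and never moves the file cursor. A minimal POSIX sketch of the pattern (the header struct is a stand-in, not the kernel's kvm_stats_header):

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>

struct hdr {
    uint32_t magic;
    uint32_t count;
};

int main(void)
{
    struct hdr out = { 0xfeedcafe, 3 }, in;
    int fd = open("stats.bin", O_RDWR | O_CREAT | O_TRUNC, 0600);

    if (fd < 0 || write(fd, &out, sizeof(out)) != sizeof(out)) {
        perror("setup");
        return 1;
    }
    /* The cursor now sits at EOF, but pread reads at offset 0 anyway. */
    if (pread(fd, &in, sizeof(in), 0) != sizeof(in)) {
        perror("pread");
        return 1;
    }
    printf("magic=%#x count=%u\n", in.magic, in.count);
    close(fd);
    return 0;
}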

View File

@@ -1,5 +1,5 @@
specific_ss.add(files('accel-target.c'))
system_ss.add(files('accel-system.c', 'accel-blocker.c'))
specific_ss.add(files('accel-common.c', 'accel-blocker.c'))
softmmu_ss.add(files('accel-softmmu.c'))
user_ss.add(files('accel-user.c'))
subdir('tcg')
@@ -12,4 +12,4 @@ if have_system
endif
# qtest
system_ss.add(files('dummy-cpus.c'))
softmmu_ss.add(files('dummy-cpus.c'))

View File

@@ -1 +1 @@
qtest_module_ss.add(when: ['CONFIG_SYSTEM_ONLY'], if_true: files('qtest.c'))
qtest_module_ss.add(when: ['CONFIG_SOFTMMU'], if_true: files('qtest.c'))

accel/stubs/hax-stub.c Normal file
View File

@@ -0,0 +1,24 @@
/*
* QEMU HAXM support
*
* Copyright (c) 2015, Intel Corporation
*
* Copyright 2016 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "sysemu/hax.h"
bool hax_allowed;
int hax_sync_vcpus(void)
{
return 0;
}

View File

@@ -17,12 +17,15 @@
KVMState *kvm_state;
bool kvm_kernel_irqchip;
bool kvm_async_interrupts_allowed;
bool kvm_eventfds_allowed;
bool kvm_irqfds_allowed;
bool kvm_resamplefds_allowed;
bool kvm_msi_via_irqfd_allowed;
bool kvm_gsi_routing_allowed;
bool kvm_gsi_direct_mapping;
bool kvm_allowed;
bool kvm_readonly_mem_allowed;
bool kvm_ioeventfd_any_length_allowed;
bool kvm_msi_use_devid;
void kvm_flush_coalesced_mmio_buffer(void)
@@ -38,6 +41,11 @@ bool kvm_has_sync_mmu(void)
return false;
}
int kvm_has_many_ioeventfds(void)
{
return 0;
}
int kvm_on_sigbus_vcpu(CPUState *cpu, int code, void *addr)
{
return 1;
@@ -83,6 +91,11 @@ void kvm_irqchip_change_notify(void)
{
}
int kvm_irqchip_add_adapter_route(KVMState *s, AdapterInfo *adapter)
{
return -ENOSYS;
}
int kvm_irqchip_add_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
EventNotifier *rn, int virq)
{
@@ -95,14 +108,9 @@ int kvm_irqchip_remove_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
return -ENOSYS;
}
unsigned int kvm_get_max_memslots(void)
bool kvm_has_free_slot(MachineState *ms)
{
return 0;
}
unsigned int kvm_get_free_memslots(void)
{
return 0;
return false;
}
void kvm_init_cpu_signals(CPUState *cpu)

View File

@@ -1,6 +1,7 @@
system_stubs_ss = ss.source_set()
system_stubs_ss.add(when: 'CONFIG_XEN', if_false: files('xen-stub.c'))
system_stubs_ss.add(when: 'CONFIG_KVM', if_false: files('kvm-stub.c'))
system_stubs_ss.add(when: 'CONFIG_TCG', if_false: files('tcg-stub.c'))
sysemu_stubs_ss = ss.source_set()
sysemu_stubs_ss.add(when: 'CONFIG_HAX', if_false: files('hax-stub.c'))
sysemu_stubs_ss.add(when: 'CONFIG_XEN', if_false: files('xen-stub.c'))
sysemu_stubs_ss.add(when: 'CONFIG_KVM', if_false: files('kvm-stub.c'))
sysemu_stubs_ss.add(when: 'CONFIG_TCG', if_false: files('tcg-stub.c'))
specific_ss.add_all(when: ['CONFIG_SYSTEM_ONLY'], if_true: system_stubs_ss)
specific_ss.add_all(when: ['CONFIG_SOFTMMU'], if_true: sysemu_stubs_ss)

View File

@@ -18,18 +18,22 @@ void tb_flush(CPUState *cpu)
{
}
void tlb_set_dirty(CPUState *cpu, vaddr vaddr)
void tlb_set_dirty(CPUState *cpu, target_ulong vaddr)
{
}
int probe_access_flags(CPUArchState *env, vaddr addr, int size,
void tcg_flush_jmp_cache(CPUState *cpu)
{
}
int probe_access_flags(CPUArchState *env, target_ulong addr, int size,
MMUAccessType access_type, int mmu_idx,
bool nonfault, void **phost, uintptr_t retaddr)
{
g_assert_not_reached();
}
void *probe_access(CPUArchState *env, vaddr addr, int size,
void *probe_access(CPUArchState *env, target_ulong addr, int size,
MMUAccessType access_type, int mmu_idx, uintptr_t retaddr)
{
/* Handled by hardware accelerator. */

View File

@@ -41,7 +41,7 @@ CMPXCHG_HELPER(cmpxchgq_be, uint64_t)
CMPXCHG_HELPER(cmpxchgq_le, uint64_t)
#endif
#if HAVE_CMPXCHG128
#ifdef CONFIG_CMPXCHG128
CMPXCHG_HELPER(cmpxchgo_be, Int128)
CMPXCHG_HELPER(cmpxchgo_le, Int128)
#endif

View File

@@ -69,12 +69,11 @@
# define END _le
#endif
ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
ABI_TYPE cmpv, ABI_TYPE newv,
MemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi,
DATA_SIZE, retaddr);
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr);
DATA_TYPE ret;
#if DATA_SIZE == 16
@@ -88,11 +87,10 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
}
#if DATA_SIZE < 16
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val,
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr, ABI_TYPE val,
MemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi,
DATA_SIZE, retaddr);
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr);
DATA_TYPE ret;
ret = qatomic_xchg__nocheck(haddr, val);
@@ -102,11 +100,11 @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val,
}
#define GEN_ATOMIC_HELPER(X) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE val, MemOpIdx oi, uintptr_t retaddr) \
{ \
DATA_TYPE *haddr, ret; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \
haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \
ret = qatomic_##X(haddr, val); \
ATOMIC_MMU_CLEANUP; \
atomic_trace_rmw_post(env, addr, oi); \
@@ -133,11 +131,11 @@ GEN_ATOMIC_HELPER(xor_fetch)
* of CF_PARALLEL's value, we'll trace just a read and a write.
*/
#define GEN_ATOMIC_HELPER_FN(X, FN, XDATA_TYPE, RET) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE xval, MemOpIdx oi, uintptr_t retaddr) \
{ \
XDATA_TYPE *haddr, cmp, old, new, val = xval; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \
haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \
smp_mb(); \
cmp = qatomic_read__nocheck(haddr); \
do { \
@@ -174,12 +172,11 @@ GEN_ATOMIC_HELPER_FN(umax_fetch, MAX, DATA_TYPE, new)
# define END _be
#endif
ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
ABI_TYPE cmpv, ABI_TYPE newv,
MemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi,
DATA_SIZE, retaddr);
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr);
DATA_TYPE ret;
#if DATA_SIZE == 16
@@ -193,11 +190,10 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
}
#if DATA_SIZE < 16
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val,
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr, ABI_TYPE val,
MemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi,
DATA_SIZE, retaddr);
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr);
ABI_TYPE ret;
ret = qatomic_xchg__nocheck(haddr, BSWAP(val));
@@ -207,11 +203,11 @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val,
}
#define GEN_ATOMIC_HELPER(X) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE val, MemOpIdx oi, uintptr_t retaddr) \
{ \
DATA_TYPE *haddr, ret; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \
haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \
ret = qatomic_##X(haddr, BSWAP(val)); \
ATOMIC_MMU_CLEANUP; \
atomic_trace_rmw_post(env, addr, oi); \
@@ -235,11 +231,11 @@ GEN_ATOMIC_HELPER(xor_fetch)
* of CF_PARALLEL's value, we'll trace just a read and a write.
*/
#define GEN_ATOMIC_HELPER_FN(X, FN, XDATA_TYPE, RET) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE xval, MemOpIdx oi, uintptr_t retaddr) \
{ \
XDATA_TYPE *haddr, ldo, ldn, old, new, val = xval; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \
haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \
smp_mb(); \
ldn = qatomic_read__nocheck(haddr); \
do { \
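Both variants of GEN_ATOMIC_HELPER_FN above expand to the same compare-exchange retry loop: read the current value, compute FN(old, val), and retry until the swap lands. A self-contained C11 sketch of that shape using stdatomic.h (fetch-minimum as the sample operation):

#include <stdatomic.h>
#include <stdio.h>

/* Generic read-modify-write built from a compare-exchange loop. */
static int fetch_min(_Atomic int *p, int val)
{
    int cur = atomic_load(p);
    int new;
    do {
        new = cur < val ? cur : val;   /* FN(old, val) */
        /* On failure, cur is reloaded with the current value. */
    } while (!atomic_compare_exchange_weak(p, &cur, new));
    return cur;                        /* the old value, fetch-op style */
}

int main(void)
{
    _Atomic int x = 7;
    int old = fetch_min(&x, 5);
    printf("old=%d new=%d\n", old, atomic_load(&x));   /* old=7 new=5 */
    return 0;
}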

View File

@@ -20,8 +20,9 @@
#include "qemu/osdep.h"
#include "sysemu/cpus.h"
#include "sysemu/tcg.h"
#include "exec/exec-all.h"
#include "qemu/plugin.h"
#include "internal-common.h"
#include "internal.h"
bool tcg_allowed;
@@ -32,10 +33,40 @@ void cpu_loop_exit_noexc(CPUState *cpu)
cpu_loop_exit(cpu);
}
#if defined(CONFIG_SOFTMMU)
void cpu_reloading_memory_map(void)
{
if (qemu_in_vcpu_thread() && current_cpu->running) {
/* The guest can in theory prolong the RCU critical section as long
* as it feels like. The major problem with this is that because it
* can do multiple reconfigurations of the memory map within the
* critical section, we could potentially accumulate an unbounded
* collection of memory data structures awaiting reclamation.
*
* Because the only thing we're currently protecting with RCU is the
* memory data structures, it's sufficient to break the critical section
* in this callback, which we know will get called every time the
* memory map is rearranged.
*
* (If we add anything else in the system that uses RCU to protect
* its data structures, we will need to implement some other mechanism
* to force TCG CPUs to exit the critical section, at which point this
* part of this callback might become unnecessary.)
*
* This pair matches cpu_exec's rcu_read_lock()/rcu_read_unlock(), which
* only protects cpu->as->dispatch. Since we know our caller is about
* to reload it, it's safe to split the critical section.
*/
rcu_read_unlock();
rcu_read_lock();
}
}
#endif
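The comment above turns on one trick: a reader may briefly split its RCU critical section at a point where it holds no RCU-protected pointers, letting pending grace periods complete. A toy sketch of the unlock/lock pair using userspace RCU, assuming liburcu is available (build with -lurcu; QEMU ships its own RCU implementation, so this only illustrates the idiom):

#include <urcu.h>    /* liburcu, an assumption */
#include <stdio.h>

int main(void)
{
    rcu_register_thread();

    rcu_read_lock();
    for (int i = 0; i < 4; i++) {
        printf("item %d handled inside a read-side section\n", i);
        /*
         * Break the critical section between items: writers blocked in
         * synchronize_rcu() can now reclaim, so stale structures don't
         * accumulate for the whole loop. Nothing read before the break
         * may be dereferenced after it.
         */
        rcu_read_unlock();
        rcu_read_lock();
    }
    rcu_read_unlock();

    rcu_unregister_thread();
    return 0;
}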
void cpu_loop_exit(CPUState *cpu)
{
/* Undo the setting in cpu_tb_exec. */
cpu->neg.can_do_io = true;
cpu->can_do_io = 1;
/* Undo any setting in generated code. */
qemu_plugin_disable_mem_helpers(cpu);
siglongjmp(cpu->jmp_env, 1);

View File

@@ -38,12 +38,11 @@
#include "sysemu/cpu-timers.h"
#include "exec/replay-core.h"
#include "sysemu/tcg.h"
#include "exec/helper-proto-common.h"
#include "exec/helper-proto.h"
#include "tb-jmp-cache.h"
#include "tb-hash.h"
#include "tb-context.h"
#include "internal-common.h"
#include "internal-target.h"
#include "internal.h"
/* -icount align implementation. */
@@ -74,7 +73,7 @@ static void align_clocks(SyncClocks *sc, CPUState *cpu)
return;
}
cpu_icount = cpu->icount_extra + cpu->neg.icount_decr.u16.low;
cpu_icount = cpu->icount_extra + cpu_neg(cpu)->icount_decr.u16.low;
sc->diff_clk += icount_to_ns(sc->last_cpu_icount - cpu_icount);
sc->last_cpu_icount = cpu_icount;
@@ -125,7 +124,7 @@ static void init_delay_params(SyncClocks *sc, CPUState *cpu)
sc->realtime_clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT);
sc->diff_clk = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) - sc->realtime_clock;
sc->last_cpu_icount
= cpu->icount_extra + cpu->neg.icount_decr.u16.low;
= cpu->icount_extra + cpu_neg(cpu)->icount_decr.u16.low;
if (sc->diff_clk < max_delay) {
max_delay = sc->diff_clk;
}
@@ -170,8 +169,8 @@ uint32_t curr_cflags(CPUState *cpu)
}
struct tb_desc {
vaddr pc;
uint64_t cs_base;
target_ulong pc;
target_ulong cs_base;
CPUArchState *env;
tb_page_addr_t page_addr0;
uint32_t flags;
@@ -194,7 +193,7 @@ static bool tb_lookup_cmp(const void *p, const void *d)
return true;
} else {
tb_page_addr_t phys_page1;
vaddr virt_page1;
target_ulong virt_page1;
/*
* We know that the first page matched, and an otherwise valid TB
@@ -215,15 +214,15 @@ static bool tb_lookup_cmp(const void *p, const void *d)
return false;
}
static TranslationBlock *tb_htable_lookup(CPUState *cpu, vaddr pc,
uint64_t cs_base, uint32_t flags,
static TranslationBlock *tb_htable_lookup(CPUState *cpu, target_ulong pc,
target_ulong cs_base, uint32_t flags,
uint32_t cflags)
{
tb_page_addr_t phys_pc;
struct tb_desc desc;
uint32_t h;
desc.env = cpu_env(cpu);
desc.env = cpu->env_ptr;
desc.cs_base = cs_base;
desc.flags = flags;
desc.cflags = cflags;
@@ -239,9 +238,9 @@ static TranslationBlock *tb_htable_lookup(CPUState *cpu, vaddr pc,
}
/* Might cause an exception, so have a longjmp destination ready */
static inline TranslationBlock *tb_lookup(CPUState *cpu, vaddr pc,
uint64_t cs_base, uint32_t flags,
uint32_t cflags)
static inline TranslationBlock *tb_lookup(CPUState *cpu, target_ulong pc,
target_ulong cs_base,
uint32_t flags, uint32_t cflags)
{
TranslationBlock *tb;
CPUJumpCache *jc;
@@ -293,13 +292,13 @@ static inline TranslationBlock *tb_lookup(CPUState *cpu, vaddr pc,
return tb;
}
static void log_cpu_exec(vaddr pc, CPUState *cpu,
static void log_cpu_exec(target_ulong pc, CPUState *cpu,
const TranslationBlock *tb)
{
if (qemu_log_in_addr_range(pc)) {
qemu_log_mask(CPU_LOG_EXEC,
"Trace %d: %p [%08" PRIx64
"/%016" VADDR_PRIx "/%08x/%08x] %s\n",
"/" TARGET_FMT_lx "/%08x/%08x] %s\n",
cpu->cpu_index, tb->tc.ptr, tb->cs_base, pc,
tb->flags, tb->cflags, lookup_symbol(pc));
@@ -314,9 +313,6 @@ static void log_cpu_exec(vaddr pc, CPUState *cpu,
#if defined(TARGET_I386)
flags |= CPU_DUMP_CCOP;
#endif
if (qemu_loglevel_mask(CPU_LOG_TB_VPU)) {
flags |= CPU_DUMP_VPU;
}
cpu_dump_state(cpu, logfile, flags);
qemu_log_unlock(logfile);
}
@@ -324,7 +320,7 @@ static void log_cpu_exec(vaddr pc, CPUState *cpu,
}
}
static bool check_for_breakpoints_slow(CPUState *cpu, vaddr pc,
static bool check_for_breakpoints_slow(CPUState *cpu, target_ulong pc,
uint32_t *cflags)
{
CPUBreakpoint *bp;
@@ -390,7 +386,7 @@ static bool check_for_breakpoints_slow(CPUState *cpu, vaddr pc,
return false;
}
static inline bool check_for_breakpoints(CPUState *cpu, vaddr pc,
static inline bool check_for_breakpoints(CPUState *cpu, target_ulong pc,
uint32_t *cflags)
{
return unlikely(!QTAILQ_EMPTY(&cpu->breakpoints)) &&
@@ -409,8 +405,7 @@ const void *HELPER(lookup_tb_ptr)(CPUArchState *env)
{
CPUState *cpu = env_cpu(env);
TranslationBlock *tb;
vaddr pc;
uint64_t cs_base;
target_ulong cs_base, pc;
uint32_t flags, cflags;
cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
@@ -445,7 +440,7 @@ const void *HELPER(lookup_tb_ptr)(CPUArchState *env)
static inline TranslationBlock * QEMU_DISABLE_CFI
cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
{
CPUArchState *env = cpu_env(cpu);
CPUArchState *env = cpu->env_ptr;
uintptr_t ret;
TranslationBlock *last_tb;
const void *tb_ptr = itb->tc.ptr;
@@ -456,7 +451,7 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
qemu_thread_jit_execute();
ret = tcg_qemu_tb_exec(env, tb_ptr);
cpu->neg.can_do_io = true;
cpu->can_do_io = 1;
qemu_plugin_disable_mem_helpers(cpu);
/*
* TODO: Delay swapping back to the read-write region of the TB
@@ -486,10 +481,10 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
cc->set_pc(cpu, last_tb->pc);
}
if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
vaddr pc = log_pc(cpu, last_tb);
target_ulong pc = log_pc(cpu, last_tb);
if (qemu_log_in_addr_range(pc)) {
qemu_log("Stopped execution of TB chain before %p [%016"
VADDR_PRIx "] %s\n",
qemu_log("Stopped execution of TB chain before %p ["
TARGET_FMT_lx "] %s\n",
last_tb->tc.ptr, pc, lookup_symbol(pc));
}
}
@@ -527,49 +522,11 @@ static void cpu_exec_exit(CPUState *cpu)
}
}
static void cpu_exec_longjmp_cleanup(CPUState *cpu)
{
/* Non-buggy compilers preserve this; assert the correct value. */
g_assert(cpu == current_cpu);
#ifdef CONFIG_USER_ONLY
clear_helper_retaddr();
if (have_mmap_lock()) {
mmap_unlock();
}
#else
/*
* For softmmu, a tlb_fill fault during translation will land here,
* and we need to release any page locks held. In system mode we
* have one tcg_ctx per thread, so we know it was this cpu doing
* the translation.
*
* Alternative 1: Install a cleanup to be called via an exception
* handling safe longjmp. It seems plausible that all our hosts
* support such a thing. We'd have to properly register unwind info
* for the JIT for EH, rather than just for GDB.
*
* Alternative 2: Set and restore cpu->jmp_env in tb_gen_code to
* capture the cpu_loop_exit longjmp, perform the cleanup, and
* jump again to arrive here.
*/
if (tcg_ctx->gen_tb) {
tb_unlock_pages(tcg_ctx->gen_tb);
tcg_ctx->gen_tb = NULL;
}
#endif
if (qemu_mutex_iothread_locked()) {
qemu_mutex_unlock_iothread();
}
assert_no_pages_locked();
}
void cpu_exec_step_atomic(CPUState *cpu)
{
CPUArchState *env = cpu_env(cpu);
CPUArchState *env = cpu->env_ptr;
TranslationBlock *tb;
vaddr pc;
uint64_t cs_base;
target_ulong cs_base, pc;
uint32_t flags, cflags;
int tb_exit;
@@ -606,7 +563,16 @@ void cpu_exec_step_atomic(CPUState *cpu)
cpu_tb_exec(cpu, tb, &tb_exit);
cpu_exec_exit(cpu);
} else {
cpu_exec_longjmp_cleanup(cpu);
#ifndef CONFIG_SOFTMMU
clear_helper_retaddr();
if (have_mmap_lock()) {
mmap_unlock();
}
#endif
if (qemu_mutex_iothread_locked()) {
qemu_mutex_unlock_iothread();
}
assert_no_pages_locked();
}
/*
@@ -718,10 +684,10 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
if (cpu->exception_index < 0) {
#ifndef CONFIG_USER_ONLY
if (replay_has_exception()
&& cpu->neg.icount_decr.u16.low + cpu->icount_extra == 0) {
&& cpu_neg(cpu)->icount_decr.u16.low + cpu->icount_extra == 0) {
/* Execute just one insn to trigger exception pending in the log */
cpu->cflags_next_tb = (curr_cflags(cpu) & ~CF_USE_ICOUNT)
| CF_LAST_IO | CF_NOIRQ | 1;
| CF_NOIRQ | 1;
}
#endif
return false;
@@ -808,7 +774,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
* Ensure zeroing happens before reading cpu->exit_request or
* cpu->interrupt_request (see also smp_wmb in cpu_exit())
*/
qatomic_set_mb(&cpu->neg.icount_decr.u16.high, 0);
qatomic_set_mb(&cpu_neg(cpu)->icount_decr.u16.high, 0);
if (unlikely(qatomic_read(&cpu->interrupt_request))) {
int interrupt_request;
@@ -899,7 +865,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
if (unlikely(qatomic_read(&cpu->exit_request))
|| (icount_enabled()
&& (cpu->cflags_next_tb == -1 || cpu->cflags_next_tb & CF_USE_ICOUNT)
&& cpu->neg.icount_decr.u16.low + cpu->icount_extra == 0)) {
&& cpu_neg(cpu)->icount_decr.u16.low + cpu->icount_extra == 0)) {
qatomic_set(&cpu->exit_request, 0);
if (cpu->exception_index == -1) {
cpu->exception_index = EXCP_INTERRUPT;
@@ -911,8 +877,8 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
}
static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
vaddr pc, TranslationBlock **last_tb,
int *tb_exit)
target_ulong pc,
TranslationBlock **last_tb, int *tb_exit)
{
int32_t insns_left;
@@ -924,7 +890,7 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
}
*last_tb = NULL;
insns_left = qatomic_read(&cpu->neg.icount_decr.u32);
insns_left = qatomic_read(&cpu_neg(cpu)->icount_decr.u32);
if (insns_left < 0) {
/* Something asked us to stop executing chained TBs; just
* continue round the main loop. Whatever requested the exit
@@ -943,7 +909,7 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
icount_update(cpu);
/* Refill decrementer and continue execution. */
insns_left = MIN(0xffff, cpu->icount_budget);
cpu->neg.icount_decr.u16.low = insns_left;
cpu_neg(cpu)->icount_decr.u16.low = insns_left;
cpu->icount_extra = cpu->icount_budget - insns_left;
/*
@@ -973,11 +939,10 @@ cpu_exec_loop(CPUState *cpu, SyncClocks *sc)
while (!cpu_handle_interrupt(cpu, &last_tb)) {
TranslationBlock *tb;
vaddr pc;
uint64_t cs_base;
target_ulong cs_base, pc;
uint32_t flags, cflags;
cpu_get_tb_cpu_state(cpu_env(cpu), &pc, &cs_base, &flags);
cpu_get_tb_cpu_state(cpu->env_ptr, &pc, &cs_base, &flags);
/*
* When requested, use an exact setting for cflags for the next
@@ -1052,7 +1017,20 @@ static int cpu_exec_setjmp(CPUState *cpu, SyncClocks *sc)
{
/* Prepare setjmp context for exception handling. */
if (unlikely(sigsetjmp(cpu->jmp_env, 0) != 0)) {
cpu_exec_longjmp_cleanup(cpu);
/* Non-buggy compilers preserve this; assert the correct value. */
g_assert(cpu == current_cpu);
#ifndef CONFIG_SOFTMMU
clear_helper_retaddr();
if (have_mmap_lock()) {
mmap_unlock();
}
#endif
if (qemu_mutex_iothread_locked()) {
qemu_mutex_unlock_iothread();
}
assert_no_pages_locked();
}
return cpu_exec_loop(cpu, sc);
@@ -1089,7 +1067,7 @@ int cpu_exec(CPUState *cpu)
return ret;
}
bool tcg_exec_realizefn(CPUState *cpu, Error **errp)
void tcg_exec_realizefn(CPUState *cpu, Error **errp)
{
static bool tcg_target_initialized;
CPUClass *cc = CPU_GET_CLASS(cpu);
@@ -1105,8 +1083,6 @@ bool tcg_exec_realizefn(CPUState *cpu, Error **errp)
tcg_iommu_init_notifier_list(cpu);
#endif /* !CONFIG_USER_ONLY */
/* qemu_plugin_vcpu_init_hook delayed until cpu_index assigned. */
return true;
}
/* undo the initializations in reverse order */

File diff suppressed because it is too large

View File

@@ -1,26 +0,0 @@
/*
* Internal execution defines for qemu (target agnostic)
*
* Copyright (c) 2003 Fabrice Bellard
*
* SPDX-License-Identifier: LGPL-2.1-or-later
*/
#ifndef ACCEL_TCG_INTERNAL_COMMON_H
#define ACCEL_TCG_INTERNAL_COMMON_H
#include "exec/translation-block.h"
extern int64_t max_delay;
extern int64_t max_advance;
/*
* Return true if CS is not running in parallel with other cpus, either
* because there are no other cpus or we are within an exclusive context.
*/
static inline bool cpu_in_serial_context(CPUState *cs)
{
return !(cs->tcg_cflags & CF_PARALLEL) || cpu_in_exclusive_context(cs);
}
#endif

View File

@@ -1,132 +0,0 @@
/*
* Internal execution defines for qemu (target specific)
*
* Copyright (c) 2003 Fabrice Bellard
*
* SPDX-License-Identifier: LGPL-2.1-or-later
*/
#ifndef ACCEL_TCG_INTERNAL_TARGET_H
#define ACCEL_TCG_INTERNAL_TARGET_H
#include "exec/exec-all.h"
#include "exec/translate-all.h"
/*
* Access to the various translation structures needs to be serialised
* via locks for consistency. In user-mode emulation, access to the
* memory-related structures is protected with mmap_lock.
* In !user-mode we use per-page locks.
*/
#ifdef CONFIG_USER_ONLY
#define assert_memory_lock() tcg_debug_assert(have_mmap_lock())
#else
#define assert_memory_lock()
#endif
#if defined(CONFIG_SOFTMMU) && defined(CONFIG_DEBUG_TCG)
void assert_no_pages_locked(void);
#else
static inline void assert_no_pages_locked(void) { }
#endif
#ifdef CONFIG_USER_ONLY
static inline void page_table_config_init(void) { }
#else
void page_table_config_init(void);
#endif
#ifdef CONFIG_USER_ONLY
/*
* For user-only, page_protect sets the page read-only.
* Since most execution is already on read-only pages, and we'd need to
* account for other TBs on the same page, defer undoing any page protection
* until we receive the write fault.
*/
static inline void tb_lock_page0(tb_page_addr_t p0)
{
page_protect(p0);
}
static inline void tb_lock_page1(tb_page_addr_t p0, tb_page_addr_t p1)
{
page_protect(p1);
}
static inline void tb_unlock_page1(tb_page_addr_t p0, tb_page_addr_t p1) { }
static inline void tb_unlock_pages(TranslationBlock *tb) { }
#else
void tb_lock_page0(tb_page_addr_t);
void tb_lock_page1(tb_page_addr_t, tb_page_addr_t);
void tb_unlock_page1(tb_page_addr_t, tb_page_addr_t);
void tb_unlock_pages(TranslationBlock *);
#endif
#ifdef CONFIG_SOFTMMU
void tb_invalidate_phys_range_fast(ram_addr_t ram_addr,
unsigned size,
uintptr_t retaddr);
G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
#endif /* CONFIG_SOFTMMU */
TranslationBlock *tb_gen_code(CPUState *cpu, vaddr pc,
uint64_t cs_base, uint32_t flags,
int cflags);
void page_init(void);
void tb_htable_init(void);
void tb_reset_jump(TranslationBlock *tb, int n);
TranslationBlock *tb_link_page(TranslationBlock *tb);
bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc);
void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
uintptr_t host_pc);
bool tcg_exec_realizefn(CPUState *cpu, Error **errp);
void tcg_exec_unrealizefn(CPUState *cpu);
/* Return the current PC from CPU, which may be cached in TB. */
static inline vaddr log_pc(CPUState *cpu, const TranslationBlock *tb)
{
if (tb_cflags(tb) & CF_PCREL) {
return cpu->cc->get_pc(cpu);
} else {
return tb->pc;
}
}
extern bool one_insn_per_tb;
/**
* tcg_req_mo:
* @type: TCGBar
*
* Filter @type to the barrier that is required for the guest
* memory ordering vs the host memory ordering. A non-zero
* result indicates that some barrier is required.
*
* If TCG_GUEST_DEFAULT_MO is not defined, assume that the
* guest requires strict ordering.
*
* This is a macro so that it's constant even without optimization.
*/
#ifdef TCG_GUEST_DEFAULT_MO
# define tcg_req_mo(type) \
((type) & TCG_GUEST_DEFAULT_MO & ~TCG_TARGET_DEFAULT_MO)
#else
# define tcg_req_mo(type) ((type) & ~TCG_TARGET_DEFAULT_MO)
#endif
/**
* cpu_req_mo:
* @type: TCGBar
*
* If tcg_req_mo indicates a barrier for @type is required
* for the guest memory model, issue a host memory barrier.
*/
#define cpu_req_mo(type) \
do { \
if (tcg_req_mo(type)) { \
smp_mb(); \
} \
} while (0)
#endif /* ACCEL_TCG_INTERNAL_H */
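tcg_req_mo above is pure mask arithmetic: strip the barriers the host already enforces (TCG_TARGET_DEFAULT_MO), and keep only those the guest actually demands when TCG_GUEST_DEFAULT_MO is defined. A worked standalone example with illustrative bit values (not QEMU's real TCGBar encoding):

#include <stdio.h>

/* Illustrative barrier bits, one per ordering pair. */
#define LD_LD 0x1
#define LD_ST 0x2
#define ST_LD 0x4
#define ST_ST 0x8

/* TSO-like host: everything except store->load is already ordered. */
#define TARGET_DEFAULT_MO (LD_LD | LD_ST | ST_ST)
/* Guest that only demands load->load and load->store ordering. */
#define GUEST_DEFAULT_MO  (LD_LD | LD_ST)

#define req_mo(type) ((type) & GUEST_DEFAULT_MO & ~TARGET_DEFAULT_MO)

int main(void)
{
    /* A full fence request: every barrier the guest needs is already
     * provided by the host, so the filter yields 0 -- no runtime
     * barrier is required. */
    printf("full fence -> %#x\n", req_mo(LD_LD | LD_ST | ST_LD | ST_ST));
    return 0;
}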

accel/tcg/internal.h Normal file
View File

@@ -0,0 +1,81 @@
/*
* Internal execution defines for qemu
*
* Copyright (c) 2003 Fabrice Bellard
*
* SPDX-License-Identifier: LGPL-2.1-or-later
*/
#ifndef ACCEL_TCG_INTERNAL_H
#define ACCEL_TCG_INTERNAL_H
#include "exec/exec-all.h"
/*
* Access to the various translation structures needs to be serialised
* via locks for consistency. In user-mode emulation, access to the
* memory-related structures is protected with mmap_lock.
* In !user-mode we use per-page locks.
*/
#ifdef CONFIG_SOFTMMU
#define assert_memory_lock()
#else
#define assert_memory_lock() tcg_debug_assert(have_mmap_lock())
#endif
#if defined(CONFIG_SOFTMMU) && defined(CONFIG_DEBUG_TCG)
void assert_no_pages_locked(void);
#else
static inline void assert_no_pages_locked(void) { }
#endif
#ifdef CONFIG_USER_ONLY
static inline void page_table_config_init(void) { }
#else
void page_table_config_init(void);
#endif
#ifdef CONFIG_SOFTMMU
void tb_invalidate_phys_range_fast(ram_addr_t ram_addr,
unsigned size,
uintptr_t retaddr);
G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
#endif /* CONFIG_SOFTMMU */
TranslationBlock *tb_gen_code(CPUState *cpu, target_ulong pc,
target_ulong cs_base, uint32_t flags,
int cflags);
void page_init(void);
void tb_htable_init(void);
void tb_reset_jump(TranslationBlock *tb, int n);
TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
tb_page_addr_t phys_page2);
bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc);
void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
uintptr_t host_pc);
/* Return the current PC from CPU, which may be cached in TB. */
static inline target_ulong log_pc(CPUState *cpu, const TranslationBlock *tb)
{
if (tb_cflags(tb) & CF_PCREL) {
return cpu->cc->get_pc(cpu);
} else {
return tb->pc;
}
}
/*
* Return true if CS is not running in parallel with other cpus, either
* because there are no other cpus or we are within an exclusive context.
*/
static inline bool cpu_in_serial_context(CPUState *cs)
{
return !(cs->tcg_cflags & CF_PARALLEL) || cpu_in_exclusive_context(cs);
}
extern int64_t max_delay;
extern int64_t max_advance;
extern bool one_insn_per_tb;
#endif /* ACCEL_TCG_INTERNAL_H */

View File

@@ -26,7 +26,7 @@
* If the operation must be split into two operations to be
* examined separately for atomicity, return -lg2.
*/
static int required_atomicity(CPUState *cpu, uintptr_t p, MemOp memop)
static int required_atomicity(CPUArchState *env, uintptr_t p, MemOp memop)
{
MemOp atom = memop & MO_ATOM_MASK;
MemOp size = memop & MO_SIZE;
@@ -93,7 +93,7 @@ static int required_atomicity(CPUState *cpu, uintptr_t p, MemOp memop)
* host atomicity in order to avoid racing. This reduction
* avoids looping with cpu_loop_exit_atomic.
*/
if (cpu_in_serial_context(cpu)) {
if (cpu_in_serial_context(env_cpu(env))) {
return MO_8;
}
return atmax;
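The sign convention documented at the top of required_atomicity() is compact: a non-negative result is the MemOp size that must be atomic as a whole, while a negative result -lg2 means the access may be split into two halves, each atomic at 2^lg2 bytes. A small decoder makes it concrete (MO_8..MO_128 are log2 sizes, as in QEMU's MemOp):

#include <stdio.h>

enum { MO_8, MO_16, MO_32, MO_64, MO_128 };   /* log2 of size in bytes */

static void describe(int atmax)
{
    if (atmax >= 0) {
        printf("whole %d-byte access must be atomic\n", 1 << atmax);
    } else {
        /* -lg2: two halves, each atomic at 2^lg2 bytes. */
        printf("split into two atomic %d-byte halves\n", 1 << -atmax);
    }
}

int main(void)
{
    describe(MO_64);    /* whole 8-byte access must be atomic */
    describe(-MO_32);   /* split into two atomic 4-byte halves */
    return 0;
}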
@@ -139,14 +139,14 @@ static inline uint64_t load_atomic8(void *pv)
/**
* load_atomic8_or_exit:
* @cpu: generic cpu state
* @env: cpu context
* @ra: host unwind address
* @pv: host address
*
* Atomically load 8 aligned bytes from @pv.
* If this is not possible, longjmp out to restart serially.
*/
static uint64_t load_atomic8_or_exit(CPUState *cpu, uintptr_t ra, void *pv)
static uint64_t load_atomic8_or_exit(CPUArchState *env, uintptr_t ra, void *pv)
{
if (HAVE_al8) {
return load_atomic8(pv);
@@ -159,28 +159,26 @@ static uint64_t load_atomic8_or_exit(CPUState *cpu, uintptr_t ra, void *pv)
* another process, because the fallback start_exclusive solution
* provides no protection across processes.
*/
WITH_MMAP_LOCK_GUARD() {
if (!page_check_range(h2g(pv), 8, PAGE_WRITE_ORG)) {
uint64_t *p = __builtin_assume_aligned(pv, 8);
return *p;
}
if (!page_check_range(h2g(pv), 8, PAGE_WRITE_ORG)) {
uint64_t *p = __builtin_assume_aligned(pv, 8);
return *p;
}
#endif
/* Ultimate fallback: re-execute in serial context. */
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
/**
* load_atomic16_or_exit:
* @cpu: generic cpu state
* @env: cpu context
* @ra: host unwind address
* @pv: host address
*
* Atomically load 16 aligned bytes from @pv.
* If this is not possible, longjmp out to restart serially.
*/
static Int128 load_atomic16_or_exit(CPUState *cpu, uintptr_t ra, void *pv)
static Int128 load_atomic16_or_exit(CPUArchState *env, uintptr_t ra, void *pv)
{
Int128 *p = __builtin_assume_aligned(pv, 16);
@@ -188,31 +186,29 @@ static Int128 load_atomic16_or_exit(CPUState *cpu, uintptr_t ra, void *pv)
return atomic16_read_ro(p);
}
#ifdef CONFIG_USER_ONLY
/*
* We can only use cmpxchg to emulate a load if the page is writable.
* If the page is not writable, then assume the value is immutable
* and requires no locking. This ignores the case of MAP_SHARED with
* another process, because the fallback start_exclusive solution
* provides no protection across processes.
*
* In system mode all guest pages are writable. For user mode,
* we must take mmap_lock so that the query remains valid until
* the write is complete -- tests/tcg/multiarch/munmap-pthread.c
* is an example that can race.
*/
WITH_MMAP_LOCK_GUARD() {
#ifdef CONFIG_USER_ONLY
if (!page_check_range(h2g(p), 16, PAGE_WRITE_ORG)) {
return *p;
}
if (!page_check_range(h2g(p), 16, PAGE_WRITE_ORG)) {
return *p;
}
#endif
if (HAVE_ATOMIC128_RW) {
return atomic16_read_rw(p);
}
/*
* In system mode all guest pages are writable, and for user-only
* we have just checked writability. Try cmpxchg.
*/
if (HAVE_ATOMIC128_RW) {
return atomic16_read_rw(p);
}
/* Ultimate fallback: re-execute in serial context. */
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
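The cmpxchg-based emulation that the comment above justifies rests on an identity: a compare-and-swap that stores back the same bytes it read performs a pure full-width load, which is why the page must be writable. A hedged sketch with GCC/Clang builtins (x86-64 needs -mcx16 for the 16-byte case; this is not QEMU's atomic16_read_rw):

#include <stdio.h>

/* Read 16 bytes atomically by "swapping" with a dummy value:
 * if *p != 0 the compare fails and nothing is written; if *p == 0
 * it rewrites the same zero. Either way the load itself is atomic,
 * but the mapping must be writable. */
static __int128 atomic_read16(__int128 *p)
{
    return __sync_val_compare_and_swap(p, 0, 0);
}

int main(void)
{
    __int128 v = ((__int128)0x1122334455667788ULL << 64) | 0x99aabbccddeeff00ULL;
    printf("match: %s\n", atomic_read16(&v) == v ? "yes" : "no");
    return 0;
}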
/**
@@ -263,7 +259,7 @@ static uint64_t load_atom_extract_al8x2(void *pv)
/**
* load_atom_extract_al8_or_exit:
* @cpu: generic cpu state
* @env: cpu context
* @ra: host unwind address
* @pv: host address
* @s: object size in bytes, @s <= 4.
@@ -273,7 +269,7 @@ static uint64_t load_atom_extract_al8x2(void *pv)
* 8-byte load and extract.
* The value is returned in the low bits of a uint32_t.
*/
static uint32_t load_atom_extract_al8_or_exit(CPUState *cpu, uintptr_t ra,
static uint32_t load_atom_extract_al8_or_exit(CPUArchState *env, uintptr_t ra,
void *pv, int s)
{
uintptr_t pi = (uintptr_t)pv;
@@ -281,12 +277,12 @@ static uint32_t load_atom_extract_al8_or_exit(CPUState *cpu, uintptr_t ra,
int shr = (HOST_BIG_ENDIAN ? 8 - s - o : o) * 8;
pv = (void *)(pi & ~7);
return load_atomic8_or_exit(cpu, ra, pv) >> shr;
return load_atomic8_or_exit(env, ra, pv) >> shr;
}
/**
* load_atom_extract_al16_or_exit:
* @cpu: generic cpu state
* @env: cpu context
* @ra: host unwind address
* @p: host address
* @s: object size in bytes, @s <= 8.
@@ -299,7 +295,7 @@ static uint32_t load_atom_extract_al8_or_exit(CPUState *cpu, uintptr_t ra,
*
* If this is not possible, longjmp out to restart serially.
*/
static uint64_t load_atom_extract_al16_or_exit(CPUState *cpu, uintptr_t ra,
static uint64_t load_atom_extract_al16_or_exit(CPUArchState *env, uintptr_t ra,
void *pv, int s)
{
uintptr_t pi = (uintptr_t)pv;
@@ -312,7 +308,7 @@ static uint64_t load_atom_extract_al16_or_exit(CPUState *cpu, uintptr_t ra,
* Provoke SIGBUS if possible otherwise.
*/
pv = (void *)(pi & ~7);
r = load_atomic16_or_exit(cpu, ra, pv);
r = load_atomic16_or_exit(env, ra, pv);
r = int128_urshift(r, shr);
return int128_getlo(r);
@@ -394,7 +390,7 @@ static inline uint64_t load_atom_8_by_8_or_4(void *pv)
*
* Load 2 bytes from @p, honoring the atomicity of @memop.
*/
static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra,
static uint16_t load_atom_2(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop)
{
uintptr_t pi = (uintptr_t)pv;
@@ -404,13 +400,10 @@ static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra,
return load_atomic2(pv);
}
if (HAVE_ATOMIC128_RO) {
intptr_t left_in_page = -(pi | TARGET_PAGE_MASK);
if (likely(left_in_page > 8)) {
return load_atom_extract_al16_or_al8(pv, 2);
}
return load_atom_extract_al16_or_al8(pv, 2);
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
switch (atmax) {
case MO_8:
return lduw_he_p(pv);
@@ -421,9 +414,9 @@ static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra,
return load_atomic4(pv - 1) >> 8;
}
if ((pi & 15) != 7) {
return load_atom_extract_al8_or_exit(cpu, ra, pv, 2);
return load_atom_extract_al8_or_exit(env, ra, pv, 2);
}
return load_atom_extract_al16_or_exit(cpu, ra, pv, 2);
return load_atom_extract_al16_or_exit(env, ra, pv, 2);
default:
g_assert_not_reached();
}
@@ -436,7 +429,7 @@ static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra,
*
* Load 4 bytes from @p, honoring the atomicity of @memop.
*/
static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra,
static uint32_t load_atom_4(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop)
{
uintptr_t pi = (uintptr_t)pv;
@@ -446,13 +439,10 @@ static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra,
return load_atomic4(pv);
}
if (HAVE_ATOMIC128_RO) {
intptr_t left_in_page = -(pi | TARGET_PAGE_MASK);
if (likely(left_in_page > 8)) {
return load_atom_extract_al16_or_al8(pv, 4);
}
return load_atom_extract_al16_or_al8(pv, 4);
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
switch (atmax) {
case MO_8:
case MO_16:
@@ -466,9 +456,9 @@ static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra,
return load_atom_extract_al4x2(pv);
case MO_32:
if (!(pi & 4)) {
return load_atom_extract_al8_or_exit(cpu, ra, pv, 4);
return load_atom_extract_al8_or_exit(env, ra, pv, 4);
}
return load_atom_extract_al16_or_exit(cpu, ra, pv, 4);
return load_atom_extract_al16_or_exit(env, ra, pv, 4);
default:
g_assert_not_reached();
}
@@ -481,7 +471,7 @@ static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra,
*
* Load 8 bytes from @p, honoring the atomicity of @memop.
*/
static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra,
static uint64_t load_atom_8(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop)
{
uintptr_t pi = (uintptr_t)pv;
@@ -498,12 +488,12 @@ static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra,
return load_atom_extract_al16_or_al8(pv, 8);
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
if (atmax == MO_64) {
if (!HAVE_al8 && (pi & 7) == 0) {
load_atomic8_or_exit(cpu, ra, pv);
load_atomic8_or_exit(env, ra, pv);
}
return load_atom_extract_al16_or_exit(cpu, ra, pv, 8);
return load_atom_extract_al16_or_exit(env, ra, pv, 8);
}
if (HAVE_al8_fast) {
return load_atom_extract_al8x2(pv);
@@ -519,7 +509,7 @@ static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra,
if (HAVE_al8) {
return load_atom_extract_al8x2(pv);
}
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
default:
g_assert_not_reached();
}
@@ -532,7 +522,7 @@ static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra,
*
* Load 16 bytes from @p, honoring the atomicity of @memop.
*/
static Int128 load_atom_16(CPUState *cpu, uintptr_t ra,
static Int128 load_atom_16(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop)
{
uintptr_t pi = (uintptr_t)pv;
@@ -548,7 +538,7 @@ static Int128 load_atom_16(CPUState *cpu, uintptr_t ra,
return atomic16_read_ro(pv);
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
switch (atmax) {
case MO_8:
memcpy(&r, pv, 16);
@@ -563,20 +553,20 @@ static Int128 load_atom_16(CPUState *cpu, uintptr_t ra,
break;
case MO_64:
if (!HAVE_al8) {
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
a = load_atomic8(pv);
b = load_atomic8(pv + 8);
break;
case -MO_64:
if (!HAVE_al8) {
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
a = load_atom_extract_al8x2(pv);
b = load_atom_extract_al8x2(pv + 8);
break;
case MO_128:
return load_atomic16_or_exit(cpu, ra, pv);
return load_atomic16_or_exit(env, ra, pv);
default:
g_assert_not_reached();
}
@@ -825,7 +815,7 @@ static uint64_t store_whole_le16(void *pv, int size, Int128 val_le)
int sh = o * 8;
Int128 m, v;
qemu_build_assert(HAVE_CMPXCHG128);
qemu_build_assert(HAVE_ATOMIC128_RW);
/* Like MAKE_64BIT_MASK(0, sz), but larger. */
if (sz <= 64) {
@@ -857,7 +847,7 @@ static uint64_t store_whole_le16(void *pv, int size, Int128 val_le)
*
* Store 2 bytes to @p, honoring the atomicity of @memop.
*/
static void store_atom_2(CPUState *cpu, uintptr_t ra,
static void store_atom_2(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop, uint16_t val)
{
uintptr_t pi = (uintptr_t)pv;
@@ -868,7 +858,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
return;
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
if (atmax == MO_8) {
stw_he_p(pv, val);
return;
@@ -887,7 +877,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
return;
}
} else if ((pi & 15) == 7) {
if (HAVE_CMPXCHG128) {
if (HAVE_ATOMIC128_RW) {
Int128 v = int128_lshift(int128_make64(val), 56);
Int128 m = int128_lshift(int128_make64(0xffff), 56);
store_atom_insert_al16(pv - 7, v, m);
@@ -897,7 +887,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
g_assert_not_reached();
}
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
/**
@@ -908,7 +898,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
*
* Store 4 bytes to @p, honoring the atomicity of @memop.
*/
static void store_atom_4(CPUState *cpu, uintptr_t ra,
static void store_atom_4(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop, uint32_t val)
{
uintptr_t pi = (uintptr_t)pv;
@@ -919,7 +909,7 @@ static void store_atom_4(CPUState *cpu, uintptr_t ra,
return;
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
switch (atmax) {
case MO_8:
stl_he_p(pv, val);
@@ -956,12 +946,12 @@ static void store_atom_4(CPUState *cpu, uintptr_t ra,
return;
}
} else {
if (HAVE_CMPXCHG128) {
if (HAVE_ATOMIC128_RW) {
store_whole_le16(pv, 4, int128_make64(cpu_to_le32(val)));
return;
}
}
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
default:
g_assert_not_reached();
}
@@ -975,7 +965,7 @@ static void store_atom_4(CPUState *cpu, uintptr_t ra,
*
* Store 8 bytes to @p, honoring the atomicity of @memop.
*/
static void store_atom_8(CPUState *cpu, uintptr_t ra,
static void store_atom_8(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop, uint64_t val)
{
uintptr_t pi = (uintptr_t)pv;
@@ -986,7 +976,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
return;
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
switch (atmax) {
case MO_8:
stq_he_p(pv, val);
@@ -1021,7 +1011,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
}
break;
case MO_64:
if (HAVE_CMPXCHG128) {
if (HAVE_ATOMIC128_RW) {
store_whole_le16(pv, 8, int128_make64(cpu_to_le64(val)));
return;
}
@@ -1029,7 +1019,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
default:
g_assert_not_reached();
}
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
/**
@@ -1040,7 +1030,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
*
* Store 16 bytes to @p, honoring the atomicity of @memop.
*/
static void store_atom_16(CPUState *cpu, uintptr_t ra,
static void store_atom_16(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop, Int128 val)
{
uintptr_t pi = (uintptr_t)pv;
@@ -1052,7 +1042,7 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
return;
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
a = HOST_BIG_ENDIAN ? int128_gethi(val) : int128_getlo(val);
b = HOST_BIG_ENDIAN ? int128_getlo(val) : int128_gethi(val);
@@ -1076,7 +1066,7 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
}
break;
case -MO_64:
if (HAVE_CMPXCHG128) {
if (HAVE_ATOMIC128_RW) {
uint64_t val_le;
int s2 = pi & 15;
int s1 = 16 - s2;
@@ -1103,9 +1093,13 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
}
break;
case MO_128:
if (HAVE_ATOMIC128_RW) {
atomic16_set(pv, val);
return;
}
break;
default:
g_assert_not_reached();
}
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
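The -MO_64 and MO_128 store paths above fall back to a 16-byte compare-and-swap when only part of a 16-byte line must be written atomically. A minimal sketch of that masked-insert idea (not QEMU code), assuming a host with a 16-byte cmpxchg (e.g. x86-64 built with -mcx16) and using illustrative names:

    #include <stdbool.h>

    typedef unsigned __int128 u128;

    /*
     * Replace only the bytes of *pv selected by @mask with the
     * corresponding bytes of @val, atomically with respect to other
     * 16-byte accesses: reread and retry until the compare-exchange
     * succeeds without interference.
     */
    static void atomic16_insert(u128 *pv, u128 val, u128 mask)
    {
        u128 old = __atomic_load_n(pv, __ATOMIC_RELAXED);
        u128 new;

        do {
            new = (old & ~mask) | (val & mask);
        } while (!__atomic_compare_exchange_n(pv, &old, new, true,
                                              __ATOMIC_RELAXED,
                                              __ATOMIC_RELAXED));
    }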

@@ -8,231 +8,6 @@
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
/*
* Load helpers for tcg-ldst.h
*/
tcg_target_ulong helper_ldub_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
return do_ld1_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
tcg_target_ulong helper_lduw_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
return do_ld2_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
tcg_target_ulong helper_ldul_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
return do_ld4_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
uint64_t helper_ldq_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
return do_ld8_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
/*
* Provide signed versions of the load routines as well. We can of course
 * avoid this for 64-bit data, or for 32-bit data on a 32-bit host.
*/
tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
return (int8_t)helper_ldub_mmu(env, addr, oi, retaddr);
}
tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
return (int16_t)helper_lduw_mmu(env, addr, oi, retaddr);
}
tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
return (int32_t)helper_ldul_mmu(env, addr, oi, retaddr);
}
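The signed variants above are deliberately thin: each reuses the unsigned helper and lets a cast to the narrower signed type perform the sign extension. A standalone illustration of the same pattern (not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t load_u16(uint16_t v) { return v; }  /* zero-extends */

    static int32_t load_s16(uint16_t v)
    {
        /* The cast narrows to int16_t; promotion back sign-extends. */
        return (int16_t)load_u16(v);
    }

    int main(void)
    {
        printf("%u %d\n", load_u16(0xfff0), load_s16(0xfff0));
        /* prints: 65520 -16 */
        return 0;
    }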
Int128 helper_ld16_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
return do_ld16_mmu(env_cpu(env), addr, oi, retaddr);
}
Int128 helper_ld_i128(CPUArchState *env, uint64_t addr, uint32_t oi)
{
return helper_ld16_mmu(env, addr, oi, GETPC());
}
/*
* Store helpers for tcg-ldst.h
*/
void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
do_st1_mmu(env_cpu(env), addr, val, oi, ra);
}
void helper_stw_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
do_st2_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_stl_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
do_st4_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_stq_mmu(CPUArchState *env, uint64_t addr, uint64_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
do_st8_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_st16_mmu(CPUArchState *env, uint64_t addr, Int128 val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
do_st16_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_st_i128(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi)
{
helper_st16_mmu(env, addr, val, oi, GETPC());
}
/*
* Load helpers for cpu_ldst.h
*/
static void plugin_load_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi)
{
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
}
uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra)
{
uint8_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_UB);
ret = do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, oi);
return ret;
}
uint16_t cpu_ldw_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint16_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
ret = do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, oi);
return ret;
}
uint32_t cpu_ldl_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint32_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
ret = do_ld4_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, oi);
return ret;
}
uint64_t cpu_ldq_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint64_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
ret = do_ld8_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, oi);
return ret;
}
Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
Int128 ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
ret = do_ld16_mmu(env_cpu(env), addr, oi, ra);
plugin_load_cb(env, addr, oi);
return ret;
}
/*
* Store helpers for cpu_ldst.h
*/
static void plugin_store_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi)
{
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
MemOpIdx oi, uintptr_t retaddr)
{
helper_stb_mmu(env, addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
void cpu_stw_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
do_st2_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
void cpu_stl_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
do_st4_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
void cpu_stq_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
do_st8_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
void cpu_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
do_st16_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
/*
* Wrappers of the above
*/
uint32_t cpu_ldub_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra)

@@ -1,9 +1,7 @@
tcg_ss = ss.source_set()
common_ss.add(when: 'CONFIG_TCG', if_true: files(
'cpu-exec-common.c',
))
tcg_ss.add(files(
'tcg-all.c',
'cpu-exec-common.c',
'cpu-exec.c',
'tb-maint.c',
'tcg-runtime-gvec.c',
@@ -12,24 +10,18 @@ tcg_ss.add(files(
'translator.c',
))
tcg_ss.add(when: 'CONFIG_USER_ONLY', if_true: files('user-exec.c'))
tcg_ss.add(when: 'CONFIG_SYSTEM_ONLY', if_false: files('user-exec-stub.c'))
if get_option('plugins')
tcg_ss.add(files('plugin-gen.c'))
endif
tcg_ss.add(when: 'CONFIG_SOFTMMU', if_false: files('user-exec-stub.c'))
tcg_ss.add(when: 'CONFIG_PLUGIN', if_true: [files('plugin-gen.c')])
tcg_ss.add(when: libdw, if_true: files('debuginfo.c'))
tcg_ss.add(when: 'CONFIG_LINUX', if_true: files('perf.c'))
specific_ss.add_all(when: 'CONFIG_TCG', if_true: tcg_ss)
specific_ss.add(when: ['CONFIG_SYSTEM_ONLY', 'CONFIG_TCG'], if_true: files(
specific_ss.add(when: ['CONFIG_SOFTMMU', 'CONFIG_TCG'], if_true: files(
'cputlb.c',
))
system_ss.add(when: ['CONFIG_TCG'], if_true: files(
'icount-common.c',
'monitor.c',
))
tcg_module_ss.add(when: ['CONFIG_SYSTEM_ONLY', 'CONFIG_TCG'], if_true: files(
tcg_module_ss.add(when: ['CONFIG_SOFTMMU', 'CONFIG_TCG'], if_true: files(
'tcg-accel-ops.c',
'tcg-accel-ops-mttcg.c',
'tcg-accel-ops-icount.c',

@@ -8,7 +8,6 @@
#include "qemu/osdep.h"
#include "qemu/accel.h"
#include "qemu/qht.h"
#include "qapi/error.h"
#include "qapi/type-helpers.h"
#include "qapi/qapi-commands-machine.h"
@@ -17,8 +16,7 @@
#include "sysemu/cpu-timers.h"
#include "sysemu/tcg.h"
#include "tcg/tcg.h"
#include "internal-common.h"
#include "tb-context.h"
#include "internal.h"
static void dump_drift_info(GString *buf)
@@ -52,153 +50,6 @@ static void dump_accel_info(GString *buf)
one_insn_per_tb ? "on" : "off");
}
static void print_qht_statistics(struct qht_stats hst, GString *buf)
{
uint32_t hgram_opts;
size_t hgram_bins;
char *hgram;
if (!hst.head_buckets) {
return;
}
g_string_append_printf(buf, "TB hash buckets %zu/%zu "
"(%0.2f%% head buckets used)\n",
hst.used_head_buckets, hst.head_buckets,
(double)hst.used_head_buckets /
hst.head_buckets * 100);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_opts |= QDIST_PR_100X | QDIST_PR_PERCENT;
if (qdist_xmax(&hst.occupancy) - qdist_xmin(&hst.occupancy) == 1) {
hgram_opts |= QDIST_PR_NODECIMAL;
}
hgram = qdist_pr(&hst.occupancy, 10, hgram_opts);
g_string_append_printf(buf, "TB hash occupancy %0.2f%% avg chain occ. "
"Histogram: %s\n",
qdist_avg(&hst.occupancy) * 100, hgram);
g_free(hgram);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_bins = qdist_xmax(&hst.chain) - qdist_xmin(&hst.chain);
if (hgram_bins > 10) {
hgram_bins = 10;
} else {
hgram_bins = 0;
hgram_opts |= QDIST_PR_NODECIMAL | QDIST_PR_NOBINRANGE;
}
hgram = qdist_pr(&hst.chain, hgram_bins, hgram_opts);
g_string_append_printf(buf, "TB hash avg chain %0.3f buckets. "
"Histogram: %s\n",
qdist_avg(&hst.chain), hgram);
g_free(hgram);
}
struct tb_tree_stats {
size_t nb_tbs;
size_t host_size;
size_t target_size;
size_t max_target_size;
size_t direct_jmp_count;
size_t direct_jmp2_count;
size_t cross_page;
};
static gboolean tb_tree_stats_iter(gpointer key, gpointer value, gpointer data)
{
const TranslationBlock *tb = value;
struct tb_tree_stats *tst = data;
tst->nb_tbs++;
tst->host_size += tb->tc.size;
tst->target_size += tb->size;
if (tb->size > tst->max_target_size) {
tst->max_target_size = tb->size;
}
if (tb->page_addr[1] != -1) {
tst->cross_page++;
}
if (tb->jmp_reset_offset[0] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp_count++;
if (tb->jmp_reset_offset[1] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp2_count++;
}
}
return false;
}
static void tlb_flush_counts(size_t *pfull, size_t *ppart, size_t *pelide)
{
CPUState *cpu;
size_t full = 0, part = 0, elide = 0;
CPU_FOREACH(cpu) {
full += qatomic_read(&cpu->neg.tlb.c.full_flush_count);
part += qatomic_read(&cpu->neg.tlb.c.part_flush_count);
elide += qatomic_read(&cpu->neg.tlb.c.elide_flush_count);
}
*pfull = full;
*ppart = part;
*pelide = elide;
}
static void tcg_dump_info(GString *buf)
{
g_string_append_printf(buf, "[TCG profiler not compiled]\n");
}
static void dump_exec_info(GString *buf)
{
struct tb_tree_stats tst = {};
struct qht_stats hst;
size_t nb_tbs, flush_full, flush_part, flush_elide;
tcg_tb_foreach(tb_tree_stats_iter, &tst);
nb_tbs = tst.nb_tbs;
/* XXX: avoid using doubles ? */
g_string_append_printf(buf, "Translation buffer state:\n");
/*
* Report total code size including the padding and TB structs;
* otherwise users might think "-accel tcg,tb-size" is not honoured.
* For avg host size we use the precise numbers from tb_tree_stats though.
*/
g_string_append_printf(buf, "gen code size %zu/%zu\n",
tcg_code_size(), tcg_code_capacity());
g_string_append_printf(buf, "TB count %zu\n", nb_tbs);
g_string_append_printf(buf, "TB avg target size %zu max=%zu bytes\n",
nb_tbs ? tst.target_size / nb_tbs : 0,
tst.max_target_size);
g_string_append_printf(buf, "TB avg host size %zu bytes "
"(expansion ratio: %0.1f)\n",
nb_tbs ? tst.host_size / nb_tbs : 0,
tst.target_size ?
(double)tst.host_size / tst.target_size : 0);
g_string_append_printf(buf, "cross page TB count %zu (%zu%%)\n",
tst.cross_page,
nb_tbs ? (tst.cross_page * 100) / nb_tbs : 0);
g_string_append_printf(buf, "direct jump count %zu (%zu%%) "
"(2 jumps=%zu %zu%%)\n",
tst.direct_jmp_count,
nb_tbs ? (tst.direct_jmp_count * 100) / nb_tbs : 0,
tst.direct_jmp2_count,
nb_tbs ? (tst.direct_jmp2_count * 100) / nb_tbs : 0);
qht_statistics_init(&tb_ctx.htable, &hst);
print_qht_statistics(hst, buf);
qht_statistics_destroy(&hst);
g_string_append_printf(buf, "\nStatistics:\n");
g_string_append_printf(buf, "TB flush count %u\n",
qatomic_read(&tb_ctx.tb_flush_count));
g_string_append_printf(buf, "TB invalidate count %u\n",
qatomic_read(&tb_ctx.tb_phys_invalidate_count));
tlb_flush_counts(&flush_full, &flush_part, &flush_elide);
g_string_append_printf(buf, "TLB full flushes %zu\n", flush_full);
g_string_append_printf(buf, "TLB partial flushes %zu\n", flush_part);
g_string_append_printf(buf, "TLB elided flushes %zu\n", flush_elide);
tcg_dump_info(buf);
}
HumanReadableText *qmp_x_query_jit(Error **errp)
{
g_autoptr(GString) buf = g_string_new("");
@@ -215,11 +66,6 @@ HumanReadableText *qmp_x_query_jit(Error **errp)
return human_readable_text_from_str(buf);
}
static void tcg_dump_op_count(GString *buf)
{
g_string_append_printf(buf, "[TCG profiler not compiled]\n");
}
HumanReadableText *qmp_x_query_opcount(Error **errp)
{
g_autoptr(GString) buf = g_string_new("");
@@ -235,6 +81,37 @@ HumanReadableText *qmp_x_query_opcount(Error **errp)
return human_readable_text_from_str(buf);
}
#ifdef CONFIG_PROFILER
int64_t dev_time;
HumanReadableText *qmp_x_query_profile(Error **errp)
{
g_autoptr(GString) buf = g_string_new("");
static int64_t last_cpu_exec_time;
int64_t cpu_exec_time;
int64_t delta;
cpu_exec_time = tcg_cpu_exec_time();
delta = cpu_exec_time - last_cpu_exec_time;
g_string_append_printf(buf, "async time %" PRId64 " (%0.3f)\n",
dev_time, dev_time / (double)NANOSECONDS_PER_SECOND);
g_string_append_printf(buf, "qemu time %" PRId64 " (%0.3f)\n",
delta, delta / (double)NANOSECONDS_PER_SECOND);
last_cpu_exec_time = cpu_exec_time;
dev_time = 0;
return human_readable_text_from_str(buf);
}
#else
HumanReadableText *qmp_x_query_profile(Error **errp)
{
error_setg(errp, "Internal profiler not compiled");
return NULL;
}
#endif
static void hmp_tcg_register(void)
{
monitor_register_hmp_info_hrt("jit", qmp_x_query_jit);

@@ -104,7 +104,7 @@ static void gen_empty_udata_cb(void)
TCGv_ptr udata = tcg_temp_ebb_new_ptr();
tcg_gen_movi_ptr(udata, 0);
tcg_gen_ld_i32(cpu_index, tcg_env,
tcg_gen_ld_i32(cpu_index, cpu_env,
-offsetof(ArchCPU, env) + offsetof(CPUState, cpu_index));
gen_helper_plugin_vcpu_udata_cb(cpu_index, udata);
@@ -138,7 +138,7 @@ static void gen_empty_mem_cb(TCGv_i64 addr, uint32_t info)
tcg_gen_movi_i32(meminfo, info);
tcg_gen_movi_ptr(udata, 0);
tcg_gen_ld_i32(cpu_index, tcg_env,
tcg_gen_ld_i32(cpu_index, cpu_env,
-offsetof(ArchCPU, env) + offsetof(CPUState, cpu_index));
gen_helper_plugin_vcpu_mem_cb(cpu_index, meminfo, addr, udata);
@@ -157,7 +157,7 @@ static void gen_empty_mem_helper(void)
TCGv_ptr ptr = tcg_temp_ebb_new_ptr();
tcg_gen_movi_ptr(ptr, 0);
tcg_gen_st_ptr(ptr, tcg_env, offsetof(CPUState, plugin_mem_cbs) -
tcg_gen_st_ptr(ptr, cpu_env, offsetof(CPUState, plugin_mem_cbs) -
offsetof(ArchCPU, env));
tcg_temp_free_ptr(ptr);
}
@@ -327,7 +327,8 @@ static TCGOp *copy_st_ptr(TCGOp **begin_op, TCGOp *op)
return op;
}
static TCGOp *copy_call(TCGOp **begin_op, TCGOp *op, void *func, int *cb_idx)
static TCGOp *copy_call(TCGOp **begin_op, TCGOp *op, void *empty_func,
void *func, int *cb_idx)
{
TCGOp *old_op;
int func_idx;
@@ -371,7 +372,8 @@ static TCGOp *append_udata_cb(const struct qemu_plugin_dyn_cb *cb,
}
/* call */
op = copy_call(&begin_op, op, cb->f.vcpu_udata, cb_idx);
op = copy_call(&begin_op, op, HELPER(plugin_vcpu_udata_cb),
cb->f.vcpu_udata, cb_idx);
return op;
}
@@ -418,7 +420,8 @@ static TCGOp *append_mem_cb(const struct qemu_plugin_dyn_cb *cb,
if (type == PLUGIN_GEN_CB_MEM) {
/* call */
op = copy_call(&begin_op, op, cb->f.vcpu_udata, cb_idx);
op = copy_call(&begin_op, op, HELPER(plugin_vcpu_mem_cb),
cb->f.vcpu_udata, cb_idx);
}
return op;
@@ -578,7 +581,7 @@ void plugin_gen_disable_mem_helpers(void)
if (!tcg_ctx->plugin_tb->mem_helper) {
return;
}
tcg_gen_st_ptr(tcg_constant_ptr(NULL), tcg_env,
tcg_gen_st_ptr(tcg_constant_ptr(NULL), cpu_env,
offsetof(CPUState, plugin_mem_cbs) - offsetof(ArchCPU, env));
}
@@ -846,7 +849,7 @@ void plugin_gen_insn_start(CPUState *cpu, const DisasContextBase *db)
} else {
if (ptb->vaddr2 == -1) {
ptb->vaddr2 = TARGET_PAGE_ALIGN(db->pc_first);
get_page_addr_code_hostp(cpu_env(cpu), ptb->vaddr2, &ptb->haddr2);
get_page_addr_code_hostp(cpu->env_ptr, ptb->vaddr2, &ptb->haddr2);
}
pinsn->haddr = ptb->haddr2 + pinsn->vaddr - ptb->vaddr2;
}
@@ -863,14 +866,10 @@ void plugin_gen_insn_end(void)
* do any clean-up here and make sure things are reset in
* plugin_gen_tb_start.
*/
void plugin_gen_tb_end(CPUState *cpu, size_t num_insns)
void plugin_gen_tb_end(CPUState *cpu)
{
struct qemu_plugin_tb *ptb = tcg_ctx->plugin_tb;
/* translator may have removed instructions, update final count */
g_assert(num_insns <= ptb->n);
ptb->n = num_insns;
/* collect instrumentation requests */
qemu_plugin_tb_trans_cb(cpu, ptb);

@@ -35,16 +35,16 @@
#define TB_JMP_ADDR_MASK (TB_JMP_PAGE_SIZE - 1)
#define TB_JMP_PAGE_MASK (TB_JMP_CACHE_SIZE - TB_JMP_PAGE_SIZE)
static inline unsigned int tb_jmp_cache_hash_page(vaddr pc)
static inline unsigned int tb_jmp_cache_hash_page(target_ulong pc)
{
vaddr tmp;
target_ulong tmp;
tmp = pc ^ (pc >> (TARGET_PAGE_BITS - TB_JMP_PAGE_BITS));
return (tmp >> (TARGET_PAGE_BITS - TB_JMP_PAGE_BITS)) & TB_JMP_PAGE_MASK;
}
static inline unsigned int tb_jmp_cache_hash_func(vaddr pc)
static inline unsigned int tb_jmp_cache_hash_func(target_ulong pc)
{
vaddr tmp;
target_ulong tmp;
tmp = pc ^ (pc >> (TARGET_PAGE_BITS - TB_JMP_PAGE_BITS));
return (((tmp >> (TARGET_PAGE_BITS - TB_JMP_PAGE_BITS)) & TB_JMP_PAGE_MASK)
| (tmp & TB_JMP_ADDR_MASK));
@@ -53,7 +53,7 @@ static inline unsigned int tb_jmp_cache_hash_func(vaddr pc)
#else
/* In user-mode we can get better hashing because we do not have a TLB */
static inline unsigned int tb_jmp_cache_hash_func(vaddr pc)
static inline unsigned int tb_jmp_cache_hash_func(target_ulong pc)
{
return (pc ^ (pc >> TB_JMP_CACHE_BITS)) & (TB_JMP_CACHE_SIZE - 1);
}
@@ -61,7 +61,7 @@ static inline unsigned int tb_jmp_cache_hash_func(vaddr pc)
#endif /* CONFIG_SOFTMMU */
static inline
uint32_t tb_hash_func(tb_page_addr_t phys_pc, vaddr pc,
uint32_t tb_hash_func(tb_page_addr_t phys_pc, target_ulong pc,
uint32_t flags, uint64_t flags2, uint32_t cf_mask)
{
return qemu_xxhash8(phys_pc, pc, flags2, flags, cf_mask);
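For the user-mode tb_jmp_cache_hash_func above, the hash simply folds the high bits of the pc into the low bits before masking to the cache size. A worked example (not QEMU code), with TB_JMP_CACHE_BITS = 12 assumed only for the demonstration:

    #include <stdint.h>
    #include <stdio.h>

    #define TB_JMP_CACHE_BITS 12                 /* assumed for the example */
    #define TB_JMP_CACHE_SIZE (1u << TB_JMP_CACHE_BITS)

    static unsigned int jmp_cache_hash(uint64_t pc)
    {
        return (pc ^ (pc >> TB_JMP_CACHE_BITS)) & (TB_JMP_CACHE_SIZE - 1);
    }

    int main(void)
    {
        /* 0x400123 ^ (0x400123 >> 12) = 0x400523; low 12 bits are 0x523 */
        printf("0x%x\n", jmp_cache_hash(0x400123));
        return 0;
    }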

@@ -21,7 +21,7 @@ struct CPUJumpCache {
struct rcu_head rcu;
struct {
TranslationBlock *tb;
vaddr pc;
target_ulong pc;
} array[TB_JMP_CACHE_SIZE];
};
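A direct-mapped cache like this holds at most one TB per slot, so a lookup must verify the stored pc before trusting the entry. A sketch of the consultation pattern, with invented names:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { void *tb; uint64_t pc; } JmpEntry;

    static void *jmp_cache_lookup(JmpEntry *array, size_t nslots,
                                  unsigned int hash, uint64_t pc)
    {
        JmpEntry *e = &array[hash % nslots];

        /* Distinct pcs can collide on one slot; a mismatch is a miss
         * and the caller falls back to the full hash-table lookup. */
        return (e->tb && e->pc == pc) ? e->tb : NULL;
    }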

@@ -1,5 +1,5 @@
/*
* Translation Block Maintenance
* Translation Block Maintaince
*
* Copyright (c) 2003 Fabrice Bellard
*
@@ -29,8 +29,7 @@
#include "tcg/tcg.h"
#include "tb-hash.h"
#include "tb-context.h"
#include "internal-common.h"
#include "internal-target.h"
#include "internal.h"
/* List iterators for lists of tagged pointers in TranslationBlock. */
@@ -71,7 +70,17 @@ typedef struct PageDesc PageDesc;
*/
#define assert_page_locked(pd) tcg_debug_assert(have_mmap_lock())
static inline void tb_lock_pages(const TranslationBlock *tb) { }
static inline void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
PageDesc **ret_p2, tb_page_addr_t phys2,
bool alloc)
{
*ret_p1 = NULL;
*ret_p2 = NULL;
}
static inline void page_unlock(PageDesc *pd) { }
static inline void page_lock_tb(const TranslationBlock *tb) { }
static inline void page_unlock_tb(const TranslationBlock *tb) { }
/*
* For user-only, since we are protecting all of memory with a single lock,
@@ -87,9 +96,9 @@ static void tb_remove_all(void)
}
/* Call with mmap_lock held. */
static void tb_record(TranslationBlock *tb)
static void tb_record(TranslationBlock *tb, PageDesc *p1, PageDesc *p2)
{
vaddr addr;
target_ulong addr;
int flags;
assert_memory_lock();
@@ -208,12 +217,13 @@ static PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc)
{
PageDesc *pd;
void **lp;
int i;
/* Level 1. Always allocated. */
lp = l1_map + ((index >> v_l1_shift) & (v_l1_size - 1));
/* Level 2..N-1. */
for (int i = v_l2_levels; i > 0; i--) {
for (i = v_l2_levels; i > 0; i--) {
void **p = qatomic_rcu_read(lp);
if (p == NULL) {
@@ -381,108 +391,12 @@ static void page_lock(PageDesc *pd)
qemu_spin_lock(&pd->lock);
}
/* Like qemu_spin_trylock, returns false on success */
static bool page_trylock(PageDesc *pd)
{
bool busy = qemu_spin_trylock(&pd->lock);
if (!busy) {
page_lock__debug(pd);
}
return busy;
}
static void page_unlock(PageDesc *pd)
{
qemu_spin_unlock(&pd->lock);
page_unlock__debug(pd);
}
void tb_lock_page0(tb_page_addr_t paddr)
{
page_lock(page_find_alloc(paddr >> TARGET_PAGE_BITS, true));
}
void tb_lock_page1(tb_page_addr_t paddr0, tb_page_addr_t paddr1)
{
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS;
PageDesc *pd0, *pd1;
if (pindex0 == pindex1) {
/* Identical pages, and the first page is already locked. */
return;
}
pd1 = page_find_alloc(pindex1, true);
if (pindex0 < pindex1) {
/* Correct locking order, we may block. */
page_lock(pd1);
return;
}
/* Incorrect locking order, we cannot block lest we deadlock. */
if (!page_trylock(pd1)) {
return;
}
/*
* Drop the lock on page0 and get both page locks in the right order.
* Restart translation via longjmp.
*/
pd0 = page_find_alloc(pindex0, false);
page_unlock(pd0);
page_lock(pd1);
page_lock(pd0);
siglongjmp(tcg_ctx->jmp_trans, -3);
}
void tb_unlock_page1(tb_page_addr_t paddr0, tb_page_addr_t paddr1)
{
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS;
if (pindex0 != pindex1) {
page_unlock(page_find_alloc(pindex1, false));
}
}
static void tb_lock_pages(TranslationBlock *tb)
{
tb_page_addr_t paddr0 = tb_page_addr0(tb);
tb_page_addr_t paddr1 = tb_page_addr1(tb);
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS;
if (unlikely(paddr0 == -1)) {
return;
}
if (unlikely(paddr1 != -1) && pindex0 != pindex1) {
if (pindex0 < pindex1) {
page_lock(page_find_alloc(pindex0, true));
page_lock(page_find_alloc(pindex1, true));
return;
}
page_lock(page_find_alloc(pindex1, true));
}
page_lock(page_find_alloc(pindex0, true));
}
void tb_unlock_pages(TranslationBlock *tb)
{
tb_page_addr_t paddr0 = tb_page_addr0(tb);
tb_page_addr_t paddr1 = tb_page_addr1(tb);
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS;
if (unlikely(paddr0 == -1)) {
return;
}
if (unlikely(paddr1 != -1) && pindex0 != pindex1) {
page_unlock(page_find_alloc(pindex1, false));
}
page_unlock(page_find_alloc(pindex0, false));
}
static inline struct page_entry *
page_entry_new(PageDesc *pd, tb_page_addr_t index)
{
@@ -506,10 +420,13 @@ static void page_entry_destroy(gpointer p)
/* returns false on success */
static bool page_entry_trylock(struct page_entry *pe)
{
bool busy = page_trylock(pe->pd);
bool busy;
busy = qemu_spin_trylock(&pe->pd->lock);
if (!busy) {
g_assert(!pe->locked);
pe->locked = true;
page_lock__debug(pe->pd);
}
return busy;
}
@@ -687,7 +604,8 @@ static void tb_remove_all(void)
* Add the tb in the target page and protect it if necessary.
* Called with @p->lock held.
*/
static void tb_page_add(PageDesc *p, TranslationBlock *tb, unsigned int n)
static inline void tb_page_add(PageDesc *p, TranslationBlock *tb,
unsigned int n)
{
bool page_already_protected;
@@ -707,21 +625,15 @@ static void tb_page_add(PageDesc *p, TranslationBlock *tb, unsigned int n)
}
}
static void tb_record(TranslationBlock *tb)
static void tb_record(TranslationBlock *tb, PageDesc *p1, PageDesc *p2)
{
tb_page_addr_t paddr0 = tb_page_addr0(tb);
tb_page_addr_t paddr1 = tb_page_addr1(tb);
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr0 >> TARGET_PAGE_BITS;
assert(paddr0 != -1);
if (unlikely(paddr1 != -1) && pindex0 != pindex1) {
tb_page_add(page_find_alloc(pindex1, false), tb, 1);
tb_page_add(p1, tb, 0);
if (unlikely(p2)) {
tb_page_add(p2, tb, 1);
}
tb_page_add(page_find_alloc(pindex0, false), tb, 0);
}
static void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
static inline void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
{
TranslationBlock *tb1;
uintptr_t *pprev;
@@ -741,16 +653,74 @@ static void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
static void tb_remove(TranslationBlock *tb)
{
tb_page_addr_t paddr0 = tb_page_addr0(tb);
tb_page_addr_t paddr1 = tb_page_addr1(tb);
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr0 >> TARGET_PAGE_BITS;
PageDesc *pd;
assert(paddr0 != -1);
if (unlikely(paddr1 != -1) && pindex0 != pindex1) {
tb_page_remove(page_find_alloc(pindex1, false), tb);
pd = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS);
tb_page_remove(pd, tb);
if (unlikely(tb->page_addr[1] != -1)) {
pd = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS);
tb_page_remove(pd, tb);
}
}
static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
PageDesc **ret_p2, tb_page_addr_t phys2, bool alloc)
{
PageDesc *p1, *p2;
tb_page_addr_t page1;
tb_page_addr_t page2;
assert_memory_lock();
g_assert(phys1 != -1);
page1 = phys1 >> TARGET_PAGE_BITS;
page2 = phys2 >> TARGET_PAGE_BITS;
p1 = page_find_alloc(page1, alloc);
if (ret_p1) {
*ret_p1 = p1;
}
if (likely(phys2 == -1)) {
page_lock(p1);
return;
} else if (page1 == page2) {
page_lock(p1);
if (ret_p2) {
*ret_p2 = p1;
}
return;
}
p2 = page_find_alloc(page2, alloc);
if (ret_p2) {
*ret_p2 = p2;
}
if (page1 < page2) {
page_lock(p1);
page_lock(p2);
} else {
page_lock(p2);
page_lock(p1);
}
}
/* lock the page(s) of a TB in the correct acquisition order */
static void page_lock_tb(const TranslationBlock *tb)
{
page_lock_pair(NULL, tb_page_addr0(tb), NULL, tb_page_addr1(tb), false);
}
static void page_unlock_tb(const TranslationBlock *tb)
{
PageDesc *p1 = page_find(tb_page_addr0(tb) >> TARGET_PAGE_BITS);
page_unlock(p1);
if (unlikely(tb_page_addr1(tb) != -1)) {
PageDesc *p2 = page_find(tb_page_addr1(tb) >> TARGET_PAGE_BITS);
if (p2 != p1) {
page_unlock(p2);
}
}
tb_page_remove(page_find_alloc(pindex0, false), tb);
}
#endif /* CONFIG_USER_ONLY */
@@ -955,16 +925,18 @@ static void tb_phys_invalidate__locked(TranslationBlock *tb)
void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
{
if (page_addr == -1 && tb_page_addr0(tb) != -1) {
tb_lock_pages(tb);
page_lock_tb(tb);
do_tb_phys_invalidate(tb, true);
tb_unlock_pages(tb);
page_unlock_tb(tb);
} else {
do_tb_phys_invalidate(tb, false);
}
}
/*
* Add a new TB and link it to the physical page tables.
* Add a new TB and link it to the physical page tables. phys_page2 is
* (-1) to indicate that only one page contains the TB.
*
* Called with mmap_lock held for user-mode emulation.
*
* Returns a pointer @tb, or a pointer to an existing TB that matches @tb.
@@ -972,29 +944,43 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
* for the same block of guest code that @tb corresponds to. In that case,
* the caller should discard the original @tb, and use instead the returned TB.
*/
TranslationBlock *tb_link_page(TranslationBlock *tb)
TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
tb_page_addr_t phys_page2)
{
PageDesc *p;
PageDesc *p2 = NULL;
void *existing_tb = NULL;
uint32_t h;
assert_memory_lock();
tcg_debug_assert(!(tb->cflags & CF_INVALID));
tb_record(tb);
/*
 * Add the TB to the page list, acquiring first the pages' locks.
* We keep the locks held until after inserting the TB in the hash table,
* so that if the insertion fails we know for sure that the TBs are still
* in the page descriptors.
* Note that inserting into the hash table first isn't an option, since
* we can only insert TBs that are fully initialized.
*/
page_lock_pair(&p, phys_pc, &p2, phys_page2, true);
tb_record(tb, p, p2);
/* add in the hash table */
h = tb_hash_func(tb_page_addr0(tb), (tb->cflags & CF_PCREL ? 0 : tb->pc),
h = tb_hash_func(phys_pc, (tb->cflags & CF_PCREL ? 0 : tb->pc),
tb->flags, tb->cs_base, tb->cflags);
qht_insert(&tb_ctx.htable, tb, h, &existing_tb);
/* remove TB from the page(s) if we couldn't insert it */
if (unlikely(existing_tb)) {
tb_remove(tb);
tb_unlock_pages(tb);
return existing_tb;
tb = existing_tb;
}
tb_unlock_pages(tb);
if (p2 && p2 != p) {
page_unlock(p2);
}
page_unlock(p);
return tb;
}
@@ -1083,8 +1069,7 @@ bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
if (current_tb_modified) {
/* Force execution of one insn next time. */
CPUState *cpu = current_cpu;
cpu->cflags_next_tb =
1 | CF_LAST_IO | CF_NOIRQ | curr_cflags(current_cpu);
cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
return true;
}
return false;
@@ -1107,9 +1092,6 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
TranslationBlock *current_tb = retaddr ? tcg_tb_lookup(retaddr) : NULL;
#endif /* TARGET_HAS_PRECISE_SMC */
/* Range may not cross a page. */
tcg_debug_assert(((start ^ last) & TARGET_PAGE_MASK) == 0);
/*
* We remove all the TBs in the range [start, last].
* XXX: see if in some cases it could be faster to invalidate all the code
@@ -1154,8 +1136,7 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
if (current_tb_modified) {
page_collection_unlock(pages);
/* Force execution of one insn next time. */
current_cpu->cflags_next_tb =
1 | CF_LAST_IO | CF_NOIRQ | curr_cflags(current_cpu);
current_cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
mmap_unlock();
cpu_loop_exit_noexc(current_cpu);
}
@@ -1201,17 +1182,15 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t last)
index_last = last >> TARGET_PAGE_BITS;
for (index = start >> TARGET_PAGE_BITS; index <= index_last; index++) {
PageDesc *pd = page_find(index);
tb_page_addr_t page_start, page_last;
tb_page_addr_t bound;
if (pd == NULL) {
continue;
}
assert_page_locked(pd);
page_start = index << TARGET_PAGE_BITS;
page_last = page_start | ~TARGET_PAGE_MASK;
page_last = MIN(page_last, last);
tb_invalidate_phys_page_range__locked(pages, pd,
page_start, page_last, 0);
bound = (index << TARGET_PAGE_BITS) | ~TARGET_PAGE_MASK;
bound = MIN(bound, last);
tb_invalidate_phys_page_range__locked(pages, pd, start, bound, 0);
}
page_collection_unlock(pages);
}
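The page_lock_pair() restored above avoids deadlock by always taking the lower-indexed page lock first. The same discipline in generic form (a sketch, not QEMU code):

    #include <pthread.h>
    #include <stdint.h>

    /* Acquire two locks in a globally consistent (address) order, so
     * two threads locking the same pair can never deadlock. */
    static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
    {
        if (a == b) {
            pthread_mutex_lock(a);
        } else if ((uintptr_t)a < (uintptr_t)b) {
            pthread_mutex_lock(a);
            pthread_mutex_lock(b);
        } else {
            pthread_mutex_lock(b);
            pthread_mutex_lock(a);
        }
    }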

@@ -111,14 +111,14 @@ void icount_prepare_for_run(CPUState *cpu, int64_t cpu_budget)
* each vCPU execution. However u16.high can be raised
* asynchronously by cpu_exit/cpu_interrupt/tcg_handle_interrupt
*/
g_assert(cpu->neg.icount_decr.u16.low == 0);
g_assert(cpu_neg(cpu)->icount_decr.u16.low == 0);
g_assert(cpu->icount_extra == 0);
replay_mutex_lock();
cpu->icount_budget = MIN(icount_get_limit(), cpu_budget);
insns_left = MIN(0xffff, cpu->icount_budget);
cpu->neg.icount_decr.u16.low = insns_left;
cpu_neg(cpu)->icount_decr.u16.low = insns_left;
cpu->icount_extra = cpu->icount_budget - insns_left;
if (cpu->icount_budget == 0) {
@@ -138,7 +138,7 @@ void icount_process_data(CPUState *cpu)
icount_update(cpu);
/* Reset the counters */
cpu->neg.icount_decr.u16.low = 0;
cpu_neg(cpu)->icount_decr.u16.low = 0;
cpu->icount_extra = 0;
cpu->icount_budget = 0;
@@ -153,7 +153,7 @@ void icount_handle_interrupt(CPUState *cpu, int mask)
tcg_handle_interrupt(cpu, mask);
if (qemu_cpu_is_self(cpu) &&
!cpu->neg.can_do_io
!cpu->can_do_io
&& (mask & ~old_mask) != 0) {
cpu_abort(cpu, "Raised interrupt while not in I/O function");
}
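icount_prepare_for_run() above splits the instruction budget because icount_decr.u16.low is only 16 bits wide; the remainder is parked in icount_extra and refilled later. A worked example (not QEMU code) with a hypothetical budget:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t  budget = 100000;                     /* hypothetical */
        uint16_t low    = budget < 0xffff ? budget : 0xffff;
        int64_t  extra  = budget - low;               /* refilled later */

        printf("low=%" PRIu16 " extra=%" PRId64 "\n", low, extra);
        /* prints: low=65535 extra=34465 */
        return 0;
    }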

@@ -32,7 +32,7 @@
#include "qemu/guest-random.h"
#include "exec/exec-all.h"
#include "hw/boards.h"
#include "tcg/startup.h"
#include "tcg/tcg.h"
#include "tcg-accel-ops.h"
#include "tcg-accel-ops-mttcg.h"
@@ -80,7 +80,7 @@ static void *mttcg_cpu_thread_fn(void *arg)
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->neg.can_do_io = true;
cpu->can_do_io = 1;
current_cpu = cpu;
cpu_thread_signal_created(cpu);
qemu_guest_random_seed_thread_part2(cpu->random_seed);
@@ -100,9 +100,14 @@ static void *mttcg_cpu_thread_fn(void *arg)
break;
case EXCP_HALTED:
/*
* Usually cpu->halted is set, but may have already been
* reset by another thread by the time we arrive here.
* during start-up the vCPU is reset and the thread is
* kicked several times. If we don't ensure we go back
* to sleep in the halted state we won't cleanly
* start-up when the vCPU is enabled.
*
* cpu->halted should ensure we sleep in wait_io_event
*/
g_assert(cpu->halted);
break;
case EXCP_ATOMIC:
qemu_mutex_unlock_iothread();
@@ -147,4 +152,8 @@ void mttcg_start_vcpu_thread(CPUState *cpu)
qemu_thread_create(cpu->thread, thread_name, mttcg_cpu_thread_fn,
cpu, QEMU_THREAD_JOINABLE);
#ifdef _WIN32
cpu->hThread = qemu_thread_get_handle(cpu->thread);
#endif
}

@@ -32,7 +32,7 @@
#include "qemu/notify.h"
#include "qemu/guest-random.h"
#include "exec/exec-all.h"
#include "tcg/startup.h"
#include "tcg/tcg.h"
#include "tcg-accel-ops.h"
#include "tcg-accel-ops-rr.h"
#include "tcg-accel-ops-icount.h"
@@ -192,7 +192,7 @@ static void *rr_cpu_thread_fn(void *arg)
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->neg.can_do_io = true;
cpu->can_do_io = 1;
cpu_thread_signal_created(cpu);
qemu_guest_random_seed_thread_part2(cpu->random_seed);
@@ -329,12 +329,15 @@ void rr_start_vcpu_thread(CPUState *cpu)
single_tcg_halt_cond = cpu->halt_cond;
single_tcg_cpu_thread = cpu->thread;
#ifdef _WIN32
cpu->hThread = qemu_thread_get_handle(cpu->thread);
#endif
} else {
/* we share the thread */
cpu->thread = single_tcg_cpu_thread;
cpu->halt_cond = single_tcg_halt_cond;
cpu->thread_id = first_cpu->thread_id;
cpu->neg.can_do_io = 1;
cpu->can_do_io = 1;
cpu->created = true;
}
}

@@ -34,7 +34,6 @@
#include "qemu/timer.h"
#include "exec/exec-all.h"
#include "exec/hwaddr.h"
#include "exec/tb-flush.h"
#include "exec/gdbstub.h"
#include "tcg-accel-ops.h"
@@ -71,20 +70,23 @@ void tcg_cpus_destroy(CPUState *cpu)
int tcg_cpus_exec(CPUState *cpu)
{
int ret;
#ifdef CONFIG_PROFILER
int64_t ti;
#endif
assert(tcg_enabled());
#ifdef CONFIG_PROFILER
ti = profile_getclock();
#endif
cpu_exec_start(cpu);
ret = cpu_exec(cpu);
cpu_exec_end(cpu);
#ifdef CONFIG_PROFILER
qatomic_set(&tcg_ctx->prof.cpu_exec_time,
tcg_ctx->prof.cpu_exec_time + profile_getclock() - ti);
#endif
return ret;
}
static void tcg_cpu_reset_hold(CPUState *cpu)
{
tcg_flush_jmp_cache(cpu);
tlb_flush(cpu);
}
/* mask must never be zero, except for A20 change call */
void tcg_handle_interrupt(CPUState *cpu, int mask)
{
@@ -99,7 +101,7 @@ void tcg_handle_interrupt(CPUState *cpu, int mask)
if (!qemu_cpu_is_self(cpu)) {
qemu_cpu_kick(cpu);
} else {
qatomic_set(&cpu->neg.icount_decr.u16.high, -1);
qatomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1);
}
}
@@ -213,7 +215,6 @@ static void tcg_accel_ops_init(AccelOpsClass *ops)
}
}
ops->cpu_reset_hold = tcg_cpu_reset_hold;
ops->supports_guest_debug = tcg_supports_guest_debug;
ops->insert_breakpoint = tcg_insert_breakpoint;
ops->remove_breakpoint = tcg_remove_breakpoint;
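The CONFIG_PROFILER hunk above brackets cpu_exec() with clock samples and accumulates the delta. The same pattern in isolation (a sketch, not QEMU code), with POSIX clock_gettime() standing in for profile_getclock():

    #include <stdint.h>
    #include <time.h>

    static int64_t prof_clock(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * INT64_C(1000000000) + ts.tv_nsec;
    }

    static int64_t cpu_exec_time;        /* like prof.cpu_exec_time */

    static int timed_exec(int (*fn)(void *), void *arg)
    {
        int64_t ti = prof_clock();
        int ret = fn(arg);

        /* Relaxed store mirrors the qatomic_set() accumulation above. */
        __atomic_store_n(&cpu_exec_time,
                         cpu_exec_time + prof_clock() - ti,
                         __ATOMIC_RELAXED);
        return ret;
    }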

@@ -27,7 +27,7 @@
#include "sysemu/tcg.h"
#include "exec/replay-core.h"
#include "sysemu/cpu-timers.h"
#include "tcg/startup.h"
#include "tcg/tcg.h"
#include "tcg/oversized-guest.h"
#include "qapi/error.h"
#include "qemu/error-report.h"
@@ -38,7 +38,7 @@
#if !defined(CONFIG_USER_ONLY)
#include "hw/boards.h"
#endif
#include "internal-target.h"
#include "internal.h"
struct TCGState {
AccelState parent_obj;
@@ -64,23 +64,37 @@ DECLARE_INSTANCE_CHECKER(TCGState, TCG_STATE,
* they can set the appropriate CONFIG flags in ${target}-softmmu.mak
*
* Once a guest architecture has been converted to the new primitives
* there is one remaining limitation to check:
* - The guest can't be oversized (e.g. 64 bit guest on 32 bit host)
* there are two remaining limitations to check.
*
* - The guest can't be oversized (e.g. 64 bit guest on 32 bit host)
* - The host must have a stronger memory order than the guest
*
* It may be possible in future to support strong guests on weak hosts
* but that will require tagging all load/stores in a guest with their
* implicit memory order requirements which would likely slow things
* down a lot.
*/
static bool check_tcg_memory_orders_compatible(void)
{
#if defined(TCG_GUEST_DEFAULT_MO) && defined(TCG_TARGET_DEFAULT_MO)
return (TCG_GUEST_DEFAULT_MO & ~TCG_TARGET_DEFAULT_MO) == 0;
#else
return false;
#endif
}
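The check above is a simple subset test on barrier-flag masks: the host is compatible when it already enforces every ordering the guest relies on. Sketched with hypothetical flag values (the real TCG_MO_* constants live in the tcg headers):

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for TCG_MO_* barrier bits. */
    enum { MO_LD_LD = 1, MO_LD_ST = 2, MO_ST_LD = 4, MO_ST_ST = 8 };

    static bool mo_compatible(int guest_mo, int host_mo)
    {
        /* Every ordering bit the guest needs must be provided. */
        return (guest_mo & ~host_mo) == 0;
    }

    int main(void)
    {
        int tso_guest = MO_LD_LD | MO_LD_ST | MO_ST_ST;  /* x86-like */
        int weak_host = 0;                               /* Arm-like  */

        printf("%s\n", mo_compatible(tso_guest, weak_host)
               ? "compatible" : "guest needs extra barriers");
        return 0;
    }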
static bool default_mttcg_enabled(void)
{
if (icount_enabled() || TCG_OVERSIZED_GUEST) {
return false;
}
} else {
#ifdef TARGET_SUPPORTS_MTTCG
# ifndef TCG_GUEST_DEFAULT_MO
# error "TARGET_SUPPORTS_MTTCG without TCG_GUEST_DEFAULT_MO"
# endif
return true;
return check_tcg_memory_orders_compatible();
#else
return false;
return false;
#endif
}
}
static void tcg_accel_instance_init(Object *obj)
@@ -121,7 +135,7 @@ static int tcg_init_machine(MachineState *ms)
* There's no guest base to take into account, so go ahead and
* initialize the prologue now.
*/
tcg_prologue_init();
tcg_prologue_init(tcg_ctx);
#endif
return 0;
@@ -148,6 +162,11 @@ static void tcg_set_thread(Object *obj, const char *value, Error **errp)
warn_report("Guest not yet converted to MTTCG - "
"you may get unexpected results");
#endif
if (!check_tcg_memory_orders_compatible()) {
warn_report("Guest expects a stronger memory ordering "
"than the host provides");
error_printf("This may cause strange/hard to debug errors\n");
}
s->mttcg_enabled = true;
}
} else if (strcmp(value, "single") == 0) {
@@ -227,8 +246,6 @@ static void tcg_accel_class_init(ObjectClass *oc, void *data)
AccelClass *ac = ACCEL_CLASS(oc);
ac->name = "tcg";
ac->init_machine = tcg_init_machine;
ac->cpu_common_realize = tcg_exec_realizefn;
ac->cpu_common_unrealize = tcg_exec_unrealizefn;
ac->allowed = &tcg_allowed;
ac->gdbstub_supported_sstep_flags = tcg_gdbstub_supported_sstep_flags;

@@ -1042,32 +1042,6 @@ DO_CMP2(64)
#undef DO_CMP1
#undef DO_CMP2
#define DO_CMP1(NAME, TYPE, OP) \
void HELPER(NAME)(void *d, void *a, uint64_t b64, uint32_t desc) \
{ \
intptr_t oprsz = simd_oprsz(desc); \
TYPE inv = simd_data(desc), b = b64; \
for (intptr_t i = 0; i < oprsz; i += sizeof(TYPE)) { \
*(TYPE *)(d + i) = -((*(TYPE *)(a + i) OP b) ^ inv); \
} \
clear_high(d, oprsz, desc); \
}
#define DO_CMP2(SZ) \
DO_CMP1(gvec_eqs##SZ, uint##SZ##_t, ==) \
DO_CMP1(gvec_lts##SZ, int##SZ##_t, <) \
DO_CMP1(gvec_les##SZ, int##SZ##_t, <=) \
DO_CMP1(gvec_ltus##SZ, uint##SZ##_t, <) \
DO_CMP1(gvec_leus##SZ, uint##SZ##_t, <=)
DO_CMP2(8)
DO_CMP2(16)
DO_CMP2(32)
DO_CMP2(64)
#undef DO_CMP1
#undef DO_CMP2
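The DO_CMP1 body removed above relies on a branch-free trick: the comparison yields 0 or 1, the XOR with @inv optionally negates the predicate, and unary minus widens the result into an all-zeroes or all-ones element mask. Demonstrated standalone (not QEMU code) for the 32-bit equality case:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t cmp_mask_eq32(uint32_t x, uint32_t b, uint32_t inv)
    {
        return -((x == b) ^ inv);
    }

    int main(void)
    {
        printf("%08x\n", cmp_mask_eq32(5, 5, 0));  /* ffffffff */
        printf("%08x\n", cmp_mask_eq32(5, 7, 0));  /* 00000000 */
        printf("%08x\n", cmp_mask_eq32(5, 5, 1));  /* 00000000: inverted */
        return 0;
    }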
void HELPER(gvec_ssadd8)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);

@@ -58,7 +58,7 @@ DEF_HELPER_FLAGS_5(atomic_cmpxchgq_be, TCG_CALL_NO_WG,
DEF_HELPER_FLAGS_5(atomic_cmpxchgq_le, TCG_CALL_NO_WG,
i64, env, i64, i64, i64, i32)
#endif
#if HAVE_CMPXCHG128
#ifdef CONFIG_CMPXCHG128
DEF_HELPER_FLAGS_5(atomic_cmpxchgo_be, TCG_CALL_NO_WG,
i128, env, i64, i128, i128, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgo_le, TCG_CALL_NO_WG,
@@ -297,29 +297,4 @@ DEF_HELPER_FLAGS_4(gvec_leu16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_leu32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_leu64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_eqs8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_eqs16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_eqs32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_eqs64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_lts8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_lts16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_lts32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_lts64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_les8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_les16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_les32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_les64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_ltus8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_ltus16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_ltus32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_ltus64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_leus8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_leus16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_leus32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_leus64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_5(gvec_bitsel, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)

@@ -61,8 +61,7 @@
#include "tb-jmp-cache.h"
#include "tb-hash.h"
#include "tb-context.h"
#include "internal-common.h"
#include "internal-target.h"
#include "internal.h"
#include "perf.h"
#include "tcg/insn-start-words.h"
@@ -203,6 +202,10 @@ void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
uintptr_t host_pc)
{
uint64_t data[TARGET_INSN_START_WORDS];
#ifdef CONFIG_PROFILER
TCGProfile *prof = &tcg_ctx->prof;
int64_t ti = profile_getclock();
#endif
int insns_left = cpu_unwind_data_from_tb(tb, host_pc, data);
if (insns_left < 0) {
@@ -215,10 +218,16 @@ void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
* Reset the cycle counter to the start of the block and
 * shift it to the number of actually executed instructions.
*/
cpu->neg.icount_decr.u16.low += insns_left;
cpu_neg(cpu)->icount_decr.u16.low += insns_left;
}
cpu->cc->tcg_ops->restore_state_to_opc(cpu, tb, data);
#ifdef CONFIG_PROFILER
qatomic_set(&prof->restore_time,
prof->restore_time + profile_getclock() - ti);
qatomic_set(&prof->restore_count, prof->restore_count + 1);
#endif
}
bool cpu_restore_state(CPUState *cpu, uintptr_t host_pc)
@@ -265,7 +274,7 @@ void page_init(void)
* Return the size of the generated code, or negative on error.
*/
static int setjmp_gen_code(CPUArchState *env, TranslationBlock *tb,
vaddr pc, void *host_pc,
target_ulong pc, void *host_pc,
int *max_insns, int64_t *ti)
{
int ret = sigsetjmp(tcg_ctx->jmp_trans, 0);
@@ -281,19 +290,29 @@ static int setjmp_gen_code(CPUArchState *env, TranslationBlock *tb,
tcg_ctx->cpu = NULL;
*max_insns = tb->icount;
#ifdef CONFIG_PROFILER
qatomic_set(&tcg_ctx->prof.tb_count, tcg_ctx->prof.tb_count + 1);
qatomic_set(&tcg_ctx->prof.interm_time,
tcg_ctx->prof.interm_time + profile_getclock() - *ti);
*ti = profile_getclock();
#endif
return tcg_gen_code(tcg_ctx, tb, pc);
}
/* Called with mmap_lock held for user mode emulation. */
TranslationBlock *tb_gen_code(CPUState *cpu,
vaddr pc, uint64_t cs_base,
target_ulong pc, target_ulong cs_base,
uint32_t flags, int cflags)
{
CPUArchState *env = cpu_env(cpu);
CPUArchState *env = cpu->env_ptr;
TranslationBlock *tb, *existing_tb;
tb_page_addr_t phys_pc, phys_p2;
tb_page_addr_t phys_pc;
tcg_insn_unit *gen_code_buf;
int gen_code_size, search_size, max_insns;
#ifdef CONFIG_PROFILER
TCGProfile *prof = &tcg_ctx->prof;
#endif
int64_t ti;
void *host_pc;
@@ -314,7 +333,6 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
QEMU_BUILD_BUG_ON(CF_COUNT_MASK + 1 != TCG_MAX_INSNS);
buffer_overflow:
assert_no_pages_locked();
tb = tcg_tb_alloc(tcg_ctx);
if (unlikely(!tb)) {
/* flush must be done */
@@ -335,16 +353,14 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
tb->cflags = cflags;
tb_set_page_addr0(tb, phys_pc);
tb_set_page_addr1(tb, -1);
if (phys_pc != -1) {
tb_lock_page0(phys_pc);
}
tcg_ctx->gen_tb = tb;
tcg_ctx->addr_type = TARGET_LONG_BITS == 32 ? TCG_TYPE_I32 : TCG_TYPE_I64;
#ifdef CONFIG_SOFTMMU
tcg_ctx->page_bits = TARGET_PAGE_BITS;
tcg_ctx->page_mask = TARGET_PAGE_MASK;
tcg_ctx->tlb_dyn_max_bits = CPU_TLB_DYN_MAX_BITS;
tcg_ctx->tlb_fast_offset =
(int)offsetof(ArchCPU, neg.tlb.f) - (int)offsetof(ArchCPU, env);
#endif
tcg_ctx->insn_start_words = TARGET_INSN_START_WORDS;
#ifdef TCG_GUEST_DEFAULT_MO
@@ -353,7 +369,14 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
tcg_ctx->guest_mo = TCG_MO_ALL;
#endif
restart_translate:
tb_overflow:
#ifdef CONFIG_PROFILER
/* includes aborted translations because of exceptions */
qatomic_set(&prof->tb_count1, prof->tb_count1 + 1);
ti = profile_getclock();
#endif
trace_translate_block(tb, pc, tb->tc.ptr);
gen_code_size = setjmp_gen_code(env, tb, pc, host_pc, &max_insns, &ti);
@@ -372,8 +395,6 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
qemu_log_mask(CPU_LOG_TB_OP | CPU_LOG_TB_OP_OPT,
"Restarting code generation for "
"code_gen_buffer overflow\n");
tb_unlock_pages(tb);
tcg_ctx->gen_tb = NULL;
goto buffer_overflow;
case -2:
@@ -392,39 +413,14 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
"Restarting code generation with "
"smaller translation block (max %d insns)\n",
max_insns);
/*
* The half-sized TB may not cross pages.
* TODO: Fix all targets that cross pages except with
* the first insn, at which point this can't be reached.
*/
phys_p2 = tb_page_addr1(tb);
if (unlikely(phys_p2 != -1)) {
tb_unlock_page1(phys_pc, phys_p2);
tb_set_page_addr1(tb, -1);
}
goto restart_translate;
case -3:
/*
* We had a page lock ordering problem. In order to avoid
* deadlock we had to drop the lock on page0, which means
* that everything we translated so far is compromised.
* Restart with locks held on both pages.
*/
qemu_log_mask(CPU_LOG_TB_OP | CPU_LOG_TB_OP_OPT,
"Restarting code generation with re-locked pages");
goto restart_translate;
goto tb_overflow;
default:
g_assert_not_reached();
}
}
tcg_ctx->gen_tb = NULL;
search_size = encode_search(tb, (void *)gen_code_buf + gen_code_size);
if (unlikely(search_size < 0)) {
tb_unlock_pages(tb);
goto buffer_overflow;
}
tb->tc.size = gen_code_size;
@@ -435,6 +431,13 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
*/
perf_report_code(pc, tb, tcg_splitwx_to_rx(gen_code_buf));
#ifdef CONFIG_PROFILER
qatomic_set(&prof->code_time, prof->code_time + profile_getclock() - ti);
qatomic_set(&prof->code_in_len, prof->code_in_len + tb->size);
qatomic_set(&prof->code_out_len, prof->code_out_len + gen_code_size);
qatomic_set(&prof->search_out_len, prof->search_out_len + search_size);
#endif
if (qemu_loglevel_mask(CPU_LOG_TB_OUT_ASM) &&
qemu_log_in_addr_range(pc)) {
FILE *logfile = qemu_log_trylock();
@@ -534,7 +537,6 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
* before attempting to link to other TBs or add to the lookup table.
*/
if (tb_page_addr0(tb) == -1) {
assert_no_pages_locked();
return tb;
}
@@ -549,9 +551,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
* No explicit memory barrier is required -- tb_link_page() makes the
* TB visible in a consistent state.
*/
existing_tb = tb_link_page(tb);
assert_no_pages_locked();
existing_tb = tb_link_page(tb, tb_page_addr0(tb), tb_page_addr1(tb));
/* if the TB already exists, discard what we just translated */
if (unlikely(existing_tb != tb)) {
uintptr_t orig_aligned = (uintptr_t)gen_code_buf;
@@ -579,9 +579,8 @@ void tb_check_watchpoint(CPUState *cpu, uintptr_t retaddr)
} else {
/* The exception probably happened in a helper. The CPU state should
have been saved before calling it. Fetch the PC from there. */
CPUArchState *env = cpu_env(cpu);
vaddr pc;
uint64_t cs_base;
CPUArchState *env = cpu->env_ptr;
target_ulong pc, cs_base;
tb_page_addr_t addr;
uint32_t flags;
@@ -622,7 +621,7 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
cc = CPU_GET_CLASS(cpu);
if (cc->tcg_ops->io_recompile_replay_branch &&
cc->tcg_ops->io_recompile_replay_branch(cpu, tb)) {
cpu->neg.icount_decr.u16.low++;
cpu_neg(cpu)->icount_decr.u16.low++;
n = 2;
}
@@ -635,23 +634,150 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
cpu->cflags_next_tb = curr_cflags(cpu) | CF_MEMI_ONLY | CF_LAST_IO | n;
if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
vaddr pc = log_pc(cpu, tb);
target_ulong pc = log_pc(cpu, tb);
if (qemu_log_in_addr_range(pc)) {
qemu_log("cpu_io_recompile: rewound execution of TB to %016"
VADDR_PRIx "\n", pc);
qemu_log("cpu_io_recompile: rewound execution of TB to "
TARGET_FMT_lx "\n", pc);
}
}
cpu_loop_exit_noexc(cpu);
}
static void print_qht_statistics(struct qht_stats hst, GString *buf)
{
uint32_t hgram_opts;
size_t hgram_bins;
char *hgram;
if (!hst.head_buckets) {
return;
}
g_string_append_printf(buf, "TB hash buckets %zu/%zu "
"(%0.2f%% head buckets used)\n",
hst.used_head_buckets, hst.head_buckets,
(double)hst.used_head_buckets /
hst.head_buckets * 100);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_opts |= QDIST_PR_100X | QDIST_PR_PERCENT;
if (qdist_xmax(&hst.occupancy) - qdist_xmin(&hst.occupancy) == 1) {
hgram_opts |= QDIST_PR_NODECIMAL;
}
hgram = qdist_pr(&hst.occupancy, 10, hgram_opts);
g_string_append_printf(buf, "TB hash occupancy %0.2f%% avg chain occ. "
"Histogram: %s\n",
qdist_avg(&hst.occupancy) * 100, hgram);
g_free(hgram);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_bins = qdist_xmax(&hst.chain) - qdist_xmin(&hst.chain);
if (hgram_bins > 10) {
hgram_bins = 10;
} else {
hgram_bins = 0;
hgram_opts |= QDIST_PR_NODECIMAL | QDIST_PR_NOBINRANGE;
}
hgram = qdist_pr(&hst.chain, hgram_bins, hgram_opts);
g_string_append_printf(buf, "TB hash avg chain %0.3f buckets. "
"Histogram: %s\n",
qdist_avg(&hst.chain), hgram);
g_free(hgram);
}
struct tb_tree_stats {
size_t nb_tbs;
size_t host_size;
size_t target_size;
size_t max_target_size;
size_t direct_jmp_count;
size_t direct_jmp2_count;
size_t cross_page;
};
static gboolean tb_tree_stats_iter(gpointer key, gpointer value, gpointer data)
{
const TranslationBlock *tb = value;
struct tb_tree_stats *tst = data;
tst->nb_tbs++;
tst->host_size += tb->tc.size;
tst->target_size += tb->size;
if (tb->size > tst->max_target_size) {
tst->max_target_size = tb->size;
}
if (tb_page_addr1(tb) != -1) {
tst->cross_page++;
}
if (tb->jmp_reset_offset[0] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp_count++;
if (tb->jmp_reset_offset[1] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp2_count++;
}
}
return false;
}
void dump_exec_info(GString *buf)
{
struct tb_tree_stats tst = {};
struct qht_stats hst;
size_t nb_tbs, flush_full, flush_part, flush_elide;
tcg_tb_foreach(tb_tree_stats_iter, &tst);
nb_tbs = tst.nb_tbs;
/* XXX: avoid using doubles ? */
g_string_append_printf(buf, "Translation buffer state:\n");
/*
* Report total code size including the padding and TB structs;
* otherwise users might think "-accel tcg,tb-size" is not honoured.
* For avg host size we use the precise numbers from tb_tree_stats though.
*/
g_string_append_printf(buf, "gen code size %zu/%zu\n",
tcg_code_size(), tcg_code_capacity());
g_string_append_printf(buf, "TB count %zu\n", nb_tbs);
g_string_append_printf(buf, "TB avg target size %zu max=%zu bytes\n",
nb_tbs ? tst.target_size / nb_tbs : 0,
tst.max_target_size);
g_string_append_printf(buf, "TB avg host size %zu bytes "
"(expansion ratio: %0.1f)\n",
nb_tbs ? tst.host_size / nb_tbs : 0,
tst.target_size ?
(double)tst.host_size / tst.target_size : 0);
g_string_append_printf(buf, "cross page TB count %zu (%zu%%)\n",
tst.cross_page,
nb_tbs ? (tst.cross_page * 100) / nb_tbs : 0);
g_string_append_printf(buf, "direct jump count %zu (%zu%%) "
"(2 jumps=%zu %zu%%)\n",
tst.direct_jmp_count,
nb_tbs ? (tst.direct_jmp_count * 100) / nb_tbs : 0,
tst.direct_jmp2_count,
nb_tbs ? (tst.direct_jmp2_count * 100) / nb_tbs : 0);
qht_statistics_init(&tb_ctx.htable, &hst);
print_qht_statistics(hst, buf);
qht_statistics_destroy(&hst);
g_string_append_printf(buf, "\nStatistics:\n");
g_string_append_printf(buf, "TB flush count %u\n",
qatomic_read(&tb_ctx.tb_flush_count));
g_string_append_printf(buf, "TB invalidate count %u\n",
qatomic_read(&tb_ctx.tb_phys_invalidate_count));
tlb_flush_counts(&flush_full, &flush_part, &flush_elide);
g_string_append_printf(buf, "TLB full flushes %zu\n", flush_full);
g_string_append_printf(buf, "TLB partial flushes %zu\n", flush_part);
g_string_append_printf(buf, "TLB elided flushes %zu\n", flush_elide);
tcg_dump_info(buf);
}
#else /* CONFIG_USER_ONLY */
void cpu_interrupt(CPUState *cpu, int mask)
{
g_assert(qemu_mutex_iothread_locked());
cpu->interrupt_request |= mask;
qatomic_set(&cpu->neg.icount_decr.u16.high, -1);
qatomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1);
}
#endif /* CONFIG_USER_ONLY */
@@ -673,3 +799,11 @@ void tcg_flush_jmp_cache(CPUState *cpu)
qatomic_set(&jc->array[i].tb, NULL);
}
}
/* This is a wrapper for common code that can not use CONFIG_SOFTMMU */
void tcg_flush_softmmu_tlb(CPUState *cs)
{
#ifdef CONFIG_SOFTMMU
tlb_flush(cs);
#endif
}


@@ -12,25 +12,30 @@
#include "qemu/error-report.h"
#include "exec/exec-all.h"
#include "exec/translator.h"
#include "exec/translate-all.h"
#include "exec/plugin-gen.h"
#include "tcg/tcg-op-common.h"
#include "internal-target.h"
static void set_can_do_io(DisasContextBase *db, bool val)
static void gen_io_start(void)
{
if (db->saved_can_do_io != val) {
db->saved_can_do_io = val;
QEMU_BUILD_BUG_ON(sizeof_field(CPUState, neg.can_do_io) != 1);
tcg_gen_st8_i32(tcg_constant_i32(val), tcg_env,
offsetof(ArchCPU, parent_obj.neg.can_do_io) -
offsetof(ArchCPU, env));
}
tcg_gen_st_i32(tcg_constant_i32(1), cpu_env,
offsetof(ArchCPU, parent_obj.can_do_io) -
offsetof(ArchCPU, env));
}
bool translator_io_start(DisasContextBase *db)
{
set_can_do_io(db, true);
uint32_t cflags = tb_cflags(db->tb);
if (!(cflags & CF_USE_ICOUNT)) {
return false;
}
if (db->num_insns == db->max_insns && (cflags & CF_LAST_IO)) {
/* Already started in translator_loop. */
return true;
}
gen_io_start();
/*
* Ensure that this instruction will be the last in the TB.
@@ -42,17 +47,14 @@ bool translator_io_start(DisasContextBase *db)
return true;
}
static TCGOp *gen_tb_start(DisasContextBase *db, uint32_t cflags)
static TCGOp *gen_tb_start(uint32_t cflags)
{
TCGv_i32 count = NULL;
TCGv_i32 count = tcg_temp_new_i32();
TCGOp *icount_start_insn = NULL;
if ((cflags & CF_USE_ICOUNT) || !(cflags & CF_NOIRQ)) {
count = tcg_temp_new_i32();
tcg_gen_ld_i32(count, tcg_env,
offsetof(ArchCPU, parent_obj.neg.icount_decr.u32)
- offsetof(ArchCPU, env));
}
tcg_gen_ld_i32(count, cpu_env,
offsetof(ArchCPU, neg.icount_decr.u32) -
offsetof(ArchCPU, env));
if (cflags & CF_USE_ICOUNT) {
/*
@@ -79,18 +81,21 @@ static TCGOp *gen_tb_start(DisasContextBase *db, uint32_t cflags)
}
if (cflags & CF_USE_ICOUNT) {
tcg_gen_st16_i32(count, tcg_env,
offsetof(ArchCPU, parent_obj.neg.icount_decr.u16.low)
- offsetof(ArchCPU, env));
tcg_gen_st16_i32(count, cpu_env,
offsetof(ArchCPU, neg.icount_decr.u16.low) -
offsetof(ArchCPU, env));
/*
* cpu->can_do_io is cleared automatically here at the beginning of
* each translation block. The cost is minimal and only paid for
* -icount, plus it would be very easy to forget doing it in the
* translator. Doing it here means we don't need a gen_io_end() to
* go with gen_io_start().
*/
tcg_gen_st_i32(tcg_constant_i32(0), cpu_env,
offsetof(ArchCPU, parent_obj.can_do_io) -
offsetof(ArchCPU, env));
}
/*
* cpu->neg.can_do_io is set automatically here at the beginning of
* each translation block. The cost is minimal, plus it would be
* very easy to forget doing it in the translator.
*/
set_can_do_io(db, db->max_insns == 1 && (cflags & CF_LAST_IO));
return icount_start_insn;
}
@@ -112,7 +117,7 @@ static void gen_tb_end(const TranslationBlock *tb, uint32_t cflags,
}
}
bool translator_use_goto_tb(DisasContextBase *db, vaddr dest)
bool translator_use_goto_tb(DisasContextBase *db, target_ulong dest)
{
/* Suppress goto_tb if requested. */
if (tb_cflags(db->tb) & CF_NO_GOTO_TB) {
@@ -124,8 +129,8 @@ bool translator_use_goto_tb(DisasContextBase *db, vaddr dest)
}
void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
vaddr pc, void *host_pc, const TranslatorOps *ops,
DisasContextBase *db)
target_ulong pc, void *host_pc,
const TranslatorOps *ops, DisasContextBase *db)
{
uint32_t cflags = tb_cflags(tb);
TCGOp *icount_start_insn;
@@ -139,26 +144,22 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
db->num_insns = 0;
db->max_insns = *max_insns;
db->singlestep_enabled = cflags & CF_SINGLE_STEP;
db->saved_can_do_io = -1;
db->host_addr[0] = host_pc;
db->host_addr[1] = NULL;
#ifdef CONFIG_USER_ONLY
page_protect(pc);
#endif
ops->init_disas_context(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
/* Start translating. */
icount_start_insn = gen_tb_start(db, cflags);
icount_start_insn = gen_tb_start(cflags);
ops->tb_start(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
if (cflags & CF_MEMI_ONLY) {
/* We should only see CF_MEMI_ONLY for io_recompile. */
assert(cflags & CF_LAST_IO);
plugin_enabled = plugin_gen_tb_start(cpu, db, true);
} else {
plugin_enabled = plugin_gen_tb_start(cpu, db, false);
}
db->plugin_enabled = plugin_enabled;
plugin_enabled = plugin_gen_tb_start(cpu, db, cflags & CF_MEMI_ONLY);
while (true) {
*max_insns = ++db->num_insns;
@@ -175,9 +176,13 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
the next instruction. */
if (db->num_insns == db->max_insns && (cflags & CF_LAST_IO)) {
/* Accept I/O on the last instruction. */
set_can_do_io(db, true);
gen_io_start();
ops->translate_insn(db, cpu);
} else {
/* we should only see CF_MEMI_ONLY for io_recompile */
tcg_debug_assert(!(cflags & CF_MEMI_ONLY));
ops->translate_insn(db, cpu);
}
ops->translate_insn(db, cpu);
/*
* We can't instrument after instructions that change control
@@ -210,7 +215,7 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
gen_tb_end(tb, cflags, icount_start_insn, db->num_insns);
if (plugin_enabled) {
plugin_gen_tb_end(cpu, db->num_insns);
plugin_gen_tb_end(cpu);
}
/* The disas_log hook may use these values rather than recompute. */
@@ -230,10 +235,10 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
}
static void *translator_access(CPUArchState *env, DisasContextBase *db,
vaddr pc, size_t len)
target_ulong pc, size_t len)
{
void *host;
vaddr base, end;
target_ulong base, end;
TranslationBlock *tb;
tb = db->tb;
@@ -251,36 +256,22 @@ static void *translator_access(CPUArchState *env, DisasContextBase *db,
host = db->host_addr[1];
base = TARGET_PAGE_ALIGN(db->pc_first);
if (host == NULL) {
tb_page_addr_t page0, old_page1, new_page1;
new_page1 = get_page_addr_code_hostp(env, base, &db->host_addr[1]);
tb_page_addr_t phys_page =
get_page_addr_code_hostp(env, base, &db->host_addr[1]);
/*
* If the second page is MMIO, treat as if the first page
* was MMIO as well, so that we do not cache the TB.
*/
if (unlikely(new_page1 == -1)) {
tb_unlock_pages(tb);
if (unlikely(phys_page == -1)) {
tb_set_page_addr0(tb, -1);
return NULL;
}
/*
* If this is not the first time around, and page1 matches,
* then we already have the page locked. Alternately, we're
* not doing anything to prevent the PTE from changing, so
* we might wind up with a different page, requiring us to
* re-do the locking.
*/
old_page1 = tb_page_addr1(tb);
if (likely(new_page1 != old_page1)) {
page0 = tb_page_addr0(tb);
if (unlikely(old_page1 != -1)) {
tb_unlock_page1(page0, old_page1);
}
tb_set_page_addr1(tb, new_page1);
tb_lock_page1(page0, new_page1);
}
tb_set_page_addr1(tb, phys_page);
#ifdef CONFIG_USER_ONLY
page_protect(end);
#endif
host = db->host_addr[1];
}


@@ -2,6 +2,8 @@
#include "hw/core/cpu.h"
#include "exec/replay-core.h"
bool enable_cpu_pm = false;
void cpu_resume(CPUState *cpu)
{
}
@@ -14,10 +16,6 @@ void qemu_init_vcpu(CPUState *cpu)
{
}
void cpu_exec_reset_hold(CPUState *cpu)
{
}
/* User mode emulation does not support record/replay yet. */
bool replay_exception(void)


@@ -29,8 +29,7 @@
#include "qemu/atomic128.h"
#include "trace/trace-root.h"
#include "tcg/tcg-ldst.h"
#include "internal-common.h"
#include "internal-target.h"
#include "internal.h"
__thread uintptr_t helper_retaddr;
@@ -145,7 +144,7 @@ typedef struct PageFlagsNode {
static IntervalTreeRoot pageflags_root;
static PageFlagsNode *pageflags_find(target_ulong start, target_ulong last)
static PageFlagsNode *pageflags_find(target_ulong start, target_long last)
{
IntervalTreeNode *n;
@@ -154,7 +153,7 @@ static PageFlagsNode *pageflags_find(target_ulong start, target_ulong last)
}
static PageFlagsNode *pageflags_next(PageFlagsNode *p, target_ulong start,
target_ulong last)
target_long last)
{
IntervalTreeNode *n;
@@ -521,19 +520,19 @@ void page_set_flags(target_ulong start, target_ulong last, int flags)
}
}
bool page_check_range(target_ulong start, target_ulong len, int flags)
int page_check_range(target_ulong start, target_ulong len, int flags)
{
target_ulong last;
int locked; /* tri-state: =0: unlocked, +1: global, -1: local */
bool ret;
int ret;
if (len == 0) {
return true; /* trivial length */
return 0; /* trivial length */
}
last = start + len - 1;
if (last < start) {
return false; /* wrap around */
return -1; /* wrap around */
}
locked = have_mmap_lock();
@@ -552,33 +551,33 @@ bool page_check_range(target_ulong start, target_ulong len, int flags)
p = pageflags_find(start, last);
}
if (!p) {
ret = false; /* entire region invalid */
ret = -1; /* entire region invalid */
break;
}
}
if (start < p->itree.start) {
ret = false; /* initial bytes invalid */
ret = -1; /* initial bytes invalid */
break;
}
missing = flags & ~p->flags;
if (missing & ~PAGE_WRITE) {
ret = false; /* page doesn't match */
if (missing & PAGE_READ) {
ret = -1; /* page not readable */
break;
}
if (missing & PAGE_WRITE) {
if (!(p->flags & PAGE_WRITE_ORG)) {
ret = false; /* page not writable */
ret = -1; /* page not writable */
break;
}
/* Asking about writable, but has been protected: undo. */
if (!page_unprotect(start, 0)) {
ret = false;
ret = -1;
break;
}
/* TODO: page_unprotect should take a range, not a single page. */
if (last - start < TARGET_PAGE_SIZE) {
ret = true; /* ok */
ret = 0; /* ok */
break;
}
start += TARGET_PAGE_SIZE;
@@ -586,7 +585,7 @@ bool page_check_range(target_ulong start, target_ulong len, int flags)
}
if (last <= p->itree.last) {
ret = true; /* ok */
ret = 0; /* ok */
break;
}
start = p->itree.last + 1;
@@ -599,54 +598,6 @@ bool page_check_range(target_ulong start, target_ulong len, int flags)
return ret;
}
bool page_check_range_empty(target_ulong start, target_ulong last)
{
assert(last >= start);
assert_memory_lock();
return pageflags_find(start, last) == NULL;
}
target_ulong page_find_range_empty(target_ulong min, target_ulong max,
target_ulong len, target_ulong align)
{
target_ulong len_m1, align_m1;
assert(min <= max);
assert(max <= GUEST_ADDR_MAX);
assert(len != 0);
assert(is_power_of_2(align));
assert_memory_lock();
len_m1 = len - 1;
align_m1 = align - 1;
/* Iteratively narrow the search region. */
while (1) {
PageFlagsNode *p;
/* Align min and double-check there's enough space remaining. */
min = (min + align_m1) & ~align_m1;
if (min > max) {
return -1;
}
if (len_m1 > max - min) {
return -1;
}
p = pageflags_find(min, min + len_m1);
if (p == NULL) {
/* Found! */
return min;
}
if (max <= p->itree.last) {
/* Existing allocation fills the remainder of the search region. */
return -1;
}
/* Skip across existing allocation. */
min = p->itree.last + 1;
}
}
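/*
 * Illustrative sketch (editor's note, not part of this diff): the align-up
 * step above, min = (min + align_m1) & ~align_m1, rounds min up to the next
 * multiple of align. For example, with align = 0x10000 (align_m1 = 0xffff):
 *
 *     min = 0x12345  ->  (0x12345 + 0xffff) & ~0xffff  ==  0x20000
 *
 * so every candidate range [min, min + len_m1] starts aligned before the
 * interval tree is probed for an existing mapping.
 */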
void page_protect(tb_page_addr_t address)
{
PageFlagsNode *p;
@@ -770,7 +721,7 @@ int page_unprotect(target_ulong address, uintptr_t pc)
return current_tb_invalidated ? 2 : 1;
}
static int probe_access_internal(CPUArchState *env, vaddr addr,
static int probe_access_internal(CPUArchState *env, target_ulong addr,
int fault_size, MMUAccessType access_type,
bool nonfault, uintptr_t ra)
{
@@ -794,10 +745,6 @@ static int probe_access_internal(CPUArchState *env, vaddr addr,
if (guest_addr_valid_untagged(addr)) {
int page_flags = page_get_flags(addr);
if (page_flags & acc_flag) {
if ((acc_flag == PAGE_READ || acc_flag == PAGE_WRITE)
&& cpu_plugin_mem_cbs_enabled(env_cpu(env))) {
return TLB_MMIO;
}
return 0; /* success */
}
maperr = !(page_flags & PAGE_VALID);
@@ -812,7 +759,7 @@ static int probe_access_internal(CPUArchState *env, vaddr addr,
cpu_loop_exit_sigsegv(env_cpu(env), addr, access_type, maperr, ra);
}
int probe_access_flags(CPUArchState *env, vaddr addr, int size,
int probe_access_flags(CPUArchState *env, target_ulong addr, int size,
MMUAccessType access_type, int mmu_idx,
bool nonfault, void **phost, uintptr_t ra)
{
@@ -820,23 +767,23 @@ int probe_access_flags(CPUArchState *env, vaddr addr, int size,
g_assert(-(addr | TARGET_PAGE_MASK) >= size);
flags = probe_access_internal(env, addr, size, access_type, nonfault, ra);
*phost = (flags & TLB_INVALID_MASK) ? NULL : g2h(env_cpu(env), addr);
*phost = flags ? NULL : g2h(env_cpu(env), addr);
return flags;
}
void *probe_access(CPUArchState *env, vaddr addr, int size,
void *probe_access(CPUArchState *env, target_ulong addr, int size,
MMUAccessType access_type, int mmu_idx, uintptr_t ra)
{
int flags;
g_assert(-(addr | TARGET_PAGE_MASK) >= size);
flags = probe_access_internal(env, addr, size, access_type, false, ra);
g_assert((flags & ~TLB_MMIO) == 0);
g_assert(flags == 0);
return size ? g2h(env_cpu(env), addr) : NULL;
}
tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, vaddr addr,
tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
void **hostp)
{
int flags;
@@ -940,9 +887,9 @@ void *page_get_target_data(target_ulong address)
void page_reset_target_data(target_ulong start, target_ulong last) { }
#endif /* TARGET_PAGE_DATA_SIZE */
/* The system-mode versions of these helpers are in cputlb.c. */
/* The softmmu versions of these helpers are in cputlb.c. */
static void *cpu_mmu_lookup(CPUState *cpu, vaddr addr,
static void *cpu_mmu_lookup(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra, MMUAccessType type)
{
int a_bits = get_alignment_bits(mop);
@@ -950,39 +897,58 @@ static void *cpu_mmu_lookup(CPUState *cpu, vaddr addr,
/* Enforce guest required alignment. */
if (unlikely(addr & ((1 << a_bits) - 1))) {
cpu_loop_exit_sigbus(cpu, addr, type, ra);
cpu_loop_exit_sigbus(env_cpu(env), addr, type, ra);
}
ret = g2h(cpu, addr);
ret = g2h(env_cpu(env), addr);
set_helper_retaddr(ra);
return ret;
}
#include "ldst_atomicity.c.inc"
static uint8_t do_ld1_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType access_type)
static uint8_t do_ld1_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{
void *haddr;
uint8_t ret;
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, get_memop(oi), ra, access_type);
tcg_debug_assert((mop & MO_SIZE) == MO_8);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = ldub_p(haddr);
clear_helper_retaddr();
return ret;
}
static uint16_t do_ld2_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType access_type)
tcg_target_ulong helper_ldub_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return do_ld1_mmu(env, addr, get_memop(oi), ra);
}
tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return (int8_t)do_ld1_mmu(env, addr, get_memop(oi), ra);
}
uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint8_t ret = do_ld1_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static uint16_t do_ld2_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{
void *haddr;
uint16_t ret;
MemOp mop = get_memop(oi);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type);
ret = load_atom_2(cpu, ra, haddr, mop);
tcg_debug_assert((mop & MO_SIZE) == MO_16);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_2(env, ra, haddr, mop);
clear_helper_retaddr();
if (mop & MO_BSWAP) {
@@ -991,16 +957,35 @@ static uint16_t do_ld2_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
return ret;
}
static uint32_t do_ld4_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType access_type)
tcg_target_ulong helper_lduw_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return do_ld2_mmu(env, addr, get_memop(oi), ra);
}
tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return (int16_t)do_ld2_mmu(env, addr, get_memop(oi), ra);
}
uint16_t cpu_ldw_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint16_t ret = do_ld2_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static uint32_t do_ld4_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{
void *haddr;
uint32_t ret;
MemOp mop = get_memop(oi);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type);
ret = load_atom_4(cpu, ra, haddr, mop);
tcg_debug_assert((mop & MO_SIZE) == MO_32);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_4(env, ra, haddr, mop);
clear_helper_retaddr();
if (mop & MO_BSWAP) {
@@ -1009,16 +994,35 @@ static uint32_t do_ld4_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
return ret;
}
static uint64_t do_ld8_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType access_type)
tcg_target_ulong helper_ldul_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return do_ld4_mmu(env, addr, get_memop(oi), ra);
}
tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return (int32_t)do_ld4_mmu(env, addr, get_memop(oi), ra);
}
uint32_t cpu_ldl_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint32_t ret = do_ld4_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static uint64_t do_ld8_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{
void *haddr;
uint64_t ret;
MemOp mop = get_memop(oi);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type);
ret = load_atom_8(cpu, ra, haddr, mop);
tcg_debug_assert((mop & MO_SIZE) == MO_64);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_8(env, ra, haddr, mop);
clear_helper_retaddr();
if (mop & MO_BSWAP) {
@@ -1027,17 +1031,29 @@ static uint64_t do_ld8_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
return ret;
}
static Int128 do_ld16_mmu(CPUState *cpu, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
uint64_t helper_ldq_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return do_ld8_mmu(env, addr, get_memop(oi), ra);
}
uint64_t cpu_ldq_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint64_t ret = do_ld8_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static Int128 do_ld16_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{
void *haddr;
Int128 ret;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_128);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_16(cpu, ra, haddr, mop);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_16(env, ra, haddr, mop);
clear_helper_retaddr();
if (mop & MO_BSWAP) {
@@ -1046,81 +1062,166 @@ static Int128 do_ld16_mmu(CPUState *cpu, abi_ptr addr,
return ret;
}
static void do_st1_mmu(CPUState *cpu, vaddr addr, uint8_t val,
Int128 helper_ld16_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return do_ld16_mmu(env, addr, get_memop(oi), ra);
}
Int128 helper_ld_i128(CPUArchState *env, uint64_t addr, MemOpIdx oi)
{
return helper_ld16_mmu(env, addr, oi, GETPC());
}
Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
Int128 ret = do_ld16_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static void do_st1_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
MemOp mop, uintptr_t ra)
{
void *haddr;
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, get_memop(oi), ra, MMU_DATA_STORE);
tcg_debug_assert((mop & MO_SIZE) == MO_8);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
stb_p(haddr, val);
clear_helper_retaddr();
}
static void do_st2_mmu(CPUState *cpu, vaddr addr, uint16_t val,
MemOpIdx oi, uintptr_t ra)
void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st1_mmu(env, addr, val, get_memop(oi), ra);
}
void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st1_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
static void do_st2_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
MemOp mop, uintptr_t ra)
{
void *haddr;
MemOp mop = get_memop(oi);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
tcg_debug_assert((mop & MO_SIZE) == MO_16);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) {
val = bswap16(val);
}
store_atom_2(cpu, ra, haddr, mop, val);
store_atom_2(env, ra, haddr, mop, val);
clear_helper_retaddr();
}
static void do_st4_mmu(CPUState *cpu, vaddr addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
void helper_stw_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st2_mmu(env, addr, val, get_memop(oi), ra);
}
void cpu_stw_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st2_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
static void do_st4_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
MemOp mop, uintptr_t ra)
{
void *haddr;
MemOp mop = get_memop(oi);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
tcg_debug_assert((mop & MO_SIZE) == MO_32);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) {
val = bswap32(val);
}
store_atom_4(cpu, ra, haddr, mop, val);
store_atom_4(env, ra, haddr, mop, val);
clear_helper_retaddr();
}
static void do_st8_mmu(CPUState *cpu, vaddr addr, uint64_t val,
MemOpIdx oi, uintptr_t ra)
void helper_stl_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st4_mmu(env, addr, val, get_memop(oi), ra);
}
void cpu_stl_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st4_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
static void do_st8_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
MemOp mop, uintptr_t ra)
{
void *haddr;
MemOp mop = get_memop(oi);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
tcg_debug_assert((mop & MO_SIZE) == MO_64);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) {
val = bswap64(val);
}
store_atom_8(cpu, ra, haddr, mop, val);
store_atom_8(env, ra, haddr, mop, val);
clear_helper_retaddr();
}
static void do_st16_mmu(CPUState *cpu, vaddr addr, Int128 val,
MemOpIdx oi, uintptr_t ra)
void helper_stq_mmu(CPUArchState *env, uint64_t addr, uint64_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st8_mmu(env, addr, val, get_memop(oi), ra);
}
void cpu_stq_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st8_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
static void do_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val,
MemOp mop, uintptr_t ra)
{
void *haddr;
MemOp mop = get_memop(oi);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
tcg_debug_assert((mop & MO_SIZE) == MO_128);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) {
val = bswap128(val);
}
store_atom_16(cpu, ra, haddr, mop, val);
store_atom_16(env, ra, haddr, mop, val);
clear_helper_retaddr();
}
void helper_st16_mmu(CPUArchState *env, uint64_t addr, Int128 val,
MemOpIdx oi, uintptr_t ra)
{
do_st16_mmu(env, addr, val, get_memop(oi), ra);
}
void helper_st_i128(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi)
{
helper_st16_mmu(env, addr, val, oi, GETPC());
}
void cpu_st16_mmu(CPUArchState *env, abi_ptr addr,
Int128 val, MemOpIdx oi, uintptr_t ra)
{
do_st16_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr ptr)
{
uint32_t ret;
@@ -1167,7 +1268,7 @@ uint8_t cpu_ldb_code_mmu(CPUArchState *env, abi_ptr addr,
void *haddr;
uint8_t ret;
haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_INST_FETCH);
haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_INST_FETCH);
ret = ldub_p(haddr);
clear_helper_retaddr();
return ret;
@@ -1179,7 +1280,7 @@ uint16_t cpu_ldw_code_mmu(CPUArchState *env, abi_ptr addr,
void *haddr;
uint16_t ret;
haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_INST_FETCH);
haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_INST_FETCH);
ret = lduw_p(haddr);
clear_helper_retaddr();
if (get_memop(oi) & MO_BSWAP) {
@@ -1194,7 +1295,7 @@ uint32_t cpu_ldl_code_mmu(CPUArchState *env, abi_ptr addr,
void *haddr;
uint32_t ret;
haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_INST_FETCH);
haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_INST_FETCH);
ret = ldl_p(haddr);
clear_helper_retaddr();
if (get_memop(oi) & MO_BSWAP) {
@@ -1209,7 +1310,7 @@ uint64_t cpu_ldq_code_mmu(CPUArchState *env, abi_ptr addr,
void *haddr;
uint64_t ret;
haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD);
ret = ldq_p(haddr);
clear_helper_retaddr();
if (get_memop(oi) & MO_BSWAP) {
@@ -1223,8 +1324,8 @@ uint64_t cpu_ldq_code_mmu(CPUArchState *env, abi_ptr addr,
/*
* Do not allow unaligned operations to proceed. Return the host address.
*/
static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
int size, uintptr_t retaddr)
static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
MemOpIdx oi, int size, uintptr_t retaddr)
{
MemOp mop = get_memop(oi);
int a_bits = get_alignment_bits(mop);
@@ -1232,15 +1333,15 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
/* Enforce guest required alignment. */
if (unlikely(addr & ((1 << a_bits) - 1))) {
cpu_loop_exit_sigbus(cpu, addr, MMU_DATA_STORE, retaddr);
cpu_loop_exit_sigbus(env_cpu(env), addr, MMU_DATA_STORE, retaddr);
}
/* Enforce qemu required alignment. */
if (unlikely(addr & (size - 1))) {
cpu_loop_exit_atomic(cpu, retaddr);
cpu_loop_exit_atomic(env_cpu(env), retaddr);
}
ret = g2h(cpu, addr);
ret = g2h(env_cpu(env), addr);
set_helper_retaddr(retaddr);
return ret;
}
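/*
 * Illustrative sketch (editor's note, not part of this diff): the second
 * check above enforces QEMU's own requirement that atomics be naturally
 * aligned, on top of the guest-mandated alignment from the MemOp. For an
 * 8-byte atomic, size - 1 == 7, so:
 *
 *     0x1000 & 7 == 0  ->  allowed, host address returned
 *     0x1004 & 7 == 4  ->  rejected; cpu_loop_exit_atomic() restarts the
 *                          operation under the exclusive section
 */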
@@ -1270,7 +1371,7 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
#include "atomic_template.h"
#endif
#if defined(CONFIG_ATOMIC128) || HAVE_CMPXCHG128
#if defined(CONFIG_ATOMIC128) || defined(CONFIG_CMPXCHG128)
#define DATA_SIZE 16
#include "atomic_template.h"
#endif


@@ -904,7 +904,7 @@ static void alsa_init_per_direction(AudiodevAlsaPerDirectionOptions *apdo)
}
}
static void *alsa_audio_init(Audiodev *dev, Error **errp)
static void *alsa_audio_init(Audiodev *dev)
{
AudiodevAlsaOptions *aopts;
assert(dev->driver == AUDIODEV_DRIVER_ALSA);
@@ -960,6 +960,7 @@ static struct audio_driver alsa_audio_driver = {
.init = alsa_audio_init,
.fini = alsa_audio_fini,
.pcm_ops = &alsa_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (ALSAVoiceOut),


@@ -26,7 +26,6 @@
#include "audio/audio.h"
#include "monitor/hmp.h"
#include "monitor/monitor.h"
#include "qapi/error.h"
#include "qapi/qmp/qdict.h"
static QLIST_HEAD (capture_list_head, CaptureState) capture_head;
@@ -66,11 +65,10 @@ void hmp_wavcapture(Monitor *mon, const QDict *qdict)
int nchannels = qdict_get_try_int(qdict, "nchannels", 2);
const char *audiodev = qdict_get_str(qdict, "audiodev");
CaptureState *s;
Error *local_err = NULL;
AudioState *as = audio_state_by_name(audiodev, &local_err);
AudioState *as = audio_state_by_name(audiodev);
if (!as) {
error_report_err(local_err);
monitor_printf(mon, "Audiodev '%s' not found\n", audiodev);
return;
}


@@ -32,9 +32,7 @@
#include "qapi/qobject-input-visitor.h"
#include "qapi/qapi-visit-audio.h"
#include "qapi/qapi-commands-audio.h"
#include "qapi/qmp/qdict.h"
#include "qemu/cutils.h"
#include "qemu/error-report.h"
#include "qemu/log.h"
#include "qemu/module.h"
#include "qemu/help_option.h"
@@ -63,22 +61,19 @@ const char *audio_prio_list[] = {
"spice",
CONFIG_AUDIO_DRIVERS
"none",
"wav",
NULL
};
static QLIST_HEAD(, audio_driver) audio_drivers;
static AudiodevListHead audiodevs =
QSIMPLEQ_HEAD_INITIALIZER(audiodevs);
static AudiodevListHead default_audiodevs =
QSIMPLEQ_HEAD_INITIALIZER(default_audiodevs);
static AudiodevListHead audiodevs = QSIMPLEQ_HEAD_INITIALIZER(audiodevs);
void audio_driver_register(audio_driver *drv)
{
QLIST_INSERT_HEAD(&audio_drivers, drv, next);
}
static audio_driver *audio_driver_lookup(const char *name)
audio_driver *audio_driver_lookup(const char *name)
{
struct audio_driver *d;
Error *local_err = NULL;
@@ -104,7 +99,6 @@ static audio_driver *audio_driver_lookup(const char *name)
static QTAILQ_HEAD(AudioStateHead, AudioState) audio_states =
QTAILQ_HEAD_INITIALIZER(audio_states);
static AudioState *default_audio_state;
const struct mixeng_volume nominal_volume = {
.mute = 0,
@@ -117,6 +111,8 @@ const struct mixeng_volume nominal_volume = {
#endif
};
static bool legacy_config = true;
int audio_bug (const char *funcname, int cond)
{
if (cond) {
@@ -1557,11 +1553,9 @@ size_t audio_generic_read(HWVoiceIn *hw, void *buf, size_t size)
}
static int audio_driver_init(AudioState *s, struct audio_driver *drv,
Audiodev *dev, Error **errp)
bool msg, Audiodev *dev)
{
Error *local_err = NULL;
s->drv_opaque = drv->init(dev, &local_err);
s->drv_opaque = drv->init(dev);
if (s->drv_opaque) {
if (!drv->pcm_ops->get_buffer_in) {
@@ -1573,15 +1567,13 @@ static int audio_driver_init(AudioState *s, struct audio_driver *drv,
drv->pcm_ops->put_buffer_out = audio_generic_put_buffer_out;
}
audio_init_nb_voices_out(s, drv, 1);
audio_init_nb_voices_in(s, drv, 0);
audio_init_nb_voices_out(s, drv);
audio_init_nb_voices_in(s, drv);
s->drv = drv;
return 0;
} else {
if (local_err) {
error_propagate(errp, local_err);
} else {
error_setg(errp, "Could not init `%s' audio driver", drv->name);
if (msg) {
dolog("Could not init `%s' audio driver\n", drv->name);
}
return -1;
}
@@ -1661,7 +1653,6 @@ static void free_audio_state(AudioState *s)
void audio_cleanup(void)
{
default_audio_state = NULL;
while (!QTAILQ_EMPTY(&audio_states)) {
AudioState *s = QTAILQ_FIRST(&audio_states);
QTAILQ_REMOVE(&audio_states, s, list);
@@ -1688,25 +1679,19 @@ static const VMStateDescription vmstate_audio = {
}
};
void audio_create_default_audiodevs(void)
static void audio_validate_opts(Audiodev *dev, Error **errp);
static AudiodevListEntry *audiodev_find(
AudiodevListHead *head, const char *drvname)
{
for (int i = 0; audio_prio_list[i]; i++) {
if (audio_driver_lookup(audio_prio_list[i])) {
QDict *dict = qdict_new();
Audiodev *dev = NULL;
Visitor *v;
qdict_put_str(dict, "driver", audio_prio_list[i]);
qdict_put_str(dict, "id", "#default");
v = qobject_input_visitor_new_keyval(QOBJECT(dict));
qobject_unref(dict);
visit_type_Audiodev(v, NULL, &dev, &error_fatal);
visit_free(v);
audio_define_default(dev, &error_abort);
AudiodevListEntry *e;
QSIMPLEQ_FOREACH(e, head, next) {
if (strcmp(AudiodevDriver_str(e->dev->driver), drvname) == 0) {
return e;
}
}
return NULL;
}
/*
@@ -1715,16 +1700,62 @@ void audio_create_default_audiodevs(void)
* if dev == NULL => legacy implicit initialization, return the already created
* state or create a new one
*/
static AudioState *audio_init(Audiodev *dev, Error **errp)
static AudioState *audio_init(Audiodev *dev, const char *name)
{
static bool atexit_registered;
size_t i;
int done = 0;
const char *drvname;
VMChangeStateEntry *vmse;
const char *drvname = NULL;
VMChangeStateEntry *e;
AudioState *s;
struct audio_driver *driver;
/* silence gcc warning about uninitialized variable */
AudiodevListHead head = QSIMPLEQ_HEAD_INITIALIZER(head);
if (using_spice) {
/*
* When using spice, allow the spice audio driver to be picked
* as the default.
*
* Temporary hack. Using audio devices without explicit
* audiodev= property is already deprecated. Same goes for
* the -soundhw switch. Once this support gets finally
* removed we can also drop the concept of a default audio
* backend and this can go away.
*/
driver = audio_driver_lookup("spice");
if (driver) {
driver->can_be_default = 1;
}
}
if (dev) {
/* -audiodev option */
legacy_config = false;
drvname = AudiodevDriver_str(dev->driver);
} else if (!QTAILQ_EMPTY(&audio_states)) {
if (!legacy_config) {
dolog("Device %s: audiodev default parameter is deprecated, please "
"specify audiodev=%s\n", name,
QTAILQ_FIRST(&audio_states)->dev->id);
}
return QTAILQ_FIRST(&audio_states);
} else {
/* legacy implicit initialization */
head = audio_handle_legacy_opts();
/*
* In case of legacy initialization, all Audiodevs in the list will have
* the same configuration (except the driver), so it doesn't matter which
* one we choose. We need an Audiodev to set up AudioState before we can
* init a driver. Also note that dev at this point is still in the
* list.
*/
dev = QSIMPLEQ_FIRST(&head)->dev;
audio_validate_opts(dev, &error_abort);
}
s = g_new0(AudioState, 1);
s->dev = dev;
QLIST_INIT (&s->hw_head_out);
QLIST_INIT (&s->hw_head_in);
@@ -1736,36 +1767,56 @@ static AudioState *audio_init(Audiodev *dev, Error **errp)
s->ts = timer_new_ns(QEMU_CLOCK_VIRTUAL, audio_timer, s);
if (dev) {
/* -audiodev option */
s->dev = dev;
drvname = AudiodevDriver_str(dev->driver);
s->nb_hw_voices_out = audio_get_pdo_out(dev)->voices;
s->nb_hw_voices_in = audio_get_pdo_in(dev)->voices;
if (s->nb_hw_voices_out < 1) {
dolog ("Bogus number of playback voices %d, setting to 1\n",
s->nb_hw_voices_out);
s->nb_hw_voices_out = 1;
}
if (s->nb_hw_voices_in < 0) {
dolog ("Bogus number of capture voices %d, setting to 0\n",
s->nb_hw_voices_in);
s->nb_hw_voices_in = 0;
}
if (drvname) {
driver = audio_driver_lookup(drvname);
if (driver) {
done = !audio_driver_init(s, driver, dev, errp);
done = !audio_driver_init(s, driver, true, dev);
} else {
error_setg(errp, "Unknown audio driver `%s'\n", drvname);
dolog ("Unknown audio driver `%s'\n", drvname);
}
if (!done) {
goto out;
free_audio_state(s);
return NULL;
}
} else {
assert(!default_audio_state);
for (;;) {
AudiodevListEntry *e = QSIMPLEQ_FIRST(&default_audiodevs);
if (!e) {
error_setg(errp, "no default audio driver available");
goto out;
for (i = 0; audio_prio_list[i]; i++) {
AudiodevListEntry *e = audiodev_find(&head, audio_prio_list[i]);
driver = audio_driver_lookup(audio_prio_list[i]);
if (e && driver) {
s->dev = dev = e->dev;
audio_validate_opts(dev, &error_abort);
done = !audio_driver_init(s, driver, false, dev);
if (done) {
e->dev = NULL;
break;
}
}
s->dev = dev = e->dev;
drvname = AudiodevDriver_str(dev->driver);
driver = audio_driver_lookup(drvname);
if (!audio_driver_init(s, driver, dev, NULL)) {
break;
}
QSIMPLEQ_REMOVE_HEAD(&default_audiodevs, next);
}
}
audio_free_audiodev_list(&head);
if (!done) {
driver = audio_driver_lookup("none");
done = !audio_driver_init(s, driver, false, dev);
assert(done);
dolog("warning: Using timer based audio emulation\n");
}
if (dev->timer_period <= 0) {
s->period_ticks = 1;
@@ -1773,51 +1824,37 @@ static AudioState *audio_init(Audiodev *dev, Error **errp)
s->period_ticks = dev->timer_period * (int64_t)SCALE_US;
}
vmse = qemu_add_vm_change_state_handler (audio_vm_change_state_handler, s);
if (!vmse) {
e = qemu_add_vm_change_state_handler (audio_vm_change_state_handler, s);
if (!e) {
dolog ("warning: Could not register change state handler\n"
"(Audio can continue looping even after stopping the VM)\n");
}
QTAILQ_INSERT_TAIL(&audio_states, s, list);
QLIST_INIT (&s->card_head);
vmstate_register_any(NULL, &vmstate_audio, s);
vmstate_register (NULL, 0, &vmstate_audio, s);
return s;
out:
free_audio_state(s);
return NULL;
}
AudioState *audio_get_default_audio_state(Error **errp)
void audio_free_audiodev_list(AudiodevListHead *head)
{
if (!default_audio_state) {
default_audio_state = audio_init(NULL, errp);
if (!default_audio_state) {
if (!QSIMPLEQ_EMPTY(&audiodevs)) {
error_append_hint(errp, "Perhaps you wanted to use -audio or set audiodev=%s?\n",
QSIMPLEQ_FIRST(&audiodevs)->dev->id);
}
}
AudiodevListEntry *e;
while ((e = QSIMPLEQ_FIRST(head))) {
QSIMPLEQ_REMOVE_HEAD(head, next);
qapi_free_Audiodev(e->dev);
g_free(e);
}
return default_audio_state;
}
bool AUD_register_card (const char *name, QEMUSoundCard *card, Error **errp)
void AUD_register_card (const char *name, QEMUSoundCard *card)
{
if (!card->state) {
card->state = audio_get_default_audio_state(errp);
if (!card->state) {
return false;
}
card->state = audio_init(NULL, name);
}
card->name = g_strdup (name);
memset (&card->entries, 0, sizeof (card->entries));
QLIST_INSERT_HEAD(&card->state->card_head, card, entries);
return true;
}
void AUD_remove_card (QEMUSoundCard *card)
@@ -1839,8 +1876,10 @@ CaptureVoiceOut *AUD_add_capture(
struct capture_callback *cb;
if (!s) {
error_report("Capturing without setting an audiodev is not supported");
abort();
if (!legacy_config) {
dolog("Capturing without setting an audiodev is deprecated\n");
}
s = audio_init(NULL, NULL);
}
if (!audio_get_pdo_out(s->dev)->mixing_engine) {
@@ -1861,8 +1900,10 @@ CaptureVoiceOut *AUD_add_capture(
cap = audio_pcm_capture_find_specific(s, as);
if (cap) {
QLIST_INSERT_HEAD (&cap->cb_head, cb, entries);
return cap;
} else {
HWVoiceOut *hw;
CaptureVoiceOut *cap;
cap = g_malloc0(sizeof(*cap));
@@ -1896,9 +1937,8 @@ CaptureVoiceOut *AUD_add_capture(
QLIST_FOREACH(hw, &s->hw_head_out, entries) {
audio_attach_capture (hw);
}
return cap;
}
return cap;
}
void AUD_del_capture (CaptureVoiceOut *cap, void *cb_opaque)
@@ -2144,24 +2184,17 @@ void audio_define(Audiodev *dev)
QSIMPLEQ_INSERT_TAIL(&audiodevs, e, next);
}
void audio_define_default(Audiodev *dev, Error **errp)
{
AudiodevListEntry *e;
audio_validate_opts(dev, errp);
e = g_new0(AudiodevListEntry, 1);
e->dev = dev;
QSIMPLEQ_INSERT_TAIL(&default_audiodevs, e, next);
}
void audio_init_audiodevs(void)
bool audio_init_audiodevs(void)
{
AudiodevListEntry *e;
QSIMPLEQ_FOREACH(e, &audiodevs, next) {
audio_init(e->dev, &error_fatal);
if (!audio_init(e->dev, NULL)) {
return false;
}
}
return true;
}
audsettings audiodev_to_audsettings(AudiodevPerDirectionOptions *pdo)
@@ -2223,7 +2256,7 @@ int audio_buffer_bytes(AudiodevPerDirectionOptions *pdo,
audioformat_bytes_per_sample(as->fmt);
}
AudioState *audio_state_by_name(const char *name, Error **errp)
AudioState *audio_state_by_name(const char *name)
{
AudioState *s;
QTAILQ_FOREACH(s, &audio_states, list) {
@@ -2232,7 +2265,6 @@ AudioState *audio_state_by_name(const char *name, Error **errp)
return s;
}
}
error_setg(errp, "audiodev '%s' not found", name);
return NULL;
}


@@ -94,7 +94,7 @@ typedef struct QEMUAudioTimeStamp {
void AUD_vlog (const char *cap, const char *fmt, va_list ap) G_GNUC_PRINTF(2, 0);
void AUD_log (const char *cap, const char *fmt, ...) G_GNUC_PRINTF(2, 3);
bool AUD_register_card (const char *name, QEMUSoundCard *card, Error **errp);
void AUD_register_card (const char *name, QEMUSoundCard *card);
void AUD_remove_card (QEMUSoundCard *card);
CaptureVoiceOut *AUD_add_capture(
AudioState *s,
@@ -169,14 +169,12 @@ void audio_sample_from_uint64(void *samples, int pos,
uint64_t left, uint64_t right);
void audio_define(Audiodev *audio);
void audio_define_default(Audiodev *dev, Error **errp);
void audio_parse_option(const char *opt);
void audio_create_default_audiodevs(void);
void audio_init_audiodevs(void);
bool audio_init_audiodevs(void);
void audio_help(void);
void audio_legacy_help(void);
AudioState *audio_state_by_name(const char *name, Error **errp);
AudioState *audio_get_default_audio_state(Error **errp);
AudioState *audio_state_by_name(const char *name);
const char *audio_get_id(QEMUSoundCard *card);
#define DEFINE_AUDIO_PROPERTIES(_s, _f) \


@@ -140,12 +140,13 @@ typedef struct audio_driver audio_driver;
struct audio_driver {
const char *name;
const char *descr;
void *(*init) (Audiodev *, Error **);
void *(*init) (Audiodev *);
void (*fini) (void *);
#ifdef CONFIG_GIO
void (*set_dbus_server) (AudioState *s, GDBusObjectManagerServer *manager, bool p2p);
#endif
struct audio_pcm_ops *pcm_ops;
int can_be_default;
int max_voices_out;
int max_voices_in;
size_t voice_size_out;
@@ -242,6 +243,7 @@ extern const struct mixeng_volume nominal_volume;
extern const char *audio_prio_list[];
void audio_driver_register(audio_driver *drv);
audio_driver *audio_driver_lookup(const char *name);
void audio_pcm_init_info (struct audio_pcm_info *info, struct audsettings *as);
void audio_pcm_info_clear_buf (struct audio_pcm_info *info, void *buf, int len);
@@ -295,6 +297,9 @@ typedef struct AudiodevListEntry {
} AudiodevListEntry;
typedef QSIMPLEQ_HEAD(, AudiodevListEntry) AudiodevListHead;
AudiodevListHead audio_handle_legacy_opts(void);
void audio_free_audiodev_list(AudiodevListHead *head);
void audio_create_pdos(Audiodev *dev);
AudiodevPerDirectionOptions *audio_get_pdo_in(Audiodev *dev);

audio/audio_legacy.c (new file, 591 lines)

@@ -0,0 +1,591 @@
/*
* QEMU Audio subsystem: legacy configuration handling
*
* Copyright (c) 2015-2019 Zoltán Kővágó <DirtY.iCE.hu@gmail.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "audio.h"
#include "audio_int.h"
#include "qemu/cutils.h"
#include "qemu/timer.h"
#include "qapi/error.h"
#include "qapi/qapi-visit-audio.h"
#include "qapi/visitor-impl.h"
#define AUDIO_CAP "audio-legacy"
#include "audio_int.h"
static uint32_t toui32(const char *str)
{
uint64_t ret;
if (parse_uint_full(str, 10, &ret) || ret > UINT32_MAX) {
dolog("Invalid integer value `%s'\n", str);
exit(1);
}
return ret;
}
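/*
 * Illustrative sketch (editor's note, not part of this diff): toui32()
 * accepts only a complete base-10 unsigned value that fits in 32 bits;
 * anything else logs the value and exits, e.g.:
 *
 *     toui32("44100")  ->  44100
 *     toui32("4giga")  ->  "Invalid integer value `4giga'" + exit(1)
 */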
/* helper functions to convert env variables */
static void get_bool(const char *env, bool *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
*dst = toui32(val) != 0;
*has_dst = true;
}
}
static void get_int(const char *env, uint32_t *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
*dst = toui32(val);
*has_dst = true;
}
}
static void get_str(const char *env, char **dst)
{
const char *val = getenv(env);
if (val) {
g_free(*dst);
*dst = g_strdup(val);
}
}
static void get_fmt(const char *env, AudioFormat *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
size_t i;
for (i = 0; i < AudioFormat_lookup.size; ++i) {
if (strcasecmp(val, AudioFormat_lookup.array[i]) == 0) {
*dst = i;
*has_dst = true;
return;
}
}
dolog("Invalid audio format `%s'\n", val);
exit(1);
}
}
#if defined(CONFIG_AUDIO_ALSA) || defined(CONFIG_AUDIO_DSOUND)
static void get_millis_to_usecs(const char *env, uint32_t *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
*dst = toui32(val) * 1000;
*has_dst = true;
}
}
#endif
#if defined(CONFIG_AUDIO_ALSA) || defined(CONFIG_AUDIO_COREAUDIO) || \
defined(CONFIG_AUDIO_PA) || defined(CONFIG_AUDIO_SDL) || \
defined(CONFIG_AUDIO_DSOUND) || defined(CONFIG_AUDIO_OSS)
static uint32_t frames_to_usecs(uint32_t frames,
AudiodevPerDirectionOptions *pdo)
{
uint32_t freq = pdo->has_frequency ? pdo->frequency : 44100;
return (frames * 1000000 + freq / 2) / freq;
}
#endif
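/*
 * Illustrative sketch (editor's note, not part of this diff): the
 * "+ freq / 2" term makes the division round to nearest. With the
 * default 44100 Hz frequency:
 *
 *     frames_to_usecs(512, pdo)
 *       = (512 * 1000000 + 22050) / 44100
 *       = 11610 usecs  (~11.6 ms per 512-frame period)
 */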
#ifdef CONFIG_AUDIO_COREAUDIO
static void get_frames_to_usecs(const char *env, uint32_t *dst, bool *has_dst,
AudiodevPerDirectionOptions *pdo)
{
const char *val = getenv(env);
if (val) {
*dst = frames_to_usecs(toui32(val), pdo);
*has_dst = true;
}
}
#endif
#if defined(CONFIG_AUDIO_PA) || defined(CONFIG_AUDIO_SDL) || \
defined(CONFIG_AUDIO_DSOUND) || defined(CONFIG_AUDIO_OSS)
static uint32_t samples_to_usecs(uint32_t samples,
AudiodevPerDirectionOptions *pdo)
{
uint32_t channels = pdo->has_channels ? pdo->channels : 2;
return frames_to_usecs(samples / channels, pdo);
}
#endif
#if defined(CONFIG_AUDIO_PA) || defined(CONFIG_AUDIO_SDL)
static void get_samples_to_usecs(const char *env, uint32_t *dst, bool *has_dst,
AudiodevPerDirectionOptions *pdo)
{
const char *val = getenv(env);
if (val) {
*dst = samples_to_usecs(toui32(val), pdo);
*has_dst = true;
}
}
#endif
#if defined(CONFIG_AUDIO_DSOUND) || defined(CONFIG_AUDIO_OSS)
static uint32_t bytes_to_usecs(uint32_t bytes, AudiodevPerDirectionOptions *pdo)
{
AudioFormat fmt = pdo->has_format ? pdo->format : AUDIO_FORMAT_S16;
uint32_t bytes_per_sample = audioformat_bytes_per_sample(fmt);
return samples_to_usecs(bytes / bytes_per_sample, pdo);
}
static void get_bytes_to_usecs(const char *env, uint32_t *dst, bool *has_dst,
AudiodevPerDirectionOptions *pdo)
{
const char *val = getenv(env);
if (val) {
*dst = bytes_to_usecs(toui32(val), pdo);
*has_dst = true;
}
}
#endif
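/*
 * Illustrative sketch (editor's note, not part of this diff): the three
 * helpers chain bytes -> samples -> frames -> usecs. With the defaults
 * (S16, so 2 bytes per sample, 2 channels, 44100 Hz), a 16384-byte
 * buffer works out to:
 *
 *     16384 bytes / 2 bytes-per-sample = 8192 samples
 *     8192 samples / 2 channels       = 4096 frames
 *     (4096 * 1000000 + 22050) / 44100 = 92880 usecs (~93 ms)
 */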
/* backend specific functions */
#ifdef CONFIG_AUDIO_ALSA
/* ALSA */
static void handle_alsa_per_direction(
AudiodevAlsaPerDirectionOptions *apdo, const char *prefix)
{
char buf[64];
size_t len = strlen(prefix);
bool size_in_usecs = false;
bool dummy;
memcpy(buf, prefix, len);
strcpy(buf + len, "TRY_POLL");
get_bool(buf, &apdo->try_poll, &apdo->has_try_poll);
strcpy(buf + len, "DEV");
get_str(buf, &apdo->dev);
strcpy(buf + len, "SIZE_IN_USEC");
get_bool(buf, &size_in_usecs, &dummy);
strcpy(buf + len, "PERIOD_SIZE");
get_int(buf, &apdo->period_length, &apdo->has_period_length);
if (apdo->has_period_length && !size_in_usecs) {
apdo->period_length = frames_to_usecs(
apdo->period_length,
qapi_AudiodevAlsaPerDirectionOptions_base(apdo));
}
strcpy(buf + len, "BUFFER_SIZE");
get_int(buf, &apdo->buffer_length, &apdo->has_buffer_length);
if (apdo->has_buffer_length && !size_in_usecs) {
apdo->buffer_length = frames_to_usecs(
apdo->buffer_length,
qapi_AudiodevAlsaPerDirectionOptions_base(apdo));
}
}
static void handle_alsa(Audiodev *dev)
{
AudiodevAlsaOptions *aopt = &dev->u.alsa;
handle_alsa_per_direction(aopt->in, "QEMU_ALSA_ADC_");
handle_alsa_per_direction(aopt->out, "QEMU_ALSA_DAC_");
get_millis_to_usecs("QEMU_ALSA_THRESHOLD",
&aopt->threshold, &aopt->has_threshold);
}
#endif
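/*
 * Illustrative summary (editor's note, derived from the code above, not
 * part of this diff): with the "QEMU_ALSA_DAC_" prefix,
 * handle_alsa_per_direction() reads the historical variables
 *
 *     QEMU_ALSA_DAC_TRY_POLL, QEMU_ALSA_DAC_DEV,
 *     QEMU_ALSA_DAC_SIZE_IN_USEC, QEMU_ALSA_DAC_PERIOD_SIZE,
 *     QEMU_ALSA_DAC_BUFFER_SIZE
 *
 * converting PERIOD_SIZE and BUFFER_SIZE from frames to microseconds
 * unless SIZE_IN_USEC is set.
 */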
#ifdef CONFIG_AUDIO_COREAUDIO
/* coreaudio */
static void handle_coreaudio(Audiodev *dev)
{
get_frames_to_usecs(
"QEMU_COREAUDIO_BUFFER_SIZE",
&dev->u.coreaudio.out->buffer_length,
&dev->u.coreaudio.out->has_buffer_length,
qapi_AudiodevCoreaudioPerDirectionOptions_base(dev->u.coreaudio.out));
get_int("QEMU_COREAUDIO_BUFFER_COUNT",
&dev->u.coreaudio.out->buffer_count,
&dev->u.coreaudio.out->has_buffer_count);
}
#endif
#ifdef CONFIG_AUDIO_DSOUND
/* dsound */
static void handle_dsound(Audiodev *dev)
{
get_millis_to_usecs("QEMU_DSOUND_LATENCY_MILLIS",
&dev->u.dsound.latency, &dev->u.dsound.has_latency);
get_bytes_to_usecs("QEMU_DSOUND_BUFSIZE_OUT",
&dev->u.dsound.out->buffer_length,
&dev->u.dsound.out->has_buffer_length,
dev->u.dsound.out);
get_bytes_to_usecs("QEMU_DSOUND_BUFSIZE_IN",
&dev->u.dsound.in->buffer_length,
&dev->u.dsound.in->has_buffer_length,
dev->u.dsound.in);
}
#endif
#ifdef CONFIG_AUDIO_OSS
/* OSS */
static void handle_oss_per_direction(
AudiodevOssPerDirectionOptions *opdo, const char *try_poll_env,
const char *dev_env)
{
get_bool(try_poll_env, &opdo->try_poll, &opdo->has_try_poll);
get_str(dev_env, &opdo->dev);
get_bytes_to_usecs("QEMU_OSS_FRAGSIZE",
&opdo->buffer_length, &opdo->has_buffer_length,
qapi_AudiodevOssPerDirectionOptions_base(opdo));
get_int("QEMU_OSS_NFRAGS", &opdo->buffer_count,
&opdo->has_buffer_count);
}
static void handle_oss(Audiodev *dev)
{
AudiodevOssOptions *oopt = &dev->u.oss;
handle_oss_per_direction(oopt->in, "QEMU_AUDIO_ADC_TRY_POLL",
"QEMU_OSS_ADC_DEV");
handle_oss_per_direction(oopt->out, "QEMU_AUDIO_DAC_TRY_POLL",
"QEMU_OSS_DAC_DEV");
get_bool("QEMU_OSS_MMAP", &oopt->try_mmap, &oopt->has_try_mmap);
get_bool("QEMU_OSS_EXCLUSIVE", &oopt->exclusive, &oopt->has_exclusive);
get_int("QEMU_OSS_POLICY", &oopt->dsp_policy, &oopt->has_dsp_policy);
}
#endif
#ifdef CONFIG_AUDIO_PA
/* pulseaudio */
static void handle_pa_per_direction(
AudiodevPaPerDirectionOptions *ppdo, const char *env)
{
get_str(env, &ppdo->name);
}
static void handle_pa(Audiodev *dev)
{
handle_pa_per_direction(dev->u.pa.in, "QEMU_PA_SOURCE");
handle_pa_per_direction(dev->u.pa.out, "QEMU_PA_SINK");
get_samples_to_usecs(
"QEMU_PA_SAMPLES", &dev->u.pa.in->buffer_length,
&dev->u.pa.in->has_buffer_length,
qapi_AudiodevPaPerDirectionOptions_base(dev->u.pa.in));
get_samples_to_usecs(
"QEMU_PA_SAMPLES", &dev->u.pa.out->buffer_length,
&dev->u.pa.out->has_buffer_length,
qapi_AudiodevPaPerDirectionOptions_base(dev->u.pa.out));
get_str("QEMU_PA_SERVER", &dev->u.pa.server);
}
#endif
#ifdef CONFIG_AUDIO_SDL
/* SDL */
static void handle_sdl(Audiodev *dev)
{
/* SDL is output only */
get_samples_to_usecs("QEMU_SDL_SAMPLES", &dev->u.sdl.out->buffer_length,
&dev->u.sdl.out->has_buffer_length,
qapi_AudiodevSdlPerDirectionOptions_base(dev->u.sdl.out));
}
#endif
/* wav */
static void handle_wav(Audiodev *dev)
{
get_int("QEMU_WAV_FREQUENCY",
&dev->u.wav.out->frequency, &dev->u.wav.out->has_frequency);
get_fmt("QEMU_WAV_FORMAT", &dev->u.wav.out->format,
&dev->u.wav.out->has_format);
get_int("QEMU_WAV_DAC_FIXED_CHANNELS",
&dev->u.wav.out->channels, &dev->u.wav.out->has_channels);
get_str("QEMU_WAV_PATH", &dev->u.wav.path);
}
/* general */
static void handle_per_direction(
AudiodevPerDirectionOptions *pdo, const char *prefix)
{
char buf[64];
size_t len = strlen(prefix);
memcpy(buf, prefix, len);
strcpy(buf + len, "FIXED_SETTINGS");
get_bool(buf, &pdo->fixed_settings, &pdo->has_fixed_settings);
strcpy(buf + len, "FIXED_FREQ");
get_int(buf, &pdo->frequency, &pdo->has_frequency);
strcpy(buf + len, "FIXED_FMT");
get_fmt(buf, &pdo->format, &pdo->has_format);
strcpy(buf + len, "FIXED_CHANNELS");
get_int(buf, &pdo->channels, &pdo->has_channels);
strcpy(buf + len, "VOICES");
get_int(buf, &pdo->voices, &pdo->has_voices);
}
static AudiodevListEntry *legacy_opt(const char *drvname)
{
AudiodevListEntry *e = g_new0(AudiodevListEntry, 1);
e->dev = g_new0(Audiodev, 1);
e->dev->id = g_strdup(drvname);
e->dev->driver = qapi_enum_parse(
&AudiodevDriver_lookup, drvname, -1, &error_abort);
audio_create_pdos(e->dev);
handle_per_direction(audio_get_pdo_in(e->dev), "QEMU_AUDIO_ADC_");
handle_per_direction(audio_get_pdo_out(e->dev), "QEMU_AUDIO_DAC_");
/* Original description: Timer period in HZ (0 - use lowest possible) */
get_int("QEMU_AUDIO_TIMER_PERIOD",
&e->dev->timer_period, &e->dev->has_timer_period);
if (e->dev->has_timer_period && e->dev->timer_period) {
e->dev->timer_period = NANOSECONDS_PER_SECOND / 1000 /
e->dev->timer_period;
}
switch (e->dev->driver) {
#ifdef CONFIG_AUDIO_ALSA
case AUDIODEV_DRIVER_ALSA:
handle_alsa(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_COREAUDIO
case AUDIODEV_DRIVER_COREAUDIO:
handle_coreaudio(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_DSOUND
case AUDIODEV_DRIVER_DSOUND:
handle_dsound(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_OSS
case AUDIODEV_DRIVER_OSS:
handle_oss(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_PA
case AUDIODEV_DRIVER_PA:
handle_pa(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_SDL
case AUDIODEV_DRIVER_SDL:
handle_sdl(e->dev);
break;
#endif
case AUDIODEV_DRIVER_WAV:
handle_wav(e->dev);
break;
default:
break;
}
return e;
}
AudiodevListHead audio_handle_legacy_opts(void)
{
const char *drvname = getenv("QEMU_AUDIO_DRV");
AudiodevListHead head = QSIMPLEQ_HEAD_INITIALIZER(head);
if (drvname) {
AudiodevListEntry *e;
audio_driver *driver = audio_driver_lookup(drvname);
if (!driver) {
dolog("Unknown audio driver `%s'\n", drvname);
exit(1);
}
e = legacy_opt(drvname);
QSIMPLEQ_INSERT_TAIL(&head, e, next);
} else {
for (int i = 0; audio_prio_list[i]; i++) {
audio_driver *driver = audio_driver_lookup(audio_prio_list[i]);
if (driver && driver->can_be_default) {
AudiodevListEntry *e = legacy_opt(driver->name);
QSIMPLEQ_INSERT_TAIL(&head, e, next);
}
}
if (QSIMPLEQ_EMPTY(&head)) {
dolog("Internal error: no default audio driver available\n");
exit(1);
}
}
return head;
}
/* visitor to print -audiodev option */
typedef struct {
Visitor visitor;
bool comma;
GList *path;
} LegacyPrintVisitor;
static bool lv_start_struct(Visitor *v, const char *name, void **obj,
size_t size, Error **errp)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
lv->path = g_list_append(lv->path, g_strdup(name));
return true;
}
static void lv_end_struct(Visitor *v, void **obj)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
lv->path = g_list_delete_link(lv->path, g_list_last(lv->path));
}
static void lv_print_key(Visitor *v, const char *name)
{
GList *e;
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
if (lv->comma) {
putchar(',');
} else {
lv->comma = true;
}
for (e = lv->path; e; e = e->next) {
if (e->data) {
printf("%s.", (const char *) e->data);
}
}
printf("%s=", name);
}
static bool lv_type_int64(Visitor *v, const char *name, int64_t *obj,
Error **errp)
{
lv_print_key(v, name);
printf("%" PRIi64, *obj);
return true;
}
static bool lv_type_uint64(Visitor *v, const char *name, uint64_t *obj,
Error **errp)
{
lv_print_key(v, name);
printf("%" PRIu64, *obj);
return true;
}
static bool lv_type_bool(Visitor *v, const char *name, bool *obj, Error **errp)
{
lv_print_key(v, name);
printf("%s", *obj ? "on" : "off");
return true;
}
static bool lv_type_str(Visitor *v, const char *name, char **obj, Error **errp)
{
const char *str = *obj;
lv_print_key(v, name);
while (*str) {
if (*str == ',') {
putchar(',');
}
putchar(*str++);
}
return true;
}
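/*
 * Illustrative sketch (editor's note, not part of this diff): commas are
 * doubled because QEMU's option parser reads ",," as a literal comma. A
 * value such as QEMU_WAV_PATH=out,1.wav is therefore printed as
 * path=out,,1.wav.
 */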
static void lv_complete(Visitor *v, void *opaque)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
assert(lv->path == NULL);
}
static void lv_free(Visitor *v)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
g_list_free_full(lv->path, g_free);
g_free(lv);
}
static Visitor *legacy_visitor_new(void)
{
LegacyPrintVisitor *lv = g_new0(LegacyPrintVisitor, 1);
lv->visitor.start_struct = lv_start_struct;
lv->visitor.end_struct = lv_end_struct;
/* lists not supported */
lv->visitor.type_int64 = lv_type_int64;
lv->visitor.type_uint64 = lv_type_uint64;
lv->visitor.type_bool = lv_type_bool;
lv->visitor.type_str = lv_type_str;
lv->visitor.type = VISITOR_OUTPUT;
lv->visitor.complete = lv_complete;
lv->visitor.free = lv_free;
return &lv->visitor;
}
void audio_legacy_help(void)
{
AudiodevListHead head;
AudiodevListEntry *e;
printf("Environment variable based configuration deprecated.\n");
printf("Please use the new -audiodev option.\n");
head = audio_handle_legacy_opts();
printf("\nEquivalent -audiodev to your current environment variables:\n");
if (!getenv("QEMU_AUDIO_DRV")) {
printf("(Since you didn't specify QEMU_AUDIO_DRV, I'll list all "
"possibilities)\n");
}
QSIMPLEQ_FOREACH(e, &head, next) {
Visitor *v;
Audiodev *dev = e->dev;
printf("-audiodev ");
v = legacy_visitor_new();
visit_type_Audiodev(v, NULL, &dev, &error_abort);
visit_free(v);
printf("\n");
}
audio_free_audiodev_list(&head);
}
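/*
 * Illustrative example (editor's note, not from the patch; exact key order
 * depends on the visitor): with QEMU_AUDIO_DRV=pa and QEMU_PA_SAMPLES=4096
 * in the environment, the help output would contain a line along the
 * lines of
 *
 *     -audiodev pa,id=pa,in.buffer-length=46440,out.buffer-length=46440
 *
 * (4096 samples / 2 channels = 2048 frames = 46440 usecs at 44100 Hz).
 */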


@@ -37,12 +37,11 @@
#endif
static void glue(audio_init_nb_voices_, TYPE)(AudioState *s,
struct audio_driver *drv, int min_voices)
struct audio_driver *drv)
{
int max_voices = glue (drv->max_voices_, TYPE);
size_t voice_size = glue(drv->voice_size_, TYPE);
glue (s->nb_hw_voices_, TYPE) = glue(audio_get_pdo_, TYPE)(s->dev)->voices;
if (glue (s->nb_hw_voices_, TYPE) > max_voices) {
if (!max_voices) {
#ifdef DAC
@@ -57,12 +56,6 @@ static void glue(audio_init_nb_voices_, TYPE)(AudioState *s,
glue (s->nb_hw_voices_, TYPE) = max_voices;
}
if (glue (s->nb_hw_voices_, TYPE) < min_voices) {
dolog ("Bogus number of " NAME " voices %d, setting to %d\n",
glue (s->nb_hw_voices_, TYPE),
min_voices);
}
if (audio_bug(__func__, !voice_size && max_voices)) {
dolog ("drv=`%s' voice_size=0 max_voices=%d\n",
drv->name, max_voices);


@@ -644,7 +644,7 @@ static void coreaudio_enable_out(HWVoiceOut *hw, bool enable)
update_device_playback_state(core);
}
static void *coreaudio_audio_init(Audiodev *dev, Error **errp)
static void *coreaudio_audio_init(Audiodev *dev)
{
return dev;
}
@@ -673,6 +673,7 @@ static struct audio_driver coreaudio_audio_driver = {
.init = coreaudio_audio_init,
.fini = coreaudio_audio_fini,
.pcm_ops = &coreaudio_pcm_ops,
.can_be_default = 1,
.max_voices_out = 1,
.max_voices_in = 0,
.voice_size_out = sizeof (coreaudioVoiceOut),


@@ -29,11 +29,7 @@
#include "qemu/timer.h"
#include "qemu/dbus.h"
#ifdef G_OS_UNIX
#include <gio/gunixfdlist.h>
#endif
#include "ui/dbus.h"
#include "ui/dbus-display1.h"
#define AUDIO_CAP "dbus"
@@ -395,7 +391,7 @@ dbus_enable_in(HWVoiceIn *hw, bool enable)
}
static void *
dbus_audio_init(Audiodev *dev, Error **errp)
dbus_audio_init(Audiodev *dev)
{
DBusAudio *da = g_new0(DBusAudio, 1);
@@ -448,9 +444,7 @@ listener_in_vanished_cb(GDBusConnection *connection,
static gboolean
dbus_audio_register_listener(AudioState *s,
GDBusMethodInvocation *invocation,
#ifdef G_OS_UNIX
GUnixFDList *fd_list,
#endif
GVariant *arg_listener,
bool out)
{
@@ -477,11 +471,6 @@ dbus_audio_register_listener(AudioState *s,
return DBUS_METHOD_INVOCATION_HANDLED;
}
#ifdef G_OS_WIN32
if (!dbus_win32_import_socket(invocation, arg_listener, &fd)) {
return DBUS_METHOD_INVOCATION_HANDLED;
}
#else
fd = g_unix_fd_list_get(fd_list, g_variant_get_handle(arg_listener), &err);
if (err) {
g_dbus_method_invocation_return_error(invocation,
@@ -491,7 +480,6 @@ dbus_audio_register_listener(AudioState *s,
err->message);
return DBUS_METHOD_INVOCATION_HANDLED;
}
#endif
socket = g_socket_new_from_fd(fd, &err);
if (err) {
@@ -500,28 +488,15 @@ dbus_audio_register_listener(AudioState *s,
DBUS_DISPLAY_ERROR_FAILED,
"Couldn't make a socket: %s",
err->message);
#ifdef G_OS_WIN32
closesocket(fd);
#else
close(fd);
#endif
return DBUS_METHOD_INVOCATION_HANDLED;
}
socket_conn = g_socket_connection_factory_create_connection(socket);
if (out) {
qemu_dbus_display1_audio_complete_register_out_listener(
da->iface, invocation
#ifdef G_OS_UNIX
, NULL
#endif
);
da->iface, invocation, NULL);
} else {
qemu_dbus_display1_audio_complete_register_in_listener(
da->iface, invocation
#ifdef G_OS_UNIX
, NULL
#endif
);
da->iface, invocation, NULL);
}
listener_conn =
@@ -599,32 +574,22 @@ dbus_audio_register_listener(AudioState *s,
static gboolean
dbus_audio_register_out_listener(AudioState *s,
GDBusMethodInvocation *invocation,
#ifdef G_OS_UNIX
GUnixFDList *fd_list,
#endif
GVariant *arg_listener)
{
return dbus_audio_register_listener(s, invocation,
#ifdef G_OS_UNIX
fd_list,
#endif
arg_listener, true);
fd_list, arg_listener, true);
}
static gboolean
dbus_audio_register_in_listener(AudioState *s,
GDBusMethodInvocation *invocation,
#ifdef G_OS_UNIX
GUnixFDList *fd_list,
#endif
GVariant *arg_listener)
{
return dbus_audio_register_listener(s, invocation,
#ifdef G_OS_UNIX
fd_list,
#endif
arg_listener, false);
fd_list, arg_listener, false);
}
static void
@@ -676,6 +641,7 @@ static struct audio_driver dbus_audio_driver = {
.fini = dbus_audio_fini,
.set_dbus_server = dbus_audio_set_server,
.pcm_ops = &dbus_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof(DBusVoiceOut),
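These dbusaudio hunks toggle the GUnixFDList parameter behind #ifdef G_OS_UNIX and a Windows import path for the listener socket. A condensed sketch of the platform split, assuming dbus_win32_import_socket() as referenced in the hunk (the wrapper get_listener_fd is illustrative):

#ifdef G_OS_UNIX
#include <gio/gunixfdlist.h>
#endif

static int get_listener_fd(GDBusMethodInvocation *invocation,
#ifdef G_OS_UNIX
                           GUnixFDList *fd_list,
#endif
                           GVariant *arg_listener, GError **err)
{
    int fd = -1;
#ifdef G_OS_WIN32
    /* import a WSA socket duplicated through the message payload */
    if (!dbus_win32_import_socket(invocation, arg_listener, &fd)) {
        return -1;
    }
#else
    /* the fd travels out-of-band in the message's fd list */
    fd = g_unix_fd_list_get(fd_list, g_variant_get_handle(arg_listener), err);
#endif
    return fd;
}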


@@ -619,7 +619,7 @@ static void dsound_audio_fini (void *opaque)
g_free(s);
}
static void *dsound_audio_init(Audiodev *dev, Error **errp)
static void *dsound_audio_init(Audiodev *dev)
{
int err;
HRESULT hr;
@@ -721,6 +721,7 @@ static struct audio_driver dsound_audio_driver = {
.init = dsound_audio_init,
.fini = dsound_audio_fini,
.pcm_ops = &dsound_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = 1,
.voice_size_out = sizeof (DSoundVoiceOut),


@@ -70,9 +70,6 @@ typedef struct QJackClient {
int buffersize;
jack_port_t **port;
QJackBuffer fifo;
/* Used as workspace by qjack_process() */
float **process_buffers;
}
QJackClient;
@@ -270,21 +267,22 @@ static int qjack_process(jack_nframes_t nframes, void *arg)
}
/* get the buffers for the ports */
float *buffers[c->nchannels];
for (int i = 0; i < c->nchannels; ++i) {
c->process_buffers[i] = jack_port_get_buffer(c->port[i], nframes);
buffers[i] = jack_port_get_buffer(c->port[i], nframes);
}
if (c->out) {
if (likely(c->enabled)) {
qjack_buffer_read_l(&c->fifo, c->process_buffers, nframes);
qjack_buffer_read_l(&c->fifo, buffers, nframes);
} else {
for (int i = 0; i < c->nchannels; ++i) {
memset(c->process_buffers[i], 0, nframes * sizeof(float));
memset(buffers[i], 0, nframes * sizeof(float));
}
}
} else {
if (likely(c->enabled)) {
qjack_buffer_write_l(&c->fifo, c->process_buffers, nframes);
qjack_buffer_write_l(&c->fifo, buffers, nframes);
}
}
@@ -402,8 +400,7 @@ static void qjack_client_connect_ports(QJackClient *c)
static int qjack_client_init(QJackClient *c)
{
jack_status_t status;
int client_name_len = jack_client_name_size(); /* includes NUL */
g_autofree char *client_name = g_new(char, client_name_len);
char client_name[jack_client_name_size()];
jack_options_t options = JackNullOption;
if (c->state == QJACK_STATE_RUNNING) {
@@ -412,7 +409,7 @@ static int qjack_client_init(QJackClient *c)
c->connect_ports = true;
snprintf(client_name, client_name_len, "%s-%s",
snprintf(client_name, sizeof(client_name), "%s-%s",
c->out ? "out" : "in",
c->opt->client_name ? c->opt->client_name : audio_application_name());
@@ -450,9 +447,6 @@ static int qjack_client_init(QJackClient *c)
jack_get_client_name(c->client));
}
/* Allocate working buffer for process callback */
c->process_buffers = g_new(float *, c->nchannels);
jack_set_process_callback(c->client, qjack_process , c);
jack_set_port_registration_callback(c->client, qjack_port_registration, c);
jack_set_xrun_callback(c->client, qjack_xrun, c);
@@ -584,7 +578,6 @@ static void qjack_client_fini_locked(QJackClient *c)
qjack_buffer_free(&c->fifo);
g_free(c->port);
g_free(c->process_buffers);
c->state = QJACK_STATE_DISCONNECTED;
/* fallthrough */
@@ -645,7 +638,7 @@ static int qjack_thread_creator(jack_native_thread_t *thread,
}
#endif
static void *qjack_init(Audiodev *dev, Error **errp)
static void *qjack_init(Audiodev *dev)
{
assert(dev->driver == AUDIODEV_DRIVER_JACK);
return dev;
@@ -676,6 +669,7 @@ static struct audio_driver jack_driver = {
.init = qjack_init,
.fini = qjack_fini,
.pcm_ops = &jack_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof(QJackOut),
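The qjack hunk above trades between a g_autofree heap buffer and a stack VLA sized by jack_client_name_size(). For contrast, a standalone sketch of the heap-based form using the usual GLib helpers (make_client_name is illustrative):

#include <glib.h>
#include <jack/jack.h>

static char *make_client_name(const char *dir, const char *base)
{
    int len = jack_client_name_size();          /* maximum length, incl. NUL */
    g_autofree char *name = g_new(char, len);   /* g_free()d on scope exit */

    g_snprintf(name, len, "%s-%s", dir, base);
    return g_steal_pointer(&name);              /* transfer ownership out */
}

The g_autofree form avoids a variable-length array whose size depends on a runtime library call.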


@@ -1,14 +1,15 @@
system_ss.add([spice_headers, files('audio.c')])
system_ss.add(files(
softmmu_ss.add([spice_headers, files('audio.c')])
softmmu_ss.add(files(
'audio-hmp-cmds.c',
'audio_legacy.c',
'mixeng.c',
'noaudio.c',
'wavaudio.c',
'wavcapture.c',
))
system_ss.add(when: coreaudio, if_true: files('coreaudio.m'))
system_ss.add(when: dsound, if_true: files('dsoundaudio.c', 'audio_win_int.c'))
softmmu_ss.add(when: coreaudio, if_true: files('coreaudio.m'))
softmmu_ss.add(when: dsound, if_true: files('dsoundaudio.c', 'audio_win_int.c'))
audio_modules = {}
foreach m : [
@@ -30,7 +31,7 @@ endforeach
if dbus_display
module_ss = ss.source_set()
module_ss.add(when: [gio, pixman], if_true: files('dbusaudio.c'))
module_ss.add(when: gio, if_true: files('dbusaudio.c'))
audio_modules += {'dbus': module_ss}
endif


@@ -38,7 +38,7 @@ typedef struct st_sample st_sample;
typedef void (t_sample) (struct st_sample *dst, const void *src, int samples);
typedef void (f_sample) (void *dst, const struct st_sample *src, int samples);
/* indices: [stereo][signed][swap endianness][8, 16 or 32-bits] */
/* indices: [stereo][signed][swap endiannes][8, 16 or 32-bits] */
extern t_sample *mixeng_conv[2][2][2][3];
extern f_sample *mixeng_clip[2][2][2][3];
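The four indices select a converter at runtime. An illustrative lookup, not a line from the patch, for stereo, signed, native-endian 16-bit input:

/* [stereo][signed][swap endianness][8/16/32 bits], per the comment above */
t_sample *conv = mixeng_conv[1 /* stereo */]
                            [1 /* signed */]
                            [0 /* no byte swap */]
                            [1 /* 16-bit */];
conv(dst, src, samples);   /* dst: struct st_sample *, src: raw PCM */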


@@ -104,7 +104,7 @@ static void no_enable_in(HWVoiceIn *hw, bool enable)
}
}
static void *no_audio_init(Audiodev *dev, Error **errp)
static void *no_audio_init(Audiodev *dev)
{
return &no_audio_init;
}
@@ -135,6 +135,7 @@ static struct audio_driver no_audio_driver = {
.init = no_audio_init,
.fini = no_audio_fini,
.pcm_ops = &no_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (NoVoiceOut),


@@ -28,7 +28,6 @@
#include "qemu/main-loop.h"
#include "qemu/module.h"
#include "qemu/host-utils.h"
#include "qapi/error.h"
#include "audio.h"
#include "trace.h"
@@ -549,6 +548,7 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
hw->size_emul);
hw->buf_emul = NULL;
} else {
int err;
int trig = 0;
if (ioctl (fd, SNDCTL_DSP_SETTRIGGER, &trig) < 0) {
oss_logerr (errno, "SNDCTL_DSP_SETTRIGGER 0 failed\n");
@@ -736,7 +736,7 @@ static void oss_init_per_direction(AudiodevOssPerDirectionOptions *opdo)
}
}
static void *oss_audio_init(Audiodev *dev, Error **errp)
static void *oss_audio_init(Audiodev *dev)
{
AudiodevOssOptions *oopts;
assert(dev->driver == AUDIODEV_DRIVER_OSS);
@@ -745,12 +745,8 @@ static void *oss_audio_init(Audiodev *dev, Error **errp)
oss_init_per_direction(oopts->in);
oss_init_per_direction(oopts->out);
if (access(oopts->in->dev ?: "/dev/dsp", R_OK | W_OK) < 0) {
error_setg_errno(errp, errno, "%s not accessible", oopts->in->dev ?: "/dev/dsp");
return NULL;
}
if (access(oopts->out->dev ?: "/dev/dsp", R_OK | W_OK) < 0) {
error_setg_errno(errp, errno, "%s not accessible", oopts->out->dev ?: "/dev/dsp");
if (access(oopts->in->dev ?: "/dev/dsp", R_OK | W_OK) < 0 ||
access(oopts->out->dev ?: "/dev/dsp", R_OK | W_OK) < 0) {
return NULL;
}
return dev;
@@ -783,6 +779,7 @@ static struct audio_driver oss_audio_driver = {
.init = oss_audio_init,
.fini = oss_audio_fini,
.pcm_ops = &oss_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (OSSVoiceOut),


@@ -3,7 +3,7 @@
#include "qemu/osdep.h"
#include "qemu/module.h"
#include "audio.h"
#include "qapi/error.h"
#include "qapi/opts-visitor.h"
#include <pulse/pulseaudio.h>
@@ -818,7 +818,7 @@ fail:
return NULL;
}
static void *qpa_audio_init(Audiodev *dev, Error **errp)
static void *qpa_audio_init(Audiodev *dev)
{
paaudio *g;
AudiodevPaOptions *popts = &dev->u.pa;
@@ -834,12 +834,10 @@ static void *qpa_audio_init(Audiodev *dev, Error **errp)
runtime = getenv("XDG_RUNTIME_DIR");
if (!runtime) {
error_setg(errp, "XDG_RUNTIME_DIR not set");
return NULL;
}
snprintf(pidfile, sizeof(pidfile), "%s/pulse/pid", runtime);
if (stat(pidfile, &st) != 0) {
error_setg_errno(errp, errno, "could not stat pidfile %s", pidfile);
return NULL;
}
}
@@ -869,7 +867,6 @@ static void *qpa_audio_init(Audiodev *dev, Error **errp)
}
if (!g->conn) {
g_free(g);
error_setg(errp, "could not connect to PulseAudio server");
return NULL;
}
@@ -931,6 +928,7 @@ static struct audio_driver pa_audio_driver = {
.init = qpa_audio_init,
.fini = qpa_audio_fini,
.pcm_ops = &qpa_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (PAVoiceOut),


@@ -1,5 +1,5 @@
/*
* QEMU PipeWire audio driver
* QEMU Pipewire audio driver
*
* Copyright (c) 2023 Red Hat Inc.
*
@@ -13,7 +13,6 @@
#include "audio.h"
#include <errno.h>
#include "qemu/error-report.h"
#include "qapi/error.h"
#include <spa/param/audio/format-utils.h>
#include <spa/utils/ringbuffer.h>
#include <spa/utils/result.h>
@@ -67,9 +66,6 @@ typedef struct PWVoiceIn {
PWVoice v;
} PWVoiceIn;
#define PW_VOICE_IN(v) ((PWVoiceIn *)v)
#define PW_VOICE_OUT(v) ((PWVoiceOut *)v)
static void
stream_destroy(void *data)
{
@@ -201,6 +197,16 @@ on_stream_state_changed(void *data, enum pw_stream_state old,
trace_pw_state_changed(pw_stream_get_node_id(v->stream),
pw_stream_state_as_string(state));
switch (state) {
case PW_STREAM_STATE_ERROR:
case PW_STREAM_STATE_UNCONNECTED:
break;
case PW_STREAM_STATE_PAUSED:
case PW_STREAM_STATE_CONNECTING:
case PW_STREAM_STATE_STREAMING:
break;
}
}
static const struct pw_stream_events capture_stream_events = {
@@ -418,8 +424,8 @@ pw_to_audfmt(enum spa_audio_format fmt, int *endianness,
}
static int
qpw_stream_new(pwaudio *c, PWVoice *v, const char *stream_name,
const char *name, enum spa_direction dir)
create_stream(pwaudio *c, PWVoice *v, const char *stream_name,
const char *name, enum spa_direction dir)
{
int res;
uint32_t n_params;
@@ -430,10 +436,6 @@ qpw_stream_new(pwaudio *c, PWVoice *v, const char *stream_name,
struct pw_properties *props;
props = pw_properties_new(NULL, NULL);
if (!props) {
error_report("Failed to create PW properties: %s", g_strerror(errno));
return -1;
}
/* 75% of the timer period for faster updates */
buf_samples = (uint64_t)v->g->dev->timer_period * v->info.rate
@@ -446,8 +448,8 @@ qpw_stream_new(pwaudio *c, PWVoice *v, const char *stream_name,
pw_properties_set(props, PW_KEY_TARGET_OBJECT, name);
}
v->stream = pw_stream_new(c->core, stream_name, props);
if (v->stream == NULL) {
error_report("Failed to create PW stream: %s", g_strerror(errno));
return -1;
}
@@ -475,7 +477,6 @@ qpw_stream_new(pwaudio *c, PWVoice *v, const char *stream_name,
PW_STREAM_FLAG_MAP_BUFFERS |
PW_STREAM_FLAG_RT_PROCESS, params, n_params);
if (res < 0) {
error_report("Failed to connect PW stream: %s", g_strerror(errno));
pw_stream_destroy(v->stream);
return -1;
}
@@ -483,37 +484,71 @@ qpw_stream_new(pwaudio *c, PWVoice *v, const char *stream_name,
return 0;
}
static void
qpw_set_position(uint32_t channels, uint32_t position[SPA_AUDIO_MAX_CHANNELS])
static int
qpw_stream_new(pwaudio *c, PWVoice *v, const char *stream_name,
const char *name, enum spa_direction dir)
{
memcpy(position, (uint32_t[SPA_AUDIO_MAX_CHANNELS]) { SPA_AUDIO_CHANNEL_UNKNOWN, },
sizeof(uint32_t) * SPA_AUDIO_MAX_CHANNELS);
/*
* TODO: This currently expects the only frontend supporting more than 2
* channels is the usb-audio. We will need some means to set channel
* order when a new frontend gains multi-channel support.
*/
switch (channels) {
int r;
switch (v->info.channels) {
case 8:
position[6] = SPA_AUDIO_CHANNEL_SL;
position[7] = SPA_AUDIO_CHANNEL_SR;
/* fallthrough */
v->info.position[0] = SPA_AUDIO_CHANNEL_FL;
v->info.position[1] = SPA_AUDIO_CHANNEL_FR;
v->info.position[2] = SPA_AUDIO_CHANNEL_FC;
v->info.position[3] = SPA_AUDIO_CHANNEL_LFE;
v->info.position[4] = SPA_AUDIO_CHANNEL_RL;
v->info.position[5] = SPA_AUDIO_CHANNEL_RR;
v->info.position[6] = SPA_AUDIO_CHANNEL_SL;
v->info.position[7] = SPA_AUDIO_CHANNEL_SR;
break;
case 6:
position[2] = SPA_AUDIO_CHANNEL_FC;
position[3] = SPA_AUDIO_CHANNEL_LFE;
position[4] = SPA_AUDIO_CHANNEL_RL;
position[5] = SPA_AUDIO_CHANNEL_RR;
/* fallthrough */
v->info.position[0] = SPA_AUDIO_CHANNEL_FL;
v->info.position[1] = SPA_AUDIO_CHANNEL_FR;
v->info.position[2] = SPA_AUDIO_CHANNEL_FC;
v->info.position[3] = SPA_AUDIO_CHANNEL_LFE;
v->info.position[4] = SPA_AUDIO_CHANNEL_RL;
v->info.position[5] = SPA_AUDIO_CHANNEL_RR;
break;
case 5:
v->info.position[0] = SPA_AUDIO_CHANNEL_FL;
v->info.position[1] = SPA_AUDIO_CHANNEL_FR;
v->info.position[2] = SPA_AUDIO_CHANNEL_FC;
v->info.position[3] = SPA_AUDIO_CHANNEL_LFE;
v->info.position[4] = SPA_AUDIO_CHANNEL_RC;
break;
case 4:
v->info.position[0] = SPA_AUDIO_CHANNEL_FL;
v->info.position[1] = SPA_AUDIO_CHANNEL_FR;
v->info.position[2] = SPA_AUDIO_CHANNEL_FC;
v->info.position[3] = SPA_AUDIO_CHANNEL_RC;
break;
case 3:
v->info.position[0] = SPA_AUDIO_CHANNEL_FL;
v->info.position[1] = SPA_AUDIO_CHANNEL_FR;
v->info.position[2] = SPA_AUDIO_CHANNEL_LFE;
break;
case 2:
position[0] = SPA_AUDIO_CHANNEL_FL;
position[1] = SPA_AUDIO_CHANNEL_FR;
v->info.position[0] = SPA_AUDIO_CHANNEL_FL;
v->info.position[1] = SPA_AUDIO_CHANNEL_FR;
break;
case 1:
position[0] = SPA_AUDIO_CHANNEL_MONO;
v->info.position[0] = SPA_AUDIO_CHANNEL_MONO;
break;
default:
dolog("Internal error: unsupported channel count %d\n", channels);
for (size_t i = 0; i < v->info.channels; i++) {
v->info.position[i] = SPA_AUDIO_CHANNEL_UNKNOWN;
}
break;
}
/* create a new unconnected pwstream */
r = create_stream(c, v, stream_name, name, dir);
if (r < 0) {
AUD_log(AUDIO_CAP, "Failed to create stream.");
return -1;
}
return r;
}
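One side of this hunk relies on case fallthrough: every slot starts as SPA_AUDIO_CHANNEL_UNKNOWN, and each higher channel count adds its extra positions before falling through to the smaller layouts; the other side spells each layout out per case. The fallthrough idiom, restated compactly for the 1/2/6/8-channel cases (set_position is illustrative):

static void set_position(uint32_t channels,
                         uint32_t pos[SPA_AUDIO_MAX_CHANNELS])
{
    for (uint32_t i = 0; i < SPA_AUDIO_MAX_CHANNELS; i++) {
        pos[i] = SPA_AUDIO_CHANNEL_UNKNOWN;   /* default every slot */
    }
    switch (channels) {
    case 8:
        pos[6] = SPA_AUDIO_CHANNEL_SL;
        pos[7] = SPA_AUDIO_CHANNEL_SR;
        /* fallthrough */
    case 6:
        pos[2] = SPA_AUDIO_CHANNEL_FC;
        pos[3] = SPA_AUDIO_CHANNEL_LFE;
        pos[4] = SPA_AUDIO_CHANNEL_RL;
        pos[5] = SPA_AUDIO_CHANNEL_RR;
        /* fallthrough */
    case 2:
        pos[0] = SPA_AUDIO_CHANNEL_FL;
        pos[1] = SPA_AUDIO_CHANNEL_FR;
        break;
    case 1:
        pos[0] = SPA_AUDIO_CHANNEL_MONO;
        break;
    }
}

The per-case variant pays in repetition but gains explicit 3-, 4- and 5-channel maps.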
static int
@@ -531,7 +566,6 @@ qpw_init_out(HWVoiceOut *hw, struct audsettings *as, void *drv_opaque)
v->info.format = audfmt_to_pw(as->fmt, as->endianness);
v->info.channels = as->nchannels;
qpw_set_position(as->nchannels, v->info.position);
v->info.rate = as->freq;
obt_as.fmt =
@@ -545,6 +579,7 @@ qpw_init_out(HWVoiceOut *hw, struct audsettings *as, void *drv_opaque)
r = qpw_stream_new(c, v, ppdo->stream_name ? : c->dev->id,
ppdo->name, SPA_DIRECTION_OUTPUT);
if (r < 0) {
error_report("qpw_stream_new for playback failed");
pw_thread_loop_unlock(c->thread_loop);
return -1;
}
@@ -578,7 +613,6 @@ qpw_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
v->info.format = audfmt_to_pw(as->fmt, as->endianness);
v->info.channels = as->nchannels;
qpw_set_position(as->nchannels, v->info.position);
v->info.rate = as->freq;
obt_as.fmt =
@@ -589,6 +623,7 @@ qpw_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
r = qpw_stream_new(c, v, ppdo->stream_name ? : c->dev->id,
ppdo->name, SPA_DIRECTION_INPUT);
if (r < 0) {
error_report("qpw_stream_new for recording failed");
pw_thread_loop_unlock(c->thread_loop);
return -1;
}
@@ -604,56 +639,63 @@ qpw_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
return 0;
}
static void
qpw_voice_fini(PWVoice *v)
{
pwaudio *c = v->g;
if (!v->stream) {
return;
}
pw_thread_loop_lock(c->thread_loop);
pw_stream_destroy(v->stream);
v->stream = NULL;
pw_thread_loop_unlock(c->thread_loop);
}
static void
qpw_fini_out(HWVoiceOut *hw)
{
qpw_voice_fini(&PW_VOICE_OUT(hw)->v);
PWVoiceOut *pw = (PWVoiceOut *) hw;
PWVoice *v = &pw->v;
if (v->stream) {
pwaudio *c = v->g;
pw_thread_loop_lock(c->thread_loop);
pw_stream_destroy(v->stream);
v->stream = NULL;
pw_thread_loop_unlock(c->thread_loop);
}
}
static void
qpw_fini_in(HWVoiceIn *hw)
{
qpw_voice_fini(&PW_VOICE_IN(hw)->v);
PWVoiceIn *pw = (PWVoiceIn *) hw;
PWVoice *v = &pw->v;
if (v->stream) {
pwaudio *c = v->g;
pw_thread_loop_lock(c->thread_loop);
pw_stream_destroy(v->stream);
v->stream = NULL;
pw_thread_loop_unlock(c->thread_loop);
}
}
static void
qpw_voice_set_enabled(PWVoice *v, bool enable)
qpw_enable_out(HWVoiceOut *hw, bool enable)
{
PWVoiceOut *po = (PWVoiceOut *) hw;
PWVoice *v = &po->v;
pwaudio *c = v->g;
pw_thread_loop_lock(c->thread_loop);
pw_stream_set_active(v->stream, enable);
pw_thread_loop_unlock(c->thread_loop);
}
static void
qpw_enable_out(HWVoiceOut *hw, bool enable)
{
qpw_voice_set_enabled(&PW_VOICE_OUT(hw)->v, enable);
}
static void
qpw_enable_in(HWVoiceIn *hw, bool enable)
{
qpw_voice_set_enabled(&PW_VOICE_IN(hw)->v, enable);
PWVoiceIn *pi = (PWVoiceIn *) hw;
PWVoice *v = &pi->v;
pwaudio *c = v->g;
pw_thread_loop_lock(c->thread_loop);
pw_stream_set_active(v->stream, enable);
pw_thread_loop_unlock(c->thread_loop);
}
static void
qpw_voice_set_volume(PWVoice *v, Volume *vol)
qpw_volume_out(HWVoiceOut *hw, Volume *vol)
{
PWVoiceOut *pw = (PWVoiceOut *) hw;
PWVoice *v = &pw->v;
pwaudio *c = v->g;
int i, ret;
@@ -674,16 +716,29 @@ qpw_voice_set_volume(PWVoice *v, Volume *vol)
pw_thread_loop_unlock(c->thread_loop);
}
static void
qpw_volume_out(HWVoiceOut *hw, Volume *vol)
{
qpw_voice_set_volume(&PW_VOICE_OUT(hw)->v, vol);
}
static void
qpw_volume_in(HWVoiceIn *hw, Volume *vol)
{
qpw_voice_set_volume(&PW_VOICE_IN(hw)->v, vol);
PWVoiceIn *pw = (PWVoiceIn *) hw;
PWVoice *v = &pw->v;
pwaudio *c = v->g;
int i, ret;
pw_thread_loop_lock(c->thread_loop);
v->volume.channels = vol->channels;
for (i = 0; i < vol->channels; ++i) {
v->volume.values[i] = (float)vol->vol[i] / 255;
}
ret = pw_stream_set_control(v->stream,
SPA_PROP_channelVolumes, v->volume.channels, v->volume.values, 0);
trace_pw_vol(ret == 0 ? "success" : "failed");
v->muted = vol->mute;
float val = v->muted ? 1.f : 0.f;
ret = pw_stream_set_control(v->stream, SPA_PROP_mute, 1, &val, 0);
pw_thread_loop_unlock(c->thread_loop);
}
static int wait_resync(pwaudio *pw)
@@ -705,7 +760,6 @@ static int wait_resync(pwaudio *pw)
}
return 0;
}
static void
on_core_error(void *data, uint32_t id, int seq, int res, const char *message)
{
@@ -737,31 +791,30 @@ static const struct pw_core_events core_events = {
};
static void *
qpw_audio_init(Audiodev *dev, Error **errp)
qpw_audio_init(Audiodev *dev)
{
g_autofree pwaudio *pw = g_new0(pwaudio, 1);
assert(dev->driver == AUDIODEV_DRIVER_PIPEWIRE);
trace_pw_audio_init();
pw_init(NULL, NULL);
trace_pw_audio_init();
assert(dev->driver == AUDIODEV_DRIVER_PIPEWIRE);
pw->dev = dev;
pw->thread_loop = pw_thread_loop_new("PipeWire thread loop", NULL);
pw->thread_loop = pw_thread_loop_new("Pipewire thread loop", NULL);
if (pw->thread_loop == NULL) {
error_setg_errno(errp, errno, "Could not create PipeWire loop");
error_report("Could not create Pipewire loop");
goto fail;
}
pw->context =
pw_context_new(pw_thread_loop_get_loop(pw->thread_loop), NULL, 0);
if (pw->context == NULL) {
error_setg_errno(errp, errno, "Could not create PipeWire context");
error_report("Could not create Pipewire context");
goto fail;
}
if (pw_thread_loop_start(pw->thread_loop) < 0) {
error_setg_errno(errp, errno, "Could not start PipeWire loop");
error_report("Could not start Pipewire loop");
goto fail;
}
@@ -770,13 +823,13 @@ qpw_audio_init(Audiodev *dev, Error **errp)
pw->core = pw_context_connect(pw->context, NULL, 0);
if (pw->core == NULL) {
pw_thread_loop_unlock(pw->thread_loop);
goto fail_error;
goto fail;
}
if (pw_core_add_listener(pw->core, &pw->core_listener,
&core_events, pw) < 0) {
pw_thread_loop_unlock(pw->thread_loop);
goto fail_error;
goto fail;
}
if (wait_resync(pw) < 0) {
pw_thread_loop_unlock(pw->thread_loop);
@@ -786,14 +839,17 @@ qpw_audio_init(Audiodev *dev, Error **errp)
return g_steal_pointer(&pw);
fail_error:
error_setg(errp, "Failed to initialize PW context");
fail:
AUD_log(AUDIO_CAP, "Failed to initialize PW context");
if (pw->thread_loop) {
pw_thread_loop_stop(pw->thread_loop);
}
g_clear_pointer(&pw->context, pw_context_destroy);
g_clear_pointer(&pw->thread_loop, pw_thread_loop_destroy);
if (pw->context) {
g_clear_pointer(&pw->context, pw_context_destroy);
}
if (pw->thread_loop) {
g_clear_pointer(&pw->thread_loop, pw_thread_loop_destroy);
}
return NULL;
}
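One side of this failure path wraps g_clear_pointer() in explicit NULL checks. The guard is redundant: per GLib semantics, g_clear_pointer(&p, destroy) only invokes the destructor when the pointer is non-NULL, then resets it. Roughly:

/* Approximate expansion of g_clear_pointer(&pw->context, pw_context_destroy) */
if (pw->context) {
    pw_context_destroy(pw->context);
    pw->context = NULL;
}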
@@ -843,6 +899,7 @@ static struct audio_driver pw_audio_driver = {
.init = qpw_audio_init,
.fini = qpw_audio_fini,
.pcm_ops = &qpw_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof(PWVoiceOut),


@@ -26,7 +26,6 @@
#include <SDL.h>
#include <SDL_thread.h>
#include "qemu/module.h"
#include "qapi/error.h"
#include "audio.h"
#ifndef _WIN32
@@ -450,10 +449,10 @@ static void sdl_enable_in(HWVoiceIn *hw, bool enable)
SDL_PauseAudioDevice(sdl->devid, !enable);
}
static void *sdl_audio_init(Audiodev *dev, Error **errp)
static void *sdl_audio_init(Audiodev *dev)
{
if (SDL_InitSubSystem (SDL_INIT_AUDIO)) {
error_setg(errp, "SDL failed to initialize audio subsystem");
sdl_logerr ("SDL failed to initialize audio subsystem\n");
return NULL;
}
@@ -494,6 +493,7 @@ static struct audio_driver sdl_audio_driver = {
.init = sdl_audio_init,
.fini = sdl_audio_fini,
.pcm_ops = &sdl_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof(SDLVoiceOut),


@@ -518,7 +518,7 @@ static void sndio_fini_in(HWVoiceIn *hw)
sndio_fini(self);
}
static void *sndio_audio_init(Audiodev *dev, Error **errp)
static void *sndio_audio_init(Audiodev *dev)
{
assert(dev->driver == AUDIODEV_DRIVER_SNDIO);
return dev;
@@ -550,6 +550,7 @@ static struct audio_driver sndio_audio_driver = {
.init = sndio_audio_init,
.fini = sndio_audio_fini,
.pcm_ops = &sndio_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof(SndioVoice),


@@ -22,7 +22,6 @@
#include "qemu/module.h"
#include "qemu/error-report.h"
#include "qemu/timer.h"
#include "qapi/error.h"
#include "ui/qemu-spice.h"
#define AUDIO_CAP "spice"
@@ -72,13 +71,11 @@ static const SpiceRecordInterface record_sif = {
.base.minor_version = SPICE_INTERFACE_RECORD_MINOR,
};
static void *spice_audio_init(Audiodev *dev, Error **errp)
static void *spice_audio_init(Audiodev *dev)
{
if (!using_spice) {
error_setg(errp, "Cannot use spice audio without -spice");
return NULL;
}
return &spice_audio_init;
}


@@ -24,7 +24,7 @@ pw_read(int32_t avail, uint32_t index, size_t len) "avail=%d index=%u len=%zu"
pw_write(int32_t filled, int32_t avail, uint32_t index, size_t len) "filled=%d avail=%d index=%u len=%zu"
pw_vol(const char *ret) "set volume: %s"
pw_period(uint64_t quantum, uint32_t rate) "period =%" PRIu64 "/%u"
pw_audio_init(void) "Initialize PipeWire context"
pw_audio_init(void) "Initialize Pipewire context"
# audio.c
audio_timer_start(int interval) "interval %d ms"


@@ -97,10 +97,6 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
dolog ("WAVE files can not handle 32bit formats\n");
return -1;
case AUDIO_FORMAT_F32:
dolog("WAVE files can not handle float formats\n");
return -1;
default:
abort();
}
@@ -186,7 +182,7 @@ static void wav_enable_out(HWVoiceOut *hw, bool enable)
}
}
static void *wav_audio_init(Audiodev *dev, Error **errp)
static void *wav_audio_init(Audiodev *dev)
{
assert(dev->driver == AUDIODEV_DRIVER_WAV);
return dev;
@@ -212,6 +208,7 @@ static struct audio_driver wav_audio_driver = {
.init = wav_audio_init,
.fini = wav_audio_fini,
.pcm_ops = &wav_pcm_ops,
.can_be_default = 0,
.max_voices_out = 1,
.max_voices_in = 0,
.voice_size_out = sizeof (WAVVoiceOut),


@@ -232,9 +232,9 @@ static void cryptodev_vhost_user_init(
backend->conf.max_auth_key_len = VHOST_USER_MAX_AUTH_KEY_LEN;
}
static int64_t cryptodev_vhost_user_crypto_create_session(
static int64_t cryptodev_vhost_user_sym_create_session(
CryptoDevBackend *backend,
CryptoDevBackendSessionInfo *sess_info,
CryptoDevBackendSymSessionInfo *sess_info,
uint32_t queue_index, Error **errp)
{
CryptoDevBackendClient *cc =
@@ -266,17 +266,18 @@ static int cryptodev_vhost_user_create_session(
void *opaque)
{
uint32_t op_code = sess_info->op_code;
CryptoDevBackendSymSessionInfo *sym_sess_info;
int64_t ret;
Error *local_error = NULL;
int status;
switch (op_code) {
case VIRTIO_CRYPTO_CIPHER_CREATE_SESSION:
case VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION:
case VIRTIO_CRYPTO_HASH_CREATE_SESSION:
case VIRTIO_CRYPTO_MAC_CREATE_SESSION:
case VIRTIO_CRYPTO_AEAD_CREATE_SESSION:
ret = cryptodev_vhost_user_crypto_create_session(backend, sess_info,
sym_sess_info = &sess_info->u.sym_sess_info;
ret = cryptodev_vhost_user_sym_create_session(backend, sym_sess_info,
queue_index, &local_error);
break;


@@ -191,11 +191,6 @@ static int cryptodev_backend_account(CryptoDevBackend *backend,
if (algtype == QCRYPTODEV_BACKEND_ALG_ASYM) {
CryptoDevBackendAsymOpInfo *asym_op_info = op_info->u.asym_op_info;
len = asym_op_info->src_len;
if (unlikely(!backend->asym_stat)) {
error_report("cryptodev: Unexpected asym operation");
return -VIRTIO_CRYPTO_NOTSUPP;
}
switch (op_info->op_code) {
case VIRTIO_CRYPTO_AKCIPHER_ENCRYPT:
CryptodevAsymStatIncEncrypt(backend, len);
@@ -215,11 +210,6 @@ static int cryptodev_backend_account(CryptoDevBackend *backend,
} else if (algtype == QCRYPTODEV_BACKEND_ALG_SYM) {
CryptoDevBackendSymOpInfo *sym_op_info = op_info->u.sym_op_info;
len = sym_op_info->src_len;
if (unlikely(!backend->sym_stat)) {
error_report("cryptodev: Unexpected sym operation");
return -VIRTIO_CRYPTO_NOTSUPP;
}
switch (op_info->op_code) {
case VIRTIO_CRYPTO_CIPHER_ENCRYPT:
CryptodevSymStatIncEncrypt(backend, len);
@@ -252,11 +242,10 @@ static void cryptodev_backend_throttle_timer_cb(void *opaque)
continue;
}
throttle_account(&backend->ts, THROTTLE_WRITE, ret);
throttle_account(&backend->ts, true, ret);
cryptodev_backend_operation(backend, op_info);
if (throttle_enabled(&backend->tc) &&
throttle_schedule_timer(&backend->ts, &backend->tt,
THROTTLE_WRITE)) {
throttle_schedule_timer(&backend->ts, &backend->tt, true)) {
break;
}
}
@@ -272,7 +261,7 @@ int cryptodev_backend_crypto_operation(
goto do_account;
}
if (throttle_schedule_timer(&backend->ts, &backend->tt, THROTTLE_WRITE) ||
if (throttle_schedule_timer(&backend->ts, &backend->tt, true) ||
!QTAILQ_EMPTY(&backend->opinfos)) {
QTAILQ_INSERT_TAIL(&backend->opinfos, op_info, next);
return 0;
@@ -284,7 +273,7 @@ do_account:
return ret;
}
throttle_account(&backend->ts, THROTTLE_WRITE, ret);
throttle_account(&backend->ts, true, ret);
return cryptodev_backend_operation(backend, op_info);
}
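These hunks switch the throttling helpers between a bool is_write argument and the ThrottleDirection enum. For reference, the enum as QEMU's include/qemu/throttle.h defines it, reproduced from memory here, so verify against your tree:

typedef enum {
    THROTTLE_READ = 0,
    THROTTLE_WRITE,
    THROTTLE_MAX,
} ThrottleDirection;

With the enum form, crypto requests are accounted explicitly as THROTTLE_WRITE rather than a bare true, which reads unambiguously at the call sites above.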
@@ -342,7 +331,8 @@ static void cryptodev_backend_set_throttle(CryptoDevBackend *backend, int field,
if (!enabled) {
throttle_init(&backend->ts);
throttle_timers_init(&backend->tt, qemu_get_aio_context(),
QEMU_CLOCK_REALTIME, NULL,
QEMU_CLOCK_REALTIME,
cryptodev_backend_throttle_timer_cb, /* FIXME */
cryptodev_backend_throttle_timer_cb, backend);
}
@@ -532,7 +522,7 @@ static int cryptodev_backend_stats_query(Object *obj, void *data)
entry = g_new0(StatsResult, 1);
entry->provider = STATS_PROVIDER_CRYPTODEV;
entry->qom_path = object_get_canonical_path(obj);
entry->qom_path = g_strdup(object_get_canonical_path(obj));
entry->stats = stats_list;
QAPI_LIST_PREPEND(*stats_results, entry);


@@ -426,7 +426,8 @@ dbus_vmstate_complete(UserCreatable *uc, Error **errp)
return;
}
if (vmstate_register_any(VMSTATE_IF(self), &dbus_vmstate, self) < 0) {
if (vmstate_register(VMSTATE_IF(self), VMSTATE_INSTANCE_ID_ANY,
&dbus_vmstate, self) < 0) {
error_setg(errp, "Failed to register vmstate");
}
}
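vmstate_register_any() is a convenience over vmstate_register() with VMSTATE_INSTANCE_ID_ANY, so both sides of this hunk register the same state. A sketch of the relationship, modeled on the call shape shown in the hunk rather than copied from the patch:

static inline int vmstate_register_any_sketch(VMStateIf *obj,
                                              const VMStateDescription *vmsd,
                                              void *opaque)
{
    /* "any" instance id: let the core pick a free one */
    return vmstate_register(obj, VMSTATE_INSTANCE_ID_ANY, vmsd, opaque);
}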


@@ -18,8 +18,6 @@
#include "sysemu/hostmem.h"
#include "qom/object_interfaces.h"
#include "qom/object.h"
#include "qapi/visitor.h"
#include "qapi/qapi-visit-common.h"
OBJECT_DECLARE_SIMPLE_TYPE(HostMemoryBackendFile, MEMORY_BACKEND_FILE)
@@ -33,7 +31,6 @@ struct HostMemoryBackendFile {
bool discard_data;
bool is_pmem;
bool readonly;
OnOffAuto rom;
};
static void
@@ -56,39 +53,14 @@ file_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
return;
}
switch (fb->rom) {
case ON_OFF_AUTO_AUTO:
/* Traditionally, opening the file readonly always resulted in ROM. */
fb->rom = fb->readonly ? ON_OFF_AUTO_ON : ON_OFF_AUTO_OFF;
break;
case ON_OFF_AUTO_ON:
if (!fb->readonly) {
error_setg(errp, "property 'rom' = 'on' is not supported with"
" 'readonly' = 'off'");
return;
}
break;
case ON_OFF_AUTO_OFF:
if (fb->readonly && backend->share) {
error_setg(errp, "property 'rom' = 'off' is incompatible with"
" 'readonly' = 'on' and 'share' = 'on'");
return;
}
break;
default:
assert(false);
}
name = host_memory_backend_get_name(backend);
ram_flags = backend->share ? RAM_SHARED : 0;
ram_flags |= fb->readonly ? RAM_READONLY_FD : 0;
ram_flags |= fb->rom == ON_OFF_AUTO_ON ? RAM_READONLY : 0;
ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
ram_flags |= fb->is_pmem ? RAM_PMEM : 0;
ram_flags |= RAM_NAMED_FILE;
memory_region_init_ram_from_file(&backend->mr, OBJECT(backend), name,
backend->size, fb->align, ram_flags,
fb->mem_path, fb->offset, errp);
fb->mem_path, fb->offset, fb->readonly,
errp);
g_free(name);
#endif
}
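Once the OnOffAuto resolution shown above has run, the property pair maps onto RAM_* flags as sketched below (flag names as in the hunk; the helper rom_ram_flags is illustrative):

static uint32_t rom_ram_flags(bool share, bool readonly, OnOffAuto rom)
{
    uint32_t flags = share ? RAM_SHARED : 0;

    flags |= readonly ? RAM_READONLY_FD : 0;           /* fd opened read-only */
    flags |= rom == ON_OFF_AUTO_ON ? RAM_READONLY : 0; /* guest sees ROM */
    return flags;
}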
@@ -228,32 +200,6 @@ static void file_memory_backend_set_readonly(Object *obj, bool value,
fb->readonly = value;
}
static void file_memory_backend_get_rom(Object *obj, Visitor *v,
const char *name, void *opaque,
Error **errp)
{
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(obj);
OnOffAuto rom = fb->rom;
visit_type_OnOffAuto(v, name, &rom, errp);
}
static void file_memory_backend_set_rom(Object *obj, Visitor *v,
const char *name, void *opaque,
Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(obj);
if (host_memory_backend_mr_inited(backend)) {
error_setg(errp, "cannot change property '%s' of %s.", name,
object_get_typename(obj));
return;
}
visit_type_OnOffAuto(v, name, &fb->rom, errp);
}
static void file_backend_unparent(Object *obj)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
@@ -296,10 +242,6 @@ file_backend_class_init(ObjectClass *oc, void *data)
object_class_property_add_bool(oc, "readonly",
file_memory_backend_get_readonly,
file_memory_backend_set_readonly);
object_class_property_add(oc, "rom", "OnOffAuto",
file_memory_backend_get_rom, file_memory_backend_set_rom, NULL, NULL);
object_class_property_set_description(oc, "rom",
"Whether to create Read Only Memory (ROM)");
}
static void file_backend_instance_finalize(Object *o)


@@ -1,4 +1,4 @@
system_ss.add([files(
softmmu_ss.add([files(
'cryptodev-builtin.c',
'cryptodev-hmp-cmds.c',
'cryptodev.c',
@@ -10,20 +10,20 @@ system_ss.add([files(
'confidential-guest-support.c',
), numa])
system_ss.add(when: 'CONFIG_POSIX', if_true: files('rng-random.c'))
system_ss.add(when: 'CONFIG_POSIX', if_true: files('hostmem-file.c'))
system_ss.add(when: 'CONFIG_LINUX', if_true: files('hostmem-memfd.c'))
softmmu_ss.add(when: 'CONFIG_POSIX', if_true: files('rng-random.c'))
softmmu_ss.add(when: 'CONFIG_POSIX', if_true: files('hostmem-file.c'))
softmmu_ss.add(when: 'CONFIG_LINUX', if_true: files('hostmem-memfd.c'))
if keyutils.found()
system_ss.add(keyutils, files('cryptodev-lkcf.c'))
softmmu_ss.add(keyutils, files('cryptodev-lkcf.c'))
endif
if have_vhost_user
system_ss.add(when: 'CONFIG_VIRTIO', if_true: files('vhost-user.c'))
softmmu_ss.add(when: 'CONFIG_VIRTIO', if_true: files('vhost-user.c'))
endif
system_ss.add(when: 'CONFIG_VIRTIO_CRYPTO', if_true: files('cryptodev-vhost.c'))
softmmu_ss.add(when: 'CONFIG_VIRTIO_CRYPTO', if_true: files('cryptodev-vhost.c'))
if have_vhost_user_crypto
system_ss.add(when: 'CONFIG_VIRTIO_CRYPTO', if_true: files('cryptodev-vhost-user.c'))
softmmu_ss.add(when: 'CONFIG_VIRTIO_CRYPTO', if_true: files('cryptodev-vhost-user.c'))
endif
system_ss.add(when: gio, if_true: files('dbus-vmstate.c'))
system_ss.add(when: 'CONFIG_SGX', if_true: files('hostmem-epc.c'))
softmmu_ss.add(when: gio, if_true: files('dbus-vmstate.c'))
softmmu_ss.add(when: 'CONFIG_SGX', if_true: files('hostmem-epc.c'))
subdir('tpm')


@@ -1,6 +1,6 @@
if have_tpm
system_ss.add(files('tpm_backend.c'))
system_ss.add(files('tpm_util.c'))
system_ss.add(when: 'CONFIG_TPM_PASSTHROUGH', if_true: files('tpm_passthrough.c'))
system_ss.add(when: 'CONFIG_TPM_EMULATOR', if_true: files('tpm_emulator.c'))
softmmu_ss.add(files('tpm_backend.c'))
softmmu_ss.add(files('tpm_util.c'))
softmmu_ss.add(when: 'CONFIG_TPM_PASSTHROUGH', if_true: files('tpm_passthrough.c'))
softmmu_ss.add(when: 'CONFIG_TPM_EMULATOR', if_true: files('tpm_emulator.c'))
endif


@@ -534,8 +534,11 @@ static int tpm_emulator_block_migration(TPMEmulator *tpm_emu)
error_setg(&tpm_emu->migration_blocker,
"Migration disabled: TPM emulator does not support "
"migration");
if (migrate_add_blocker(&tpm_emu->migration_blocker, &err) < 0) {
if (migrate_add_blocker(tpm_emu->migration_blocker, &err) < 0) {
error_report_err(err);
error_free(tpm_emu->migration_blocker);
tpm_emu->migration_blocker = NULL;
return -1;
}
}
@@ -975,7 +978,8 @@ static void tpm_emulator_inst_init(Object *obj)
qemu_add_vm_change_state_handler(tpm_emulator_vm_state_change,
tpm_emu);
vmstate_register_any(NULL, &vmstate_tpm_emulator, obj);
vmstate_register(NULL, VMSTATE_INSTANCE_ID_ANY,
&vmstate_tpm_emulator, obj);
}
/*
@@ -1012,7 +1016,10 @@ static void tpm_emulator_inst_finalize(Object *obj)
qapi_free_TPMEmulatorOptions(tpm_emu->options);
migrate_del_blocker(&tpm_emu->migration_blocker);
if (tpm_emu->migration_blocker) {
migrate_del_blocker(tpm_emu->migration_blocker);
error_free(tpm_emu->migration_blocker);
}
tpm_sized_buffer_reset(&state_blobs->volatil);
tpm_sized_buffer_reset(&state_blobs->permanent);


@@ -238,7 +238,7 @@ struct ptm_lockstorage {
} req; /* request */
struct {
ptm_res tpm_result;
} resp; /* response */
} resp; /* reponse */
} u;
};


@@ -112,8 +112,12 @@ static int tpm_util_request(int fd,
void *response,
size_t responselen)
{
GPollFD fds[1] = { {.fd = fd, .events = G_IO_IN } };
fd_set readfds;
int n;
struct timeval tv = {
.tv_sec = 1,
.tv_usec = 0,
};
n = write(fd, request, requestlen);
if (n < 0) {
@@ -123,8 +127,11 @@ static int tpm_util_request(int fd,
return -EFAULT;
}
FD_ZERO(&readfds);
FD_SET(fd, &readfds);
/* wait for a second */
n = RETRY_ON_EINTR(g_poll(fds, 1, 1000));
n = select(fd + 1, &readfds, NULL, NULL, &tv);
if (n != 1) {
return -errno;
}
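One side of this hunk waits with select() and a struct timeval, the other with g_poll() wrapped in RETRY_ON_EINTR. A standalone sketch of the g_poll form with a plain retry loop in place of the QEMU macro (wait_readable is illustrative):

#include <errno.h>
#include <glib.h>

/* Wait up to one second for fd to become readable; retry if interrupted. */
static int wait_readable(int fd)
{
    GPollFD pfd = { .fd = fd, .events = G_IO_IN };
    int n;

    do {
        n = g_poll(&pfd, 1, 1000 /* ms */);
    } while (n < 0 && errno == EINTR);
    return n == 1 ? 0 : -1;
}

Unlike select(), g_poll() is not limited to descriptors below FD_SETSIZE.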

block.c: 633 changes; file diff suppressed because it is too large.


@@ -384,33 +384,31 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
return NULL;
}
bdrv_graph_rdlock_main_loop();
if (!bdrv_is_inserted(bs)) {
error_setg(errp, "Device is not inserted: %s",
bdrv_get_device_name(bs));
goto error_rdlock;
return NULL;
}
if (!bdrv_is_inserted(target)) {
error_setg(errp, "Device is not inserted: %s",
bdrv_get_device_name(target));
goto error_rdlock;
return NULL;
}
if (compress && !bdrv_supports_compressed_writes(target)) {
error_setg(errp, "Compression is not supported for this drive %s",
bdrv_get_device_name(target));
goto error_rdlock;
return NULL;
}
if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) {
goto error_rdlock;
return NULL;
}
if (bdrv_op_is_blocked(target, BLOCK_OP_TYPE_BACKUP_TARGET, errp)) {
goto error_rdlock;
return NULL;
}
bdrv_graph_rdunlock_main_loop();
if (perf->max_workers < 1 || perf->max_workers > INT_MAX) {
error_setg(errp, "max-workers must be between 1 and %d", INT_MAX);
@@ -438,7 +436,6 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
len = bdrv_getlength(bs);
if (len < 0) {
GRAPH_RDLOCK_GUARD_MAINLOOP();
error_setg_errno(errp, -len, "Unable to get length for '%s'",
bdrv_get_device_or_node_name(bs));
goto error;
@@ -446,7 +443,6 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
target_len = bdrv_getlength(target);
if (target_len < 0) {
GRAPH_RDLOCK_GUARD_MAINLOOP();
error_setg_errno(errp, -target_len, "Unable to get length for '%s'",
bdrv_get_device_or_node_name(bs));
goto error;
@@ -496,10 +492,8 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
block_copy_set_speed(bcs, speed);
/* Required permissions are taken by copy-before-write filter target */
bdrv_graph_wrlock(target);
block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
&error_abort);
bdrv_graph_wrunlock();
return &job->common;
@@ -512,8 +506,4 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
}
return NULL;
error_rdlock:
bdrv_graph_rdunlock_main_loop();
return NULL;
}
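The error_rdlock label in this hunk exists so every early exit drops the graph read lock; returning NULL directly while the lock is held would leak it. The shape of the pattern, with checks_pass and create_job as placeholders:

static void *locked_create(BlockDriverState *bs, Error **errp)
{
    bdrv_graph_rdlock_main_loop();
    if (!checks_pass(bs, errp)) {
        goto error_rdlock;              /* unlock on every early exit */
    }
    bdrv_graph_rdunlock_main_loop();

    return create_job(bs, errp);

error_rdlock:
    bdrv_graph_rdunlock_main_loop();
    return NULL;
}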


@@ -508,8 +508,6 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
goto out;
}
bdrv_graph_rdlock_main_loop();
bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED |
(BDRV_REQ_FUA & bs->file->bs->supported_write_flags);
bs->supported_zero_flags = BDRV_REQ_WRITE_UNCHANGED |
@@ -522,7 +520,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
if (s->align && (s->align >= INT_MAX || !is_power_of_2(s->align))) {
error_setg(errp, "Cannot meet constraints with align %" PRIu64,
s->align);
goto out_rdlock;
goto out;
}
align = MAX(s->align, bs->file->bs->bl.request_alignment);
@@ -532,7 +530,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
!QEMU_IS_ALIGNED(s->max_transfer, align))) {
error_setg(errp, "Cannot meet constraints with max-transfer %" PRIu64,
s->max_transfer);
goto out_rdlock;
goto out;
}
s->opt_write_zero = qemu_opt_get_size(opts, "opt-write-zero", 0);
@@ -541,7 +539,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
!QEMU_IS_ALIGNED(s->opt_write_zero, align))) {
error_setg(errp, "Cannot meet constraints with opt-write-zero %" PRIu64,
s->opt_write_zero);
goto out_rdlock;
goto out;
}
s->max_write_zero = qemu_opt_get_size(opts, "max-write-zero", 0);
@@ -551,7 +549,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
MAX(s->opt_write_zero, align)))) {
error_setg(errp, "Cannot meet constraints with max-write-zero %" PRIu64,
s->max_write_zero);
goto out_rdlock;
goto out;
}
s->opt_discard = qemu_opt_get_size(opts, "opt-discard", 0);
@@ -560,7 +558,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
!QEMU_IS_ALIGNED(s->opt_discard, align))) {
error_setg(errp, "Cannot meet constraints with opt-discard %" PRIu64,
s->opt_discard);
goto out_rdlock;
goto out;
}
s->max_discard = qemu_opt_get_size(opts, "max-discard", 0);
@@ -570,14 +568,12 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
MAX(s->opt_discard, align)))) {
error_setg(errp, "Cannot meet constraints with max-discard %" PRIu64,
s->max_discard);
goto out_rdlock;
goto out;
}
bdrv_debug_event(bs, BLKDBG_NONE);
ret = 0;
out_rdlock:
bdrv_graph_rdunlock_main_loop();
out:
if (ret < 0) {
qemu_mutex_destroy(&s->lock);
@@ -750,10 +746,13 @@ blkdebug_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
return bdrv_co_pdiscard(bs->file, offset, bytes);
}
static int coroutine_fn GRAPH_RDLOCK
blkdebug_co_block_status(BlockDriverState *bs, bool want_zero, int64_t offset,
int64_t bytes, int64_t *pnum, int64_t *map,
BlockDriverState **file)
static int coroutine_fn blkdebug_co_block_status(BlockDriverState *bs,
bool want_zero,
int64_t offset,
int64_t bytes,
int64_t *pnum,
int64_t *map,
BlockDriverState **file)
{
int err;
@@ -974,7 +973,7 @@ blkdebug_co_getlength(BlockDriverState *bs)
return bdrv_co_getlength(bs->file->bs);
}
static void GRAPH_RDLOCK blkdebug_refresh_filename(BlockDriverState *bs)
static void blkdebug_refresh_filename(BlockDriverState *bs)
{
BDRVBlkdebugState *s = bs->opaque;
const QDictEntry *e;


@@ -13,7 +13,6 @@
#include "block/block_int.h"
#include "exec/memory.h"
#include "exec/cpu-common.h" /* for qemu_ram_get_fd() */
#include "qemu/defer-call.h"
#include "qapi/error.h"
#include "qemu/error-report.h"
#include "qapi/qmp/qdict.h"
@@ -23,6 +22,16 @@
#include "block/block-io.h"
/*
* Keep the QEMU BlockDriver names identical to the libblkio driver names.
* Using macros instead of typing out the string literals avoids typos.
*/
#define DRIVER_IO_URING "io_uring"
#define DRIVER_NVME_IO_URING "nvme-io_uring"
#define DRIVER_VIRTIO_BLK_VFIO_PCI "virtio-blk-vfio-pci"
#define DRIVER_VIRTIO_BLK_VHOST_USER "virtio-blk-vhost-user"
#define DRIVER_VIRTIO_BLK_VHOST_VDPA "virtio-blk-vhost-vdpa"
/*
* Allocated bounce buffers are kept in a list sorted by buffer address.
*/
@@ -313,10 +322,10 @@ static void blkio_detach_aio_context(BlockDriverState *bs)
}
/*
* Called by defer_call_end() or immediately if not in a deferred section.
* Called without blkio_lock.
* Called by blk_io_unplug() or immediately if not plugged. Called without
* blkio_lock.
*/
static void blkio_deferred_fn(void *opaque)
static void blkio_unplug_fn(void *opaque)
{
BDRVBlkioState *s = opaque;
@@ -333,7 +342,7 @@ static void blkio_submit_io(BlockDriverState *bs)
{
BDRVBlkioState *s = bs->opaque;
defer_call(blkio_deferred_fn, s);
blk_io_plug_call(blkio_unplug_fn, s);
}
static int coroutine_fn
@@ -604,8 +613,8 @@ static void blkio_unregister_buf(BlockDriverState *bs, void *host, size_t size)
}
}
static int blkio_io_uring_connect(BlockDriverState *bs, QDict *options,
int flags, Error **errp)
static int blkio_io_uring_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
const char *filename = qdict_get_str(options, "filename");
BDRVBlkioState *s = bs->opaque;
@@ -628,18 +637,11 @@ static int blkio_io_uring_connect(BlockDriverState *bs, QDict *options,
}
}
ret = blkio_connect(s->blkio);
if (ret < 0) {
error_setg_errno(errp, -ret, "blkio_connect failed: %s",
blkio_get_error_msg());
return ret;
}
return 0;
}
static int blkio_nvme_io_uring_connect(BlockDriverState *bs, QDict *options,
int flags, Error **errp)
static int blkio_nvme_io_uring(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
const char *path = qdict_get_try_str(options, "path");
BDRVBlkioState *s = bs->opaque;
@@ -663,23 +665,16 @@ static int blkio_nvme_io_uring_connect(BlockDriverState *bs, QDict *options,
return -EINVAL;
}
ret = blkio_connect(s->blkio);
if (ret < 0) {
error_setg_errno(errp, -ret, "blkio_connect failed: %s",
blkio_get_error_msg());
return ret;
}
return 0;
}
static int blkio_virtio_blk_connect(BlockDriverState *bs, QDict *options,
int flags, Error **errp)
static int blkio_virtio_blk_common_open(BlockDriverState *bs,
QDict *options, int flags, Error **errp)
{
const char *path = qdict_get_try_str(options, "path");
BDRVBlkioState *s = bs->opaque;
bool fd_supported = false;
int fd = -1, ret;
int fd, ret;
if (!path) {
error_setg(errp, "missing 'path' option");
@@ -691,7 +686,7 @@ static int blkio_virtio_blk_connect(BlockDriverState *bs, QDict *options,
return -EINVAL;
}
if (blkio_set_int(s->blkio, "fd", -1) == 0) {
if (blkio_get_int(s->blkio, "fd", &fd) == 0) {
fd_supported = true;
}
@@ -701,86 +696,33 @@ static int blkio_virtio_blk_connect(BlockDriverState *bs, QDict *options,
* layer through the "/dev/fdset/N" special path.
*/
if (fd_supported) {
/*
* `path` can contain the path of a character device
* (e.g. /dev/vhost-vdpa-0 or /dev/vfio/vfio) or a unix socket.
*
* So, we should always open it with O_RDWR flag, also if BDRV_O_RDWR
* is not set in the open flags, because the exchange of IOCTL commands
* for example will fail.
*
* In order to open the device read-only, we are using the `read-only`
* property of the libblkio driver in blkio_file_open().
*/
fd = qemu_open(path, O_RDWR, NULL);
if (fd < 0) {
/*
* qemu_open() can fail if the user specifies a path that is not
* a file or device, for example in the case of Unix Domain Socket
* for the virtio-blk-vhost-user driver. In such cases let's have
* libblkio open the path directly.
*/
fd_supported = false;
int open_flags;
if (flags & BDRV_O_RDWR) {
open_flags = O_RDWR;
} else {
ret = blkio_set_int(s->blkio, "fd", fd);
if (ret < 0) {
fd_supported = false;
qemu_close(fd);
fd = -1;
}
open_flags = O_RDONLY;
}
}
if (!fd_supported) {
ret = blkio_set_str(s->blkio, "path", path);
if (ret < 0) {
error_setg_errno(errp, -ret, "failed to set path: %s",
blkio_get_error_msg());
return ret;
fd = qemu_open(path, open_flags, errp);
if (fd < 0) {
return -EINVAL;
}
}
ret = blkio_connect(s->blkio);
if (ret < 0 && fd >= 0) {
/* Failed to give the FD to libblkio, close it */
qemu_close(fd);
fd = -1;
}
/*
* Before https://gitlab.com/libblkio/libblkio/-/merge_requests/208
* (libblkio <= v1.3.0), setting the `fd` property is not enough to check
* whether the driver supports the `fd` property or not. In that case,
* blkio_connect() will fail with -EINVAL.
* So let's try calling blkio_connect() again by directly setting `path`
* to cover this scenario.
*/
if (fd_supported && ret == -EINVAL) {
/*
* We need to clear the `fd` property we set previously by setting
* it to -1.
*/
ret = blkio_set_int(s->blkio, "fd", -1);
ret = blkio_set_int(s->blkio, "fd", fd);
if (ret < 0) {
error_setg_errno(errp, -ret, "failed to set fd: %s",
blkio_get_error_msg());
qemu_close(fd);
return ret;
}
} else {
ret = blkio_set_str(s->blkio, "path", path);
if (ret < 0) {
error_setg_errno(errp, -ret, "failed to set path: %s",
blkio_get_error_msg());
return ret;
}
ret = blkio_connect(s->blkio);
}
if (ret < 0) {
error_setg_errno(errp, -ret, "blkio_connect failed: %s",
blkio_get_error_msg());
return ret;
}
qdict_del(options, "path");
@@ -802,6 +744,24 @@ static int blkio_file_open(BlockDriverState *bs, QDict *options, int flags,
return ret;
}
if (strcmp(blkio_driver, DRIVER_IO_URING) == 0) {
ret = blkio_io_uring_open(bs, options, flags, errp);
} else if (strcmp(blkio_driver, DRIVER_NVME_IO_URING) == 0) {
ret = blkio_nvme_io_uring(bs, options, flags, errp);
} else if (strcmp(blkio_driver, DRIVER_VIRTIO_BLK_VFIO_PCI) == 0) {
ret = blkio_virtio_blk_common_open(bs, options, flags, errp);
} else if (strcmp(blkio_driver, DRIVER_VIRTIO_BLK_VHOST_USER) == 0) {
ret = blkio_virtio_blk_common_open(bs, options, flags, errp);
} else if (strcmp(blkio_driver, DRIVER_VIRTIO_BLK_VHOST_VDPA) == 0) {
ret = blkio_virtio_blk_common_open(bs, options, flags, errp);
} else {
g_assert_not_reached();
}
if (ret < 0) {
blkio_destroy(&s->blkio);
return ret;
}
if (!(flags & BDRV_O_RDWR)) {
ret = blkio_set_bool(s->blkio, "read-only", true);
if (ret < 0) {
@@ -812,20 +772,10 @@ static int blkio_file_open(BlockDriverState *bs, QDict *options, int flags,
}
}
if (strcmp(blkio_driver, "io_uring") == 0) {
ret = blkio_io_uring_connect(bs, options, flags, errp);
} else if (strcmp(blkio_driver, "nvme-io_uring") == 0) {
ret = blkio_nvme_io_uring_connect(bs, options, flags, errp);
} else if (strcmp(blkio_driver, "virtio-blk-vfio-pci") == 0) {
ret = blkio_virtio_blk_connect(bs, options, flags, errp);
} else if (strcmp(blkio_driver, "virtio-blk-vhost-user") == 0) {
ret = blkio_virtio_blk_connect(bs, options, flags, errp);
} else if (strcmp(blkio_driver, "virtio-blk-vhost-vdpa") == 0) {
ret = blkio_virtio_blk_connect(bs, options, flags, errp);
} else {
g_assert_not_reached();
}
ret = blkio_connect(s->blkio);
if (ret < 0) {
error_setg_errno(errp, -ret, "blkio_connect failed: %s",
blkio_get_error_msg());
blkio_destroy(&s->blkio);
return ret;
}
@@ -905,7 +855,6 @@ static int blkio_file_open(BlockDriverState *bs, QDict *options, int flags,
QLIST_INIT(&s->bounce_bufs);
s->blkioq = blkio_get_queue(s->blkio, 0);
s->completion_fd = blkioq_get_completion_fd(s->blkioq);
blkioq_set_completion_fd_enabled(s->blkioq, true);
blkio_attach_aio_context(bs, bdrv_get_aio_context(bs));
return 0;
@@ -1079,63 +1028,49 @@ static void blkio_refresh_limits(BlockDriverState *bs, Error **errp)
* - truncate
*/
/*
* Do not include .format_name and .protocol_name because module_block.py
* does not parse macros in the source code.
*/
#define BLKIO_DRIVER_COMMON \
.instance_size = sizeof(BDRVBlkioState), \
.bdrv_file_open = blkio_file_open, \
.bdrv_close = blkio_close, \
.bdrv_co_getlength = blkio_co_getlength, \
.bdrv_co_truncate = blkio_truncate, \
.bdrv_co_get_info = blkio_co_get_info, \
.bdrv_attach_aio_context = blkio_attach_aio_context, \
.bdrv_detach_aio_context = blkio_detach_aio_context, \
.bdrv_co_pdiscard = blkio_co_pdiscard, \
.bdrv_co_preadv = blkio_co_preadv, \
.bdrv_co_pwritev = blkio_co_pwritev, \
.bdrv_co_flush_to_disk = blkio_co_flush, \
.bdrv_co_pwrite_zeroes = blkio_co_pwrite_zeroes, \
.bdrv_refresh_limits = blkio_refresh_limits, \
.bdrv_register_buf = blkio_register_buf, \
.bdrv_unregister_buf = blkio_unregister_buf,
#define BLKIO_DRIVER(name, ...) \
{ \
.format_name = name, \
.protocol_name = name, \
.instance_size = sizeof(BDRVBlkioState), \
.bdrv_file_open = blkio_file_open, \
.bdrv_close = blkio_close, \
.bdrv_co_getlength = blkio_co_getlength, \
.bdrv_co_truncate = blkio_truncate, \
.bdrv_co_get_info = blkio_co_get_info, \
.bdrv_attach_aio_context = blkio_attach_aio_context, \
.bdrv_detach_aio_context = blkio_detach_aio_context, \
.bdrv_co_pdiscard = blkio_co_pdiscard, \
.bdrv_co_preadv = blkio_co_preadv, \
.bdrv_co_pwritev = blkio_co_pwritev, \
.bdrv_co_flush_to_disk = blkio_co_flush, \
.bdrv_co_pwrite_zeroes = blkio_co_pwrite_zeroes, \
.bdrv_refresh_limits = blkio_refresh_limits, \
.bdrv_register_buf = blkio_register_buf, \
.bdrv_unregister_buf = blkio_unregister_buf, \
__VA_ARGS__ \
}
/*
* Use the same .format_name and .protocol_name as the libblkio driver name for
* consistency.
*/
static BlockDriver bdrv_io_uring = {
.format_name = "io_uring",
.protocol_name = "io_uring",
static BlockDriver bdrv_io_uring = BLKIO_DRIVER(
DRIVER_IO_URING,
.bdrv_needs_filename = true,
BLKIO_DRIVER_COMMON
};
);
static BlockDriver bdrv_nvme_io_uring = {
.format_name = "nvme-io_uring",
.protocol_name = "nvme-io_uring",
BLKIO_DRIVER_COMMON
};
static BlockDriver bdrv_nvme_io_uring = BLKIO_DRIVER(
DRIVER_NVME_IO_URING,
);
static BlockDriver bdrv_virtio_blk_vfio_pci = {
.format_name = "virtio-blk-vfio-pci",
.protocol_name = "virtio-blk-vfio-pci",
BLKIO_DRIVER_COMMON
};
static BlockDriver bdrv_virtio_blk_vfio_pci = BLKIO_DRIVER(
DRIVER_VIRTIO_BLK_VFIO_PCI
);
static BlockDriver bdrv_virtio_blk_vhost_user = {
.format_name = "virtio-blk-vhost-user",
.protocol_name = "virtio-blk-vhost-user",
BLKIO_DRIVER_COMMON
};
static BlockDriver bdrv_virtio_blk_vhost_user = BLKIO_DRIVER(
DRIVER_VIRTIO_BLK_VHOST_USER
);
static BlockDriver bdrv_virtio_blk_vhost_vdpa = {
.format_name = "virtio-blk-vhost-vdpa",
.protocol_name = "virtio-blk-vhost-vdpa",
BLKIO_DRIVER_COMMON
};
static BlockDriver bdrv_virtio_blk_vhost_vdpa = BLKIO_DRIVER(
DRIVER_VIRTIO_BLK_VHOST_VDPA
);
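The two shapes shown above are a plain #define of the shared designated initializers versus a function-like macro taking __VA_ARGS__ for per-driver extras. A toy model of the variadic style, outside QEMU (DRIVER and Drv are illustrative):

#include <stdbool.h>

#define DRIVER(name, ...) \
    { .format_name = name, .protocol_name = name, __VA_ARGS__ }

typedef struct {
    const char *format_name;
    const char *protocol_name;
    bool needs_filename;
} Drv;

/* extra fields ride in through the variadic part */
static Drv io_uring_drv = DRIVER("io_uring", .needs_filename = true);
static Drv nvme_drv     = DRIVER("nvme-io_uring", .needs_filename = false);

Per the comment in the hunk, the non-macro side keeps .format_name and .protocol_name written out literally because module_block.py scans the source for driver names and does not expand macros.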
static void bdrv_blkio_init(void)
{


@@ -251,9 +251,7 @@ static int blk_log_writes_open(BlockDriverState *bs, QDict *options, int flags,
ret = 0;
fail_log:
if (ret < 0) {
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, s->log_file);
bdrv_graph_wrunlock();
s->log_file = NULL;
}
fail:
@@ -265,10 +263,8 @@ static void blk_log_writes_close(BlockDriverState *bs)
{
BDRVBlkLogWritesState *s = bs->opaque;
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, s->log_file);
s->log_file = NULL;
bdrv_graph_wrunlock();
}
static int64_t coroutine_fn GRAPH_RDLOCK


@@ -130,13 +130,7 @@ static int coroutine_fn GRAPH_RDLOCK blkreplay_co_flush(BlockDriverState *bs)
static int blkreplay_snapshot_goto(BlockDriverState *bs,
const char *snapshot_id)
{
BlockDriverState *file_bs;
bdrv_graph_rdlock_main_loop();
file_bs = bs->file->bs;
bdrv_graph_rdunlock_main_loop();
return bdrv_snapshot_goto(file_bs, snapshot_id, NULL);
return bdrv_snapshot_goto(bs->file->bs, snapshot_id, NULL);
}
static BlockDriver bdrv_blkreplay = {

Some files were not shown because too many files have changed in this diff.