Compare commits

..

9 Commits

Author SHA1 Message Date
Fabiano Rosas
6424d5b3df migration/multifd: Extract sem_done waiting into a function
This helps document the intent of the loop via the function name, and
we can reuse the helper in the future.
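
As an illustration only (not part of the patch), here is a minimal
sketch of what such an extracted wait helper could look like, using
POSIX semaphores as a stand-in for QEMU's QemuSemaphore; the name
multifd_send_wait() and the channel count below are hypothetical:

    #include <semaphore.h>

    #define NUM_CHANNELS 4                 /* stand-in for the multifd channel count */

    static sem_t sem_done[NUM_CHANNELS];   /* one "done" semaphore per send channel */

    /* Wait until every send channel has posted its "done" semaphore. */
    static void multifd_send_wait(void)
    {
        for (int i = 0; i < NUM_CHANNELS; i++) {
            sem_wait(&sem_done[i]);
        }
    }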

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-03 11:31:14 -03:00
Fabiano Rosas
5494d69c58 migration/multifd: Decouple control flow from the SYNC packet
We currently have the sem_sync semaphore that is used:

1) on the sending side, to know when the multifd_send_thread has
   finished sending the MULTIFD_FLAG_SYNC packet;

  This is unnecessary. Multifd sends packets (not pages) one by one
  and completion is already bound by both the channels_ready and sem
  semaphores. The SYNC packet has nothing special that would require
  it to have a separate semaphore on the sending side.

2) on the receiving side, to know when the multifd_recv_thread has
   finished receiving the MULTIFD_FLAG_SYNC packet;

  This is unnecessary because the multifd_recv_state->sem_sync
  semaphore already does the same thing. We care that the SYNC arrived
  from the source; knowing that the SYNC has been received by the recv
  thread doesn't add anything.

3) on both sending and receiving sides, to wait for the multifd threads
   to finish before cleaning up;

   This happens because multifd_send_sync_main() blocks
   ram_save_complete() from finishing until the semaphore is
   posted. This is surprising and not documented.

Clarify the above situation by renaming 'sem_sync' to 'sem_done' and
making the #3 usage the main one. Stop tracking the SYNC packet on
source (#1) and leave multifd_recv_state->sem_sync untouched on the
destination (#2).

Due to the 'channels_ready' and 'sem' semaphores, we always send
packets in lockstep with switching MultiFDSendParams, so
p->pending_job is always either 1 or 0. The thread has no knowledge of
whether it will have more to send once it posts to channels_ready, so
let it run one extra loop iteration in which it sees no pending_job
and releases the semaphore (a simplified sketch of this loop follows).
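
Below is a simplified sketch of the send loop described above, with
POSIX primitives standing in for QEMU's qemu_sem_* helpers and a
cut-down channel structure. The field names mirror the commit text,
but the code is illustrative, not the actual multifd implementation:

    #include <semaphore.h>
    #include <stdbool.h>

    struct send_channel {
        sem_t sem;          /* posted by the main thread when work is queued */
        sem_t sem_done;     /* posted by this thread when it has drained its work */
        int pending_job;    /* always 0 or 1, per the lockstep described above */
        bool quit;
    };

    static sem_t channels_ready;            /* global "some channel is free" semaphore */

    static void *send_thread(void *opaque)
    {
        struct send_channel *p = opaque;

        while (!p->quit) {
            sem_wait(&p->sem);              /* wait for queued work or a wakeup */
            if (p->pending_job) {
                /* ... send one packet here ... */
                p->pending_job = 0;
                sem_post(&channels_ready);  /* tell the main thread this channel is free */
            } else {
                /* Extra iteration: no pending_job left, signal completion. */
                sem_post(&p->sem_done);
            }
        }
        return NULL;
    }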

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-03 11:31:13 -03:00
Fabiano Rosas
374cb846eb migration/multifd: Centralize multifd_send thread release actions
When the multifd_send thread finishes or when an error is detected and
we want it to finish, there are some actions that need to be taken:

- The 'quit' variable should be set. It is used both as a signal for
  the thread to end and as an indication that the thread has already
  ended.

- The channels_ready and sem_sync semaphores need to be released. The
  main thread might be waiting to send another packet or waiting for
  the confirmation that the SYNC packet has been sent. If an error
  occurred, the multifd_send thread might not be able to send more
  packets or send the SYNC packet. The main thread should be released
  so we can do cleanup.

These two actions need to occur in this order because the side queuing
the packets always checks for p->quit after it is allowed to continue.

There are a few moments where we want to perform these actions, so
extract that code into a function to be reused (a sketch of such a
helper is shown below).
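
The ordering constraint above (set 'quit' first, then release the
waiters) could be captured in a small helper along these lines; this
reuses the hypothetical send_channel/channels_ready sketch from
earlier and is not the actual multifd code:

    static void send_channel_release(struct send_channel *p)
    {
        p->quit = true;              /* 1) mark the thread as ending/ended */
        sem_post(&channels_ready);   /* 2) unblock the side queuing packets ... */
        sem_post(&p->sem_done);      /* ... and anyone waiting for completion */
    }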

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-03 11:31:13 -03:00
Fabiano Rosas
c2866a080d migration/multifd: Clarify Error usage in multifd_channel_connect
The function is currently called from two sites: one always gives it a
NULL Error and the other always gives it a non-NULL Error.

In the non-NULL case, all it does is trace the error and return. One
of the callers already has tracing; add a tracepoint to the other and
stop passing the error into the function.
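
A hedged before/after sketch of that pattern, with a plain fprintf()
standing in for the tracepoint and hypothetical function names; only
the shape (trace at the call site, no Error parameter on the helper)
reflects the commit:

    #include <stdio.h>

    /* After: the connect helper no longer takes an Error. */
    static void channel_connect(struct send_channel *p)
    {
        /* ... set up the outgoing channel ... */
        (void)p;
    }

    /* The caller that used to pass a non-NULL Error now traces and bails out. */
    static void new_channel_ready(struct send_channel *p, const char *errstr)
    {
        if (errstr) {
            fprintf(stderr, "multifd channel error: %s\n", errstr); /* tracepoint stand-in */
            return;
        }
        channel_connect(p);
    }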

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-03 11:31:13 -03:00
Fabiano Rosas
375c45671e migration/multifd: Set p->quit before releasing channels_ready
All waiters of channels_ready check p->quit shortly after being
released. We need to set the variable before posting the semaphore.

We have probably never seen any issue here because this is a "not even
started" error case, which is very unlikely to happen.

The other place that releases the channels_ready semaphore is
multifd_send_thread(), which already sets p->quit before the post
(via multifd_send_terminate_threads).
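
In the hypothetical sketch from earlier, the ordering requirement
looks like this: the flag has to be set before the post, because the
waiter checks it right after waking up:

    /* Releasing side: set the flag first, then wake the waiter. */
    static void channel_abort(struct send_channel *p)
    {
        p->quit = true;
        sem_post(&channels_ready);
    }

    /* Waiting side: the check happens shortly after being released. */
    static bool channel_claim(struct send_channel *p)
    {
        sem_wait(&channels_ready);
        if (p->quit) {
            return false;            /* bail out instead of queuing more work */
        }
        return true;
    }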

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-03 11:31:13 -03:00
Fabiano Rosas
39875ab892 migration/multifd: Move error handling back into the multifd thread
multifd_send_terminate_threads() is doing double duty: terminating
the threads and setting the migration error and state. Clean it up.

This will allow for further simplification of the multifd thread
cleanup path in the following patches.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-03 11:31:13 -03:00
Fabiano Rosas
4a2c3ad4fa migration/multifd: Unify multifd_send_thread error paths
The preferred usage of the Error type is to always set both the return
code and the error when a failure happens. As all code called from the
send thread follows this pattern, we'll always have the return code
and the error set at the same time.

Aside from the convention, in this piece of code this must be the
case; otherwise the if (ret != 0) check would exit the thread without
calling multifd_send_terminate_threads(), which is incorrect.

Unify both paths to make it clear that both are taken when there's an
error.
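
An illustrative sketch of such a unified exit path, with hypothetical
helpers send_one_packet() and terminate_threads() standing in for the
real code; the point is only that a failure always sets both the
return code and the error, and always goes through the same
termination call:

    #include <stdio.h>

    static int send_one_packet(struct send_channel *p, const char **errp)
    {
        /* On failure the real code sets both the return value and the error. */
        (void)p;
        *errp = NULL;
        return 0;
    }

    static void terminate_threads(const char *err)
    {
        fprintf(stderr, "terminating multifd threads: %s\n", err);
    }

    static void *send_thread_body(struct send_channel *p)
    {
        const char *err = NULL;

        while (!p->quit) {
            if (send_one_packet(p, &err) != 0) {
                goto out;                 /* single error path ... */
            }
        }
    out:
        if (err) {
            terminate_threads(err);       /* ... always ends up here on failure */
        }
        return NULL;
    }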

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-03 11:31:13 -03:00
Fabiano Rosas
f2accad0c8 migration/multifd: Remove channels_ready semaphore
The channels_ready semaphore is a global variable not linked to any
single multifd channel. Waiting on it only means that "some" channel
has become ready to send data. Since we need to address the channels
by index (multifd_send_state->params[i]), that information adds
nothing of value. The channel being addressed is not necessarily the
one that just released the semaphore.

The only usage of this semaphore that makes sense is to wait for it in
a loop that iterates over the number of channels. That could mean: all
channels have been set up and are operational OR all channels have
finished their work and are idle.

Currently all code that waits on channels_ready is redundant. There is
always a subsequent lock or semaphore that does the actual data
protection/synchronization.

- at multifd_send_pages: Waiting on channels_ready doesn't mean the
  'next_channel' is ready; it could be any other channel. So there are
  already cases where this code runs as if no semaphore was there.

  Waiting outside of the loop is also incorrect because if the current
  channel already has a pending_job, then it will loop into the next
  one without waiting on the semaphore, and the count will be greater
  than zero at the end of the execution.

  Checking that "any" channel is ready as a proxy for all channels
  being ready would work, but it's not what the code is doing and not
  really needed because the channel lock and 'sem' would be enough.

- at multifd_send_sync: This usage is correct, but it is made
  redundant by the wait on sem_sync. What this piece of code is doing
  is making sure all channels have sent the SYNC packet and become
  idle afterwards (see the sketch below).
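
A sketch of that sync loop, again using the hypothetical send_channel
structure from the earlier sketches; waiting once per channel on the
per-channel done/sync semaphore already guarantees every channel has
sent its SYNC and gone idle, which is why the extra channels_ready
wait adds nothing:

    static void send_sync_all(struct send_channel *channels, int n)
    {
        for (int i = 0; i < n; i++) {
            channels[i].pending_job = 1;       /* queue the SYNC packet for channel i */
            sem_post(&channels[i].sem);
        }
        for (int i = 0; i < n; i++) {
            sem_wait(&channels[i].sem_done);   /* per-channel wait is already enough */
        }
    }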

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-02 19:33:06 -03:00
Fabiano Rosas
b8d09729d9 migration/multifd: Remove direct "socket" references
We're about to enable support for other transports in multifd, so
remove direct references to sockets.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-02 11:17:29 -03:00
4340 changed files with 143157 additions and 234393 deletions

View File

@@ -24,10 +24,6 @@ variables:
# Each script line from will be in a collapsible section in the job output
# and show the duration of each line.
FF_SCRIPT_SECTIONS: 1
# The project has a fairly fat GIT repo so we try and avoid bringing in things
# we don't need. The --filter options avoid blobs and tree references we aren't going to use
# and we also avoid fetching tags.
GIT_FETCH_EXTRA_FLAGS: --filter=blob:none --filter=tree:0 --no-tags --prune --quiet
interruptible: true
@@ -45,10 +41,6 @@ variables:
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_TAG'
when: never
# Scheduled runs on mainline don't get pipelines except for the special Coverity job
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_PIPELINE_SOURCE == "schedule"'
when: never
# Cirrus jobs can't run unless the creds / target repo are set
- if: '$QEMU_JOB_CIRRUS && ($CIRRUS_GITHUB_REPO == null || $CIRRUS_API_TOKEN == null)'
when: never

View File

@@ -9,13 +9,11 @@
when: always
before_script:
- JOBS=$(expr $(nproc) + 1)
- cat /packages.txt
script:
- export CCACHE_BASEDIR="$(pwd)"
- export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
- export CCACHE_MAXSIZE="500M"
- export PATH="$CCACHE_WRAPPERSDIR:$PATH"
- du -sh .git
- mkdir build
- cd build
- ccache --zero-stats
@@ -27,10 +25,10 @@
then
pyvenv/bin/meson configure . -Dbackend_max_links="$LD_JOBS" ;
fi || exit 1;
- $MAKE -j"$JOBS"
- make -j"$JOBS"
- if test -n "$MAKE_CHECK_ARGS";
then
$MAKE -j"$JOBS" $MAKE_CHECK_ARGS ;
make -j"$JOBS" $MAKE_CHECK_ARGS ;
fi
- ccache --show-stats
@@ -46,8 +44,10 @@
exclude:
- build/**/*.p
- build/**/*.a.p
- build/**/*.fa.p
- build/**/*.c.o
- build/**/*.c.o.d
- build/**/*.fa
.common_test_job_template:
extends: .base_job_template
@@ -59,7 +59,7 @@
- cd build
- find . -type f -exec touch {} +
# Avoid recompiling by hiding ninja with NINJA=":"
- $MAKE NINJA=":" $MAKE_CHECK_ARGS
- make NINJA=":" $MAKE_CHECK_ARGS
.native_test_job_template:
extends: .common_test_job_template

View File

@@ -30,7 +30,6 @@ avocado-system-alpine:
variables:
IMAGE: alpine
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:avr arch:loongarch64 arch:mips64 arch:mipsel
build-system-ubuntu:
extends:
@@ -41,7 +40,8 @@ build-system-ubuntu:
variables:
IMAGE: ubuntu2204
CONFIGURE_ARGS: --enable-docs
TARGETS: alpha-softmmu microblazeel-softmmu mips64el-softmmu
TARGETS: alpha-softmmu cris-softmmu hppa-softmmu
microblazeel-softmmu mips64el-softmmu
MAKE_CHECK_ARGS: check-build
check-system-ubuntu:
@@ -61,7 +61,6 @@ avocado-system-ubuntu:
variables:
IMAGE: ubuntu2204
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:alpha arch:microblazeel arch:mips64el
build-system-debian:
extends:
@@ -70,10 +69,10 @@ build-system-debian:
needs:
job: amd64-debian-container
variables:
IMAGE: debian
IMAGE: debian-amd64
CONFIGURE_ARGS: --with-coroutine=sigaltstack
TARGETS: arm-softmmu i386-softmmu riscv64-softmmu sh4eb-softmmu
sparc-softmmu xtensa-softmmu
sparc-softmmu xtensaeb-softmmu
MAKE_CHECK_ARGS: check-build
check-system-debian:
@@ -82,7 +81,7 @@ check-system-debian:
- job: build-system-debian
artifacts: true
variables:
IMAGE: debian
IMAGE: debian-amd64
MAKE_CHECK_ARGS: check
avocado-system-debian:
@@ -91,9 +90,8 @@ avocado-system-debian:
- job: build-system-debian
artifacts: true
variables:
IMAGE: debian
IMAGE: debian-amd64
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:arm arch:i386 arch:riscv64 arch:sh4 arch:sparc arch:xtensa
crash-test-debian:
extends: .native_test_job_template
@@ -101,7 +99,7 @@ crash-test-debian:
- job: build-system-debian
artifacts: true
variables:
IMAGE: debian
IMAGE: debian-amd64
script:
- cd build
- make NINJA=":" check-venv
@@ -116,7 +114,7 @@ build-system-fedora:
variables:
IMAGE: fedora
CONFIGURE_ARGS: --disable-gcrypt --enable-nettle --enable-docs
TARGETS: microblaze-softmmu mips-softmmu
TARGETS: tricore-softmmu microblaze-softmmu mips-softmmu
xtensa-softmmu m68k-softmmu riscv32-softmmu ppc-softmmu sparc64-softmmu
MAKE_CHECK_ARGS: check-build
@@ -137,8 +135,6 @@ avocado-system-fedora:
variables:
IMAGE: fedora
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:microblaze arch:mips arch:xtensa arch:m68k
arch:riscv32 arch:ppc arch:sparc64
crash-test-fedora:
extends: .native_test_job_template
@@ -158,89 +154,22 @@ build-system-centos:
- .native_build_job_template
- .native_build_artifact_template
needs:
job: amd64-centos9-container
job: amd64-centos8-container
variables:
IMAGE: centos9
IMAGE: centos8
CONFIGURE_ARGS: --disable-nettle --enable-gcrypt --enable-vfio-user-server
--enable-modules --enable-trace-backends=dtrace --enable-docs
TARGETS: ppc64-softmmu or1k-softmmu s390x-softmmu
x86_64-softmmu rx-softmmu sh4-softmmu
x86_64-softmmu rx-softmmu sh4-softmmu nios2-softmmu
MAKE_CHECK_ARGS: check-build
# Previous QEMU release. Used for cross-version migration tests.
build-previous-qemu:
extends: .native_build_job_template
artifacts:
when: on_success
expire_in: 2 days
paths:
- build-previous
exclude:
- build-previous/**/*.p
- build-previous/**/*.a.p
- build-previous/**/*.c.o
- build-previous/**/*.c.o.d
needs:
job: amd64-opensuse-leap-container
variables:
IMAGE: opensuse-leap
TARGETS: x86_64-softmmu aarch64-softmmu
# Override the default flags as we need more to grab the old version
GIT_FETCH_EXTRA_FLAGS: --prune --quiet
before_script:
- export QEMU_PREV_VERSION="$(sed 's/\([0-9.]*\)\.[0-9]*/v\1.0/' VERSION)"
- git remote add upstream https://gitlab.com/qemu-project/qemu
- git fetch upstream refs/tags/$QEMU_PREV_VERSION:refs/tags/$QEMU_PREV_VERSION
- git checkout $QEMU_PREV_VERSION
after_script:
- mv build build-previous
.migration-compat-common:
extends: .common_test_job_template
needs:
- job: build-previous-qemu
- job: build-system-opensuse
# The old QEMU could have bugs unrelated to migration that are
# already fixed in the current development branch, so this test
# might fail.
allow_failure: true
variables:
IMAGE: opensuse-leap
MAKE_CHECK_ARGS: check-build
script:
# Use the migration-tests from the older QEMU tree. This avoids
# testing an old QEMU against new features/tests that it is not
# compatible with.
- cd build-previous
# old to new
- QTEST_QEMU_BINARY_SRC=./qemu-system-${TARGET}
QTEST_QEMU_BINARY=../build/qemu-system-${TARGET} ./tests/qtest/migration-test
# new to old
- QTEST_QEMU_BINARY_DST=./qemu-system-${TARGET}
QTEST_QEMU_BINARY=../build/qemu-system-${TARGET} ./tests/qtest/migration-test
# This job needs to be disabled until we can have an aarch64 CPU model that
# will both (1) support both KVM and TCG, and (2) provide a stable ABI.
# Currently only "-cpu max" can provide (1), however it doesn't guarantee
# (2). Mark this test skipped until later.
migration-compat-aarch64:
extends: .migration-compat-common
variables:
TARGET: aarch64
QEMU_JOB_SKIPPED: 1
migration-compat-x86_64:
extends: .migration-compat-common
variables:
TARGET: x86_64
check-system-centos:
extends: .native_test_job_template
needs:
- job: build-system-centos
artifacts: true
variables:
IMAGE: centos9
IMAGE: centos8
MAKE_CHECK_ARGS: check
avocado-system-centos:
@@ -249,10 +178,8 @@ avocado-system-centos:
- job: build-system-centos
artifacts: true
variables:
IMAGE: centos9
IMAGE: centos8
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:ppc64 arch:or1k arch:s390x arch:x86_64 arch:rx
arch:sh4
build-system-opensuse:
extends:
@@ -282,38 +209,7 @@ avocado-system-opensuse:
variables:
IMAGE: opensuse-leap
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:s390x arch:x86_64 arch:aarch64
#
# Flaky tests. We don't run these by default and they are allow fail
# but often the CI system is the only way to trigger the failures.
#
build-system-flaky:
extends:
- .native_build_job_template
- .native_build_artifact_template
needs:
job: amd64-debian-container
variables:
IMAGE: debian
QEMU_JOB_OPTIONAL: 1
TARGETS: aarch64-softmmu arm-softmmu mips64el-softmmu
ppc64-softmmu rx-softmmu s390x-softmmu sh4-softmmu x86_64-softmmu
MAKE_CHECK_ARGS: check-build
avocado-system-flaky:
extends: .avocado_test_job_template
needs:
- job: build-system-flaky
artifacts: true
allow_failure: true
variables:
IMAGE: debian
MAKE_CHECK_ARGS: check-avocado
QEMU_JOB_OPTIONAL: 1
QEMU_TEST_FLAKY_TESTS: 1
AVOCADO_TAGS: flaky
# This jobs explicitly disable TCG (--disable-tcg), KVM is detected by
# the configure script. The container doesn't contain Xen headers so
@@ -325,9 +221,9 @@ avocado-system-flaky:
build-tcg-disabled:
extends: .native_build_job_template
needs:
job: amd64-centos9-container
job: amd64-centos8-container
variables:
IMAGE: centos9
IMAGE: centos8
script:
- mkdir build
- cd build
@@ -340,7 +236,7 @@ build-tcg-disabled:
- cd tests/qemu-iotests/
- ./check -raw 001 002 003 004 005 008 009 010 011 012 021 025 032 033 048
052 063 077 086 101 104 106 113 148 150 151 152 157 159 160 163
170 171 184 192 194 208 221 226 227 236 253 277 image-fleecing
170 171 183 184 192 194 208 221 226 227 236 253 277 image-fleecing
- ./check -qcow2 028 051 056 057 058 065 068 082 085 091 095 096 102 122
124 132 139 142 144 145 151 152 155 157 165 194 196 200 202
208 209 216 218 227 234 246 247 248 250 254 255 257 258
@@ -353,7 +249,6 @@ build-user:
variables:
IMAGE: debian-all-test-cross
CONFIGURE_ARGS: --disable-tools --disable-system
--target-list-exclude=alpha-linux-user,sh4-linux-user
MAKE_CHECK_ARGS: check-tcg
build-user-static:
@@ -363,18 +258,6 @@ build-user-static:
variables:
IMAGE: debian-all-test-cross
CONFIGURE_ARGS: --disable-tools --disable-system --static
--target-list-exclude=alpha-linux-user,sh4-linux-user
MAKE_CHECK_ARGS: check-tcg
# targets stuck on older compilers
build-legacy:
extends: .native_build_job_template
needs:
job: amd64-debian-legacy-cross-container
variables:
IMAGE: debian-legacy-test-cross
TARGETS: alpha-linux-user alpha-softmmu sh4-linux-user
CONFIGURE_ARGS: --disable-tools
MAKE_CHECK_ARGS: check-tcg
build-user-hexagon:
@@ -387,9 +270,7 @@ build-user-hexagon:
CONFIGURE_ARGS: --disable-tools --disable-docs --enable-debug-tcg
MAKE_CHECK_ARGS: check-tcg
# Build the softmmu targets we have check-tcg tests and compilers in
# our omnibus all-test-cross container. Those targets that haven't got
# Debian cross compiler support need to use special containers.
# Only build the softmmu targets we have check-tcg tests for
build-some-softmmu:
extends: .native_build_job_template
needs:
@@ -397,18 +278,7 @@ build-some-softmmu:
variables:
IMAGE: debian-all-test-cross
CONFIGURE_ARGS: --disable-tools --enable-debug
TARGETS: arm-softmmu aarch64-softmmu i386-softmmu riscv64-softmmu
s390x-softmmu x86_64-softmmu
MAKE_CHECK_ARGS: check-tcg
build-loongarch64:
extends: .native_build_job_template
needs:
job: loongarch-debian-cross-container
variables:
IMAGE: debian-loongarch-cross
CONFIGURE_ARGS: --disable-tools --enable-debug
TARGETS: loongarch64-linux-user loongarch64-softmmu
TARGETS: xtensa-softmmu arm-softmmu aarch64-softmmu alpha-softmmu
MAKE_CHECK_ARGS: check-tcg
# We build tricore in a very minimal tricore only container
@@ -430,7 +300,6 @@ clang-system:
IMAGE: fedora
CONFIGURE_ARGS: --cc=clang --cxx=clang++
--extra-cflags=-fsanitize=undefined --extra-cflags=-fno-sanitize-recover=undefined
--extra-cflags=-fno-sanitize=function
TARGETS: alpha-softmmu arm-softmmu m68k-softmmu mips64-softmmu s390x-softmmu
MAKE_CHECK_ARGS: check-qtest check-tcg
@@ -442,9 +311,8 @@ clang-user:
variables:
IMAGE: debian-all-test-cross
CONFIGURE_ARGS: --cc=clang --cxx=clang++ --disable-system
--target-list-exclude=alpha-linux-user,microblazeel-linux-user,aarch64_be-linux-user,i386-linux-user,m68k-linux-user,mipsn32el-linux-user,xtensaeb-linux-user
--target-list-exclude=microblazeel-linux-user,aarch64_be-linux-user,i386-linux-user,m68k-linux-user,mipsn32el-linux-user,xtensaeb-linux-user
--extra-cflags=-fsanitize=undefined --extra-cflags=-fno-sanitize-recover=undefined
--extra-cflags=-fno-sanitize=function
MAKE_CHECK_ARGS: check-unit check-tcg
# Set LD_JOBS=1 because this requires LTO and ld consumes a large amount of memory.
@@ -575,9 +443,6 @@ tsan-build:
CONFIGURE_ARGS: --enable-tsan --cc=clang --cxx=clang++
--enable-trace-backends=ust --disable-slirp
TARGETS: x86_64-softmmu ppc64-softmmu riscv64-softmmu x86_64-linux-user
# Remove when we switch to a distro with clang >= 18
# https://github.com/google/sanitizers/issues/1716
MAKE: setarch -R make
# gcov is a GCC features
gcov:
@@ -633,10 +498,10 @@ build-tci:
variables:
IMAGE: debian-all-test-cross
script:
- TARGETS="aarch64 arm hppa m68k microblaze ppc64 s390x x86_64"
- TARGETS="aarch64 alpha arm hppa m68k microblaze ppc64 s390x x86_64"
- mkdir build
- cd build
- ../configure --enable-tcg-interpreter --disable-kvm --disable-docs --disable-gtk --disable-vnc
- ../configure --enable-tcg-interpreter --disable-docs --disable-gtk --disable-vnc
--target-list="$(for tg in $TARGETS; do echo -n ${tg}'-softmmu '; done)"
|| { cat config.log meson-logs/meson-log.txt && exit 1; }
- make -j"$JOBS"
@@ -651,15 +516,12 @@ build-tci:
- make check-tcg
# Check our reduced build configurations
# requires libfdt: aarch64, arm, loongarch64, microblaze, microblazeel,
# or1k, ppc64, riscv32, riscv64, rx
# fails qtest without boards: i386, x86_64
build-without-defaults:
extends: .native_build_job_template
needs:
job: amd64-centos9-container
job: amd64-centos8-container
variables:
IMAGE: centos9
IMAGE: centos8
CONFIGURE_ARGS:
--without-default-devices
--without-default-features
@@ -667,11 +529,8 @@ build-without-defaults:
--disable-pie
--disable-qom-cast-debug
--disable-strip
TARGETS: alpha-softmmu avr-softmmu cris-softmmu hppa-softmmu m68k-softmmu
mips-softmmu mips64-softmmu mipsel-softmmu mips64el-softmmu
ppc-softmmu s390x-softmmu sh4-softmmu sh4eb-softmmu sparc-softmmu
sparc64-softmmu tricore-softmmu xtensa-softmmu xtensaeb-softmmu
hexagon-linux-user i386-linux-user s390x-linux-user
TARGETS: avr-softmmu mips64-softmmu s390x-softmmu sh4-softmmu
sparc64-softmmu hexagon-linux-user i386-linux-user s390x-linux-user
MAKE_CHECK_ARGS: check
build-libvhost-user:
@@ -697,7 +556,7 @@ build-tools-and-docs-debian:
# when running on 'master' we use pre-existing container
optional: true
variables:
IMAGE: debian
IMAGE: debian-amd64
MAKE_CHECK_ARGS: check-unit ctags TAGS cscope
CONFIGURE_ARGS: --disable-system --disable-user --enable-docs --enable-tools
QEMU_JOB_PUBLISH: 1
@@ -717,7 +576,7 @@ build-tools-and-docs-debian:
# of what topic branch they're currently using
pages:
extends: .base_job_template
image: $CI_REGISTRY_IMAGE/qemu/debian:$QEMU_CI_CONTAINER_TAG
image: $CI_REGISTRY_IMAGE/qemu/debian-amd64:$QEMU_CI_CONTAINER_TAG
stage: test
needs:
- job: build-tools-and-docs-debian
@@ -725,10 +584,7 @@ pages:
- mkdir -p public
# HTML-ised source tree
- make gtags
# We unset variables to work around a bug in some htags versions
# which causes it to fail when the environment is large
- CI_COMMIT_MESSAGE= CI_COMMIT_TAG_MESSAGE= htags
-anT --tree-view=filetree -m qemu_init
- htags -anT --tree-view=filetree -m qemu_init
-t "Welcome to the QEMU sourcecode"
- mv HTML public/src
# Project documentation
@@ -740,40 +596,3 @@ pages:
- public
variables:
QEMU_JOB_PUBLISH: 1
coverity:
image: $CI_REGISTRY_IMAGE/qemu/fedora:$QEMU_CI_CONTAINER_TAG
stage: build
allow_failure: true
timeout: 3h
needs:
- job: amd64-fedora-container
optional: true
before_script:
- dnf install -y curl wget
script:
# would be nice to cancel the job if over quota (https://gitlab.com/gitlab-org/gitlab/-/issues/256089)
# for example:
# curl --request POST --header "PRIVATE-TOKEN: $CI_JOB_TOKEN" "${CI_SERVER_URL}/api/v4/projects/${CI_PROJECT_ID}/jobs/${CI_JOB_ID}/cancel
- 'scripts/coverity-scan/run-coverity-scan --check-upload-only || { exitcode=$?; if test $exitcode = 1; then
exit 0;
else
exit $exitcode;
fi; };
scripts/coverity-scan/run-coverity-scan --update-tools-only > update-tools.log 2>&1 || { cat update-tools.log; exit 1; };
scripts/coverity-scan/run-coverity-scan --no-update-tools'
rules:
- if: '$COVERITY_TOKEN == null'
when: never
- if: '$COVERITY_EMAIL == null'
when: never
# Never included on upstream pipelines, except for schedules
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_PIPELINE_SOURCE == "schedule"'
when: on_success
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM'
when: never
# Forks don't get any pipeline unless QEMU_CI=1 or QEMU_CI=2 is set
- if: '$QEMU_CI != "1" && $QEMU_CI != "2"'
when: never
# Always manual on forks even if $QEMU_CI == "2"
- when: manual

View File

@@ -13,7 +13,7 @@
.cirrus_build_job:
extends: .base_job_template
stage: build
image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:latest
image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:master
needs: []
# 20 mins larger than "timeout_in" in cirrus/build.yml
# as there's often a 5-10 minute delay before Cirrus CI
@@ -52,42 +52,61 @@ x64-freebsd-13-build:
NAME: freebsd-13
CIRRUS_VM_INSTANCE_TYPE: freebsd_instance
CIRRUS_VM_IMAGE_SELECTOR: image_family
CIRRUS_VM_IMAGE_NAME: freebsd-13-3
CIRRUS_VM_IMAGE_NAME: freebsd-13-2
CIRRUS_VM_CPUS: 8
CIRRUS_VM_RAM: 8G
UPDATE_COMMAND: pkg update; pkg upgrade -y
INSTALL_COMMAND: pkg install -y
CONFIGURE_ARGS: --target-list-exclude=arm-softmmu,i386-softmmu,microblaze-softmmu,mips64el-softmmu,mipsel-softmmu,mips-softmmu,ppc-softmmu,sh4eb-softmmu,xtensa-softmmu
TEST_TARGETS: check
aarch64-macos-13-base-build:
aarch64-macos-12-base-build:
extends: .cirrus_build_job
variables:
NAME: macos-13
NAME: macos-12
CIRRUS_VM_INSTANCE_TYPE: macos_instance
CIRRUS_VM_IMAGE_SELECTOR: image
CIRRUS_VM_IMAGE_NAME: ghcr.io/cirruslabs/macos-ventura-base:latest
CIRRUS_VM_IMAGE_NAME: ghcr.io/cirruslabs/macos-monterey-base:latest
CIRRUS_VM_CPUS: 12
CIRRUS_VM_RAM: 24G
UPDATE_COMMAND: brew update
INSTALL_COMMAND: brew install
PATH_EXTRA: /opt/homebrew/ccache/libexec:/opt/homebrew/gettext/bin
PKG_CONFIG_PATH: /opt/homebrew/curl/lib/pkgconfig:/opt/homebrew/ncurses/lib/pkgconfig:/opt/homebrew/readline/lib/pkgconfig
CONFIGURE_ARGS: --target-list-exclude=arm-softmmu,i386-softmmu,microblazeel-softmmu,mips64-softmmu,mipsel-softmmu,mips-softmmu,ppc-softmmu,sh4-softmmu,xtensaeb-softmmu
TEST_TARGETS: check-unit check-block check-qapi-schema check-softfloat check-qtest-x86_64
aarch64-macos-14-base-build:
extends: .cirrus_build_job
# The following jobs run VM-based tests via KVM on a Linux-based Cirrus-CI job
.cirrus_kvm_job:
extends: .base_job_template
stage: build
image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:master
needs: []
timeout: 80m
script:
- sed -e "s|[@]CI_REPOSITORY_URL@|$CI_REPOSITORY_URL|g"
-e "s|[@]CI_COMMIT_REF_NAME@|$CI_COMMIT_REF_NAME|g"
-e "s|[@]CI_COMMIT_SHA@|$CI_COMMIT_SHA|g"
-e "s|[@]NAME@|$NAME|g"
-e "s|[@]CONFIGURE_ARGS@|$CONFIGURE_ARGS|g"
-e "s|[@]TEST_TARGETS@|$TEST_TARGETS|g"
<.gitlab-ci.d/cirrus/kvm-build.yml >.gitlab-ci.d/cirrus/$NAME.yml
- cat .gitlab-ci.d/cirrus/$NAME.yml
- cirrus-run -v --show-build-log always .gitlab-ci.d/cirrus/$NAME.yml
variables:
NAME: macos-14
CIRRUS_VM_INSTANCE_TYPE: macos_instance
CIRRUS_VM_IMAGE_SELECTOR: image
CIRRUS_VM_IMAGE_NAME: ghcr.io/cirruslabs/macos-sonoma-base:latest
CIRRUS_VM_CPUS: 12
CIRRUS_VM_RAM: 24G
UPDATE_COMMAND: brew update
INSTALL_COMMAND: brew install
PATH_EXTRA: /opt/homebrew/ccache/libexec:/opt/homebrew/gettext/bin
PKG_CONFIG_PATH: /opt/homebrew/curl/lib/pkgconfig:/opt/homebrew/ncurses/lib/pkgconfig:/opt/homebrew/readline/lib/pkgconfig
TEST_TARGETS: check-unit check-block check-qapi-schema check-softfloat check-qtest-x86_64
QEMU_JOB_CIRRUS: 1
QEMU_JOB_OPTIONAL: 1
x86-netbsd:
extends: .cirrus_kvm_job
variables:
NAME: netbsd
CONFIGURE_ARGS: --target-list=x86_64-softmmu,ppc64-softmmu,aarch64-softmmu
TEST_TARGETS: check
x86-openbsd:
extends: .cirrus_kvm_job
variables:
NAME: openbsd
CONFIGURE_ARGS: --target-list=i386-softmmu,riscv64-softmmu,mips64-softmmu
TEST_TARGETS: check

View File

@@ -21,7 +21,7 @@ build_task:
install_script:
- @UPDATE_COMMAND@
- @INSTALL_COMMAND@ @PKGS@
- if test -n "@PYPI_PKGS@" ; then PYLIB=$(@PYTHON@ -c 'import sysconfig; print(sysconfig.get_path("stdlib"))'); rm -f $PYLIB/EXTERNALLY-MANAGED; @PIP3@ install @PYPI_PKGS@ ; fi
- if test -n "@PYPI_PKGS@" ; then @PIP3@ install @PYPI_PKGS@ ; fi
clone_script:
- git clone --depth 100 "$CI_REPOSITORY_URL" .
- git fetch origin "$CI_COMMIT_REF_NAME"

View File

@@ -11,6 +11,6 @@ MAKE='/usr/local/bin/gmake'
NINJA='/usr/local/bin/ninja'
PACKAGING_COMMAND='pkg'
PIP3='/usr/local/bin/pip-3.8'
PKGS='alsa-lib bash bison bzip2 ca_root_nss capstone4 ccache cmocka ctags curl cyrus-sasl dbus diffutils dtc flex fusefs-libs3 gettext git glib gmake gnutls gsed gtk-vnc gtk3 json-c libepoxy libffi libgcrypt libjpeg-turbo libnfs libslirp libspice-server libssh libtasn1 llvm lzo2 meson mtools ncurses nettle ninja opencv pixman pkgconf png py311-numpy py311-pillow py311-pip py311-sphinx py311-sphinx_rtd_theme py311-tomli py311-yaml python3 rpm2cpio sdl2 sdl2_image snappy sndio socat spice-protocol tesseract usbredir virglrenderer vte3 xorriso zstd'
PKGS='alsa-lib bash bison bzip2 ca_root_nss capstone4 ccache cmocka ctags curl cyrus-sasl dbus diffutils dtc flex fusefs-libs3 gettext git glib gmake gnutls gsed gtk3 json-c libepoxy libffi libgcrypt libjpeg-turbo libnfs libslirp libspice-server libssh libtasn1 llvm lzo2 meson mtools ncurses nettle ninja opencv pixman pkgconf png py39-numpy py39-pillow py39-pip py39-sphinx py39-sphinx_rtd_theme py39-tomli py39-yaml python3 rpm2cpio sdl2 sdl2_image snappy sndio socat spice-protocol tesseract usbredir virglrenderer vte3 xorriso zstd'
PYPI_PKGS=''
PYTHON='/usr/local/bin/python3'

View File

@@ -0,0 +1,31 @@
container:
image: fedora:35
cpu: 4
memory: 8Gb
kvm: true
env:
CIRRUS_CLONE_DEPTH: 1
CI_REPOSITORY_URL: "@CI_REPOSITORY_URL@"
CI_COMMIT_REF_NAME: "@CI_COMMIT_REF_NAME@"
CI_COMMIT_SHA: "@CI_COMMIT_SHA@"
@NAME@_task:
@NAME@_vm_cache:
folder: $HOME/.cache/qemu-vm
install_script:
- dnf update -y
- dnf install -y git make openssh-clients qemu-img qemu-system-x86 wget meson
clone_script:
- git clone --depth 100 "$CI_REPOSITORY_URL" .
- git fetch origin "$CI_COMMIT_REF_NAME"
- git reset --hard "$CI_COMMIT_SHA"
build_script:
- if [ -f $HOME/.cache/qemu-vm/images/@NAME@.img ]; then
make vm-build-@NAME@ J=$(getconf _NPROCESSORS_ONLN)
EXTRA_CONFIGURE_OPTS="@CONFIGURE_ARGS@"
BUILD_TARGET="@TEST_TARGETS@" ;
else
make vm-build-@NAME@ J=$(getconf _NPROCESSORS_ONLN) BUILD_TARGET=help
EXTRA_CONFIGURE_OPTS="--disable-system --disable-user --disable-tools" ;
fi

View File

@@ -1,6 +1,6 @@
# THIS FILE WAS AUTO-GENERATED
#
# $ lcitool variables macos-13 qemu
# $ lcitool variables macos-12 qemu
#
# https://gitlab.com/libvirt/libvirt-ci
@@ -11,6 +11,6 @@ MAKE='/opt/homebrew/bin/gmake'
NINJA='/opt/homebrew/bin/ninja'
PACKAGING_COMMAND='brew'
PIP3='/opt/homebrew/bin/pip3'
PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 gtk-vnc jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy socat sparse spice-protocol swtpm tesseract usbredir vde vte3 xorriso zlib zstd'
PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy socat sparse spice-protocol tesseract usbredir vde vte3 xorriso zlib zstd'
PYPI_PKGS='PyYAML numpy pillow sphinx sphinx-rtd-theme tomli'
PYTHON='/opt/homebrew/bin/python3'

View File

@@ -1,16 +0,0 @@
# THIS FILE WAS AUTO-GENERATED
#
# $ lcitool variables macos-14 qemu
#
# https://gitlab.com/libvirt/libvirt-ci
CCACHE='/opt/homebrew/bin/ccache'
CPAN_PKGS=''
CROSS_PKGS=''
MAKE='/opt/homebrew/bin/gmake'
NINJA='/opt/homebrew/bin/ninja'
PACKAGING_COMMAND='brew'
PIP3='/opt/homebrew/bin/pip3'
PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 gtk-vnc jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy socat sparse spice-protocol swtpm tesseract usbredir vde vte3 xorriso zlib zstd'
PYPI_PKGS='PyYAML numpy pillow sphinx sphinx-rtd-theme tomli'
PYTHON='/opt/homebrew/bin/python3'

View File

@@ -1,10 +1,10 @@
include:
- local: '/.gitlab-ci.d/container-template.yml'
amd64-centos9-container:
amd64-centos8-container:
extends: .container_job_template
variables:
NAME: centos9
NAME: centos8
amd64-fedora-container:
extends: .container_job_template

View File

@@ -1,3 +1,9 @@
alpha-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-alpha-cross
amd64-debian-cross-container:
extends: .container_job_template
stage: containers
@@ -10,12 +16,6 @@ amd64-debian-user-cross-container:
variables:
NAME: debian-all-test-cross
amd64-debian-legacy-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-legacy-test-cross
arm64-debian-cross-container:
extends: .container_job_template
stage: containers
@@ -40,17 +40,23 @@ hexagon-cross-container:
variables:
NAME: debian-hexagon-cross
loongarch-debian-cross-container:
hppa-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-loongarch-cross
NAME: debian-hppa-cross
i686-debian-cross-container:
m68k-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-i686-cross
NAME: debian-m68k-cross
mips64-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-mips64-cross
mips64el-debian-cross-container:
extends: .container_job_template
@@ -58,12 +64,24 @@ mips64el-debian-cross-container:
variables:
NAME: debian-mips64el-cross
mips-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-mips-cross
mipsel-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-mipsel-cross
powerpc-test-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-powerpc-test-cross
ppc64el-debian-cross-container:
extends: .container_job_template
stage: containers
@@ -77,7 +95,13 @@ riscv64-debian-cross-container:
allow_failure: true
variables:
NAME: debian-riscv64-cross
QEMU_JOB_OPTIONAL: 1
# we can however build TCG tests using a non-sid base
riscv64-debian-test-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-riscv64-test-cross
s390x-debian-cross-container:
extends: .container_job_template
@@ -85,6 +109,18 @@ s390x-debian-cross-container:
variables:
NAME: debian-s390x-cross
sh4-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-sh4-cross
sparc64-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-sparc64-cross
tricore-debian-cross-container:
extends: .container_job_template
stage: containers
@@ -101,6 +137,16 @@ cris-fedora-cross-container:
variables:
NAME: fedora-cris-cross
i386-fedora-cross-container:
extends: .container_job_template
variables:
NAME: fedora-i386-cross
win32-fedora-cross-container:
extends: .container_job_template
variables:
NAME: fedora-win32-cross
win64-fedora-cross-container:
extends: .container_job_template
variables:

View File

@@ -11,7 +11,7 @@ amd64-debian-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian
NAME: debian-amd64
amd64-ubuntu2204-container:
extends: .container_job_template

View File

@@ -8,8 +8,6 @@
key: "$CI_JOB_NAME"
when: always
timeout: 80m
before_script:
- cat /packages.txt
script:
- export CCACHE_BASEDIR="$(pwd)"
- export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
@@ -74,7 +72,7 @@
- ../configure --enable-werror --disable-docs $QEMU_CONFIGURE_OPTS
--disable-system --target-list-exclude="aarch64_be-linux-user
alpha-linux-user cris-linux-user m68k-linux-user microblazeel-linux-user
or1k-linux-user ppc-linux-user sparc-linux-user
nios2-linux-user or1k-linux-user ppc-linux-user sparc-linux-user
xtensa-linux-user $CROSS_SKIP_TARGETS"
- make -j$(expr $(nproc) + 1) all check-build $MAKE_CHECK_ARGS

View File

@@ -37,38 +37,27 @@ cross-arm64-kvm-only:
IMAGE: debian-arm64-cross
EXTRA_CONFIGURE_OPTS: --disable-tcg --without-default-features
cross-i686-system:
extends:
- .cross_system_build_job
- .cross_test_artifacts
needs:
job: i686-debian-cross-container
variables:
IMAGE: debian-i686-cross
EXTRA_CONFIGURE_OPTS: --disable-kvm
MAKE_CHECK_ARGS: check-qtest
cross-i686-user:
cross-i386-user:
extends:
- .cross_user_build_job
- .cross_test_artifacts
needs:
job: i686-debian-cross-container
job: i386-fedora-cross-container
variables:
IMAGE: debian-i686-cross
IMAGE: fedora-i386-cross
MAKE_CHECK_ARGS: check
cross-i686-tci:
cross-i386-tci:
extends:
- .cross_accel_build_job
- .cross_test_artifacts
timeout: 60m
needs:
job: i686-debian-cross-container
job: i386-fedora-cross-container
variables:
IMAGE: debian-i686-cross
IMAGE: fedora-i386-cross
ACCEL: tcg-interpreter
EXTRA_CONFIGURE_OPTS: --target-list=i386-softmmu,i386-linux-user,aarch64-softmmu,aarch64-linux-user,ppc-softmmu,ppc-linux-user --disable-plugins --disable-kvm
EXTRA_CONFIGURE_OPTS: --target-list=i386-softmmu,i386-linux-user,aarch64-softmmu,aarch64-linux-user,ppc-softmmu,ppc-linux-user --disable-plugins
MAKE_CHECK_ARGS: check check-tcg
cross-mipsel-system:
@@ -170,15 +159,29 @@ cross-mips64el-kvm-only:
IMAGE: debian-mips64el-cross
EXTRA_CONFIGURE_OPTS: --disable-tcg --target-list=mips64el-softmmu
cross-win32-system:
extends: .cross_system_build_job
needs:
job: win32-fedora-cross-container
variables:
IMAGE: fedora-win32-cross
EXTRA_CONFIGURE_OPTS: --enable-fdt=internal
CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu m68k-softmmu
microblazeel-softmmu mips64el-softmmu nios2-softmmu
artifacts:
when: on_success
paths:
- build/qemu-setup*.exe
cross-win64-system:
extends: .cross_system_build_job
needs:
job: win64-fedora-cross-container
variables:
IMAGE: fedora-win64-cross
EXTRA_CONFIGURE_OPTS: --enable-fdt=internal --disable-plugins
EXTRA_CONFIGURE_OPTS: --enable-fdt=internal
CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu
m68k-softmmu microblazeel-softmmu
m68k-softmmu microblazeel-softmmu nios2-softmmu
or1k-softmmu rx-softmmu sh4eb-softmmu sparc64-softmmu
tricore-softmmu xtensaeb-softmmu
artifacts:

View File

@@ -10,14 +10,13 @@
# gitlab-runner. To avoid problems that gitlab-runner can cause while
# reusing the GIT repository, let's enable the clone strategy, which
# guarantees a fresh repository on each job run.
variables:
GIT_STRATEGY: clone
# All custom runners can extend this template to upload the testlog
# data as an artifact and also feed the junit report
.custom_runner_template:
extends: .base_job_template
variables:
GIT_STRATEGY: clone
GIT_FETCH_EXTRA_FLAGS: --no-tags --prune --quiet
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
expire_in: 7 days
@@ -29,6 +28,7 @@
junit: build/meson-logs/testlog.junit.xml
include:
- local: '/.gitlab-ci.d/custom-runners/ubuntu-22.04-s390x.yml'
- local: '/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml'
- local: '/.gitlab-ci.d/custom-runners/ubuntu-22.04-aarch64.yml'
- local: '/.gitlab-ci.d/custom-runners/ubuntu-22.04-aarch32.yml'
- local: '/.gitlab-ci.d/custom-runners/centos-stream-8-x86_64.yml'

View File

@@ -0,0 +1,24 @@
# All centos-stream-8 jobs should run successfully in an environment
# setup by the scripts/ci/setup/stream/8/build-environment.yml task
# "Installation of extra packages to build QEMU"
centos-stream-8-x86_64:
extends: .custom_runner_template
allow_failure: true
needs: []
stage: build
tags:
- centos_stream_8
- x86_64
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
- if: "$CENTOS_STREAM_8_x86_64_RUNNER_AVAILABLE"
before_script:
- JOBS=$(expr $(nproc) + 1)
script:
- mkdir build
- cd build
- ../scripts/ci/org.centos/stream/8/x86_64/configure
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make -j"$JOBS"
- make NINJA=":" check check-avocado

View File

@@ -1,32 +1,34 @@
# All ubuntu-22.04 jobs should run successfully in an environment
# setup by the scripts/ci/setup/ubuntu/build-environment.yml task
# "Install basic packages to build QEMU on Ubuntu 22.04"
# All ubuntu-20.04 jobs should run successfully in an environment
# setup by the scripts/ci/setup/build-environment.yml task
# "Install basic packages to build QEMU on Ubuntu 20.04/20.04"
ubuntu-22.04-s390x-all-linux:
ubuntu-20.04-s390x-all-linux-static:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- ubuntu_20.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
- if: "$S390X_RUNNER_AVAILABLE"
script:
# --disable-libssh is needed because of https://bugs.launchpad.net/qemu/+bug/1838763
# --disable-glusterfs is needed because there's no static version of those libs in distro supplied packages
- mkdir build
- cd build
- ../configure --enable-debug --disable-system --disable-tools --disable-docs
- ../configure --enable-debug --static --disable-system --disable-glusterfs --disable-libssh
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc`
- make --output-sync check-tcg
- make --output-sync -j`nproc` check
ubuntu-22.04-s390x-all-system:
ubuntu-20.04-s390x-all:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- ubuntu_20.04
- s390x
timeout: 75m
rules:
@@ -35,17 +37,17 @@ ubuntu-22.04-s390x-all-system:
script:
- mkdir build
- cd build
- ../configure --disable-user
- ../configure --disable-libssh
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check
ubuntu-22.04-s390x-alldbg:
ubuntu-20.04-s390x-alldbg:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- ubuntu_20.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
@@ -57,18 +59,18 @@ ubuntu-22.04-s390x-alldbg:
script:
- mkdir build
- cd build
- ../configure --enable-debug
- ../configure --enable-debug --disable-libssh
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make clean
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check
ubuntu-22.04-s390x-clang:
ubuntu-20.04-s390x-clang:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- ubuntu_20.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
@@ -80,16 +82,16 @@ ubuntu-22.04-s390x-clang:
script:
- mkdir build
- cd build
- ../configure --cc=clang --cxx=clang++ --enable-sanitizers
- ../configure --disable-libssh --cc=clang --cxx=clang++ --enable-sanitizers
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check
ubuntu-22.04-s390x-tci:
ubuntu-20.04-s390x-tci:
needs: []
stage: build
tags:
- ubuntu_22.04
- ubuntu_20.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
@@ -101,16 +103,16 @@ ubuntu-22.04-s390x-tci:
script:
- mkdir build
- cd build
- ../configure --enable-tcg-interpreter
- ../configure --disable-libssh --enable-tcg-interpreter
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc`
ubuntu-22.04-s390x-notcg:
ubuntu-20.04-s390x-notcg:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- ubuntu_20.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
@@ -122,7 +124,7 @@ ubuntu-22.04-s390x-notcg:
script:
- mkdir build
- cd build
- ../configure --disable-tcg
- ../configure --disable-libssh --disable-tcg
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check

View File

@@ -1,5 +1,5 @@
# All ubuntu-22.04 jobs should run successfully in an environment
# setup by the scripts/ci/setup/ubuntu/build-environment.yml task
# setup by the scripts/ci/setup/qemu/build-environment.yml task
# "Install basic packages to build QEMU on Ubuntu 22.04"
ubuntu-22.04-aarch32-all:

View File

@@ -1,5 +1,5 @@
# All ubuntu-22.04 jobs should run successfully in an environment
# setup by the scripts/ci/setup/ubuntu/build-environment.yml task
# setup by the scripts/ci/setup/qemu/build-environment.yml task
# "Install basic packages to build QEMU on Ubuntu 22.04"
ubuntu-22.04-aarch64-all-linux-static:

View File

@@ -24,10 +24,6 @@
- if: '$QEMU_CI == "1" && $CI_PROJECT_NAMESPACE != "qemu-project" && $CI_COMMIT_MESSAGE =~ /opensbi/i'
when: manual
# Scheduled runs on mainline don't get pipelines except for the special Coverity job
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_PIPELINE_SOURCE == "schedule"'
when: never
# Run if any files affecting the build output are touched
- changes:
- .gitlab-ci.d/opensbi.yml

View File

@@ -1,7 +1,9 @@
msys2-64bit:
.shared_msys2_builder:
extends: .base_job_template
tags:
- saas-windows-medium-amd64
- shared-windows
- windows
- windows-1809
cache:
key: "$CI_JOB_NAME"
paths:
@@ -12,19 +14,9 @@ msys2-64bit:
stage: build
timeout: 100m
variables:
# Select the "64 bit, gcc and MSVCRT" MSYS2 environment
MSYSTEM: MINGW64
# This feature doesn't (currently) work with PowerShell, it stops
# the echo'ing of commands being run and doesn't show any timing
FF_SCRIPT_SECTIONS: 0
# do not remove "--without-default-devices"!
# commit 9f8e6cad65a6 ("gitlab-ci: Speed up the msys2-64bit job by using --without-default-devices"
# changed to compile QEMU with the --without-default-devices switch
# for this job, because otherwise the build could not complete within
# the project timeout.
CONFIGURE_ARGS: --target-list=sparc-softmmu --without-default-devices -Ddebug=false -Doptimization=0
# The Windows git is a bit older so override the default
GIT_FETCH_EXTRA_FLAGS: --no-tags --prune --quiet
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
expire_in: 7 days
@@ -80,35 +72,34 @@ msys2-64bit:
- .\msys64\usr\bin\bash -lc "pacman -Sy --noconfirm --needed
bison diffutils flex
git grep make sed
mingw-w64-x86_64-binutils
mingw-w64-x86_64-capstone
mingw-w64-x86_64-ccache
mingw-w64-x86_64-curl
mingw-w64-x86_64-cyrus-sasl
mingw-w64-x86_64-dtc
mingw-w64-x86_64-gcc
mingw-w64-x86_64-glib2
mingw-w64-x86_64-gnutls
mingw-w64-x86_64-gtk3
mingw-w64-x86_64-libgcrypt
mingw-w64-x86_64-libjpeg-turbo
mingw-w64-x86_64-libnfs
mingw-w64-x86_64-libpng
mingw-w64-x86_64-libssh
mingw-w64-x86_64-libtasn1
mingw-w64-x86_64-libusb
mingw-w64-x86_64-lzo2
mingw-w64-x86_64-nettle
mingw-w64-x86_64-ninja
mingw-w64-x86_64-pixman
mingw-w64-x86_64-pkgconf
mingw-w64-x86_64-python
mingw-w64-x86_64-SDL2
mingw-w64-x86_64-SDL2_image
mingw-w64-x86_64-snappy
mingw-w64-x86_64-spice
mingw-w64-x86_64-usbredir
mingw-w64-x86_64-zstd"
$MINGW_TARGET-capstone
$MINGW_TARGET-ccache
$MINGW_TARGET-curl
$MINGW_TARGET-cyrus-sasl
$MINGW_TARGET-dtc
$MINGW_TARGET-gcc
$MINGW_TARGET-glib2
$MINGW_TARGET-gnutls
$MINGW_TARGET-gtk3
$MINGW_TARGET-libgcrypt
$MINGW_TARGET-libjpeg-turbo
$MINGW_TARGET-libnfs
$MINGW_TARGET-libpng
$MINGW_TARGET-libssh
$MINGW_TARGET-libtasn1
$MINGW_TARGET-libusb
$MINGW_TARGET-lzo2
$MINGW_TARGET-nettle
$MINGW_TARGET-ninja
$MINGW_TARGET-pixman
$MINGW_TARGET-pkgconf
$MINGW_TARGET-python
$MINGW_TARGET-SDL2
$MINGW_TARGET-SDL2_image
$MINGW_TARGET-snappy
$MINGW_TARGET-spice
$MINGW_TARGET-usbredir
$MINGW_TARGET-zstd "
- Write-Output "Running build at $(Get-Date -Format u)"
- $env:CHERE_INVOKING = 'yes' # Preserve the current working directory
- $env:MSYS = 'winsymlinks:native' # Enable native Windows symlink
@@ -125,3 +116,25 @@ msys2-64bit:
- ..\msys64\usr\bin\bash -lc "make check MTESTARGS='$TEST_ARGS' || { cat meson-logs/testlog.txt; exit 1; } ;"
- ..\msys64\usr\bin\bash -lc "ccache --show-stats"
- Write-Output "Finished build at $(Get-Date -Format u)"
msys2-64bit:
extends: .shared_msys2_builder
variables:
MINGW_TARGET: mingw-w64-x86_64
MSYSTEM: MINGW64
# do not remove "--without-default-devices"!
# commit 9f8e6cad65a6 ("gitlab-ci: Speed up the msys2-64bit job by using --without-default-devices"
# changed to compile QEMU with the --without-default-devices switch
# for the msys2 64-bit job, due to the build could not complete within
CONFIGURE_ARGS: --target-list=x86_64-softmmu --without-default-devices -Ddebug=false -Doptimization=0
# qTests don't run successfully with "--without-default-devices",
# so let's exclude the qtests from CI for now.
TEST_ARGS: --no-suite qtest
msys2-32bit:
extends: .shared_msys2_builder
variables:
MINGW_TARGET: mingw-w64-i686
MSYSTEM: MINGW32
CONFIGURE_ARGS: --target-list=ppc64-softmmu -Ddebug=false -Doptimization=0
TEST_ARGS: --no-suite qtest

View File

@@ -30,41 +30,22 @@ malc <av1474@comtv.ru> malc <malc@c046a42c-6fe2-441c-8c8c-71466251a162>
# Corrupted Author fields
Aaron Larson <alarson@ddci.com> alarson@ddci.com
Andreas Färber <andreas.faerber@web.de> Andreas Färber <andreas.faerber>
fanwenjie <fanwj@mail.ustc.edu.cn> fanwj@mail.ustc.edu.cn <fanwj@mail.ustc.edu.cn>
Jason Wang <jasowang@redhat.com> Jason Wang <jasowang>
Marek Dolata <mkdolata@us.ibm.com> mkdolata@us.ibm.com <mkdolata@us.ibm.com>
Michael Ellerman <mpe@ellerman.id.au> michael@ozlabs.org <michael@ozlabs.org>
Nick Hudson <hnick@vmware.com> hnick@vmware.com <hnick@vmware.com>
Timothée Cocault <timothee.cocault@gmail.com> timothee.cocault@gmail.com <timothee.cocault@gmail.com>
Stefan Weil <sw@weilnetz.de> <weil@mail.berlios.de>
Stefan Weil <sw@weilnetz.de> Stefan Weil <stefan@kiwi.(none)>
# There is also a:
# (no author) <(no author)@c046a42c-6fe2-441c-8c8c-71466251a162>
# for the cvs2svn initialization commit e63c3dc74bf.
# Next, translate a few commits where mailman rewrote the From: line due
# to strict SPF and DMARC. Usually, our build process should be flagging
# commits like these before maintainer merges; if you find the need to add
# a line here, please also report a bug against the part of the build
# process that let the mis-attribution slip through in the first place.
#
# If the mailing list munges your emails, use:
# git config sendemail.from '"Your Name" <your.email@example.com>'
# the use of "" in that line will differ from the typically unquoted
# 'git config user.name', which in turn is sufficient for 'git send-email'
# to add an extra From: line in the body of your email that takes
# precedence over any munged From: in the mail's headers.
# See https://lists.openembedded.org/g/openembedded-core/message/166515
# and https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg06784.html
# to strict SPF, although we prefer to avoid adding more entries like that.
Ed Swierk <eswierk@skyportsystems.com> Ed Swierk via Qemu-devel <qemu-devel@nongnu.org>
Ian McKellar <ianloic@google.com> Ian McKellar via Qemu-devel <qemu-devel@nongnu.org>
Julia Suvorova <jusual@mail.ru> Julia Suvorova via Qemu-devel <qemu-devel@nongnu.org>
Justin Terry (VM) <juterry@microsoft.com> Justin Terry (VM) via Qemu-devel <qemu-devel@nongnu.org>
Stefan Weil <sw@weilnetz.de> Stefan Weil via <qemu-devel@nongnu.org>
Stefan Weil <sw@weilnetz.de> Stefan Weil via <qemu-trivial@nongnu.org>
Andrey Drobyshev <andrey.drobyshev@virtuozzo.com> Andrey Drobyshev via <qemu-block@nongnu.org>
BALATON Zoltan <balaton@eik.bme.hu> BALATON Zoltan via <qemu-ppc@nongnu.org>
# Next, replace old addresses by a more recent one.
Aleksandar Markovic <aleksandar.qemu.devel@gmail.com> <aleksandar.markovic@mips.com>
@@ -84,12 +65,8 @@ Greg Kurz <groug@kaod.org> <gkurz@linux.vnet.ibm.com>
Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com>
Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
Juan Quintela <quintela@trasno.org> <quintela@redhat.com>
Leif Lindholm <quic_llindhol@quicinc.com> <leif.lindholm@linaro.org>
Leif Lindholm <quic_llindhol@quicinc.com> <leif@nuviainc.com>
Luc Michel <luc@lmichel.fr> <luc.michel@git.antfield.fr>
Luc Michel <luc@lmichel.fr> <luc.michel@greensocs.com>
Luc Michel <luc@lmichel.fr> <lmichel@kalray.eu>
Radoslaw Biernacki <rad@semihalf.com> <radoslaw.biernacki@linaro.org>
Paul Brook <paul@nowt.org> <paul@codesourcery.com>
Paul Burton <paulburton@kernel.org> <paul.burton@mips.com>
@@ -100,9 +77,7 @@ Philippe Mathieu-Daudé <philmd@linaro.org> <f4bug@amsat.org>
Philippe Mathieu-Daudé <philmd@linaro.org> <philmd@redhat.com>
Philippe Mathieu-Daudé <philmd@linaro.org> <philmd@fungible.com>
Roman Bolshakov <rbolshakov@ddn.com> <r.bolshakov@yadro.com>
Sriram Yagnaraman <sriram.yagnaraman@ericsson.com> <sriram.yagnaraman@est.tech>
Stefan Brankovic <stefan.brankovic@syrmia.com> <stefan.brankovic@rt-rk.com.com>
Stefan Weil <sw@weilnetz.de> Stefan Weil <stefan@weilnetz.de>
Taylor Simpson <ltaylorsimpson@gmail.com> <tsimpson@quicinc.com>
Yongbok Kim <yongbok.kim@mips.com> <yongbok.kim@imgtec.com>

View File

@@ -5,21 +5,16 @@
# Required
version: 2
# Set the version of Python and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "3.11"
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/conf.py
# We recommend specifying your dependencies to enable reproducible builds:
# https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
python:
install:
- requirements: docs/requirements.txt
# We want all the document formats
formats: all
# For consistency, we require that QEMU's Sphinx extensions
# run with at least the same minimum version of Python that
# we require for other Python in our codebase (our conf.py
# enforces this, and some code needs it.)
python:
version: 3.6

View File

@@ -1,5 +1,5 @@
os: linux
dist: jammy
dist: focal
language: c
compiler:
- gcc
@@ -7,11 +7,13 @@ cache:
# There is one cache per branch and compiler version.
# characteristics of each job are used to identify the cache:
# - OS name (currently only linux)
# - OS distribution (e.g. "jammy" for Linux)
# - OS distribution (for Linux, bionic or focal)
# - Names and values of visible environment variables set in .travis.yml or Settings panel
timeout: 1200
ccache: true
pip: true
directories:
- $HOME/avocado/data/cache
# The channel name "irc.oftc.net#qemu" is encrypted against qemu/qemu
@@ -32,8 +34,8 @@ env:
- BASE_CONFIG="--disable-docs --disable-tools"
- TEST_BUILD_CMD=""
- TEST_CMD="make check V=1"
# This is broadly a list of "mainline" system targets which have support across the major distros
- MAIN_SYSTEM_TARGETS="aarch64-softmmu,mips64-softmmu,ppc64-softmmu,riscv64-softmmu,s390x-softmmu,x86_64-softmmu"
# This is broadly a list of "mainline" softmmu targets which have support across the major distros
- MAIN_SOFTMMU_TARGETS="aarch64-softmmu,mips64-softmmu,ppc64-softmmu,riscv64-softmmu,s390x-softmmu,x86_64-softmmu"
- CCACHE_SLOPPINESS="include_file_ctime,include_file_mtime"
- CCACHE_MAXSIZE=1G
- G_MESSAGES_DEBUG=error
@@ -81,6 +83,7 @@ jobs:
- name: "[aarch64] GCC check-tcg"
arch: arm64
dist: focal
addons:
apt_packages:
- libaio-dev
@@ -106,17 +109,17 @@ jobs:
- libvdeplug-dev
- libvte-2.91-dev
- ninja-build
- python3-tomli
# Tests dependencies
- genisoimage
env:
- TEST_CMD="make check check-tcg V=1"
- CONFIG="--disable-containers --enable-fdt=system
--target-list=${MAIN_SYSTEM_TARGETS} --cxx=/bin/false"
--target-list=${MAIN_SOFTMMU_TARGETS} --cxx=/bin/false"
- UNRELIABLE=true
- name: "[ppc64] Clang check-tcg"
- name: "[ppc64] GCC check-tcg"
arch: ppc64le
compiler: clang
dist: focal
addons:
apt_packages:
- libaio-dev
@@ -142,7 +145,6 @@ jobs:
- libvdeplug-dev
- libvte-2.91-dev
- ninja-build
- python3-tomli
# Tests dependencies
- genisoimage
env:
@@ -152,6 +154,7 @@ jobs:
- name: "[s390x] GCC check-tcg"
arch: s390x
dist: focal
addons:
apt_packages:
- libaio-dev
@@ -177,13 +180,13 @@ jobs:
- libvdeplug-dev
- libvte-2.91-dev
- ninja-build
- python3-tomli
# Tests dependencies
- genisoimage
env:
- TEST_CMD="make check check-tcg V=1"
- CONFIG="--disable-containers
--target-list=hppa-softmmu,mips64-softmmu,ppc64-softmmu,riscv64-softmmu,s390x-softmmu,x86_64-softmmu"
- CONFIG="--disable-containers --enable-fdt=system
--target-list=${MAIN_SOFTMMU_TARGETS},s390x-linux-user"
- UNRELIABLE=true
script:
- BUILD_RC=0 && make -j${JOBS} || BUILD_RC=$?
- |
@@ -194,9 +197,9 @@ jobs:
$(exit $BUILD_RC);
fi
- name: "[s390x] Clang (other-system)"
- name: "[s390x] GCC (other-softmmu)"
arch: s390x
compiler: clang
dist: focal
addons:
apt_packages:
- libaio-dev
@@ -217,16 +220,17 @@ jobs:
- libsnappy-dev
- libzstd-dev
- nettle-dev
- xfslibs-dev
- ninja-build
- python3-tomli
# Tests dependencies
- genisoimage
env:
- CONFIG="--disable-containers --audio-drv-list=sdl --disable-user
--target-list=arm-softmmu,avr-softmmu,microblaze-softmmu,sh4eb-softmmu,sparc64-softmmu,xtensaeb-softmmu"
- CONFIG="--disable-containers --enable-fdt=system --audio-drv-list=sdl
--disable-user --target-list-exclude=${MAIN_SOFTMMU_TARGETS}"
- name: "[s390x] GCC (user)"
arch: s390x
dist: focal
addons:
apt_packages:
- libgcrypt20-dev
@@ -235,14 +239,13 @@ jobs:
- ninja-build
- flex
- bison
- python3-tomli
env:
- TEST_CMD="make check check-tcg V=1"
- CONFIG="--disable-containers --disable-system"
- name: "[s390x] Clang (disable-tcg)"
arch: s390x
compiler: clang
dist: focal
compiler: clang-10
addons:
apt_packages:
- libaio-dev
@@ -268,8 +271,9 @@ jobs:
- libvdeplug-dev
- libvte-2.91-dev
- ninja-build
- python3-tomli
- clang-10
env:
- TEST_CMD="make check-unit"
- CONFIG="--disable-containers --disable-tcg --enable-kvm --disable-tools
--enable-fdt=system --host-cc=clang --cxx=clang++"
- UNRELIABLE=true


@@ -11,9 +11,6 @@ config OPENGL
config X11
bool
config PIXMAN
bool
config SPICE
bool
@@ -23,9 +20,6 @@ config IVSHMEM
config TPM
bool
config FDT
bool
config VHOST_USER
bool
@@ -38,6 +32,9 @@ config VHOST_KERNEL
config VIRTFS
bool
config PVRDMA
bool
config MULTIPROCESS_ALLOWED
bool
imply MULTIPROCESS
@@ -49,6 +46,3 @@ config FUZZ
config VFIO_USER_SERVER_ALLOWED
bool
imply VFIO_USER_SERVER
config HV_BALLOON_POSSIBLE
bool

File diff suppressed because it is too large


@@ -78,8 +78,7 @@ x := $(shell rm -rf meson-private meson-info meson-logs)
endif
# 1. ensure config-host.mak is up-to-date
config-host.mak: $(SRC_PATH)/configure $(SRC_PATH)/scripts/meson-buildoptions.sh \
$(SRC_PATH)/pythondeps.toml $(SRC_PATH)/VERSION
config-host.mak: $(SRC_PATH)/configure $(SRC_PATH)/scripts/meson-buildoptions.sh $(SRC_PATH)/VERSION
@echo config-host.mak is out-of-date, running configure
@if test -f meson-private/coredata.dat; then \
./config.status --skip-meson; \
@@ -142,13 +141,8 @@ MAKE.n = $(findstring n,$(firstword $(filter-out --%,$(MAKEFLAGS))))
MAKE.k = $(findstring k,$(firstword $(filter-out --%,$(MAKEFLAGS))))
MAKE.q = $(findstring q,$(firstword $(filter-out --%,$(MAKEFLAGS))))
MAKE.nq = $(if $(word 2, $(MAKE.n) $(MAKE.q)),nq)
NINJAFLAGS = \
$(if $V,-v) \
$(if $(MAKE.n), -n) \
$(if $(MAKE.k), -k0) \
$(filter-out -j, \
$(or $(filter -l% -j%, $(MAKEFLAGS)), \
$(if $(filter --jobserver-auth=%, $(MAKEFLAGS)),, -j1))) \
NINJAFLAGS = $(if $V,-v) $(if $(MAKE.n), -n) $(if $(MAKE.k), -k0) \
$(filter-out -j, $(lastword -j1 $(filter -l% -j%, $(MAKEFLAGS)))) \
-d keepdepfile
ninja-cmd-goals = $(or $(MAKECMDGOALS), all)
ninja-cmd-goals += $(foreach g, $(MAKECMDGOALS), $(.ninja-goals.$g))
@@ -208,7 +202,6 @@ clean: recurse-clean
! -path ./roms/edk2/ArmPkg/Library/GccLto/liblto-arm.a \
-exec rm {} +
rm -f TAGS cscope.* *~ */*~
@$(MAKE) -Ctests/qemu-iotests clean
VERSION = $(shell cat $(SRC_PATH)/VERSION)
@@ -290,13 +283,6 @@ include $(SRC_PATH)/tests/vm/Makefile.include
print-help-run = printf " %-30s - %s\\n" "$1" "$2"
print-help = @$(call print-help-run,$1,$2)
.PHONY: update-linux-vdso
update-linux-vdso:
@for m in $(SRC_PATH)/linux-user/*/Makefile.vdso; do \
$(MAKE) $(SUBDIR_MAKEFLAGS) -C $$(dirname $$m) -f Makefile.vdso \
SRC_PATH=$(SRC_PATH) BUILD_DIR=$(BUILD_DIR); \
done
.PHONY: help
help:
@echo 'Generic targets:'
@@ -317,9 +303,6 @@ endif
$(call print-help,distclean,Remove all generated files)
$(call print-help,dist,Build a distributable tarball)
@echo ''
@echo 'Linux-user targets:'
$(call print-help,update-linux-vdso,Build linux-user vdso images)
@echo ''
@echo 'Test targets:'
$(call print-help,check,Run all tests (check-help for details))
$(call print-help,bench,Run all benchmarks)


@@ -82,7 +82,7 @@ guidelines set out in the `style section
the Developers Guide.
Additional information on submitting patches can be found online via
the QEMU website:
the QEMU website
* `<https://wiki.qemu.org/Contribute/SubmitAPatch>`_
* `<https://wiki.qemu.org/Contribute/TrivialPatches>`_
@@ -102,7 +102,7 @@ requires a working 'git send-email' setup, and by default doesn't
automate everything, so you may want to go through the above steps
manually for once.
For installation instructions, please go to:
For installation instructions, please go to
* `<https://github.com/stefanha/git-publish>`_
@@ -159,7 +159,7 @@ Contact
=======
The QEMU community can be contacted in a number of ways, with the two
main methods being email and IRC:
main methods being email and IRC
* `<mailto:qemu-devel@nongnu.org>`_
* `<https://lists.nongnu.org/mailman/listinfo/qemu-devel>`_


@@ -1 +1 @@
9.0.93
8.1.50


@@ -16,4 +16,3 @@ config KVM
config XEN
bool
select FSDEV_9P if VIRTFS
select XEN_BUS


@@ -41,7 +41,7 @@ void accel_blocker_init(void)
void accel_ioctl_begin(void)
{
if (likely(bql_locked())) {
if (likely(qemu_mutex_iothread_locked())) {
return;
}
@@ -51,7 +51,7 @@ void accel_ioctl_begin(void)
void accel_ioctl_end(void)
{
if (likely(bql_locked())) {
if (likely(qemu_mutex_iothread_locked())) {
return;
}
@@ -62,7 +62,7 @@ void accel_ioctl_end(void)
void accel_cpu_ioctl_begin(CPUState *cpu)
{
if (unlikely(bql_locked())) {
if (unlikely(qemu_mutex_iothread_locked())) {
return;
}
@@ -72,7 +72,7 @@ void accel_cpu_ioctl_begin(CPUState *cpu)
void accel_cpu_ioctl_end(CPUState *cpu)
{
if (unlikely(bql_locked())) {
if (unlikely(qemu_mutex_iothread_locked())) {
return;
}
@@ -105,7 +105,7 @@ void accel_ioctl_inhibit_begin(void)
* We allow to inhibit only when holding the BQL, so we can identify
* when an inhibitor wants to issue an ioctl easily.
*/
g_assert(bql_locked());
g_assert(qemu_mutex_iothread_locked());
/* Block further invocations of the ioctls outside the BQL. */
CPU_FOREACH(cpu) {

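Both spellings in the hunk above (bql_locked() on one side, qemu_mutex_iothread_locked() on the other) answer the same question: does the current thread already hold the big lock? A common way to provide such a query on top of a plain mutex is a thread-local flag flipped by the lock/unlock wrappers. A minimal sketch of that pattern, with made-up names (big_lock, big_lock_held, the wrappers), not the QEMU API:

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;
    static _Thread_local bool big_lock_held;   /* per-thread ownership flag */

    static void big_lock_acquire(void)
    {
        pthread_mutex_lock(&big_lock);
        big_lock_held = true;
    }

    static void big_lock_release(void)
    {
        big_lock_held = false;
        pthread_mutex_unlock(&big_lock);
    }

    static bool big_lock_is_held(void)         /* cf. bql_locked() */
    {
        return big_lock_held;
    }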

@@ -30,7 +30,7 @@
#include "hw/core/accel-cpu.h"
#ifndef CONFIG_USER_ONLY
#include "accel-system.h"
#include "accel-softmmu.h"
#endif /* !CONFIG_USER_ONLY */
static const TypeInfo accel_type = {
@@ -104,7 +104,7 @@ static void accel_init_cpu_interfaces(AccelClass *ac)
void accel_init_interfaces(AccelClass *ac)
{
#ifndef CONFIG_USER_ONLY
accel_system_init_ops_interfaces(ac);
accel_init_ops_interfaces(ac);
#endif /* !CONFIG_USER_ONLY */
accel_init_cpu_interfaces(ac);
@@ -119,37 +119,16 @@ void accel_cpu_instance_init(CPUState *cpu)
}
}
bool accel_cpu_common_realize(CPUState *cpu, Error **errp)
bool accel_cpu_realizefn(CPUState *cpu, Error **errp)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
AccelState *accel = current_accel();
AccelClass *acc = ACCEL_GET_CLASS(accel);
/* target specific realization */
if (cc->accel_cpu && cc->accel_cpu->cpu_target_realize
&& !cc->accel_cpu->cpu_target_realize(cpu, errp)) {
return false;
if (cc->accel_cpu && cc->accel_cpu->cpu_realizefn) {
return cc->accel_cpu->cpu_realizefn(cpu, errp);
}
/* generic realization */
if (acc->cpu_common_realize && !acc->cpu_common_realize(cpu, errp)) {
return false;
}
return true;
}
void accel_cpu_common_unrealize(CPUState *cpu)
{
AccelState *accel = current_accel();
AccelClass *acc = ACCEL_GET_CLASS(accel);
/* generic unrealization */
if (acc->cpu_common_unrealize) {
acc->cpu_common_unrealize(cpu);
}
}
int accel_supported_gdbstub_sstep_flags(void)
{
AccelState *accel = current_accel();


@@ -28,7 +28,7 @@
#include "hw/boards.h"
#include "sysemu/cpus.h"
#include "qemu/error-report.h"
#include "accel-system.h"
#include "accel-softmmu.h"
int accel_init_machine(AccelState *accel, MachineState *ms)
{
@@ -62,7 +62,7 @@ void accel_setup_post(MachineState *ms)
}
/* initialize the arch-independent accel operation interfaces */
void accel_system_init_ops_interfaces(AccelClass *ac)
void accel_init_ops_interfaces(AccelClass *ac)
{
const char *ac_name;
char *ops_name;
@@ -99,8 +99,8 @@ static const TypeInfo accel_ops_type_info = {
.class_size = sizeof(AccelOpsClass),
};
static void accel_system_register_types(void)
static void accel_softmmu_register_types(void)
{
type_register_static(&accel_ops_type_info);
}
type_init(accel_system_register_types);
type_init(accel_softmmu_register_types);


@@ -7,9 +7,9 @@
* See the COPYING file in the top-level directory.
*/
#ifndef ACCEL_SYSTEM_H
#define ACCEL_SYSTEM_H
#ifndef ACCEL_SOFTMMU_H
#define ACCEL_SOFTMMU_H
void accel_system_init_ops_interfaces(AccelClass *ac);
void accel_init_ops_interfaces(AccelClass *ac);
#endif /* ACCEL_SYSTEM_H */
#endif /* ACCEL_SOFTMMU_H */


@@ -24,9 +24,10 @@ static void *dummy_cpu_thread_fn(void *arg)
rcu_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->can_do_io = 1;
current_cpu = cpu;
#ifndef _WIN32
@@ -42,7 +43,7 @@ static void *dummy_cpu_thread_fn(void *arg)
qemu_guest_random_seed_thread_part2(cpu->random_seed);
do {
bql_unlock();
qemu_mutex_unlock_iothread();
#ifndef _WIN32
do {
int sig;
@@ -55,11 +56,11 @@ static void *dummy_cpu_thread_fn(void *arg)
#else
qemu_sem_wait(&cpu->sem);
#endif
bql_lock();
qemu_mutex_lock_iothread();
qemu_wait_io_event(cpu);
} while (!cpu->unplug);
bql_unlock();
qemu_mutex_unlock_iothread();
rcu_unregister_thread();
return NULL;
}
@@ -68,6 +69,9 @@ void dummy_start_vcpu_thread(CPUState *cpu)
{
char thread_name[VCPU_THREAD_NAME_SIZE];
cpu->thread = g_malloc0(sizeof(QemuThread));
cpu->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(cpu->halt_cond);
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/DUMMY",
cpu->cpu_index);
qemu_thread_create(cpu->thread, thread_name, dummy_cpu_thread_fn, cpu,

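The dummy vCPU thread above follows the usual lock-around-wait shape: take the global lock at thread start, hold it while touching shared state, drop it while blocking, and reacquire it before handling events. A standalone sketch of that shape using plain pthreads; the names (global_lock, vcpu, vcpu_thread_fn) are illustrative, and pthread_cond_wait stands in for the explicit unlock/wait/lock sequence in the diff:

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

    struct vcpu {
        pthread_cond_t halt_cond;   /* initialised by the thread's creator */
        bool unplug;
    };

    static void *vcpu_thread_fn(void *arg)
    {
        struct vcpu *cpu = arg;

        pthread_mutex_lock(&global_lock);   /* take the big lock at start */
        /* ... per-thread setup done under the lock ... */
        do {
            /* cond_wait atomically drops the lock while blocked and
             * re-takes it before returning, mirroring the explicit
             * unlock / wait / lock pattern in the thread above */
            pthread_cond_wait(&cpu->halt_cond, &global_lock);
            /* ... process pending I/O events under the lock ... */
        } while (!cpu->unplug);
        pthread_mutex_unlock(&global_lock);
        return NULL;
    }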

@@ -52,7 +52,7 @@
#include "qemu/main-loop.h"
#include "exec/address-spaces.h"
#include "exec/exec-all.h"
#include "gdbstub/enums.h"
#include "exec/gdbstub.h"
#include "sysemu/cpus.h"
#include "sysemu/hvf.h"
#include "sysemu/hvf_int.h"
@@ -204,15 +204,15 @@ static void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
{
if (!cpu->accel->dirty) {
if (!cpu->vcpu_dirty) {
hvf_get_registers(cpu);
cpu->accel->dirty = true;
cpu->vcpu_dirty = true;
}
}
static void hvf_cpu_synchronize_state(CPUState *cpu)
{
if (!cpu->accel->dirty) {
if (!cpu->vcpu_dirty) {
run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL);
}
}
@@ -221,7 +221,7 @@ static void do_hvf_cpu_synchronize_set_dirty(CPUState *cpu,
run_on_cpu_data arg)
{
/* QEMU state is the reference, push it to HVF now and on next entry */
cpu->accel->dirty = true;
cpu->vcpu_dirty = true;
}
static void hvf_cpu_synchronize_post_reset(CPUState *cpu)
@@ -400,9 +400,9 @@ static int hvf_init_vcpu(CPUState *cpu)
r = hv_vcpu_create(&cpu->accel->fd,
(hv_vcpu_exit_t **)&cpu->accel->exit, NULL);
#else
r = hv_vcpu_create(&cpu->accel->fd, HV_VCPU_DEFAULT);
r = hv_vcpu_create((hv_vcpuid_t *)&cpu->accel->fd, HV_VCPU_DEFAULT);
#endif
cpu->accel->dirty = true;
cpu->vcpu_dirty = 1;
assert_hvf_ok(r);
cpu->accel->guest_debug_enabled = false;
@@ -424,10 +424,11 @@ static void *hvf_cpu_thread_fn(void *arg)
rcu_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->can_do_io = 1;
current_cpu = cpu;
hvf_init_vcpu(cpu);
@@ -448,7 +449,7 @@ static void *hvf_cpu_thread_fn(void *arg)
hvf_vcpu_destroy(cpu);
cpu_thread_signal_destroyed(cpu);
bql_unlock();
qemu_mutex_unlock_iothread();
rcu_unregister_thread();
return NULL;
}
@@ -463,6 +464,10 @@ static void hvf_start_vcpu_thread(CPUState *cpu)
*/
assert(hvf_enabled());
cpu->thread = g_malloc0(sizeof(QemuThread));
cpu->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(cpu->halt_cond);
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/HVF",
cpu->cpu_index);
qemu_thread_create(cpu->thread, thread_name, hvf_cpu_thread_fn,


@@ -13,30 +13,40 @@
#include "sysemu/hvf.h"
#include "sysemu/hvf_int.h"
const char *hvf_return_string(hv_return_t ret)
{
switch (ret) {
case HV_SUCCESS: return "HV_SUCCESS";
case HV_ERROR: return "HV_ERROR";
case HV_BUSY: return "HV_BUSY";
case HV_BAD_ARGUMENT: return "HV_BAD_ARGUMENT";
case HV_NO_RESOURCES: return "HV_NO_RESOURCES";
case HV_NO_DEVICE: return "HV_NO_DEVICE";
case HV_UNSUPPORTED: return "HV_UNSUPPORTED";
case HV_DENIED: return "HV_DENIED";
default: return "[unknown hv_return value]";
}
}
void assert_hvf_ok_impl(hv_return_t ret, const char *file, unsigned int line,
const char *exp)
void assert_hvf_ok(hv_return_t ret)
{
if (ret == HV_SUCCESS) {
return;
}
error_report("Error: %s = %s (0x%x, at %s:%u)",
exp, hvf_return_string(ret), ret, file, line);
switch (ret) {
case HV_ERROR:
error_report("Error: HV_ERROR");
break;
case HV_BUSY:
error_report("Error: HV_BUSY");
break;
case HV_BAD_ARGUMENT:
error_report("Error: HV_BAD_ARGUMENT");
break;
case HV_NO_RESOURCES:
error_report("Error: HV_NO_RESOURCES");
break;
case HV_NO_DEVICE:
error_report("Error: HV_NO_DEVICE");
break;
case HV_UNSUPPORTED:
error_report("Error: HV_UNSUPPORTED");
break;
#if defined(MAC_OS_VERSION_11_0) && \
MAC_OS_X_VERSION_MIN_REQUIRED >= MAC_OS_VERSION_11_0
case HV_DENIED:
error_report("Error: HV_DENIED");
break;
#endif
default:
error_report("Unknown Error");
}
abort();
}

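One side of the hunk above replaces the per-case switch with a string-conversion helper plus an assert_hvf_ok_impl() that also reports the failing expression and call site. That extra context is typically captured by a stringifying wrapper macro. A small generic sketch of the pattern; check_ok and check_ok_impl are made-up names, not the QEMU API:

    #include <stdio.h>
    #include <stdlib.h>

    static void check_ok_impl(int ret, const char *file, unsigned int line,
                              const char *exp)
    {
        if (ret == 0) {
            return;
        }
        fprintf(stderr, "Error: %s = %d (at %s:%u)\n", exp, ret, file, line);
        abort();
    }

    /* the wrapper captures the expression text and the call site */
    #define check_ok(ret) check_ok_impl((ret), __FILE__, __LINE__, #ret)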

@@ -33,9 +33,10 @@ static void *kvm_vcpu_thread_fn(void *arg)
rcu_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->can_do_io = 1;
current_cpu = cpu;
r = kvm_init_vcpu(cpu, &error_fatal);
@@ -57,7 +58,7 @@ static void *kvm_vcpu_thread_fn(void *arg)
kvm_destroy_vcpu(cpu);
cpu_thread_signal_destroyed(cpu);
bql_unlock();
qemu_mutex_unlock_iothread();
rcu_unregister_thread();
return NULL;
}
@@ -66,6 +67,9 @@ static void kvm_start_vcpu_thread(CPUState *cpu)
{
char thread_name[VCPU_THREAD_NAME_SIZE];
cpu->thread = g_malloc0(sizeof(QemuThread));
cpu->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(cpu->halt_cond);
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/KVM",
cpu->cpu_index);
qemu_thread_create(cpu->thread, thread_name, kvm_vcpu_thread_fn,
@@ -79,10 +83,10 @@ static bool kvm_vcpu_thread_is_idle(CPUState *cpu)
static bool kvm_cpus_are_resettable(void)
{
return !kvm_enabled() || !kvm_state->guest_state_protected;
return !kvm_enabled() || kvm_cpu_check_are_resettable();
}
#ifdef TARGET_KVM_HAVE_GUEST_DEBUG
#ifdef KVM_CAP_SET_GUEST_DEBUG
static int kvm_update_guest_debug_ops(CPUState *cpu)
{
return kvm_update_guest_debug(cpu, 0);
@@ -101,7 +105,7 @@ static void kvm_accel_ops_class_init(ObjectClass *oc, void *data)
ops->synchronize_state = kvm_cpu_synchronize_state;
ops->synchronize_pre_loadvm = kvm_cpu_synchronize_pre_loadvm;
#ifdef TARGET_KVM_HAVE_GUEST_DEBUG
#ifdef KVM_CAP_SET_GUEST_DEBUG
ops->update_guest_debug = kvm_update_guest_debug_ops;
ops->supports_guest_debug = kvm_supports_guest_debug;
ops->insert_breakpoint = kvm_insert_breakpoint;

File diff suppressed because it is too large


@@ -22,4 +22,5 @@ bool kvm_supports_guest_debug(void);
int kvm_insert_breakpoint(CPUState *cpu, int type, vaddr addr, vaddr len);
int kvm_remove_breakpoint(CPUState *cpu, int type, vaddr addr, vaddr len);
void kvm_remove_all_breakpoints(CPUState *cpu);
#endif /* KVM_CPUS_H */


@@ -9,17 +9,13 @@ kvm_device_ioctl(int fd, int type, void *arg) "dev fd %d, type 0x%x, arg %p"
kvm_failed_reg_get(uint64_t id, const char *msg) "Warning: Unable to retrieve ONEREG %" PRIu64 " from KVM: %s"
kvm_failed_reg_set(uint64_t id, const char *msg) "Warning: Unable to set ONEREG %" PRIu64 " to KVM: %s"
kvm_init_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
kvm_create_vcpu(int cpu_index, unsigned long arch_cpu_id, int kvm_fd) "index: %d, id: %lu, kvm fd: %d"
kvm_destroy_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
kvm_park_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
kvm_unpark_vcpu(unsigned long arch_cpu_id, const char *msg) "id: %lu %s"
kvm_irqchip_commit_routes(void) ""
kvm_irqchip_add_msi_route(char *name, int vector, int virq) "dev %s vector %d virq %d"
kvm_irqchip_update_msi_route(int virq) "Updating MSI route virq=%d"
kvm_irqchip_release_virq(int virq) "virq %d"
kvm_set_ioeventfd_mmio(int fd, uint64_t addr, uint32_t val, bool assign, uint32_t size, bool datamatch) "fd: %d @0x%" PRIx64 " val=0x%x assign: %d size: %d match: %d"
kvm_set_ioeventfd_pio(int fd, uint16_t addr, uint32_t val, bool assign, uint32_t size, bool datamatch) "fd: %d @0x%x val=0x%x assign: %d size: %d match: %d"
kvm_set_user_memory(uint16_t as, uint16_t slot, uint32_t flags, uint64_t guest_phys_addr, uint64_t memory_size, uint64_t userspace_addr, uint32_t fd, uint64_t fd_offset, int ret) "AddrSpace#%d Slot#%d flags=0x%x gpa=0x%"PRIx64 " size=0x%"PRIx64 " ua=0x%"PRIx64 " guest_memfd=%d" " guest_memfd_offset=0x%" PRIx64 " ret=%d"
kvm_set_user_memory(uint32_t slot, uint32_t flags, uint64_t guest_phys_addr, uint64_t memory_size, uint64_t userspace_addr, int ret) "Slot#%d flags=0x%x gpa=0x%"PRIx64 " size=0x%"PRIx64 " ua=0x%"PRIx64 " ret=%d"
kvm_clear_dirty_log(uint32_t slot, uint64_t start, uint32_t size) "slot#%"PRId32" start 0x%"PRIx64" size 0x%"PRIx32
kvm_resample_fd_notify(int gsi) "gsi %d"
kvm_dirty_ring_full(int id) "vcpu %d"
@@ -29,10 +25,4 @@ kvm_dirty_ring_reaper(const char *s) "%s"
kvm_dirty_ring_reap(uint64_t count, int64_t t) "reaped %"PRIu64" pages (took %"PRIi64" us)"
kvm_dirty_ring_reaper_kick(const char *reason) "%s"
kvm_dirty_ring_flush(int finished) "%d"
kvm_failed_get_vcpu_mmap_size(void) ""
kvm_cpu_exec(void) ""
kvm_interrupt_exit_request(void) ""
kvm_io_window_exit(void) ""
kvm_run_exit_system_event(int cpu_index, uint32_t event_type) "cpu_index %d, system_even_type %"PRIu32
kvm_convert_memory(uint64_t start, uint64_t size, const char *msg) "start 0x%" PRIx64 " size 0x%" PRIx64 " %s"
kvm_memory_fault(uint64_t start, uint64_t size, uint64_t flags) "start 0x%" PRIx64 " size 0x%" PRIx64 " flags 0x%" PRIx64


@@ -1,5 +1,5 @@
specific_ss.add(files('accel-target.c'))
system_ss.add(files('accel-system.c', 'accel-blocker.c'))
specific_ss.add(files('accel-common.c', 'accel-blocker.c'))
system_ss.add(files('accel-softmmu.c'))
user_ss.add(files('accel-user.c'))
subdir('tcg')


@@ -24,18 +24,6 @@
#include "qemu/main-loop.h"
#include "hw/core/cpu.h"
static int64_t qtest_clock_counter;
static int64_t qtest_get_virtual_clock(void)
{
return qatomic_read_i64(&qtest_clock_counter);
}
static void qtest_set_virtual_clock(int64_t count)
{
qatomic_set_i64(&qtest_clock_counter, count);
}
static int qtest_init_accel(MachineState *ms)
{
return 0;
@@ -64,7 +52,6 @@ static void qtest_accel_ops_class_init(ObjectClass *oc, void *data)
ops->create_vcpu_thread = dummy_start_vcpu_thread;
ops->get_virtual_clock = qtest_get_virtual_clock;
ops->set_virtual_clock = qtest_set_virtual_clock;
};
static const TypeInfo qtest_accel_ops_type = {

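One side of this hunk carries a qtest-local virtual clock: a 64-bit counter that is read and written with atomic accessors so a test harness can advance time by hand. A self-contained sketch of that idea with C11 atomics; virtual_clock_ns and the function names are illustrative:

    #include <stdatomic.h>
    #include <stdint.h>

    /* free-running "virtual" time, advanced only by the test harness */
    static _Atomic int64_t virtual_clock_ns;

    static int64_t virtual_clock_get(void)
    {
        return atomic_load_explicit(&virtual_clock_ns, memory_order_relaxed);
    }

    static void virtual_clock_set(int64_t ns)
    {
        atomic_store_explicit(&virtual_clock_ns, ns, memory_order_relaxed);
    }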

@@ -17,13 +17,17 @@
KVMState *kvm_state;
bool kvm_kernel_irqchip;
bool kvm_async_interrupts_allowed;
bool kvm_eventfds_allowed;
bool kvm_irqfds_allowed;
bool kvm_resamplefds_allowed;
bool kvm_msi_via_irqfd_allowed;
bool kvm_gsi_routing_allowed;
bool kvm_gsi_direct_mapping;
bool kvm_allowed;
bool kvm_readonly_mem_allowed;
bool kvm_ioeventfd_any_length_allowed;
bool kvm_msi_use_devid;
bool kvm_direct_msi_allowed;
void kvm_flush_coalesced_mmio_buffer(void)
{
@@ -38,6 +42,11 @@ bool kvm_has_sync_mmu(void)
return false;
}
int kvm_has_many_ioeventfds(void)
{
return 0;
}
int kvm_on_sigbus_vcpu(CPUState *cpu, int code, void *addr)
{
return 1;
@@ -83,6 +92,11 @@ void kvm_irqchip_change_notify(void)
{
}
int kvm_irqchip_add_adapter_route(KVMState *s, AdapterInfo *adapter)
{
return -ENOSYS;
}
int kvm_irqchip_add_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
EventNotifier *rn, int virq)
{
@@ -95,14 +109,9 @@ int kvm_irqchip_remove_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
return -ENOSYS;
}
unsigned int kvm_get_max_memslots(void)
bool kvm_has_free_slot(MachineState *ms)
{
return 0;
}
unsigned int kvm_get_free_memslots(void)
{
return 0;
return false;
}
void kvm_init_cpu_signals(CPUState *cpu)
@@ -124,13 +133,3 @@ uint32_t kvm_dirty_ring_size(void)
{
return 0;
}
bool kvm_hwpoisoned_mem(void)
{
return false;
}
int kvm_create_guest_memfd(uint64_t size, uint64_t flags, Error **errp)
{
return -ENOSYS;
}


@@ -1,6 +1,6 @@
system_stubs_ss = ss.source_set()
system_stubs_ss.add(when: 'CONFIG_XEN', if_false: files('xen-stub.c'))
system_stubs_ss.add(when: 'CONFIG_KVM', if_false: files('kvm-stub.c'))
system_stubs_ss.add(when: 'CONFIG_TCG', if_false: files('tcg-stub.c'))
sysemu_stubs_ss = ss.source_set()
sysemu_stubs_ss.add(when: 'CONFIG_XEN', if_false: files('xen-stub.c'))
sysemu_stubs_ss.add(when: 'CONFIG_KVM', if_false: files('kvm-stub.c'))
sysemu_stubs_ss.add(when: 'CONFIG_TCG', if_false: files('tcg-stub.c'))
specific_ss.add_all(when: ['CONFIG_SYSTEM_ONLY'], if_true: system_stubs_ss)
specific_ss.add_all(when: ['CONFIG_SYSTEM_ONLY'], if_true: sysemu_stubs_ss)


@@ -18,6 +18,28 @@ void tb_flush(CPUState *cpu)
{
}
void tlb_set_dirty(CPUState *cpu, vaddr vaddr)
{
}
void tcg_flush_jmp_cache(CPUState *cpu)
{
}
int probe_access_flags(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx,
bool nonfault, void **phost, uintptr_t retaddr)
{
g_assert_not_reached();
}
void *probe_access(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx, uintptr_t retaddr)
{
/* Handled by hardware accelerator. */
g_assert_not_reached();
}
G_NORETURN void cpu_loop_exit(CPUState *cpu)
{
g_assert_not_reached();


@@ -73,8 +73,7 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
ABI_TYPE cmpv, ABI_TYPE newv,
MemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi,
DATA_SIZE, retaddr);
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr);
DATA_TYPE ret;
#if DATA_SIZE == 16
@@ -91,8 +90,7 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val,
MemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi,
DATA_SIZE, retaddr);
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr);
DATA_TYPE ret;
ret = qatomic_xchg__nocheck(haddr, val);
@@ -106,7 +104,7 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE val, MemOpIdx oi, uintptr_t retaddr) \
{ \
DATA_TYPE *haddr, ret; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \
haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \
ret = qatomic_##X(haddr, val); \
ATOMIC_MMU_CLEANUP; \
atomic_trace_rmw_post(env, addr, oi); \
@@ -137,7 +135,7 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE xval, MemOpIdx oi, uintptr_t retaddr) \
{ \
XDATA_TYPE *haddr, cmp, old, new, val = xval; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \
haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \
smp_mb(); \
cmp = qatomic_read__nocheck(haddr); \
do { \
@@ -178,8 +176,7 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
ABI_TYPE cmpv, ABI_TYPE newv,
MemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi,
DATA_SIZE, retaddr);
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr);
DATA_TYPE ret;
#if DATA_SIZE == 16
@@ -196,8 +193,7 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val,
MemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi,
DATA_SIZE, retaddr);
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr);
ABI_TYPE ret;
ret = qatomic_xchg__nocheck(haddr, BSWAP(val));
@@ -211,7 +207,7 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE val, MemOpIdx oi, uintptr_t retaddr) \
{ \
DATA_TYPE *haddr, ret; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \
haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \
ret = qatomic_##X(haddr, BSWAP(val)); \
ATOMIC_MMU_CLEANUP; \
atomic_trace_rmw_post(env, addr, oi); \
@@ -239,7 +235,7 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE xval, MemOpIdx oi, uintptr_t retaddr) \
{ \
XDATA_TYPE *haddr, ldo, ldn, old, new, val = xval; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \
haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \
smp_mb(); \
ldn = qatomic_read__nocheck(haddr); \
do { \

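The fetch-op helpers generated by the template above all share one shape: load the current value, compute the new one, and retry a compare-exchange until no other thread raced in between. A generic C11 version of that loop, shown here as an unsigned minimum; fetch_umin_u64 is a made-up example, not one of the helpers in the template:

    #include <stdatomic.h>
    #include <stdint.h>

    static uint64_t fetch_umin_u64(_Atomic uint64_t *p, uint64_t val)
    {
        uint64_t old = atomic_load_explicit(p, memory_order_relaxed);
        uint64_t new;

        do {
            new = old < val ? old : val;          /* the "op" step */
            /* on failure, 'old' is reloaded with the current value */
        } while (!atomic_compare_exchange_weak_explicit(
                     p, &old, new,
                     memory_order_seq_cst, memory_order_relaxed));
        return old;                               /* value before the op */
    }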

@@ -20,8 +20,9 @@
#include "qemu/osdep.h"
#include "sysemu/cpus.h"
#include "sysemu/tcg.h"
#include "exec/exec-all.h"
#include "qemu/plugin.h"
#include "internal-common.h"
#include "internal.h"
bool tcg_allowed;
@@ -35,7 +36,7 @@ void cpu_loop_exit_noexc(CPUState *cpu)
void cpu_loop_exit(CPUState *cpu)
{
/* Undo the setting in cpu_tb_exec. */
cpu->neg.can_do_io = true;
cpu->can_do_io = 1;
/* Undo any setting in generated code. */
qemu_plugin_disable_mem_helpers(cpu);
siglongjmp(cpu->jmp_env, 1);


@@ -30,6 +30,9 @@
#include "qemu/rcu.h"
#include "exec/log.h"
#include "qemu/main-loop.h"
#if defined(TARGET_I386) && !defined(CONFIG_USER_ONLY)
#include "hw/i386/apic.h"
#endif
#include "sysemu/cpus.h"
#include "exec/cpu-all.h"
#include "sysemu/cpu-timers.h"
@@ -39,8 +42,7 @@
#include "tb-jmp-cache.h"
#include "tb-hash.h"
#include "tb-context.h"
#include "internal-common.h"
#include "internal-target.h"
#include "internal.h"
/* -icount align implementation. */
@@ -71,7 +73,7 @@ static void align_clocks(SyncClocks *sc, CPUState *cpu)
return;
}
cpu_icount = cpu->icount_extra + cpu->neg.icount_decr.u16.low;
cpu_icount = cpu->icount_extra + cpu_neg(cpu)->icount_decr.u16.low;
sc->diff_clk += icount_to_ns(sc->last_cpu_icount - cpu_icount);
sc->last_cpu_icount = cpu_icount;
@@ -122,7 +124,7 @@ static void init_delay_params(SyncClocks *sc, CPUState *cpu)
sc->realtime_clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT);
sc->diff_clk = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) - sc->realtime_clock;
sc->last_cpu_icount
= cpu->icount_extra + cpu->neg.icount_decr.u16.low;
= cpu->icount_extra + cpu_neg(cpu)->icount_decr.u16.low;
if (sc->diff_clk < max_delay) {
max_delay = sc->diff_clk;
}
@@ -144,16 +146,6 @@ static void init_delay_params(SyncClocks *sc, const CPUState *cpu)
}
#endif /* CONFIG USER ONLY */
bool tcg_cflags_has(CPUState *cpu, uint32_t flags)
{
return cpu->tcg_cflags & flags;
}
void tcg_cflags_set(CPUState *cpu, uint32_t flags)
{
cpu->tcg_cflags |= flags;
}
uint32_t curr_cflags(CPUState *cpu)
{
uint32_t cflags = cpu->tcg_cflags;
@@ -230,7 +222,7 @@ static TranslationBlock *tb_htable_lookup(CPUState *cpu, vaddr pc,
struct tb_desc desc;
uint32_t h;
desc.env = cpu_env(cpu);
desc.env = cpu->env_ptr;
desc.cs_base = cs_base;
desc.flags = flags;
desc.cflags = cflags;
@@ -260,29 +252,43 @@ static inline TranslationBlock *tb_lookup(CPUState *cpu, vaddr pc,
hash = tb_jmp_cache_hash_func(pc);
jc = cpu->tb_jmp_cache;
tb = qatomic_read(&jc->array[hash].tb);
if (likely(tb &&
jc->array[hash].pc == pc &&
tb->cs_base == cs_base &&
tb->flags == flags &&
tb_cflags(tb) == cflags)) {
goto hit;
if (cflags & CF_PCREL) {
/* Use acquire to ensure current load of pc from jc. */
tb = qatomic_load_acquire(&jc->array[hash].tb);
if (likely(tb &&
jc->array[hash].pc == pc &&
tb->cs_base == cs_base &&
tb->flags == flags &&
tb_cflags(tb) == cflags)) {
return tb;
}
tb = tb_htable_lookup(cpu, pc, cs_base, flags, cflags);
if (tb == NULL) {
return NULL;
}
jc->array[hash].pc = pc;
/* Ensure pc is written first. */
qatomic_store_release(&jc->array[hash].tb, tb);
} else {
/* Use rcu_read to ensure current load of pc from *tb. */
tb = qatomic_rcu_read(&jc->array[hash].tb);
if (likely(tb &&
tb->pc == pc &&
tb->cs_base == cs_base &&
tb->flags == flags &&
tb_cflags(tb) == cflags)) {
return tb;
}
tb = tb_htable_lookup(cpu, pc, cs_base, flags, cflags);
if (tb == NULL) {
return NULL;
}
/* Use the pc value already stored in tb->pc. */
qatomic_set(&jc->array[hash].tb, tb);
}
tb = tb_htable_lookup(cpu, pc, cs_base, flags, cflags);
if (tb == NULL) {
return NULL;
}
jc->array[hash].pc = pc;
qatomic_set(&jc->array[hash].tb, tb);
hit:
/*
* As long as tb is not NULL, the contents are consistent. Therefore,
* the virtual PC has to match for non-CF_PCREL translations.
*/
assert((tb_cflags(tb) & CF_PCREL) || tb->pc == pc);
return tb;
}
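The CF_PCREL branch above publishes a (pc, tb) pair into a jump-cache slot with a store-release and reads it back with a load-acquire, so a reader that observes the new tb pointer also observes the pc written before it. A minimal sketch of that publish/lookup protocol in C11 atomics; the struct and function names are illustrative:

    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>

    struct tb;                               /* opaque translation block */

    struct jc_entry {
        uint64_t pc;                         /* plain field              */
        _Atomic(struct tb *) tb;             /* guards the pc above      */
    };

    /* writer: store pc first, then release-publish tb */
    static void jc_publish(struct jc_entry *e, uint64_t pc, struct tb *tb)
    {
        e->pc = pc;
        atomic_store_explicit(&e->tb, tb, memory_order_release);
    }

    /* reader: acquire-load tb; only if it is set may pc be trusted */
    static struct tb *jc_lookup(struct jc_entry *e, uint64_t pc)
    {
        struct tb *tb = atomic_load_explicit(&e->tb, memory_order_acquire);
        return (tb && e->pc == pc) ? tb : NULL;
    }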
@@ -350,9 +356,9 @@ static bool check_for_breakpoints_slow(CPUState *cpu, vaddr pc,
#ifdef CONFIG_USER_ONLY
g_assert_not_reached();
#else
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
assert(tcg_ops->debug_check_breakpoint);
match_bp = tcg_ops->debug_check_breakpoint(cpu);
CPUClass *cc = CPU_GET_CLASS(cpu);
assert(cc->tcg_ops->debug_check_breakpoint);
match_bp = cc->tcg_ops->debug_check_breakpoint(cpu);
#endif
}
@@ -378,7 +384,7 @@ static bool check_for_breakpoints_slow(CPUState *cpu, vaddr pc,
* breakpoints are removed.
*/
if (match_page) {
*cflags = (*cflags & ~CF_COUNT_MASK) | CF_NO_GOTO_TB | CF_BP_PAGE | 1;
*cflags = (*cflags & ~CF_COUNT_MASK) | CF_NO_GOTO_TB | 1;
}
return false;
}
@@ -406,14 +412,6 @@ const void *HELPER(lookup_tb_ptr)(CPUArchState *env)
uint64_t cs_base;
uint32_t flags, cflags;
/*
* By definition we've just finished a TB, so I/O is OK.
* Avoid the possibility of calling cpu_io_recompile() if
* a page table walk triggered by tb_lookup() calling
* probe_access_internal() happens to touch an MMIO device.
* The next TB, if we chain to it, will clear the flag again.
*/
cpu->neg.can_do_io = true;
cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
cflags = curr_cflags(cpu);
@@ -446,6 +444,7 @@ const void *HELPER(lookup_tb_ptr)(CPUArchState *env)
static inline TranslationBlock * QEMU_DISABLE_CFI
cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
{
CPUArchState *env = cpu->env_ptr;
uintptr_t ret;
TranslationBlock *last_tb;
const void *tb_ptr = itb->tc.ptr;
@@ -455,8 +454,8 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
}
qemu_thread_jit_execute();
ret = tcg_qemu_tb_exec(cpu_env(cpu), tb_ptr);
cpu->neg.can_do_io = true;
ret = tcg_qemu_tb_exec(env, tb_ptr);
cpu->can_do_io = 1;
qemu_plugin_disable_mem_helpers(cpu);
/*
* TODO: Delay swapping back to the read-write region of the TB
@@ -476,11 +475,10 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
* counter hit zero); we must restore the guest PC to the address
* of the start of the TB.
*/
CPUClass *cc = cpu->cc;
const TCGCPUOps *tcg_ops = cc->tcg_ops;
CPUClass *cc = CPU_GET_CLASS(cpu);
if (tcg_ops->synchronize_from_tb) {
tcg_ops->synchronize_from_tb(cpu, last_tb);
if (cc->tcg_ops->synchronize_from_tb) {
cc->tcg_ops->synchronize_from_tb(cpu, last_tb);
} else {
tcg_debug_assert(!(tb_cflags(last_tb) & CF_PCREL));
assert(cc->set_pc);
@@ -512,19 +510,19 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
static void cpu_exec_enter(CPUState *cpu)
{
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
CPUClass *cc = CPU_GET_CLASS(cpu);
if (tcg_ops->cpu_exec_enter) {
tcg_ops->cpu_exec_enter(cpu);
if (cc->tcg_ops->cpu_exec_enter) {
cc->tcg_ops->cpu_exec_enter(cpu);
}
}
static void cpu_exec_exit(CPUState *cpu)
{
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
CPUClass *cc = CPU_GET_CLASS(cpu);
if (tcg_ops->cpu_exec_exit) {
tcg_ops->cpu_exec_exit(cpu);
if (cc->tcg_ops->cpu_exec_exit) {
cc->tcg_ops->cpu_exec_exit(cpu);
}
}
@@ -559,15 +557,15 @@ static void cpu_exec_longjmp_cleanup(CPUState *cpu)
tcg_ctx->gen_tb = NULL;
}
#endif
if (bql_locked()) {
bql_unlock();
if (qemu_mutex_iothread_locked()) {
qemu_mutex_unlock_iothread();
}
assert_no_pages_locked();
}
void cpu_exec_step_atomic(CPUState *cpu)
{
CPUArchState *env = cpu_env(cpu);
CPUArchState *env = cpu->env_ptr;
TranslationBlock *tb;
vaddr pc;
uint64_t cs_base;
@@ -678,10 +676,16 @@ static inline bool cpu_handle_halt(CPUState *cpu)
{
#ifndef CONFIG_USER_ONLY
if (cpu->halted) {
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
bool leave_halt = tcg_ops->cpu_exec_halt(cpu);
if (!leave_halt) {
#if defined(TARGET_I386)
if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
X86CPU *x86_cpu = X86_CPU(cpu);
qemu_mutex_lock_iothread();
apic_poll_irq(x86_cpu->apic_state);
cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
qemu_mutex_unlock_iothread();
}
#endif /* TARGET_I386 */
if (!cpu_has_work(cpu)) {
return true;
}
@@ -694,7 +698,7 @@ static inline bool cpu_handle_halt(CPUState *cpu)
static inline void cpu_handle_debug_exception(CPUState *cpu)
{
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
CPUClass *cc = CPU_GET_CLASS(cpu);
CPUWatchpoint *wp;
if (!cpu->watchpoint_hit) {
@@ -703,8 +707,8 @@ static inline void cpu_handle_debug_exception(CPUState *cpu)
}
}
if (tcg_ops->debug_excp_handler) {
tcg_ops->debug_excp_handler(cpu);
if (cc->tcg_ops->debug_excp_handler) {
cc->tcg_ops->debug_excp_handler(cpu);
}
}
@@ -713,7 +717,7 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
if (cpu->exception_index < 0) {
#ifndef CONFIG_USER_ONLY
if (replay_has_exception()
&& cpu->neg.icount_decr.u16.low + cpu->icount_extra == 0) {
&& cpu_neg(cpu)->icount_decr.u16.low + cpu->icount_extra == 0) {
/* Execute just one insn to trigger exception pending in the log */
cpu->cflags_next_tb = (curr_cflags(cpu) & ~CF_USE_ICOUNT)
| CF_NOIRQ | 1;
@@ -721,7 +725,6 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
#endif
return false;
}
if (cpu->exception_index >= EXCP_INTERRUPT) {
/* exit request from the cpu execution loop */
*ret = cpu->exception_index;
@@ -730,59 +733,62 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
}
cpu->exception_index = -1;
return true;
}
} else {
#if defined(CONFIG_USER_ONLY)
/*
* If user mode only, we simulate a fake exception which will be
* handled outside the cpu execution loop.
*/
/* if user mode only, we simulate a fake exception
which will be handled outside the cpu execution
loop */
#if defined(TARGET_I386)
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
tcg_ops->fake_user_interrupt(cpu);
CPUClass *cc = CPU_GET_CLASS(cpu);
cc->tcg_ops->fake_user_interrupt(cpu);
#endif /* TARGET_I386 */
*ret = cpu->exception_index;
cpu->exception_index = -1;
return true;
#else
if (replay_exception()) {
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
bql_lock();
tcg_ops->do_interrupt(cpu);
bql_unlock();
*ret = cpu->exception_index;
cpu->exception_index = -1;
return true;
#else
if (replay_exception()) {
CPUClass *cc = CPU_GET_CLASS(cpu);
qemu_mutex_lock_iothread();
cc->tcg_ops->do_interrupt(cpu);
qemu_mutex_unlock_iothread();
cpu->exception_index = -1;
if (unlikely(cpu->singlestep_enabled)) {
/*
* After processing the exception, ensure an EXCP_DEBUG is
* raised when single-stepping so that GDB doesn't miss the
* next instruction.
*/
*ret = EXCP_DEBUG;
cpu_handle_debug_exception(cpu);
if (unlikely(cpu->singlestep_enabled)) {
/*
* After processing the exception, ensure an EXCP_DEBUG is
* raised when single-stepping so that GDB doesn't miss the
* next instruction.
*/
*ret = EXCP_DEBUG;
cpu_handle_debug_exception(cpu);
return true;
}
} else if (!replay_has_interrupt()) {
/* give a chance to iothread in replay mode */
*ret = EXCP_INTERRUPT;
return true;
}
} else if (!replay_has_interrupt()) {
/* give a chance to iothread in replay mode */
*ret = EXCP_INTERRUPT;
return true;
}
#endif
}
return false;
}
static inline bool icount_exit_request(CPUState *cpu)
#ifndef CONFIG_USER_ONLY
/*
* CPU_INTERRUPT_POLL is a virtual event which gets converted into a
* "real" interrupt event later. It does not need to be recorded for
* replay purposes.
*/
static inline bool need_replay_interrupt(int interrupt_request)
{
if (!icount_enabled()) {
return false;
}
if (cpu->cflags_next_tb != -1 && !(cpu->cflags_next_tb & CF_USE_ICOUNT)) {
return false;
}
return cpu->neg.icount_decr.u16.low + cpu->icount_extra == 0;
#if defined(TARGET_I386)
return !(interrupt_request & CPU_INTERRUPT_POLL);
#else
return true;
#endif
}
#endif /* !CONFIG_USER_ONLY */
static inline bool cpu_handle_interrupt(CPUState *cpu,
TranslationBlock **last_tb)
@@ -801,11 +807,11 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
* Ensure zeroing happens before reading cpu->exit_request or
* cpu->interrupt_request (see also smp_wmb in cpu_exit())
*/
qatomic_set_mb(&cpu->neg.icount_decr.u16.high, 0);
qatomic_set_mb(&cpu_neg(cpu)->icount_decr.u16.high, 0);
if (unlikely(qatomic_read(&cpu->interrupt_request))) {
int interrupt_request;
bql_lock();
qemu_mutex_lock_iothread();
interrupt_request = cpu->interrupt_request;
if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
/* Mask out external interrupts for this step. */
@@ -814,7 +820,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
if (interrupt_request & CPU_INTERRUPT_DEBUG) {
cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
cpu->exception_index = EXCP_DEBUG;
bql_unlock();
qemu_mutex_unlock_iothread();
return true;
}
#if !defined(CONFIG_USER_ONLY)
@@ -825,7 +831,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
cpu->interrupt_request &= ~CPU_INTERRUPT_HALT;
cpu->halted = 1;
cpu->exception_index = EXCP_HLT;
bql_unlock();
qemu_mutex_unlock_iothread();
return true;
}
#if defined(TARGET_I386)
@@ -836,14 +842,14 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
cpu_svm_check_intercept_param(env, SVM_EXIT_INIT, 0, 0);
do_cpu_init(x86_cpu);
cpu->exception_index = EXCP_HALTED;
bql_unlock();
qemu_mutex_unlock_iothread();
return true;
}
#else
else if (interrupt_request & CPU_INTERRUPT_RESET) {
replay_interrupt();
cpu_reset(cpu);
bql_unlock();
qemu_mutex_unlock_iothread();
return true;
}
#endif /* !TARGET_I386 */
@@ -852,11 +858,11 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
True when it is, and we should restart on a new TB,
and via longjmp via cpu_loop_exit. */
else {
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
CPUClass *cc = CPU_GET_CLASS(cpu);
if (tcg_ops->cpu_exec_interrupt(cpu, interrupt_request)) {
if (!tcg_ops->need_replay_interrupt ||
tcg_ops->need_replay_interrupt(interrupt_request)) {
if (cc->tcg_ops->cpu_exec_interrupt &&
cc->tcg_ops->cpu_exec_interrupt(cpu, interrupt_request)) {
if (need_replay_interrupt(interrupt_request)) {
replay_interrupt();
}
/*
@@ -866,7 +872,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
*/
if (unlikely(cpu->singlestep_enabled)) {
cpu->exception_index = EXCP_DEBUG;
bql_unlock();
qemu_mutex_unlock_iothread();
return true;
}
cpu->exception_index = -1;
@@ -885,11 +891,14 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
}
/* If we exit via cpu_loop_exit/longjmp it is reset in cpu_exec */
bql_unlock();
qemu_mutex_unlock_iothread();
}
/* Finally, check if we need to exit to the main loop. */
if (unlikely(qatomic_read(&cpu->exit_request)) || icount_exit_request(cpu)) {
if (unlikely(qatomic_read(&cpu->exit_request))
|| (icount_enabled()
&& (cpu->cflags_next_tb == -1 || cpu->cflags_next_tb & CF_USE_ICOUNT)
&& cpu_neg(cpu)->icount_decr.u16.low + cpu->icount_extra == 0)) {
qatomic_set(&cpu->exit_request, 0);
if (cpu->exception_index == -1) {
cpu->exception_index = EXCP_INTERRUPT;
@@ -904,6 +913,8 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
vaddr pc, TranslationBlock **last_tb,
int *tb_exit)
{
int32_t insns_left;
trace_exec_tb(tb, pc);
tb = cpu_tb_exec(cpu, tb, tb_exit);
if (*tb_exit != TB_EXIT_REQUESTED) {
@@ -912,7 +923,8 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
}
*last_tb = NULL;
if (cpu_loop_exit_requested(cpu)) {
insns_left = qatomic_read(&cpu_neg(cpu)->icount_decr.u32);
if (insns_left < 0) {
/* Something asked us to stop executing chained TBs; just
* continue round the main loop. Whatever requested the exit
* will also have set something else (eg exit_request or
@@ -929,8 +941,8 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
/* Ensure global icount has gone forward */
icount_update(cpu);
/* Refill decrementer and continue execution. */
int32_t insns_left = MIN(0xffff, cpu->icount_budget);
cpu->neg.icount_decr.u16.low = insns_left;
insns_left = MIN(0xffff, cpu->icount_budget);
cpu_neg(cpu)->icount_decr.u16.low = insns_left;
cpu->icount_extra = cpu->icount_budget - insns_left;
/*
@@ -964,7 +976,7 @@ cpu_exec_loop(CPUState *cpu, SyncClocks *sc)
uint64_t cs_base;
uint32_t flags, cflags;
cpu_get_tb_cpu_state(cpu_env(cpu), &pc, &cs_base, &flags);
cpu_get_tb_cpu_state(cpu->env_ptr, &pc, &cs_base, &flags);
/*
* When requested, use an exact setting for cflags for the next
@@ -999,8 +1011,14 @@ cpu_exec_loop(CPUState *cpu, SyncClocks *sc)
*/
h = tb_jmp_cache_hash_func(pc);
jc = cpu->tb_jmp_cache;
jc->array[h].pc = pc;
qatomic_set(&jc->array[h].tb, tb);
if (cflags & CF_PCREL) {
jc->array[h].pc = pc;
/* Ensure pc is written first. */
qatomic_store_release(&jc->array[h].tb, tb);
} else {
/* Use the pc value already stored in tb->pc. */
qatomic_set(&jc->array[h].tb, tb);
}
}
#ifndef CONFIG_USER_ONLY
@@ -1051,7 +1069,7 @@ int cpu_exec(CPUState *cpu)
return EXCP_HALTED;
}
RCU_READ_LOCK_GUARD();
rcu_read_lock();
cpu_exec_enter(cpu);
/*
@@ -1065,20 +1083,18 @@ int cpu_exec(CPUState *cpu)
ret = cpu_exec_setjmp(cpu, &sc);
cpu_exec_exit(cpu);
rcu_read_unlock();
return ret;
}
bool tcg_exec_realizefn(CPUState *cpu, Error **errp)
void tcg_exec_realizefn(CPUState *cpu, Error **errp)
{
static bool tcg_target_initialized;
CPUClass *cc = CPU_GET_CLASS(cpu);
if (!tcg_target_initialized) {
/* Check mandatory TCGCPUOps handlers */
#ifndef CONFIG_USER_ONLY
assert(cpu->cc->tcg_ops->cpu_exec_halt);
assert(cpu->cc->tcg_ops->cpu_exec_interrupt);
#endif /* !CONFIG_USER_ONLY */
cpu->cc->tcg_ops->initialize();
cc->tcg_ops->initialize();
tcg_target_initialized = true;
}
@@ -1088,8 +1104,6 @@ bool tcg_exec_realizefn(CPUState *cpu, Error **errp)
tcg_iommu_init_notifier_list(cpu);
#endif /* !CONFIG_USER_ONLY */
/* qemu_plugin_vcpu_init_hook delayed until cpu_index assigned. */
return true;
}
/* undo the initializations in reverse order */

File diff suppressed because it is too large


@@ -6,10 +6,11 @@
#include "qemu/osdep.h"
#include "qemu/lockable.h"
#include "tcg/debuginfo.h"
#include <elfutils/libdwfl.h>
#include "debuginfo.h"
static QemuMutex lock;
static Dwfl *dwfl;
static const Dwfl_Callbacks dwfl_callbacks = {


@@ -4,8 +4,8 @@
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef TCG_DEBUGINFO_H
#define TCG_DEBUGINFO_H
#ifndef ACCEL_TCG_DEBUGINFO_H
#define ACCEL_TCG_DEBUGINFO_H
#include "qemu/bitops.h"


@@ -1,59 +0,0 @@
/*
* Internal execution defines for qemu (target agnostic)
*
* Copyright (c) 2003 Fabrice Bellard
*
* SPDX-License-Identifier: LGPL-2.1-or-later
*/
#ifndef ACCEL_TCG_INTERNAL_COMMON_H
#define ACCEL_TCG_INTERNAL_COMMON_H
#include "exec/cpu-common.h"
#include "exec/translation-block.h"
extern int64_t max_delay;
extern int64_t max_advance;
extern bool one_insn_per_tb;
/*
* Return true if CS is not running in parallel with other cpus, either
* because there are no other cpus or we are within an exclusive context.
*/
static inline bool cpu_in_serial_context(CPUState *cs)
{
return !tcg_cflags_has(cs, CF_PARALLEL) || cpu_in_exclusive_context(cs);
}
/**
* cpu_plugin_mem_cbs_enabled() - are plugin memory callbacks enabled?
* @cs: CPUState pointer
*
* The memory callbacks are installed if a plugin has instrumented an
* instruction for memory. This can be useful to know if you want to
* force a slow path for a series of memory accesses.
*/
static inline bool cpu_plugin_mem_cbs_enabled(const CPUState *cpu)
{
#ifdef CONFIG_PLUGIN
return !!cpu->neg.plugin_mem_cbs;
#else
return false;
#endif
}
TranslationBlock *tb_gen_code(CPUState *cpu, vaddr pc,
uint64_t cs_base, uint32_t flags,
int cflags);
void page_init(void);
void tb_htable_init(void);
void tb_reset_jump(TranslationBlock *tb, int n);
TranslationBlock *tb_link_page(TranslationBlock *tb);
void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
uintptr_t host_pc);
bool tcg_exec_realizefn(CPUState *cpu, Error **errp);
void tcg_exec_unrealizefn(CPUState *cpu);
#endif


@@ -1,13 +1,13 @@
/*
* Internal execution defines for qemu (target specific)
* Internal execution defines for qemu
*
* Copyright (c) 2003 Fabrice Bellard
*
* SPDX-License-Identifier: LGPL-2.1-or-later
*/
#ifndef ACCEL_TCG_INTERNAL_TARGET_H
#define ACCEL_TCG_INTERNAL_TARGET_H
#ifndef ACCEL_TCG_INTERNAL_H
#define ACCEL_TCG_INTERNAL_H
#include "exec/exec-all.h"
#include "exec/translate-all.h"
@@ -69,7 +69,16 @@ void tb_invalidate_phys_range_fast(ram_addr_t ram_addr,
G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
#endif /* CONFIG_SOFTMMU */
TranslationBlock *tb_gen_code(CPUState *cpu, vaddr pc,
uint64_t cs_base, uint32_t flags,
int cflags);
void page_init(void);
void tb_htable_init(void);
void tb_reset_jump(TranslationBlock *tb, int n);
TranslationBlock *tb_link_page(TranslationBlock *tb);
bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc);
void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
uintptr_t host_pc);
/* Return the current PC from CPU, which may be cached in TB. */
static inline vaddr log_pc(CPUState *cpu, const TranslationBlock *tb)
@@ -81,6 +90,20 @@ static inline vaddr log_pc(CPUState *cpu, const TranslationBlock *tb)
}
}
/*
* Return true if CS is not running in parallel with other cpus, either
* because there are no other cpus or we are within an exclusive context.
*/
static inline bool cpu_in_serial_context(CPUState *cs)
{
return !(cs->tcg_cflags & CF_PARALLEL) || cpu_in_exclusive_context(cs);
}
extern int64_t max_delay;
extern int64_t max_advance;
extern bool one_insn_per_tb;
/**
* tcg_req_mo:
* @type: TCGBar


@@ -9,8 +9,8 @@
* See the COPYING file in the top-level directory.
*/
#include "host/load-extract-al16-al8.h.inc"
#include "host/store-insert-al16.h.inc"
#include "host/load-extract-al16-al8.h"
#include "host/store-insert-al16.h"
#ifdef CONFIG_ATOMIC64
# define HAVE_al8 true
@@ -26,7 +26,7 @@
* If the operation must be split into two operations to be
* examined separately for atomicity, return -lg2.
*/
static int required_atomicity(CPUState *cpu, uintptr_t p, MemOp memop)
static int required_atomicity(CPUArchState *env, uintptr_t p, MemOp memop)
{
MemOp atom = memop & MO_ATOM_MASK;
MemOp size = memop & MO_SIZE;
@@ -76,7 +76,7 @@ static int required_atomicity(CPUState *cpu, uintptr_t p, MemOp memop)
/*
* Examine the alignment of p to determine if there are subobjects
* that must be aligned. Note that we only really need ctz4() --
* any more significant bits are discarded by the immediately
* any more sigificant bits are discarded by the immediately
* following comparison.
*/
tmp = ctz32(p);
@@ -93,7 +93,7 @@ static int required_atomicity(CPUState *cpu, uintptr_t p, MemOp memop)
* host atomicity in order to avoid racing. This reduction
* avoids looping with cpu_loop_exit_atomic.
*/
if (cpu_in_serial_context(cpu)) {
if (cpu_in_serial_context(env_cpu(env))) {
return MO_8;
}
return atmax;
@@ -139,14 +139,14 @@ static inline uint64_t load_atomic8(void *pv)
/**
* load_atomic8_or_exit:
* @cpu: generic cpu state
* @env: cpu context
* @ra: host unwind address
* @pv: host address
*
* Atomically load 8 aligned bytes from @pv.
* If this is not possible, longjmp out to restart serially.
*/
static uint64_t load_atomic8_or_exit(CPUState *cpu, uintptr_t ra, void *pv)
static uint64_t load_atomic8_or_exit(CPUArchState *env, uintptr_t ra, void *pv)
{
if (HAVE_al8) {
return load_atomic8(pv);
@@ -168,19 +168,19 @@ static uint64_t load_atomic8_or_exit(CPUState *cpu, uintptr_t ra, void *pv)
#endif
/* Ultimate fallback: re-execute in serial context. */
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
/**
* load_atomic16_or_exit:
* @cpu: generic cpu state
* @env: cpu context
* @ra: host unwind address
* @pv: host address
*
* Atomically load 16 aligned bytes from @pv.
* If this is not possible, longjmp out to restart serially.
*/
static Int128 load_atomic16_or_exit(CPUState *cpu, uintptr_t ra, void *pv)
static Int128 load_atomic16_or_exit(CPUArchState *env, uintptr_t ra, void *pv)
{
Int128 *p = __builtin_assume_aligned(pv, 16);
@@ -212,7 +212,7 @@ static Int128 load_atomic16_or_exit(CPUState *cpu, uintptr_t ra, void *pv)
}
/* Ultimate fallback: re-execute in serial context. */
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
/**
@@ -263,7 +263,7 @@ static uint64_t load_atom_extract_al8x2(void *pv)
/**
* load_atom_extract_al8_or_exit:
* @cpu: generic cpu state
* @env: cpu context
* @ra: host unwind address
* @pv: host address
* @s: object size in bytes, @s <= 4.
@@ -273,7 +273,7 @@ static uint64_t load_atom_extract_al8x2(void *pv)
* 8-byte load and extract.
* The value is returned in the low bits of a uint32_t.
*/
static uint32_t load_atom_extract_al8_or_exit(CPUState *cpu, uintptr_t ra,
static uint32_t load_atom_extract_al8_or_exit(CPUArchState *env, uintptr_t ra,
void *pv, int s)
{
uintptr_t pi = (uintptr_t)pv;
@@ -281,12 +281,12 @@ static uint32_t load_atom_extract_al8_or_exit(CPUState *cpu, uintptr_t ra,
int shr = (HOST_BIG_ENDIAN ? 8 - s - o : o) * 8;
pv = (void *)(pi & ~7);
return load_atomic8_or_exit(cpu, ra, pv) >> shr;
return load_atomic8_or_exit(env, ra, pv) >> shr;
}
/**
* load_atom_extract_al16_or_exit:
* @cpu: generic cpu state
* @env: cpu context
* @ra: host unwind address
* @p: host address
* @s: object size in bytes, @s <= 8.
@@ -299,7 +299,7 @@ static uint32_t load_atom_extract_al8_or_exit(CPUState *cpu, uintptr_t ra,
*
* If this is not possible, longjmp out to restart serially.
*/
static uint64_t load_atom_extract_al16_or_exit(CPUState *cpu, uintptr_t ra,
static uint64_t load_atom_extract_al16_or_exit(CPUArchState *env, uintptr_t ra,
void *pv, int s)
{
uintptr_t pi = (uintptr_t)pv;
@@ -312,7 +312,7 @@ static uint64_t load_atom_extract_al16_or_exit(CPUState *cpu, uintptr_t ra,
* Provoke SIGBUS if possible otherwise.
*/
pv = (void *)(pi & ~7);
r = load_atomic16_or_exit(cpu, ra, pv);
r = load_atomic16_or_exit(env, ra, pv);
r = int128_urshift(r, shr);
return int128_getlo(r);
@@ -394,7 +394,7 @@ static inline uint64_t load_atom_8_by_8_or_4(void *pv)
*
* Load 2 bytes from @p, honoring the atomicity of @memop.
*/
static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra,
static uint16_t load_atom_2(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop)
{
uintptr_t pi = (uintptr_t)pv;
@@ -410,7 +410,7 @@ static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra,
}
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
switch (atmax) {
case MO_8:
return lduw_he_p(pv);
@@ -421,9 +421,9 @@ static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra,
return load_atomic4(pv - 1) >> 8;
}
if ((pi & 15) != 7) {
return load_atom_extract_al8_or_exit(cpu, ra, pv, 2);
return load_atom_extract_al8_or_exit(env, ra, pv, 2);
}
return load_atom_extract_al16_or_exit(cpu, ra, pv, 2);
return load_atom_extract_al16_or_exit(env, ra, pv, 2);
default:
g_assert_not_reached();
}
@@ -436,7 +436,7 @@ static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra,
*
* Load 4 bytes from @p, honoring the atomicity of @memop.
*/
static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra,
static uint32_t load_atom_4(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop)
{
uintptr_t pi = (uintptr_t)pv;
@@ -452,7 +452,7 @@ static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra,
}
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
switch (atmax) {
case MO_8:
case MO_16:
@@ -466,9 +466,9 @@ static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra,
return load_atom_extract_al4x2(pv);
case MO_32:
if (!(pi & 4)) {
return load_atom_extract_al8_or_exit(cpu, ra, pv, 4);
return load_atom_extract_al8_or_exit(env, ra, pv, 4);
}
return load_atom_extract_al16_or_exit(cpu, ra, pv, 4);
return load_atom_extract_al16_or_exit(env, ra, pv, 4);
default:
g_assert_not_reached();
}
@@ -481,7 +481,7 @@ static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra,
*
* Load 8 bytes from @p, honoring the atomicity of @memop.
*/
static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra,
static uint64_t load_atom_8(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop)
{
uintptr_t pi = (uintptr_t)pv;
@@ -498,12 +498,12 @@ static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra,
return load_atom_extract_al16_or_al8(pv, 8);
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
if (atmax == MO_64) {
if (!HAVE_al8 && (pi & 7) == 0) {
load_atomic8_or_exit(cpu, ra, pv);
load_atomic8_or_exit(env, ra, pv);
}
return load_atom_extract_al16_or_exit(cpu, ra, pv, 8);
return load_atom_extract_al16_or_exit(env, ra, pv, 8);
}
if (HAVE_al8_fast) {
return load_atom_extract_al8x2(pv);
@@ -519,7 +519,7 @@ static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra,
if (HAVE_al8) {
return load_atom_extract_al8x2(pv);
}
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
default:
g_assert_not_reached();
}
@@ -532,7 +532,7 @@ static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra,
*
* Load 16 bytes from @p, honoring the atomicity of @memop.
*/
static Int128 load_atom_16(CPUState *cpu, uintptr_t ra,
static Int128 load_atom_16(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop)
{
uintptr_t pi = (uintptr_t)pv;
@@ -548,7 +548,7 @@ static Int128 load_atom_16(CPUState *cpu, uintptr_t ra,
return atomic16_read_ro(pv);
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
switch (atmax) {
case MO_8:
memcpy(&r, pv, 16);
@@ -563,20 +563,20 @@ static Int128 load_atom_16(CPUState *cpu, uintptr_t ra,
break;
case MO_64:
if (!HAVE_al8) {
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
a = load_atomic8(pv);
b = load_atomic8(pv + 8);
break;
case -MO_64:
if (!HAVE_al8) {
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
a = load_atom_extract_al8x2(pv);
b = load_atom_extract_al8x2(pv + 8);
break;
case MO_128:
return load_atomic16_or_exit(cpu, ra, pv);
return load_atomic16_or_exit(env, ra, pv);
default:
g_assert_not_reached();
}
@@ -825,7 +825,7 @@ static uint64_t store_whole_le16(void *pv, int size, Int128 val_le)
int sh = o * 8;
Int128 m, v;
qemu_build_assert(HAVE_CMPXCHG128);
qemu_build_assert(HAVE_ATOMIC128_RW);
/* Like MAKE_64BIT_MASK(0, sz), but larger. */
if (sz <= 64) {
@@ -857,7 +857,7 @@ static uint64_t store_whole_le16(void *pv, int size, Int128 val_le)
*
* Store 2 bytes to @p, honoring the atomicity of @memop.
*/
static void store_atom_2(CPUState *cpu, uintptr_t ra,
static void store_atom_2(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop, uint16_t val)
{
uintptr_t pi = (uintptr_t)pv;
@@ -868,7 +868,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
return;
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
if (atmax == MO_8) {
stw_he_p(pv, val);
return;
@@ -887,7 +887,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
return;
}
} else if ((pi & 15) == 7) {
if (HAVE_CMPXCHG128) {
if (HAVE_ATOMIC128_RW) {
Int128 v = int128_lshift(int128_make64(val), 56);
Int128 m = int128_lshift(int128_make64(0xffff), 56);
store_atom_insert_al16(pv - 7, v, m);
@@ -897,7 +897,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
g_assert_not_reached();
}
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
/**
@@ -908,7 +908,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
*
* Store 4 bytes to @p, honoring the atomicity of @memop.
*/
static void store_atom_4(CPUState *cpu, uintptr_t ra,
static void store_atom_4(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop, uint32_t val)
{
uintptr_t pi = (uintptr_t)pv;
@@ -919,7 +919,7 @@ static void store_atom_4(CPUState *cpu, uintptr_t ra,
return;
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
switch (atmax) {
case MO_8:
stl_he_p(pv, val);
@@ -956,12 +956,12 @@ static void store_atom_4(CPUState *cpu, uintptr_t ra,
return;
}
} else {
if (HAVE_CMPXCHG128) {
if (HAVE_ATOMIC128_RW) {
store_whole_le16(pv, 4, int128_make64(cpu_to_le32(val)));
return;
}
}
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
default:
g_assert_not_reached();
}
@@ -975,7 +975,7 @@ static void store_atom_4(CPUState *cpu, uintptr_t ra,
*
* Store 8 bytes to @p, honoring the atomicity of @memop.
*/
static void store_atom_8(CPUState *cpu, uintptr_t ra,
static void store_atom_8(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop, uint64_t val)
{
uintptr_t pi = (uintptr_t)pv;
@@ -986,7 +986,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
return;
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
switch (atmax) {
case MO_8:
stq_he_p(pv, val);
@@ -1021,7 +1021,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
}
break;
case MO_64:
if (HAVE_CMPXCHG128) {
if (HAVE_ATOMIC128_RW) {
store_whole_le16(pv, 8, int128_make64(cpu_to_le64(val)));
return;
}
@@ -1029,7 +1029,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
default:
g_assert_not_reached();
}
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
/**
@@ -1040,7 +1040,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
*
* Store 16 bytes to @p, honoring the atomicity of @memop.
*/
static void store_atom_16(CPUState *cpu, uintptr_t ra,
static void store_atom_16(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop, Int128 val)
{
uintptr_t pi = (uintptr_t)pv;
@@ -1052,7 +1052,7 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
return;
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
a = HOST_BIG_ENDIAN ? int128_gethi(val) : int128_getlo(val);
b = HOST_BIG_ENDIAN ? int128_getlo(val) : int128_gethi(val);
@@ -1076,7 +1076,7 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
}
break;
case -MO_64:
if (HAVE_CMPXCHG128) {
if (HAVE_ATOMIC128_RW) {
uint64_t val_le;
int s2 = pi & 15;
int s1 = 16 - s2;
@@ -1103,9 +1103,13 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
}
break;
case MO_128:
if (HAVE_ATOMIC128_RW) {
atomic16_set(pv, val);
return;
}
break;
default:
g_assert_not_reached();
}
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
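
The store_whole_le16 hunk above opens with the comment "Like MAKE_64BIT_MASK(0, sz), but larger" before branching on sz <= 64. The following is a minimal standalone sketch of that mask idiom, modelling Int128 as a lo/hi pair; all names here are hypothetical stand-ins, not the QEMU implementation.

/* Sketch of a byte mask wider than 64 bits, modelled on the
 * "Like MAKE_64BIT_MASK(0, sz), but larger" comment above.
 * Int128 is stood in for by a lo/hi pair; names are hypothetical. */
#include <assert.h>
#include <stdint.h>

typedef struct { uint64_t lo, hi; } i128_sketch;

/* MAKE_64BIT_MASK(0, len) equivalent for len in [1, 64]. */
static uint64_t mask64(unsigned len)
{
    return len == 64 ? ~0ull : (1ull << len) - 1;
}

/* Low 'sz' bits set, for sz in [1, 128]. */
static i128_sketch mask128(unsigned sz)
{
    i128_sketch m;

    assert(sz >= 1 && sz <= 128);
    if (sz <= 64) {
        m.lo = mask64(sz);
        m.hi = 0;
    } else {
        m.lo = ~0ull;
        m.hi = mask64(sz - 64);
    }
    return m;
}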


@@ -8,235 +8,6 @@
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
/*
* Load helpers for tcg-ldst.h
*/
tcg_target_ulong helper_ldub_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
return do_ld1_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
tcg_target_ulong helper_lduw_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
return do_ld2_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
tcg_target_ulong helper_ldul_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
return do_ld4_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
uint64_t helper_ldq_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
return do_ld8_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
/*
* Provide signed versions of the load routines as well. We can of course
* avoid this for 64-bit data, or for 32-bit data on 32-bit host.
*/
tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
return (int8_t)helper_ldub_mmu(env, addr, oi, retaddr);
}
tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
return (int16_t)helper_lduw_mmu(env, addr, oi, retaddr);
}
tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
return (int32_t)helper_ldul_mmu(env, addr, oi, retaddr);
}
Int128 helper_ld16_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
return do_ld16_mmu(env_cpu(env), addr, oi, retaddr);
}
Int128 helper_ld_i128(CPUArchState *env, uint64_t addr, uint32_t oi)
{
return helper_ld16_mmu(env, addr, oi, GETPC());
}
/*
* Store helpers for tcg-ldst.h
*/
void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
do_st1_mmu(env_cpu(env), addr, val, oi, ra);
}
void helper_stw_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
do_st2_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_stl_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
do_st4_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_stq_mmu(CPUArchState *env, uint64_t addr, uint64_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
do_st8_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_st16_mmu(CPUArchState *env, uint64_t addr, Int128 val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
do_st16_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_st_i128(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi)
{
helper_st16_mmu(env, addr, val, oi, GETPC());
}
/*
* Load helpers for cpu_ldst.h
*/
static void plugin_load_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi)
{
if (cpu_plugin_mem_cbs_enabled(env_cpu(env))) {
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
}
}
uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra)
{
uint8_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_UB);
ret = do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, oi);
return ret;
}
uint16_t cpu_ldw_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint16_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
ret = do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, oi);
return ret;
}
uint32_t cpu_ldl_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint32_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
ret = do_ld4_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, oi);
return ret;
}
uint64_t cpu_ldq_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint64_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
ret = do_ld8_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, oi);
return ret;
}
Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
Int128 ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
ret = do_ld16_mmu(env_cpu(env), addr, oi, ra);
plugin_load_cb(env, addr, oi);
return ret;
}
/*
* Store helpers for cpu_ldst.h
*/
static void plugin_store_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi)
{
if (cpu_plugin_mem_cbs_enabled(env_cpu(env))) {
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
}
void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
MemOpIdx oi, uintptr_t retaddr)
{
helper_stb_mmu(env, addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
void cpu_stw_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
do_st2_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
void cpu_stl_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
do_st4_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
void cpu_stq_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
do_st8_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
void cpu_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
do_st16_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
/*
* Wrappers of the above
*/
uint32_t cpu_ldub_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra)
@@ -358,8 +129,7 @@ void cpu_stq_le_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint64_t val,
uint32_t cpu_ldub_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldub_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_ldub_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
int cpu_ldsb_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
@@ -369,8 +139,7 @@ int cpu_ldsb_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
uint32_t cpu_lduw_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_lduw_be_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_lduw_be_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
int cpu_ldsw_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
@@ -380,20 +149,17 @@ int cpu_ldsw_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
uint32_t cpu_ldl_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldl_be_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_ldl_be_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
uint64_t cpu_ldq_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldq_be_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_ldq_be_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
uint32_t cpu_lduw_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_lduw_le_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_lduw_le_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
int cpu_ldsw_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
@@ -403,63 +169,54 @@ int cpu_ldsw_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
uint32_t cpu_ldl_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldl_le_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_ldl_le_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
uint64_t cpu_ldq_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldq_le_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_ldq_le_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
void cpu_stb_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stb_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stb_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
void cpu_stw_be_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stw_be_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stw_be_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
void cpu_stl_be_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stl_be_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stl_be_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
void cpu_stq_be_data_ra(CPUArchState *env, abi_ptr addr,
uint64_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stq_be_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stq_be_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
void cpu_stw_le_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stw_le_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stw_le_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
void cpu_stl_le_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stl_le_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stl_le_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
void cpu_stq_le_data_ra(CPUArchState *env, abi_ptr addr,
uint64_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stq_le_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stq_le_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
/*--------------------------*/


@@ -1,9 +1,7 @@
common_ss.add(when: 'CONFIG_TCG', if_true: files(
'cpu-exec-common.c',
))
tcg_specific_ss = ss.source_set()
tcg_specific_ss.add(files(
tcg_ss = ss.source_set()
tcg_ss.add(files(
'tcg-all.c',
'cpu-exec-common.c',
'cpu-exec.c',
'tb-maint.c',
'tcg-runtime-gvec.c',
@@ -11,20 +9,17 @@ tcg_specific_ss.add(files(
'translate-all.c',
'translator.c',
))
tcg_specific_ss.add(when: 'CONFIG_USER_ONLY', if_true: files('user-exec.c'))
tcg_specific_ss.add(when: 'CONFIG_SYSTEM_ONLY', if_false: files('user-exec-stub.c'))
tcg_ss.add(when: 'CONFIG_USER_ONLY', if_true: files('user-exec.c'))
tcg_ss.add(when: 'CONFIG_SYSTEM_ONLY', if_false: files('user-exec-stub.c'))
if get_option('plugins')
tcg_specific_ss.add(files('plugin-gen.c'))
tcg_ss.add(files('plugin-gen.c'))
endif
specific_ss.add_all(when: 'CONFIG_TCG', if_true: tcg_specific_ss)
tcg_ss.add(when: libdw, if_true: files('debuginfo.c'))
tcg_ss.add(when: 'CONFIG_LINUX', if_true: files('perf.c'))
specific_ss.add_all(when: 'CONFIG_TCG', if_true: tcg_ss)
specific_ss.add(when: ['CONFIG_SYSTEM_ONLY', 'CONFIG_TCG'], if_true: files(
'cputlb.c',
'watchpoint.c',
))
system_ss.add(when: ['CONFIG_TCG'], if_true: files(
'icount-common.c',
'monitor.c',
))


@@ -8,7 +8,6 @@
#include "qemu/osdep.h"
#include "qemu/accel.h"
#include "qemu/qht.h"
#include "qapi/error.h"
#include "qapi/type-helpers.h"
#include "qapi/qapi-commands-machine.h"
@@ -17,8 +16,7 @@
#include "sysemu/cpu-timers.h"
#include "sysemu/tcg.h"
#include "tcg/tcg.h"
#include "internal-common.h"
#include "tb-context.h"
#include "internal.h"
static void dump_drift_info(GString *buf)
@@ -52,153 +50,6 @@ static void dump_accel_info(GString *buf)
one_insn_per_tb ? "on" : "off");
}
static void print_qht_statistics(struct qht_stats hst, GString *buf)
{
uint32_t hgram_opts;
size_t hgram_bins;
char *hgram;
if (!hst.head_buckets) {
return;
}
g_string_append_printf(buf, "TB hash buckets %zu/%zu "
"(%0.2f%% head buckets used)\n",
hst.used_head_buckets, hst.head_buckets,
(double)hst.used_head_buckets /
hst.head_buckets * 100);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_opts |= QDIST_PR_100X | QDIST_PR_PERCENT;
if (qdist_xmax(&hst.occupancy) - qdist_xmin(&hst.occupancy) == 1) {
hgram_opts |= QDIST_PR_NODECIMAL;
}
hgram = qdist_pr(&hst.occupancy, 10, hgram_opts);
g_string_append_printf(buf, "TB hash occupancy %0.2f%% avg chain occ. "
"Histogram: %s\n",
qdist_avg(&hst.occupancy) * 100, hgram);
g_free(hgram);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_bins = qdist_xmax(&hst.chain) - qdist_xmin(&hst.chain);
if (hgram_bins > 10) {
hgram_bins = 10;
} else {
hgram_bins = 0;
hgram_opts |= QDIST_PR_NODECIMAL | QDIST_PR_NOBINRANGE;
}
hgram = qdist_pr(&hst.chain, hgram_bins, hgram_opts);
g_string_append_printf(buf, "TB hash avg chain %0.3f buckets. "
"Histogram: %s\n",
qdist_avg(&hst.chain), hgram);
g_free(hgram);
}
struct tb_tree_stats {
size_t nb_tbs;
size_t host_size;
size_t target_size;
size_t max_target_size;
size_t direct_jmp_count;
size_t direct_jmp2_count;
size_t cross_page;
};
static gboolean tb_tree_stats_iter(gpointer key, gpointer value, gpointer data)
{
const TranslationBlock *tb = value;
struct tb_tree_stats *tst = data;
tst->nb_tbs++;
tst->host_size += tb->tc.size;
tst->target_size += tb->size;
if (tb->size > tst->max_target_size) {
tst->max_target_size = tb->size;
}
if (tb->page_addr[1] != -1) {
tst->cross_page++;
}
if (tb->jmp_reset_offset[0] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp_count++;
if (tb->jmp_reset_offset[1] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp2_count++;
}
}
return false;
}
static void tlb_flush_counts(size_t *pfull, size_t *ppart, size_t *pelide)
{
CPUState *cpu;
size_t full = 0, part = 0, elide = 0;
CPU_FOREACH(cpu) {
full += qatomic_read(&cpu->neg.tlb.c.full_flush_count);
part += qatomic_read(&cpu->neg.tlb.c.part_flush_count);
elide += qatomic_read(&cpu->neg.tlb.c.elide_flush_count);
}
*pfull = full;
*ppart = part;
*pelide = elide;
}
static void tcg_dump_info(GString *buf)
{
g_string_append_printf(buf, "[TCG profiler not compiled]\n");
}
static void dump_exec_info(GString *buf)
{
struct tb_tree_stats tst = {};
struct qht_stats hst;
size_t nb_tbs, flush_full, flush_part, flush_elide;
tcg_tb_foreach(tb_tree_stats_iter, &tst);
nb_tbs = tst.nb_tbs;
/* XXX: avoid using doubles ? */
g_string_append_printf(buf, "Translation buffer state:\n");
/*
* Report total code size including the padding and TB structs;
* otherwise users might think "-accel tcg,tb-size" is not honoured.
* For avg host size we use the precise numbers from tb_tree_stats though.
*/
g_string_append_printf(buf, "gen code size %zu/%zu\n",
tcg_code_size(), tcg_code_capacity());
g_string_append_printf(buf, "TB count %zu\n", nb_tbs);
g_string_append_printf(buf, "TB avg target size %zu max=%zu bytes\n",
nb_tbs ? tst.target_size / nb_tbs : 0,
tst.max_target_size);
g_string_append_printf(buf, "TB avg host size %zu bytes "
"(expansion ratio: %0.1f)\n",
nb_tbs ? tst.host_size / nb_tbs : 0,
tst.target_size ?
(double)tst.host_size / tst.target_size : 0);
g_string_append_printf(buf, "cross page TB count %zu (%zu%%)\n",
tst.cross_page,
nb_tbs ? (tst.cross_page * 100) / nb_tbs : 0);
g_string_append_printf(buf, "direct jump count %zu (%zu%%) "
"(2 jumps=%zu %zu%%)\n",
tst.direct_jmp_count,
nb_tbs ? (tst.direct_jmp_count * 100) / nb_tbs : 0,
tst.direct_jmp2_count,
nb_tbs ? (tst.direct_jmp2_count * 100) / nb_tbs : 0);
qht_statistics_init(&tb_ctx.htable, &hst);
print_qht_statistics(hst, buf);
qht_statistics_destroy(&hst);
g_string_append_printf(buf, "\nStatistics:\n");
g_string_append_printf(buf, "TB flush count %u\n",
qatomic_read(&tb_ctx.tb_flush_count));
g_string_append_printf(buf, "TB invalidate count %u\n",
qatomic_read(&tb_ctx.tb_phys_invalidate_count));
tlb_flush_counts(&flush_full, &flush_part, &flush_elide);
g_string_append_printf(buf, "TLB full flushes %zu\n", flush_full);
g_string_append_printf(buf, "TLB partial flushes %zu\n", flush_part);
g_string_append_printf(buf, "TLB elided flushes %zu\n", flush_elide);
tcg_dump_info(buf);
}
HumanReadableText *qmp_x_query_jit(Error **errp)
{
g_autoptr(GString) buf = g_string_new("");
@@ -215,11 +66,6 @@ HumanReadableText *qmp_x_query_jit(Error **errp)
return human_readable_text_from_str(buf);
}
static void tcg_dump_op_count(GString *buf)
{
g_string_append_printf(buf, "[TCG profiler not compiled]\n");
}
HumanReadableText *qmp_x_query_opcount(Error **errp)
{
g_autoptr(GString) buf = g_string_new("");


@@ -10,13 +10,13 @@
#include "qemu/osdep.h"
#include "elf.h"
#include "exec/target_page.h"
#include "exec/translation-block.h"
#include "exec/exec-all.h"
#include "qemu/timer.h"
#include "tcg/debuginfo.h"
#include "tcg/perf.h"
#include "tcg/tcg.h"
#include "debuginfo.h"
#include "perf.h"
static FILE *safe_fopen_w(const char *path)
{
int saved_errno;
@@ -335,7 +335,11 @@ void perf_report_code(uint64_t guest_pc, TranslationBlock *tb,
/* FIXME: This replicates the restore_state_to_opc() logic. */
q[insn].address = gen_insn_data[insn * start_words + 0];
if (tb_cflags(tb) & CF_PCREL) {
q[insn].address |= (guest_pc & qemu_target_page_mask());
q[insn].address |= (guest_pc & TARGET_PAGE_MASK);
} else {
#if defined(TARGET_I386)
q[insn].address -= tb->cs_base;
#endif
}
q[insn].flags = DEBUGINFO_SYMBOL | (jitdump ? DEBUGINFO_LINE : 0);
}
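
In the CF_PCREL branch of perf_report_code above, the per-insn slot recorded by the translator holds (at most) the offset within the guest page, so the page bits of guest_pc are ORed back in. A tiny standalone sketch of that reconstruction, assuming 4 KiB pages; not the QEMU code itself.

/* Sketch of the CF_PCREL address reconstruction: the stored value keeps
 * only the in-page offset, the page number comes from the TB's guest_pc.
 * 4 KiB page size assumed for illustration. */
#include <stdint.h>

#define SKETCH_PAGE_MASK (~(uint64_t)0xfff)

static uint64_t rebuild_pcrel_addr(uint64_t insn_page_offset, uint64_t guest_pc)
{
    return insn_page_offset | (guest_pc & SKETCH_PAGE_MASK);
}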


@@ -4,8 +4,8 @@
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef TCG_PERF_H
#define TCG_PERF_H
#ifndef ACCEL_TCG_PERF_H
#define ACCEL_TCG_PERF_H
#if defined(CONFIG_TCG) && defined(CONFIG_LINUX)
/* Start writing perf-<pid>.map. */

File diff suppressed because it is too large.


@@ -0,0 +1,4 @@
#ifdef CONFIG_PLUGIN
DEF_HELPER_FLAGS_2(plugin_vcpu_udata_cb, TCG_CALL_NO_RWG | TCG_CALL_PLUGIN, void, i32, ptr)
DEF_HELPER_FLAGS_4(plugin_vcpu_mem_cb, TCG_CALL_NO_RWG | TCG_CALL_PLUGIN, void, i32, i32, i64, ptr)
#endif


@@ -9,25 +9,20 @@
#ifndef ACCEL_TCG_TB_JMP_CACHE_H
#define ACCEL_TCG_TB_JMP_CACHE_H
#include "qemu/rcu.h"
#include "exec/cpu-common.h"
#define TB_JMP_CACHE_BITS 12
#define TB_JMP_CACHE_SIZE (1 << TB_JMP_CACHE_BITS)
/*
* Invalidated in parallel; all accesses to 'tb' must be atomic.
* A valid entry is read/written by a single CPU, therefore there is
* no need for qatomic_rcu_read() and pc is always consistent with a
* non-NULL value of 'tb'. Strictly speaking pc is only needed for
* CF_PCREL, but it's used always for simplicity.
* Accessed in parallel; all accesses to 'tb' must be atomic.
* For CF_PCREL, accesses to 'pc' must be protected by a
* load_acquire/store_release to 'tb'.
*/
typedef struct CPUJumpCache {
struct CPUJumpCache {
struct rcu_head rcu;
struct {
TranslationBlock *tb;
vaddr pc;
} array[TB_JMP_CACHE_SIZE];
} CPUJumpCache;
};
#endif /* ACCEL_TCG_TB_JMP_CACHE_H */
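
The revised header comment above states that, for CF_PCREL, 'pc' is protected by a load_acquire/store_release on 'tb'. Below is a minimal C11-atomics sketch of that publish/consume ordering with stand-in types; it illustrates the pattern only and is not the QEMU CPUJumpCache accessor code.

/* Publish/consume pattern from the comment above: write pc first, then
 * store-release tb; readers load-acquire tb before reading pc.
 * Stand-in types; not the QEMU accessors. */
#include <stdatomic.h>
#include <stdint.h>

typedef struct TB_sketch TB_sketch;   /* stand-in for TranslationBlock */

struct jc_entry_sketch {
    _Atomic(TB_sketch *) tb;
    uint64_t pc;                      /* valid only while tb != NULL */
};

static void jc_publish(struct jc_entry_sketch *e, TB_sketch *tb, uint64_t pc)
{
    e->pc = pc;                       /* plain store, made visible by... */
    atomic_store_explicit(&e->tb, tb, memory_order_release);  /* ...this */
}

static TB_sketch *jc_lookup(struct jc_entry_sketch *e, uint64_t *pc)
{
    TB_sketch *tb = atomic_load_explicit(&e->tb, memory_order_acquire);

    if (tb) {
        *pc = e->pc;                  /* ordered after the acquire load */
    }
    return tb;
}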


@@ -23,15 +23,13 @@
#include "exec/cputlb.h"
#include "exec/log.h"
#include "exec/exec-all.h"
#include "exec/page-protection.h"
#include "exec/tb-flush.h"
#include "exec/translate-all.h"
#include "sysemu/tcg.h"
#include "tcg/tcg.h"
#include "tb-hash.h"
#include "tb-context.h"
#include "internal-common.h"
#include "internal-target.h"
#include "internal.h"
/* List iterators for lists of tagged pointers in TranslationBlock. */
@@ -209,12 +207,13 @@ static PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc)
{
PageDesc *pd;
void **lp;
int i;
/* Level 1. Always allocated. */
lp = l1_map + ((index >> v_l1_shift) & (v_l1_size - 1));
/* Level 2..N-1. */
for (int i = v_l2_levels; i > 0; i--) {
for (i = v_l2_levels; i > 0; i--) {
void **p = qatomic_rcu_read(lp);
if (p == NULL) {
@@ -713,7 +712,7 @@ static void tb_record(TranslationBlock *tb)
tb_page_addr_t paddr0 = tb_page_addr0(tb);
tb_page_addr_t paddr1 = tb_page_addr1(tb);
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr0 >> TARGET_PAGE_BITS;
assert(paddr0 != -1);
if (unlikely(paddr1 != -1) && pindex0 != pindex1) {
@@ -745,7 +744,7 @@ static void tb_remove(TranslationBlock *tb)
tb_page_addr_t paddr0 = tb_page_addr0(tb);
tb_page_addr_t paddr1 = tb_page_addr1(tb);
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr0 >> TARGET_PAGE_BITS;
assert(paddr0 != -1);
if (unlikely(paddr1 != -1) && pindex0 != pindex1) {
@@ -1022,7 +1021,7 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t last)
* Called with mmap_lock held for user-mode emulation
* NOTE: this function must not be called while a TB is running.
*/
static void tb_invalidate_phys_page(tb_page_addr_t addr)
void tb_invalidate_phys_page(tb_page_addr_t addr)
{
tb_page_addr_t start, last;
@@ -1161,6 +1160,28 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
#endif
}
/*
* Invalidate all TBs which intersect with the target physical
* address page @addr.
*/
void tb_invalidate_phys_page(tb_page_addr_t addr)
{
struct page_collection *pages;
tb_page_addr_t start, last;
PageDesc *p;
p = page_find(addr >> TARGET_PAGE_BITS);
if (p == NULL) {
return;
}
start = addr & TARGET_PAGE_MASK;
last = addr | ~TARGET_PAGE_MASK;
pages = page_collection_lock(start, last);
tb_invalidate_phys_page_range__locked(pages, p, start, last, 0);
page_collection_unlock(pages);
}
/*
* Invalidate all TBs which intersect with the target physical address range
* [start;last]. NOTE: start and end may refer to *different* physical pages.


@@ -111,24 +111,24 @@ void icount_prepare_for_run(CPUState *cpu, int64_t cpu_budget)
* each vCPU execution. However u16.high can be raised
* asynchronously by cpu_exit/cpu_interrupt/tcg_handle_interrupt
*/
g_assert(cpu->neg.icount_decr.u16.low == 0);
g_assert(cpu_neg(cpu)->icount_decr.u16.low == 0);
g_assert(cpu->icount_extra == 0);
replay_mutex_lock();
cpu->icount_budget = MIN(icount_get_limit(), cpu_budget);
insns_left = MIN(0xffff, cpu->icount_budget);
cpu->neg.icount_decr.u16.low = insns_left;
cpu_neg(cpu)->icount_decr.u16.low = insns_left;
cpu->icount_extra = cpu->icount_budget - insns_left;
if (cpu->icount_budget == 0) {
/*
* We're called without the BQL, so must take it while
* We're called without the iothread lock, so must take it while
* we're calling timer handlers.
*/
bql_lock();
qemu_mutex_lock_iothread();
icount_notify_aio_contexts();
bql_unlock();
qemu_mutex_unlock_iothread();
}
}
@@ -138,7 +138,7 @@ void icount_process_data(CPUState *cpu)
icount_update(cpu);
/* Reset the counters */
cpu->neg.icount_decr.u16.low = 0;
cpu_neg(cpu)->icount_decr.u16.low = 0;
cpu->icount_extra = 0;
cpu->icount_budget = 0;
@@ -153,7 +153,7 @@ void icount_handle_interrupt(CPUState *cpu, int mask)
tcg_handle_interrupt(cpu, mask);
if (qemu_cpu_is_self(cpu) &&
!cpu->neg.can_do_io
!cpu->can_do_io
&& (mask & ~old_mask) != 0) {
cpu_abort(cpu, "Raised interrupt while not in I/O function");
}
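
icount_prepare_for_run above caps the per-run budget, puts at most 0xffff instructions into the 16-bit decrementer, and parks the remainder in icount_extra. A standalone sketch of that split follows, using a hypothetical struct in place of CPUState.

/* Budget split as in icount_prepare_for_run: the low 16-bit decrementer
 * takes at most 0xffff instructions, the rest waits in 'extra'.
 * Hypothetical struct; not the QEMU CPUState layout. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct icount_budget_sketch {
    int64_t budget;     /* total instructions allowed this run */
    uint16_t decr_low;  /* what the generated code decrements */
    int64_t extra;      /* remainder, handed out on the next refill */
};

static void split_budget(struct icount_budget_sketch *b, int64_t budget)
{
    int64_t insns_left = budget < 0xffff ? budget : 0xffff;

    b->budget = budget;
    b->decr_low = (uint16_t)insns_left;
    b->extra = budget - insns_left;
}

int main(void)
{
    struct icount_budget_sketch b;

    split_budget(&b, 100000);
    printf("low=%u extra=%" PRId64 "\n", (unsigned)b.decr_low, b.extra);
    return 0;
}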


@@ -32,7 +32,7 @@
#include "qemu/guest-random.h"
#include "exec/exec-all.h"
#include "hw/boards.h"
#include "tcg/startup.h"
#include "tcg/tcg.h"
#include "tcg-accel-ops.h"
#include "tcg-accel-ops-mttcg.h"
@@ -76,11 +76,11 @@ static void *mttcg_cpu_thread_fn(void *arg)
rcu_add_force_rcu_notifier(&force_rcu.notifier);
tcg_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->neg.can_do_io = true;
cpu->can_do_io = 1;
current_cpu = cpu;
cpu_thread_signal_created(cpu);
qemu_guest_random_seed_thread_part2(cpu->random_seed);
@@ -91,9 +91,9 @@ static void *mttcg_cpu_thread_fn(void *arg)
do {
if (cpu_can_run(cpu)) {
int r;
bql_unlock();
r = tcg_cpu_exec(cpu);
bql_lock();
qemu_mutex_unlock_iothread();
r = tcg_cpus_exec(cpu);
qemu_mutex_lock_iothread();
switch (r) {
case EXCP_DEBUG:
cpu_handle_guest_debug(cpu);
@@ -105,9 +105,9 @@ static void *mttcg_cpu_thread_fn(void *arg)
*/
break;
case EXCP_ATOMIC:
bql_unlock();
qemu_mutex_unlock_iothread();
cpu_exec_step_atomic(cpu);
bql_lock();
qemu_mutex_lock_iothread();
default:
/* Ignore everything else? */
break;
@@ -118,8 +118,8 @@ static void *mttcg_cpu_thread_fn(void *arg)
qemu_wait_io_event(cpu);
} while (!cpu->unplug || cpu_can_run(cpu));
tcg_cpu_destroy(cpu);
bql_unlock();
tcg_cpus_destroy(cpu);
qemu_mutex_unlock_iothread();
rcu_remove_force_rcu_notifier(&force_rcu.notifier);
rcu_unregister_thread();
return NULL;
@@ -137,6 +137,10 @@ void mttcg_start_vcpu_thread(CPUState *cpu)
g_assert(tcg_enabled());
tcg_cpu_init_cflags(cpu, current_machine->smp.max_cpus > 1);
cpu->thread = g_new0(QemuThread, 1);
cpu->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(cpu->halt_cond);
/* create a thread per vCPU with TCG (MTTCG) */
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/TCG",
cpu->cpu_index);


@@ -32,7 +32,7 @@
#include "qemu/notify.h"
#include "qemu/guest-random.h"
#include "exec/exec-all.h"
#include "tcg/startup.h"
#include "tcg/tcg.h"
#include "tcg-accel-ops.h"
#include "tcg-accel-ops-rr.h"
#include "tcg-accel-ops-icount.h"
@@ -111,7 +111,7 @@ static void rr_wait_io_event(void)
while (all_cpu_threads_idle()) {
rr_stop_kick_timer();
qemu_cond_wait_bql(first_cpu->halt_cond);
qemu_cond_wait_iothread(first_cpu->halt_cond);
}
rr_start_kick_timer();
@@ -131,7 +131,7 @@ static void rr_deal_with_unplugged_cpus(void)
CPU_FOREACH(cpu) {
if (cpu->unplug && !cpu_can_run(cpu)) {
tcg_cpu_destroy(cpu);
tcg_cpus_destroy(cpu);
break;
}
}
@@ -188,17 +188,17 @@ static void *rr_cpu_thread_fn(void *arg)
rcu_add_force_rcu_notifier(&force_rcu);
tcg_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->neg.can_do_io = true;
cpu->can_do_io = 1;
cpu_thread_signal_created(cpu);
qemu_guest_random_seed_thread_part2(cpu->random_seed);
/* wait for initial kick-off after machine start */
while (first_cpu->stopped) {
qemu_cond_wait_bql(first_cpu->halt_cond);
qemu_cond_wait_iothread(first_cpu->halt_cond);
/* process any pending work */
CPU_FOREACH(cpu) {
@@ -218,9 +218,9 @@ static void *rr_cpu_thread_fn(void *arg)
/* Only used for icount_enabled() */
int64_t cpu_budget = 0;
bql_unlock();
qemu_mutex_unlock_iothread();
replay_mutex_lock();
bql_lock();
qemu_mutex_lock_iothread();
if (icount_enabled()) {
int cpu_count = rr_cpu_count();
@@ -254,23 +254,23 @@ static void *rr_cpu_thread_fn(void *arg)
if (cpu_can_run(cpu)) {
int r;
bql_unlock();
qemu_mutex_unlock_iothread();
if (icount_enabled()) {
icount_prepare_for_run(cpu, cpu_budget);
}
r = tcg_cpu_exec(cpu);
r = tcg_cpus_exec(cpu);
if (icount_enabled()) {
icount_process_data(cpu);
}
bql_lock();
qemu_mutex_lock_iothread();
if (r == EXCP_DEBUG) {
cpu_handle_guest_debug(cpu);
break;
} else if (r == EXCP_ATOMIC) {
bql_unlock();
qemu_mutex_unlock_iothread();
cpu_exec_step_atomic(cpu);
bql_lock();
qemu_mutex_lock_iothread();
break;
}
} else if (cpu->stop) {
@@ -317,25 +317,24 @@ void rr_start_vcpu_thread(CPUState *cpu)
tcg_cpu_init_cflags(cpu, false);
if (!single_tcg_cpu_thread) {
single_tcg_halt_cond = cpu->halt_cond;
single_tcg_cpu_thread = cpu->thread;
cpu->thread = g_new0(QemuThread, 1);
cpu->halt_cond = g_new0(QemuCond, 1);
qemu_cond_init(cpu->halt_cond);
/* share a single thread for all cpus with TCG */
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "ALL CPUs/TCG");
qemu_thread_create(cpu->thread, thread_name,
rr_cpu_thread_fn,
cpu, QEMU_THREAD_JOINABLE);
single_tcg_halt_cond = cpu->halt_cond;
single_tcg_cpu_thread = cpu->thread;
} else {
/* we share the thread, dump spare data */
g_free(cpu->thread);
qemu_cond_destroy(cpu->halt_cond);
g_free(cpu->halt_cond);
/* we share the thread */
cpu->thread = single_tcg_cpu_thread;
cpu->halt_cond = single_tcg_halt_cond;
/* copy the stuff done at start of rr_cpu_thread_fn */
cpu->thread_id = first_cpu->thread_id;
cpu->neg.can_do_io = 1;
cpu->can_do_io = 1;
cpu->created = true;
}
}


@@ -34,10 +34,7 @@
#include "qemu/timer.h"
#include "exec/exec-all.h"
#include "exec/hwaddr.h"
#include "exec/tb-flush.h"
#include "gdbstub/enums.h"
#include "hw/core/cpu.h"
#include "exec/gdbstub.h"
#include "tcg-accel-ops.h"
#include "tcg-accel-ops-mttcg.h"
@@ -62,15 +59,15 @@ void tcg_cpu_init_cflags(CPUState *cpu, bool parallel)
cflags |= parallel ? CF_PARALLEL : 0;
cflags |= icount_enabled() ? CF_USE_ICOUNT : 0;
tcg_cflags_set(cpu, cflags);
cpu->tcg_cflags |= cflags;
}
void tcg_cpu_destroy(CPUState *cpu)
void tcg_cpus_destroy(CPUState *cpu)
{
cpu_thread_signal_destroyed(cpu);
}
int tcg_cpu_exec(CPUState *cpu)
int tcg_cpus_exec(CPUState *cpu)
{
int ret;
assert(tcg_enabled());
@@ -80,17 +77,10 @@ int tcg_cpu_exec(CPUState *cpu)
return ret;
}
static void tcg_cpu_reset_hold(CPUState *cpu)
{
tcg_flush_jmp_cache(cpu);
tlb_flush(cpu);
}
/* mask must never be zero, except for A20 change call */
void tcg_handle_interrupt(CPUState *cpu, int mask)
{
g_assert(bql_locked());
g_assert(qemu_mutex_iothread_locked());
cpu->interrupt_request |= mask;
@@ -101,7 +91,7 @@ void tcg_handle_interrupt(CPUState *cpu, int mask)
if (!qemu_cpu_is_self(cpu)) {
qemu_cpu_kick(cpu);
} else {
qatomic_set(&cpu->neg.icount_decr.u16.high, -1);
qatomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1);
}
}
@@ -215,7 +205,6 @@ static void tcg_accel_ops_init(AccelOpsClass *ops)
}
}
ops->cpu_reset_hold = tcg_cpu_reset_hold;
ops->supports_guest_debug = tcg_supports_guest_debug;
ops->insert_breakpoint = tcg_insert_breakpoint;
ops->remove_breakpoint = tcg_remove_breakpoint;


@@ -14,8 +14,8 @@
#include "sysemu/cpus.h"
void tcg_cpu_destroy(CPUState *cpu);
int tcg_cpu_exec(CPUState *cpu);
void tcg_cpus_destroy(CPUState *cpu);
int tcg_cpus_exec(CPUState *cpu);
void tcg_handle_interrupt(CPUState *cpu, int mask);
void tcg_cpu_init_cflags(CPUState *cpu, bool parallel);


@@ -27,7 +27,7 @@
#include "sysemu/tcg.h"
#include "exec/replay-core.h"
#include "sysemu/cpu-timers.h"
#include "tcg/startup.h"
#include "tcg/tcg.h"
#include "tcg/oversized-guest.h"
#include "qapi/error.h"
#include "qemu/error-report.h"
@@ -38,7 +38,7 @@
#if !defined(CONFIG_USER_ONLY)
#include "hw/boards.h"
#endif
#include "internal-common.h"
#include "internal.h"
struct TCGState {
AccelState parent_obj;
@@ -121,7 +121,7 @@ static int tcg_init_machine(MachineState *ms)
* There's no guest base to take into account, so go ahead and
* initialize the prologue now.
*/
tcg_prologue_init();
tcg_prologue_init(tcg_ctx);
#endif
return 0;
@@ -227,8 +227,6 @@ static void tcg_accel_class_init(ObjectClass *oc, void *data)
AccelClass *ac = ACCEL_CLASS(oc);
ac->name = "tcg";
ac->init_machine = tcg_init_machine;
ac->cpu_common_realize = tcg_exec_realizefn;
ac->cpu_common_unrealize = tcg_exec_unrealizefn;
ac->allowed = &tcg_allowed;
ac->gdbstub_supported_sstep_flags = tcg_gdbstub_supported_sstep_flags;


@@ -61,9 +61,8 @@
#include "tb-jmp-cache.h"
#include "tb-hash.h"
#include "tb-context.h"
#include "internal-common.h"
#include "internal-target.h"
#include "tcg/perf.h"
#include "internal.h"
#include "perf.h"
#include "tcg/insn-start-words.h"
TBContext tb_ctx;
@@ -215,7 +214,7 @@ void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
* Reset the cycle counter to the start of the block and
* shift it to the number of actually executed instructions.
*/
cpu->neg.icount_decr.u16.low += insns_left;
cpu_neg(cpu)->icount_decr.u16.low += insns_left;
}
cpu->cc->tcg_ops->restore_state_to_opc(cpu, tb, data);
@@ -256,6 +255,7 @@ bool cpu_unwind_state_data(CPUState *cpu, uintptr_t host_pc, uint64_t *data)
void page_init(void)
{
page_size_init();
page_table_config_init();
}
@@ -288,7 +288,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
vaddr pc, uint64_t cs_base,
uint32_t flags, int cflags)
{
CPUArchState *env = cpu_env(cpu);
CPUArchState *env = cpu->env_ptr;
TranslationBlock *tb, *existing_tb;
tb_page_addr_t phys_pc, phys_p2;
tcg_insn_unit *gen_code_buf;
@@ -303,7 +303,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
if (phys_pc == -1) {
/* Generate a one-shot TB with 1 insn in it */
cflags = (cflags & ~CF_COUNT_MASK) | 1;
cflags = (cflags & ~CF_COUNT_MASK) | CF_LAST_IO | 1;
}
max_insns = cflags & CF_COUNT_MASK;
@@ -344,6 +344,8 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
tcg_ctx->page_bits = TARGET_PAGE_BITS;
tcg_ctx->page_mask = TARGET_PAGE_MASK;
tcg_ctx->tlb_dyn_max_bits = CPU_TLB_DYN_MAX_BITS;
tcg_ctx->tlb_fast_offset =
(int)offsetof(ArchCPU, neg.tlb.f) - (int)offsetof(ArchCPU, env);
#endif
tcg_ctx->insn_start_words = TARGET_INSN_START_WORDS;
#ifdef TCG_GUEST_DEFAULT_MO
@@ -578,7 +580,7 @@ void tb_check_watchpoint(CPUState *cpu, uintptr_t retaddr)
} else {
/* The exception probably happened in a helper. The CPU state should
have been saved before calling it. Fetch the PC from there. */
CPUArchState *env = cpu_env(cpu);
CPUArchState *env = cpu->env_ptr;
vaddr pc;
uint64_t cs_base;
tb_page_addr_t addr;
@@ -621,7 +623,7 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
cc = CPU_GET_CLASS(cpu);
if (cc->tcg_ops->io_recompile_replay_branch &&
cc->tcg_ops->io_recompile_replay_branch(cpu, tb)) {
cpu->neg.icount_decr.u16.low++;
cpu_neg(cpu)->icount_decr.u16.low++;
n = 2;
}
@@ -631,10 +633,10 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
* operations only (which execute after completion) so we don't
* double instrument the instruction.
*/
cpu->cflags_next_tb = curr_cflags(cpu) | CF_MEMI_ONLY | n;
cpu->cflags_next_tb = curr_cflags(cpu) | CF_MEMI_ONLY | CF_LAST_IO | n;
if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
vaddr pc = cpu->cc->get_pc(cpu);
vaddr pc = log_pc(cpu, tb);
if (qemu_log_in_addr_range(pc)) {
qemu_log("cpu_io_recompile: rewound execution of TB to %016"
VADDR_PRIx "\n", pc);
@@ -644,6 +646,142 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
cpu_loop_exit_noexc(cpu);
}
static void print_qht_statistics(struct qht_stats hst, GString *buf)
{
uint32_t hgram_opts;
size_t hgram_bins;
char *hgram;
if (!hst.head_buckets) {
return;
}
g_string_append_printf(buf, "TB hash buckets %zu/%zu "
"(%0.2f%% head buckets used)\n",
hst.used_head_buckets, hst.head_buckets,
(double)hst.used_head_buckets /
hst.head_buckets * 100);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_opts |= QDIST_PR_100X | QDIST_PR_PERCENT;
if (qdist_xmax(&hst.occupancy) - qdist_xmin(&hst.occupancy) == 1) {
hgram_opts |= QDIST_PR_NODECIMAL;
}
hgram = qdist_pr(&hst.occupancy, 10, hgram_opts);
g_string_append_printf(buf, "TB hash occupancy %0.2f%% avg chain occ. "
"Histogram: %s\n",
qdist_avg(&hst.occupancy) * 100, hgram);
g_free(hgram);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_bins = qdist_xmax(&hst.chain) - qdist_xmin(&hst.chain);
if (hgram_bins > 10) {
hgram_bins = 10;
} else {
hgram_bins = 0;
hgram_opts |= QDIST_PR_NODECIMAL | QDIST_PR_NOBINRANGE;
}
hgram = qdist_pr(&hst.chain, hgram_bins, hgram_opts);
g_string_append_printf(buf, "TB hash avg chain %0.3f buckets. "
"Histogram: %s\n",
qdist_avg(&hst.chain), hgram);
g_free(hgram);
}
struct tb_tree_stats {
size_t nb_tbs;
size_t host_size;
size_t target_size;
size_t max_target_size;
size_t direct_jmp_count;
size_t direct_jmp2_count;
size_t cross_page;
};
static gboolean tb_tree_stats_iter(gpointer key, gpointer value, gpointer data)
{
const TranslationBlock *tb = value;
struct tb_tree_stats *tst = data;
tst->nb_tbs++;
tst->host_size += tb->tc.size;
tst->target_size += tb->size;
if (tb->size > tst->max_target_size) {
tst->max_target_size = tb->size;
}
if (tb_page_addr1(tb) != -1) {
tst->cross_page++;
}
if (tb->jmp_reset_offset[0] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp_count++;
if (tb->jmp_reset_offset[1] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp2_count++;
}
}
return false;
}
void dump_exec_info(GString *buf)
{
struct tb_tree_stats tst = {};
struct qht_stats hst;
size_t nb_tbs, flush_full, flush_part, flush_elide;
tcg_tb_foreach(tb_tree_stats_iter, &tst);
nb_tbs = tst.nb_tbs;
/* XXX: avoid using doubles ? */
g_string_append_printf(buf, "Translation buffer state:\n");
/*
* Report total code size including the padding and TB structs;
* otherwise users might think "-accel tcg,tb-size" is not honoured.
* For avg host size we use the precise numbers from tb_tree_stats though.
*/
g_string_append_printf(buf, "gen code size %zu/%zu\n",
tcg_code_size(), tcg_code_capacity());
g_string_append_printf(buf, "TB count %zu\n", nb_tbs);
g_string_append_printf(buf, "TB avg target size %zu max=%zu bytes\n",
nb_tbs ? tst.target_size / nb_tbs : 0,
tst.max_target_size);
g_string_append_printf(buf, "TB avg host size %zu bytes "
"(expansion ratio: %0.1f)\n",
nb_tbs ? tst.host_size / nb_tbs : 0,
tst.target_size ?
(double)tst.host_size / tst.target_size : 0);
g_string_append_printf(buf, "cross page TB count %zu (%zu%%)\n",
tst.cross_page,
nb_tbs ? (tst.cross_page * 100) / nb_tbs : 0);
g_string_append_printf(buf, "direct jump count %zu (%zu%%) "
"(2 jumps=%zu %zu%%)\n",
tst.direct_jmp_count,
nb_tbs ? (tst.direct_jmp_count * 100) / nb_tbs : 0,
tst.direct_jmp2_count,
nb_tbs ? (tst.direct_jmp2_count * 100) / nb_tbs : 0);
qht_statistics_init(&tb_ctx.htable, &hst);
print_qht_statistics(hst, buf);
qht_statistics_destroy(&hst);
g_string_append_printf(buf, "\nStatistics:\n");
g_string_append_printf(buf, "TB flush count %u\n",
qatomic_read(&tb_ctx.tb_flush_count));
g_string_append_printf(buf, "TB invalidate count %u\n",
qatomic_read(&tb_ctx.tb_phys_invalidate_count));
tlb_flush_counts(&flush_full, &flush_part, &flush_elide);
g_string_append_printf(buf, "TLB full flushes %zu\n", flush_full);
g_string_append_printf(buf, "TLB partial flushes %zu\n", flush_part);
g_string_append_printf(buf, "TLB elided flushes %zu\n", flush_elide);
tcg_dump_info(buf);
}
#else /* CONFIG_USER_ONLY */
void cpu_interrupt(CPUState *cpu, int mask)
{
g_assert(qemu_mutex_iothread_locked());
cpu->interrupt_request |= mask;
qatomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1);
}
#endif /* CONFIG_USER_ONLY */
/*
@@ -663,3 +801,11 @@ void tcg_flush_jmp_cache(CPUState *cpu)
qatomic_set(&jc->array[i].tb, NULL);
}
}
/* This is a wrapper for common code that can not use CONFIG_SOFTMMU */
void tcg_flush_softmmu_tlb(CPUState *cs)
{
#ifdef CONFIG_SOFTMMU
tlb_flush(cs);
#endif
}

View File

@@ -12,23 +12,31 @@
#include "qemu/error-report.h"
#include "exec/exec-all.h"
#include "exec/translator.h"
#include "exec/cpu_ldst.h"
#include "exec/plugin-gen.h"
#include "exec/cpu_ldst.h"
#include "tcg/tcg-op-common.h"
#include "internal-target.h"
#include "disas/disas.h"
#include "internal.h"
static void set_can_do_io(DisasContextBase *db, bool val)
static void gen_io_start(void)
{
QEMU_BUILD_BUG_ON(sizeof_field(CPUState, neg.can_do_io) != 1);
tcg_gen_st8_i32(tcg_constant_i32(val), tcg_env,
offsetof(ArchCPU, parent_obj.neg.can_do_io) -
offsetof(ArchCPU, env));
tcg_gen_st_i32(tcg_constant_i32(1), cpu_env,
offsetof(ArchCPU, parent_obj.can_do_io) -
offsetof(ArchCPU, env));
}
bool translator_io_start(DisasContextBase *db)
{
uint32_t cflags = tb_cflags(db->tb);
if (!(cflags & CF_USE_ICOUNT)) {
return false;
}
if (db->num_insns == db->max_insns && (cflags & CF_LAST_IO)) {
/* Already started in translator_loop. */
return true;
}
gen_io_start();
/*
* Ensure that this instruction will be the last in the TB.
* The target may override this to something more forceful.
@@ -39,17 +47,14 @@ bool translator_io_start(DisasContextBase *db)
return true;
}
static TCGOp *gen_tb_start(DisasContextBase *db, uint32_t cflags)
static TCGOp *gen_tb_start(uint32_t cflags)
{
TCGv_i32 count = NULL;
TCGv_i32 count = tcg_temp_new_i32();
TCGOp *icount_start_insn = NULL;
if ((cflags & CF_USE_ICOUNT) || !(cflags & CF_NOIRQ)) {
count = tcg_temp_new_i32();
tcg_gen_ld_i32(count, tcg_env,
offsetof(ArchCPU, parent_obj.neg.icount_decr.u32)
- offsetof(ArchCPU, env));
}
tcg_gen_ld_i32(count, cpu_env,
offsetof(ArchCPU, neg.icount_decr.u32) -
offsetof(ArchCPU, env));
if (cflags & CF_USE_ICOUNT) {
/*
@@ -76,9 +81,19 @@ static TCGOp *gen_tb_start(DisasContextBase *db, uint32_t cflags)
}
if (cflags & CF_USE_ICOUNT) {
tcg_gen_st16_i32(count, tcg_env,
offsetof(ArchCPU, parent_obj.neg.icount_decr.u16.low)
- offsetof(ArchCPU, env));
tcg_gen_st16_i32(count, cpu_env,
offsetof(ArchCPU, neg.icount_decr.u16.low) -
offsetof(ArchCPU, env));
/*
* cpu->can_do_io is cleared automatically here at the beginning of
* each translation block. The cost is minimal and only paid for
* -icount, plus it would be very easy to forget doing it in the
* translator. Doing it here means we don't need a gen_io_end() to
* go with gen_io_start().
*/
tcg_gen_st_i32(tcg_constant_i32(0), cpu_env,
offsetof(ArchCPU, parent_obj.can_do_io) -
offsetof(ArchCPU, env));
}
return icount_start_insn;
@@ -119,7 +134,6 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
{
uint32_t cflags = tb_cflags(tb);
TCGOp *icount_start_insn;
TCGOp *first_insn_start = NULL;
bool plugin_enabled;
/* Initialize DisasContext */
@@ -130,44 +144,41 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
db->num_insns = 0;
db->max_insns = *max_insns;
db->singlestep_enabled = cflags & CF_SINGLE_STEP;
db->insn_start = NULL;
db->fake_insn = false;
db->host_addr[0] = host_pc;
db->host_addr[1] = NULL;
db->record_start = 0;
db->record_len = 0;
ops->init_disas_context(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
/* Start translating. */
icount_start_insn = gen_tb_start(db, cflags);
icount_start_insn = gen_tb_start(cflags);
ops->tb_start(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
plugin_enabled = plugin_gen_tb_start(cpu, db);
db->plugin_enabled = plugin_enabled;
plugin_enabled = plugin_gen_tb_start(cpu, db, cflags & CF_MEMI_ONLY);
while (true) {
*max_insns = ++db->num_insns;
ops->insn_start(db, cpu);
db->insn_start = tcg_last_op();
if (first_insn_start == NULL) {
first_insn_start = db->insn_start;
}
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
if (plugin_enabled) {
plugin_gen_insn_start(cpu, db);
}
/*
* Disassemble one instruction. The translate_insn hook should
* update db->pc_next and db->is_jmp to indicate what should be
* done next -- either exiting this loop or locate the start of
* the next instruction.
*/
ops->translate_insn(db, cpu);
/* Disassemble one instruction. The translate_insn hook should
update db->pc_next and db->is_jmp to indicate what should be
done next -- either exiting this loop or locate the start of
the next instruction. */
if (db->num_insns == db->max_insns && (cflags & CF_LAST_IO)) {
/* Accept I/O on the last instruction. */
gen_io_start();
ops->translate_insn(db, cpu);
} else {
/* we should only see CF_MEMI_ONLY for io_recompile */
tcg_debug_assert(!(cflags & CF_MEMI_ONLY));
ops->translate_insn(db, cpu);
}
/*
* We can't instrument after instructions that change control
@@ -199,277 +210,172 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
ops->tb_stop(db, cpu);
gen_tb_end(tb, cflags, icount_start_insn, db->num_insns);
/*
* Manage can_do_io for the translation block: set to false before
* the first insn and set to true before the last insn.
*/
if (db->num_insns == 1) {
tcg_debug_assert(first_insn_start == db->insn_start);
} else {
tcg_debug_assert(first_insn_start != db->insn_start);
tcg_ctx->emit_before_op = first_insn_start;
set_can_do_io(db, false);
if (plugin_enabled) {
plugin_gen_tb_end(cpu);
}
tcg_ctx->emit_before_op = db->insn_start;
set_can_do_io(db, true);
tcg_ctx->emit_before_op = NULL;
/* May be used by disas_log or plugin callbacks. */
/* The disas_log hook may use these values rather than recompute. */
tb->size = db->pc_next - db->pc_first;
tb->icount = db->num_insns;
if (plugin_enabled) {
plugin_gen_tb_end(cpu, db->num_insns);
}
if (qemu_loglevel_mask(CPU_LOG_TB_IN_ASM)
&& qemu_log_in_addr_range(db->pc_first)) {
FILE *logfile = qemu_log_trylock();
if (logfile) {
fprintf(logfile, "----------------\n");
if (!ops->disas_log ||
!ops->disas_log(db, cpu, logfile)) {
fprintf(logfile, "IN: %s\n", lookup_symbol(db->pc_first));
target_disas(logfile, cpu, db);
}
ops->disas_log(db, cpu, logfile);
fprintf(logfile, "\n");
qemu_log_unlock(logfile);
}
}
}
static bool translator_ld(CPUArchState *env, DisasContextBase *db,
void *dest, vaddr pc, size_t len)
static void *translator_access(CPUArchState *env, DisasContextBase *db,
vaddr pc, size_t len)
{
TranslationBlock *tb = db->tb;
vaddr last = pc + len - 1;
void *host;
vaddr base;
vaddr base, end;
TranslationBlock *tb;
tb = db->tb;
/* Use slow path if first page is MMIO. */
if (unlikely(tb_page_addr0(tb) == -1)) {
/* We capped translation with first page MMIO in tb_gen_code. */
tcg_debug_assert(db->max_insns == 1);
return false;
return NULL;
}
host = db->host_addr[0];
base = db->pc_first;
if (likely(((base ^ last) & TARGET_PAGE_MASK) == 0)) {
/* Entire read is from the first page. */
memcpy(dest, host + (pc - base), len);
return true;
}
if (unlikely(((base ^ pc) & TARGET_PAGE_MASK) == 0)) {
/* Read begins on the first page and extends to the second. */
size_t len0 = -(pc | TARGET_PAGE_MASK);
memcpy(dest, host + (pc - base), len0);
pc += len0;
dest += len0;
len -= len0;
}
/*
* The read must conclude on the second page and not extend to a third.
*
* TODO: We could allow the two pages to be virtually discontiguous,
* since we already allow the two pages to be physically discontiguous.
* The only reasonable use case would be executing an insn at the end
* of the address space wrapping around to the beginning. For that,
* we would need to know the current width of the address space.
* In the meantime, assert.
*/
base = (base & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
assert(((base ^ pc) & TARGET_PAGE_MASK) == 0);
assert(((base ^ last) & TARGET_PAGE_MASK) == 0);
host = db->host_addr[1];
if (host == NULL) {
tb_page_addr_t page0, old_page1, new_page1;
new_page1 = get_page_addr_code_hostp(env, base, &db->host_addr[1]);
/*
* If the second page is MMIO, treat as if the first page
* was MMIO as well, so that we do not cache the TB.
*/
if (unlikely(new_page1 == -1)) {
tb_unlock_pages(tb);
tb_set_page_addr0(tb, -1);
/* Require that this be the final insn. */
db->max_insns = db->num_insns;
return false;
}
/*
* If this is not the first time around, and page1 matches,
* then we already have the page locked. Alternately, we're
* not doing anything to prevent the PTE from changing, so
* we might wind up with a different page, requiring us to
* re-do the locking.
*/
old_page1 = tb_page_addr1(tb);
if (likely(new_page1 != old_page1)) {
page0 = tb_page_addr0(tb);
if (unlikely(old_page1 != -1)) {
tb_unlock_page1(page0, old_page1);
}
tb_set_page_addr1(tb, new_page1);
tb_lock_page1(page0, new_page1);
}
end = pc + len - 1;
if (likely(is_same_page(db, end))) {
host = db->host_addr[0];
base = db->pc_first;
} else {
host = db->host_addr[1];
base = TARGET_PAGE_ALIGN(db->pc_first);
if (host == NULL) {
tb_page_addr_t page0, old_page1, new_page1;
new_page1 = get_page_addr_code_hostp(env, base, &db->host_addr[1]);
/*
* If the second page is MMIO, treat as if the first page
* was MMIO as well, so that we do not cache the TB.
*/
if (unlikely(new_page1 == -1)) {
tb_unlock_pages(tb);
tb_set_page_addr0(tb, -1);
return NULL;
}
/*
* If this is not the first time around, and page1 matches,
* then we already have the page locked. Alternately, we're
* not doing anything to prevent the PTE from changing, so
* we might wind up with a different page, requiring us to
* re-do the locking.
*/
old_page1 = tb_page_addr1(tb);
if (likely(new_page1 != old_page1)) {
page0 = tb_page_addr0(tb);
if (unlikely(old_page1 != -1)) {
tb_unlock_page1(page0, old_page1);
}
tb_set_page_addr1(tb, new_page1);
tb_lock_page1(page0, new_page1);
}
host = db->host_addr[1];
}
/* Use slow path when crossing pages. */
if (is_same_page(db, pc)) {
return NULL;
}
}
memcpy(dest, host + (pc - base), len);
return true;
tcg_debug_assert(pc >= base);
return host + (pc - base);
}
static void record_save(DisasContextBase *db, vaddr pc,
const void *from, int size)
static void plugin_insn_append(abi_ptr pc, const void *from, size_t size)
{
int offset;
#ifdef CONFIG_PLUGIN
struct qemu_plugin_insn *insn = tcg_ctx->plugin_insn;
abi_ptr off;
/* Do not record probes before the start of TB. */
if (pc < db->pc_first) {
if (insn == NULL) {
return;
}
/*
* In translator_access, we verified that pc is within 2 pages
* of pc_first, thus this will never overflow.
*/
offset = pc - db->pc_first;
/*
* Either the first or second page may be I/O. If it is the second,
* then the first byte we need to record will be at a non-zero offset.
* In either case, we should not need to record but a single insn.
*/
if (db->record_len == 0) {
db->record_start = offset;
db->record_len = size;
} else {
assert(offset == db->record_start + db->record_len);
assert(db->record_len + size <= sizeof(db->record));
db->record_len += size;
off = pc - insn->vaddr;
if (off < insn->data->len) {
g_byte_array_set_size(insn->data, off);
} else if (off > insn->data->len) {
/* we have an unexpected gap */
g_assert_not_reached();
}
memcpy(db->record + (offset - db->record_start), from, size);
insn->data = g_byte_array_append(insn->data, from, size);
#endif
}
size_t translator_st_len(const DisasContextBase *db)
uint8_t translator_ldub(CPUArchState *env, DisasContextBase *db, abi_ptr pc)
{
return db->fake_insn ? db->record_len : db->tb->size;
uint8_t ret;
void *p = translator_access(env, db, pc, sizeof(ret));
if (p) {
plugin_insn_append(pc, p, sizeof(ret));
return ldub_p(p);
}
ret = cpu_ldub_code(env, pc);
plugin_insn_append(pc, &ret, sizeof(ret));
return ret;
}
bool translator_st(const DisasContextBase *db, void *dest,
vaddr addr, size_t len)
uint16_t translator_lduw(CPUArchState *env, DisasContextBase *db, abi_ptr pc)
{
size_t offset, offset_end;
uint16_t ret, plug;
void *p = translator_access(env, db, pc, sizeof(ret));
if (addr < db->pc_first) {
return false;
if (p) {
plugin_insn_append(pc, p, sizeof(ret));
return lduw_p(p);
}
offset = addr - db->pc_first;
offset_end = offset + len;
if (offset_end > translator_st_len(db)) {
return false;
}
if (!db->fake_insn) {
size_t offset_page1 = -(db->pc_first | TARGET_PAGE_MASK);
/* Get all the bytes from the first page. */
if (db->host_addr[0]) {
if (offset_end <= offset_page1) {
memcpy(dest, db->host_addr[0] + offset, len);
return true;
}
if (offset < offset_page1) {
size_t len0 = offset_page1 - offset;
memcpy(dest, db->host_addr[0] + offset, len0);
offset += len0;
dest += len0;
}
}
/* Get any bytes from the second page. */
if (db->host_addr[1] && offset >= offset_page1) {
memcpy(dest, db->host_addr[1] + (offset - offset_page1),
offset_end - offset);
return true;
}
}
/* Else get recorded bytes. */
if (db->record_len != 0 &&
offset >= db->record_start &&
offset_end <= db->record_start + db->record_len) {
memcpy(dest, db->record + (offset - db->record_start),
offset_end - offset);
return true;
}
return false;
ret = cpu_lduw_code(env, pc);
plug = tswap16(ret);
plugin_insn_append(pc, &plug, sizeof(ret));
return ret;
}
uint8_t translator_ldub(CPUArchState *env, DisasContextBase *db, vaddr pc)
uint32_t translator_ldl(CPUArchState *env, DisasContextBase *db, abi_ptr pc)
{
uint8_t raw;
uint32_t ret, plug;
void *p = translator_access(env, db, pc, sizeof(ret));
if (!translator_ld(env, db, &raw, pc, sizeof(raw))) {
raw = cpu_ldub_code(env, pc);
record_save(db, pc, &raw, sizeof(raw));
if (p) {
plugin_insn_append(pc, p, sizeof(ret));
return ldl_p(p);
}
return raw;
ret = cpu_ldl_code(env, pc);
plug = tswap32(ret);
plugin_insn_append(pc, &plug, sizeof(ret));
return ret;
}
uint16_t translator_lduw(CPUArchState *env, DisasContextBase *db, vaddr pc)
uint64_t translator_ldq(CPUArchState *env, DisasContextBase *db, abi_ptr pc)
{
uint16_t raw, tgt;
uint64_t ret, plug;
void *p = translator_access(env, db, pc, sizeof(ret));
if (translator_ld(env, db, &raw, pc, sizeof(raw))) {
tgt = tswap16(raw);
} else {
tgt = cpu_lduw_code(env, pc);
raw = tswap16(tgt);
record_save(db, pc, &raw, sizeof(raw));
if (p) {
plugin_insn_append(pc, p, sizeof(ret));
return ldq_p(p);
}
return tgt;
ret = cpu_ldq_code(env, pc);
plug = tswap64(ret);
plugin_insn_append(pc, &plug, sizeof(ret));
return ret;
}
uint32_t translator_ldl(CPUArchState *env, DisasContextBase *db, vaddr pc)
void translator_fake_ldb(uint8_t insn8, abi_ptr pc)
{
uint32_t raw, tgt;
if (translator_ld(env, db, &raw, pc, sizeof(raw))) {
tgt = tswap32(raw);
} else {
tgt = cpu_ldl_code(env, pc);
raw = tswap32(tgt);
record_save(db, pc, &raw, sizeof(raw));
}
return tgt;
}
uint64_t translator_ldq(CPUArchState *env, DisasContextBase *db, vaddr pc)
{
uint64_t raw, tgt;
if (translator_ld(env, db, &raw, pc, sizeof(raw))) {
tgt = tswap64(raw);
} else {
tgt = cpu_ldq_code(env, pc);
raw = tswap64(tgt);
record_save(db, pc, &raw, sizeof(raw));
}
return tgt;
}
void translator_fake_ld(DisasContextBase *db, const void *data, size_t len)
{
db->fake_insn = true;
record_save(db, db->pc_first, data, len);
plugin_insn_append(pc, &insn8, sizeof(insn8));
}
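
The translator_ld hunk above splits a code fetch that starts on the first guest page and spills into the second, using len0 = -(pc | TARGET_PAGE_MASK) for the bytes left on the first page. Below is a standalone sketch of that split copy, assuming a fixed 4 KiB page and caller-provided host mappings; it is an illustration, not the QEMU helper.

/* Page-crossing fetch as in translator_ld: -(pc | PAGE_MASK) is the
 * number of bytes left on the current page, so the copy is split into
 * "rest of page 0" + "start of page 1". 4 KiB pages assumed; host0/host1
 * are the caller's mappings of the two pages. */
#include <stdint.h>
#include <string.h>

#define SKETCH_PAGE_MASK (~(uint64_t)0xfff)

static void split_fetch(void *dest, uint64_t pc, size_t len,
                        const uint8_t *host0, uint64_t base0,
                        const uint8_t *host1, uint64_t base1)
{
    if (((pc ^ (pc + len - 1)) & SKETCH_PAGE_MASK) == 0) {
        /* Entire read fits on one page. */
        memcpy(dest, host0 + (pc - base0), len);
        return;
    }

    /* Bytes remaining on the first page: PAGE_SIZE - (pc & ~PAGE_MASK). */
    size_t len0 = (size_t)-(pc | SKETCH_PAGE_MASK);

    memcpy(dest, host0 + (pc - base0), len0);
    memcpy((uint8_t *)dest + len0, host1 + (pc + len0 - base1), len - len0);
}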


@@ -14,10 +14,6 @@ void qemu_init_vcpu(CPUState *cpu)
{
}
void cpu_exec_reset_hold(CPUState *cpu)
{
}
/* User mode emulation does not support record/replay yet. */
bool replay_exception(void)


@@ -24,27 +24,17 @@
#include "qemu/bitops.h"
#include "qemu/rcu.h"
#include "exec/cpu_ldst.h"
#include "qemu/main-loop.h"
#include "exec/translate-all.h"
#include "exec/page-protection.h"
#include "exec/helper-proto.h"
#include "qemu/atomic128.h"
#include "trace/trace-root.h"
#include "tcg/tcg-ldst.h"
#include "internal-common.h"
#include "internal-target.h"
#include "internal.h"
__thread uintptr_t helper_retaddr;
//#define DEBUG_SIGNAL
void cpu_interrupt(CPUState *cpu, int mask)
{
g_assert(bql_locked());
cpu->interrupt_request |= mask;
qatomic_set(&cpu->neg.icount_decr.u16.high, -1);
}
/*
* Adjust the pc to pass to cpu_restore_state; return the memop type.
*/
@@ -660,17 +650,16 @@ void page_protect(tb_page_addr_t address)
{
PageFlagsNode *p;
target_ulong start, last;
int host_page_size = qemu_real_host_page_size();
int prot;
assert_memory_lock();
if (host_page_size <= TARGET_PAGE_SIZE) {
if (qemu_host_page_size <= TARGET_PAGE_SIZE) {
start = address & TARGET_PAGE_MASK;
last = start + TARGET_PAGE_SIZE - 1;
} else {
start = address & -host_page_size;
last = start + host_page_size - 1;
start = address & qemu_host_page_mask;
last = start + qemu_host_page_size - 1;
}
p = pageflags_find(start, last);
@@ -681,7 +670,7 @@ void page_protect(tb_page_addr_t address)
if (unlikely(p->itree.last < last)) {
/* More than one protection region covers the one host page. */
assert(TARGET_PAGE_SIZE < host_page_size);
assert(TARGET_PAGE_SIZE < qemu_host_page_size);
while ((p = pageflags_next(p, start, last)) != NULL) {
prot |= p->flags;
}
@@ -689,7 +678,7 @@ void page_protect(tb_page_addr_t address)
if (prot & PAGE_WRITE) {
pageflags_set_clear(start, last, 0, PAGE_WRITE);
mprotect(g2h_untagged(start), last - start + 1,
mprotect(g2h_untagged(start), qemu_host_page_size,
prot & (PAGE_READ | PAGE_EXEC) ? PROT_READ : PROT_NONE);
}
}
@@ -735,19 +724,18 @@ int page_unprotect(target_ulong address, uintptr_t pc)
}
#endif
} else {
int host_page_size = qemu_real_host_page_size();
target_ulong start, len, i;
int prot;
if (host_page_size <= TARGET_PAGE_SIZE) {
if (qemu_host_page_size <= TARGET_PAGE_SIZE) {
start = address & TARGET_PAGE_MASK;
len = TARGET_PAGE_SIZE;
prot = p->flags | PAGE_WRITE;
pageflags_set_clear(start, start + len - 1, PAGE_WRITE, 0);
current_tb_invalidated = tb_invalidate_phys_page_unwind(start, pc);
} else {
start = address & -host_page_size;
len = host_page_size;
start = address & qemu_host_page_mask;
len = qemu_host_page_size;
prot = 0;
for (i = 0; i < len; i += TARGET_PAGE_SIZE) {
@@ -773,7 +761,7 @@ int page_unprotect(target_ulong address, uintptr_t pc)
if (prot & PAGE_EXEC) {
prot = (prot & ~PAGE_EXEC) | PAGE_READ;
}
mprotect((void *)g2h_untagged(start), len, prot & PAGE_RWX);
mprotect((void *)g2h_untagged(start), len, prot & PAGE_BITS);
}
mmap_unlock();
@@ -873,7 +861,7 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, vaddr addr,
typedef struct TargetPageDataNode {
struct rcu_head rcu;
IntervalTreeNode itree;
char data[] __attribute__((aligned));
char data[TPD_PAGES][TARGET_PAGE_DATA_SIZE] __attribute__((aligned));
} TargetPageDataNode;
static IntervalTreeRoot targetdata_root;
@@ -911,8 +899,7 @@ void page_reset_target_data(target_ulong start, target_ulong last)
n_last = MIN(last, n->last);
p_len = (n_last + 1 - n_start) >> TARGET_PAGE_BITS;
memset(t->data + p_ofs * TARGET_PAGE_DATA_SIZE, 0,
p_len * TARGET_PAGE_DATA_SIZE);
memset(t->data[p_ofs], 0, p_len * TARGET_PAGE_DATA_SIZE);
}
}
@@ -920,7 +907,7 @@ void *page_get_target_data(target_ulong address)
{
IntervalTreeNode *n;
TargetPageDataNode *t;
target_ulong page, region, p_ofs;
target_ulong page, region;
page = address & TARGET_PAGE_MASK;
region = address & TBD_MASK;
@@ -936,8 +923,7 @@ void *page_get_target_data(target_ulong address)
mmap_lock();
n = interval_tree_iter_first(&targetdata_root, page, page);
if (!n) {
t = g_malloc0(sizeof(TargetPageDataNode)
+ TPD_PAGES * TARGET_PAGE_DATA_SIZE);
t = g_new0(TargetPageDataNode, 1);
n = &t->itree;
n->start = region;
n->last = region | ~TBD_MASK;
@@ -947,16 +933,15 @@ void *page_get_target_data(target_ulong address)
}
t = container_of(n, TargetPageDataNode, itree);
p_ofs = (page - region) >> TARGET_PAGE_BITS;
return t->data + p_ofs * TARGET_PAGE_DATA_SIZE;
return t->data[(page - region) >> TARGET_PAGE_BITS];
}
#else
void page_reset_target_data(target_ulong start, target_ulong last) { }
#endif /* TARGET_PAGE_DATA_SIZE */
/* The system-mode versions of these helpers are in cputlb.c. */
/* The softmmu versions of these helpers are in cputlb.c. */
static void *cpu_mmu_lookup(CPUState *cpu, vaddr addr,
static void *cpu_mmu_lookup(CPUArchState *env, vaddr addr,
MemOp mop, uintptr_t ra, MMUAccessType type)
{
int a_bits = get_alignment_bits(mop);
@@ -964,39 +949,60 @@ static void *cpu_mmu_lookup(CPUState *cpu, vaddr addr,
/* Enforce guest required alignment. */
if (unlikely(addr & ((1 << a_bits) - 1))) {
cpu_loop_exit_sigbus(cpu, addr, type, ra);
cpu_loop_exit_sigbus(env_cpu(env), addr, type, ra);
}
ret = g2h(cpu, addr);
ret = g2h(env_cpu(env), addr);
set_helper_retaddr(ra);
return ret;
}
#include "ldst_atomicity.c.inc"
static uint8_t do_ld1_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType access_type)
static uint8_t do_ld1_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{
void *haddr;
uint8_t ret;
tcg_debug_assert((mop & MO_SIZE) == MO_8);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, get_memop(oi), ra, access_type);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = ldub_p(haddr);
clear_helper_retaddr();
return ret;
}
static uint16_t do_ld2_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType access_type)
tcg_target_ulong helper_ldub_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return do_ld1_mmu(env, addr, get_memop(oi), ra);
}
tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return (int8_t)do_ld1_mmu(env, addr, get_memop(oi), ra);
}
uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint8_t ret = do_ld1_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static uint16_t do_ld2_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{
void *haddr;
uint16_t ret;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_16);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type);
ret = load_atom_2(cpu, ra, haddr, mop);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_2(env, ra, haddr, mop);
clear_helper_retaddr();
if (mop & MO_BSWAP) {
@@ -1005,16 +1011,36 @@ static uint16_t do_ld2_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
return ret;
}
static uint32_t do_ld4_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType access_type)
tcg_target_ulong helper_lduw_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return do_ld2_mmu(env, addr, get_memop(oi), ra);
}
tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return (int16_t)do_ld2_mmu(env, addr, get_memop(oi), ra);
}
uint16_t cpu_ldw_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint16_t ret = do_ld2_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static uint32_t do_ld4_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{
void *haddr;
uint32_t ret;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_32);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type);
ret = load_atom_4(cpu, ra, haddr, mop);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_4(env, ra, haddr, mop);
clear_helper_retaddr();
if (mop & MO_BSWAP) {
@@ -1023,16 +1049,36 @@ static uint32_t do_ld4_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
return ret;
}
static uint64_t do_ld8_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType access_type)
tcg_target_ulong helper_ldul_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return do_ld4_mmu(env, addr, get_memop(oi), ra);
}
tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return (int32_t)do_ld4_mmu(env, addr, get_memop(oi), ra);
}
uint32_t cpu_ldl_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint32_t ret = do_ld4_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static uint64_t do_ld8_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{
void *haddr;
uint64_t ret;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_64);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type);
ret = load_atom_8(cpu, ra, haddr, mop);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_8(env, ra, haddr, mop);
clear_helper_retaddr();
if (mop & MO_BSWAP) {
@@ -1041,17 +1087,30 @@ static uint64_t do_ld8_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
return ret;
}
static Int128 do_ld16_mmu(CPUState *cpu, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
uint64_t helper_ldq_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return do_ld8_mmu(env, addr, get_memop(oi), ra);
}
uint64_t cpu_ldq_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint64_t ret = do_ld8_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static Int128 do_ld16_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{
void *haddr;
Int128 ret;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_128);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_16(cpu, ra, haddr, mop);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_16(env, ra, haddr, mop);
clear_helper_retaddr();
if (mop & MO_BSWAP) {
@@ -1060,81 +1119,171 @@ static Int128 do_ld16_mmu(CPUState *cpu, abi_ptr addr,
return ret;
}
static void do_st1_mmu(CPUState *cpu, vaddr addr, uint8_t val,
Int128 helper_ld16_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return do_ld16_mmu(env, addr, get_memop(oi), ra);
}
Int128 helper_ld_i128(CPUArchState *env, uint64_t addr, MemOpIdx oi)
{
return helper_ld16_mmu(env, addr, oi, GETPC());
}
Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
Int128 ret = do_ld16_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static void do_st1_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
MemOp mop, uintptr_t ra)
{
void *haddr;
tcg_debug_assert((mop & MO_SIZE) == MO_8);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, get_memop(oi), ra, MMU_DATA_STORE);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
stb_p(haddr, val);
clear_helper_retaddr();
}
static void do_st2_mmu(CPUState *cpu, vaddr addr, uint16_t val,
MemOpIdx oi, uintptr_t ra)
void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st1_mmu(env, addr, val, get_memop(oi), ra);
}
void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st1_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
static void do_st2_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
MemOp mop, uintptr_t ra)
{
void *haddr;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_16);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) {
val = bswap16(val);
}
store_atom_2(cpu, ra, haddr, mop, val);
store_atom_2(env, ra, haddr, mop, val);
clear_helper_retaddr();
}
static void do_st4_mmu(CPUState *cpu, vaddr addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
void helper_stw_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st2_mmu(env, addr, val, get_memop(oi), ra);
}
void cpu_stw_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st2_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
static void do_st4_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
MemOp mop, uintptr_t ra)
{
void *haddr;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_32);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) {
val = bswap32(val);
}
store_atom_4(cpu, ra, haddr, mop, val);
store_atom_4(env, ra, haddr, mop, val);
clear_helper_retaddr();
}
static void do_st8_mmu(CPUState *cpu, vaddr addr, uint64_t val,
MemOpIdx oi, uintptr_t ra)
void helper_stl_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st4_mmu(env, addr, val, get_memop(oi), ra);
}
void cpu_stl_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st4_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
static void do_st8_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
MemOp mop, uintptr_t ra)
{
void *haddr;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_64);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) {
val = bswap64(val);
}
store_atom_8(cpu, ra, haddr, mop, val);
store_atom_8(env, ra, haddr, mop, val);
clear_helper_retaddr();
}
static void do_st16_mmu(CPUState *cpu, vaddr addr, Int128 val,
MemOpIdx oi, uintptr_t ra)
void helper_stq_mmu(CPUArchState *env, uint64_t addr, uint64_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st8_mmu(env, addr, val, get_memop(oi), ra);
}
void cpu_stq_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st8_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
static void do_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val,
MemOp mop, uintptr_t ra)
{
void *haddr;
MemOpIdx mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_128);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) {
val = bswap128(val);
}
store_atom_16(cpu, ra, haddr, mop, val);
store_atom_16(env, ra, haddr, mop, val);
clear_helper_retaddr();
}
void helper_st16_mmu(CPUArchState *env, uint64_t addr, Int128 val,
MemOpIdx oi, uintptr_t ra)
{
do_st16_mmu(env, addr, val, get_memop(oi), ra);
}
void helper_st_i128(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi)
{
helper_st16_mmu(env, addr, val, oi, GETPC());
}
void cpu_st16_mmu(CPUArchState *env, abi_ptr addr,
Int128 val, MemOpIdx oi, uintptr_t ra)
{
do_st16_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr ptr)
{
uint32_t ret;
@@ -1181,7 +1330,7 @@ uint8_t cpu_ldb_code_mmu(CPUArchState *env, abi_ptr addr,
void *haddr;
uint8_t ret;
haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_INST_FETCH);
haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_INST_FETCH);
ret = ldub_p(haddr);
clear_helper_retaddr();
return ret;
@@ -1193,7 +1342,7 @@ uint16_t cpu_ldw_code_mmu(CPUArchState *env, abi_ptr addr,
void *haddr;
uint16_t ret;
haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_INST_FETCH);
haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_INST_FETCH);
ret = lduw_p(haddr);
clear_helper_retaddr();
if (get_memop(oi) & MO_BSWAP) {
@@ -1208,7 +1357,7 @@ uint32_t cpu_ldl_code_mmu(CPUArchState *env, abi_ptr addr,
void *haddr;
uint32_t ret;
haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_INST_FETCH);
haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_INST_FETCH);
ret = ldl_p(haddr);
clear_helper_retaddr();
if (get_memop(oi) & MO_BSWAP) {
@@ -1223,7 +1372,7 @@ uint64_t cpu_ldq_code_mmu(CPUArchState *env, abi_ptr addr,
void *haddr;
uint64_t ret;
haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD);
ret = ldq_p(haddr);
clear_helper_retaddr();
if (get_memop(oi) & MO_BSWAP) {
@@ -1237,7 +1386,7 @@ uint64_t cpu_ldq_code_mmu(CPUArchState *env, abi_ptr addr,
/*
* Do not allow unaligned operations to proceed. Return the host address.
*/
static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi,
int size, uintptr_t retaddr)
{
MemOp mop = get_memop(oi);
@@ -1246,15 +1395,15 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
/* Enforce guest required alignment. */
if (unlikely(addr & ((1 << a_bits) - 1))) {
cpu_loop_exit_sigbus(cpu, addr, MMU_DATA_STORE, retaddr);
cpu_loop_exit_sigbus(env_cpu(env), addr, MMU_DATA_STORE, retaddr);
}
/* Enforce qemu required alignment. */
if (unlikely(addr & (size - 1))) {
cpu_loop_exit_atomic(cpu, retaddr);
cpu_loop_exit_atomic(env_cpu(env), retaddr);
}
ret = g2h(cpu, addr);
ret = g2h(env_cpu(env), addr);
set_helper_retaddr(retaddr);
return ret;
}
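The change that repeats through this file is the type taken by the user-mode load/store helpers: one side of each hunk passes CPUArchState * and derives the vCPU with env_cpu(), the other passes CPUState * directly. A minimal sketch of the bridge between the two conventions, reusing the do_ld1_mmu() signature shown above (not standalone; it assumes the declarations in this file, and the wrapper name is hypothetical):
static uint8_t do_ld1_from_env(CPUArchState *env, vaddr addr,
                               MemOpIdx oi, uintptr_t ra)
{
    /* env_cpu() maps the per-target CPUArchState back to its CPUState. */
    CPUState *cpu = env_cpu(env);

    return do_ld1_mmu(cpu, addr, oi, ra, MMU_DATA_LOAD);
}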

View File

@@ -1,18 +0,0 @@
/*
* SPDX-FileContributor: Philippe Mathieu-Daudé <philmd@linaro.org>
* SPDX-FileCopyrightText: 2023 Linaro Ltd.
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef ACCEL_TCG_VCPU_STATE_H
#define ACCEL_TCG_VCPU_STATE_H
#include "hw/core/cpu.h"
#ifdef CONFIG_USER_ONLY
static inline TaskState *get_task_state(const CPUState *cs)
{
return cs->opaque;
}
#endif
#endif
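For context, a hedged example of how the inline helper above is meant to be used from user-mode-only code (the surrounding function is hypothetical):
#ifdef CONFIG_USER_ONLY
static void example_dump_task(CPUState *cs)
{
    /* get_task_state() just unwraps cs->opaque, as defined above. */
    TaskState *ts = get_task_state(cs);

    (void)ts;   /* ... per-guest-thread bookkeeping would go here ... */
}
#endif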

View File

@@ -15,7 +15,6 @@
#include "hw/xen/xen_native.h"
#include "hw/xen/xen-legacy-backend.h"
#include "hw/xen/xen_pt.h"
#include "hw/xen/xen_igd.h"
#include "chardev/char.h"
#include "qemu/accel.h"
#include "sysemu/cpus.h"

View File

@@ -904,7 +904,7 @@ static void alsa_init_per_direction(AudiodevAlsaPerDirectionOptions *apdo)
}
}
static void *alsa_audio_init(Audiodev *dev, Error **errp)
static void *alsa_audio_init(Audiodev *dev)
{
AudiodevAlsaOptions *aopts;
assert(dev->driver == AUDIODEV_DRIVER_ALSA);
@@ -960,6 +960,7 @@ static struct audio_driver alsa_audio_driver = {
.init = alsa_audio_init,
.fini = alsa_audio_fini,
.pcm_ops = &alsa_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (ALSAVoiceOut),

View File

@@ -26,7 +26,6 @@
#include "audio/audio.h"
#include "monitor/hmp.h"
#include "monitor/monitor.h"
#include "qapi/error.h"
#include "qapi/qmp/qdict.h"
static QLIST_HEAD (capture_list_head, CaptureState) capture_head;
@@ -66,11 +65,10 @@ void hmp_wavcapture(Monitor *mon, const QDict *qdict)
int nchannels = qdict_get_try_int(qdict, "nchannels", 2);
const char *audiodev = qdict_get_str(qdict, "audiodev");
CaptureState *s;
Error *local_err = NULL;
AudioState *as = audio_state_by_name(audiodev, &local_err);
AudioState *as = audio_state_by_name(audiodev);
if (!as) {
error_report_err(local_err);
monitor_printf(mon, "Audiodev '%s' not found\n", audiodev);
return;
}

View File

@@ -32,9 +32,7 @@
#include "qapi/qobject-input-visitor.h"
#include "qapi/qapi-visit-audio.h"
#include "qapi/qapi-commands-audio.h"
#include "qapi/qmp/qdict.h"
#include "qemu/cutils.h"
#include "qemu/error-report.h"
#include "qemu/log.h"
#include "qemu/module.h"
#include "qemu/help_option.h"
@@ -63,22 +61,19 @@ const char *audio_prio_list[] = {
"spice",
CONFIG_AUDIO_DRIVERS
"none",
"wav",
NULL
};
static QLIST_HEAD(, audio_driver) audio_drivers;
static AudiodevListHead audiodevs =
QSIMPLEQ_HEAD_INITIALIZER(audiodevs);
static AudiodevListHead default_audiodevs =
QSIMPLEQ_HEAD_INITIALIZER(default_audiodevs);
static AudiodevListHead audiodevs = QSIMPLEQ_HEAD_INITIALIZER(audiodevs);
void audio_driver_register(audio_driver *drv)
{
QLIST_INSERT_HEAD(&audio_drivers, drv, next);
}
static audio_driver *audio_driver_lookup(const char *name)
audio_driver *audio_driver_lookup(const char *name)
{
struct audio_driver *d;
Error *local_err = NULL;
@@ -104,7 +99,6 @@ static audio_driver *audio_driver_lookup(const char *name)
static QTAILQ_HEAD(AudioStateHead, AudioState) audio_states =
QTAILQ_HEAD_INITIALIZER(audio_states);
static AudioState *default_audio_state;
const struct mixeng_volume nominal_volume = {
.mute = 0,
@@ -117,6 +111,8 @@ const struct mixeng_volume nominal_volume = {
#endif
};
static bool legacy_config = true;
int audio_bug (const char *funcname, int cond)
{
if (cond) {
@@ -1557,11 +1553,9 @@ size_t audio_generic_read(HWVoiceIn *hw, void *buf, size_t size)
}
static int audio_driver_init(AudioState *s, struct audio_driver *drv,
Audiodev *dev, Error **errp)
bool msg, Audiodev *dev)
{
Error *local_err = NULL;
s->drv_opaque = drv->init(dev, &local_err);
s->drv_opaque = drv->init(dev);
if (s->drv_opaque) {
if (!drv->pcm_ops->get_buffer_in) {
@@ -1573,15 +1567,13 @@ static int audio_driver_init(AudioState *s, struct audio_driver *drv,
drv->pcm_ops->put_buffer_out = audio_generic_put_buffer_out;
}
audio_init_nb_voices_out(s, drv, 1);
audio_init_nb_voices_in(s, drv, 0);
audio_init_nb_voices_out(s, drv);
audio_init_nb_voices_in(s, drv);
s->drv = drv;
return 0;
} else {
if (local_err) {
error_propagate(errp, local_err);
} else {
error_setg(errp, "Could not init `%s' audio driver", drv->name);
if (msg) {
dolog("Could not init `%s' audio driver\n", drv->name);
}
return -1;
}
@@ -1661,7 +1653,6 @@ static void free_audio_state(AudioState *s)
void audio_cleanup(void)
{
default_audio_state = NULL;
while (!QTAILQ_EMPTY(&audio_states)) {
AudioState *s = QTAILQ_FIRST(&audio_states);
QTAILQ_REMOVE(&audio_states, s, list);
@@ -1683,30 +1674,24 @@ static const VMStateDescription vmstate_audio = {
.version_id = 1,
.minimum_version_id = 1,
.needed = vmstate_audio_needed,
.fields = (const VMStateField[]) {
.fields = (VMStateField[]) {
VMSTATE_END_OF_LIST()
}
};
void audio_create_default_audiodevs(void)
static void audio_validate_opts(Audiodev *dev, Error **errp);
static AudiodevListEntry *audiodev_find(
AudiodevListHead *head, const char *drvname)
{
for (int i = 0; audio_prio_list[i]; i++) {
if (audio_driver_lookup(audio_prio_list[i])) {
QDict *dict = qdict_new();
Audiodev *dev = NULL;
Visitor *v;
qdict_put_str(dict, "driver", audio_prio_list[i]);
qdict_put_str(dict, "id", "#default");
v = qobject_input_visitor_new_keyval(QOBJECT(dict));
qobject_unref(dict);
visit_type_Audiodev(v, NULL, &dev, &error_fatal);
visit_free(v);
audio_define_default(dev, &error_abort);
AudiodevListEntry *e;
QSIMPLEQ_FOREACH(e, head, next) {
if (strcmp(AudiodevDriver_str(e->dev->driver), drvname) == 0) {
return e;
}
}
return NULL;
}
/*
@@ -1715,16 +1700,62 @@ void audio_create_default_audiodevs(void)
* if dev == NULL => legacy implicit initialization, return the already created
* state or create a new one
*/
static AudioState *audio_init(Audiodev *dev, Error **errp)
static AudioState *audio_init(Audiodev *dev, const char *name)
{
static bool atexit_registered;
size_t i;
int done = 0;
const char *drvname;
const char *drvname = NULL;
VMChangeStateEntry *vmse;
AudioState *s;
struct audio_driver *driver;
/* silence gcc warning about uninitialized variable */
AudiodevListHead head = QSIMPLEQ_HEAD_INITIALIZER(head);
if (using_spice) {
/*
* When using spice allow the spice audio driver being picked
* as default.
*
* Temporary hack. Using audio devices without explicit
* audiodev= property is already deprecated. Same goes for
* the -soundhw switch. Once this support gets finally
* removed we can also drop the concept of a default audio
* backend and this can go away.
*/
driver = audio_driver_lookup("spice");
if (driver) {
driver->can_be_default = 1;
}
}
if (dev) {
/* -audiodev option */
legacy_config = false;
drvname = AudiodevDriver_str(dev->driver);
} else if (!QTAILQ_EMPTY(&audio_states)) {
if (!legacy_config) {
dolog("Device %s: audiodev default parameter is deprecated, please "
"specify audiodev=%s\n", name,
QTAILQ_FIRST(&audio_states)->dev->id);
}
return QTAILQ_FIRST(&audio_states);
} else {
/* legacy implicit initialization */
head = audio_handle_legacy_opts();
/*
* In case of legacy initialization, all Audiodevs in the list will have
* the same configuration (except the driver), so it doesn't matter which
* one we chose. We need an Audiodev to set up AudioState before we can
* init a driver. Also note that dev at this point is still in the
* list.
*/
dev = QSIMPLEQ_FIRST(&head)->dev;
audio_validate_opts(dev, &error_abort);
}
s = g_new0(AudioState, 1);
s->dev = dev;
QLIST_INIT (&s->hw_head_out);
QLIST_INIT (&s->hw_head_in);
@@ -1736,39 +1767,56 @@ static AudioState *audio_init(Audiodev *dev, Error **errp)
s->ts = timer_new_ns(QEMU_CLOCK_VIRTUAL, audio_timer, s);
if (dev) {
/* -audiodev option */
s->dev = dev;
drvname = AudiodevDriver_str(dev->driver);
s->nb_hw_voices_out = audio_get_pdo_out(dev)->voices;
s->nb_hw_voices_in = audio_get_pdo_in(dev)->voices;
if (s->nb_hw_voices_out < 1) {
dolog ("Bogus number of playback voices %d, setting to 1\n",
s->nb_hw_voices_out);
s->nb_hw_voices_out = 1;
}
if (s->nb_hw_voices_in < 0) {
dolog ("Bogus number of capture voices %d, setting to 0\n",
s->nb_hw_voices_in);
s->nb_hw_voices_in = 0;
}
if (drvname) {
driver = audio_driver_lookup(drvname);
if (driver) {
done = !audio_driver_init(s, driver, dev, errp);
done = !audio_driver_init(s, driver, true, dev);
} else {
error_setg(errp, "Unknown audio driver `%s'", drvname);
dolog ("Unknown audio driver `%s'\n", drvname);
}
if (!done) {
goto out;
free_audio_state(s);
return NULL;
}
} else {
assert(!default_audio_state);
for (;;) {
AudiodevListEntry *e = QSIMPLEQ_FIRST(&default_audiodevs);
if (!e) {
error_setg(errp, "no default audio driver available");
goto out;
for (i = 0; audio_prio_list[i]; i++) {
AudiodevListEntry *e = audiodev_find(&head, audio_prio_list[i]);
driver = audio_driver_lookup(audio_prio_list[i]);
if (e && driver) {
s->dev = dev = e->dev;
audio_validate_opts(dev, &error_abort);
done = !audio_driver_init(s, driver, false, dev);
if (done) {
e->dev = NULL;
break;
}
}
s->dev = dev = e->dev;
QSIMPLEQ_REMOVE_HEAD(&default_audiodevs, next);
g_free(e);
drvname = AudiodevDriver_str(dev->driver);
driver = audio_driver_lookup(drvname);
if (!audio_driver_init(s, driver, dev, NULL)) {
break;
}
qapi_free_Audiodev(dev);
s->dev = NULL;
}
}
audio_free_audiodev_list(&head);
if (!done) {
driver = audio_driver_lookup("none");
done = !audio_driver_init(s, driver, false, dev);
assert(done);
dolog("warning: Using timer based audio emulation\n");
}
if (dev->timer_period <= 0) {
s->period_ticks = 1;
@@ -1784,43 +1832,29 @@ static AudioState *audio_init(Audiodev *dev, Error **errp)
QTAILQ_INSERT_TAIL(&audio_states, s, list);
QLIST_INIT (&s->card_head);
vmstate_register_any(NULL, &vmstate_audio, s);
vmstate_register (NULL, 0, &vmstate_audio, s);
return s;
out:
free_audio_state(s);
return NULL;
}
AudioState *audio_get_default_audio_state(Error **errp)
void audio_free_audiodev_list(AudiodevListHead *head)
{
if (!default_audio_state) {
default_audio_state = audio_init(NULL, errp);
if (!default_audio_state) {
if (!QSIMPLEQ_EMPTY(&audiodevs)) {
error_append_hint(errp, "Perhaps you wanted to use -audio or set audiodev=%s?\n",
QSIMPLEQ_FIRST(&audiodevs)->dev->id);
}
}
AudiodevListEntry *e;
while ((e = QSIMPLEQ_FIRST(head))) {
QSIMPLEQ_REMOVE_HEAD(head, next);
qapi_free_Audiodev(e->dev);
g_free(e);
}
return default_audio_state;
}
bool AUD_register_card (const char *name, QEMUSoundCard *card, Error **errp)
void AUD_register_card (const char *name, QEMUSoundCard *card)
{
if (!card->state) {
card->state = audio_get_default_audio_state(errp);
if (!card->state) {
return false;
}
card->state = audio_init(NULL, name);
}
card->name = g_strdup (name);
memset (&card->entries, 0, sizeof (card->entries));
QLIST_INSERT_HEAD(&card->state->card_head, card, entries);
return true;
}
void AUD_remove_card (QEMUSoundCard *card)
@@ -1842,8 +1876,10 @@ CaptureVoiceOut *AUD_add_capture(
struct capture_callback *cb;
if (!s) {
error_report("Capturing without setting an audiodev is not supported");
abort();
if (!legacy_config) {
dolog("Capturing without setting an audiodev is deprecated\n");
}
s = audio_init(NULL, NULL);
}
if (!audio_get_pdo_out(s->dev)->mixing_engine) {
@@ -2147,24 +2183,17 @@ void audio_define(Audiodev *dev)
QSIMPLEQ_INSERT_TAIL(&audiodevs, e, next);
}
void audio_define_default(Audiodev *dev, Error **errp)
{
AudiodevListEntry *e;
audio_validate_opts(dev, errp);
e = g_new0(AudiodevListEntry, 1);
e->dev = dev;
QSIMPLEQ_INSERT_TAIL(&default_audiodevs, e, next);
}
void audio_init_audiodevs(void)
bool audio_init_audiodevs(void)
{
AudiodevListEntry *e;
QSIMPLEQ_FOREACH(e, &audiodevs, next) {
audio_init(e->dev, &error_fatal);
if (!audio_init(e->dev, NULL)) {
return false;
}
}
return true;
}
audsettings audiodev_to_audsettings(AudiodevPerDirectionOptions *pdo)
@@ -2226,7 +2255,7 @@ int audio_buffer_bytes(AudiodevPerDirectionOptions *pdo,
audioformat_bytes_per_sample(as->fmt);
}
AudioState *audio_state_by_name(const char *name, Error **errp)
AudioState *audio_state_by_name(const char *name)
{
AudioState *s;
QTAILQ_FOREACH(s, &audio_states, list) {
@@ -2235,7 +2264,6 @@ AudioState *audio_state_by_name(const char *name, Error **errp)
return s;
}
}
error_setg(errp, "audiodev '%s' not found", name);
return NULL;
}
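A hedged sketch of what the Error-returning entry points in this file (AUD_register_card(), audio_state_by_name(), audio_get_default_audio_state()) look like from a sound device's realize method. Every example_* name and the QOM cast are hypothetical; a real device would normally get card.state filled in through its audiodev property, and the by-name lookup is done by hand here purely to show the errp-based variant:
static void example_soundcard_realize(DeviceState *dev, Error **errp)
{
    ExampleSoundCardState *s = EXAMPLE_SOUNDCARD(dev);   /* hypothetical state/type */

    if (s->audiodev) {
        /* explicit audiodev= property: resolve it, or fail realize */
        s->card.state = audio_state_by_name(s->audiodev, errp);
        if (!s->card.state) {
            return;
        }
    }
    /* with a NULL state, AUD_register_card() falls back to the default backend */
    if (!AUD_register_card("example-card", &s->card, errp)) {
        return;   /* no usable audio backend; *errp already set */
    }
    /* ... AUD_open_out() / AUD_open_in() as usual ... */
}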

View File

@@ -94,7 +94,7 @@ typedef struct QEMUAudioTimeStamp {
void AUD_vlog (const char *cap, const char *fmt, va_list ap) G_GNUC_PRINTF(2, 0);
void AUD_log (const char *cap, const char *fmt, ...) G_GNUC_PRINTF(2, 3);
bool AUD_register_card (const char *name, QEMUSoundCard *card, Error **errp);
void AUD_register_card (const char *name, QEMUSoundCard *card);
void AUD_remove_card (QEMUSoundCard *card);
CaptureVoiceOut *AUD_add_capture(
AudioState *s,
@@ -169,14 +169,12 @@ void audio_sample_from_uint64(void *samples, int pos,
uint64_t left, uint64_t right);
void audio_define(Audiodev *audio);
void audio_define_default(Audiodev *dev, Error **errp);
void audio_parse_option(const char *opt);
void audio_create_default_audiodevs(void);
void audio_init_audiodevs(void);
bool audio_init_audiodevs(void);
void audio_help(void);
void audio_legacy_help(void);
AudioState *audio_state_by_name(const char *name, Error **errp);
AudioState *audio_get_default_audio_state(Error **errp);
AudioState *audio_state_by_name(const char *name);
const char *audio_get_id(QEMUSoundCard *card);
#define DEFINE_AUDIO_PROPERTIES(_s, _f) \

View File

@@ -140,12 +140,13 @@ typedef struct audio_driver audio_driver;
struct audio_driver {
const char *name;
const char *descr;
void *(*init) (Audiodev *, Error **);
void *(*init) (Audiodev *);
void (*fini) (void *);
#ifdef CONFIG_GIO
void (*set_dbus_server) (AudioState *s, GDBusObjectManagerServer *manager, bool p2p);
#endif
struct audio_pcm_ops *pcm_ops;
int can_be_default;
int max_voices_out;
int max_voices_in;
size_t voice_size_out;
@@ -242,6 +243,7 @@ extern const struct mixeng_volume nominal_volume;
extern const char *audio_prio_list[];
void audio_driver_register(audio_driver *drv);
audio_driver *audio_driver_lookup(const char *name);
void audio_pcm_init_info (struct audio_pcm_info *info, struct audsettings *as);
void audio_pcm_info_clear_buf (struct audio_pcm_info *info, void *buf, int len);
@@ -295,6 +297,9 @@ typedef struct AudiodevListEntry {
} AudiodevListEntry;
typedef QSIMPLEQ_HEAD(, AudiodevListEntry) AudiodevListHead;
AudiodevListHead audio_handle_legacy_opts(void);
void audio_free_audiodev_list(AudiodevListHead *head);
void audio_create_pdos(Audiodev *dev);
AudiodevPerDirectionOptions *audio_get_pdo_in(Audiodev *dev);
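To connect the struct audio_driver fields with audio_driver_register(), a hedged skeleton of a backend registration; all example_* symbols are hypothetical, and using type_init() as the registration hook is an assumption modeled on the in-tree backends:
static struct audio_driver example_audio_driver = {
    .name           = "example",
    .descr          = "illustrative stub backend",
    .init           = example_audio_init,   /* matches the init signature above */
    .fini           = example_audio_fini,
    .pcm_ops        = &example_pcm_ops,
    .can_be_default = 1,                     /* only on the side that keeps this field */
    .max_voices_out = 1,
    .max_voices_in  = 0,
    .voice_size_out = sizeof(ExampleVoiceOut),
};

static void register_audio_example(void)
{
    audio_driver_register(&example_audio_driver);
}
type_init(register_audio_example);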

audio/audio_legacy.c (new file, 591 lines)
View File

@@ -0,0 +1,591 @@
/*
* QEMU Audio subsystem: legacy configuration handling
*
* Copyright (c) 2015-2019 Zoltán Kővágó <DirtY.iCE.hu@gmail.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "audio.h"
#include "audio_int.h"
#include "qemu/cutils.h"
#include "qemu/timer.h"
#include "qapi/error.h"
#include "qapi/qapi-visit-audio.h"
#include "qapi/visitor-impl.h"
#define AUDIO_CAP "audio-legacy"
#include "audio_int.h"
static uint32_t toui32(const char *str)
{
uint64_t ret;
if (parse_uint_full(str, 10, &ret) || ret > UINT32_MAX) {
dolog("Invalid integer value `%s'\n", str);
exit(1);
}
return ret;
}
/* helper functions to convert env variables */
static void get_bool(const char *env, bool *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
*dst = toui32(val) != 0;
*has_dst = true;
}
}
static void get_int(const char *env, uint32_t *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
*dst = toui32(val);
*has_dst = true;
}
}
static void get_str(const char *env, char **dst)
{
const char *val = getenv(env);
if (val) {
g_free(*dst);
*dst = g_strdup(val);
}
}
static void get_fmt(const char *env, AudioFormat *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
size_t i;
for (i = 0; i < AudioFormat_lookup.size; ++i) {
if (strcasecmp(val, AudioFormat_lookup.array[i]) == 0) {
*dst = i;
*has_dst = true;
return;
}
}
dolog("Invalid audio format `%s'\n", val);
exit(1);
}
}
#if defined(CONFIG_AUDIO_ALSA) || defined(CONFIG_AUDIO_DSOUND)
static void get_millis_to_usecs(const char *env, uint32_t *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
*dst = toui32(val) * 1000;
*has_dst = true;
}
}
#endif
#if defined(CONFIG_AUDIO_ALSA) || defined(CONFIG_AUDIO_COREAUDIO) || \
defined(CONFIG_AUDIO_PA) || defined(CONFIG_AUDIO_SDL) || \
defined(CONFIG_AUDIO_DSOUND) || defined(CONFIG_AUDIO_OSS)
static uint32_t frames_to_usecs(uint32_t frames,
AudiodevPerDirectionOptions *pdo)
{
uint32_t freq = pdo->has_frequency ? pdo->frequency : 44100;
return (frames * 1000000 + freq / 2) / freq;
}
#endif
#ifdef CONFIG_AUDIO_COREAUDIO
static void get_frames_to_usecs(const char *env, uint32_t *dst, bool *has_dst,
AudiodevPerDirectionOptions *pdo)
{
const char *val = getenv(env);
if (val) {
*dst = frames_to_usecs(toui32(val), pdo);
*has_dst = true;
}
}
#endif
#if defined(CONFIG_AUDIO_PA) || defined(CONFIG_AUDIO_SDL) || \
defined(CONFIG_AUDIO_DSOUND) || defined(CONFIG_AUDIO_OSS)
static uint32_t samples_to_usecs(uint32_t samples,
AudiodevPerDirectionOptions *pdo)
{
uint32_t channels = pdo->has_channels ? pdo->channels : 2;
return frames_to_usecs(samples / channels, pdo);
}
#endif
#if defined(CONFIG_AUDIO_PA) || defined(CONFIG_AUDIO_SDL)
static void get_samples_to_usecs(const char *env, uint32_t *dst, bool *has_dst,
AudiodevPerDirectionOptions *pdo)
{
const char *val = getenv(env);
if (val) {
*dst = samples_to_usecs(toui32(val), pdo);
*has_dst = true;
}
}
#endif
#if defined(CONFIG_AUDIO_DSOUND) || defined(CONFIG_AUDIO_OSS)
static uint32_t bytes_to_usecs(uint32_t bytes, AudiodevPerDirectionOptions *pdo)
{
AudioFormat fmt = pdo->has_format ? pdo->format : AUDIO_FORMAT_S16;
uint32_t bytes_per_sample = audioformat_bytes_per_sample(fmt);
return samples_to_usecs(bytes / bytes_per_sample, pdo);
}
static void get_bytes_to_usecs(const char *env, uint32_t *dst, bool *has_dst,
AudiodevPerDirectionOptions *pdo)
{
const char *val = getenv(env);
if (val) {
*dst = bytes_to_usecs(toui32(val), pdo);
*has_dst = true;
}
}
#endif
/* backend specific functions */
#ifdef CONFIG_AUDIO_ALSA
/* ALSA */
static void handle_alsa_per_direction(
AudiodevAlsaPerDirectionOptions *apdo, const char *prefix)
{
char buf[64];
size_t len = strlen(prefix);
bool size_in_usecs = false;
bool dummy;
memcpy(buf, prefix, len);
strcpy(buf + len, "TRY_POLL");
get_bool(buf, &apdo->try_poll, &apdo->has_try_poll);
strcpy(buf + len, "DEV");
get_str(buf, &apdo->dev);
strcpy(buf + len, "SIZE_IN_USEC");
get_bool(buf, &size_in_usecs, &dummy);
strcpy(buf + len, "PERIOD_SIZE");
get_int(buf, &apdo->period_length, &apdo->has_period_length);
if (apdo->has_period_length && !size_in_usecs) {
apdo->period_length = frames_to_usecs(
apdo->period_length,
qapi_AudiodevAlsaPerDirectionOptions_base(apdo));
}
strcpy(buf + len, "BUFFER_SIZE");
get_int(buf, &apdo->buffer_length, &apdo->has_buffer_length);
if (apdo->has_buffer_length && !size_in_usecs) {
apdo->buffer_length = frames_to_usecs(
apdo->buffer_length,
qapi_AudiodevAlsaPerDirectionOptions_base(apdo));
}
}
static void handle_alsa(Audiodev *dev)
{
AudiodevAlsaOptions *aopt = &dev->u.alsa;
handle_alsa_per_direction(aopt->in, "QEMU_ALSA_ADC_");
handle_alsa_per_direction(aopt->out, "QEMU_ALSA_DAC_");
get_millis_to_usecs("QEMU_ALSA_THRESHOLD",
&aopt->threshold, &aopt->has_threshold);
}
#endif
#ifdef CONFIG_AUDIO_COREAUDIO
/* coreaudio */
static void handle_coreaudio(Audiodev *dev)
{
get_frames_to_usecs(
"QEMU_COREAUDIO_BUFFER_SIZE",
&dev->u.coreaudio.out->buffer_length,
&dev->u.coreaudio.out->has_buffer_length,
qapi_AudiodevCoreaudioPerDirectionOptions_base(dev->u.coreaudio.out));
get_int("QEMU_COREAUDIO_BUFFER_COUNT",
&dev->u.coreaudio.out->buffer_count,
&dev->u.coreaudio.out->has_buffer_count);
}
#endif
#ifdef CONFIG_AUDIO_DSOUND
/* dsound */
static void handle_dsound(Audiodev *dev)
{
get_millis_to_usecs("QEMU_DSOUND_LATENCY_MILLIS",
&dev->u.dsound.latency, &dev->u.dsound.has_latency);
get_bytes_to_usecs("QEMU_DSOUND_BUFSIZE_OUT",
&dev->u.dsound.out->buffer_length,
&dev->u.dsound.out->has_buffer_length,
dev->u.dsound.out);
get_bytes_to_usecs("QEMU_DSOUND_BUFSIZE_IN",
&dev->u.dsound.in->buffer_length,
&dev->u.dsound.in->has_buffer_length,
dev->u.dsound.in);
}
#endif
#ifdef CONFIG_AUDIO_OSS
/* OSS */
static void handle_oss_per_direction(
AudiodevOssPerDirectionOptions *opdo, const char *try_poll_env,
const char *dev_env)
{
get_bool(try_poll_env, &opdo->try_poll, &opdo->has_try_poll);
get_str(dev_env, &opdo->dev);
get_bytes_to_usecs("QEMU_OSS_FRAGSIZE",
&opdo->buffer_length, &opdo->has_buffer_length,
qapi_AudiodevOssPerDirectionOptions_base(opdo));
get_int("QEMU_OSS_NFRAGS", &opdo->buffer_count,
&opdo->has_buffer_count);
}
static void handle_oss(Audiodev *dev)
{
AudiodevOssOptions *oopt = &dev->u.oss;
handle_oss_per_direction(oopt->in, "QEMU_AUDIO_ADC_TRY_POLL",
"QEMU_OSS_ADC_DEV");
handle_oss_per_direction(oopt->out, "QEMU_AUDIO_DAC_TRY_POLL",
"QEMU_OSS_DAC_DEV");
get_bool("QEMU_OSS_MMAP", &oopt->try_mmap, &oopt->has_try_mmap);
get_bool("QEMU_OSS_EXCLUSIVE", &oopt->exclusive, &oopt->has_exclusive);
get_int("QEMU_OSS_POLICY", &oopt->dsp_policy, &oopt->has_dsp_policy);
}
#endif
#ifdef CONFIG_AUDIO_PA
/* pulseaudio */
static void handle_pa_per_direction(
AudiodevPaPerDirectionOptions *ppdo, const char *env)
{
get_str(env, &ppdo->name);
}
static void handle_pa(Audiodev *dev)
{
handle_pa_per_direction(dev->u.pa.in, "QEMU_PA_SOURCE");
handle_pa_per_direction(dev->u.pa.out, "QEMU_PA_SINK");
get_samples_to_usecs(
"QEMU_PA_SAMPLES", &dev->u.pa.in->buffer_length,
&dev->u.pa.in->has_buffer_length,
qapi_AudiodevPaPerDirectionOptions_base(dev->u.pa.in));
get_samples_to_usecs(
"QEMU_PA_SAMPLES", &dev->u.pa.out->buffer_length,
&dev->u.pa.out->has_buffer_length,
qapi_AudiodevPaPerDirectionOptions_base(dev->u.pa.out));
get_str("QEMU_PA_SERVER", &dev->u.pa.server);
}
#endif
#ifdef CONFIG_AUDIO_SDL
/* SDL */
static void handle_sdl(Audiodev *dev)
{
/* SDL is output only */
get_samples_to_usecs("QEMU_SDL_SAMPLES", &dev->u.sdl.out->buffer_length,
&dev->u.sdl.out->has_buffer_length,
qapi_AudiodevSdlPerDirectionOptions_base(dev->u.sdl.out));
}
#endif
/* wav */
static void handle_wav(Audiodev *dev)
{
get_int("QEMU_WAV_FREQUENCY",
&dev->u.wav.out->frequency, &dev->u.wav.out->has_frequency);
get_fmt("QEMU_WAV_FORMAT", &dev->u.wav.out->format,
&dev->u.wav.out->has_format);
get_int("QEMU_WAV_DAC_FIXED_CHANNELS",
&dev->u.wav.out->channels, &dev->u.wav.out->has_channels);
get_str("QEMU_WAV_PATH", &dev->u.wav.path);
}
/* general */
static void handle_per_direction(
AudiodevPerDirectionOptions *pdo, const char *prefix)
{
char buf[64];
size_t len = strlen(prefix);
memcpy(buf, prefix, len);
strcpy(buf + len, "FIXED_SETTINGS");
get_bool(buf, &pdo->fixed_settings, &pdo->has_fixed_settings);
strcpy(buf + len, "FIXED_FREQ");
get_int(buf, &pdo->frequency, &pdo->has_frequency);
strcpy(buf + len, "FIXED_FMT");
get_fmt(buf, &pdo->format, &pdo->has_format);
strcpy(buf + len, "FIXED_CHANNELS");
get_int(buf, &pdo->channels, &pdo->has_channels);
strcpy(buf + len, "VOICES");
get_int(buf, &pdo->voices, &pdo->has_voices);
}
static AudiodevListEntry *legacy_opt(const char *drvname)
{
AudiodevListEntry *e = g_new0(AudiodevListEntry, 1);
e->dev = g_new0(Audiodev, 1);
e->dev->id = g_strdup(drvname);
e->dev->driver = qapi_enum_parse(
&AudiodevDriver_lookup, drvname, -1, &error_abort);
audio_create_pdos(e->dev);
handle_per_direction(audio_get_pdo_in(e->dev), "QEMU_AUDIO_ADC_");
handle_per_direction(audio_get_pdo_out(e->dev), "QEMU_AUDIO_DAC_");
/* Original description: Timer period in HZ (0 - use lowest possible) */
get_int("QEMU_AUDIO_TIMER_PERIOD",
&e->dev->timer_period, &e->dev->has_timer_period);
if (e->dev->has_timer_period && e->dev->timer_period) {
e->dev->timer_period = NANOSECONDS_PER_SECOND / 1000 /
e->dev->timer_period;
}
switch (e->dev->driver) {
#ifdef CONFIG_AUDIO_ALSA
case AUDIODEV_DRIVER_ALSA:
handle_alsa(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_COREAUDIO
case AUDIODEV_DRIVER_COREAUDIO:
handle_coreaudio(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_DSOUND
case AUDIODEV_DRIVER_DSOUND:
handle_dsound(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_OSS
case AUDIODEV_DRIVER_OSS:
handle_oss(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_PA
case AUDIODEV_DRIVER_PA:
handle_pa(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_SDL
case AUDIODEV_DRIVER_SDL:
handle_sdl(e->dev);
break;
#endif
case AUDIODEV_DRIVER_WAV:
handle_wav(e->dev);
break;
default:
break;
}
return e;
}
AudiodevListHead audio_handle_legacy_opts(void)
{
const char *drvname = getenv("QEMU_AUDIO_DRV");
AudiodevListHead head = QSIMPLEQ_HEAD_INITIALIZER(head);
if (drvname) {
AudiodevListEntry *e;
audio_driver *driver = audio_driver_lookup(drvname);
if (!driver) {
dolog("Unknown audio driver `%s'\n", drvname);
exit(1);
}
e = legacy_opt(drvname);
QSIMPLEQ_INSERT_TAIL(&head, e, next);
} else {
for (int i = 0; audio_prio_list[i]; i++) {
audio_driver *driver = audio_driver_lookup(audio_prio_list[i]);
if (driver && driver->can_be_default) {
AudiodevListEntry *e = legacy_opt(driver->name);
QSIMPLEQ_INSERT_TAIL(&head, e, next);
}
}
if (QSIMPLEQ_EMPTY(&head)) {
dolog("Internal error: no default audio driver available\n");
exit(1);
}
}
return head;
}
/* visitor to print -audiodev option */
typedef struct {
Visitor visitor;
bool comma;
GList *path;
} LegacyPrintVisitor;
static bool lv_start_struct(Visitor *v, const char *name, void **obj,
size_t size, Error **errp)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
lv->path = g_list_append(lv->path, g_strdup(name));
return true;
}
static void lv_end_struct(Visitor *v, void **obj)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
lv->path = g_list_delete_link(lv->path, g_list_last(lv->path));
}
static void lv_print_key(Visitor *v, const char *name)
{
GList *e;
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
if (lv->comma) {
putchar(',');
} else {
lv->comma = true;
}
for (e = lv->path; e; e = e->next) {
if (e->data) {
printf("%s.", (const char *) e->data);
}
}
printf("%s=", name);
}
static bool lv_type_int64(Visitor *v, const char *name, int64_t *obj,
Error **errp)
{
lv_print_key(v, name);
printf("%" PRIi64, *obj);
return true;
}
static bool lv_type_uint64(Visitor *v, const char *name, uint64_t *obj,
Error **errp)
{
lv_print_key(v, name);
printf("%" PRIu64, *obj);
return true;
}
static bool lv_type_bool(Visitor *v, const char *name, bool *obj, Error **errp)
{
lv_print_key(v, name);
printf("%s", *obj ? "on" : "off");
return true;
}
static bool lv_type_str(Visitor *v, const char *name, char **obj, Error **errp)
{
const char *str = *obj;
lv_print_key(v, name);
while (*str) {
if (*str == ',') {
putchar(',');
}
putchar(*str++);
}
return true;
}
static void lv_complete(Visitor *v, void *opaque)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
assert(lv->path == NULL);
}
static void lv_free(Visitor *v)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
g_list_free_full(lv->path, g_free);
g_free(lv);
}
static Visitor *legacy_visitor_new(void)
{
LegacyPrintVisitor *lv = g_new0(LegacyPrintVisitor, 1);
lv->visitor.start_struct = lv_start_struct;
lv->visitor.end_struct = lv_end_struct;
/* lists not supported */
lv->visitor.type_int64 = lv_type_int64;
lv->visitor.type_uint64 = lv_type_uint64;
lv->visitor.type_bool = lv_type_bool;
lv->visitor.type_str = lv_type_str;
lv->visitor.type = VISITOR_OUTPUT;
lv->visitor.complete = lv_complete;
lv->visitor.free = lv_free;
return &lv->visitor;
}
void audio_legacy_help(void)
{
AudiodevListHead head;
AudiodevListEntry *e;
printf("Environment variable based configuration deprecated.\n");
printf("Please use the new -audiodev option.\n");
head = audio_handle_legacy_opts();
printf("\nEquivalent -audiodev to your current environment variables:\n");
if (!getenv("QEMU_AUDIO_DRV")) {
printf("(Since you didn't specify QEMU_AUDIO_DRV, I'll list all "
"possibilities)\n");
}
QSIMPLEQ_FOREACH(e, &head, next) {
Visitor *v;
Audiodev *dev = e->dev;
printf("-audiodev ");
v = legacy_visitor_new();
visit_type_Audiodev(v, NULL, &dev, &error_abort);
visit_free(v);
printf("\n");
}
audio_free_audiodev_list(&head);
}
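As a quick check of the unit-conversion helpers above: with the default 44100 Hz frequency, frames_to_usecs() turns 512 frames into (512 * 1000000 + 22050) / 44100 = 11610 microseconds, and samples_to_usecs() divides by the channel count first, so a legacy QEMU_PA_SAMPLES=1024 with the default 2 channels also comes out to roughly 11.6 ms of buffer length.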

View File

@@ -37,12 +37,11 @@
#endif
static void glue(audio_init_nb_voices_, TYPE)(AudioState *s,
struct audio_driver *drv, int min_voices)
struct audio_driver *drv)
{
int max_voices = glue (drv->max_voices_, TYPE);
size_t voice_size = glue(drv->voice_size_, TYPE);
glue (s->nb_hw_voices_, TYPE) = glue(audio_get_pdo_, TYPE)(s->dev)->voices;
if (glue (s->nb_hw_voices_, TYPE) > max_voices) {
if (!max_voices) {
#ifdef DAC
@@ -57,12 +56,6 @@ static void glue(audio_init_nb_voices_, TYPE)(AudioState *s,
glue (s->nb_hw_voices_, TYPE) = max_voices;
}
if (glue (s->nb_hw_voices_, TYPE) < min_voices) {
dolog ("Bogus number of " NAME " voices %d, setting to %d\n",
glue (s->nb_hw_voices_, TYPE),
min_voices);
}
if (audio_bug(__func__, !voice_size && max_voices)) {
dolog ("drv=`%s' voice_size=0 max_voices=%d\n",
drv->name, max_voices);

View File

@@ -44,6 +44,11 @@ typedef struct coreaudioVoiceOut {
bool enabled;
} coreaudioVoiceOut;
#if !defined(MAC_OS_VERSION_12_0) \
|| (MAC_OS_X_VERSION_MIN_REQUIRED < MAC_OS_VERSION_12_0)
#define kAudioObjectPropertyElementMain kAudioObjectPropertyElementMaster
#endif
static const AudioObjectPropertyAddress voice_addr = {
kAudioHardwarePropertyDefaultOutputDevice,
kAudioObjectPropertyScopeGlobal,
@@ -294,7 +299,7 @@ COREAUDIO_WRAPPER_FUNC(write, size_t, (HWVoiceOut *hw, void *buf, size_t size),
#undef COREAUDIO_WRAPPER_FUNC
/*
* callback to feed audiooutput buffer. called without BQL.
* callback to feed audiooutput buffer. called without iothread lock.
* allowed to lock "buf_mutex", but disallowed to have any other locks.
*/
static OSStatus audioDeviceIOProc(
@@ -533,7 +538,7 @@ static void update_device_playback_state(coreaudioVoiceOut *core)
}
}
/* called without BQL. */
/* called without iothread lock. */
static OSStatus handle_voice_change(
AudioObjectID in_object_id,
UInt32 in_number_addresses,
@@ -542,7 +547,7 @@ static OSStatus handle_voice_change(
{
coreaudioVoiceOut *core = in_client_data;
bql_lock();
qemu_mutex_lock_iothread();
if (core->outputDeviceID) {
fini_out_device(core);
@@ -552,7 +557,7 @@ static OSStatus handle_voice_change(
update_device_playback_state(core);
}
bql_unlock();
qemu_mutex_unlock_iothread();
return 0;
}
@@ -639,7 +644,7 @@ static void coreaudio_enable_out(HWVoiceOut *hw, bool enable)
update_device_playback_state(core);
}
static void *coreaudio_audio_init(Audiodev *dev, Error **errp)
static void *coreaudio_audio_init(Audiodev *dev)
{
return dev;
}
@@ -668,6 +673,7 @@ static struct audio_driver coreaudio_audio_driver = {
.init = coreaudio_audio_init,
.fini = coreaudio_audio_fini,
.pcm_ops = &coreaudio_pcm_ops,
.can_be_default = 1,
.max_voices_out = 1,
.max_voices_in = 0,
.voice_size_out = sizeof (coreaudioVoiceOut),

View File

@@ -105,7 +105,7 @@ static size_t dbus_put_buffer_out(HWVoiceOut *hw, void *buf, size_t size)
assert(buf == vo->buf + vo->buf_pos && vo->buf_pos + size <= vo->buf_size);
vo->buf_pos += size;
trace_dbus_audio_put_buffer_out(vo->buf_pos, vo->buf_size);
trace_dbus_audio_put_buffer_out(size);
if (vo->buf_pos < vo->buf_size) {
return size;
@@ -395,7 +395,7 @@ dbus_enable_in(HWVoiceIn *hw, bool enable)
}
static void *
dbus_audio_init(Audiodev *dev, Error **errp)
dbus_audio_init(Audiodev *dev)
{
DBusAudio *da = g_new0(DBusAudio, 1);
@@ -676,6 +676,7 @@ static struct audio_driver dbus_audio_driver = {
.fini = dbus_audio_fini,
.set_dbus_server = dbus_audio_set_server,
.pcm_ops = &dbus_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof(DBusVoiceOut),

View File

@@ -619,7 +619,7 @@ static void dsound_audio_fini (void *opaque)
g_free(s);
}
static void *dsound_audio_init(Audiodev *dev, Error **errp)
static void *dsound_audio_init(Audiodev *dev)
{
int err;
HRESULT hr;
@@ -721,6 +721,7 @@ static struct audio_driver dsound_audio_driver = {
.init = dsound_audio_init,
.fini = dsound_audio_fini,
.pcm_ops = &dsound_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = 1,
.voice_size_out = sizeof (DSoundVoiceOut),

View File

@@ -645,7 +645,7 @@ static int qjack_thread_creator(jack_native_thread_t *thread,
}
#endif
static void *qjack_init(Audiodev *dev, Error **errp)
static void *qjack_init(Audiodev *dev)
{
assert(dev->driver == AUDIODEV_DRIVER_JACK);
return dev;
@@ -676,6 +676,7 @@ static struct audio_driver jack_driver = {
.init = qjack_init,
.fini = qjack_fini,
.pcm_ops = &jack_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof(QJackOut),

View File

@@ -1,6 +1,7 @@
system_ss.add([spice_headers, files('audio.c')])
system_ss.add(files(
'audio-hmp-cmds.c',
'audio_legacy.c',
'mixeng.c',
'noaudio.c',
'wavaudio.c',
@@ -30,8 +31,7 @@ endforeach
if dbus_display
module_ss = ss.source_set()
module_ss.add(when: [gio, pixman],
if_true: [dbus_display1, files('dbusaudio.c')])
module_ss.add(when: [gio, pixman], if_true: files('dbusaudio.c'))
audio_modules += {'dbus': module_ss}
endif

View File

@@ -104,7 +104,7 @@ static void no_enable_in(HWVoiceIn *hw, bool enable)
}
}
static void *no_audio_init(Audiodev *dev, Error **errp)
static void *no_audio_init(Audiodev *dev)
{
return &no_audio_init;
}
@@ -135,6 +135,7 @@ static struct audio_driver no_audio_driver = {
.init = no_audio_init,
.fini = no_audio_fini,
.pcm_ops = &no_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (NoVoiceOut),

View File

@@ -28,7 +28,6 @@
#include "qemu/main-loop.h"
#include "qemu/module.h"
#include "qemu/host-utils.h"
#include "qapi/error.h"
#include "audio.h"
#include "trace.h"
@@ -549,6 +548,7 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
hw->size_emul);
hw->buf_emul = NULL;
} else {
int err;
int trig = 0;
if (ioctl (fd, SNDCTL_DSP_SETTRIGGER, &trig) < 0) {
oss_logerr (errno, "SNDCTL_DSP_SETTRIGGER 0 failed\n");
@@ -736,7 +736,7 @@ static void oss_init_per_direction(AudiodevOssPerDirectionOptions *opdo)
}
}
static void *oss_audio_init(Audiodev *dev, Error **errp)
static void *oss_audio_init(Audiodev *dev)
{
AudiodevOssOptions *oopts;
assert(dev->driver == AUDIODEV_DRIVER_OSS);
@@ -745,12 +745,8 @@ static void *oss_audio_init(Audiodev *dev, Error **errp)
oss_init_per_direction(oopts->in);
oss_init_per_direction(oopts->out);
if (access(oopts->in->dev ?: "/dev/dsp", R_OK | W_OK) < 0) {
error_setg_errno(errp, errno, "%s not accessible", oopts->in->dev ?: "/dev/dsp");
return NULL;
}
if (access(oopts->out->dev ?: "/dev/dsp", R_OK | W_OK) < 0) {
error_setg_errno(errp, errno, "%s not accessible", oopts->out->dev ?: "/dev/dsp");
if (access(oopts->in->dev ?: "/dev/dsp", R_OK | W_OK) < 0 ||
access(oopts->out->dev ?: "/dev/dsp", R_OK | W_OK) < 0) {
return NULL;
}
return dev;
@@ -783,6 +779,7 @@ static struct audio_driver oss_audio_driver = {
.init = oss_audio_init,
.fini = oss_audio_fini,
.pcm_ops = &oss_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (OSSVoiceOut),

View File

@@ -3,7 +3,7 @@
#include "qemu/osdep.h"
#include "qemu/module.h"
#include "audio.h"
#include "qapi/error.h"
#include "qapi/opts-visitor.h"
#include <pulse/pulseaudio.h>
@@ -818,7 +818,7 @@ fail:
return NULL;
}
static void *qpa_audio_init(Audiodev *dev, Error **errp)
static void *qpa_audio_init(Audiodev *dev)
{
paaudio *g;
AudiodevPaOptions *popts = &dev->u.pa;
@@ -834,12 +834,10 @@ static void *qpa_audio_init(Audiodev *dev, Error **errp)
runtime = getenv("XDG_RUNTIME_DIR");
if (!runtime) {
error_setg(errp, "XDG_RUNTIME_DIR not set");
return NULL;
}
snprintf(pidfile, sizeof(pidfile), "%s/pulse/pid", runtime);
if (stat(pidfile, &st) != 0) {
error_setg_errno(errp, errno, "could not stat pidfile %s", pidfile);
return NULL;
}
}
@@ -869,7 +867,6 @@ static void *qpa_audio_init(Audiodev *dev, Error **errp)
}
if (!g->conn) {
g_free(g);
error_setg(errp, "could not connect to PulseAudio server");
return NULL;
}
@@ -931,6 +928,7 @@ static struct audio_driver pa_audio_driver = {
.init = qpa_audio_init,
.fini = qpa_audio_fini,
.pcm_ops = &qpa_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (PAVoiceOut),

View File

@@ -11,8 +11,8 @@
#include "qemu/osdep.h"
#include "qemu/module.h"
#include "audio.h"
#include <errno.h>
#include "qemu/error-report.h"
#include "qapi/error.h"
#include <spa/param/audio/format-utils.h>
#include <spa/utils/ringbuffer.h>
#include <spa/utils/result.h>
@@ -736,7 +736,7 @@ static const struct pw_core_events core_events = {
};
static void *
qpw_audio_init(Audiodev *dev, Error **errp)
qpw_audio_init(Audiodev *dev)
{
g_autofree pwaudio *pw = g_new0(pwaudio, 1);
@@ -748,19 +748,19 @@ qpw_audio_init(Audiodev *dev, Error **errp)
pw->dev = dev;
pw->thread_loop = pw_thread_loop_new("PipeWire thread loop", NULL);
if (pw->thread_loop == NULL) {
error_setg_errno(errp, errno, "Could not create PipeWire loop");
error_report("Could not create PipeWire loop: %s", g_strerror(errno));
goto fail;
}
pw->context =
pw_context_new(pw_thread_loop_get_loop(pw->thread_loop), NULL, 0);
if (pw->context == NULL) {
error_setg_errno(errp, errno, "Could not create PipeWire context");
error_report("Could not create PipeWire context: %s", g_strerror(errno));
goto fail;
}
if (pw_thread_loop_start(pw->thread_loop) < 0) {
error_setg_errno(errp, errno, "Could not start PipeWire loop");
error_report("Could not start PipeWire loop: %s", g_strerror(errno));
goto fail;
}
@@ -769,13 +769,13 @@ qpw_audio_init(Audiodev *dev, Error **errp)
pw->core = pw_context_connect(pw->context, NULL, 0);
if (pw->core == NULL) {
pw_thread_loop_unlock(pw->thread_loop);
goto fail_error;
goto fail;
}
if (pw_core_add_listener(pw->core, &pw->core_listener,
&core_events, pw) < 0) {
pw_thread_loop_unlock(pw->thread_loop);
goto fail_error;
goto fail;
}
if (wait_resync(pw) < 0) {
pw_thread_loop_unlock(pw->thread_loop);
@@ -785,9 +785,8 @@ qpw_audio_init(Audiodev *dev, Error **errp)
return g_steal_pointer(&pw);
fail_error:
error_setg(errp, "Failed to initialize PW context");
fail:
AUD_log(AUDIO_CAP, "Failed to initialize PW context");
if (pw->thread_loop) {
pw_thread_loop_stop(pw->thread_loop);
}
@@ -842,6 +841,7 @@ static struct audio_driver pw_audio_driver = {
.init = qpw_audio_init,
.fini = qpw_audio_fini,
.pcm_ops = &qpw_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof(PWVoiceOut),

View File

@@ -26,7 +26,6 @@
#include <SDL.h>
#include <SDL_thread.h>
#include "qemu/module.h"
#include "qapi/error.h"
#include "audio.h"
#ifndef _WIN32
@@ -450,10 +449,10 @@ static void sdl_enable_in(HWVoiceIn *hw, bool enable)
SDL_PauseAudioDevice(sdl->devid, !enable);
}
static void *sdl_audio_init(Audiodev *dev, Error **errp)
static void *sdl_audio_init(Audiodev *dev)
{
if (SDL_InitSubSystem (SDL_INIT_AUDIO)) {
error_setg(errp, "SDL failed to initialize audio subsystem");
sdl_logerr ("SDL failed to initialize audio subsystem\n");
return NULL;
}
@@ -494,6 +493,7 @@ static struct audio_driver sdl_audio_driver = {
.init = sdl_audio_init,
.fini = sdl_audio_fini,
.pcm_ops = &sdl_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof(SDLVoiceOut),

View File

@@ -518,7 +518,7 @@ static void sndio_fini_in(HWVoiceIn *hw)
sndio_fini(self);
}
static void *sndio_audio_init(Audiodev *dev, Error **errp)
static void *sndio_audio_init(Audiodev *dev)
{
assert(dev->driver == AUDIODEV_DRIVER_SNDIO);
return dev;
@@ -550,6 +550,7 @@ static struct audio_driver sndio_audio_driver = {
.init = sndio_audio_init,
.fini = sndio_audio_fini,
.pcm_ops = &sndio_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof(SndioVoice),

View File

@@ -22,7 +22,6 @@
#include "qemu/module.h"
#include "qemu/error-report.h"
#include "qemu/timer.h"
#include "qapi/error.h"
#include "ui/qemu-spice.h"
#define AUDIO_CAP "spice"
@@ -72,13 +71,11 @@ static const SpiceRecordInterface record_sif = {
.base.minor_version = SPICE_INTERFACE_RECORD_MINOR,
};
static void *spice_audio_init(Audiodev *dev, Error **errp)
static void *spice_audio_init(Audiodev *dev)
{
if (!using_spice) {
error_setg(errp, "Cannot use spice audio without -spice");
return NULL;
}
return &spice_audio_init;
}

View File

@@ -15,7 +15,7 @@ oss_version(int version) "OSS version = 0x%x"
# dbusaudio.c
dbus_audio_register(const char *s, const char *dir) "sender = %s, dir = %s"
dbus_audio_put_buffer_out(size_t pos, size_t size) "buf_pos = %zu, buf_size = %zu"
dbus_audio_put_buffer_out(size_t len) "len = %zu"
dbus_audio_read(size_t len) "len = %zu"
# pwaudio.c

View File

@@ -97,10 +97,6 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
dolog ("WAVE files can not handle 32bit formats\n");
return -1;
case AUDIO_FORMAT_F32:
dolog("WAVE files can not handle float formats\n");
return -1;
default:
abort();
}
@@ -186,7 +182,7 @@ static void wav_enable_out(HWVoiceOut *hw, bool enable)
}
}
static void *wav_audio_init(Audiodev *dev, Error **errp)
static void *wav_audio_init(Audiodev *dev)
{
assert(dev->driver == AUDIODEV_DRIVER_WAV);
return dev;
@@ -212,6 +208,7 @@ static struct audio_driver wav_audio_driver = {
.init = wav_audio_init,
.fini = wav_audio_fini,
.pcm_ops = &wav_pcm_ops,
.can_be_default = 0,
.max_voices_out = 1,
.max_voices_in = 0,
.voice_size_out = sizeof (WAVVoiceOut),

View File

@@ -1,9 +1 @@
source tpm/Kconfig
config IOMMUFD
bool
depends on VFIO
config SPDM_SOCKET
bool
default y

View File

@@ -23,7 +23,6 @@
#include "qemu/osdep.h"
#include "sysemu/cryptodev.h"
#include "qemu/error-report.h"
#include "qapi/error.h"
#include "standard-headers/linux/virtio_crypto.h"
#include "crypto/cipher.h"
@@ -397,8 +396,8 @@ static int cryptodev_builtin_create_session(
case VIRTIO_CRYPTO_HASH_CREATE_SESSION:
case VIRTIO_CRYPTO_MAC_CREATE_SESSION:
default:
error_report("Unsupported opcode :%" PRIu32 "",
sess_info->op_code);
error_setg(&local_error, "Unsupported opcode :%" PRIu32 "",
sess_info->op_code);
return -VIRTIO_CRYPTO_NOTSUPP;
}
@@ -428,9 +427,7 @@ static int cryptodev_builtin_close_session(
CRYPTODEV_BACKEND_BUILTIN(backend);
CryptoDevBackendBuiltinSession *session;
if (session_id >= MAX_NUM_SESSIONS || !builtin->sessions[session_id]) {
return -VIRTIO_CRYPTO_INVSESS;
}
assert(session_id < MAX_NUM_SESSIONS && builtin->sessions[session_id]);
session = builtin->sessions[session_id];
if (session->cipher) {
@@ -555,8 +552,8 @@ static int cryptodev_builtin_operation(
if (op_info->session_id >= MAX_NUM_SESSIONS ||
builtin->sessions[op_info->session_id] == NULL) {
error_report("Cannot find a valid session id: %" PRIu64 "",
op_info->session_id);
error_setg(&local_error, "Cannot find a valid session id: %" PRIu64 "",
op_info->session_id);
return -VIRTIO_CRYPTO_INVSESS;
}

View File

@@ -398,7 +398,6 @@ static void cryptodev_backend_set_ops(Object *obj, Visitor *v,
static void
cryptodev_backend_complete(UserCreatable *uc, Error **errp)
{
ERRP_GUARD();
CryptoDevBackend *backend = CRYPTODEV_BACKEND(uc);
CryptoDevBackendClass *bc = CRYPTODEV_BACKEND_GET_CLASS(uc);
uint32_t services;
@@ -407,20 +406,11 @@ cryptodev_backend_complete(UserCreatable *uc, Error **errp)
QTAILQ_INIT(&backend->opinfos);
value = backend->tc.buckets[THROTTLE_OPS_TOTAL].avg;
cryptodev_backend_set_throttle(backend, THROTTLE_OPS_TOTAL, value, errp);
if (*errp) {
return;
}
value = backend->tc.buckets[THROTTLE_BPS_TOTAL].avg;
cryptodev_backend_set_throttle(backend, THROTTLE_BPS_TOTAL, value, errp);
if (*errp) {
return;
}
if (bc->init) {
bc->init(backend, errp);
if (*errp) {
return;
}
}
services = backend->conf.crypto_services;

Some files were not shown because too many files have changed in this diff.