Compare commits

..

9 Commits

Author SHA1 Message Date
Fabiano Rosas
6424d5b3df migration/multifd: Extract sem_done waiting into a function
This helps document the intent of the loop via the function name and
we can reuse this in the future.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-03 11:31:14 -03:00
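
A minimal sketch of the wait loop being extracted, assuming POSIX
semaphores as a stand-in for QEMU's QemuSemaphore; the struct, the
helper name and the channel count are illustrative, not the actual
multifd code:

    #include <semaphore.h>

    #define N_CHANNELS 4                  /* hypothetical channel count */

    typedef struct {
        sem_t sem_done;                   /* posted by the channel when idle */
    } SendChannel;

    static SendChannel channels[N_CHANNELS];

    /* The extracted helper: block until every channel has posted sem_done. */
    static void wait_for_channels_done(void)
    {
        for (int i = 0; i < N_CHANNELS; i++) {
            sem_wait(&channels[i].sem_done);
        }
    }
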
Fabiano Rosas
5494d69c58 migration/multifd: Decouple control flow from the SYNC packet
We currently have the sem_sync semaphore that is used:

1) on the sending side, to know when the multifd_send_thread has
   finished sending the MULTIFD_FLAG_SYNC packet;

  This is unnecessary. Multifd sends packets (not pages) one by one
  and completion is already bound by both the channels_ready and sem
  semaphores. The SYNC packet has nothing special that would require
  it to have a separate semaphore on the sending side.

2) on the receiving side, to know when the multifd_recv_thread has
   finished receiving the MULTIFD_FLAG_SYNC packet;

  This is unnecessary because the multifd_recv_state->sem_sync
  semaphore already does the same thing. We care that the SYNC arrived
  from the source; knowing that the SYNC has been received by the recv
  thread doesn't add anything.

3) on both sending and receiving sides, to wait for the multifd threads
   to finish before cleaning up;

   This happens because multifd_send_sync_main() blocks
   ram_save_complete() from finishing until the semaphore is
   posted. This is surprising and not documented.

Clarify the above situation by renaming 'sem_sync' to 'sem_done' and
making the #3 usage the main one. Stop tracking the SYNC packet on
source (#1) and leave multifd_recv_state->sem_sync untouched on the
destination (#2).

Due to the 'channels_ready' and 'sem' semaphores, we always send
packets in lockstep with switching MultiFDSendParams, so
p->pending_job is always either 1 or 0. The thread has no knowledge of
whether it will have more to send once it posts to
channels_ready. Make it run one extra loop iteration so it sees no
pending_job and releases the semaphore.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-03 11:31:13 -03:00
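
The lockstep described above can be sketched roughly as below, with
POSIX threads and semaphores standing in for QEMU's primitives; all
names are simplified and the code is illustrative only, not the real
multifd_send_thread(). The "extra loop" is the else branch: the thread
finds no pending_job and posts sem_done.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdbool.h>

    typedef struct {
        pthread_mutex_t mutex;
        sem_t sem;            /* posted by the producer: "one packet queued" */
        sem_t sem_done;       /* posted by the thread: "nothing left to do"  */
        int pending_job;      /* always 0 or 1                               */
        bool quit;
    } SendParams;

    static sem_t channels_ready;          /* "some channel became idle"      */

    static void send_one_packet(SendParams *p)
    {
        (void)p;                          /* placeholder for the real I/O    */
    }

    static void *send_thread(void *opaque)
    {
        SendParams *p = opaque;

        for (;;) {
            sem_wait(&p->sem);            /* wait for work or a wake-up      */
            pthread_mutex_lock(&p->mutex);
            if (p->pending_job) {
                pthread_mutex_unlock(&p->mutex);
                send_one_packet(p);       /* SYNC-flagged packets included   */
                pthread_mutex_lock(&p->mutex);
                p->pending_job = 0;
                pthread_mutex_unlock(&p->mutex);
                sem_post(&channels_ready);
            } else {
                /* Extra iteration: no pending job left, so report "done".   */
                bool quit = p->quit;
                pthread_mutex_unlock(&p->mutex);
                sem_post(&p->sem_done);
                if (quit) {
                    break;
                }
            }
        }
        return NULL;
    }
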
Fabiano Rosas
374cb846eb migration/multifd: Centralize multifd_send thread release actions
When the multifd_send thread finishes or when an error is detected and
we want it to finish, there are some actions that need to be taken:

- The 'quit' variable should be set. It is used both as a signal for
  the thread to end and as an indication that the thread has already
  ended.

- The channels_ready and sem_sync semaphores need to be released. The
  main thread might be waiting to send another packet or waiting for
  the confirmation that the SYNC packet has been sent. If an error
  occurred, the multifd_send thread might not be able to send more
  packets or send the SYNC packet. The main thread should be released
  so we can do cleanup.

These two actions need to occur in this order because the side queuing
the packets always checks for p->quit after it is allowed to continue.

There are a few moments where we want to perform these actions, so
extract that code into a function to be reused.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-03 11:31:13 -03:00
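
A rough sketch of such a centralized release helper, under the same
assumptions (POSIX primitives, made-up names, illustrative only): the
point is the ordering, 'quit' is set before the semaphores are posted
so that whoever wakes up already sees it.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdbool.h>

    typedef struct {
        pthread_mutex_t mutex;
        sem_t sem_sync;       /* waited on by the main thread                */
        bool quit;            /* "please finish" / "already finished"        */
    } SendParams;

    static sem_t channels_ready;

    static void send_channel_release(SendParams *p)
    {
        pthread_mutex_lock(&p->mutex);
        p->quit = true;                   /* set first...                    */
        pthread_mutex_unlock(&p->mutex);

        sem_post(&channels_ready);        /* ...then unblock the producer    */
        sem_post(&p->sem_sync);           /* ...and the SYNC/cleanup waiter  */
    }
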
Fabiano Rosas
c2866a080d migration/multifd: Clarify Error usage in multifd_channel_connect
The function is currently called from two sites: one always gives it a
NULL Error and the other always gives it a non-NULL Error.

In the non-NULL case, all it does is trace the error and return. One
of the callers already has tracing; add a tracepoint to the other and
stop passing the error into the function.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-03 11:31:13 -03:00
Fabiano Rosas
375c45671e migration/multifd: Set p->quit before releasing channels_ready
All waiters of channels_ready check p->quit shortly after being
released. We need to set the variable before posting the semaphore.

We have probably never seen any issue here because this is a "not even
started" error case, which is very unlikely to happen.

The other place that releases the channels_ready semaphore is
multifd_send_thread(), which already sets p->quit before the post
(via multifd_send_terminate_threads).

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-03 11:31:13 -03:00
Fabiano Rosas
39875ab892 migration/multifd: Move error handling back into the multifd thread
multifd_send_terminate_threads() is doing double duty: terminating
the threads and setting the migration error and state. Clean it up.

This will allow for further simplification of the multifd thread
cleanup path in the following patches.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-03 11:31:13 -03:00
Fabiano Rosas
4a2c3ad4fa migration/multifd: Unify multifd_send_thread error paths
The preferred usage of the Error type is to always set both the return
code and the error when a failure happens. As all code called from the
send thread follows this pattern, we'll always have the return code
and the error set at the same time.

Aside from the convention, in this piece of code this must be the
case; otherwise the if (ret != 0) check would exit the thread without
calling multifd_send_terminate_threads(), which is incorrect.

Unify both paths to make it clear that both are taken when there's an
error.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-03 11:31:13 -03:00
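
A minimal sketch of the unified error path, where Error is an opaque
stand-in for QEMU's Error type and the helpers are assumed prototypes
rather than real QEMU symbols:

    #include <assert.h>
    #include <stddef.h>

    typedef struct Error Error;           /* opaque stand-in                 */

    /* Assumed helpers: on failure, send_one_packet() sets *errp AND returns
     * a nonzero code, per the convention described above. */
    int send_one_packet(void *p, Error **errp);
    void terminate_threads(Error *err);

    void send_thread_loop(void *p)
    {
        Error *local_err = NULL;

        for (;;) {
            int ret = send_one_packet(p, &local_err);
            if (ret != 0) {
                /* ret != 0 implies local_err != NULL, so the "error set" and
                 * "nonzero ret" checks collapse into this single path.      */
                assert(local_err != NULL);
                terminate_threads(local_err);
                break;
            }
        }
    }
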
Fabiano Rosas
f2accad0c8 migration/multifd: Remove channels_ready semaphore
The channels_ready semaphore is a global variable not linked to any
single multifd channel. Waiting on it only means that "some" channel
has become ready to send data. Since we need to address the channels
by index (multifd_send_state->params[i]), that information adds
nothing of value. The channel being addressed is not necessarily the
one that just released the semaphore.

The only usage of this semaphore that makes sense is to wait for it in
a loop that iterates once per channel. That could mean: all
channels have been set up and are operational OR all channels have
finished their work and are idle.

Currently all code that waits on channels_ready is redundant. There is
always a subsequent lock or semaphore that does the actual data
protection/synchronization.

- at multifd_send_pages: Waiting on channels_ready doesn't mean the
  'next_channel' is ready; it could be any other channel. So there are
  already cases where this code runs as if no semaphore was there.

  Waiting outside of the loop is also incorrect because if the current
  channel already has a pending_job, then it will loop into the next
  one without waiting on the semaphore, and the count will be greater than
  zero at the end of the execution.

  Checking that "any" channel is ready as a proxy for all channels
  being ready would work, but it's not what the code is doing and not
  really needed because the channel lock and 'sem' would be enough.

- at multifd_send_sync: This usage is correct, but it is made
  redundant by the wait on sem_sync. What this piece of code is doing
  is making sure all channels have sent the SYNC packet and become
  idle afterwards.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-02 19:33:06 -03:00
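
The producer-side pattern discussed above can be pictured like this:
the global channels_ready only says that *some* channel went idle,
while the per-channel lock and pending_job decide which one actually
takes the next packet. POSIX primitives and simplified names again;
this is a sketch of the pattern, not the real multifd_send_pages().

    #include <pthread.h>
    #include <semaphore.h>

    typedef struct {
        pthread_mutex_t mutex;
        int pending_job;                  /* 0 = idle, 1 = busy              */
    } SendParams;

    /* Wait once on channels_ready, then round-robin for an idle channel.
     * The wait sits outside the loop: if chan[*next_channel] is still busy
     * we move on without waiting again, which is the imbalance noted above. */
    static int pick_idle_channel(SendParams *chan, int n,
                                 sem_t *channels_ready, int *next_channel)
    {
        sem_wait(channels_ready);         /* "some" channel is idle, not
                                             necessarily chan[*next_channel] */
        for (int i = *next_channel; ; i = (i + 1) % n) {
            pthread_mutex_lock(&chan[i].mutex);
            if (!chan[i].pending_job) {
                chan[i].pending_job = 1;  /* claim under the channel's lock  */
                pthread_mutex_unlock(&chan[i].mutex);
                *next_channel = (i + 1) % n;
                return i;
            }
            pthread_mutex_unlock(&chan[i].mutex);
        }
    }
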
Fabiano Rosas
b8d09729d9 migration/multifd: Remove direct "socket" references
We're about to enable support for other transports in multifd, so
remove direct references to sockets.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-02 11:17:29 -03:00
1752 changed files with 36789 additions and 72918 deletions

View File

@@ -30,7 +30,6 @@ avocado-system-alpine:
variables: variables:
IMAGE: alpine IMAGE: alpine
MAKE_CHECK_ARGS: check-avocado MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:avr arch:loongarch64 arch:mips64 arch:mipsel
build-system-ubuntu: build-system-ubuntu:
extends: extends:
@@ -41,7 +40,8 @@ build-system-ubuntu:
variables: variables:
IMAGE: ubuntu2204 IMAGE: ubuntu2204
CONFIGURE_ARGS: --enable-docs CONFIGURE_ARGS: --enable-docs
TARGETS: alpha-softmmu microblazeel-softmmu mips64el-softmmu TARGETS: alpha-softmmu cris-softmmu hppa-softmmu
microblazeel-softmmu mips64el-softmmu
MAKE_CHECK_ARGS: check-build MAKE_CHECK_ARGS: check-build
check-system-ubuntu: check-system-ubuntu:
@@ -61,7 +61,6 @@ avocado-system-ubuntu:
variables: variables:
IMAGE: ubuntu2204 IMAGE: ubuntu2204
MAKE_CHECK_ARGS: check-avocado MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:alpha arch:microblaze arch:mips64el
build-system-debian: build-system-debian:
extends: extends:
@@ -73,7 +72,7 @@ build-system-debian:
IMAGE: debian-amd64 IMAGE: debian-amd64
CONFIGURE_ARGS: --with-coroutine=sigaltstack CONFIGURE_ARGS: --with-coroutine=sigaltstack
TARGETS: arm-softmmu i386-softmmu riscv64-softmmu sh4eb-softmmu TARGETS: arm-softmmu i386-softmmu riscv64-softmmu sh4eb-softmmu
sparc-softmmu xtensa-softmmu sparc-softmmu xtensaeb-softmmu
MAKE_CHECK_ARGS: check-build MAKE_CHECK_ARGS: check-build
check-system-debian: check-system-debian:
@@ -93,7 +92,6 @@ avocado-system-debian:
variables: variables:
IMAGE: debian-amd64 IMAGE: debian-amd64
MAKE_CHECK_ARGS: check-avocado MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:arm arch:i386 arch:riscv64 arch:sh4 arch:sparc arch:xtensa
crash-test-debian: crash-test-debian:
extends: .native_test_job_template extends: .native_test_job_template
@@ -116,7 +114,7 @@ build-system-fedora:
variables: variables:
IMAGE: fedora IMAGE: fedora
CONFIGURE_ARGS: --disable-gcrypt --enable-nettle --enable-docs CONFIGURE_ARGS: --disable-gcrypt --enable-nettle --enable-docs
TARGETS: microblaze-softmmu mips-softmmu TARGETS: tricore-softmmu microblaze-softmmu mips-softmmu
xtensa-softmmu m68k-softmmu riscv32-softmmu ppc-softmmu sparc64-softmmu xtensa-softmmu m68k-softmmu riscv32-softmmu ppc-softmmu sparc64-softmmu
MAKE_CHECK_ARGS: check-build MAKE_CHECK_ARGS: check-build
@@ -137,8 +135,6 @@ avocado-system-fedora:
variables: variables:
IMAGE: fedora IMAGE: fedora
MAKE_CHECK_ARGS: check-avocado MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:microblaze arch:mips arch:xtensa arch:m68k
arch:riscv32 arch:ppc arch:sparc64
crash-test-fedora: crash-test-fedora:
extends: .native_test_job_template extends: .native_test_job_template
@@ -184,8 +180,6 @@ avocado-system-centos:
variables: variables:
IMAGE: centos8 IMAGE: centos8
MAKE_CHECK_ARGS: check-avocado MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:ppc64 arch:or1k arch:390x arch:x86_64 arch:rx
arch:sh4 arch:nios2
build-system-opensuse: build-system-opensuse:
extends: extends:
@@ -215,7 +209,6 @@ avocado-system-opensuse:
variables: variables:
IMAGE: opensuse-leap IMAGE: opensuse-leap
MAKE_CHECK_ARGS: check-avocado MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:s390x arch:x86_64 arch:aarch64
# This jobs explicitly disable TCG (--disable-tcg), KVM is detected by # This jobs explicitly disable TCG (--disable-tcg), KVM is detected by
@@ -256,7 +249,6 @@ build-user:
variables: variables:
IMAGE: debian-all-test-cross IMAGE: debian-all-test-cross
CONFIGURE_ARGS: --disable-tools --disable-system CONFIGURE_ARGS: --disable-tools --disable-system
--target-list-exclude=alpha-linux-user,sh4-linux-user
MAKE_CHECK_ARGS: check-tcg MAKE_CHECK_ARGS: check-tcg
build-user-static: build-user-static:
@@ -266,18 +258,6 @@ build-user-static:
variables: variables:
IMAGE: debian-all-test-cross IMAGE: debian-all-test-cross
CONFIGURE_ARGS: --disable-tools --disable-system --static CONFIGURE_ARGS: --disable-tools --disable-system --static
--target-list-exclude=alpha-linux-user,sh4-linux-user
MAKE_CHECK_ARGS: check-tcg
# targets stuck on older compilers
build-legacy:
extends: .native_build_job_template
needs:
job: amd64-debian-legacy-cross-container
variables:
IMAGE: debian-legacy-test-cross
TARGETS: alpha-linux-user alpha-softmmu sh4-linux-user
CONFIGURE_ARGS: --disable-tools
MAKE_CHECK_ARGS: check-tcg MAKE_CHECK_ARGS: check-tcg
build-user-hexagon: build-user-hexagon:
@@ -290,9 +270,7 @@ build-user-hexagon:
CONFIGURE_ARGS: --disable-tools --disable-docs --enable-debug-tcg CONFIGURE_ARGS: --disable-tools --disable-docs --enable-debug-tcg
MAKE_CHECK_ARGS: check-tcg MAKE_CHECK_ARGS: check-tcg
# Build the softmmu targets we have check-tcg tests and compilers in # Only build the softmmu targets we have check-tcg tests for
# our omnibus all-test-cross container. Those targets that haven't got
# Debian cross compiler support need to use special containers.
build-some-softmmu: build-some-softmmu:
extends: .native_build_job_template extends: .native_build_job_template
needs: needs:
@@ -300,18 +278,7 @@ build-some-softmmu:
variables: variables:
IMAGE: debian-all-test-cross IMAGE: debian-all-test-cross
CONFIGURE_ARGS: --disable-tools --enable-debug CONFIGURE_ARGS: --disable-tools --enable-debug
TARGETS: arm-softmmu aarch64-softmmu i386-softmmu riscv64-softmmu TARGETS: xtensa-softmmu arm-softmmu aarch64-softmmu alpha-softmmu
s390x-softmmu x86_64-softmmu
MAKE_CHECK_ARGS: check-tcg
build-loongarch64:
extends: .native_build_job_template
needs:
job: loongarch-debian-cross-container
variables:
IMAGE: debian-loongarch-cross
CONFIGURE_ARGS: --disable-tools --enable-debug
TARGETS: loongarch64-linux-user loongarch64-softmmu
MAKE_CHECK_ARGS: check-tcg MAKE_CHECK_ARGS: check-tcg
# We build tricore in a very minimal tricore only container # We build tricore in a very minimal tricore only container
@@ -344,7 +311,7 @@ clang-user:
variables: variables:
IMAGE: debian-all-test-cross IMAGE: debian-all-test-cross
CONFIGURE_ARGS: --cc=clang --cxx=clang++ --disable-system CONFIGURE_ARGS: --cc=clang --cxx=clang++ --disable-system
--target-list-exclude=alpha-linux-user,microblazeel-linux-user,aarch64_be-linux-user,i386-linux-user,m68k-linux-user,mipsn32el-linux-user,xtensaeb-linux-user --target-list-exclude=microblazeel-linux-user,aarch64_be-linux-user,i386-linux-user,m68k-linux-user,mipsn32el-linux-user,xtensaeb-linux-user
--extra-cflags=-fsanitize=undefined --extra-cflags=-fno-sanitize-recover=undefined --extra-cflags=-fsanitize=undefined --extra-cflags=-fno-sanitize-recover=undefined
MAKE_CHECK_ARGS: check-unit check-tcg MAKE_CHECK_ARGS: check-unit check-tcg
@@ -531,7 +498,7 @@ build-tci:
variables: variables:
IMAGE: debian-all-test-cross IMAGE: debian-all-test-cross
script: script:
- TARGETS="aarch64 arm hppa m68k microblaze ppc64 s390x x86_64" - TARGETS="aarch64 alpha arm hppa m68k microblaze ppc64 s390x x86_64"
- mkdir build - mkdir build
- cd build - cd build
- ../configure --enable-tcg-interpreter --disable-docs --disable-gtk --disable-vnc - ../configure --enable-tcg-interpreter --disable-docs --disable-gtk --disable-vnc

View File

@@ -11,6 +11,6 @@ MAKE='/opt/homebrew/bin/gmake'
NINJA='/opt/homebrew/bin/ninja' NINJA='/opt/homebrew/bin/ninja'
PACKAGING_COMMAND='brew' PACKAGING_COMMAND='brew'
PIP3='/opt/homebrew/bin/pip3' PIP3='/opt/homebrew/bin/pip3'
PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy socat sparse spice-protocol swtpm tesseract usbredir vde vte3 xorriso zlib zstd' PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy socat sparse spice-protocol tesseract usbredir vde vte3 xorriso zlib zstd'
PYPI_PKGS='PyYAML numpy pillow sphinx sphinx-rtd-theme tomli' PYPI_PKGS='PyYAML numpy pillow sphinx sphinx-rtd-theme tomli'
PYTHON='/opt/homebrew/bin/python3' PYTHON='/opt/homebrew/bin/python3'

View File

@@ -1,3 +1,9 @@
alpha-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-alpha-cross
amd64-debian-cross-container: amd64-debian-cross-container:
extends: .container_job_template extends: .container_job_template
stage: containers stage: containers
@@ -10,12 +16,6 @@ amd64-debian-user-cross-container:
variables: variables:
NAME: debian-all-test-cross NAME: debian-all-test-cross
amd64-debian-legacy-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-legacy-test-cross
arm64-debian-cross-container: arm64-debian-cross-container:
extends: .container_job_template extends: .container_job_template
stage: containers stage: containers
@@ -40,11 +40,23 @@ hexagon-cross-container:
variables: variables:
NAME: debian-hexagon-cross NAME: debian-hexagon-cross
loongarch-debian-cross-container: hppa-debian-cross-container:
extends: .container_job_template extends: .container_job_template
stage: containers stage: containers
variables: variables:
NAME: debian-loongarch-cross NAME: debian-hppa-cross
m68k-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-m68k-cross
mips64-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-mips64-cross
mips64el-debian-cross-container: mips64el-debian-cross-container:
extends: .container_job_template extends: .container_job_template
@@ -52,12 +64,24 @@ mips64el-debian-cross-container:
variables: variables:
NAME: debian-mips64el-cross NAME: debian-mips64el-cross
mips-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-mips-cross
mipsel-debian-cross-container: mipsel-debian-cross-container:
extends: .container_job_template extends: .container_job_template
stage: containers stage: containers
variables: variables:
NAME: debian-mipsel-cross NAME: debian-mipsel-cross
powerpc-test-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-powerpc-test-cross
ppc64el-debian-cross-container: ppc64el-debian-cross-container:
extends: .container_job_template extends: .container_job_template
stage: containers stage: containers
@@ -71,7 +95,13 @@ riscv64-debian-cross-container:
allow_failure: true allow_failure: true
variables: variables:
NAME: debian-riscv64-cross NAME: debian-riscv64-cross
QEMU_JOB_OPTIONAL: 1
# we can however build TCG tests using a non-sid base
riscv64-debian-test-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-riscv64-test-cross
s390x-debian-cross-container: s390x-debian-cross-container:
extends: .container_job_template extends: .container_job_template
@@ -79,6 +109,18 @@ s390x-debian-cross-container:
variables: variables:
NAME: debian-s390x-cross NAME: debian-s390x-cross
sh4-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-sh4-cross
sparc64-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-sparc64-cross
tricore-debian-cross-container: tricore-debian-cross-container:
extends: .container_job_template extends: .container_job_template
stage: containers stage: containers

View File

@@ -165,7 +165,7 @@ cross-win32-system:
job: win32-fedora-cross-container job: win32-fedora-cross-container
variables: variables:
IMAGE: fedora-win32-cross IMAGE: fedora-win32-cross
EXTRA_CONFIGURE_OPTS: --enable-fdt=internal --disable-plugins EXTRA_CONFIGURE_OPTS: --enable-fdt=internal
CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu m68k-softmmu CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu m68k-softmmu
microblazeel-softmmu mips64el-softmmu nios2-softmmu microblazeel-softmmu mips64el-softmmu nios2-softmmu
artifacts: artifacts:
@@ -179,7 +179,7 @@ cross-win64-system:
job: win64-fedora-cross-container job: win64-fedora-cross-container
variables: variables:
IMAGE: fedora-win64-cross IMAGE: fedora-win64-cross
EXTRA_CONFIGURE_OPTS: --enable-fdt=internal --disable-plugins EXTRA_CONFIGURE_OPTS: --enable-fdt=internal
CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu
m68k-softmmu microblazeel-softmmu nios2-softmmu m68k-softmmu microblazeel-softmmu nios2-softmmu
or1k-softmmu rx-softmmu sh4eb-softmmu sparc64-softmmu or1k-softmmu rx-softmmu sh4eb-softmmu sparc64-softmmu

View File

@@ -72,7 +72,6 @@
- .\msys64\usr\bin\bash -lc "pacman -Sy --noconfirm --needed - .\msys64\usr\bin\bash -lc "pacman -Sy --noconfirm --needed
bison diffutils flex bison diffutils flex
git grep make sed git grep make sed
$MINGW_TARGET-binutils
$MINGW_TARGET-capstone $MINGW_TARGET-capstone
$MINGW_TARGET-ccache $MINGW_TARGET-ccache
$MINGW_TARGET-curl $MINGW_TARGET-curl

View File

@@ -30,38 +30,22 @@ malc <av1474@comtv.ru> malc <malc@c046a42c-6fe2-441c-8c8c-71466251a162>
# Corrupted Author fields # Corrupted Author fields
Aaron Larson <alarson@ddci.com> alarson@ddci.com Aaron Larson <alarson@ddci.com> alarson@ddci.com
Andreas Färber <andreas.faerber@web.de> Andreas Färber <andreas.faerber> Andreas Färber <andreas.faerber@web.de> Andreas Färber <andreas.faerber>
fanwenjie <fanwj@mail.ustc.edu.cn> fanwj@mail.ustc.edu.cn <fanwj@mail.ustc.edu.cn>
Jason Wang <jasowang@redhat.com> Jason Wang <jasowang> Jason Wang <jasowang@redhat.com> Jason Wang <jasowang>
Marek Dolata <mkdolata@us.ibm.com> mkdolata@us.ibm.com <mkdolata@us.ibm.com> Marek Dolata <mkdolata@us.ibm.com> mkdolata@us.ibm.com <mkdolata@us.ibm.com>
Michael Ellerman <mpe@ellerman.id.au> michael@ozlabs.org <michael@ozlabs.org> Michael Ellerman <mpe@ellerman.id.au> michael@ozlabs.org <michael@ozlabs.org>
Nick Hudson <hnick@vmware.com> hnick@vmware.com <hnick@vmware.com> Nick Hudson <hnick@vmware.com> hnick@vmware.com <hnick@vmware.com>
Timothée Cocault <timothee.cocault@gmail.com> timothee.cocault@gmail.com <timothee.cocault@gmail.com>
# There is also a: # There is also a:
# (no author) <(no author)@c046a42c-6fe2-441c-8c8c-71466251a162> # (no author) <(no author)@c046a42c-6fe2-441c-8c8c-71466251a162>
# for the cvs2svn initialization commit e63c3dc74bf. # for the cvs2svn initialization commit e63c3dc74bf.
# Next, translate a few commits where mailman rewrote the From: line due # Next, translate a few commits where mailman rewrote the From: line due
# to strict SPF and DMARC. Usually, our build process should be flagging # to strict SPF, although we prefer to avoid adding more entries like that.
# commits like these before maintainer merges; if you find the need to add
# a line here, please also report a bug against the part of the build
# process that let the mis-attribution slip through in the first place.
#
# If the mailing list munges your emails, use:
# git config sendemail.from '"Your Name" <your.email@example.com>'
# the use of "" in that line will differ from the typically unquoted
# 'git config user.name', which in turn is sufficient for 'git send-email'
# to add an extra From: line in the body of your email that takes
# precedence over any munged From: in the mail's headers.
# See https://lists.openembedded.org/g/openembedded-core/message/166515
# and https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg06784.html
Ed Swierk <eswierk@skyportsystems.com> Ed Swierk via Qemu-devel <qemu-devel@nongnu.org> Ed Swierk <eswierk@skyportsystems.com> Ed Swierk via Qemu-devel <qemu-devel@nongnu.org>
Ian McKellar <ianloic@google.com> Ian McKellar via Qemu-devel <qemu-devel@nongnu.org> Ian McKellar <ianloic@google.com> Ian McKellar via Qemu-devel <qemu-devel@nongnu.org>
Julia Suvorova <jusual@mail.ru> Julia Suvorova via Qemu-devel <qemu-devel@nongnu.org> Julia Suvorova <jusual@mail.ru> Julia Suvorova via Qemu-devel <qemu-devel@nongnu.org>
Justin Terry (VM) <juterry@microsoft.com> Justin Terry (VM) via Qemu-devel <qemu-devel@nongnu.org> Justin Terry (VM) <juterry@microsoft.com> Justin Terry (VM) via Qemu-devel <qemu-devel@nongnu.org>
Stefan Weil <sw@weilnetz.de> Stefan Weil via <qemu-devel@nongnu.org> Stefan Weil <sw@weilnetz.de> Stefan Weil via <qemu-devel@nongnu.org>
Andrey Drobyshev <andrey.drobyshev@virtuozzo.com> Andrey Drobyshev via <qemu-block@nongnu.org>
BALATON Zoltan <balaton@eik.bme.hu> BALATON Zoltan via <qemu-ppc@nongnu.org>
# Next, replace old addresses by a more recent one. # Next, replace old addresses by a more recent one.
Aleksandar Markovic <aleksandar.qemu.devel@gmail.com> <aleksandar.markovic@mips.com> Aleksandar Markovic <aleksandar.qemu.devel@gmail.com> <aleksandar.markovic@mips.com>
@@ -83,9 +67,6 @@ Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com> James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
Leif Lindholm <quic_llindhol@quicinc.com> <leif.lindholm@linaro.org> Leif Lindholm <quic_llindhol@quicinc.com> <leif.lindholm@linaro.org>
Leif Lindholm <quic_llindhol@quicinc.com> <leif@nuviainc.com> Leif Lindholm <quic_llindhol@quicinc.com> <leif@nuviainc.com>
Luc Michel <luc@lmichel.fr> <luc.michel@git.antfield.fr>
Luc Michel <luc@lmichel.fr> <luc.michel@greensocs.com>
Luc Michel <luc@lmichel.fr> <lmichel@kalray.eu>
Radoslaw Biernacki <rad@semihalf.com> <radoslaw.biernacki@linaro.org> Radoslaw Biernacki <rad@semihalf.com> <radoslaw.biernacki@linaro.org>
Paul Brook <paul@nowt.org> <paul@codesourcery.com> Paul Brook <paul@nowt.org> <paul@codesourcery.com>
Paul Burton <paulburton@kernel.org> <paul.burton@mips.com> Paul Burton <paulburton@kernel.org> <paul.burton@mips.com>

View File

@@ -34,7 +34,7 @@ env:
- BASE_CONFIG="--disable-docs --disable-tools" - BASE_CONFIG="--disable-docs --disable-tools"
- TEST_BUILD_CMD="" - TEST_BUILD_CMD=""
- TEST_CMD="make check V=1" - TEST_CMD="make check V=1"
# This is broadly a list of "mainline" system targets which have support across the major distros # This is broadly a list of "mainline" softmmu targets which have support across the major distros
- MAIN_SOFTMMU_TARGETS="aarch64-softmmu,mips64-softmmu,ppc64-softmmu,riscv64-softmmu,s390x-softmmu,x86_64-softmmu" - MAIN_SOFTMMU_TARGETS="aarch64-softmmu,mips64-softmmu,ppc64-softmmu,riscv64-softmmu,s390x-softmmu,x86_64-softmmu"
- CCACHE_SLOPPINESS="include_file_ctime,include_file_mtime" - CCACHE_SLOPPINESS="include_file_ctime,include_file_mtime"
- CCACHE_MAXSIZE=1G - CCACHE_MAXSIZE=1G
@@ -197,7 +197,7 @@ jobs:
$(exit $BUILD_RC); $(exit $BUILD_RC);
fi fi
- name: "[s390x] GCC (other-system)" - name: "[s390x] GCC (other-softmmu)"
arch: s390x arch: s390x
dist: focal dist: focal
addons: addons:

View File

@@ -11,9 +11,6 @@ config OPENGL
config X11 config X11
bool bool
config PIXMAN
bool
config SPICE config SPICE
bool bool
@@ -49,6 +46,3 @@ config FUZZ
config VFIO_USER_SERVER_ALLOWED config VFIO_USER_SERVER_ALLOWED
bool bool
imply VFIO_USER_SERVER imply VFIO_USER_SERVER
config HV_BALLOON_POSSIBLE
bool

View File

@@ -137,11 +137,10 @@ Overall TCG CPUs
M: Richard Henderson <richard.henderson@linaro.org> M: Richard Henderson <richard.henderson@linaro.org>
R: Paolo Bonzini <pbonzini@redhat.com> R: Paolo Bonzini <pbonzini@redhat.com>
S: Maintained S: Maintained
F: system/cpus.c F: softmmu/cpus.c
F: system/watchpoint.c F: softmmu/watchpoint.c
F: cpu-common.c F: cpus-common.c
F: cpu-target.c F: page-vary.c
F: page-vary-target.c
F: page-vary-common.c F: page-vary-common.c
F: accel/tcg/ F: accel/tcg/
F: accel/stubs/tcg-stub.c F: accel/stubs/tcg-stub.c
@@ -245,10 +244,10 @@ M: Richard Henderson <richard.henderson@linaro.org>
S: Maintained S: Maintained
F: target/hppa/ F: target/hppa/
F: disas/hppa.c F: disas/hppa.c
F: tests/tcg/hppa/
LoongArch TCG CPUs LoongArch TCG CPUs
M: Song Gao <gaosong@loongson.cn> M: Song Gao <gaosong@loongson.cn>
M: Xiaojuan Yang <yangxiaojuan@loongson.cn>
S: Maintained S: Maintained
F: target/loongarch/ F: target/loongarch/
F: tests/tcg/loongarch64/ F: tests/tcg/loongarch64/
@@ -259,7 +258,6 @@ M: Laurent Vivier <laurent@vivier.eu>
S: Maintained S: Maintained
F: target/m68k/ F: target/m68k/
F: disas/m68k.c F: disas/m68k.c
F: tests/tcg/m68k/
MicroBlaze TCG CPUs MicroBlaze TCG CPUs
M: Edgar E. Iglesias <edgar.iglesias@gmail.com> M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
@@ -286,9 +284,7 @@ R: Marek Vasut <marex@denx.de>
S: Orphan S: Orphan
F: target/nios2/ F: target/nios2/
F: hw/nios2/ F: hw/nios2/
F: hw/intc/nios2_vic.c
F: disas/nios2.c F: disas/nios2.c
F: include/hw/intc/nios2_vic.h
F: configs/devices/nios2-softmmu/default.mak F: configs/devices/nios2-softmmu/default.mak
F: tests/docker/dockerfiles/debian-nios2-cross.d/build-toolchain.sh F: tests/docker/dockerfiles/debian-nios2-cross.d/build-toolchain.sh
F: tests/tcg/nios2/ F: tests/tcg/nios2/
@@ -299,7 +295,6 @@ S: Odd Fixes
F: docs/system/openrisc/cpu-features.rst F: docs/system/openrisc/cpu-features.rst
F: target/openrisc/ F: target/openrisc/
F: hw/openrisc/ F: hw/openrisc/
F: include/hw/openrisc/
F: tests/tcg/openrisc/ F: tests/tcg/openrisc/
PowerPC TCG CPUs PowerPC TCG CPUs
@@ -312,31 +307,21 @@ F: target/ppc/
F: hw/ppc/ppc.c F: hw/ppc/ppc.c
F: hw/ppc/ppc_booke.c F: hw/ppc/ppc_booke.c
F: include/hw/ppc/ppc.h F: include/hw/ppc/ppc.h
F: hw/ppc/meson.build
F: hw/ppc/trace*
F: configs/devices/ppc*
F: docs/system/ppc/embedded.rst
F: docs/system/target-ppc.rst
F: tests/tcg/ppc*/*
RISC-V TCG CPUs RISC-V TCG CPUs
M: Palmer Dabbelt <palmer@dabbelt.com> M: Palmer Dabbelt <palmer@dabbelt.com>
M: Alistair Francis <alistair.francis@wdc.com> M: Alistair Francis <alistair.francis@wdc.com>
M: Bin Meng <bin.meng@windriver.com> M: Bin Meng <bin.meng@windriver.com>
R: Weiwei Li <liwei1518@gmail.com> R: Weiwei Li <liweiwei@iscas.ac.cn>
R: Daniel Henrique Barboza <dbarboza@ventanamicro.com> R: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
R: Liu Zhiwei <zhiwei_liu@linux.alibaba.com> R: Liu Zhiwei <zhiwei_liu@linux.alibaba.com>
L: qemu-riscv@nongnu.org L: qemu-riscv@nongnu.org
S: Supported S: Supported
F: configs/targets/riscv*
F: docs/system/target-riscv.rst
F: target/riscv/ F: target/riscv/
F: hw/riscv/ F: hw/riscv/
F: hw/intc/riscv*
F: include/hw/riscv/ F: include/hw/riscv/
F: linux-user/host/riscv32/ F: linux-user/host/riscv32/
F: linux-user/host/riscv64/ F: linux-user/host/riscv64/
F: tests/tcg/riscv64/
RISC-V XThead* extensions RISC-V XThead* extensions
M: Christoph Muellner <christoph.muellner@vrull.eu> M: Christoph Muellner <christoph.muellner@vrull.eu>
@@ -345,7 +330,6 @@ L: qemu-riscv@nongnu.org
S: Supported S: Supported
F: target/riscv/insn_trans/trans_xthead.c.inc F: target/riscv/insn_trans/trans_xthead.c.inc
F: target/riscv/xthead*.decode F: target/riscv/xthead*.decode
F: disas/riscv-xthead*
RISC-V XVentanaCondOps extension RISC-V XVentanaCondOps extension
M: Philipp Tomsich <philipp.tomsich@vrull.eu> M: Philipp Tomsich <philipp.tomsich@vrull.eu>
@@ -353,7 +337,6 @@ L: qemu-riscv@nongnu.org
S: Maintained S: Maintained
F: target/riscv/XVentanaCondOps.decode F: target/riscv/XVentanaCondOps.decode
F: target/riscv/insn_trans/trans_xventanacondops.c.inc F: target/riscv/insn_trans/trans_xventanacondops.c.inc
F: disas/riscv-xventana*
RENESAS RX CPUs RENESAS RX CPUs
R: Yoshinori Sato <ysato@users.sourceforge.jp> R: Yoshinori Sato <ysato@users.sourceforge.jp>
@@ -378,7 +361,6 @@ F: target/sh4/
F: hw/sh4/ F: hw/sh4/
F: disas/sh4.c F: disas/sh4.c
F: include/hw/sh4/ F: include/hw/sh4/
F: tests/tcg/sh4/
SPARC TCG CPUs SPARC TCG CPUs
M: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> M: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
@@ -389,7 +371,6 @@ F: hw/sparc/
F: hw/sparc64/ F: hw/sparc64/
F: include/hw/sparc/sparc64.h F: include/hw/sparc/sparc64.h
F: disas/sparc.c F: disas/sparc.c
F: tests/tcg/sparc64/
X86 TCG CPUs X86 TCG CPUs
M: Paolo Bonzini <pbonzini@redhat.com> M: Paolo Bonzini <pbonzini@redhat.com>
@@ -490,7 +471,7 @@ S: Supported
F: include/sysemu/kvm_xen.h F: include/sysemu/kvm_xen.h
F: target/i386/kvm/xen* F: target/i386/kvm/xen*
F: hw/i386/kvm/xen* F: hw/i386/kvm/xen*
F: tests/avocado/kvm_xen_guest.py F: tests/avocado/xen_guest.py
Guest CPU Cores (other accelerators) Guest CPU Cores (other accelerators)
------------------------------------ ------------------------------------
@@ -575,7 +556,6 @@ M: Cornelia Huck <cohuck@redhat.com>
M: Paolo Bonzini <pbonzini@redhat.com> M: Paolo Bonzini <pbonzini@redhat.com>
S: Maintained S: Maintained
F: linux-headers/ F: linux-headers/
F: include/standard-headers/
F: scripts/update-linux-headers.sh F: scripts/update-linux-headers.sh
POSIX POSIX
@@ -687,7 +667,7 @@ M: Peter Maydell <peter.maydell@linaro.org>
L: qemu-arm@nongnu.org L: qemu-arm@nongnu.org
S: Maintained S: Maintained
F: hw/intc/arm* F: hw/intc/arm*
F: hw/intc/gic*_internal.h F: hw/intc/gic_internal.h
F: hw/misc/a9scu.c F: hw/misc/a9scu.c
F: hw/misc/arm11scu.c F: hw/misc/arm11scu.c
F: hw/misc/arm_l2x0.c F: hw/misc/arm_l2x0.c
@@ -859,10 +839,8 @@ M: Hao Wu <wuhaotsh@google.com>
L: qemu-arm@nongnu.org L: qemu-arm@nongnu.org
S: Supported S: Supported
F: hw/*/npcm* F: hw/*/npcm*
F: hw/sensor/adm1266.c
F: include/hw/*/npcm* F: include/hw/*/npcm*
F: tests/qtest/npcm* F: tests/qtest/npcm*
F: tests/qtest/adm1266-test.c
F: pc-bios/npcm7xx_bootrom.bin F: pc-bios/npcm7xx_bootrom.bin
F: roms/vbootrom F: roms/vbootrom
F: docs/system/arm/nuvoton.rst F: docs/system/arm/nuvoton.rst
@@ -901,7 +879,7 @@ S: Odd Fixes
F: hw/arm/raspi.c F: hw/arm/raspi.c
F: hw/arm/raspi_platform.h F: hw/arm/raspi_platform.h
F: hw/*/bcm283* F: hw/*/bcm283*
F: include/hw/arm/rasp* F: include/hw/arm/raspi*
F: include/hw/*/bcm283* F: include/hw/*/bcm283*
F: docs/system/arm/raspi.rst F: docs/system/arm/raspi.rst
@@ -960,9 +938,6 @@ R: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
L: qemu-arm@nongnu.org L: qemu-arm@nongnu.org
S: Maintained S: Maintained
F: hw/arm/sbsa-ref.c F: hw/arm/sbsa-ref.c
F: hw/misc/sbsa_ec.c
F: hw/watchdog/sbsa_gwdt.c
F: include/hw/watchdog/sbsa_gwdt.h
F: docs/system/arm/sbsa.rst F: docs/system/arm/sbsa.rst
F: tests/avocado/machine_aarch64_sbsaref.py F: tests/avocado/machine_aarch64_sbsaref.py
@@ -1133,7 +1108,7 @@ F: docs/system/arm/emcraft-sf2.rst
ASPEED BMCs ASPEED BMCs
M: Cédric Le Goater <clg@kaod.org> M: Cédric Le Goater <clg@kaod.org>
M: Peter Maydell <peter.maydell@linaro.org> M: Peter Maydell <peter.maydell@linaro.org>
R: Andrew Jeffery <andrew@codeconstruct.com.au> R: Andrew Jeffery <andrew@aj.id.au>
R: Joel Stanley <joel@jms.id.au> R: Joel Stanley <joel@jms.id.au>
L: qemu-arm@nongnu.org L: qemu-arm@nongnu.org
S: Maintained S: Maintained
@@ -1189,29 +1164,24 @@ F: hw/*/etraxfs_*.c
HP-PARISC Machines HP-PARISC Machines
------------------ ------------------
HP B160L, HP C3700 HP B160L
M: Richard Henderson <richard.henderson@linaro.org> M: Richard Henderson <richard.henderson@linaro.org>
R: Helge Deller <deller@gmx.de> R: Helge Deller <deller@gmx.de>
S: Odd Fixes S: Odd Fixes
F: configs/devices/hppa-softmmu/default.mak F: configs/devices/hppa-softmmu/default.mak
F: hw/display/artist.c
F: hw/hppa/ F: hw/hppa/
F: hw/input/lasips2.c
F: hw/net/*i82596* F: hw/net/*i82596*
F: hw/misc/lasi.c F: hw/misc/lasi.c
F: hw/pci-host/astro.c
F: hw/pci-host/dino.c F: hw/pci-host/dino.c
F: include/hw/input/lasips2.h
F: include/hw/misc/lasi.h F: include/hw/misc/lasi.h
F: include/hw/net/lasi_82596.h F: include/hw/net/lasi_82596.h
F: include/hw/pci-host/astro.h
F: include/hw/pci-host/dino.h F: include/hw/pci-host/dino.h
F: pc-bios/hppa-firmware.img F: pc-bios/hppa-firmware.img
F: roms/seabios-hppa/
LoongArch Machines LoongArch Machines
------------------ ------------------
Virt Virt
M: Xiaojuan Yang <yangxiaojuan@loongson.cn>
M: Song Gao <gaosong@loongson.cn> M: Song Gao <gaosong@loongson.cn>
S: Maintained S: Maintained
F: docs/system/loongarch/virt.rst F: docs/system/loongarch/virt.rst
@@ -1258,9 +1228,6 @@ F: hw/misc/mac_via.c
F: hw/nubus/* F: hw/nubus/*
F: hw/display/macfb.c F: hw/display/macfb.c
F: hw/block/swim.c F: hw/block/swim.c
F: hw/misc/djmemc.c
F: hw/misc/iosb.c
F: hw/audio/asc.c
F: hw/m68k/bootinfo.h F: hw/m68k/bootinfo.h
F: include/standard-headers/asm-m68k/bootinfo.h F: include/standard-headers/asm-m68k/bootinfo.h
F: include/standard-headers/asm-m68k/bootinfo-mac.h F: include/standard-headers/asm-m68k/bootinfo-mac.h
@@ -1270,9 +1237,6 @@ F: include/hw/display/macfb.h
F: include/hw/block/swim.h F: include/hw/block/swim.h
F: include/hw/m68k/q800.h F: include/hw/m68k/q800.h
F: include/hw/m68k/q800-glue.h F: include/hw/m68k/q800-glue.h
F: include/hw/misc/djmemc.h
F: include/hw/misc/iosb.h
F: include/hw/audio/asc.h
virt virt
M: Laurent Vivier <laurent@vivier.eu> M: Laurent Vivier <laurent@vivier.eu>
@@ -1286,7 +1250,6 @@ F: include/hw/char/goldfish_tty.h
F: include/hw/intc/goldfish_pic.h F: include/hw/intc/goldfish_pic.h
F: include/hw/intc/m68k_irqc.h F: include/hw/intc/m68k_irqc.h
F: include/hw/misc/virt_ctrl.h F: include/hw/misc/virt_ctrl.h
F: docs/specs/virt-ctlr.rst
MicroBlaze Machines MicroBlaze Machines
------------------- -------------------
@@ -1316,16 +1279,14 @@ M: Hervé Poussineau <hpoussin@reactos.org>
R: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com> R: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
S: Maintained S: Maintained
F: hw/mips/jazz.c F: hw/mips/jazz.c
F: hw/display/g364fb.c
F: hw/display/jazz_led.c F: hw/display/jazz_led.c
F: hw/dma/rc4030.c F: hw/dma/rc4030.c
F: hw/nvram/ds1225y.c
Malta Malta
M: Philippe Mathieu-Daudé <philmd@linaro.org> M: Philippe Mathieu-Daudé <philmd@linaro.org>
R: Aurelien Jarno <aurelien@aurel32.net> R: Aurelien Jarno <aurelien@aurel32.net>
S: Odd Fixes S: Odd Fixes
F: hw/isa/piix.c F: hw/isa/piix4.c
F: hw/acpi/piix4.c F: hw/acpi/piix4.c
F: hw/mips/malta.c F: hw/mips/malta.c
F: hw/pci-host/gt64120.c F: hw/pci-host/gt64120.c
@@ -1345,7 +1306,10 @@ M: Philippe Mathieu-Daudé <philmd@linaro.org>
R: Jiaxun Yang <jiaxun.yang@flygoat.com> R: Jiaxun Yang <jiaxun.yang@flygoat.com>
S: Odd Fixes S: Odd Fixes
F: hw/mips/fuloong2e.c F: hw/mips/fuloong2e.c
F: hw/isa/vt82c686.c
F: hw/pci-host/bonito.c F: hw/pci-host/bonito.c
F: hw/usb/vt82c686-uhci-pci.c
F: include/hw/isa/vt82c686.h
F: include/hw/pci-host/bonito.h F: include/hw/pci-host/bonito.h
F: tests/avocado/machine_mips_fuloong2e.py F: tests/avocado/machine_mips_fuloong2e.py
@@ -1357,7 +1321,6 @@ F: hw/intc/loongson_liointc.c
F: hw/mips/loongson3_bootp.c F: hw/mips/loongson3_bootp.c
F: hw/mips/loongson3_bootp.h F: hw/mips/loongson3_bootp.h
F: hw/mips/loongson3_virt.c F: hw/mips/loongson3_virt.c
F: include/hw/intc/loongson_liointc.h
F: tests/avocado/machine_mips_loongson3v.py F: tests/avocado/machine_mips_loongson3v.py
Boston Boston
@@ -1375,7 +1338,6 @@ or1k-sim
M: Jia Liu <proljc@gmail.com> M: Jia Liu <proljc@gmail.com>
S: Maintained S: Maintained
F: docs/system/openrisc/or1k-sim.rst F: docs/system/openrisc/or1k-sim.rst
F: hw/intc/ompic.c
F: hw/openrisc/openrisc_sim.c F: hw/openrisc/openrisc_sim.c
PowerPC Machines PowerPC Machines
@@ -1383,8 +1345,7 @@ PowerPC Machines
405 (ref405ep) 405 (ref405ep)
L: qemu-ppc@nongnu.org L: qemu-ppc@nongnu.org
S: Orphan S: Orphan
F: hw/ppc/ppc405* F: hw/ppc/ppc405_boards.c
F: tests/avocado/ppc_405.py
Bamboo Bamboo
L: qemu-ppc@nongnu.org L: qemu-ppc@nongnu.org
@@ -1396,7 +1357,6 @@ e500
L: qemu-ppc@nongnu.org L: qemu-ppc@nongnu.org
S: Orphan S: Orphan
F: hw/ppc/e500* F: hw/ppc/e500*
F: hw/ppc/ppce500_spin.c
F: hw/gpio/mpc8xxx.c F: hw/gpio/mpc8xxx.c
F: hw/i2c/mpc_i2c.c F: hw/i2c/mpc_i2c.c
F: hw/net/fsl_etsec/ F: hw/net/fsl_etsec/
@@ -1404,9 +1364,8 @@ F: hw/pci-host/ppce500.c
F: include/hw/ppc/ppc_e500.h F: include/hw/ppc/ppc_e500.h
F: include/hw/pci-host/ppce500.h F: include/hw/pci-host/ppce500.h
F: pc-bios/u-boot.e500 F: pc-bios/u-boot.e500
F: hw/intc/openpic_kvm.c F: hw/intc/openpic_kvm.h
F: include/hw/ppc/openpic_kvm.h F: include/hw/ppc/openpic_kvm.h
F: docs/system/ppc/ppce500.rst
mpc8544ds mpc8544ds
L: qemu-ppc@nongnu.org L: qemu-ppc@nongnu.org
@@ -1426,7 +1385,6 @@ F: hw/pci-bridge/dec.[hc]
F: hw/misc/macio/ F: hw/misc/macio/
F: hw/misc/mos6522.c F: hw/misc/mos6522.c
F: hw/nvram/mac_nvram.c F: hw/nvram/mac_nvram.c
F: hw/ppc/fw_cfg.c
F: hw/input/adb* F: hw/input/adb*
F: include/hw/misc/macio/ F: include/hw/misc/macio/
F: include/hw/misc/mos6522.h F: include/hw/misc/mos6522.h
@@ -1480,10 +1438,6 @@ F: hw/*/spapr*
F: include/hw/*/spapr* F: include/hw/*/spapr*
F: hw/*/xics* F: hw/*/xics*
F: include/hw/*/xics* F: include/hw/*/xics*
F: include/hw/ppc/fdt.h
F: hw/ppc/fdt.c
F: include/hw/ppc/pef.h
F: hw/ppc/pef.c
F: pc-bios/slof.bin F: pc-bios/slof.bin
F: docs/system/ppc/pseries.rst F: docs/system/ppc/pseries.rst
F: docs/specs/ppc-spapr-* F: docs/specs/ppc-spapr-*
@@ -1521,7 +1475,6 @@ M: BALATON Zoltan <balaton@eik.bme.hu>
L: qemu-ppc@nongnu.org L: qemu-ppc@nongnu.org
S: Maintained S: Maintained
F: hw/ppc/sam460ex.c F: hw/ppc/sam460ex.c
F: hw/ppc/ppc440_uc.c
F: hw/ppc/ppc440_pcix.c F: hw/ppc/ppc440_pcix.c
F: hw/display/sm501* F: hw/display/sm501*
F: hw/ide/sii3112.c F: hw/ide/sii3112.c
@@ -1539,14 +1492,6 @@ F: hw/pci-host/mv64361.c
F: hw/pci-host/mv643xx.h F: hw/pci-host/mv643xx.h
F: include/hw/pci-host/mv64361.h F: include/hw/pci-host/mv64361.h
amigaone
M: BALATON Zoltan <balaton@eik.bme.hu>
L: qemu-ppc@nongnu.org
S: Maintained
F: hw/ppc/amigaone.c
F: hw/pci-host/articia.c
F: include/hw/pci-host/articia.h
Virtual Open Firmware (VOF) Virtual Open Firmware (VOF)
M: Alexey Kardashevskiy <aik@ozlabs.ru> M: Alexey Kardashevskiy <aik@ozlabs.ru>
R: David Gibson <david@gibson.dropbear.id.au> R: David Gibson <david@gibson.dropbear.id.au>
@@ -1573,7 +1518,6 @@ Microchip PolarFire SoC Icicle Kit
M: Bin Meng <bin.meng@windriver.com> M: Bin Meng <bin.meng@windriver.com>
L: qemu-riscv@nongnu.org L: qemu-riscv@nongnu.org
S: Supported S: Supported
F: docs/system/riscv/microchip-icicle-kit.rst
F: hw/riscv/microchip_pfsoc.c F: hw/riscv/microchip_pfsoc.c
F: hw/char/mchp_pfsoc_mmuart.c F: hw/char/mchp_pfsoc_mmuart.c
F: hw/misc/mchp_pfsoc_dmc.c F: hw/misc/mchp_pfsoc_dmc.c
@@ -1589,7 +1533,6 @@ Shakti C class SoC
M: Vijai Kumar K <vijai@behindbytes.com> M: Vijai Kumar K <vijai@behindbytes.com>
L: qemu-riscv@nongnu.org L: qemu-riscv@nongnu.org
S: Supported S: Supported
F: docs/system/riscv/shakti-c.rst
F: hw/riscv/shakti_c.c F: hw/riscv/shakti_c.c
F: hw/char/shakti_uart.c F: hw/char/shakti_uart.c
F: include/hw/riscv/shakti_c.h F: include/hw/riscv/shakti_c.h
@@ -1601,7 +1544,6 @@ M: Bin Meng <bin.meng@windriver.com>
M: Palmer Dabbelt <palmer@dabbelt.com> M: Palmer Dabbelt <palmer@dabbelt.com>
L: qemu-riscv@nongnu.org L: qemu-riscv@nongnu.org
S: Supported S: Supported
F: docs/system/riscv/sifive_u.rst
F: hw/*/*sifive*.c F: hw/*/*sifive*.c
F: include/hw/*/*sifive*.h F: include/hw/*/*sifive*.h
@@ -1626,7 +1568,6 @@ F: hw/intc/sh_intc.c
F: hw/pci-host/sh_pci.c F: hw/pci-host/sh_pci.c
F: hw/timer/sh_timer.c F: hw/timer/sh_timer.c
F: include/hw/sh4/sh_intc.h F: include/hw/sh4/sh_intc.h
F: include/hw/timer/tmu012.h
Shix Shix
R: Yoshinori Sato <ysato@users.sourceforge.jp> R: Yoshinori Sato <ysato@users.sourceforge.jp>
@@ -1750,16 +1691,6 @@ F: hw/s390x/event-facility.c
F: hw/s390x/sclp*.c F: hw/s390x/sclp*.c
L: qemu-s390x@nongnu.org L: qemu-s390x@nongnu.org
S390 CPU topology
M: Nina Schoetterl-Glausch <nsg@linux.ibm.com>
S: Supported
F: include/hw/s390x/cpu-topology.h
F: hw/s390x/cpu-topology.c
F: target/s390x/kvm/stsi-topology.c
F: docs/devel/s390-cpu-topology.rst
F: docs/system/s390x/cpu-topology.rst
F: tests/avocado/s390_topology.py
X86 Machines X86 Machines
------------ ------------
PC PC
@@ -1774,7 +1705,7 @@ F: hw/pci-host/pam.c
F: include/hw/pci-host/i440fx.h F: include/hw/pci-host/i440fx.h
F: include/hw/pci-host/q35.h F: include/hw/pci-host/q35.h
F: include/hw/pci-host/pam.h F: include/hw/pci-host/pam.h
F: hw/isa/piix.c F: hw/isa/piix3.c
F: hw/isa/lpc_ich9.c F: hw/isa/lpc_ich9.c
F: hw/i2c/smbus_ich9.c F: hw/i2c/smbus_ich9.c
F: hw/acpi/piix4.c F: hw/acpi/piix4.c
@@ -1784,7 +1715,7 @@ F: include/hw/southbridge/ich9.h
F: include/hw/southbridge/piix.h F: include/hw/southbridge/piix.h
F: hw/isa/apm.c F: hw/isa/apm.c
F: include/hw/isa/apm.h F: include/hw/isa/apm.h
F: tests/unit/test-x86-topo.c F: tests/unit/test-x86-cpuid.c
F: tests/qtest/test-x86-cpuid-compat.c F: tests/qtest/test-x86-cpuid-compat.c
PC Chipset PC Chipset
@@ -1814,7 +1745,6 @@ F: include/hw/dma/i8257.h
F: include/hw/i2c/pm_smbus.h F: include/hw/i2c/pm_smbus.h
F: include/hw/input/i8042.h F: include/hw/input/i8042.h
F: include/hw/intc/ioapic* F: include/hw/intc/ioapic*
F: include/hw/intc/i8259.h
F: include/hw/isa/i8259_internal.h F: include/hw/isa/i8259_internal.h
F: include/hw/isa/superio.h F: include/hw/isa/superio.h
F: include/hw/timer/hpet.h F: include/hw/timer/hpet.h
@@ -1836,6 +1766,7 @@ M: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
R: Philippe Mathieu-Daudé <philmd@linaro.org> R: Philippe Mathieu-Daudé <philmd@linaro.org>
R: Yanan Wang <wangyanan55@huawei.com> R: Yanan Wang <wangyanan55@huawei.com>
S: Supported S: Supported
F: cpu.c
F: hw/core/cpu.c F: hw/core/cpu.c
F: hw/core/machine-qmp-cmds.c F: hw/core/machine-qmp-cmds.c
F: hw/core/machine.c F: hw/core/machine.c
@@ -1844,7 +1775,6 @@ F: hw/core/null-machine.c
F: hw/core/numa.c F: hw/core/numa.c
F: hw/cpu/cluster.c F: hw/cpu/cluster.c
F: qapi/machine.json F: qapi/machine.json
F: qapi/machine-common.json
F: qapi/machine-target.json F: qapi/machine-target.json
F: include/hw/boards.h F: include/hw/boards.h
F: include/hw/core/cpu.h F: include/hw/core/cpu.h
@@ -1870,7 +1800,6 @@ M: Max Filippov <jcmvbkbc@gmail.com>
S: Maintained S: Maintained
F: hw/xtensa/xtfpga.c F: hw/xtensa/xtfpga.c
F: hw/net/opencores_eth.c F: hw/net/opencores_eth.c
F: include/hw/xtensa/mx_pic.h
Devices Devices
------- -------
@@ -1896,7 +1825,6 @@ EDU
M: Jiri Slaby <jslaby@suse.cz> M: Jiri Slaby <jslaby@suse.cz>
S: Maintained S: Maintained
F: hw/misc/edu.c F: hw/misc/edu.c
F: docs/specs/edu.rst
IDE IDE
M: John Snow <jsnow@redhat.com> M: John Snow <jsnow@redhat.com>
@@ -2032,9 +1960,7 @@ F: docs/specs/acpi_hest_ghes.rst
ppc4xx ppc4xx
L: qemu-ppc@nongnu.org L: qemu-ppc@nongnu.org
S: Orphan S: Orphan
F: hw/ppc/ppc4xx*.c F: hw/ppc/ppc4*.c
F: hw/ppc/ppc440_uc.c
F: hw/ppc/ppc440.h
F: hw/i2c/ppc4xx_i2c.c F: hw/i2c/ppc4xx_i2c.c
F: include/hw/ppc/ppc4xx.h F: include/hw/ppc/ppc4xx.h
F: include/hw/i2c/ppc4xx_i2c.h F: include/hw/i2c/ppc4xx_i2c.h
@@ -2046,7 +1972,6 @@ M: Marc-André Lureau <marcandre.lureau@redhat.com>
R: Paolo Bonzini <pbonzini@redhat.com> R: Paolo Bonzini <pbonzini@redhat.com>
S: Odd Fixes S: Odd Fixes
F: hw/char/ F: hw/char/
F: include/hw/char/
Network devices Network devices
M: Jason Wang <jasowang@redhat.com> M: Jason Wang <jasowang@redhat.com>
@@ -2183,7 +2108,7 @@ S: Maintained
F: docs/interop/virtio-balloon-stats.rst F: docs/interop/virtio-balloon-stats.rst
F: hw/virtio/virtio-balloon*.c F: hw/virtio/virtio-balloon*.c
F: include/hw/virtio/virtio-balloon.h F: include/hw/virtio/virtio-balloon.h
F: system/balloon.c F: softmmu/balloon.c
F: include/sysemu/balloon.h F: include/sysemu/balloon.h
virtio-9p virtio-9p
@@ -2229,13 +2154,6 @@ T: git https://gitlab.com/cohuck/qemu.git s390-next
T: git https://github.com/borntraeger/qemu.git s390-next T: git https://github.com/borntraeger/qemu.git s390-next
L: qemu-s390x@nongnu.org L: qemu-s390x@nongnu.org
virtio-dmabuf
M: Albert Esteve <aesteve@redhat.com>
S: Supported
F: hw/display/virtio-dmabuf.c
F: include/hw/virtio/virtio-dmabuf.h
F: tests/unit/test-virtio-dmabuf.c
virtiofs virtiofs
M: Stefan Hajnoczi <stefanha@redhat.com> M: Stefan Hajnoczi <stefanha@redhat.com>
S: Supported S: Supported
@@ -2323,15 +2241,6 @@ F: hw/virtio/virtio-mem-pci.h
F: hw/virtio/virtio-mem-pci.c F: hw/virtio/virtio-mem-pci.c
F: include/hw/virtio/virtio-mem.h F: include/hw/virtio/virtio-mem.h
virtio-snd
M: Gerd Hoffmann <kraxel@redhat.com>
R: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
S: Supported
F: hw/audio/virtio-snd.c
F: hw/audio/virtio-snd-pci.c
F: include/hw/audio/virtio-snd.h
F: docs/system/devices/virtio-snd.rst
nvme nvme
M: Keith Busch <kbusch@kernel.org> M: Keith Busch <kbusch@kernel.org>
M: Klaus Jensen <its@irrelevant.dk> M: Klaus Jensen <its@irrelevant.dk>
@@ -2374,7 +2283,6 @@ S: Maintained
F: hw/net/vmxnet* F: hw/net/vmxnet*
F: hw/scsi/vmw_pvscsi* F: hw/scsi/vmw_pvscsi*
F: tests/qtest/vmxnet3-test.c F: tests/qtest/vmxnet3-test.c
F: docs/specs/vwm_pvscsi-spec.rst
Rocker Rocker
M: Jiri Pirko <jiri@resnulli.us> M: Jiri Pirko <jiri@resnulli.us>
@@ -2459,7 +2367,7 @@ S: Orphan
R: Ani Sinha <ani@anisinha.ca> R: Ani Sinha <ani@anisinha.ca>
F: hw/acpi/vmgenid.c F: hw/acpi/vmgenid.c
F: include/hw/acpi/vmgenid.h F: include/hw/acpi/vmgenid.h
F: docs/specs/vmgenid.rst F: docs/specs/vmgenid.txt
F: tests/qtest/vmgenid-test.c F: tests/qtest/vmgenid-test.c
LED LED
@@ -2491,7 +2399,6 @@ F: hw/display/vga*
F: hw/display/bochs-display.c F: hw/display/bochs-display.c
F: include/hw/display/vga.h F: include/hw/display/vga.h
F: include/hw/display/bochs-vbe.h F: include/hw/display/bochs-vbe.h
F: docs/specs/standard-vga.rst
ramfb ramfb
M: Gerd Hoffmann <kraxel@redhat.com> M: Gerd Hoffmann <kraxel@redhat.com>
@@ -2505,7 +2412,6 @@ S: Odd Fixes
F: hw/display/virtio-gpu* F: hw/display/virtio-gpu*
F: hw/display/virtio-vga.* F: hw/display/virtio-vga.*
F: include/hw/virtio/virtio-gpu.h F: include/hw/virtio/virtio-gpu.h
F: docs/system/devices/virtio-gpu.rst
vhost-user-blk vhost-user-blk
M: Raphael Norwitz <raphael.norwitz@nutanix.com> M: Raphael Norwitz <raphael.norwitz@nutanix.com>
@@ -2546,18 +2452,9 @@ PIIX4 South Bridge (i82371AB)
M: Hervé Poussineau <hpoussin@reactos.org> M: Hervé Poussineau <hpoussin@reactos.org>
M: Philippe Mathieu-Daudé <philmd@linaro.org> M: Philippe Mathieu-Daudé <philmd@linaro.org>
S: Maintained S: Maintained
F: hw/isa/piix.c F: hw/isa/piix4.c
F: include/hw/southbridge/piix.h F: include/hw/southbridge/piix.h
VIA South Bridges (VT82C686B, VT8231)
M: BALATON Zoltan <balaton@eik.bme.hu>
M: Philippe Mathieu-Daudé <philmd@linaro.org>
R: Jiaxun Yang <jiaxun.yang@flygoat.com>
S: Maintained
F: hw/isa/vt82c686.c
F: hw/usb/vt82c686-uhci-pci.c
F: include/hw/isa/vt82c686.h
Firmware configuration (fw_cfg) Firmware configuration (fw_cfg)
M: Philippe Mathieu-Daudé <philmd@linaro.org> M: Philippe Mathieu-Daudé <philmd@linaro.org>
R: Gerd Hoffmann <kraxel@redhat.com> R: Gerd Hoffmann <kraxel@redhat.com>
@@ -2608,7 +2505,6 @@ W: https://canbus.pages.fel.cvut.cz/
F: net/can/* F: net/can/*
F: hw/net/can/* F: hw/net/can/*
F: include/net/can_*.h F: include/net/can_*.h
F: docs/system/devices/can.rst
OpenPIC interrupt controller OpenPIC interrupt controller
M: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> M: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
@@ -2652,7 +2548,7 @@ M: Halil Pasic <pasic@linux.ibm.com>
M: Christian Borntraeger <borntraeger@linux.ibm.com> M: Christian Borntraeger <borntraeger@linux.ibm.com>
S: Supported S: Supported
F: hw/s390x/storage-keys.h F: hw/s390x/storage-keys.h
F: hw/s390x/s390-skeys*.c F: hw/390x/s390-skeys*.c
L: qemu-s390x@nongnu.org L: qemu-s390x@nongnu.org
S390 storage attribute device S390 storage attribute device
@@ -2660,7 +2556,7 @@ M: Halil Pasic <pasic@linux.ibm.com>
M: Christian Borntraeger <borntraeger@linux.ibm.com> M: Christian Borntraeger <borntraeger@linux.ibm.com>
S: Supported S: Supported
F: hw/s390x/storage-attributes.h F: hw/s390x/storage-attributes.h
F: hw/s390x/s390-stattrib*.c F: hw/s390/s390-stattrib*.c
L: qemu-s390x@nongnu.org L: qemu-s390x@nongnu.org
S390 floating interrupt controller S390 floating interrupt controller
@@ -2680,14 +2576,6 @@ F: hw/usb/canokey.c
F: hw/usb/canokey.h F: hw/usb/canokey.h
F: docs/system/devices/canokey.rst F: docs/system/devices/canokey.rst
Hyper-V Dynamic Memory Protocol
M: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
S: Supported
F: hw/hyperv/hv-balloon*.c
F: hw/hyperv/hv-balloon*.h
F: include/hw/hyperv/dynmem-proto.h
F: include/hw/hyperv/hv-balloon.h
Subsystems Subsystems
---------- ----------
Overall Audio backends Overall Audio backends
@@ -2791,13 +2679,12 @@ S: Supported
F: util/async.c F: util/async.c
F: util/aio-*.c F: util/aio-*.c
F: util/aio-*.h F: util/aio-*.h
F: util/defer-call.c
F: util/fdmon-*.c F: util/fdmon-*.c
F: block/io.c F: block/io.c
F: block/plug.c
F: migration/block* F: migration/block*
F: include/block/aio.h F: include/block/aio.h
F: include/block/aio-wait.h F: include/block/aio-wait.h
F: include/qemu/defer-call.h
F: scripts/qemugdb/aio.py F: scripts/qemugdb/aio.py
F: tests/unit/test-fdmon-epoll.c F: tests/unit/test-fdmon-epoll.c
T: git https://github.com/stefanha/qemu.git block T: git https://github.com/stefanha/qemu.git block
@@ -2901,7 +2788,7 @@ Device Tree
M: Alistair Francis <alistair.francis@wdc.com> M: Alistair Francis <alistair.francis@wdc.com>
R: David Gibson <david@gibson.dropbear.id.au> R: David Gibson <david@gibson.dropbear.id.au>
S: Maintained S: Maintained
F: system/device_tree.c F: softmmu/device_tree.c
F: include/sysemu/device_tree.h F: include/sysemu/device_tree.h
Dump Dump
@@ -2916,7 +2803,6 @@ F: include/sysemu/dump.h
F: qapi/dump.json F: qapi/dump.json
F: scripts/dump-guest-memory.py F: scripts/dump-guest-memory.py
F: stubs/dump.c F: stubs/dump.c
F: docs/specs/vmcoreinfo.rst
Error reporting Error reporting
M: Markus Armbruster <armbru@redhat.com> M: Markus Armbruster <armbru@redhat.com>
@@ -2942,8 +2828,8 @@ F: gdbstub/*
F: include/exec/gdbstub.h F: include/exec/gdbstub.h
F: include/gdbstub/* F: include/gdbstub/*
F: gdb-xml/ F: gdb-xml/
F: tests/tcg/multiarch/gdbstub/* F: tests/tcg/multiarch/gdbstub/
F: scripts/feature_to_c.py F: scripts/feature_to_c.sh
F: scripts/probe-gdb-support.py F: scripts/probe-gdb-support.py
Memory API Memory API
@@ -2958,11 +2844,11 @@ F: include/exec/memory.h
F: include/exec/ram_addr.h F: include/exec/ram_addr.h
F: include/exec/ramblock.h F: include/exec/ramblock.h
F: include/sysemu/memory_mapping.h F: include/sysemu/memory_mapping.h
F: system/dma-helpers.c F: softmmu/dma-helpers.c
F: system/ioport.c F: softmmu/ioport.c
F: system/memory.c F: softmmu/memory.c
F: system/memory_mapping.c F: softmmu/memory_mapping.c
F: system/physmem.c F: softmmu/physmem.c
F: include/exec/memory-internal.h F: include/exec/memory-internal.h
F: scripts/coccinelle/memory-region-housekeeping.cocci F: scripts/coccinelle/memory-region-housekeeping.cocci
@@ -2977,7 +2863,6 @@ F: hw/mem/pc-dimm.c
F: include/hw/mem/memory-device.h F: include/hw/mem/memory-device.h
F: include/hw/mem/nvdimm.h F: include/hw/mem/nvdimm.h
F: include/hw/mem/pc-dimm.h F: include/hw/mem/pc-dimm.h
F: stubs/memory_device.c
F: docs/nvdimm.txt F: docs/nvdimm.txt
SPICE SPICE
@@ -3015,13 +2900,14 @@ F: include/qemu/main-loop.h
F: include/sysemu/runstate.h F: include/sysemu/runstate.h
F: include/sysemu/runstate-action.h F: include/sysemu/runstate-action.h
F: util/main-loop.c F: util/main-loop.c
F: util/qemu-timer*.c F: util/qemu-timer.c
F: system/vl.c F: softmmu/vl.c
F: system/main.c F: softmmu/main.c
F: system/cpus.c F: softmmu/cpus.c
F: system/cpu-throttle.c F: softmmu/cpu-throttle.c
F: system/cpu-timers.c F: softmmu/cpu-timers.c
F: system/runstate* F: softmmu/icount.c
F: softmmu/runstate*
F: qapi/run-state.json F: qapi/run-state.json
Read, Copy, Update (RCU) Read, Copy, Update (RCU)
@@ -3164,11 +3050,10 @@ M: Michael Roth <michael.roth@amd.com>
M: Konstantin Kostiuk <kkostiuk@redhat.com> M: Konstantin Kostiuk <kkostiuk@redhat.com>
S: Maintained S: Maintained
F: qga/ F: qga/
F: contrib/systemd/qemu-guest-agent.service
F: docs/interop/qemu-ga.rst F: docs/interop/qemu-ga.rst
F: docs/interop/qemu-ga-ref.rst F: docs/interop/qemu-ga-ref.rst
F: scripts/qemu-guest-agent/ F: scripts/qemu-guest-agent/
F: tests/*/test-qga* F: tests/unit/test-qga.c
T: git https://github.com/mdroth/qemu.git qga T: git https://github.com/mdroth/qemu.git qga
QEMU Guest Agent Win32 QEMU Guest Agent Win32
@@ -3196,7 +3081,7 @@ F: qapi/qom.json
F: qapi/qdev.json F: qapi/qdev.json
F: scripts/coccinelle/qom-parent-type.cocci F: scripts/coccinelle/qom-parent-type.cocci
F: scripts/qom-cast-macro-clean-cocci-gen.py F: scripts/qom-cast-macro-clean-cocci-gen.py
F: system/qdev-monitor.c F: softmmu/qdev-monitor.c
F: stubs/qdev.c F: stubs/qdev.c
F: qom/ F: qom/
F: tests/unit/check-qom-interface.c F: tests/unit/check-qom-interface.c
@@ -3230,8 +3115,7 @@ M: Thomas Huth <thuth@redhat.com>
M: Laurent Vivier <lvivier@redhat.com> M: Laurent Vivier <lvivier@redhat.com>
R: Paolo Bonzini <pbonzini@redhat.com> R: Paolo Bonzini <pbonzini@redhat.com>
S: Maintained S: Maintained
F: system/qtest.c F: softmmu/qtest.c
F: include/sysemu/qtest.h
F: accel/qtest/ F: accel/qtest/
F: tests/qtest/ F: tests/qtest/
F: docs/devel/qgraph.rst F: docs/devel/qgraph.rst
@@ -3286,7 +3170,6 @@ F: stubs/
Tracing Tracing
M: Stefan Hajnoczi <stefanha@redhat.com> M: Stefan Hajnoczi <stefanha@redhat.com>
R: Mads Ynddal <mads@ynddal.dk>
S: Maintained S: Maintained
F: trace/ F: trace/
F: trace-events F: trace-events
@@ -3299,15 +3182,10 @@ F: docs/tools/qemu-trace-stap.rst
F: docs/devel/tracing.rst
T: git https://github.com/stefanha/qemu.git tracing
-Simpletrace
-M: Mads Ynddal <mads@ynddal.dk>
-S: Maintained
-F: scripts/simpletrace.py
TPM
M: Stefan Berger <stefanb@linux.ibm.com>
S: Maintained
-F: system/tpm*
+F: softmmu/tpm*
F: hw/tpm/*
F: include/hw/acpi/tpm.h
F: include/sysemu/tpm*
@@ -3323,8 +3201,7 @@ F: scripts/checkpatch.pl
Migration
M: Juan Quintela <quintela@redhat.com>
-M: Peter Xu <peterx@redhat.com>
-M: Fabiano Rosas <farosas@suse.de>
+R: Peter Xu <peterx@redhat.com>
R: Leonardo Bras <leobras@redhat.com>
S: Maintained
F: hw/core/vmstate-if.c
@@ -3339,20 +3216,11 @@ F: docs/devel/migration.rst
F: qapi/migration.json
F: tests/migration/
F: util/userfaultfd.c
-X: migration/rdma*
-RDMA Migration
-M: Juan Quintela <quintela@redhat.com>
-R: Li Zhijian <lizhijian@fujitsu.com>
-R: Peter Xu <peterx@redhat.com>
-R: Leonardo Bras <leobras@redhat.com>
-S: Odd Fixes
-F: migration/rdma*
Migration dirty limit and dirty page rate
M: Hyman Huang <yong.huang@smartx.com>
S: Maintained
-F: system/dirtylimit.c
+F: softmmu/dirtylimit.c
F: include/sysemu/dirtylimit.h
F: migration/dirtyrate.c
F: migration/dirtyrate.h
@@ -3376,7 +3244,7 @@ F: scripts/xml-preprocess*
Seccomp
M: Daniel P. Berrange <berrange@redhat.com>
S: Odd Fixes
-F: system/qemu-seccomp.c
+F: softmmu/qemu-seccomp.c
F: include/sysemu/seccomp.h
F: tests/unit/test-seccomp.c
@@ -3510,12 +3378,6 @@ M: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
S: Maintained
F: contrib/elf2dmp/
-Overall sensors
-M: Philippe Mathieu-Daudé <philmd@linaro.org>
-S: Odd Fixes
-F: hw/sensor
-F: include/hw/sensor
I2C and SMBus
M: Corey Minyard <cminyard@mvista.com>
S: Maintained
@@ -3681,7 +3543,7 @@ M: Alistair Francis <Alistair.Francis@wdc.com>
L: qemu-riscv@nongnu.org
S: Maintained
F: tcg/riscv/
-F: disas/riscv.[ch]
+F: disas/riscv.c
S390 TCG target
M: Richard Henderson <richard.henderson@linaro.org>
@@ -3801,7 +3663,7 @@ T: git https://github.com/stefanha/qemu.git block
Bootdevice
M: Gonglei <arei.gonglei@huawei.com>
S: Maintained
-F: system/bootdevice.c
+F: softmmu/bootdevice.c
Quorum
M: Alberto Garcia <berto@igalia.com>
@@ -3953,7 +3815,7 @@ F: docs/block-replication.txt
PVRDMA
M: Yuval Shaia <yuval.shaia.ml@gmail.com>
M: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
-S: Odd Fixes
+S: Maintained
F: hw/rdma/*
F: hw/rdma/vmw/*
F: docs/pvrdma.txt
@@ -4001,7 +3863,6 @@ M: Jason Wang <jasowang@redhat.com>
R: Andrew Melnychenko <andrew@daynix.com>
R: Yuri Benditovich <yuri.benditovich@daynix.com>
S: Maintained
-F: docs/devel/ebpf_rss.rst
F: ebpf/*
F: tools/ebpf/*
@@ -4018,7 +3879,6 @@ F: .github/workflows/lockdown.yml
F: .gitlab-ci.yml
F: .gitlab-ci.d/
F: .travis.yml
-F: docs/devel/ci*
F: scripts/ci/
F: tests/docker/
F: tests/vm/
@@ -4078,7 +3938,7 @@ F: gitdm.config
F: contrib/gitdm/*
Incompatible changes
-R: devel@lists.libvirt.org
+R: libvir-list@redhat.com
F: docs/about/deprecated.rst
Build System

View File

@@ -283,13 +283,6 @@ include $(SRC_PATH)/tests/vm/Makefile.include
print-help-run = printf " %-30s - %s\\n" "$1" "$2"
print-help = @$(call print-help-run,$1,$2)
-.PHONY: update-linux-vdso
-update-linux-vdso:
-@for m in $(SRC_PATH)/linux-user/*/Makefile.vdso; do \
-$(MAKE) $(SUBDIR_MAKEFLAGS) -C $$(dirname $$m) -f Makefile.vdso \
-SRC_PATH=$(SRC_PATH) BUILD_DIR=$(BUILD_DIR); \
-done
.PHONY: help
help:
@echo 'Generic targets:'
@@ -310,9 +303,6 @@ endif
$(call print-help,distclean,Remove all generated files)
$(call print-help,dist,Build a distributable tarball)
@echo ''
-@echo 'Linux-user targets:'
-$(call print-help,update-linux-vdso,Build linux-user vdso images)
-@echo ''
@echo 'Test targets:'
$(call print-help,check,Run all tests (check-help for details))
$(call print-help,bench,Run all benchmarks)

View File

@@ -30,7 +30,7 @@
#include "hw/core/accel-cpu.h"
#ifndef CONFIG_USER_ONLY
-#include "accel-system.h"
+#include "accel-softmmu.h"
#endif /* !CONFIG_USER_ONLY */
static const TypeInfo accel_type = {
@@ -119,37 +119,16 @@
}
}
-bool accel_cpu_common_realize(CPUState *cpu, Error **errp)
+bool accel_cpu_realizefn(CPUState *cpu, Error **errp)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
-AccelState *accel = current_accel();
-AccelClass *acc = ACCEL_GET_CLASS(accel);
-/* target specific realization */
-if (cc->accel_cpu && cc->accel_cpu->cpu_target_realize
-&& !cc->accel_cpu->cpu_target_realize(cpu, errp)) {
-return false;
+if (cc->accel_cpu && cc->accel_cpu->cpu_realizefn) {
+return cc->accel_cpu->cpu_realizefn(cpu, errp);
}
-/* generic realization */
-if (acc->cpu_common_realize && !acc->cpu_common_realize(cpu, errp)) {
-return false;
-}
return true;
}
-void accel_cpu_common_unrealize(CPUState *cpu)
-{
-AccelState *accel = current_accel();
-AccelClass *acc = ACCEL_GET_CLASS(accel);
-/* generic unrealization */
-if (acc->cpu_common_unrealize) {
-acc->cpu_common_unrealize(cpu);
-}
-}
int accel_supported_gdbstub_sstep_flags(void)
{
AccelState *accel = current_accel();
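The hunk above is the heart of the accel change on the newer-base side: realization is split into a target-specific hook (cc->accel_cpu->cpu_target_realize) and a generic per-accelerator hook (acc->cpu_common_realize), paired with accel_cpu_common_unrealize() for teardown. A minimal sketch of how an accelerator might fill in the generic hook, assuming only the AccelClass fields visible in this hunk (the my_accel_* names are hypothetical):

    /* Sketch only: per-vCPU setup for a hypothetical accelerator. */
    static bool my_accel_cpu_common_realize(CPUState *cpu, Error **errp)
    {
        /* allocate/initialise per-vCPU accelerator state here;   */
        /* on failure: error_setg(errp, ...); return false;       */
        return true;
    }

    static void my_accel_class_init(ObjectClass *oc, void *data)
    {
        AccelClass *acc = ACCEL_CLASS(oc);
        acc->cpu_common_realize   = my_accel_cpu_common_realize;
        acc->cpu_common_unrealize = NULL;   /* optional teardown hook */
    }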

View File

@@ -28,7 +28,7 @@
#include "hw/boards.h"
#include "sysemu/cpus.h"
#include "qemu/error-report.h"
-#include "accel-system.h"
+#include "accel-softmmu.h"
int accel_init_machine(AccelState *accel, MachineState *ms)
{
@@ -99,8 +99,8 @@ static const TypeInfo accel_ops_type_info = {
.class_size = sizeof(AccelOpsClass),
};
-static void accel_system_register_types(void)
+static void accel_softmmu_register_types(void)
{
type_register_static(&accel_ops_type_info);
}
-type_init(accel_system_register_types);
+type_init(accel_softmmu_register_types);

View File

@@ -7,9 +7,9 @@
* See the COPYING file in the top-level directory.
*/
-#ifndef ACCEL_SYSTEM_H
-#define ACCEL_SYSTEM_H
+#ifndef ACCEL_SOFTMMU_H
+#define ACCEL_SOFTMMU_H
void accel_init_ops_interfaces(AccelClass *ac);
-#endif /* ACCEL_SYSTEM_H */
+#endif /* ACCEL_SOFTMMU_H */

View File

@@ -27,7 +27,7 @@ static void *dummy_cpu_thread_fn(void *arg)
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
-cpu->neg.can_do_io = true;
+cpu->can_do_io = 1;
current_cpu = cpu;
#ifndef _WIN32

View File

@@ -428,7 +428,7 @@ static void *hvf_cpu_thread_fn(void *arg)
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
-cpu->neg.can_do_io = true;
+cpu->can_do_io = 1;
current_cpu = cpu;
hvf_init_vcpu(cpu);

View File

@@ -36,7 +36,7 @@ static void *kvm_vcpu_thread_fn(void *arg)
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
-cpu->neg.can_do_io = true;
+cpu->can_do_io = 1;
current_cpu = cpu;
r = kvm_init_vcpu(cpu, &error_fatal);
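The three vCPU-thread hunks above (dummy, HVF, KVM) all touch the same statement: what one side writes as cpu->can_do_io = 1 the other writes as cpu->neg.can_do_io = true, i.e. on that side the flag lives in the CPUState 'neg' sub-struct and is a bool. A two-line sketch of the two spellings, with the surrounding vCPU-thread boilerplate assumed:

    /* older-base layout: integer field directly on CPUState */
    cpu->can_do_io = 1;

    /* newer-base layout: bool inside the negative-offset sub-struct */
    cpu->neg.can_do_io = true;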

View File

@@ -90,6 +90,8 @@ bool kvm_kernel_irqchip;
bool kvm_split_irqchip;
bool kvm_async_interrupts_allowed;
bool kvm_halt_in_kernel_allowed;
+bool kvm_eventfds_allowed;
+bool kvm_irqfds_allowed;
bool kvm_resamplefds_allowed;
bool kvm_msi_via_irqfd_allowed;
bool kvm_gsi_routing_allowed;
@@ -97,6 +99,8 @@ bool kvm_gsi_direct_mapping;
bool kvm_allowed;
bool kvm_readonly_mem_allowed;
bool kvm_vm_attributes_allowed;
+bool kvm_direct_msi_allowed;
+bool kvm_ioeventfd_any_length_allowed;
bool kvm_msi_use_devid;
bool kvm_has_guest_debug;
static int kvm_sstep_flags;
@@ -107,9 +111,6 @@ static const KVMCapabilityInfo kvm_required_capabilites[] = {
KVM_CAP_INFO(USER_MEMORY),
KVM_CAP_INFO(DESTROY_MEMORY_REGION_WORKS),
KVM_CAP_INFO(JOIN_MEMORY_REGIONS_WORKS),
-KVM_CAP_INFO(INTERNAL_ERROR_DATA),
-KVM_CAP_INFO(IOEVENTFD),
-KVM_CAP_INFO(IOEVENTFD_ANY_LENGTH),
KVM_CAP_LAST_INFO
};
@@ -173,31 +174,13 @@ void kvm_resample_fd_notify(int gsi)
}
}
-unsigned int kvm_get_max_memslots(void)
+int kvm_get_max_memslots(void)
{
KVMState *s = KVM_STATE(current_accel());
return s->nr_slots;
}
-unsigned int kvm_get_free_memslots(void)
-{
-unsigned int used_slots = 0;
-KVMState *s = kvm_state;
-int i;
-kvm_slots_lock();
-for (i = 0; i < s->nr_as; i++) {
-if (!s->as[i].ml) {
-continue;
-}
-used_slots = MAX(used_slots, s->as[i].ml->nr_used_slots);
-}
-kvm_slots_unlock();
-return s->nr_slots - used_slots;
-}
/* Called with KVMMemoryListener.slots_lock held */
static KVMSlot *kvm_get_free_slot(KVMMemoryListener *kml)
{
@@ -213,6 +196,19 @@ static KVMSlot *kvm_get_free_slot(KVMMemoryListener *kml)
return NULL;
}
+bool kvm_has_free_slot(MachineState *ms)
+{
+KVMState *s = KVM_STATE(ms->accelerator);
+bool result;
+KVMMemoryListener *kml = &s->memory_listener;
+kvm_slots_lock();
+result = !!kvm_get_free_slot(kml);
+kvm_slots_unlock();
+return result;
+}
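Taken together, the two hunks above swap the boolean kvm_has_free_slot() for a counting kvm_get_free_memslots() built on the per-listener nr_used_slots bookkeeping that the same comparison touches further down. A hedged sketch of a caller that wants to know whether it can still plug N more RAM regions (the memory-device use case is assumed; it is not part of this diff):

    /* Sketch only: capacity check before plugging 'needed' extra regions. */
    static bool can_plug_memslots(unsigned int needed, Error **errp)
    {
        if (kvm_enabled() && kvm_get_free_memslots() < needed) {
            error_setg(errp, "not enough free KVM memory slots (%u needed)",
                       needed);
            return false;
        }
        return true;
    }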
/* Called with KVMMemoryListener.slots_lock held */
static KVMSlot *kvm_alloc_slot(KVMMemoryListener *kml)
{
@@ -1105,6 +1101,13 @@ static void kvm_coalesce_pio_del(MemoryListener *listener,
}
}
+static MemoryListener kvm_coalesced_pio_listener = {
+.name = "kvm-coalesced-pio",
+.coalesced_io_add = kvm_coalesce_pio_add,
+.coalesced_io_del = kvm_coalesce_pio_del,
+.priority = MEMORY_LISTENER_PRIORITY_MIN,
+};
int kvm_check_extension(KVMState *s, unsigned int extension)
{
int ret;
@@ -1246,6 +1249,43 @@ static int kvm_set_ioeventfd_pio(int fd, uint16_t addr, uint16_t val,
}
+static int kvm_check_many_ioeventfds(void)
+{
+/* Userspace can use ioeventfd for io notification. This requires a host
+ * that supports eventfd(2) and an I/O thread; since eventfd does not
+ * support SIGIO it cannot interrupt the vcpu.
+ *
+ * Older kernels have a 6 device limit on the KVM io bus. Find out so we
+ * can avoid creating too many ioeventfds.
+ */
+#if defined(CONFIG_EVENTFD)
+int ioeventfds[7];
+int i, ret = 0;
+for (i = 0; i < ARRAY_SIZE(ioeventfds); i++) {
+ioeventfds[i] = eventfd(0, EFD_CLOEXEC);
+if (ioeventfds[i] < 0) {
+break;
+}
+ret = kvm_set_ioeventfd_pio(ioeventfds[i], 0, i, true, 2, true);
+if (ret < 0) {
+close(ioeventfds[i]);
+break;
+}
+}
+/* Decide whether many devices are supported or not */
+ret = i == ARRAY_SIZE(ioeventfds);
+while (i-- > 0) {
+kvm_set_ioeventfd_pio(ioeventfds[i], 0, i, false, 2, true);
+close(ioeventfds[i]);
+}
+return ret;
+#else
+return 0;
+#endif
+}
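kvm_check_many_ioeventfds(), present only on the older-base side above, probes the historical 6-ioeventfds-per-bus limit by opening seven eventfds and registering each as a PIO ioeventfd until one of the steps fails. The same probe-then-unwind pattern, stripped of the KVM ioctl so it runs standalone on Linux (illustrative only, not QEMU code):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/eventfd.h>

    int main(void)
    {
        int fds[7];
        int i;

        for (i = 0; i < 7; i++) {
            fds[i] = eventfd(0, EFD_CLOEXEC);
            if (fds[i] < 0) {
                break;              /* out of fds: treat as "not many" */
            }
            /* QEMU additionally calls kvm_set_ioeventfd_pio() here. */
        }
        printf("opened %d of 7 eventfds\n", i);
        while (i-- > 0) {
            close(fds[i]);          /* unwind in reverse, like the original */
        }
        return 0;
    }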
static const KVMCapabilityInfo *
kvm_check_extension_list(KVMState *s, const KVMCapabilityInfo *list)
{
@@ -1347,7 +1387,6 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
}
start_addr += slot_size;
size -= slot_size;
-kml->nr_used_slots--;
} while (size);
return;
}
@@ -1373,7 +1412,6 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
ram_start_offset += slot_size;
ram += slot_size;
size -= slot_size;
-kml->nr_used_slots++;
} while (size);
}
@@ -1761,8 +1799,6 @@ void kvm_memory_listener_register(KVMState *s, KVMMemoryListener *kml,
static MemoryListener kvm_io_listener = {
.name = "kvm-io",
-.coalesced_io_add = kvm_coalesce_pio_add,
-.coalesced_io_del = kvm_coalesce_pio_del,
.eventfd_add = kvm_io_ioeventfd_add,
.eventfd_del = kvm_io_ioeventfd_del,
.priority = MEMORY_LISTENER_PRIORITY_DEV_BACKEND,
@@ -1804,7 +1840,7 @@ static void clear_gsi(KVMState *s, unsigned int gsi)
void kvm_init_irq_routing(KVMState *s)
{
-int gsi_count;
+int gsi_count, i;
gsi_count = kvm_check_extension(s, KVM_CAP_IRQ_ROUTING) - 1;
if (gsi_count > 0) {
@@ -1816,6 +1852,12 @@ void kvm_init_irq_routing(KVMState *s)
s->irq_routes = g_malloc0(sizeof(*s->irq_routes));
s->nr_allocated_irq_routes = 0;
+if (!kvm_direct_msi_allowed) {
+for (i = 0; i < KVM_MSI_HASHTAB_SIZE; i++) {
+QTAILQ_INIT(&s->msi_hashtab[i]);
+}
+}
kvm_arch_init_irq_routing(s);
}
@@ -1935,10 +1977,41 @@ void kvm_irqchip_change_notify(void)
notifier_list_notify(&kvm_irqchip_change_notifiers, NULL);
}
+static unsigned int kvm_hash_msi(uint32_t data)
+{
+/* This is optimized for IA32 MSI layout. However, no other arch shall
+ * repeat the mistake of not providing a direct MSI injection API. */
+return data & 0xff;
+}
+static void kvm_flush_dynamic_msi_routes(KVMState *s)
+{
+KVMMSIRoute *route, *next;
+unsigned int hash;
+for (hash = 0; hash < KVM_MSI_HASHTAB_SIZE; hash++) {
+QTAILQ_FOREACH_SAFE(route, &s->msi_hashtab[hash], entry, next) {
+kvm_irqchip_release_virq(s, route->kroute.gsi);
+QTAILQ_REMOVE(&s->msi_hashtab[hash], route, entry);
+g_free(route);
+}
+}
+}
static int kvm_irqchip_get_virq(KVMState *s)
{
int next_virq;
+/*
+ * PIC and IOAPIC share the first 16 GSI numbers, thus the available
+ * GSI numbers are more than the number of IRQ route. Allocating a GSI
+ * number can succeed even though a new route entry cannot be added.
+ * When this happens, flush dynamic MSI entries to free IRQ route entries.
+ */
+if (!kvm_direct_msi_allowed && s->irq_routes->nr == s->gsi_count) {
+kvm_flush_dynamic_msi_routes(s);
+}
/* Return the lowest unused GSI in the bitmap */
next_virq = find_first_zero_bit(s->used_gsi_bitmap, s->gsi_count);
if (next_virq >= s->gsi_count) {
@@ -1948,17 +2021,63 @@ static int kvm_irqchip_get_virq(KVMState *s)
}
}
+static KVMMSIRoute *kvm_lookup_msi_route(KVMState *s, MSIMessage msg)
+{
+unsigned int hash = kvm_hash_msi(msg.data);
+KVMMSIRoute *route;
+QTAILQ_FOREACH(route, &s->msi_hashtab[hash], entry) {
+if (route->kroute.u.msi.address_lo == (uint32_t)msg.address &&
+route->kroute.u.msi.address_hi == (msg.address >> 32) &&
+route->kroute.u.msi.data == le32_to_cpu(msg.data)) {
+return route;
+}
+}
+return NULL;
+}
int kvm_irqchip_send_msi(KVMState *s, MSIMessage msg)
{
struct kvm_msi msi;
-msi.address_lo = (uint32_t)msg.address;
-msi.address_hi = msg.address >> 32;
-msi.data = le32_to_cpu(msg.data);
-msi.flags = 0;
-memset(msi.pad, 0, sizeof(msi.pad));
+KVMMSIRoute *route;
+if (kvm_direct_msi_allowed) {
+msi.address_lo = (uint32_t)msg.address;
+msi.address_hi = msg.address >> 32;
+msi.data = le32_to_cpu(msg.data);
+msi.flags = 0;
+memset(msi.pad, 0, sizeof(msi.pad));
return kvm_vm_ioctl(s, KVM_SIGNAL_MSI, &msi);
+}
+route = kvm_lookup_msi_route(s, msg);
+if (!route) {
+int virq;
+virq = kvm_irqchip_get_virq(s);
+if (virq < 0) {
+return virq;
+}
+route = g_new0(KVMMSIRoute, 1);
+route->kroute.gsi = virq;
+route->kroute.type = KVM_IRQ_ROUTING_MSI;
+route->kroute.flags = 0;
+route->kroute.u.msi.address_lo = (uint32_t)msg.address;
+route->kroute.u.msi.address_hi = msg.address >> 32;
+route->kroute.u.msi.data = le32_to_cpu(msg.data);
+kvm_add_routing_entry(s, &route->kroute);
+kvm_irqchip_commit_routes(s);
+QTAILQ_INSERT_TAIL(&s->msi_hashtab[kvm_hash_msi(msg.data)], route,
+entry);
+}
+assert(route->kroute.type == KVM_IRQ_ROUTING_MSI);
+return kvm_set_irq(s, route->kroute.gsi, 1);
}
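On the older-base side, kvm_irqchip_send_msi() above only uses KVM_SIGNAL_MSI when kvm_direct_msi_allowed is set; otherwise it caches a routing entry per MSI message, bucketed by kvm_hash_msi(), which is simply the low 8 bits of the MSI data word. A tiny standalone illustration of that bucketing (the hash body is copied from the hunk above; the sample values are made up):

    #include <stdio.h>
    #include <stdint.h>

    static unsigned int kvm_hash_msi(uint32_t data)
    {
        return data & 0xff;      /* same expression as in the hunk above */
    }

    int main(void)
    {
        printf("%u %u %u\n",
               kvm_hash_msi(0x00000041),   /* 65                    */
               kvm_hash_msi(0x00010041),   /* 65 again: same bucket */
               kvm_hash_msi(0x00000042));  /* 66: different bucket  */
        return 0;
    }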
int kvm_irqchip_add_msi_route(KVMRouteChange *c, int vector, PCIDevice *dev)
@@ -2085,6 +2204,10 @@ static int kvm_irqchip_assign_irqfd(KVMState *s, EventNotifier *event,
}
}
+if (!kvm_irqfds_enabled()) {
+return -ENOSYS;
+}
return kvm_vm_ioctl(s, KVM_IRQFD, &irqfd);
}
@@ -2245,11 +2368,6 @@ static void kvm_irqchip_create(KVMState *s)
return;
}
-if (kvm_check_extension(s, KVM_CAP_IRQFD) <= 0) {
-fprintf(stderr, "kvm: irqfd not implemented\n");
-exit(1);
-}
/* First probe and see if there's a arch-specific hook to create the
* in-kernel irqchip for us */
ret = kvm_arch_irqchip_create(s);
@@ -2524,8 +2642,22 @@ static int kvm_init(MachineState *ms)
#ifdef KVM_CAP_VCPU_EVENTS
s->vcpu_events = kvm_check_extension(s, KVM_CAP_VCPU_EVENTS);
#endif
+s->robust_singlestep =
+kvm_check_extension(s, KVM_CAP_X86_ROBUST_SINGLESTEP);
+#ifdef KVM_CAP_DEBUGREGS
+s->debugregs = kvm_check_extension(s, KVM_CAP_DEBUGREGS);
+#endif
s->max_nested_state_len = kvm_check_extension(s, KVM_CAP_NESTED_STATE);
+#ifdef KVM_CAP_IRQ_ROUTING
+kvm_direct_msi_allowed = (kvm_check_extension(s, KVM_CAP_SIGNAL_MSI) > 0);
+#endif
+s->intx_set_mask = kvm_check_extension(s, KVM_CAP_PCI_2_3);
s->irq_set_ioctl = KVM_IRQ_LINE;
if (kvm_check_extension(s, KVM_CAP_IRQ_INJECT_STATUS)) {
s->irq_set_ioctl = KVM_IRQ_LINE_STATUS;
@@ -2534,12 +2666,21 @@ static int kvm_init(MachineState *ms)
kvm_readonly_mem_allowed =
(kvm_check_extension(s, KVM_CAP_READONLY_MEM) > 0);
+kvm_eventfds_allowed =
+(kvm_check_extension(s, KVM_CAP_IOEVENTFD) > 0);
+kvm_irqfds_allowed =
+(kvm_check_extension(s, KVM_CAP_IRQFD) > 0);
kvm_resamplefds_allowed =
(kvm_check_extension(s, KVM_CAP_IRQFD_RESAMPLE) > 0);
kvm_vm_attributes_allowed =
(kvm_check_extension(s, KVM_CAP_VM_ATTRIBUTES) > 0);
+kvm_ioeventfd_any_length_allowed =
+(kvm_check_extension(s, KVM_CAP_IOEVENTFD_ANY_LENGTH) > 0);
#ifdef KVM_CAP_SET_GUEST_DEBUG
kvm_has_guest_debug =
(kvm_check_extension(s, KVM_CAP_SET_GUEST_DEBUG) > 0);
@@ -2576,16 +2717,24 @@ static int kvm_init(MachineState *ms)
kvm_irqchip_create(s);
}
-s->memory_listener.listener.eventfd_add = kvm_mem_ioeventfd_add;
-s->memory_listener.listener.eventfd_del = kvm_mem_ioeventfd_del;
+if (kvm_eventfds_allowed) {
+s->memory_listener.listener.eventfd_add = kvm_mem_ioeventfd_add;
+s->memory_listener.listener.eventfd_del = kvm_mem_ioeventfd_del;
+}
s->memory_listener.listener.coalesced_io_add = kvm_coalesce_mmio_region;
s->memory_listener.listener.coalesced_io_del = kvm_uncoalesce_mmio_region;
kvm_memory_listener_register(s, &s->memory_listener,
&address_space_memory, 0, "kvm-memory");
-memory_listener_register(&kvm_io_listener,
+if (kvm_eventfds_allowed) {
+memory_listener_register(&kvm_io_listener,
+&address_space_io);
+}
+memory_listener_register(&kvm_coalesced_pio_listener,
&address_space_io);
+s->many_ioeventfds = kvm_check_many_ioeventfds();
s->sync_mmu = !!kvm_vm_check_extension(kvm_state, KVM_CAP_SYNC_MMU);
if (!s->sync_mmu) {
ret = ram_block_discard_disable(true);
@@ -2638,14 +2787,16 @@ static void kvm_handle_io(uint16_t port, MemTxAttrs attrs, void *data, int direc
static int kvm_handle_internal_error(CPUState *cpu, struct kvm_run *run)
{
-int i;
fprintf(stderr, "KVM internal error. Suberror: %d\n",
run->internal.suberror);
-for (i = 0; i < run->internal.ndata; ++i) {
-fprintf(stderr, "extra data[%d]: 0x%016"PRIx64"\n",
-i, (uint64_t)run->internal.data[i]);
+if (kvm_check_extension(kvm_state, KVM_CAP_INTERNAL_ERROR_DATA)) {
+int i;
+for (i = 0; i < run->internal.ndata; ++i) {
+fprintf(stderr, "extra data[%d]: 0x%016"PRIx64"\n",
+i, (uint64_t)run->internal.data[i]);
+}
}
if (run->internal.suberror == KVM_INTERNAL_ERROR_EMULATION) {
fprintf(stderr, "emulation failure\n");
@@ -2700,13 +2851,7 @@ bool kvm_cpu_check_are_resettable(void)
static void do_kvm_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
{
if (!cpu->vcpu_dirty) {
-int ret = kvm_arch_get_registers(cpu);
-if (ret) {
-error_report("Failed to get registers: %s", strerror(-ret));
-cpu_dump_state(cpu, stderr, CPU_DUMP_CODE);
-vm_stop(RUN_STATE_INTERNAL_ERROR);
-}
+kvm_arch_get_registers(cpu);
cpu->vcpu_dirty = true;
}
}
@@ -2720,13 +2865,7 @@ void kvm_cpu_synchronize_state(CPUState *cpu)
static void do_kvm_cpu_synchronize_post_reset(CPUState *cpu, run_on_cpu_data arg)
{
-int ret = kvm_arch_put_registers(cpu, KVM_PUT_RESET_STATE);
-if (ret) {
-error_report("Failed to put registers after reset: %s", strerror(-ret));
-cpu_dump_state(cpu, stderr, CPU_DUMP_CODE);
-vm_stop(RUN_STATE_INTERNAL_ERROR);
-}
+kvm_arch_put_registers(cpu, KVM_PUT_RESET_STATE);
cpu->vcpu_dirty = false;
}
@@ -2737,12 +2876,7 @@ void kvm_cpu_synchronize_post_reset(CPUState *cpu)
static void do_kvm_cpu_synchronize_post_init(CPUState *cpu, run_on_cpu_data arg)
{
-int ret = kvm_arch_put_registers(cpu, KVM_PUT_FULL_STATE);
-if (ret) {
-error_report("Failed to put registers after init: %s", strerror(-ret));
-exit(1);
-}
+kvm_arch_put_registers(cpu, KVM_PUT_FULL_STATE);
cpu->vcpu_dirty = false;
}
@@ -2835,14 +2969,7 @@ int kvm_cpu_exec(CPUState *cpu)
MemTxAttrs attrs;
if (cpu->vcpu_dirty) {
-ret = kvm_arch_put_registers(cpu, KVM_PUT_RUNTIME_STATE);
-if (ret) {
-error_report("Failed to put registers after init: %s",
-strerror(-ret));
-ret = -1;
-break;
-}
+kvm_arch_put_registers(cpu, KVM_PUT_RUNTIME_STATE);
cpu->vcpu_dirty = false;
}
@@ -3139,11 +3266,29 @@ int kvm_has_vcpu_events(void)
return kvm_state->vcpu_events;
}
+int kvm_has_robust_singlestep(void)
+{
+return kvm_state->robust_singlestep;
+}
+int kvm_has_debugregs(void)
+{
+return kvm_state->debugregs;
+}
int kvm_max_nested_state_length(void)
{
return kvm_state->max_nested_state_len;
}
+int kvm_has_many_ioeventfds(void)
+{
+if (!kvm_enabled()) {
+return 0;
+}
+return kvm_state->many_ioeventfds;
+}
int kvm_has_gsi_routing(void)
{
#ifdef KVM_CAP_IRQ_ROUTING
@@ -3153,6 +3298,11 @@ int kvm_has_gsi_routing(void)
#endif
}
+int kvm_has_intx_set_mask(void)
+{
+return kvm_state->intx_set_mask;
+}
bool kvm_arm_supports_user_irq(void)
{
return kvm_check_extension(kvm_state, KVM_CAP_ARM_USER_IRQ);

View File

@@ -1,5 +1,5 @@
-specific_ss.add(files('accel-target.c'))
-system_ss.add(files('accel-system.c', 'accel-blocker.c'))
+specific_ss.add(files('accel-common.c', 'accel-blocker.c'))
+system_ss.add(files('accel-softmmu.c'))
user_ss.add(files('accel-user.c'))
subdir('tcg')

View File

@@ -17,13 +17,17 @@
KVMState *kvm_state;
bool kvm_kernel_irqchip;
bool kvm_async_interrupts_allowed;
+bool kvm_eventfds_allowed;
+bool kvm_irqfds_allowed;
bool kvm_resamplefds_allowed;
bool kvm_msi_via_irqfd_allowed;
bool kvm_gsi_routing_allowed;
bool kvm_gsi_direct_mapping;
bool kvm_allowed;
bool kvm_readonly_mem_allowed;
+bool kvm_ioeventfd_any_length_allowed;
bool kvm_msi_use_devid;
+bool kvm_direct_msi_allowed;
void kvm_flush_coalesced_mmio_buffer(void)
{
@@ -38,6 +42,11 @@ bool kvm_has_sync_mmu(void)
return false;
}
+int kvm_has_many_ioeventfds(void)
+{
+return 0;
+}
int kvm_on_sigbus_vcpu(CPUState *cpu, int code, void *addr)
{
return 1;
@@ -83,6 +92,11 @@ void kvm_irqchip_change_notify(void)
{
}
+int kvm_irqchip_add_adapter_route(KVMState *s, AdapterInfo *adapter)
+{
+return -ENOSYS;
+}
int kvm_irqchip_add_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
EventNotifier *rn, int virq)
{
@@ -95,14 +109,9 @@ int kvm_irqchip_remove_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
return -ENOSYS;
}
-unsigned int kvm_get_max_memslots(void)
+bool kvm_has_free_slot(MachineState *ms)
{
-return 0;
+return false;
}
-unsigned int kvm_get_free_memslots(void)
-{
-return 0;
-}
void kvm_init_cpu_signals(CPUState *cpu)

View File

@@ -1,6 +1,6 @@
-system_stubs_ss = ss.source_set()
-system_stubs_ss.add(when: 'CONFIG_XEN', if_false: files('xen-stub.c'))
-system_stubs_ss.add(when: 'CONFIG_KVM', if_false: files('kvm-stub.c'))
-system_stubs_ss.add(when: 'CONFIG_TCG', if_false: files('tcg-stub.c'))
-specific_ss.add_all(when: ['CONFIG_SYSTEM_ONLY'], if_true: system_stubs_ss)
+sysemu_stubs_ss = ss.source_set()
+sysemu_stubs_ss.add(when: 'CONFIG_XEN', if_false: files('xen-stub.c'))
+sysemu_stubs_ss.add(when: 'CONFIG_KVM', if_false: files('kvm-stub.c'))
+sysemu_stubs_ss.add(when: 'CONFIG_TCG', if_false: files('tcg-stub.c'))
+specific_ss.add_all(when: ['CONFIG_SYSTEM_ONLY'], if_true: sysemu_stubs_ss)

View File

@@ -22,6 +22,10 @@ void tlb_set_dirty(CPUState *cpu, vaddr vaddr)
{
}
+void tcg_flush_jmp_cache(CPUState *cpu)
+{
+}
int probe_access_flags(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx,
bool nonfault, void **phost, uintptr_t retaddr)

View File

@@ -73,8 +73,7 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
ABI_TYPE cmpv, ABI_TYPE newv, ABI_TYPE cmpv, ABI_TYPE newv,
MemOpIdx oi, uintptr_t retaddr) MemOpIdx oi, uintptr_t retaddr)
{ {
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr);
DATA_SIZE, retaddr);
DATA_TYPE ret; DATA_TYPE ret;
#if DATA_SIZE == 16 #if DATA_SIZE == 16
@@ -91,8 +90,7 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val, ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val,
MemOpIdx oi, uintptr_t retaddr) MemOpIdx oi, uintptr_t retaddr)
{ {
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr);
DATA_SIZE, retaddr);
DATA_TYPE ret; DATA_TYPE ret;
ret = qatomic_xchg__nocheck(haddr, val); ret = qatomic_xchg__nocheck(haddr, val);
@@ -106,7 +104,7 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE val, MemOpIdx oi, uintptr_t retaddr) \ ABI_TYPE val, MemOpIdx oi, uintptr_t retaddr) \
{ \ { \
DATA_TYPE *haddr, ret; \ DATA_TYPE *haddr, ret; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \ haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \
ret = qatomic_##X(haddr, val); \ ret = qatomic_##X(haddr, val); \
ATOMIC_MMU_CLEANUP; \ ATOMIC_MMU_CLEANUP; \
atomic_trace_rmw_post(env, addr, oi); \ atomic_trace_rmw_post(env, addr, oi); \
@@ -137,7 +135,7 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE xval, MemOpIdx oi, uintptr_t retaddr) \ ABI_TYPE xval, MemOpIdx oi, uintptr_t retaddr) \
{ \ { \
XDATA_TYPE *haddr, cmp, old, new, val = xval; \ XDATA_TYPE *haddr, cmp, old, new, val = xval; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \ haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \
smp_mb(); \ smp_mb(); \
cmp = qatomic_read__nocheck(haddr); \ cmp = qatomic_read__nocheck(haddr); \
do { \ do { \
@@ -178,8 +176,7 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
ABI_TYPE cmpv, ABI_TYPE newv, ABI_TYPE cmpv, ABI_TYPE newv,
MemOpIdx oi, uintptr_t retaddr) MemOpIdx oi, uintptr_t retaddr)
{ {
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr);
DATA_SIZE, retaddr);
DATA_TYPE ret; DATA_TYPE ret;
#if DATA_SIZE == 16 #if DATA_SIZE == 16
@@ -196,8 +193,7 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val, ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val,
MemOpIdx oi, uintptr_t retaddr) MemOpIdx oi, uintptr_t retaddr)
{ {
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr);
DATA_SIZE, retaddr);
ABI_TYPE ret; ABI_TYPE ret;
ret = qatomic_xchg__nocheck(haddr, BSWAP(val)); ret = qatomic_xchg__nocheck(haddr, BSWAP(val));
@@ -211,7 +207,7 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE val, MemOpIdx oi, uintptr_t retaddr) \ ABI_TYPE val, MemOpIdx oi, uintptr_t retaddr) \
{ \ { \
DATA_TYPE *haddr, ret; \ DATA_TYPE *haddr, ret; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \ haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \
ret = qatomic_##X(haddr, BSWAP(val)); \ ret = qatomic_##X(haddr, BSWAP(val)); \
ATOMIC_MMU_CLEANUP; \ ATOMIC_MMU_CLEANUP; \
atomic_trace_rmw_post(env, addr, oi); \ atomic_trace_rmw_post(env, addr, oi); \
@@ -239,7 +235,7 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE xval, MemOpIdx oi, uintptr_t retaddr) \ ABI_TYPE xval, MemOpIdx oi, uintptr_t retaddr) \
{ \ { \
XDATA_TYPE *haddr, ldo, ldn, old, new, val = xval; \ XDATA_TYPE *haddr, ldo, ldn, old, new, val = xval; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \ haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \
smp_mb(); \ smp_mb(); \
ldn = qatomic_read__nocheck(haddr); \ ldn = qatomic_read__nocheck(haddr); \
do { \ do { \

View File

@@ -20,8 +20,9 @@
#include "qemu/osdep.h"
#include "sysemu/cpus.h"
#include "sysemu/tcg.h"
+#include "exec/exec-all.h"
#include "qemu/plugin.h"
-#include "internal-common.h"
+#include "internal.h"
bool tcg_allowed;
@@ -35,7 +36,7 @@ void cpu_loop_exit_noexc(CPUState *cpu)
void cpu_loop_exit(CPUState *cpu)
{
/* Undo the setting in cpu_tb_exec. */
-cpu->neg.can_do_io = true;
+cpu->can_do_io = 1;
/* Undo any setting in generated code. */
qemu_plugin_disable_mem_helpers(cpu);
siglongjmp(cpu->jmp_env, 1);

View File

@@ -42,8 +42,7 @@
#include "tb-jmp-cache.h"
#include "tb-hash.h"
#include "tb-context.h"
-#include "internal-common.h"
-#include "internal-target.h"
+#include "internal.h"
/* -icount align implementation. */
@@ -74,7 +73,7 @@ static void align_clocks(SyncClocks *sc, CPUState *cpu)
return;
}
-cpu_icount = cpu->icount_extra + cpu->neg.icount_decr.u16.low;
+cpu_icount = cpu->icount_extra + cpu_neg(cpu)->icount_decr.u16.low;
sc->diff_clk += icount_to_ns(sc->last_cpu_icount - cpu_icount);
sc->last_cpu_icount = cpu_icount;
@@ -125,7 +124,7 @@ static void init_delay_params(SyncClocks *sc, CPUState *cpu)
sc->realtime_clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT);
sc->diff_clk = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) - sc->realtime_clock;
sc->last_cpu_icount
-= cpu->icount_extra + cpu->neg.icount_decr.u16.low;
+= cpu->icount_extra + cpu_neg(cpu)->icount_decr.u16.low;
if (sc->diff_clk < max_delay) {
max_delay = sc->diff_clk;
}
@@ -223,7 +222,7 @@ static TranslationBlock *tb_htable_lookup(CPUState *cpu, vaddr pc,
struct tb_desc desc;
uint32_t h;
-desc.env = cpu_env(cpu);
+desc.env = cpu->env_ptr;
desc.cs_base = cs_base;
desc.flags = flags;
desc.cflags = cflags;
@@ -445,7 +444,7 @@ const void *HELPER(lookup_tb_ptr)(CPUArchState *env)
static inline TranslationBlock * QEMU_DISABLE_CFI
cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
{
-CPUArchState *env = cpu_env(cpu);
+CPUArchState *env = cpu->env_ptr;
uintptr_t ret;
TranslationBlock *last_tb;
const void *tb_ptr = itb->tc.ptr;
@@ -456,7 +455,7 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
qemu_thread_jit_execute();
ret = tcg_qemu_tb_exec(env, tb_ptr);
-cpu->neg.can_do_io = true;
+cpu->can_do_io = 1;
qemu_plugin_disable_mem_helpers(cpu);
/*
* TODO: Delay swapping back to the read-write region of the TB
@@ -566,7 +565,7 @@ static void cpu_exec_longjmp_cleanup(CPUState *cpu)
void cpu_exec_step_atomic(CPUState *cpu)
{
-CPUArchState *env = cpu_env(cpu);
+CPUArchState *env = cpu->env_ptr;
TranslationBlock *tb;
vaddr pc;
uint64_t cs_base;
@@ -718,10 +717,10 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
if (cpu->exception_index < 0) {
#ifndef CONFIG_USER_ONLY
if (replay_has_exception()
-&& cpu->neg.icount_decr.u16.low + cpu->icount_extra == 0) {
+&& cpu_neg(cpu)->icount_decr.u16.low + cpu->icount_extra == 0) {
/* Execute just one insn to trigger exception pending in the log */
cpu->cflags_next_tb = (curr_cflags(cpu) & ~CF_USE_ICOUNT)
-| CF_LAST_IO | CF_NOIRQ | 1;
+| CF_NOIRQ | 1;
}
#endif
return false;
@@ -808,7 +807,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
* Ensure zeroing happens before reading cpu->exit_request or
* cpu->interrupt_request (see also smp_wmb in cpu_exit())
*/
-qatomic_set_mb(&cpu->neg.icount_decr.u16.high, 0);
+qatomic_set_mb(&cpu_neg(cpu)->icount_decr.u16.high, 0);
if (unlikely(qatomic_read(&cpu->interrupt_request))) {
int interrupt_request;
@@ -899,7 +898,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
if (unlikely(qatomic_read(&cpu->exit_request))
|| (icount_enabled()
&& (cpu->cflags_next_tb == -1 || cpu->cflags_next_tb & CF_USE_ICOUNT)
-&& cpu->neg.icount_decr.u16.low + cpu->icount_extra == 0)) {
+&& cpu_neg(cpu)->icount_decr.u16.low + cpu->icount_extra == 0)) {
qatomic_set(&cpu->exit_request, 0);
if (cpu->exception_index == -1) {
cpu->exception_index = EXCP_INTERRUPT;
@@ -924,7 +923,7 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
}
*last_tb = NULL;
-insns_left = qatomic_read(&cpu->neg.icount_decr.u32);
+insns_left = qatomic_read(&cpu_neg(cpu)->icount_decr.u32);
if (insns_left < 0) {
/* Something asked us to stop executing chained TBs; just
* continue round the main loop. Whatever requested the exit
@@ -943,7 +942,7 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
icount_update(cpu);
/* Refill decrementer and continue execution. */
insns_left = MIN(0xffff, cpu->icount_budget);
-cpu->neg.icount_decr.u16.low = insns_left;
+cpu_neg(cpu)->icount_decr.u16.low = insns_left;
cpu->icount_extra = cpu->icount_budget - insns_left;
/*
@@ -977,7 +976,7 @@ cpu_exec_loop(CPUState *cpu, SyncClocks *sc)
uint64_t cs_base;
uint32_t flags, cflags;
-cpu_get_tb_cpu_state(cpu_env(cpu), &pc, &cs_base, &flags);
+cpu_get_tb_cpu_state(cpu->env_ptr, &pc, &cs_base, &flags);
/*
* When requested, use an exact setting for cflags for the next
@@ -1089,7 +1088,7 @@ int cpu_exec(CPUState *cpu)
return ret;
}
-bool tcg_exec_realizefn(CPUState *cpu, Error **errp)
+void tcg_exec_realizefn(CPUState *cpu, Error **errp)
{
static bool tcg_target_initialized;
CPUClass *cc = CPU_GET_CLASS(cpu);
@@ -1105,8 +1104,6 @@ bool tcg_exec_realizefn(CPUState *cpu, Error **errp)
tcg_iommu_init_notifier_list(cpu);
#endif /* !CONFIG_USER_ONLY */
/* qemu_plugin_vcpu_init_hook delayed until cpu_index assigned. */
-return true;
}
/* undo the initializations in reverse order */
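The last two hunks change tcg_exec_realizefn() between returning void and returning bool (adding the trailing "return true"). A hedged sketch of what a caller on the bool-returning side looks like; the wrapper name is hypothetical, the call itself matches the signature shown above:

    static bool my_tcg_cpu_realize(CPUState *cpu, Error **errp)
    {
        if (!tcg_exec_realizefn(cpu, errp)) {
            return false;        /* propagate failure instead of ignoring errp */
        }
        /* further per-CPU TCG setup would follow here */
        return true;
    }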

File diff suppressed because it is too large

View File

@@ -1,26 +0,0 @@
/*
* Internal execution defines for qemu (target agnostic)
*
* Copyright (c) 2003 Fabrice Bellard
*
* SPDX-License-Identifier: LGPL-2.1-or-later
*/
#ifndef ACCEL_TCG_INTERNAL_COMMON_H
#define ACCEL_TCG_INTERNAL_COMMON_H
#include "exec/translation-block.h"
extern int64_t max_delay;
extern int64_t max_advance;
/*
* Return true if CS is not running in parallel with other cpus, either
* because there are no other cpus or we are within an exclusive context.
*/
static inline bool cpu_in_serial_context(CPUState *cs)
{
return !(cs->tcg_cflags & CF_PARALLEL) || cpu_in_exclusive_context(cs);
}
#endif
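cpu_in_serial_context(), which this comparison moves between the internal headers, is the guard used to skip atomicity work when no other vCPU can race with the current one. A minimal usage sketch under that assumption (do_update_fast/do_update_atomic are hypothetical helpers):

    static void update_shared_state(CPUState *cs)
    {
        if (cpu_in_serial_context(cs)) {
            do_update_fast();       /* no other vCPU can observe a torn update */
        } else {
            do_update_atomic();     /* must use atomics or an exclusive section */
        }
    }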

View File

@@ -1,13 +1,13 @@
/*
-* Internal execution defines for qemu (target specific)
+* Internal execution defines for qemu
*
* Copyright (c) 2003 Fabrice Bellard
*
* SPDX-License-Identifier: LGPL-2.1-or-later
*/
-#ifndef ACCEL_TCG_INTERNAL_TARGET_H
-#define ACCEL_TCG_INTERNAL_TARGET_H
+#ifndef ACCEL_TCG_INTERNAL_H
+#define ACCEL_TCG_INTERNAL_H
#include "exec/exec-all.h"
#include "exec/translate-all.h"
@@ -80,9 +80,6 @@ bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc);
void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
uintptr_t host_pc);
-bool tcg_exec_realizefn(CPUState *cpu, Error **errp);
-void tcg_exec_unrealizefn(CPUState *cpu);
/* Return the current PC from CPU, which may be cached in TB. */
static inline vaddr log_pc(CPUState *cpu, const TranslationBlock *tb)
{
@@ -93,6 +90,18 @@ static inline vaddr log_pc(CPUState *cpu, const TranslationBlock *tb)
}
}
+/*
+ * Return true if CS is not running in parallel with other cpus, either
+ * because there are no other cpus or we are within an exclusive context.
+ */
+static inline bool cpu_in_serial_context(CPUState *cs)
+{
+return !(cs->tcg_cflags & CF_PARALLEL) || cpu_in_exclusive_context(cs);
+}
+extern int64_t max_delay;
+extern int64_t max_advance;
extern bool one_insn_per_tb;
/**

View File

@@ -26,7 +26,7 @@
* If the operation must be split into two operations to be * If the operation must be split into two operations to be
* examined separately for atomicity, return -lg2. * examined separately for atomicity, return -lg2.
*/ */
static int required_atomicity(CPUState *cpu, uintptr_t p, MemOp memop) static int required_atomicity(CPUArchState *env, uintptr_t p, MemOp memop)
{ {
MemOp atom = memop & MO_ATOM_MASK; MemOp atom = memop & MO_ATOM_MASK;
MemOp size = memop & MO_SIZE; MemOp size = memop & MO_SIZE;
@@ -93,7 +93,7 @@ static int required_atomicity(CPUState *cpu, uintptr_t p, MemOp memop)
* host atomicity in order to avoid racing. This reduction * host atomicity in order to avoid racing. This reduction
* avoids looping with cpu_loop_exit_atomic. * avoids looping with cpu_loop_exit_atomic.
*/ */
if (cpu_in_serial_context(cpu)) { if (cpu_in_serial_context(env_cpu(env))) {
return MO_8; return MO_8;
} }
return atmax; return atmax;
@@ -139,14 +139,14 @@ static inline uint64_t load_atomic8(void *pv)
/** /**
* load_atomic8_or_exit: * load_atomic8_or_exit:
* @cpu: generic cpu state * @env: cpu context
* @ra: host unwind address * @ra: host unwind address
* @pv: host address * @pv: host address
* *
* Atomically load 8 aligned bytes from @pv. * Atomically load 8 aligned bytes from @pv.
* If this is not possible, longjmp out to restart serially. * If this is not possible, longjmp out to restart serially.
*/ */
static uint64_t load_atomic8_or_exit(CPUState *cpu, uintptr_t ra, void *pv) static uint64_t load_atomic8_or_exit(CPUArchState *env, uintptr_t ra, void *pv)
{ {
if (HAVE_al8) { if (HAVE_al8) {
return load_atomic8(pv); return load_atomic8(pv);
@@ -168,19 +168,19 @@ static uint64_t load_atomic8_or_exit(CPUState *cpu, uintptr_t ra, void *pv)
#endif #endif
/* Ultimate fallback: re-execute in serial context. */ /* Ultimate fallback: re-execute in serial context. */
cpu_loop_exit_atomic(cpu, ra); cpu_loop_exit_atomic(env_cpu(env), ra);
} }
/** /**
* load_atomic16_or_exit: * load_atomic16_or_exit:
* @cpu: generic cpu state * @env: cpu context
* @ra: host unwind address * @ra: host unwind address
* @pv: host address * @pv: host address
* *
* Atomically load 16 aligned bytes from @pv. * Atomically load 16 aligned bytes from @pv.
* If this is not possible, longjmp out to restart serially. * If this is not possible, longjmp out to restart serially.
*/ */
static Int128 load_atomic16_or_exit(CPUState *cpu, uintptr_t ra, void *pv) static Int128 load_atomic16_or_exit(CPUArchState *env, uintptr_t ra, void *pv)
{ {
Int128 *p = __builtin_assume_aligned(pv, 16); Int128 *p = __builtin_assume_aligned(pv, 16);
@@ -212,7 +212,7 @@ static Int128 load_atomic16_or_exit(CPUState *cpu, uintptr_t ra, void *pv)
} }
/* Ultimate fallback: re-execute in serial context. */ /* Ultimate fallback: re-execute in serial context. */
cpu_loop_exit_atomic(cpu, ra); cpu_loop_exit_atomic(env_cpu(env), ra);
} }
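load_atomic8_or_exit() and load_atomic16_or_exit() above share one shape: try the host's cheap atomic primitives first and, if none can honour the guest's required atomicity, longjmp out through cpu_loop_exit_atomic() so the access is retried inside the exclusive (serial) loop. Schematically, using the newer-base CPUState-taking signature from the hunk (a sketch, not the full function):

    static uint64_t load_8_atomically_or_exit(CPUState *cpu, uintptr_t ra, void *pv)
    {
        if (HAVE_al8) {
            return load_atomic8(pv);       /* host has an atomic 8-byte load */
        }
        /* ... further host-specific fallbacks elided ... */
        cpu_loop_exit_atomic(cpu, ra);     /* restart the access serially */
    }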
/** /**
@@ -263,7 +263,7 @@ static uint64_t load_atom_extract_al8x2(void *pv)
/** /**
* load_atom_extract_al8_or_exit: * load_atom_extract_al8_or_exit:
* @cpu: generic cpu state * @env: cpu context
* @ra: host unwind address * @ra: host unwind address
* @pv: host address * @pv: host address
* @s: object size in bytes, @s <= 4. * @s: object size in bytes, @s <= 4.
@@ -273,7 +273,7 @@ static uint64_t load_atom_extract_al8x2(void *pv)
* 8-byte load and extract. * 8-byte load and extract.
* The value is returned in the low bits of a uint32_t. * The value is returned in the low bits of a uint32_t.
*/ */
static uint32_t load_atom_extract_al8_or_exit(CPUState *cpu, uintptr_t ra, static uint32_t load_atom_extract_al8_or_exit(CPUArchState *env, uintptr_t ra,
void *pv, int s) void *pv, int s)
{ {
uintptr_t pi = (uintptr_t)pv; uintptr_t pi = (uintptr_t)pv;
@@ -281,12 +281,12 @@ static uint32_t load_atom_extract_al8_or_exit(CPUState *cpu, uintptr_t ra,
int shr = (HOST_BIG_ENDIAN ? 8 - s - o : o) * 8; int shr = (HOST_BIG_ENDIAN ? 8 - s - o : o) * 8;
pv = (void *)(pi & ~7); pv = (void *)(pi & ~7);
return load_atomic8_or_exit(cpu, ra, pv) >> shr; return load_atomic8_or_exit(env, ra, pv) >> shr;
} }
/** /**
* load_atom_extract_al16_or_exit: * load_atom_extract_al16_or_exit:
* @cpu: generic cpu state * @env: cpu context
* @ra: host unwind address * @ra: host unwind address
* @p: host address * @p: host address
* @s: object size in bytes, @s <= 8. * @s: object size in bytes, @s <= 8.
@@ -299,7 +299,7 @@ static uint32_t load_atom_extract_al8_or_exit(CPUState *cpu, uintptr_t ra,
* *
* If this is not possible, longjmp out to restart serially. * If this is not possible, longjmp out to restart serially.
*/ */
static uint64_t load_atom_extract_al16_or_exit(CPUState *cpu, uintptr_t ra, static uint64_t load_atom_extract_al16_or_exit(CPUArchState *env, uintptr_t ra,
void *pv, int s) void *pv, int s)
{ {
uintptr_t pi = (uintptr_t)pv; uintptr_t pi = (uintptr_t)pv;
@@ -312,7 +312,7 @@ static uint64_t load_atom_extract_al16_or_exit(CPUState *cpu, uintptr_t ra,
* Provoke SIGBUS if possible otherwise. * Provoke SIGBUS if possible otherwise.
*/ */
pv = (void *)(pi & ~7); pv = (void *)(pi & ~7);
r = load_atomic16_or_exit(cpu, ra, pv); r = load_atomic16_or_exit(env, ra, pv);
r = int128_urshift(r, shr); r = int128_urshift(r, shr);
return int128_getlo(r); return int128_getlo(r);
@@ -394,7 +394,7 @@ static inline uint64_t load_atom_8_by_8_or_4(void *pv)
* *
* Load 2 bytes from @p, honoring the atomicity of @memop. * Load 2 bytes from @p, honoring the atomicity of @memop.
*/ */
static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra, static uint16_t load_atom_2(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop) void *pv, MemOp memop)
{ {
uintptr_t pi = (uintptr_t)pv; uintptr_t pi = (uintptr_t)pv;
@@ -410,7 +410,7 @@ static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra,
} }
} }
atmax = required_atomicity(cpu, pi, memop); atmax = required_atomicity(env, pi, memop);
switch (atmax) { switch (atmax) {
case MO_8: case MO_8:
return lduw_he_p(pv); return lduw_he_p(pv);
@@ -421,9 +421,9 @@ static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra,
return load_atomic4(pv - 1) >> 8; return load_atomic4(pv - 1) >> 8;
} }
if ((pi & 15) != 7) { if ((pi & 15) != 7) {
return load_atom_extract_al8_or_exit(cpu, ra, pv, 2); return load_atom_extract_al8_or_exit(env, ra, pv, 2);
} }
return load_atom_extract_al16_or_exit(cpu, ra, pv, 2); return load_atom_extract_al16_or_exit(env, ra, pv, 2);
default: default:
g_assert_not_reached(); g_assert_not_reached();
} }
@@ -436,7 +436,7 @@ static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra,
* *
* Load 4 bytes from @p, honoring the atomicity of @memop. * Load 4 bytes from @p, honoring the atomicity of @memop.
*/ */
static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra, static uint32_t load_atom_4(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop) void *pv, MemOp memop)
{ {
uintptr_t pi = (uintptr_t)pv; uintptr_t pi = (uintptr_t)pv;
@@ -452,7 +452,7 @@ static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra,
} }
} }
atmax = required_atomicity(cpu, pi, memop); atmax = required_atomicity(env, pi, memop);
switch (atmax) { switch (atmax) {
case MO_8: case MO_8:
case MO_16: case MO_16:
@@ -466,9 +466,9 @@ static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra,
return load_atom_extract_al4x2(pv); return load_atom_extract_al4x2(pv);
case MO_32: case MO_32:
if (!(pi & 4)) { if (!(pi & 4)) {
return load_atom_extract_al8_or_exit(cpu, ra, pv, 4); return load_atom_extract_al8_or_exit(env, ra, pv, 4);
} }
return load_atom_extract_al16_or_exit(cpu, ra, pv, 4); return load_atom_extract_al16_or_exit(env, ra, pv, 4);
default: default:
g_assert_not_reached(); g_assert_not_reached();
} }
@@ -481,7 +481,7 @@ static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra,
* *
* Load 8 bytes from @p, honoring the atomicity of @memop. * Load 8 bytes from @p, honoring the atomicity of @memop.
*/ */
static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra, static uint64_t load_atom_8(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop) void *pv, MemOp memop)
{ {
uintptr_t pi = (uintptr_t)pv; uintptr_t pi = (uintptr_t)pv;
@@ -498,12 +498,12 @@ static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra,
return load_atom_extract_al16_or_al8(pv, 8); return load_atom_extract_al16_or_al8(pv, 8);
} }
atmax = required_atomicity(cpu, pi, memop); atmax = required_atomicity(env, pi, memop);
if (atmax == MO_64) { if (atmax == MO_64) {
if (!HAVE_al8 && (pi & 7) == 0) { if (!HAVE_al8 && (pi & 7) == 0) {
load_atomic8_or_exit(cpu, ra, pv); load_atomic8_or_exit(env, ra, pv);
} }
return load_atom_extract_al16_or_exit(cpu, ra, pv, 8); return load_atom_extract_al16_or_exit(env, ra, pv, 8);
} }
if (HAVE_al8_fast) { if (HAVE_al8_fast) {
return load_atom_extract_al8x2(pv); return load_atom_extract_al8x2(pv);
@@ -519,7 +519,7 @@ static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra,
if (HAVE_al8) { if (HAVE_al8) {
return load_atom_extract_al8x2(pv); return load_atom_extract_al8x2(pv);
} }
cpu_loop_exit_atomic(cpu, ra); cpu_loop_exit_atomic(env_cpu(env), ra);
default: default:
g_assert_not_reached(); g_assert_not_reached();
} }
@@ -532,7 +532,7 @@ static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra,
* *
* Load 16 bytes from @p, honoring the atomicity of @memop. * Load 16 bytes from @p, honoring the atomicity of @memop.
*/ */
static Int128 load_atom_16(CPUState *cpu, uintptr_t ra, static Int128 load_atom_16(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop) void *pv, MemOp memop)
{ {
uintptr_t pi = (uintptr_t)pv; uintptr_t pi = (uintptr_t)pv;
@@ -548,7 +548,7 @@ static Int128 load_atom_16(CPUState *cpu, uintptr_t ra,
return atomic16_read_ro(pv); return atomic16_read_ro(pv);
} }
atmax = required_atomicity(cpu, pi, memop); atmax = required_atomicity(env, pi, memop);
switch (atmax) { switch (atmax) {
case MO_8: case MO_8:
memcpy(&r, pv, 16); memcpy(&r, pv, 16);
@@ -563,20 +563,20 @@ static Int128 load_atom_16(CPUState *cpu, uintptr_t ra,
break; break;
case MO_64: case MO_64:
if (!HAVE_al8) { if (!HAVE_al8) {
cpu_loop_exit_atomic(cpu, ra); cpu_loop_exit_atomic(env_cpu(env), ra);
} }
a = load_atomic8(pv); a = load_atomic8(pv);
b = load_atomic8(pv + 8); b = load_atomic8(pv + 8);
break; break;
case -MO_64: case -MO_64:
if (!HAVE_al8) { if (!HAVE_al8) {
cpu_loop_exit_atomic(cpu, ra); cpu_loop_exit_atomic(env_cpu(env), ra);
} }
a = load_atom_extract_al8x2(pv); a = load_atom_extract_al8x2(pv);
b = load_atom_extract_al8x2(pv + 8); b = load_atom_extract_al8x2(pv + 8);
break; break;
case MO_128: case MO_128:
return load_atomic16_or_exit(cpu, ra, pv); return load_atomic16_or_exit(env, ra, pv);
default: default:
g_assert_not_reached(); g_assert_not_reached();
} }
@@ -825,7 +825,7 @@ static uint64_t store_whole_le16(void *pv, int size, Int128 val_le)
int sh = o * 8; int sh = o * 8;
Int128 m, v; Int128 m, v;
qemu_build_assert(HAVE_CMPXCHG128); qemu_build_assert(HAVE_ATOMIC128_RW);
/* Like MAKE_64BIT_MASK(0, sz), but larger. */ /* Like MAKE_64BIT_MASK(0, sz), but larger. */
if (sz <= 64) { if (sz <= 64) {
@@ -857,7 +857,7 @@ static uint64_t store_whole_le16(void *pv, int size, Int128 val_le)
* *
* Store 2 bytes to @p, honoring the atomicity of @memop. * Store 2 bytes to @p, honoring the atomicity of @memop.
*/ */
static void store_atom_2(CPUState *cpu, uintptr_t ra, static void store_atom_2(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop, uint16_t val) void *pv, MemOp memop, uint16_t val)
{ {
uintptr_t pi = (uintptr_t)pv; uintptr_t pi = (uintptr_t)pv;
@@ -868,7 +868,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
return; return;
} }
atmax = required_atomicity(cpu, pi, memop); atmax = required_atomicity(env, pi, memop);
if (atmax == MO_8) { if (atmax == MO_8) {
stw_he_p(pv, val); stw_he_p(pv, val);
return; return;
@@ -887,7 +887,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
return; return;
} }
} else if ((pi & 15) == 7) { } else if ((pi & 15) == 7) {
if (HAVE_CMPXCHG128) { if (HAVE_ATOMIC128_RW) {
Int128 v = int128_lshift(int128_make64(val), 56); Int128 v = int128_lshift(int128_make64(val), 56);
Int128 m = int128_lshift(int128_make64(0xffff), 56); Int128 m = int128_lshift(int128_make64(0xffff), 56);
store_atom_insert_al16(pv - 7, v, m); store_atom_insert_al16(pv - 7, v, m);
@@ -897,7 +897,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
g_assert_not_reached(); g_assert_not_reached();
} }
cpu_loop_exit_atomic(cpu, ra); cpu_loop_exit_atomic(env_cpu(env), ra);
} }
/** /**
@@ -908,7 +908,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
* *
* Store 4 bytes to @p, honoring the atomicity of @memop. * Store 4 bytes to @p, honoring the atomicity of @memop.
*/ */
static void store_atom_4(CPUState *cpu, uintptr_t ra, static void store_atom_4(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop, uint32_t val) void *pv, MemOp memop, uint32_t val)
{ {
uintptr_t pi = (uintptr_t)pv; uintptr_t pi = (uintptr_t)pv;
@@ -919,7 +919,7 @@ static void store_atom_4(CPUState *cpu, uintptr_t ra,
return; return;
} }
atmax = required_atomicity(cpu, pi, memop); atmax = required_atomicity(env, pi, memop);
switch (atmax) { switch (atmax) {
case MO_8: case MO_8:
stl_he_p(pv, val); stl_he_p(pv, val);
@@ -956,12 +956,12 @@ static void store_atom_4(CPUState *cpu, uintptr_t ra,
return; return;
} }
} else { } else {
if (HAVE_CMPXCHG128) { if (HAVE_ATOMIC128_RW) {
store_whole_le16(pv, 4, int128_make64(cpu_to_le32(val))); store_whole_le16(pv, 4, int128_make64(cpu_to_le32(val)));
return; return;
} }
} }
cpu_loop_exit_atomic(cpu, ra); cpu_loop_exit_atomic(env_cpu(env), ra);
default: default:
g_assert_not_reached(); g_assert_not_reached();
} }
@@ -975,7 +975,7 @@ static void store_atom_4(CPUState *cpu, uintptr_t ra,
* *
* Store 8 bytes to @p, honoring the atomicity of @memop. * Store 8 bytes to @p, honoring the atomicity of @memop.
*/ */
static void store_atom_8(CPUState *cpu, uintptr_t ra, static void store_atom_8(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop, uint64_t val) void *pv, MemOp memop, uint64_t val)
{ {
uintptr_t pi = (uintptr_t)pv; uintptr_t pi = (uintptr_t)pv;
@@ -986,7 +986,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
return; return;
} }
atmax = required_atomicity(cpu, pi, memop); atmax = required_atomicity(env, pi, memop);
switch (atmax) { switch (atmax) {
case MO_8: case MO_8:
stq_he_p(pv, val); stq_he_p(pv, val);
@@ -1021,7 +1021,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
} }
break; break;
case MO_64: case MO_64:
if (HAVE_CMPXCHG128) { if (HAVE_ATOMIC128_RW) {
store_whole_le16(pv, 8, int128_make64(cpu_to_le64(val))); store_whole_le16(pv, 8, int128_make64(cpu_to_le64(val)));
return; return;
} }
@@ -1029,7 +1029,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
default: default:
g_assert_not_reached(); g_assert_not_reached();
} }
cpu_loop_exit_atomic(cpu, ra); cpu_loop_exit_atomic(env_cpu(env), ra);
} }
/** /**
@@ -1040,7 +1040,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
* *
* Store 16 bytes to @p, honoring the atomicity of @memop. * Store 16 bytes to @p, honoring the atomicity of @memop.
*/ */
static void store_atom_16(CPUState *cpu, uintptr_t ra, static void store_atom_16(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop, Int128 val) void *pv, MemOp memop, Int128 val)
{ {
uintptr_t pi = (uintptr_t)pv; uintptr_t pi = (uintptr_t)pv;
@@ -1052,7 +1052,7 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
return; return;
} }
atmax = required_atomicity(cpu, pi, memop); atmax = required_atomicity(env, pi, memop);
a = HOST_BIG_ENDIAN ? int128_gethi(val) : int128_getlo(val); a = HOST_BIG_ENDIAN ? int128_gethi(val) : int128_getlo(val);
b = HOST_BIG_ENDIAN ? int128_getlo(val) : int128_gethi(val); b = HOST_BIG_ENDIAN ? int128_getlo(val) : int128_gethi(val);
@@ -1076,7 +1076,7 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
} }
break; break;
case -MO_64: case -MO_64:
if (HAVE_CMPXCHG128) { if (HAVE_ATOMIC128_RW) {
uint64_t val_le; uint64_t val_le;
int s2 = pi & 15; int s2 = pi & 15;
int s1 = 16 - s2; int s1 = 16 - s2;
@@ -1103,9 +1103,13 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
} }
break; break;
case MO_128: case MO_128:
if (HAVE_ATOMIC128_RW) {
atomic16_set(pv, val);
return;
}
break; break;
default: default:
g_assert_not_reached(); g_assert_not_reached();
} }
cpu_loop_exit_atomic(cpu, ra); cpu_loop_exit_atomic(env_cpu(env), ra);
} }
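The two sides of the hunks above differ mainly in whether these atomicity helpers receive a CPUArchState or a CPUState (note the env_cpu() conversion at each cpu_loop_exit_atomic() call site), and in whether the 16-byte store paths are gated on HAVE_CMPXCHG128 or HAVE_ATOMIC128_RW. For reference, the relationship between the two pointer types is roughly the following pair of inline helpers, paraphrased from memory rather than copied from the tree, so treat the exact bodies as an assumption:

static inline ArchCPU *env_archcpu(CPUArchState *env)
{
    /* CPUArchState is embedded in ArchCPU as the 'env' member. */
    return container_of(env, ArchCPU, env);
}

static inline CPUState *env_cpu(CPUArchState *env)
{
    /* CPUState is embedded in ArchCPU as its first member, 'parent_obj'. */
    return &env_archcpu(env)->parent_obj;
}

Either pointer can therefore be recovered from the other at compile-time-constant cost, which is why the change is purely a matter of which type the prototypes advertise.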

View File

@@ -8,231 +8,6 @@
* This work is licensed under the terms of the GNU GPL, version 2 or later. * This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory. * See the COPYING file in the top-level directory.
*/ */
/*
* Load helpers for tcg-ldst.h
*/
tcg_target_ulong helper_ldub_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
return do_ld1_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
tcg_target_ulong helper_lduw_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
return do_ld2_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
tcg_target_ulong helper_ldul_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
return do_ld4_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
uint64_t helper_ldq_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
return do_ld8_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
/*
* Provide signed versions of the load routines as well. We can of course
* avoid this for 64-bit data, or for 32-bit data on 32-bit host.
*/
tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
return (int8_t)helper_ldub_mmu(env, addr, oi, retaddr);
}
tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
return (int16_t)helper_lduw_mmu(env, addr, oi, retaddr);
}
tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
return (int32_t)helper_ldul_mmu(env, addr, oi, retaddr);
}
Int128 helper_ld16_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
return do_ld16_mmu(env_cpu(env), addr, oi, retaddr);
}
Int128 helper_ld_i128(CPUArchState *env, uint64_t addr, uint32_t oi)
{
return helper_ld16_mmu(env, addr, oi, GETPC());
}
/*
* Store helpers for tcg-ldst.h
*/
void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
do_st1_mmu(env_cpu(env), addr, val, oi, ra);
}
void helper_stw_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
do_st2_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_stl_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
do_st4_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_stq_mmu(CPUArchState *env, uint64_t addr, uint64_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
do_st8_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_st16_mmu(CPUArchState *env, uint64_t addr, Int128 val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
do_st16_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_st_i128(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi)
{
helper_st16_mmu(env, addr, val, oi, GETPC());
}
/*
* Load helpers for cpu_ldst.h
*/
static void plugin_load_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi)
{
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
}
uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra)
{
uint8_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_UB);
ret = do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, oi);
return ret;
}
uint16_t cpu_ldw_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint16_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
ret = do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, oi);
return ret;
}
uint32_t cpu_ldl_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint32_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
ret = do_ld4_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, oi);
return ret;
}
uint64_t cpu_ldq_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint64_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
ret = do_ld8_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, oi);
return ret;
}
Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
Int128 ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
ret = do_ld16_mmu(env_cpu(env), addr, oi, ra);
plugin_load_cb(env, addr, oi);
return ret;
}
/*
* Store helpers for cpu_ldst.h
*/
static void plugin_store_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi)
{
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
MemOpIdx oi, uintptr_t retaddr)
{
helper_stb_mmu(env, addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
void cpu_stw_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
do_st2_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
void cpu_stl_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
do_st4_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
void cpu_stq_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
do_st8_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
void cpu_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
do_st16_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
/*
* Wrappers of the above
*/
uint32_t cpu_ldub_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t cpu_ldub_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra) int mmu_idx, uintptr_t ra)
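The cpu_*_mmuidx_ra wrappers that remain at the end of this hunk follow the same pattern as the helpers shown above: build a MemOpIdx and forward to the MemOpIdx-based cpu_*_mmu entry point. A minimal sketch of one such wrapper, written from that pattern rather than copied from the tree:

uint32_t cpu_ldub_mmuidx_ra(CPUArchState *env, abi_ptr addr,
                            int mmu_idx, uintptr_t ra)
{
    /* Combine the access size (unsigned byte) with the MMU index ... */
    MemOpIdx oi = make_memop_idx(MO_UB, mmu_idx);
    /* ... and reuse the MemOpIdx-based load, which also handles plugins. */
    return cpu_ldb_mmu(env, addr, oi, ra);
}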

View File

@@ -1,9 +1,7 @@
tcg_ss = ss.source_set() tcg_ss = ss.source_set()
common_ss.add(when: 'CONFIG_TCG', if_true: files(
'cpu-exec-common.c',
))
tcg_ss.add(files( tcg_ss.add(files(
'tcg-all.c', 'tcg-all.c',
'cpu-exec-common.c',
'cpu-exec.c', 'cpu-exec.c',
'tb-maint.c', 'tb-maint.c',
'tcg-runtime-gvec.c', 'tcg-runtime-gvec.c',
@@ -22,10 +20,6 @@ specific_ss.add_all(when: 'CONFIG_TCG', if_true: tcg_ss)
specific_ss.add(when: ['CONFIG_SYSTEM_ONLY', 'CONFIG_TCG'], if_true: files( specific_ss.add(when: ['CONFIG_SYSTEM_ONLY', 'CONFIG_TCG'], if_true: files(
'cputlb.c', 'cputlb.c',
))
system_ss.add(when: ['CONFIG_TCG'], if_true: files(
'icount-common.c',
'monitor.c', 'monitor.c',
)) ))

View File

@@ -8,7 +8,6 @@
#include "qemu/osdep.h" #include "qemu/osdep.h"
#include "qemu/accel.h" #include "qemu/accel.h"
#include "qemu/qht.h"
#include "qapi/error.h" #include "qapi/error.h"
#include "qapi/type-helpers.h" #include "qapi/type-helpers.h"
#include "qapi/qapi-commands-machine.h" #include "qapi/qapi-commands-machine.h"
@@ -17,8 +16,7 @@
#include "sysemu/cpu-timers.h" #include "sysemu/cpu-timers.h"
#include "sysemu/tcg.h" #include "sysemu/tcg.h"
#include "tcg/tcg.h" #include "tcg/tcg.h"
#include "internal-common.h" #include "internal.h"
#include "tb-context.h"
static void dump_drift_info(GString *buf) static void dump_drift_info(GString *buf)
@@ -52,153 +50,6 @@ static void dump_accel_info(GString *buf)
one_insn_per_tb ? "on" : "off"); one_insn_per_tb ? "on" : "off");
} }
static void print_qht_statistics(struct qht_stats hst, GString *buf)
{
uint32_t hgram_opts;
size_t hgram_bins;
char *hgram;
if (!hst.head_buckets) {
return;
}
g_string_append_printf(buf, "TB hash buckets %zu/%zu "
"(%0.2f%% head buckets used)\n",
hst.used_head_buckets, hst.head_buckets,
(double)hst.used_head_buckets /
hst.head_buckets * 100);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_opts |= QDIST_PR_100X | QDIST_PR_PERCENT;
if (qdist_xmax(&hst.occupancy) - qdist_xmin(&hst.occupancy) == 1) {
hgram_opts |= QDIST_PR_NODECIMAL;
}
hgram = qdist_pr(&hst.occupancy, 10, hgram_opts);
g_string_append_printf(buf, "TB hash occupancy %0.2f%% avg chain occ. "
"Histogram: %s\n",
qdist_avg(&hst.occupancy) * 100, hgram);
g_free(hgram);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_bins = qdist_xmax(&hst.chain) - qdist_xmin(&hst.chain);
if (hgram_bins > 10) {
hgram_bins = 10;
} else {
hgram_bins = 0;
hgram_opts |= QDIST_PR_NODECIMAL | QDIST_PR_NOBINRANGE;
}
hgram = qdist_pr(&hst.chain, hgram_bins, hgram_opts);
g_string_append_printf(buf, "TB hash avg chain %0.3f buckets. "
"Histogram: %s\n",
qdist_avg(&hst.chain), hgram);
g_free(hgram);
}
struct tb_tree_stats {
size_t nb_tbs;
size_t host_size;
size_t target_size;
size_t max_target_size;
size_t direct_jmp_count;
size_t direct_jmp2_count;
size_t cross_page;
};
static gboolean tb_tree_stats_iter(gpointer key, gpointer value, gpointer data)
{
const TranslationBlock *tb = value;
struct tb_tree_stats *tst = data;
tst->nb_tbs++;
tst->host_size += tb->tc.size;
tst->target_size += tb->size;
if (tb->size > tst->max_target_size) {
tst->max_target_size = tb->size;
}
if (tb->page_addr[1] != -1) {
tst->cross_page++;
}
if (tb->jmp_reset_offset[0] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp_count++;
if (tb->jmp_reset_offset[1] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp2_count++;
}
}
return false;
}
static void tlb_flush_counts(size_t *pfull, size_t *ppart, size_t *pelide)
{
CPUState *cpu;
size_t full = 0, part = 0, elide = 0;
CPU_FOREACH(cpu) {
full += qatomic_read(&cpu->neg.tlb.c.full_flush_count);
part += qatomic_read(&cpu->neg.tlb.c.part_flush_count);
elide += qatomic_read(&cpu->neg.tlb.c.elide_flush_count);
}
*pfull = full;
*ppart = part;
*pelide = elide;
}
static void tcg_dump_info(GString *buf)
{
g_string_append_printf(buf, "[TCG profiler not compiled]\n");
}
static void dump_exec_info(GString *buf)
{
struct tb_tree_stats tst = {};
struct qht_stats hst;
size_t nb_tbs, flush_full, flush_part, flush_elide;
tcg_tb_foreach(tb_tree_stats_iter, &tst);
nb_tbs = tst.nb_tbs;
/* XXX: avoid using doubles ? */
g_string_append_printf(buf, "Translation buffer state:\n");
/*
* Report total code size including the padding and TB structs;
* otherwise users might think "-accel tcg,tb-size" is not honoured.
* For avg host size we use the precise numbers from tb_tree_stats though.
*/
g_string_append_printf(buf, "gen code size %zu/%zu\n",
tcg_code_size(), tcg_code_capacity());
g_string_append_printf(buf, "TB count %zu\n", nb_tbs);
g_string_append_printf(buf, "TB avg target size %zu max=%zu bytes\n",
nb_tbs ? tst.target_size / nb_tbs : 0,
tst.max_target_size);
g_string_append_printf(buf, "TB avg host size %zu bytes "
"(expansion ratio: %0.1f)\n",
nb_tbs ? tst.host_size / nb_tbs : 0,
tst.target_size ?
(double)tst.host_size / tst.target_size : 0);
g_string_append_printf(buf, "cross page TB count %zu (%zu%%)\n",
tst.cross_page,
nb_tbs ? (tst.cross_page * 100) / nb_tbs : 0);
g_string_append_printf(buf, "direct jump count %zu (%zu%%) "
"(2 jumps=%zu %zu%%)\n",
tst.direct_jmp_count,
nb_tbs ? (tst.direct_jmp_count * 100) / nb_tbs : 0,
tst.direct_jmp2_count,
nb_tbs ? (tst.direct_jmp2_count * 100) / nb_tbs : 0);
qht_statistics_init(&tb_ctx.htable, &hst);
print_qht_statistics(hst, buf);
qht_statistics_destroy(&hst);
g_string_append_printf(buf, "\nStatistics:\n");
g_string_append_printf(buf, "TB flush count %u\n",
qatomic_read(&tb_ctx.tb_flush_count));
g_string_append_printf(buf, "TB invalidate count %u\n",
qatomic_read(&tb_ctx.tb_phys_invalidate_count));
tlb_flush_counts(&flush_full, &flush_part, &flush_elide);
g_string_append_printf(buf, "TLB full flushes %zu\n", flush_full);
g_string_append_printf(buf, "TLB partial flushes %zu\n", flush_part);
g_string_append_printf(buf, "TLB elided flushes %zu\n", flush_elide);
tcg_dump_info(buf);
}
HumanReadableText *qmp_x_query_jit(Error **errp) HumanReadableText *qmp_x_query_jit(Error **errp)
{ {
g_autoptr(GString) buf = g_string_new(""); g_autoptr(GString) buf = g_string_new("");
@@ -215,11 +66,6 @@ HumanReadableText *qmp_x_query_jit(Error **errp)
return human_readable_text_from_str(buf); return human_readable_text_from_str(buf);
} }
static void tcg_dump_op_count(GString *buf)
{
g_string_append_printf(buf, "[TCG profiler not compiled]\n");
}
HumanReadableText *qmp_x_query_opcount(Error **errp) HumanReadableText *qmp_x_query_opcount(Error **errp)
{ {
g_autoptr(GString) buf = g_string_new(""); g_autoptr(GString) buf = g_string_new("");

View File

@@ -104,7 +104,7 @@ static void gen_empty_udata_cb(void)
TCGv_ptr udata = tcg_temp_ebb_new_ptr(); TCGv_ptr udata = tcg_temp_ebb_new_ptr();
tcg_gen_movi_ptr(udata, 0); tcg_gen_movi_ptr(udata, 0);
tcg_gen_ld_i32(cpu_index, tcg_env, tcg_gen_ld_i32(cpu_index, cpu_env,
-offsetof(ArchCPU, env) + offsetof(CPUState, cpu_index)); -offsetof(ArchCPU, env) + offsetof(CPUState, cpu_index));
gen_helper_plugin_vcpu_udata_cb(cpu_index, udata); gen_helper_plugin_vcpu_udata_cb(cpu_index, udata);
@@ -138,7 +138,7 @@ static void gen_empty_mem_cb(TCGv_i64 addr, uint32_t info)
tcg_gen_movi_i32(meminfo, info); tcg_gen_movi_i32(meminfo, info);
tcg_gen_movi_ptr(udata, 0); tcg_gen_movi_ptr(udata, 0);
tcg_gen_ld_i32(cpu_index, tcg_env, tcg_gen_ld_i32(cpu_index, cpu_env,
-offsetof(ArchCPU, env) + offsetof(CPUState, cpu_index)); -offsetof(ArchCPU, env) + offsetof(CPUState, cpu_index));
gen_helper_plugin_vcpu_mem_cb(cpu_index, meminfo, addr, udata); gen_helper_plugin_vcpu_mem_cb(cpu_index, meminfo, addr, udata);
@@ -157,7 +157,7 @@ static void gen_empty_mem_helper(void)
TCGv_ptr ptr = tcg_temp_ebb_new_ptr(); TCGv_ptr ptr = tcg_temp_ebb_new_ptr();
tcg_gen_movi_ptr(ptr, 0); tcg_gen_movi_ptr(ptr, 0);
tcg_gen_st_ptr(ptr, tcg_env, offsetof(CPUState, plugin_mem_cbs) - tcg_gen_st_ptr(ptr, cpu_env, offsetof(CPUState, plugin_mem_cbs) -
offsetof(ArchCPU, env)); offsetof(ArchCPU, env));
tcg_temp_free_ptr(ptr); tcg_temp_free_ptr(ptr);
} }
@@ -327,7 +327,8 @@ static TCGOp *copy_st_ptr(TCGOp **begin_op, TCGOp *op)
return op; return op;
} }
static TCGOp *copy_call(TCGOp **begin_op, TCGOp *op, void *func, int *cb_idx) static TCGOp *copy_call(TCGOp **begin_op, TCGOp *op, void *empty_func,
void *func, int *cb_idx)
{ {
TCGOp *old_op; TCGOp *old_op;
int func_idx; int func_idx;
@@ -371,7 +372,8 @@ static TCGOp *append_udata_cb(const struct qemu_plugin_dyn_cb *cb,
} }
/* call */ /* call */
op = copy_call(&begin_op, op, cb->f.vcpu_udata, cb_idx); op = copy_call(&begin_op, op, HELPER(plugin_vcpu_udata_cb),
cb->f.vcpu_udata, cb_idx);
return op; return op;
} }
@@ -418,7 +420,8 @@ static TCGOp *append_mem_cb(const struct qemu_plugin_dyn_cb *cb,
if (type == PLUGIN_GEN_CB_MEM) { if (type == PLUGIN_GEN_CB_MEM) {
/* call */ /* call */
op = copy_call(&begin_op, op, cb->f.vcpu_udata, cb_idx); op = copy_call(&begin_op, op, HELPER(plugin_vcpu_mem_cb),
cb->f.vcpu_udata, cb_idx);
} }
return op; return op;
@@ -578,7 +581,7 @@ void plugin_gen_disable_mem_helpers(void)
if (!tcg_ctx->plugin_tb->mem_helper) { if (!tcg_ctx->plugin_tb->mem_helper) {
return; return;
} }
tcg_gen_st_ptr(tcg_constant_ptr(NULL), tcg_env, tcg_gen_st_ptr(tcg_constant_ptr(NULL), cpu_env,
offsetof(CPUState, plugin_mem_cbs) - offsetof(ArchCPU, env)); offsetof(CPUState, plugin_mem_cbs) - offsetof(ArchCPU, env));
} }
@@ -846,7 +849,7 @@ void plugin_gen_insn_start(CPUState *cpu, const DisasContextBase *db)
} else { } else {
if (ptb->vaddr2 == -1) { if (ptb->vaddr2 == -1) {
ptb->vaddr2 = TARGET_PAGE_ALIGN(db->pc_first); ptb->vaddr2 = TARGET_PAGE_ALIGN(db->pc_first);
get_page_addr_code_hostp(cpu_env(cpu), ptb->vaddr2, &ptb->haddr2); get_page_addr_code_hostp(cpu->env_ptr, ptb->vaddr2, &ptb->haddr2);
} }
pinsn->haddr = ptb->haddr2 + pinsn->vaddr - ptb->vaddr2; pinsn->haddr = ptb->haddr2 + pinsn->vaddr - ptb->vaddr2;
} }
@@ -863,14 +866,10 @@ void plugin_gen_insn_end(void)
* do any clean-up here and make sure things are reset in * do any clean-up here and make sure things are reset in
* plugin_gen_tb_start. * plugin_gen_tb_start.
*/ */
void plugin_gen_tb_end(CPUState *cpu, size_t num_insns) void plugin_gen_tb_end(CPUState *cpu)
{ {
struct qemu_plugin_tb *ptb = tcg_ctx->plugin_tb; struct qemu_plugin_tb *ptb = tcg_ctx->plugin_tb;
/* translator may have removed instructions, update final count */
g_assert(num_insns <= ptb->n);
ptb->n = num_insns;
/* collect instrumentation requests */ /* collect instrumentation requests */
qemu_plugin_tb_trans_cb(cpu, ptb); qemu_plugin_tb_trans_cb(cpu, ptb);
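Both sides of this diff load CPUState fields through the TCG env pointer (tcg_env on one side, cpu_env on the other) using a displacement of the form offsetof(CPUState, field) - offsetof(ArchCPU, env). The following self-contained demo shows why that arithmetic lands on the right field; the struct layouts here are toy stand-ins, not QEMU's real definitions:

#include <stddef.h>
#include <stdio.h>

typedef struct CPUState { int cpu_index; } CPUState;
typedef struct CPUArchState { long regs[4]; } CPUArchState;
typedef struct ArchCPU {
    CPUState parent_obj;        /* common state, first member */
    CPUArchState env;           /* per-target state */
} ArchCPU;

int main(void)
{
    ArchCPU cpu = { .parent_obj.cpu_index = 7 };
    char *env = (char *)&cpu.env;

    /* Step back from 'env' to the start of ArchCPU, then forward into
     * the embedded CPUState: exactly the displacement emitted above. */
    int *idx = (int *)(env - offsetof(ArchCPU, env)
                           + offsetof(CPUState, cpu_index));
    printf("%d\n", *idx);       /* prints 7 */
    return 0;
}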

View File

@@ -29,8 +29,7 @@
#include "tcg/tcg.h" #include "tcg/tcg.h"
#include "tb-hash.h" #include "tb-hash.h"
#include "tb-context.h" #include "tb-context.h"
#include "internal-common.h" #include "internal.h"
#include "internal-target.h"
/* List iterators for lists of tagged pointers in TranslationBlock. */ /* List iterators for lists of tagged pointers in TranslationBlock. */
@@ -208,12 +207,13 @@ static PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc)
{ {
PageDesc *pd; PageDesc *pd;
void **lp; void **lp;
int i;
/* Level 1. Always allocated. */ /* Level 1. Always allocated. */
lp = l1_map + ((index >> v_l1_shift) & (v_l1_size - 1)); lp = l1_map + ((index >> v_l1_shift) & (v_l1_size - 1));
/* Level 2..N-1. */ /* Level 2..N-1. */
for (int i = v_l2_levels; i > 0; i--) { for (i = v_l2_levels; i > 0; i--) {
void **p = qatomic_rcu_read(lp); void **p = qatomic_rcu_read(lp);
if (p == NULL) { if (p == NULL) {
@@ -1083,8 +1083,7 @@ bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
if (current_tb_modified) { if (current_tb_modified) {
/* Force execution of one insn next time. */ /* Force execution of one insn next time. */
CPUState *cpu = current_cpu; CPUState *cpu = current_cpu;
cpu->cflags_next_tb = cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
1 | CF_LAST_IO | CF_NOIRQ | curr_cflags(current_cpu);
return true; return true;
} }
return false; return false;
@@ -1154,8 +1153,7 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
if (current_tb_modified) { if (current_tb_modified) {
page_collection_unlock(pages); page_collection_unlock(pages);
/* Force execution of one insn next time. */ /* Force execution of one insn next time. */
current_cpu->cflags_next_tb = current_cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
1 | CF_LAST_IO | CF_NOIRQ | curr_cflags(current_cpu);
mmap_unlock(); mmap_unlock();
cpu_loop_exit_noexc(current_cpu); cpu_loop_exit_noexc(current_cpu);
} }

View File

@@ -111,14 +111,14 @@ void icount_prepare_for_run(CPUState *cpu, int64_t cpu_budget)
* each vCPU execution. However u16.high can be raised * each vCPU execution. However u16.high can be raised
* asynchronously by cpu_exit/cpu_interrupt/tcg_handle_interrupt * asynchronously by cpu_exit/cpu_interrupt/tcg_handle_interrupt
*/ */
g_assert(cpu->neg.icount_decr.u16.low == 0); g_assert(cpu_neg(cpu)->icount_decr.u16.low == 0);
g_assert(cpu->icount_extra == 0); g_assert(cpu->icount_extra == 0);
replay_mutex_lock(); replay_mutex_lock();
cpu->icount_budget = MIN(icount_get_limit(), cpu_budget); cpu->icount_budget = MIN(icount_get_limit(), cpu_budget);
insns_left = MIN(0xffff, cpu->icount_budget); insns_left = MIN(0xffff, cpu->icount_budget);
cpu->neg.icount_decr.u16.low = insns_left; cpu_neg(cpu)->icount_decr.u16.low = insns_left;
cpu->icount_extra = cpu->icount_budget - insns_left; cpu->icount_extra = cpu->icount_budget - insns_left;
if (cpu->icount_budget == 0) { if (cpu->icount_budget == 0) {
@@ -138,7 +138,7 @@ void icount_process_data(CPUState *cpu)
icount_update(cpu); icount_update(cpu);
/* Reset the counters */ /* Reset the counters */
cpu->neg.icount_decr.u16.low = 0; cpu_neg(cpu)->icount_decr.u16.low = 0;
cpu->icount_extra = 0; cpu->icount_extra = 0;
cpu->icount_budget = 0; cpu->icount_budget = 0;
@@ -153,7 +153,7 @@ void icount_handle_interrupt(CPUState *cpu, int mask)
tcg_handle_interrupt(cpu, mask); tcg_handle_interrupt(cpu, mask);
if (qemu_cpu_is_self(cpu) && if (qemu_cpu_is_self(cpu) &&
!cpu->neg.can_do_io !cpu->can_do_io
&& (mask & ~old_mask) != 0) { && (mask & ~old_mask) != 0) {
cpu_abort(cpu, "Raised interrupt while not in I/O function"); cpu_abort(cpu, "Raised interrupt while not in I/O function");
} }
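The two sides here differ only in how CPUNegativeOffsetState is reached: directly as cpu->neg, or through the cpu_neg() accessor. The accessor form is roughly the following container_of() hop, reconstructed from memory, so the exact body is an assumption:

static inline CPUNegativeOffsetState *cpu_neg(CPUState *cpu)
{
    /* CPUState is embedded in ArchCPU as 'parent_obj'; 'neg' is placed
     * just before 'env' so TCG can address it with small negative
     * offsets from the env pointer. */
    ArchCPU *arch_cpu = container_of(cpu, ArchCPU, parent_obj);
    return &arch_cpu->neg;
}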

View File

@@ -32,7 +32,7 @@
#include "qemu/guest-random.h" #include "qemu/guest-random.h"
#include "exec/exec-all.h" #include "exec/exec-all.h"
#include "hw/boards.h" #include "hw/boards.h"
#include "tcg/startup.h" #include "tcg/tcg.h"
#include "tcg-accel-ops.h" #include "tcg-accel-ops.h"
#include "tcg-accel-ops-mttcg.h" #include "tcg-accel-ops-mttcg.h"
@@ -80,7 +80,7 @@ static void *mttcg_cpu_thread_fn(void *arg)
qemu_thread_get_self(cpu->thread); qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id(); cpu->thread_id = qemu_get_thread_id();
cpu->neg.can_do_io = true; cpu->can_do_io = 1;
current_cpu = cpu; current_cpu = cpu;
cpu_thread_signal_created(cpu); cpu_thread_signal_created(cpu);
qemu_guest_random_seed_thread_part2(cpu->random_seed); qemu_guest_random_seed_thread_part2(cpu->random_seed);

View File

@@ -32,7 +32,7 @@
#include "qemu/notify.h" #include "qemu/notify.h"
#include "qemu/guest-random.h" #include "qemu/guest-random.h"
#include "exec/exec-all.h" #include "exec/exec-all.h"
#include "tcg/startup.h" #include "tcg/tcg.h"
#include "tcg-accel-ops.h" #include "tcg-accel-ops.h"
#include "tcg-accel-ops-rr.h" #include "tcg-accel-ops-rr.h"
#include "tcg-accel-ops-icount.h" #include "tcg-accel-ops-icount.h"
@@ -192,7 +192,7 @@ static void *rr_cpu_thread_fn(void *arg)
qemu_thread_get_self(cpu->thread); qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id(); cpu->thread_id = qemu_get_thread_id();
cpu->neg.can_do_io = true; cpu->can_do_io = 1;
cpu_thread_signal_created(cpu); cpu_thread_signal_created(cpu);
qemu_guest_random_seed_thread_part2(cpu->random_seed); qemu_guest_random_seed_thread_part2(cpu->random_seed);
@@ -334,7 +334,7 @@ void rr_start_vcpu_thread(CPUState *cpu)
cpu->thread = single_tcg_cpu_thread; cpu->thread = single_tcg_cpu_thread;
cpu->halt_cond = single_tcg_halt_cond; cpu->halt_cond = single_tcg_halt_cond;
cpu->thread_id = first_cpu->thread_id; cpu->thread_id = first_cpu->thread_id;
cpu->neg.can_do_io = 1; cpu->can_do_io = 1;
cpu->created = true; cpu->created = true;
} }
} }

View File

@@ -34,7 +34,6 @@
#include "qemu/timer.h" #include "qemu/timer.h"
#include "exec/exec-all.h" #include "exec/exec-all.h"
#include "exec/hwaddr.h" #include "exec/hwaddr.h"
#include "exec/tb-flush.h"
#include "exec/gdbstub.h" #include "exec/gdbstub.h"
#include "tcg-accel-ops.h" #include "tcg-accel-ops.h"
@@ -78,13 +77,6 @@ int tcg_cpus_exec(CPUState *cpu)
return ret; return ret;
} }
static void tcg_cpu_reset_hold(CPUState *cpu)
{
tcg_flush_jmp_cache(cpu);
tlb_flush(cpu);
}
/* mask must never be zero, except for A20 change call */ /* mask must never be zero, except for A20 change call */
void tcg_handle_interrupt(CPUState *cpu, int mask) void tcg_handle_interrupt(CPUState *cpu, int mask)
{ {
@@ -99,7 +91,7 @@ void tcg_handle_interrupt(CPUState *cpu, int mask)
if (!qemu_cpu_is_self(cpu)) { if (!qemu_cpu_is_self(cpu)) {
qemu_cpu_kick(cpu); qemu_cpu_kick(cpu);
} else { } else {
qatomic_set(&cpu->neg.icount_decr.u16.high, -1); qatomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1);
} }
} }
@@ -213,7 +205,6 @@ static void tcg_accel_ops_init(AccelOpsClass *ops)
} }
} }
ops->cpu_reset_hold = tcg_cpu_reset_hold;
ops->supports_guest_debug = tcg_supports_guest_debug; ops->supports_guest_debug = tcg_supports_guest_debug;
ops->insert_breakpoint = tcg_insert_breakpoint; ops->insert_breakpoint = tcg_insert_breakpoint;
ops->remove_breakpoint = tcg_remove_breakpoint; ops->remove_breakpoint = tcg_remove_breakpoint;

View File

@@ -27,7 +27,7 @@
#include "sysemu/tcg.h" #include "sysemu/tcg.h"
#include "exec/replay-core.h" #include "exec/replay-core.h"
#include "sysemu/cpu-timers.h" #include "sysemu/cpu-timers.h"
#include "tcg/startup.h" #include "tcg/tcg.h"
#include "tcg/oversized-guest.h" #include "tcg/oversized-guest.h"
#include "qapi/error.h" #include "qapi/error.h"
#include "qemu/error-report.h" #include "qemu/error-report.h"
@@ -38,7 +38,7 @@
#if !defined(CONFIG_USER_ONLY) #if !defined(CONFIG_USER_ONLY)
#include "hw/boards.h" #include "hw/boards.h"
#endif #endif
#include "internal-target.h" #include "internal.h"
struct TCGState { struct TCGState {
AccelState parent_obj; AccelState parent_obj;
@@ -121,7 +121,7 @@ static int tcg_init_machine(MachineState *ms)
* There's no guest base to take into account, so go ahead and * There's no guest base to take into account, so go ahead and
* initialize the prologue now. * initialize the prologue now.
*/ */
tcg_prologue_init(); tcg_prologue_init(tcg_ctx);
#endif #endif
return 0; return 0;
@@ -227,8 +227,6 @@ static void tcg_accel_class_init(ObjectClass *oc, void *data)
AccelClass *ac = ACCEL_CLASS(oc); AccelClass *ac = ACCEL_CLASS(oc);
ac->name = "tcg"; ac->name = "tcg";
ac->init_machine = tcg_init_machine; ac->init_machine = tcg_init_machine;
ac->cpu_common_realize = tcg_exec_realizefn;
ac->cpu_common_unrealize = tcg_exec_unrealizefn;
ac->allowed = &tcg_allowed; ac->allowed = &tcg_allowed;
ac->gdbstub_supported_sstep_flags = tcg_gdbstub_supported_sstep_flags; ac->gdbstub_supported_sstep_flags = tcg_gdbstub_supported_sstep_flags;

View File

@@ -61,8 +61,7 @@
#include "tb-jmp-cache.h" #include "tb-jmp-cache.h"
#include "tb-hash.h" #include "tb-hash.h"
#include "tb-context.h" #include "tb-context.h"
#include "internal-common.h" #include "internal.h"
#include "internal-target.h"
#include "perf.h" #include "perf.h"
#include "tcg/insn-start-words.h" #include "tcg/insn-start-words.h"
@@ -215,7 +214,7 @@ void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
* Reset the cycle counter to the start of the block and * Reset the cycle counter to the start of the block and
* shift it to the number of actually executed instructions. * shift it to the number of actually executed instructions.
*/ */
cpu->neg.icount_decr.u16.low += insns_left; cpu_neg(cpu)->icount_decr.u16.low += insns_left;
} }
cpu->cc->tcg_ops->restore_state_to_opc(cpu, tb, data); cpu->cc->tcg_ops->restore_state_to_opc(cpu, tb, data);
@@ -289,7 +288,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
vaddr pc, uint64_t cs_base, vaddr pc, uint64_t cs_base,
uint32_t flags, int cflags) uint32_t flags, int cflags)
{ {
CPUArchState *env = cpu_env(cpu); CPUArchState *env = cpu->env_ptr;
TranslationBlock *tb, *existing_tb; TranslationBlock *tb, *existing_tb;
tb_page_addr_t phys_pc, phys_p2; tb_page_addr_t phys_pc, phys_p2;
tcg_insn_unit *gen_code_buf; tcg_insn_unit *gen_code_buf;
@@ -345,6 +344,8 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
tcg_ctx->page_bits = TARGET_PAGE_BITS; tcg_ctx->page_bits = TARGET_PAGE_BITS;
tcg_ctx->page_mask = TARGET_PAGE_MASK; tcg_ctx->page_mask = TARGET_PAGE_MASK;
tcg_ctx->tlb_dyn_max_bits = CPU_TLB_DYN_MAX_BITS; tcg_ctx->tlb_dyn_max_bits = CPU_TLB_DYN_MAX_BITS;
tcg_ctx->tlb_fast_offset =
(int)offsetof(ArchCPU, neg.tlb.f) - (int)offsetof(ArchCPU, env);
#endif #endif
tcg_ctx->insn_start_words = TARGET_INSN_START_WORDS; tcg_ctx->insn_start_words = TARGET_INSN_START_WORDS;
#ifdef TCG_GUEST_DEFAULT_MO #ifdef TCG_GUEST_DEFAULT_MO
@@ -579,7 +580,7 @@ void tb_check_watchpoint(CPUState *cpu, uintptr_t retaddr)
} else { } else {
/* The exception probably happened in a helper. The CPU state should /* The exception probably happened in a helper. The CPU state should
have been saved before calling it. Fetch the PC from there. */ have been saved before calling it. Fetch the PC from there. */
CPUArchState *env = cpu_env(cpu); CPUArchState *env = cpu->env_ptr;
vaddr pc; vaddr pc;
uint64_t cs_base; uint64_t cs_base;
tb_page_addr_t addr; tb_page_addr_t addr;
@@ -622,7 +623,7 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
cc = CPU_GET_CLASS(cpu); cc = CPU_GET_CLASS(cpu);
if (cc->tcg_ops->io_recompile_replay_branch && if (cc->tcg_ops->io_recompile_replay_branch &&
cc->tcg_ops->io_recompile_replay_branch(cpu, tb)) { cc->tcg_ops->io_recompile_replay_branch(cpu, tb)) {
cpu->neg.icount_decr.u16.low++; cpu_neg(cpu)->icount_decr.u16.low++;
n = 2; n = 2;
} }
@@ -645,13 +646,140 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
cpu_loop_exit_noexc(cpu); cpu_loop_exit_noexc(cpu);
} }
static void print_qht_statistics(struct qht_stats hst, GString *buf)
{
uint32_t hgram_opts;
size_t hgram_bins;
char *hgram;
if (!hst.head_buckets) {
return;
}
g_string_append_printf(buf, "TB hash buckets %zu/%zu "
"(%0.2f%% head buckets used)\n",
hst.used_head_buckets, hst.head_buckets,
(double)hst.used_head_buckets /
hst.head_buckets * 100);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_opts |= QDIST_PR_100X | QDIST_PR_PERCENT;
if (qdist_xmax(&hst.occupancy) - qdist_xmin(&hst.occupancy) == 1) {
hgram_opts |= QDIST_PR_NODECIMAL;
}
hgram = qdist_pr(&hst.occupancy, 10, hgram_opts);
g_string_append_printf(buf, "TB hash occupancy %0.2f%% avg chain occ. "
"Histogram: %s\n",
qdist_avg(&hst.occupancy) * 100, hgram);
g_free(hgram);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_bins = qdist_xmax(&hst.chain) - qdist_xmin(&hst.chain);
if (hgram_bins > 10) {
hgram_bins = 10;
} else {
hgram_bins = 0;
hgram_opts |= QDIST_PR_NODECIMAL | QDIST_PR_NOBINRANGE;
}
hgram = qdist_pr(&hst.chain, hgram_bins, hgram_opts);
g_string_append_printf(buf, "TB hash avg chain %0.3f buckets. "
"Histogram: %s\n",
qdist_avg(&hst.chain), hgram);
g_free(hgram);
}
struct tb_tree_stats {
size_t nb_tbs;
size_t host_size;
size_t target_size;
size_t max_target_size;
size_t direct_jmp_count;
size_t direct_jmp2_count;
size_t cross_page;
};
static gboolean tb_tree_stats_iter(gpointer key, gpointer value, gpointer data)
{
const TranslationBlock *tb = value;
struct tb_tree_stats *tst = data;
tst->nb_tbs++;
tst->host_size += tb->tc.size;
tst->target_size += tb->size;
if (tb->size > tst->max_target_size) {
tst->max_target_size = tb->size;
}
if (tb_page_addr1(tb) != -1) {
tst->cross_page++;
}
if (tb->jmp_reset_offset[0] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp_count++;
if (tb->jmp_reset_offset[1] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp2_count++;
}
}
return false;
}
void dump_exec_info(GString *buf)
{
struct tb_tree_stats tst = {};
struct qht_stats hst;
size_t nb_tbs, flush_full, flush_part, flush_elide;
tcg_tb_foreach(tb_tree_stats_iter, &tst);
nb_tbs = tst.nb_tbs;
/* XXX: avoid using doubles ? */
g_string_append_printf(buf, "Translation buffer state:\n");
/*
* Report total code size including the padding and TB structs;
* otherwise users might think "-accel tcg,tb-size" is not honoured.
* For avg host size we use the precise numbers from tb_tree_stats though.
*/
g_string_append_printf(buf, "gen code size %zu/%zu\n",
tcg_code_size(), tcg_code_capacity());
g_string_append_printf(buf, "TB count %zu\n", nb_tbs);
g_string_append_printf(buf, "TB avg target size %zu max=%zu bytes\n",
nb_tbs ? tst.target_size / nb_tbs : 0,
tst.max_target_size);
g_string_append_printf(buf, "TB avg host size %zu bytes "
"(expansion ratio: %0.1f)\n",
nb_tbs ? tst.host_size / nb_tbs : 0,
tst.target_size ?
(double)tst.host_size / tst.target_size : 0);
g_string_append_printf(buf, "cross page TB count %zu (%zu%%)\n",
tst.cross_page,
nb_tbs ? (tst.cross_page * 100) / nb_tbs : 0);
g_string_append_printf(buf, "direct jump count %zu (%zu%%) "
"(2 jumps=%zu %zu%%)\n",
tst.direct_jmp_count,
nb_tbs ? (tst.direct_jmp_count * 100) / nb_tbs : 0,
tst.direct_jmp2_count,
nb_tbs ? (tst.direct_jmp2_count * 100) / nb_tbs : 0);
qht_statistics_init(&tb_ctx.htable, &hst);
print_qht_statistics(hst, buf);
qht_statistics_destroy(&hst);
g_string_append_printf(buf, "\nStatistics:\n");
g_string_append_printf(buf, "TB flush count %u\n",
qatomic_read(&tb_ctx.tb_flush_count));
g_string_append_printf(buf, "TB invalidate count %u\n",
qatomic_read(&tb_ctx.tb_phys_invalidate_count));
tlb_flush_counts(&flush_full, &flush_part, &flush_elide);
g_string_append_printf(buf, "TLB full flushes %zu\n", flush_full);
g_string_append_printf(buf, "TLB partial flushes %zu\n", flush_part);
g_string_append_printf(buf, "TLB elided flushes %zu\n", flush_elide);
tcg_dump_info(buf);
}
#else /* CONFIG_USER_ONLY */ #else /* CONFIG_USER_ONLY */
void cpu_interrupt(CPUState *cpu, int mask) void cpu_interrupt(CPUState *cpu, int mask)
{ {
g_assert(qemu_mutex_iothread_locked()); g_assert(qemu_mutex_iothread_locked());
cpu->interrupt_request |= mask; cpu->interrupt_request |= mask;
qatomic_set(&cpu->neg.icount_decr.u16.high, -1); qatomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1);
} }
#endif /* CONFIG_USER_ONLY */ #endif /* CONFIG_USER_ONLY */
@@ -673,3 +801,11 @@ void tcg_flush_jmp_cache(CPUState *cpu)
qatomic_set(&jc->array[i].tb, NULL); qatomic_set(&jc->array[i].tb, NULL);
} }
} }
/* This is a wrapper for common code that can not use CONFIG_SOFTMMU */
void tcg_flush_softmmu_tlb(CPUState *cs)
{
#ifdef CONFIG_SOFTMMU
tlb_flush(cs);
#endif
}

View File

@@ -14,23 +14,28 @@
#include "exec/translator.h" #include "exec/translator.h"
#include "exec/plugin-gen.h" #include "exec/plugin-gen.h"
#include "tcg/tcg-op-common.h" #include "tcg/tcg-op-common.h"
#include "internal-target.h" #include "internal.h"
static void set_can_do_io(DisasContextBase *db, bool val) static void gen_io_start(void)
{ {
if (db->saved_can_do_io != val) { tcg_gen_st_i32(tcg_constant_i32(1), cpu_env,
db->saved_can_do_io = val; offsetof(ArchCPU, parent_obj.can_do_io) -
offsetof(ArchCPU, env));
QEMU_BUILD_BUG_ON(sizeof_field(CPUState, neg.can_do_io) != 1);
tcg_gen_st8_i32(tcg_constant_i32(val), tcg_env,
offsetof(ArchCPU, parent_obj.neg.can_do_io) -
offsetof(ArchCPU, env));
}
} }
bool translator_io_start(DisasContextBase *db) bool translator_io_start(DisasContextBase *db)
{ {
set_can_do_io(db, true); uint32_t cflags = tb_cflags(db->tb);
if (!(cflags & CF_USE_ICOUNT)) {
return false;
}
if (db->num_insns == db->max_insns && (cflags & CF_LAST_IO)) {
/* Already started in translator_loop. */
return true;
}
gen_io_start();
/* /*
* Ensure that this instruction will be the last in the TB. * Ensure that this instruction will be the last in the TB.
@@ -42,17 +47,14 @@ bool translator_io_start(DisasContextBase *db)
return true; return true;
} }
static TCGOp *gen_tb_start(DisasContextBase *db, uint32_t cflags) static TCGOp *gen_tb_start(uint32_t cflags)
{ {
TCGv_i32 count = NULL; TCGv_i32 count = tcg_temp_new_i32();
TCGOp *icount_start_insn = NULL; TCGOp *icount_start_insn = NULL;
if ((cflags & CF_USE_ICOUNT) || !(cflags & CF_NOIRQ)) { tcg_gen_ld_i32(count, cpu_env,
count = tcg_temp_new_i32(); offsetof(ArchCPU, neg.icount_decr.u32) -
tcg_gen_ld_i32(count, tcg_env, offsetof(ArchCPU, env));
offsetof(ArchCPU, parent_obj.neg.icount_decr.u32)
- offsetof(ArchCPU, env));
}
if (cflags & CF_USE_ICOUNT) { if (cflags & CF_USE_ICOUNT) {
/* /*
@@ -79,18 +81,21 @@ static TCGOp *gen_tb_start(DisasContextBase *db, uint32_t cflags)
} }
if (cflags & CF_USE_ICOUNT) { if (cflags & CF_USE_ICOUNT) {
tcg_gen_st16_i32(count, tcg_env, tcg_gen_st16_i32(count, cpu_env,
offsetof(ArchCPU, parent_obj.neg.icount_decr.u16.low) offsetof(ArchCPU, neg.icount_decr.u16.low) -
- offsetof(ArchCPU, env)); offsetof(ArchCPU, env));
/*
* cpu->can_do_io is cleared automatically here at the beginning of
* each translation block. The cost is minimal and only paid for
* -icount, plus it would be very easy to forget doing it in the
* translator. Doing it here means we don't need a gen_io_end() to
* go with gen_io_start().
*/
tcg_gen_st_i32(tcg_constant_i32(0), cpu_env,
offsetof(ArchCPU, parent_obj.can_do_io) -
offsetof(ArchCPU, env));
} }
/*
* cpu->neg.can_do_io is set automatically here at the beginning of
* each translation block. The cost is minimal, plus it would be
* very easy to forget doing it in the translator.
*/
set_can_do_io(db, db->max_insns == 1 && (cflags & CF_LAST_IO));
return icount_start_insn; return icount_start_insn;
} }
@@ -139,7 +144,6 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
db->num_insns = 0; db->num_insns = 0;
db->max_insns = *max_insns; db->max_insns = *max_insns;
db->singlestep_enabled = cflags & CF_SINGLE_STEP; db->singlestep_enabled = cflags & CF_SINGLE_STEP;
db->saved_can_do_io = -1;
db->host_addr[0] = host_pc; db->host_addr[0] = host_pc;
db->host_addr[1] = NULL; db->host_addr[1] = NULL;
@@ -147,18 +151,11 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */ tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
/* Start translating. */ /* Start translating. */
icount_start_insn = gen_tb_start(db, cflags); icount_start_insn = gen_tb_start(cflags);
ops->tb_start(db, cpu); ops->tb_start(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */ tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
if (cflags & CF_MEMI_ONLY) { plugin_enabled = plugin_gen_tb_start(cpu, db, cflags & CF_MEMI_ONLY);
/* We should only see CF_MEMI_ONLY for io_recompile. */
assert(cflags & CF_LAST_IO);
plugin_enabled = plugin_gen_tb_start(cpu, db, true);
} else {
plugin_enabled = plugin_gen_tb_start(cpu, db, false);
}
db->plugin_enabled = plugin_enabled;
while (true) { while (true) {
*max_insns = ++db->num_insns; *max_insns = ++db->num_insns;
@@ -175,9 +172,13 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
the next instruction. */ the next instruction. */
if (db->num_insns == db->max_insns && (cflags & CF_LAST_IO)) { if (db->num_insns == db->max_insns && (cflags & CF_LAST_IO)) {
/* Accept I/O on the last instruction. */ /* Accept I/O on the last instruction. */
set_can_do_io(db, true); gen_io_start();
ops->translate_insn(db, cpu);
} else {
/* we should only see CF_MEMI_ONLY for io_recompile */
tcg_debug_assert(!(cflags & CF_MEMI_ONLY));
ops->translate_insn(db, cpu);
} }
ops->translate_insn(db, cpu);
/* /*
* We can't instrument after instructions that change control * We can't instrument after instructions that change control
@@ -210,7 +211,7 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
gen_tb_end(tb, cflags, icount_start_insn, db->num_insns); gen_tb_end(tb, cflags, icount_start_insn, db->num_insns);
if (plugin_enabled) { if (plugin_enabled) {
plugin_gen_tb_end(cpu, db->num_insns); plugin_gen_tb_end(cpu);
} }
/* The disas_log hook may use these values rather than recompute. */ /* The disas_log hook may use these values rather than recompute. */

View File

@@ -14,10 +14,6 @@ void qemu_init_vcpu(CPUState *cpu)
{ {
} }
void cpu_exec_reset_hold(CPUState *cpu)
{
}
/* User mode emulation does not support record/replay yet. */ /* User mode emulation does not support record/replay yet. */
bool replay_exception(void) bool replay_exception(void)

View File

@@ -29,8 +29,7 @@
#include "qemu/atomic128.h" #include "qemu/atomic128.h"
#include "trace/trace-root.h" #include "trace/trace-root.h"
#include "tcg/tcg-ldst.h" #include "tcg/tcg-ldst.h"
#include "internal-common.h" #include "internal.h"
#include "internal-target.h"
__thread uintptr_t helper_retaddr; __thread uintptr_t helper_retaddr;
@@ -940,9 +939,9 @@ void *page_get_target_data(target_ulong address)
void page_reset_target_data(target_ulong start, target_ulong last) { } void page_reset_target_data(target_ulong start, target_ulong last) { }
#endif /* TARGET_PAGE_DATA_SIZE */ #endif /* TARGET_PAGE_DATA_SIZE */
/* The system-mode versions of these helpers are in cputlb.c. */ /* The softmmu versions of these helpers are in cputlb.c. */
static void *cpu_mmu_lookup(CPUState *cpu, vaddr addr, static void *cpu_mmu_lookup(CPUArchState *env, vaddr addr,
MemOp mop, uintptr_t ra, MMUAccessType type) MemOp mop, uintptr_t ra, MMUAccessType type)
{ {
int a_bits = get_alignment_bits(mop); int a_bits = get_alignment_bits(mop);
@@ -950,39 +949,60 @@ static void *cpu_mmu_lookup(CPUState *cpu, vaddr addr,
/* Enforce guest required alignment. */ /* Enforce guest required alignment. */
if (unlikely(addr & ((1 << a_bits) - 1))) { if (unlikely(addr & ((1 << a_bits) - 1))) {
cpu_loop_exit_sigbus(cpu, addr, type, ra); cpu_loop_exit_sigbus(env_cpu(env), addr, type, ra);
} }
ret = g2h(cpu, addr); ret = g2h(env_cpu(env), addr);
set_helper_retaddr(ra); set_helper_retaddr(ra);
return ret; return ret;
} }
#include "ldst_atomicity.c.inc" #include "ldst_atomicity.c.inc"
static uint8_t do_ld1_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi, static uint8_t do_ld1_mmu(CPUArchState *env, abi_ptr addr,
uintptr_t ra, MMUAccessType access_type) MemOp mop, uintptr_t ra)
{ {
void *haddr; void *haddr;
uint8_t ret; uint8_t ret;
tcg_debug_assert((mop & MO_SIZE) == MO_8);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, get_memop(oi), ra, access_type); haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = ldub_p(haddr); ret = ldub_p(haddr);
clear_helper_retaddr(); clear_helper_retaddr();
return ret; return ret;
} }
static uint16_t do_ld2_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi, tcg_target_ulong helper_ldub_mmu(CPUArchState *env, uint64_t addr,
uintptr_t ra, MMUAccessType access_type) MemOpIdx oi, uintptr_t ra)
{
return do_ld1_mmu(env, addr, get_memop(oi), ra);
}
tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return (int8_t)do_ld1_mmu(env, addr, get_memop(oi), ra);
}
uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint8_t ret = do_ld1_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static uint16_t do_ld2_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{ {
void *haddr; void *haddr;
uint16_t ret; uint16_t ret;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_16);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type); haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_2(cpu, ra, haddr, mop); ret = load_atom_2(env, ra, haddr, mop);
clear_helper_retaddr(); clear_helper_retaddr();
if (mop & MO_BSWAP) { if (mop & MO_BSWAP) {
@@ -991,16 +1011,36 @@ static uint16_t do_ld2_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
return ret; return ret;
} }
static uint32_t do_ld4_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi, tcg_target_ulong helper_lduw_mmu(CPUArchState *env, uint64_t addr,
uintptr_t ra, MMUAccessType access_type) MemOpIdx oi, uintptr_t ra)
{
return do_ld2_mmu(env, addr, get_memop(oi), ra);
}
tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return (int16_t)do_ld2_mmu(env, addr, get_memop(oi), ra);
}
uint16_t cpu_ldw_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint16_t ret = do_ld2_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static uint32_t do_ld4_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{ {
void *haddr; void *haddr;
uint32_t ret; uint32_t ret;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_32);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type); haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_4(cpu, ra, haddr, mop); ret = load_atom_4(env, ra, haddr, mop);
clear_helper_retaddr(); clear_helper_retaddr();
if (mop & MO_BSWAP) { if (mop & MO_BSWAP) {
@@ -1009,16 +1049,36 @@ static uint32_t do_ld4_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
return ret; return ret;
} }
static uint64_t do_ld8_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi, tcg_target_ulong helper_ldul_mmu(CPUArchState *env, uint64_t addr,
uintptr_t ra, MMUAccessType access_type) MemOpIdx oi, uintptr_t ra)
{
return do_ld4_mmu(env, addr, get_memop(oi), ra);
}
tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return (int32_t)do_ld4_mmu(env, addr, get_memop(oi), ra);
}
uint32_t cpu_ldl_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint32_t ret = do_ld4_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static uint64_t do_ld8_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{ {
void *haddr; void *haddr;
uint64_t ret; uint64_t ret;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_64);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type); haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_8(cpu, ra, haddr, mop); ret = load_atom_8(env, ra, haddr, mop);
clear_helper_retaddr(); clear_helper_retaddr();
if (mop & MO_BSWAP) { if (mop & MO_BSWAP) {
@@ -1027,17 +1087,30 @@ static uint64_t do_ld8_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
return ret; return ret;
} }
static Int128 do_ld16_mmu(CPUState *cpu, abi_ptr addr, uint64_t helper_ldq_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra) MemOpIdx oi, uintptr_t ra)
{
return do_ld8_mmu(env, addr, get_memop(oi), ra);
}
uint64_t cpu_ldq_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint64_t ret = do_ld8_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static Int128 do_ld16_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{ {
void *haddr; void *haddr;
Int128 ret; Int128 ret;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_128); tcg_debug_assert((mop & MO_SIZE) == MO_128);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_LOAD); haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_16(cpu, ra, haddr, mop); ret = load_atom_16(env, ra, haddr, mop);
clear_helper_retaddr(); clear_helper_retaddr();
if (mop & MO_BSWAP) { if (mop & MO_BSWAP) {
@@ -1046,81 +1119,171 @@ static Int128 do_ld16_mmu(CPUState *cpu, abi_ptr addr,
return ret; return ret;
} }
static void do_st1_mmu(CPUState *cpu, vaddr addr, uint8_t val, Int128 helper_ld16_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra) MemOpIdx oi, uintptr_t ra)
{
return do_ld16_mmu(env, addr, get_memop(oi), ra);
}
Int128 helper_ld_i128(CPUArchState *env, uint64_t addr, MemOpIdx oi)
{
return helper_ld16_mmu(env, addr, oi, GETPC());
}
Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
Int128 ret = do_ld16_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static void do_st1_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
MemOp mop, uintptr_t ra)
{ {
void *haddr; void *haddr;
tcg_debug_assert((mop & MO_SIZE) == MO_8);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST); cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, get_memop(oi), ra, MMU_DATA_STORE); haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
stb_p(haddr, val); stb_p(haddr, val);
clear_helper_retaddr(); clear_helper_retaddr();
} }
static void do_st2_mmu(CPUState *cpu, vaddr addr, uint16_t val, void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t ra) MemOpIdx oi, uintptr_t ra)
{
do_st1_mmu(env, addr, val, get_memop(oi), ra);
}
void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st1_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
static void do_st2_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
MemOp mop, uintptr_t ra)
{ {
void *haddr; void *haddr;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_16);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST); cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE); haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) { if (mop & MO_BSWAP) {
val = bswap16(val); val = bswap16(val);
} }
store_atom_2(cpu, ra, haddr, mop, val); store_atom_2(env, ra, haddr, mop, val);
clear_helper_retaddr(); clear_helper_retaddr();
} }
static void do_st4_mmu(CPUState *cpu, vaddr addr, uint32_t val, void helper_stw_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t ra) MemOpIdx oi, uintptr_t ra)
{
do_st2_mmu(env, addr, val, get_memop(oi), ra);
}
void cpu_stw_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st2_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
static void do_st4_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
MemOp mop, uintptr_t ra)
{ {
void *haddr; void *haddr;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_32);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST); cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE); haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) { if (mop & MO_BSWAP) {
val = bswap32(val); val = bswap32(val);
} }
store_atom_4(cpu, ra, haddr, mop, val); store_atom_4(env, ra, haddr, mop, val);
clear_helper_retaddr(); clear_helper_retaddr();
} }
static void do_st8_mmu(CPUState *cpu, vaddr addr, uint64_t val, void helper_stl_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t ra) MemOpIdx oi, uintptr_t ra)
{
do_st4_mmu(env, addr, val, get_memop(oi), ra);
}
void cpu_stl_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st4_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
static void do_st8_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
MemOp mop, uintptr_t ra)
{ {
void *haddr; void *haddr;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_64);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST); cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE); haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) { if (mop & MO_BSWAP) {
val = bswap64(val); val = bswap64(val);
} }
store_atom_8(cpu, ra, haddr, mop, val); store_atom_8(env, ra, haddr, mop, val);
clear_helper_retaddr(); clear_helper_retaddr();
} }
static void do_st16_mmu(CPUState *cpu, vaddr addr, Int128 val, void helper_stq_mmu(CPUArchState *env, uint64_t addr, uint64_t val,
MemOpIdx oi, uintptr_t ra) MemOpIdx oi, uintptr_t ra)
{
do_st8_mmu(env, addr, val, get_memop(oi), ra);
}
void cpu_stq_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st8_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
static void do_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val,
MemOp mop, uintptr_t ra)
{ {
void *haddr; void *haddr;
MemOpIdx mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_128);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST); cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE); haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) { if (mop & MO_BSWAP) {
val = bswap128(val); val = bswap128(val);
} }
store_atom_16(cpu, ra, haddr, mop, val); store_atom_16(env, ra, haddr, mop, val);
clear_helper_retaddr(); clear_helper_retaddr();
} }
void helper_st16_mmu(CPUArchState *env, uint64_t addr, Int128 val,
MemOpIdx oi, uintptr_t ra)
{
do_st16_mmu(env, addr, val, get_memop(oi), ra);
}
void helper_st_i128(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi)
{
helper_st16_mmu(env, addr, val, oi, GETPC());
}
void cpu_st16_mmu(CPUArchState *env, abi_ptr addr,
Int128 val, MemOpIdx oi, uintptr_t ra)
{
do_st16_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr ptr) uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr ptr)
{ {
uint32_t ret; uint32_t ret;
@@ -1167,7 +1330,7 @@ uint8_t cpu_ldb_code_mmu(CPUArchState *env, abi_ptr addr,
     void *haddr;
     uint8_t ret;
-    haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_INST_FETCH);
+    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_INST_FETCH);
     ret = ldub_p(haddr);
     clear_helper_retaddr();
     return ret;
@@ -1179,7 +1342,7 @@ uint16_t cpu_ldw_code_mmu(CPUArchState *env, abi_ptr addr,
     void *haddr;
     uint16_t ret;
-    haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_INST_FETCH);
+    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_INST_FETCH);
     ret = lduw_p(haddr);
     clear_helper_retaddr();
     if (get_memop(oi) & MO_BSWAP) {
@@ -1194,7 +1357,7 @@ uint32_t cpu_ldl_code_mmu(CPUArchState *env, abi_ptr addr,
     void *haddr;
     uint32_t ret;
-    haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_INST_FETCH);
+    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_INST_FETCH);
     ret = ldl_p(haddr);
     clear_helper_retaddr();
     if (get_memop(oi) & MO_BSWAP) {
@@ -1209,7 +1372,7 @@ uint64_t cpu_ldq_code_mmu(CPUArchState *env, abi_ptr addr,
     void *haddr;
     uint64_t ret;
-    haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
+    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD);
     ret = ldq_p(haddr);
     clear_helper_retaddr();
     if (get_memop(oi) & MO_BSWAP) {
@@ -1223,7 +1386,7 @@ uint64_t cpu_ldq_code_mmu(CPUArchState *env, abi_ptr addr,
 /*
  * Do not allow unaligned operations to proceed. Return the host address.
  */
-static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
+static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi,
                                int size, uintptr_t retaddr)
 {
     MemOp mop = get_memop(oi);
@@ -1232,15 +1395,15 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
     /* Enforce guest required alignment. */
     if (unlikely(addr & ((1 << a_bits) - 1))) {
-        cpu_loop_exit_sigbus(cpu, addr, MMU_DATA_STORE, retaddr);
+        cpu_loop_exit_sigbus(env_cpu(env), addr, MMU_DATA_STORE, retaddr);
     }
     /* Enforce qemu required alignment. */
     if (unlikely(addr & (size - 1))) {
-        cpu_loop_exit_atomic(cpu, retaddr);
+        cpu_loop_exit_atomic(env_cpu(env), retaddr);
     }
-    ret = g2h(cpu, addr);
+    ret = g2h(env_cpu(env), addr);
     set_helper_retaddr(retaddr);
     return ret;
 }
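A quick illustration of the two alignment checks in atomic_mmu_lookup() above (this example is not part of the diff; the addresses and size are made up): the guest-required check uses the alignment bits encoded in the MemOp, while the qemu-required check simply demands that the access be naturally aligned to its size.

    #include <assert.h>

    int main(void)
    {
        /* Natural-alignment test used for the "qemu required" case:
         * addr & (size - 1) is zero only when addr is a multiple of size. */
        unsigned long addr_ok  = 0x1008;   /* hypothetical guest address */
        unsigned long addr_bad = 0x100a;
        int size = 8;                      /* 8-byte atomic access */

        assert((addr_ok  & (size - 1)) == 0);   /* aligned   -> lookup proceeds      */
        assert((addr_bad & (size - 1)) != 0);   /* unaligned -> cpu_loop_exit_atomic */
        return 0;
    }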


@@ -904,7 +904,7 @@ static void alsa_init_per_direction(AudiodevAlsaPerDirectionOptions *apdo)
     }
 }
-static void *alsa_audio_init(Audiodev *dev, Error **errp)
+static void *alsa_audio_init(Audiodev *dev)
 {
     AudiodevAlsaOptions *aopts;
     assert(dev->driver == AUDIODEV_DRIVER_ALSA);
@@ -960,6 +960,7 @@ static struct audio_driver alsa_audio_driver = {
     .init = alsa_audio_init,
     .fini = alsa_audio_fini,
     .pcm_ops = &alsa_pcm_ops,
+    .can_be_default = 1,
     .max_voices_out = INT_MAX,
     .max_voices_in = INT_MAX,
     .voice_size_out = sizeof (ALSAVoiceOut),


@@ -26,7 +26,6 @@
 #include "audio/audio.h"
 #include "monitor/hmp.h"
 #include "monitor/monitor.h"
-#include "qapi/error.h"
 #include "qapi/qmp/qdict.h"
 static QLIST_HEAD (capture_list_head, CaptureState) capture_head;
@@ -66,11 +65,10 @@ void hmp_wavcapture(Monitor *mon, const QDict *qdict)
     int nchannels = qdict_get_try_int(qdict, "nchannels", 2);
     const char *audiodev = qdict_get_str(qdict, "audiodev");
     CaptureState *s;
-    Error *local_err = NULL;
-    AudioState *as = audio_state_by_name(audiodev, &local_err);
+    AudioState *as = audio_state_by_name(audiodev);
     if (!as) {
-        error_report_err(local_err);
+        monitor_printf(mon, "Audiodev '%s' not found\n", audiodev);
         return;
     }


@@ -32,9 +32,7 @@
#include "qapi/qobject-input-visitor.h" #include "qapi/qobject-input-visitor.h"
#include "qapi/qapi-visit-audio.h" #include "qapi/qapi-visit-audio.h"
#include "qapi/qapi-commands-audio.h" #include "qapi/qapi-commands-audio.h"
#include "qapi/qmp/qdict.h"
#include "qemu/cutils.h" #include "qemu/cutils.h"
#include "qemu/error-report.h"
#include "qemu/log.h" #include "qemu/log.h"
#include "qemu/module.h" #include "qemu/module.h"
#include "qemu/help_option.h" #include "qemu/help_option.h"
@@ -63,22 +61,19 @@ const char *audio_prio_list[] = {
"spice", "spice",
CONFIG_AUDIO_DRIVERS CONFIG_AUDIO_DRIVERS
"none", "none",
"wav",
NULL NULL
}; };
static QLIST_HEAD(, audio_driver) audio_drivers; static QLIST_HEAD(, audio_driver) audio_drivers;
static AudiodevListHead audiodevs = static AudiodevListHead audiodevs = QSIMPLEQ_HEAD_INITIALIZER(audiodevs);
QSIMPLEQ_HEAD_INITIALIZER(audiodevs);
static AudiodevListHead default_audiodevs =
QSIMPLEQ_HEAD_INITIALIZER(default_audiodevs);
void audio_driver_register(audio_driver *drv) void audio_driver_register(audio_driver *drv)
{ {
QLIST_INSERT_HEAD(&audio_drivers, drv, next); QLIST_INSERT_HEAD(&audio_drivers, drv, next);
} }
static audio_driver *audio_driver_lookup(const char *name) audio_driver *audio_driver_lookup(const char *name)
{ {
struct audio_driver *d; struct audio_driver *d;
Error *local_err = NULL; Error *local_err = NULL;
@@ -104,7 +99,6 @@ static audio_driver *audio_driver_lookup(const char *name)
static QTAILQ_HEAD(AudioStateHead, AudioState) audio_states = static QTAILQ_HEAD(AudioStateHead, AudioState) audio_states =
QTAILQ_HEAD_INITIALIZER(audio_states); QTAILQ_HEAD_INITIALIZER(audio_states);
static AudioState *default_audio_state;
const struct mixeng_volume nominal_volume = { const struct mixeng_volume nominal_volume = {
.mute = 0, .mute = 0,
@@ -117,6 +111,8 @@ const struct mixeng_volume nominal_volume = {
#endif #endif
}; };
static bool legacy_config = true;
int audio_bug (const char *funcname, int cond) int audio_bug (const char *funcname, int cond)
{ {
if (cond) { if (cond) {
@@ -1557,11 +1553,9 @@ size_t audio_generic_read(HWVoiceIn *hw, void *buf, size_t size)
} }
static int audio_driver_init(AudioState *s, struct audio_driver *drv, static int audio_driver_init(AudioState *s, struct audio_driver *drv,
Audiodev *dev, Error **errp) bool msg, Audiodev *dev)
{ {
Error *local_err = NULL; s->drv_opaque = drv->init(dev);
s->drv_opaque = drv->init(dev, &local_err);
if (s->drv_opaque) { if (s->drv_opaque) {
if (!drv->pcm_ops->get_buffer_in) { if (!drv->pcm_ops->get_buffer_in) {
@@ -1573,15 +1567,13 @@ static int audio_driver_init(AudioState *s, struct audio_driver *drv,
drv->pcm_ops->put_buffer_out = audio_generic_put_buffer_out; drv->pcm_ops->put_buffer_out = audio_generic_put_buffer_out;
} }
audio_init_nb_voices_out(s, drv, 1); audio_init_nb_voices_out(s, drv);
audio_init_nb_voices_in(s, drv, 0); audio_init_nb_voices_in(s, drv);
s->drv = drv; s->drv = drv;
return 0; return 0;
} else { } else {
if (local_err) { if (msg) {
error_propagate(errp, local_err); dolog("Could not init `%s' audio driver\n", drv->name);
} else {
error_setg(errp, "Could not init `%s' audio driver", drv->name);
} }
return -1; return -1;
} }
@@ -1661,7 +1653,6 @@ static void free_audio_state(AudioState *s)
void audio_cleanup(void) void audio_cleanup(void)
{ {
default_audio_state = NULL;
while (!QTAILQ_EMPTY(&audio_states)) { while (!QTAILQ_EMPTY(&audio_states)) {
AudioState *s = QTAILQ_FIRST(&audio_states); AudioState *s = QTAILQ_FIRST(&audio_states);
QTAILQ_REMOVE(&audio_states, s, list); QTAILQ_REMOVE(&audio_states, s, list);
@@ -1688,25 +1679,19 @@ static const VMStateDescription vmstate_audio = {
} }
}; };
void audio_create_default_audiodevs(void) static void audio_validate_opts(Audiodev *dev, Error **errp);
static AudiodevListEntry *audiodev_find(
AudiodevListHead *head, const char *drvname)
{ {
for (int i = 0; audio_prio_list[i]; i++) { AudiodevListEntry *e;
if (audio_driver_lookup(audio_prio_list[i])) { QSIMPLEQ_FOREACH(e, head, next) {
QDict *dict = qdict_new(); if (strcmp(AudiodevDriver_str(e->dev->driver), drvname) == 0) {
Audiodev *dev = NULL; return e;
Visitor *v;
qdict_put_str(dict, "driver", audio_prio_list[i]);
qdict_put_str(dict, "id", "#default");
v = qobject_input_visitor_new_keyval(QOBJECT(dict));
qobject_unref(dict);
visit_type_Audiodev(v, NULL, &dev, &error_fatal);
visit_free(v);
audio_define_default(dev, &error_abort);
} }
} }
return NULL;
} }
/* /*
@@ -1715,16 +1700,62 @@ void audio_create_default_audiodevs(void)
* if dev == NULL => legacy implicit initialization, return the already created * if dev == NULL => legacy implicit initialization, return the already created
* state or create a new one * state or create a new one
*/ */
static AudioState *audio_init(Audiodev *dev, Error **errp) static AudioState *audio_init(Audiodev *dev, const char *name)
{ {
static bool atexit_registered; static bool atexit_registered;
size_t i;
int done = 0; int done = 0;
const char *drvname; const char *drvname = NULL;
VMChangeStateEntry *vmse; VMChangeStateEntry *vmse;
AudioState *s; AudioState *s;
struct audio_driver *driver; struct audio_driver *driver;
/* silence gcc warning about uninitialized variable */
AudiodevListHead head = QSIMPLEQ_HEAD_INITIALIZER(head);
if (using_spice) {
/*
* When using spice allow the spice audio driver being picked
* as default.
*
* Temporary hack. Using audio devices without explicit
* audiodev= property is already deprecated. Same goes for
* the -soundhw switch. Once this support gets finally
* removed we can also drop the concept of a default audio
* backend and this can go away.
*/
driver = audio_driver_lookup("spice");
if (driver) {
driver->can_be_default = 1;
}
}
if (dev) {
/* -audiodev option */
legacy_config = false;
drvname = AudiodevDriver_str(dev->driver);
} else if (!QTAILQ_EMPTY(&audio_states)) {
if (!legacy_config) {
dolog("Device %s: audiodev default parameter is deprecated, please "
"specify audiodev=%s\n", name,
QTAILQ_FIRST(&audio_states)->dev->id);
}
return QTAILQ_FIRST(&audio_states);
} else {
/* legacy implicit initialization */
head = audio_handle_legacy_opts();
/*
* In case of legacy initialization, all Audiodevs in the list will have
* the same configuration (except the driver), so it doesn't matter which
* one we chose. We need an Audiodev to set up AudioState before we can
* init a driver. Also note that dev at this point is still in the
* list.
*/
dev = QSIMPLEQ_FIRST(&head)->dev;
audio_validate_opts(dev, &error_abort);
}
s = g_new0(AudioState, 1); s = g_new0(AudioState, 1);
s->dev = dev;
QLIST_INIT (&s->hw_head_out); QLIST_INIT (&s->hw_head_out);
QLIST_INIT (&s->hw_head_in); QLIST_INIT (&s->hw_head_in);
@@ -1736,36 +1767,56 @@ static AudioState *audio_init(Audiodev *dev, Error **errp)
s->ts = timer_new_ns(QEMU_CLOCK_VIRTUAL, audio_timer, s); s->ts = timer_new_ns(QEMU_CLOCK_VIRTUAL, audio_timer, s);
if (dev) { s->nb_hw_voices_out = audio_get_pdo_out(dev)->voices;
/* -audiodev option */ s->nb_hw_voices_in = audio_get_pdo_in(dev)->voices;
s->dev = dev;
drvname = AudiodevDriver_str(dev->driver); if (s->nb_hw_voices_out < 1) {
dolog ("Bogus number of playback voices %d, setting to 1\n",
s->nb_hw_voices_out);
s->nb_hw_voices_out = 1;
}
if (s->nb_hw_voices_in < 0) {
dolog ("Bogus number of capture voices %d, setting to 0\n",
s->nb_hw_voices_in);
s->nb_hw_voices_in = 0;
}
if (drvname) {
driver = audio_driver_lookup(drvname); driver = audio_driver_lookup(drvname);
if (driver) { if (driver) {
done = !audio_driver_init(s, driver, dev, errp); done = !audio_driver_init(s, driver, true, dev);
} else { } else {
error_setg(errp, "Unknown audio driver `%s'\n", drvname); dolog ("Unknown audio driver `%s'\n", drvname);
} }
if (!done) { if (!done) {
goto out; free_audio_state(s);
return NULL;
} }
} else { } else {
assert(!default_audio_state); for (i = 0; audio_prio_list[i]; i++) {
for (;;) { AudiodevListEntry *e = audiodev_find(&head, audio_prio_list[i]);
AudiodevListEntry *e = QSIMPLEQ_FIRST(&default_audiodevs); driver = audio_driver_lookup(audio_prio_list[i]);
if (!e) {
error_setg(errp, "no default audio driver available"); if (e && driver) {
goto out; s->dev = dev = e->dev;
audio_validate_opts(dev, &error_abort);
done = !audio_driver_init(s, driver, false, dev);
if (done) {
e->dev = NULL;
break;
}
} }
s->dev = dev = e->dev;
drvname = AudiodevDriver_str(dev->driver);
driver = audio_driver_lookup(drvname);
if (!audio_driver_init(s, driver, dev, NULL)) {
break;
}
QSIMPLEQ_REMOVE_HEAD(&default_audiodevs, next);
} }
} }
audio_free_audiodev_list(&head);
if (!done) {
driver = audio_driver_lookup("none");
done = !audio_driver_init(s, driver, false, dev);
assert(done);
dolog("warning: Using timer based audio emulation\n");
}
if (dev->timer_period <= 0) { if (dev->timer_period <= 0) {
s->period_ticks = 1; s->period_ticks = 1;
@@ -1781,43 +1832,29 @@ static AudioState *audio_init(Audiodev *dev, Error **errp)
QTAILQ_INSERT_TAIL(&audio_states, s, list); QTAILQ_INSERT_TAIL(&audio_states, s, list);
QLIST_INIT (&s->card_head); QLIST_INIT (&s->card_head);
vmstate_register_any(NULL, &vmstate_audio, s); vmstate_register (NULL, 0, &vmstate_audio, s);
return s; return s;
out:
free_audio_state(s);
return NULL;
} }
AudioState *audio_get_default_audio_state(Error **errp) void audio_free_audiodev_list(AudiodevListHead *head)
{ {
if (!default_audio_state) { AudiodevListEntry *e;
default_audio_state = audio_init(NULL, errp); while ((e = QSIMPLEQ_FIRST(head))) {
if (!default_audio_state) { QSIMPLEQ_REMOVE_HEAD(head, next);
if (!QSIMPLEQ_EMPTY(&audiodevs)) { qapi_free_Audiodev(e->dev);
error_append_hint(errp, "Perhaps you wanted to use -audio or set audiodev=%s?\n", g_free(e);
QSIMPLEQ_FIRST(&audiodevs)->dev->id);
}
}
} }
return default_audio_state;
} }
bool AUD_register_card (const char *name, QEMUSoundCard *card, Error **errp) void AUD_register_card (const char *name, QEMUSoundCard *card)
{ {
if (!card->state) { if (!card->state) {
card->state = audio_get_default_audio_state(errp); card->state = audio_init(NULL, name);
if (!card->state) {
return false;
}
} }
card->name = g_strdup (name); card->name = g_strdup (name);
memset (&card->entries, 0, sizeof (card->entries)); memset (&card->entries, 0, sizeof (card->entries));
QLIST_INSERT_HEAD(&card->state->card_head, card, entries); QLIST_INSERT_HEAD(&card->state->card_head, card, entries);
return true;
} }
void AUD_remove_card (QEMUSoundCard *card) void AUD_remove_card (QEMUSoundCard *card)
@@ -1839,8 +1876,10 @@ CaptureVoiceOut *AUD_add_capture(
struct capture_callback *cb; struct capture_callback *cb;
if (!s) { if (!s) {
error_report("Capturing without setting an audiodev is not supported"); if (!legacy_config) {
abort(); dolog("Capturing without setting an audiodev is deprecated\n");
}
s = audio_init(NULL, NULL);
} }
if (!audio_get_pdo_out(s->dev)->mixing_engine) { if (!audio_get_pdo_out(s->dev)->mixing_engine) {
@@ -2144,24 +2183,17 @@ void audio_define(Audiodev *dev)
QSIMPLEQ_INSERT_TAIL(&audiodevs, e, next); QSIMPLEQ_INSERT_TAIL(&audiodevs, e, next);
} }
void audio_define_default(Audiodev *dev, Error **errp) bool audio_init_audiodevs(void)
{
AudiodevListEntry *e;
audio_validate_opts(dev, errp);
e = g_new0(AudiodevListEntry, 1);
e->dev = dev;
QSIMPLEQ_INSERT_TAIL(&default_audiodevs, e, next);
}
void audio_init_audiodevs(void)
{ {
AudiodevListEntry *e; AudiodevListEntry *e;
QSIMPLEQ_FOREACH(e, &audiodevs, next) { QSIMPLEQ_FOREACH(e, &audiodevs, next) {
audio_init(e->dev, &error_fatal); if (!audio_init(e->dev, NULL)) {
return false;
}
} }
return true;
} }
audsettings audiodev_to_audsettings(AudiodevPerDirectionOptions *pdo) audsettings audiodev_to_audsettings(AudiodevPerDirectionOptions *pdo)
@@ -2223,7 +2255,7 @@ int audio_buffer_bytes(AudiodevPerDirectionOptions *pdo,
audioformat_bytes_per_sample(as->fmt); audioformat_bytes_per_sample(as->fmt);
} }
AudioState *audio_state_by_name(const char *name, Error **errp) AudioState *audio_state_by_name(const char *name)
{ {
AudioState *s; AudioState *s;
QTAILQ_FOREACH(s, &audio_states, list) { QTAILQ_FOREACH(s, &audio_states, list) {
@@ -2232,7 +2264,6 @@ AudioState *audio_state_by_name(const char *name, Error **errp)
return s; return s;
} }
} }
error_setg(errp, "audiodev '%s' not found", name);
return NULL; return NULL;
} }
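As an aside, a schematic sketch (not code from this comparison; the device type, field names, and callback are invented for illustration) of how a sound device would use the bool/Error ** variant of AUD_register_card() that appears in this diff:

    /* Hypothetical device realize function; MyCardState, MYCARD() and
     * "mycard" are made-up names used only for illustration. */
    static void mycard_realize(DeviceState *dev, Error **errp)
    {
        MyCardState *s = MYCARD(dev);

        /* The bool/Error ** variant reports failure through errp... */
        if (!AUD_register_card("mycard", &s->card, errp)) {
            return;
        }
        /* ...whereas the older void variant silently fell back to a
         * default audio backend created on demand. */
    }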


@@ -94,7 +94,7 @@ typedef struct QEMUAudioTimeStamp {
 void AUD_vlog (const char *cap, const char *fmt, va_list ap) G_GNUC_PRINTF(2, 0);
 void AUD_log (const char *cap, const char *fmt, ...) G_GNUC_PRINTF(2, 3);
-bool AUD_register_card (const char *name, QEMUSoundCard *card, Error **errp);
+void AUD_register_card (const char *name, QEMUSoundCard *card);
 void AUD_remove_card (QEMUSoundCard *card);
 CaptureVoiceOut *AUD_add_capture(
     AudioState *s,
@@ -169,14 +169,12 @@ void audio_sample_from_uint64(void *samples, int pos,
     uint64_t left, uint64_t right);
 void audio_define(Audiodev *audio);
-void audio_define_default(Audiodev *dev, Error **errp);
 void audio_parse_option(const char *opt);
-void audio_create_default_audiodevs(void);
-void audio_init_audiodevs(void);
+bool audio_init_audiodevs(void);
 void audio_help(void);
+void audio_legacy_help(void);
-AudioState *audio_state_by_name(const char *name, Error **errp);
-AudioState *audio_get_default_audio_state(Error **errp);
+AudioState *audio_state_by_name(const char *name);
 const char *audio_get_id(QEMUSoundCard *card);
 #define DEFINE_AUDIO_PROPERTIES(_s, _f) \


@@ -140,12 +140,13 @@ typedef struct audio_driver audio_driver;
 struct audio_driver {
     const char *name;
     const char *descr;
-    void *(*init) (Audiodev *, Error **);
+    void *(*init) (Audiodev *);
     void (*fini) (void *);
 #ifdef CONFIG_GIO
     void (*set_dbus_server) (AudioState *s, GDBusObjectManagerServer *manager, bool p2p);
 #endif
     struct audio_pcm_ops *pcm_ops;
+    int can_be_default;
     int max_voices_out;
     int max_voices_in;
     size_t voice_size_out;
@@ -242,6 +243,7 @@ extern const struct mixeng_volume nominal_volume;
 extern const char *audio_prio_list[];
 void audio_driver_register(audio_driver *drv);
+audio_driver *audio_driver_lookup(const char *name);
 void audio_pcm_init_info (struct audio_pcm_info *info, struct audsettings *as);
 void audio_pcm_info_clear_buf (struct audio_pcm_info *info, void *buf, int len);
@@ -295,6 +297,9 @@ typedef struct AudiodevListEntry {
 } AudiodevListEntry;
 typedef QSIMPLEQ_HEAD(, AudiodevListEntry) AudiodevListHead;
+AudiodevListHead audio_handle_legacy_opts(void);
+void audio_free_audiodev_list(AudiodevListHead *head);
 void audio_create_pdos(Audiodev *dev);
 AudiodevPerDirectionOptions *audio_get_pdo_in(Audiodev *dev);

audio/audio_legacy.c Normal file

@@ -0,0 +1,591 @@
/*
* QEMU Audio subsystem: legacy configuration handling
*
* Copyright (c) 2015-2019 Zoltán Kővágó <DirtY.iCE.hu@gmail.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "audio.h"
#include "audio_int.h"
#include "qemu/cutils.h"
#include "qemu/timer.h"
#include "qapi/error.h"
#include "qapi/qapi-visit-audio.h"
#include "qapi/visitor-impl.h"
#define AUDIO_CAP "audio-legacy"
#include "audio_int.h"
static uint32_t toui32(const char *str)
{
uint64_t ret;
if (parse_uint_full(str, 10, &ret) || ret > UINT32_MAX) {
dolog("Invalid integer value `%s'\n", str);
exit(1);
}
return ret;
}
/* helper functions to convert env variables */
static void get_bool(const char *env, bool *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
*dst = toui32(val) != 0;
*has_dst = true;
}
}
static void get_int(const char *env, uint32_t *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
*dst = toui32(val);
*has_dst = true;
}
}
static void get_str(const char *env, char **dst)
{
const char *val = getenv(env);
if (val) {
g_free(*dst);
*dst = g_strdup(val);
}
}
static void get_fmt(const char *env, AudioFormat *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
size_t i;
for (i = 0; i < AudioFormat_lookup.size; ++i) {
if (strcasecmp(val, AudioFormat_lookup.array[i]) == 0) {
*dst = i;
*has_dst = true;
return;
}
}
dolog("Invalid audio format `%s'\n", val);
exit(1);
}
}
#if defined(CONFIG_AUDIO_ALSA) || defined(CONFIG_AUDIO_DSOUND)
static void get_millis_to_usecs(const char *env, uint32_t *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
*dst = toui32(val) * 1000;
*has_dst = true;
}
}
#endif
#if defined(CONFIG_AUDIO_ALSA) || defined(CONFIG_AUDIO_COREAUDIO) || \
defined(CONFIG_AUDIO_PA) || defined(CONFIG_AUDIO_SDL) || \
defined(CONFIG_AUDIO_DSOUND) || defined(CONFIG_AUDIO_OSS)
static uint32_t frames_to_usecs(uint32_t frames,
AudiodevPerDirectionOptions *pdo)
{
uint32_t freq = pdo->has_frequency ? pdo->frequency : 44100;
return (frames * 1000000 + freq / 2) / freq;
}
#endif
#ifdef CONFIG_AUDIO_COREAUDIO
static void get_frames_to_usecs(const char *env, uint32_t *dst, bool *has_dst,
AudiodevPerDirectionOptions *pdo)
{
const char *val = getenv(env);
if (val) {
*dst = frames_to_usecs(toui32(val), pdo);
*has_dst = true;
}
}
#endif
#if defined(CONFIG_AUDIO_PA) || defined(CONFIG_AUDIO_SDL) || \
defined(CONFIG_AUDIO_DSOUND) || defined(CONFIG_AUDIO_OSS)
static uint32_t samples_to_usecs(uint32_t samples,
AudiodevPerDirectionOptions *pdo)
{
uint32_t channels = pdo->has_channels ? pdo->channels : 2;
return frames_to_usecs(samples / channels, pdo);
}
#endif
#if defined(CONFIG_AUDIO_PA) || defined(CONFIG_AUDIO_SDL)
static void get_samples_to_usecs(const char *env, uint32_t *dst, bool *has_dst,
AudiodevPerDirectionOptions *pdo)
{
const char *val = getenv(env);
if (val) {
*dst = samples_to_usecs(toui32(val), pdo);
*has_dst = true;
}
}
#endif
#if defined(CONFIG_AUDIO_DSOUND) || defined(CONFIG_AUDIO_OSS)
static uint32_t bytes_to_usecs(uint32_t bytes, AudiodevPerDirectionOptions *pdo)
{
AudioFormat fmt = pdo->has_format ? pdo->format : AUDIO_FORMAT_S16;
uint32_t bytes_per_sample = audioformat_bytes_per_sample(fmt);
return samples_to_usecs(bytes / bytes_per_sample, pdo);
}
static void get_bytes_to_usecs(const char *env, uint32_t *dst, bool *has_dst,
AudiodevPerDirectionOptions *pdo)
{
const char *val = getenv(env);
if (val) {
*dst = bytes_to_usecs(toui32(val), pdo);
*has_dst = true;
}
}
#endif
/* backend specific functions */
#ifdef CONFIG_AUDIO_ALSA
/* ALSA */
static void handle_alsa_per_direction(
AudiodevAlsaPerDirectionOptions *apdo, const char *prefix)
{
char buf[64];
size_t len = strlen(prefix);
bool size_in_usecs = false;
bool dummy;
memcpy(buf, prefix, len);
strcpy(buf + len, "TRY_POLL");
get_bool(buf, &apdo->try_poll, &apdo->has_try_poll);
strcpy(buf + len, "DEV");
get_str(buf, &apdo->dev);
strcpy(buf + len, "SIZE_IN_USEC");
get_bool(buf, &size_in_usecs, &dummy);
strcpy(buf + len, "PERIOD_SIZE");
get_int(buf, &apdo->period_length, &apdo->has_period_length);
if (apdo->has_period_length && !size_in_usecs) {
apdo->period_length = frames_to_usecs(
apdo->period_length,
qapi_AudiodevAlsaPerDirectionOptions_base(apdo));
}
strcpy(buf + len, "BUFFER_SIZE");
get_int(buf, &apdo->buffer_length, &apdo->has_buffer_length);
if (apdo->has_buffer_length && !size_in_usecs) {
apdo->buffer_length = frames_to_usecs(
apdo->buffer_length,
qapi_AudiodevAlsaPerDirectionOptions_base(apdo));
}
}
static void handle_alsa(Audiodev *dev)
{
AudiodevAlsaOptions *aopt = &dev->u.alsa;
handle_alsa_per_direction(aopt->in, "QEMU_ALSA_ADC_");
handle_alsa_per_direction(aopt->out, "QEMU_ALSA_DAC_");
get_millis_to_usecs("QEMU_ALSA_THRESHOLD",
&aopt->threshold, &aopt->has_threshold);
}
#endif
#ifdef CONFIG_AUDIO_COREAUDIO
/* coreaudio */
static void handle_coreaudio(Audiodev *dev)
{
get_frames_to_usecs(
"QEMU_COREAUDIO_BUFFER_SIZE",
&dev->u.coreaudio.out->buffer_length,
&dev->u.coreaudio.out->has_buffer_length,
qapi_AudiodevCoreaudioPerDirectionOptions_base(dev->u.coreaudio.out));
get_int("QEMU_COREAUDIO_BUFFER_COUNT",
&dev->u.coreaudio.out->buffer_count,
&dev->u.coreaudio.out->has_buffer_count);
}
#endif
#ifdef CONFIG_AUDIO_DSOUND
/* dsound */
static void handle_dsound(Audiodev *dev)
{
get_millis_to_usecs("QEMU_DSOUND_LATENCY_MILLIS",
&dev->u.dsound.latency, &dev->u.dsound.has_latency);
get_bytes_to_usecs("QEMU_DSOUND_BUFSIZE_OUT",
&dev->u.dsound.out->buffer_length,
&dev->u.dsound.out->has_buffer_length,
dev->u.dsound.out);
get_bytes_to_usecs("QEMU_DSOUND_BUFSIZE_IN",
&dev->u.dsound.in->buffer_length,
&dev->u.dsound.in->has_buffer_length,
dev->u.dsound.in);
}
#endif
#ifdef CONFIG_AUDIO_OSS
/* OSS */
static void handle_oss_per_direction(
AudiodevOssPerDirectionOptions *opdo, const char *try_poll_env,
const char *dev_env)
{
get_bool(try_poll_env, &opdo->try_poll, &opdo->has_try_poll);
get_str(dev_env, &opdo->dev);
get_bytes_to_usecs("QEMU_OSS_FRAGSIZE",
&opdo->buffer_length, &opdo->has_buffer_length,
qapi_AudiodevOssPerDirectionOptions_base(opdo));
get_int("QEMU_OSS_NFRAGS", &opdo->buffer_count,
&opdo->has_buffer_count);
}
static void handle_oss(Audiodev *dev)
{
AudiodevOssOptions *oopt = &dev->u.oss;
handle_oss_per_direction(oopt->in, "QEMU_AUDIO_ADC_TRY_POLL",
"QEMU_OSS_ADC_DEV");
handle_oss_per_direction(oopt->out, "QEMU_AUDIO_DAC_TRY_POLL",
"QEMU_OSS_DAC_DEV");
get_bool("QEMU_OSS_MMAP", &oopt->try_mmap, &oopt->has_try_mmap);
get_bool("QEMU_OSS_EXCLUSIVE", &oopt->exclusive, &oopt->has_exclusive);
get_int("QEMU_OSS_POLICY", &oopt->dsp_policy, &oopt->has_dsp_policy);
}
#endif
#ifdef CONFIG_AUDIO_PA
/* pulseaudio */
static void handle_pa_per_direction(
AudiodevPaPerDirectionOptions *ppdo, const char *env)
{
get_str(env, &ppdo->name);
}
static void handle_pa(Audiodev *dev)
{
handle_pa_per_direction(dev->u.pa.in, "QEMU_PA_SOURCE");
handle_pa_per_direction(dev->u.pa.out, "QEMU_PA_SINK");
get_samples_to_usecs(
"QEMU_PA_SAMPLES", &dev->u.pa.in->buffer_length,
&dev->u.pa.in->has_buffer_length,
qapi_AudiodevPaPerDirectionOptions_base(dev->u.pa.in));
get_samples_to_usecs(
"QEMU_PA_SAMPLES", &dev->u.pa.out->buffer_length,
&dev->u.pa.out->has_buffer_length,
qapi_AudiodevPaPerDirectionOptions_base(dev->u.pa.out));
get_str("QEMU_PA_SERVER", &dev->u.pa.server);
}
#endif
#ifdef CONFIG_AUDIO_SDL
/* SDL */
static void handle_sdl(Audiodev *dev)
{
/* SDL is output only */
get_samples_to_usecs("QEMU_SDL_SAMPLES", &dev->u.sdl.out->buffer_length,
&dev->u.sdl.out->has_buffer_length,
qapi_AudiodevSdlPerDirectionOptions_base(dev->u.sdl.out));
}
#endif
/* wav */
static void handle_wav(Audiodev *dev)
{
get_int("QEMU_WAV_FREQUENCY",
&dev->u.wav.out->frequency, &dev->u.wav.out->has_frequency);
get_fmt("QEMU_WAV_FORMAT", &dev->u.wav.out->format,
&dev->u.wav.out->has_format);
get_int("QEMU_WAV_DAC_FIXED_CHANNELS",
&dev->u.wav.out->channels, &dev->u.wav.out->has_channels);
get_str("QEMU_WAV_PATH", &dev->u.wav.path);
}
/* general */
static void handle_per_direction(
AudiodevPerDirectionOptions *pdo, const char *prefix)
{
char buf[64];
size_t len = strlen(prefix);
memcpy(buf, prefix, len);
strcpy(buf + len, "FIXED_SETTINGS");
get_bool(buf, &pdo->fixed_settings, &pdo->has_fixed_settings);
strcpy(buf + len, "FIXED_FREQ");
get_int(buf, &pdo->frequency, &pdo->has_frequency);
strcpy(buf + len, "FIXED_FMT");
get_fmt(buf, &pdo->format, &pdo->has_format);
strcpy(buf + len, "FIXED_CHANNELS");
get_int(buf, &pdo->channels, &pdo->has_channels);
strcpy(buf + len, "VOICES");
get_int(buf, &pdo->voices, &pdo->has_voices);
}
static AudiodevListEntry *legacy_opt(const char *drvname)
{
AudiodevListEntry *e = g_new0(AudiodevListEntry, 1);
e->dev = g_new0(Audiodev, 1);
e->dev->id = g_strdup(drvname);
e->dev->driver = qapi_enum_parse(
&AudiodevDriver_lookup, drvname, -1, &error_abort);
audio_create_pdos(e->dev);
handle_per_direction(audio_get_pdo_in(e->dev), "QEMU_AUDIO_ADC_");
handle_per_direction(audio_get_pdo_out(e->dev), "QEMU_AUDIO_DAC_");
/* Original description: Timer period in HZ (0 - use lowest possible) */
get_int("QEMU_AUDIO_TIMER_PERIOD",
&e->dev->timer_period, &e->dev->has_timer_period);
if (e->dev->has_timer_period && e->dev->timer_period) {
e->dev->timer_period = NANOSECONDS_PER_SECOND / 1000 /
e->dev->timer_period;
}
switch (e->dev->driver) {
#ifdef CONFIG_AUDIO_ALSA
case AUDIODEV_DRIVER_ALSA:
handle_alsa(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_COREAUDIO
case AUDIODEV_DRIVER_COREAUDIO:
handle_coreaudio(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_DSOUND
case AUDIODEV_DRIVER_DSOUND:
handle_dsound(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_OSS
case AUDIODEV_DRIVER_OSS:
handle_oss(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_PA
case AUDIODEV_DRIVER_PA:
handle_pa(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_SDL
case AUDIODEV_DRIVER_SDL:
handle_sdl(e->dev);
break;
#endif
case AUDIODEV_DRIVER_WAV:
handle_wav(e->dev);
break;
default:
break;
}
return e;
}
AudiodevListHead audio_handle_legacy_opts(void)
{
const char *drvname = getenv("QEMU_AUDIO_DRV");
AudiodevListHead head = QSIMPLEQ_HEAD_INITIALIZER(head);
if (drvname) {
AudiodevListEntry *e;
audio_driver *driver = audio_driver_lookup(drvname);
if (!driver) {
dolog("Unknown audio driver `%s'\n", drvname);
exit(1);
}
e = legacy_opt(drvname);
QSIMPLEQ_INSERT_TAIL(&head, e, next);
} else {
for (int i = 0; audio_prio_list[i]; i++) {
audio_driver *driver = audio_driver_lookup(audio_prio_list[i]);
if (driver && driver->can_be_default) {
AudiodevListEntry *e = legacy_opt(driver->name);
QSIMPLEQ_INSERT_TAIL(&head, e, next);
}
}
if (QSIMPLEQ_EMPTY(&head)) {
dolog("Internal error: no default audio driver available\n");
exit(1);
}
}
return head;
}
/* visitor to print -audiodev option */
typedef struct {
Visitor visitor;
bool comma;
GList *path;
} LegacyPrintVisitor;
static bool lv_start_struct(Visitor *v, const char *name, void **obj,
size_t size, Error **errp)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
lv->path = g_list_append(lv->path, g_strdup(name));
return true;
}
static void lv_end_struct(Visitor *v, void **obj)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
lv->path = g_list_delete_link(lv->path, g_list_last(lv->path));
}
static void lv_print_key(Visitor *v, const char *name)
{
GList *e;
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
if (lv->comma) {
putchar(',');
} else {
lv->comma = true;
}
for (e = lv->path; e; e = e->next) {
if (e->data) {
printf("%s.", (const char *) e->data);
}
}
printf("%s=", name);
}
static bool lv_type_int64(Visitor *v, const char *name, int64_t *obj,
Error **errp)
{
lv_print_key(v, name);
printf("%" PRIi64, *obj);
return true;
}
static bool lv_type_uint64(Visitor *v, const char *name, uint64_t *obj,
Error **errp)
{
lv_print_key(v, name);
printf("%" PRIu64, *obj);
return true;
}
static bool lv_type_bool(Visitor *v, const char *name, bool *obj, Error **errp)
{
lv_print_key(v, name);
printf("%s", *obj ? "on" : "off");
return true;
}
static bool lv_type_str(Visitor *v, const char *name, char **obj, Error **errp)
{
const char *str = *obj;
lv_print_key(v, name);
while (*str) {
if (*str == ',') {
putchar(',');
}
putchar(*str++);
}
return true;
}
static void lv_complete(Visitor *v, void *opaque)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
assert(lv->path == NULL);
}
static void lv_free(Visitor *v)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
g_list_free_full(lv->path, g_free);
g_free(lv);
}
static Visitor *legacy_visitor_new(void)
{
LegacyPrintVisitor *lv = g_new0(LegacyPrintVisitor, 1);
lv->visitor.start_struct = lv_start_struct;
lv->visitor.end_struct = lv_end_struct;
/* lists not supported */
lv->visitor.type_int64 = lv_type_int64;
lv->visitor.type_uint64 = lv_type_uint64;
lv->visitor.type_bool = lv_type_bool;
lv->visitor.type_str = lv_type_str;
lv->visitor.type = VISITOR_OUTPUT;
lv->visitor.complete = lv_complete;
lv->visitor.free = lv_free;
return &lv->visitor;
}
void audio_legacy_help(void)
{
AudiodevListHead head;
AudiodevListEntry *e;
printf("Environment variable based configuration deprecated.\n");
printf("Please use the new -audiodev option.\n");
head = audio_handle_legacy_opts();
printf("\nEquivalent -audiodev to your current environment variables:\n");
if (!getenv("QEMU_AUDIO_DRV")) {
printf("(Since you didn't specify QEMU_AUDIO_DRV, I'll list all "
"possibilities)\n");
}
QSIMPLEQ_FOREACH(e, &head, next) {
Visitor *v;
Audiodev *dev = e->dev;
printf("-audiodev ");
v = legacy_visitor_new();
visit_type_Audiodev(v, NULL, &dev, &error_abort);
visit_free(v);
printf("\n");
}
audio_free_audiodev_list(&head);
}
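To make the mapping concrete (hypothetical values, not output taken from the patch): with QEMU_AUDIO_DRV=wav, QEMU_WAV_PATH=/tmp/out.wav and QEMU_WAV_DAC_FIXED_CHANNELS=1 in the environment, audio_legacy_help() above would print an equivalent option along the lines of:

    -audiodev id=wav,driver=wav,out.channels=1,path=/tmp/out.wav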


@@ -37,12 +37,11 @@
 #endif
 static void glue(audio_init_nb_voices_, TYPE)(AudioState *s,
-                                              struct audio_driver *drv, int min_voices)
+                                              struct audio_driver *drv)
 {
     int max_voices = glue (drv->max_voices_, TYPE);
     size_t voice_size = glue(drv->voice_size_, TYPE);
-    glue (s->nb_hw_voices_, TYPE) = glue(audio_get_pdo_, TYPE)(s->dev)->voices;
     if (glue (s->nb_hw_voices_, TYPE) > max_voices) {
         if (!max_voices) {
 #ifdef DAC
@@ -57,12 +56,6 @@ static void glue(audio_init_nb_voices_, TYPE)(AudioState *s,
         glue (s->nb_hw_voices_, TYPE) = max_voices;
     }
-    if (glue (s->nb_hw_voices_, TYPE) < min_voices) {
-        dolog ("Bogus number of " NAME " voices %d, setting to %d\n",
-               glue (s->nb_hw_voices_, TYPE),
-               min_voices);
-    }
     if (audio_bug(__func__, !voice_size && max_voices)) {
         dolog ("drv=`%s' voice_size=0 max_voices=%d\n",
                drv->name, max_voices);


@@ -644,7 +644,7 @@ static void coreaudio_enable_out(HWVoiceOut *hw, bool enable)
     update_device_playback_state(core);
 }
-static void *coreaudio_audio_init(Audiodev *dev, Error **errp)
+static void *coreaudio_audio_init(Audiodev *dev)
 {
     return dev;
 }
@@ -673,6 +673,7 @@ static struct audio_driver coreaudio_audio_driver = {
     .init = coreaudio_audio_init,
     .fini = coreaudio_audio_fini,
     .pcm_ops = &coreaudio_pcm_ops,
+    .can_be_default = 1,
     .max_voices_out = 1,
     .max_voices_in = 0,
     .voice_size_out = sizeof (coreaudioVoiceOut),


@@ -395,7 +395,7 @@ dbus_enable_in(HWVoiceIn *hw, bool enable)
 }
 static void *
-dbus_audio_init(Audiodev *dev, Error **errp)
+dbus_audio_init(Audiodev *dev)
 {
     DBusAudio *da = g_new0(DBusAudio, 1);
@@ -676,6 +676,7 @@ static struct audio_driver dbus_audio_driver = {
     .fini = dbus_audio_fini,
     .set_dbus_server = dbus_audio_set_server,
     .pcm_ops = &dbus_pcm_ops,
+    .can_be_default = 1,
     .max_voices_out = INT_MAX,
     .max_voices_in = INT_MAX,
     .voice_size_out = sizeof(DBusVoiceOut),


@@ -619,7 +619,7 @@ static void dsound_audio_fini (void *opaque)
     g_free(s);
 }
-static void *dsound_audio_init(Audiodev *dev, Error **errp)
+static void *dsound_audio_init(Audiodev *dev)
 {
     int err;
     HRESULT hr;
@@ -721,6 +721,7 @@ static struct audio_driver dsound_audio_driver = {
     .init = dsound_audio_init,
     .fini = dsound_audio_fini,
     .pcm_ops = &dsound_pcm_ops,
+    .can_be_default = 1,
     .max_voices_out = INT_MAX,
     .max_voices_in = 1,
     .voice_size_out = sizeof (DSoundVoiceOut),


@@ -645,7 +645,7 @@ static int qjack_thread_creator(jack_native_thread_t *thread,
 }
 #endif
-static void *qjack_init(Audiodev *dev, Error **errp)
+static void *qjack_init(Audiodev *dev)
 {
     assert(dev->driver == AUDIODEV_DRIVER_JACK);
     return dev;
@@ -676,6 +676,7 @@ static struct audio_driver jack_driver = {
     .init = qjack_init,
     .fini = qjack_fini,
     .pcm_ops = &jack_pcm_ops,
+    .can_be_default = 1,
     .max_voices_out = INT_MAX,
     .max_voices_in = INT_MAX,
     .voice_size_out = sizeof(QJackOut),


@@ -1,6 +1,7 @@
 system_ss.add([spice_headers, files('audio.c')])
 system_ss.add(files(
   'audio-hmp-cmds.c',
+  'audio_legacy.c',
   'mixeng.c',
   'noaudio.c',
   'wavaudio.c',


@@ -104,7 +104,7 @@ static void no_enable_in(HWVoiceIn *hw, bool enable)
     }
 }
-static void *no_audio_init(Audiodev *dev, Error **errp)
+static void *no_audio_init(Audiodev *dev)
 {
     return &no_audio_init;
 }
@@ -135,6 +135,7 @@ static struct audio_driver no_audio_driver = {
     .init = no_audio_init,
     .fini = no_audio_fini,
     .pcm_ops = &no_pcm_ops,
+    .can_be_default = 1,
     .max_voices_out = INT_MAX,
     .max_voices_in = INT_MAX,
     .voice_size_out = sizeof (NoVoiceOut),


@@ -28,7 +28,6 @@
 #include "qemu/main-loop.h"
 #include "qemu/module.h"
 #include "qemu/host-utils.h"
-#include "qapi/error.h"
 #include "audio.h"
 #include "trace.h"
@@ -549,6 +548,7 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
                    hw->size_emul);
         hw->buf_emul = NULL;
     } else {
+        int err;
         int trig = 0;
         if (ioctl (fd, SNDCTL_DSP_SETTRIGGER, &trig) < 0) {
             oss_logerr (errno, "SNDCTL_DSP_SETTRIGGER 0 failed\n");
@@ -736,7 +736,7 @@ static void oss_init_per_direction(AudiodevOssPerDirectionOptions *opdo)
     }
 }
-static void *oss_audio_init(Audiodev *dev, Error **errp)
+static void *oss_audio_init(Audiodev *dev)
 {
     AudiodevOssOptions *oopts;
     assert(dev->driver == AUDIODEV_DRIVER_OSS);
@@ -745,12 +745,8 @@ static void *oss_audio_init(Audiodev *dev, Error **errp)
     oss_init_per_direction(oopts->in);
     oss_init_per_direction(oopts->out);
-    if (access(oopts->in->dev ?: "/dev/dsp", R_OK | W_OK) < 0) {
-        error_setg_errno(errp, errno, "%s not accessible", oopts->in->dev ?: "/dev/dsp");
-        return NULL;
-    }
-    if (access(oopts->out->dev ?: "/dev/dsp", R_OK | W_OK) < 0) {
-        error_setg_errno(errp, errno, "%s not accessible", oopts->out->dev ?: "/dev/dsp");
+    if (access(oopts->in->dev ?: "/dev/dsp", R_OK | W_OK) < 0 ||
+        access(oopts->out->dev ?: "/dev/dsp", R_OK | W_OK) < 0) {
         return NULL;
     }
     return dev;
@@ -783,6 +779,7 @@ static struct audio_driver oss_audio_driver = {
     .init = oss_audio_init,
     .fini = oss_audio_fini,
     .pcm_ops = &oss_pcm_ops,
+    .can_be_default = 1,
     .max_voices_out = INT_MAX,
     .max_voices_in = INT_MAX,
     .voice_size_out = sizeof (OSSVoiceOut),


@@ -3,7 +3,7 @@
 #include "qemu/osdep.h"
 #include "qemu/module.h"
 #include "audio.h"
-#include "qapi/error.h"
+#include "qapi/opts-visitor.h"
 #include <pulse/pulseaudio.h>
@@ -818,7 +818,7 @@ fail:
     return NULL;
 }
-static void *qpa_audio_init(Audiodev *dev, Error **errp)
+static void *qpa_audio_init(Audiodev *dev)
 {
     paaudio *g;
     AudiodevPaOptions *popts = &dev->u.pa;
@@ -834,12 +834,10 @@ static void *qpa_audio_init(Audiodev *dev, Error **errp)
         runtime = getenv("XDG_RUNTIME_DIR");
         if (!runtime) {
-            error_setg(errp, "XDG_RUNTIME_DIR not set");
             return NULL;
         }
         snprintf(pidfile, sizeof(pidfile), "%s/pulse/pid", runtime);
         if (stat(pidfile, &st) != 0) {
-            error_setg_errno(errp, errno, "could not stat pidfile %s", pidfile);
             return NULL;
         }
     }
@@ -869,7 +867,6 @@ static void *qpa_audio_init(Audiodev *dev, Error **errp)
     }
     if (!g->conn) {
         g_free(g);
-        error_setg(errp, "could not connect to PulseAudio server");
         return NULL;
     }
@@ -931,6 +928,7 @@ static struct audio_driver pa_audio_driver = {
     .init = qpa_audio_init,
     .fini = qpa_audio_fini,
     .pcm_ops = &qpa_pcm_ops,
+    .can_be_default = 1,
     .max_voices_out = INT_MAX,
     .max_voices_in = INT_MAX,
     .voice_size_out = sizeof (PAVoiceOut),


@@ -13,7 +13,6 @@
 #include "audio.h"
 #include <errno.h>
 #include "qemu/error-report.h"
-#include "qapi/error.h"
 #include <spa/param/audio/format-utils.h>
 #include <spa/utils/ringbuffer.h>
 #include <spa/utils/result.h>
@@ -737,7 +736,7 @@ static const struct pw_core_events core_events = {
 };
 static void *
-qpw_audio_init(Audiodev *dev, Error **errp)
+qpw_audio_init(Audiodev *dev)
 {
     g_autofree pwaudio *pw = g_new0(pwaudio, 1);
@@ -749,19 +748,19 @@ qpw_audio_init(Audiodev *dev, Error **errp)
     pw->dev = dev;
     pw->thread_loop = pw_thread_loop_new("PipeWire thread loop", NULL);
     if (pw->thread_loop == NULL) {
-        error_setg_errno(errp, errno, "Could not create PipeWire loop");
+        error_report("Could not create PipeWire loop: %s", g_strerror(errno));
         goto fail;
     }
     pw->context =
         pw_context_new(pw_thread_loop_get_loop(pw->thread_loop), NULL, 0);
     if (pw->context == NULL) {
-        error_setg_errno(errp, errno, "Could not create PipeWire context");
+        error_report("Could not create PipeWire context: %s", g_strerror(errno));
         goto fail;
     }
     if (pw_thread_loop_start(pw->thread_loop) < 0) {
-        error_setg_errno(errp, errno, "Could not start PipeWire loop");
+        error_report("Could not start PipeWire loop: %s", g_strerror(errno));
         goto fail;
     }
@@ -770,13 +769,13 @@ qpw_audio_init(Audiodev *dev, Error **errp)
     pw->core = pw_context_connect(pw->context, NULL, 0);
     if (pw->core == NULL) {
         pw_thread_loop_unlock(pw->thread_loop);
-        goto fail_error;
+        goto fail;
     }
     if (pw_core_add_listener(pw->core, &pw->core_listener,
                              &core_events, pw) < 0) {
         pw_thread_loop_unlock(pw->thread_loop);
-        goto fail_error;
+        goto fail;
     }
     if (wait_resync(pw) < 0) {
         pw_thread_loop_unlock(pw->thread_loop);
@@ -786,9 +785,8 @@ qpw_audio_init(Audiodev *dev, Error **errp)
     return g_steal_pointer(&pw);
-fail_error:
-    error_setg(errp, "Failed to initialize PW context");
 fail:
+    AUD_log(AUDIO_CAP, "Failed to initialize PW context");
     if (pw->thread_loop) {
         pw_thread_loop_stop(pw->thread_loop);
     }
@@ -843,6 +841,7 @@ static struct audio_driver pw_audio_driver = {
     .init = qpw_audio_init,
     .fini = qpw_audio_fini,
     .pcm_ops = &qpw_pcm_ops,
+    .can_be_default = 1,
     .max_voices_out = INT_MAX,
     .max_voices_in = INT_MAX,
     .voice_size_out = sizeof(PWVoiceOut),


@@ -26,7 +26,6 @@
 #include <SDL.h>
 #include <SDL_thread.h>
 #include "qemu/module.h"
-#include "qapi/error.h"
 #include "audio.h"
 #ifndef _WIN32
@@ -450,10 +449,10 @@ static void sdl_enable_in(HWVoiceIn *hw, bool enable)
     SDL_PauseAudioDevice(sdl->devid, !enable);
 }
-static void *sdl_audio_init(Audiodev *dev, Error **errp)
+static void *sdl_audio_init(Audiodev *dev)
 {
     if (SDL_InitSubSystem (SDL_INIT_AUDIO)) {
-        error_setg(errp, "SDL failed to initialize audio subsystem");
+        sdl_logerr ("SDL failed to initialize audio subsystem\n");
         return NULL;
     }
@@ -494,6 +493,7 @@ static struct audio_driver sdl_audio_driver = {
     .init = sdl_audio_init,
     .fini = sdl_audio_fini,
     .pcm_ops = &sdl_pcm_ops,
+    .can_be_default = 1,
     .max_voices_out = INT_MAX,
     .max_voices_in = INT_MAX,
     .voice_size_out = sizeof(SDLVoiceOut),


@@ -518,7 +518,7 @@ static void sndio_fini_in(HWVoiceIn *hw)
     sndio_fini(self);
 }
-static void *sndio_audio_init(Audiodev *dev, Error **errp)
+static void *sndio_audio_init(Audiodev *dev)
 {
     assert(dev->driver == AUDIODEV_DRIVER_SNDIO);
     return dev;
@@ -550,6 +550,7 @@ static struct audio_driver sndio_audio_driver = {
     .init = sndio_audio_init,
     .fini = sndio_audio_fini,
     .pcm_ops = &sndio_pcm_ops,
+    .can_be_default = 1,
     .max_voices_out = INT_MAX,
     .max_voices_in = INT_MAX,
     .voice_size_out = sizeof(SndioVoice),


@@ -22,7 +22,6 @@
 #include "qemu/module.h"
 #include "qemu/error-report.h"
 #include "qemu/timer.h"
-#include "qapi/error.h"
 #include "ui/qemu-spice.h"
 #define AUDIO_CAP "spice"
@@ -72,13 +71,11 @@ static const SpiceRecordInterface record_sif = {
     .base.minor_version = SPICE_INTERFACE_RECORD_MINOR,
 };
-static void *spice_audio_init(Audiodev *dev, Error **errp)
+static void *spice_audio_init(Audiodev *dev)
 {
     if (!using_spice) {
-        error_setg(errp, "Cannot use spice audio without -spice");
         return NULL;
     }
     return &spice_audio_init;
 }


@@ -97,10 +97,6 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
         dolog ("WAVE files can not handle 32bit formats\n");
         return -1;
-    case AUDIO_FORMAT_F32:
-        dolog("WAVE files can not handle float formats\n");
-        return -1;
     default:
         abort();
     }
@@ -186,7 +182,7 @@ static void wav_enable_out(HWVoiceOut *hw, bool enable)
     }
 }
-static void *wav_audio_init(Audiodev *dev, Error **errp)
+static void *wav_audio_init(Audiodev *dev)
 {
     assert(dev->driver == AUDIODEV_DRIVER_WAV);
     return dev;
@@ -212,6 +208,7 @@ static struct audio_driver wav_audio_driver = {
     .init = wav_audio_init,
     .fini = wav_audio_fini,
     .pcm_ops = &wav_pcm_ops,
+    .can_be_default = 0,
     .max_voices_out = 1,
     .max_voices_in = 0,
     .voice_size_out = sizeof (WAVVoiceOut),


@@ -426,7 +426,8 @@ dbus_vmstate_complete(UserCreatable *uc, Error **errp)
         return;
     }
-    if (vmstate_register_any(VMSTATE_IF(self), &dbus_vmstate, self) < 0) {
+    if (vmstate_register(VMSTATE_IF(self), VMSTATE_INSTANCE_ID_ANY,
+                         &dbus_vmstate, self) < 0) {
         error_setg(errp, "Failed to register vmstate");
     }
 }


@@ -534,8 +534,11 @@ static int tpm_emulator_block_migration(TPMEmulator *tpm_emu)
         error_setg(&tpm_emu->migration_blocker,
                    "Migration disabled: TPM emulator does not support "
                    "migration");
-        if (migrate_add_blocker(&tpm_emu->migration_blocker, &err) < 0) {
+        if (migrate_add_blocker(tpm_emu->migration_blocker, &err) < 0) {
             error_report_err(err);
+            error_free(tpm_emu->migration_blocker);
+            tpm_emu->migration_blocker = NULL;
             return -1;
         }
     }
@@ -975,7 +978,8 @@ static void tpm_emulator_inst_init(Object *obj)
     qemu_add_vm_change_state_handler(tpm_emulator_vm_state_change,
                                      tpm_emu);
-    vmstate_register_any(NULL, &vmstate_tpm_emulator, obj);
+    vmstate_register(NULL, VMSTATE_INSTANCE_ID_ANY,
+                     &vmstate_tpm_emulator, obj);
 }
 /*
@@ -1012,7 +1016,10 @@ static void tpm_emulator_inst_finalize(Object *obj)
     qapi_free_TPMEmulatorOptions(tpm_emu->options);
-    migrate_del_blocker(&tpm_emu->migration_blocker);
+    if (tpm_emu->migration_blocker) {
+        migrate_del_blocker(tpm_emu->migration_blocker);
+        error_free(tpm_emu->migration_blocker);
+    }
     tpm_sized_buffer_reset(&state_blobs->volatil);
     tpm_sized_buffer_reset(&state_blobs->permanent);

block.c

@@ -279,9 +279,8 @@ bool bdrv_is_read_only(BlockDriverState *bs)
return !(bs->open_flags & BDRV_O_RDWR); return !(bs->open_flags & BDRV_O_RDWR);
} }
static int GRAPH_RDLOCK static int bdrv_can_set_read_only(BlockDriverState *bs, bool read_only,
bdrv_can_set_read_only(BlockDriverState *bs, bool read_only, bool ignore_allow_rdw, Error **errp)
bool ignore_allow_rdw, Error **errp)
{ {
IO_CODE(); IO_CODE();
@@ -372,9 +371,8 @@ char *bdrv_get_full_backing_filename_from_filename(const char *backed,
* setting @errp. In all other cases, NULL will only be returned with * setting @errp. In all other cases, NULL will only be returned with
* @errp set. * @errp set.
*/ */
static char * GRAPH_RDLOCK static char *bdrv_make_absolute_filename(BlockDriverState *relative_to,
bdrv_make_absolute_filename(BlockDriverState *relative_to, const char *filename, Error **errp)
const char *filename, Error **errp)
{ {
char *dir, *full_name; char *dir, *full_name;
@@ -820,17 +818,12 @@ int bdrv_probe_blocksizes(BlockDriverState *bs, BlockSizes *bsz)
int bdrv_probe_geometry(BlockDriverState *bs, HDGeometry *geo) int bdrv_probe_geometry(BlockDriverState *bs, HDGeometry *geo)
{ {
BlockDriver *drv = bs->drv; BlockDriver *drv = bs->drv;
BlockDriverState *filtered; BlockDriverState *filtered = bdrv_filter_bs(bs);
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (drv && drv->bdrv_probe_geometry) { if (drv && drv->bdrv_probe_geometry) {
return drv->bdrv_probe_geometry(bs, geo); return drv->bdrv_probe_geometry(bs, geo);
} } else if (filtered) {
filtered = bdrv_filter_bs(bs);
if (filtered) {
return bdrv_probe_geometry(filtered, geo); return bdrv_probe_geometry(filtered, geo);
} }
@@ -1199,19 +1192,19 @@ static char *bdrv_child_get_parent_desc(BdrvChild *c)
return g_strdup_printf("node '%s'", bdrv_get_node_name(parent)); return g_strdup_printf("node '%s'", bdrv_get_node_name(parent));
} }
static void GRAPH_RDLOCK bdrv_child_cb_drained_begin(BdrvChild *child) static void bdrv_child_cb_drained_begin(BdrvChild *child)
{ {
BlockDriverState *bs = child->opaque; BlockDriverState *bs = child->opaque;
bdrv_do_drained_begin_quiesce(bs, NULL); bdrv_do_drained_begin_quiesce(bs, NULL);
} }
static bool GRAPH_RDLOCK bdrv_child_cb_drained_poll(BdrvChild *child) static bool bdrv_child_cb_drained_poll(BdrvChild *child)
{ {
BlockDriverState *bs = child->opaque; BlockDriverState *bs = child->opaque;
return bdrv_drain_poll(bs, NULL, false); return bdrv_drain_poll(bs, NULL, false);
} }
static void GRAPH_RDLOCK bdrv_child_cb_drained_end(BdrvChild *child) static void bdrv_child_cb_drained_end(BdrvChild *child)
{ {
BlockDriverState *bs = child->opaque; BlockDriverState *bs = child->opaque;
bdrv_drained_end(bs); bdrv_drained_end(bs);
@@ -1257,7 +1250,7 @@ static void bdrv_temp_snapshot_options(int *child_flags, QDict *child_options,
*child_flags &= ~BDRV_O_NATIVE_AIO; *child_flags &= ~BDRV_O_NATIVE_AIO;
} }
static void GRAPH_WRLOCK bdrv_backing_attach(BdrvChild *c) static void bdrv_backing_attach(BdrvChild *c)
{ {
BlockDriverState *parent = c->opaque; BlockDriverState *parent = c->opaque;
BlockDriverState *backing_hd = c->bs; BlockDriverState *backing_hd = c->bs;
@@ -1707,14 +1700,12 @@ bdrv_open_driver(BlockDriverState *bs, BlockDriver *drv, const char *node_name,
return 0; return 0;
open_failed: open_failed:
bs->drv = NULL; bs->drv = NULL;
bdrv_graph_wrlock(NULL);
if (bs->file != NULL) { if (bs->file != NULL) {
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, bs->file); bdrv_unref_child(bs, bs->file);
bdrv_graph_wrunlock();
assert(!bs->file); assert(!bs->file);
} }
bdrv_graph_wrunlock();
g_free(bs->opaque); g_free(bs->opaque);
bs->opaque = NULL; bs->opaque = NULL;
return ret; return ret;
@@ -1856,12 +1847,9 @@ static int bdrv_open_common(BlockDriverState *bs, BlockBackend *file,
Error *local_err = NULL; Error *local_err = NULL;
bool ro; bool ro;
GLOBAL_STATE_CODE();
bdrv_graph_rdlock_main_loop();
assert(bs->file == NULL); assert(bs->file == NULL);
assert(options != NULL && bs->options != options); assert(options != NULL && bs->options != options);
bdrv_graph_rdunlock_main_loop(); GLOBAL_STATE_CODE();
opts = qemu_opts_create(&bdrv_runtime_opts, NULL, 0, &error_abort); opts = qemu_opts_create(&bdrv_runtime_opts, NULL, 0, &error_abort);
if (!qemu_opts_absorb_qdict(opts, options, errp)) { if (!qemu_opts_absorb_qdict(opts, options, errp)) {
@@ -1886,10 +1874,7 @@ static int bdrv_open_common(BlockDriverState *bs, BlockBackend *file,
} }
if (file != NULL) { if (file != NULL) {
bdrv_graph_rdlock_main_loop();
bdrv_refresh_filename(blk_bs(file)); bdrv_refresh_filename(blk_bs(file));
bdrv_graph_rdunlock_main_loop();
filename = blk_bs(file)->filename; filename = blk_bs(file)->filename;
} else { } else {
/* /*
@@ -1916,9 +1901,7 @@ static int bdrv_open_common(BlockDriverState *bs, BlockBackend *file,
if (use_bdrv_whitelist && !bdrv_is_whitelisted(drv, ro)) { if (use_bdrv_whitelist && !bdrv_is_whitelisted(drv, ro)) {
if (!ro && bdrv_is_whitelisted(drv, true)) { if (!ro && bdrv_is_whitelisted(drv, true)) {
bdrv_graph_rdlock_main_loop();
ret = bdrv_apply_auto_read_only(bs, NULL, NULL); ret = bdrv_apply_auto_read_only(bs, NULL, NULL);
bdrv_graph_rdunlock_main_loop();
} else { } else {
ret = -ENOTSUP; ret = -ENOTSUP;
} }
@@ -2983,8 +2966,6 @@ static void bdrv_child_free(BdrvChild *child)
{ {
assert(!child->bs); assert(!child->bs);
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
assert(!child->next.le_prev); /* not in children list */ assert(!child->next.le_prev); /* not in children list */
g_free(child->name); g_free(child->name);
@@ -3091,19 +3072,18 @@ bdrv_attach_child_common(BlockDriverState *child_bs,
&local_err); &local_err);
if (ret < 0 && child_class->change_aio_ctx) { if (ret < 0 && child_class->change_aio_ctx) {
Transaction *aio_ctx_tran = tran_new(); Transaction *tran = tran_new();
GHashTable *visited = g_hash_table_new(NULL, NULL); GHashTable *visited = g_hash_table_new(NULL, NULL);
bool ret_child; bool ret_child;
g_hash_table_add(visited, new_child); g_hash_table_add(visited, new_child);
ret_child = child_class->change_aio_ctx(new_child, child_ctx, ret_child = child_class->change_aio_ctx(new_child, child_ctx,
visited, aio_ctx_tran, visited, tran, NULL);
NULL);
if (ret_child == true) { if (ret_child == true) {
error_free(local_err); error_free(local_err);
ret = 0; ret = 0;
} }
tran_finalize(aio_ctx_tran, ret_child == true ? 0 : -1); tran_finalize(tran, ret_child == true ? 0 : -1);
g_hash_table_destroy(visited); g_hash_table_destroy(visited);
} }
@@ -3219,6 +3199,8 @@ BdrvChild *bdrv_root_attach_child(BlockDriverState *child_bs,
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
bdrv_graph_wrlock(child_bs);
child = bdrv_attach_child_common(child_bs, child_name, child_class, child = bdrv_attach_child_common(child_bs, child_name, child_class,
child_role, perm, shared_perm, opaque, child_role, perm, shared_perm, opaque,
tran, errp); tran, errp);
@@ -3231,8 +3213,9 @@ BdrvChild *bdrv_root_attach_child(BlockDriverState *child_bs,
out: out:
tran_finalize(tran, ret); tran_finalize(tran, ret);
bdrv_graph_wrunlock();
bdrv_schedule_unref(child_bs); bdrv_unref(child_bs);
return ret < 0 ? NULL : child; return ret < 0 ? NULL : child;
} }
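At the tail of bdrv_root_attach_child() the columns diverge in how the extra reference on child_bs is dropped: the variant that wraps the attach in bdrv_graph_wrlock()/bdrv_graph_wrunlock() defers the unref with bdrv_schedule_unref(), while the other simply calls bdrv_unref(). A sketch of the two tails as they appear above; the rationale in the comment is an assumption, not stated in the diff.

    /* locked variant */
    out:
        tran_finalize(tran, ret);
        bdrv_graph_wrunlock();
        bdrv_schedule_unref(child_bs);  /* presumably defers a possible node
                                         * deletion to a safe point */
        return ret < 0 ? NULL : child;

    /* unlocked variant */
    out:
        tran_finalize(tran, ret);
        bdrv_unref(child_bs);
        return ret < 0 ? NULL : child;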
@@ -3537,7 +3520,19 @@ out:
* *
* If a backing child is already present (i.e. we're detaching a node), that * If a backing child is already present (i.e. we're detaching a node), that
* child node must be drained. * child node must be drained.
*
* After calling this function, the transaction @tran may only be completed
* while holding a writer lock for the graph.
*/ */
static int GRAPH_WRLOCK
bdrv_set_backing_noperm(BlockDriverState *bs,
BlockDriverState *backing_hd,
Transaction *tran, Error **errp)
{
GLOBAL_STATE_CODE();
return bdrv_set_file_or_backing_noperm(bs, backing_hd, true, tran, errp);
}
int bdrv_set_backing_hd_drained(BlockDriverState *bs, int bdrv_set_backing_hd_drained(BlockDriverState *bs,
BlockDriverState *backing_hd, BlockDriverState *backing_hd,
Error **errp) Error **errp)
@@ -3550,8 +3545,9 @@ int bdrv_set_backing_hd_drained(BlockDriverState *bs,
if (bs->backing) { if (bs->backing) {
assert(bs->backing->bs->quiesce_counter > 0); assert(bs->backing->bs->quiesce_counter > 0);
} }
bdrv_graph_wrlock(backing_hd);
ret = bdrv_set_file_or_backing_noperm(bs, backing_hd, true, tran, errp); ret = bdrv_set_backing_noperm(bs, backing_hd, tran, errp);
if (ret < 0) { if (ret < 0) {
goto out; goto out;
} }
@@ -3559,25 +3555,20 @@ int bdrv_set_backing_hd_drained(BlockDriverState *bs,
ret = bdrv_refresh_perms(bs, tran, errp); ret = bdrv_refresh_perms(bs, tran, errp);
out: out:
tran_finalize(tran, ret); tran_finalize(tran, ret);
bdrv_graph_wrunlock();
return ret; return ret;
} }
int bdrv_set_backing_hd(BlockDriverState *bs, BlockDriverState *backing_hd, int bdrv_set_backing_hd(BlockDriverState *bs, BlockDriverState *backing_hd,
Error **errp) Error **errp)
{ {
BlockDriverState *drain_bs; BlockDriverState *drain_bs = bs->backing ? bs->backing->bs : bs;
int ret; int ret;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
bdrv_graph_rdlock_main_loop();
drain_bs = bs->backing ? bs->backing->bs : bs;
bdrv_graph_rdunlock_main_loop();
bdrv_ref(drain_bs); bdrv_ref(drain_bs);
bdrv_drained_begin(drain_bs); bdrv_drained_begin(drain_bs);
bdrv_graph_wrlock(backing_hd);
ret = bdrv_set_backing_hd_drained(bs, backing_hd, errp); ret = bdrv_set_backing_hd_drained(bs, backing_hd, errp);
bdrv_graph_wrunlock();
bdrv_drained_end(drain_bs); bdrv_drained_end(drain_bs);
bdrv_unref(drain_bs); bdrv_unref(drain_bs);
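The left-hand column of bdrv_set_backing_hd() shows the ordering discipline used throughout these hunks: look up the node to drain under the read lock, reference and drain it, take the writer lock only around the actual graph change, then unwind in reverse order. Reassembled as a sketch, with the trailing return assumed from context:

    int bdrv_set_backing_hd(BlockDriverState *bs, BlockDriverState *backing_hd,
                            Error **errp)
    {
        BlockDriverState *drain_bs;
        int ret;

        GLOBAL_STATE_CODE();

        bdrv_graph_rdlock_main_loop();
        drain_bs = bs->backing ? bs->backing->bs : bs;
        bdrv_graph_rdunlock_main_loop();

        bdrv_ref(drain_bs);              /* keep the node alive while drained */
        bdrv_drained_begin(drain_bs);
        bdrv_graph_wrlock(backing_hd);   /* graph change under the writer lock */
        ret = bdrv_set_backing_hd_drained(bs, backing_hd, errp);
        bdrv_graph_wrunlock();
        bdrv_drained_end(drain_bs);
        bdrv_unref(drain_bs);

        return ret;
    }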
@@ -3611,7 +3602,6 @@ int bdrv_open_backing_file(BlockDriverState *bs, QDict *parent_options,
Error *local_err = NULL; Error *local_err = NULL;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (bs->backing != NULL) { if (bs->backing != NULL) {
goto free_exit; goto free_exit;
@@ -4323,8 +4313,8 @@ static int bdrv_reset_options_allowed(BlockDriverState *bs,
/* /*
* Returns true if @child can be reached recursively from @bs * Returns true if @child can be reached recursively from @bs
*/ */
static bool GRAPH_RDLOCK static bool bdrv_recurse_has_child(BlockDriverState *bs,
bdrv_recurse_has_child(BlockDriverState *bs, BlockDriverState *child) BlockDriverState *child)
{ {
BdrvChild *c; BdrvChild *c;
@@ -4365,12 +4355,15 @@ bdrv_recurse_has_child(BlockDriverState *bs, BlockDriverState *child)
* *
* To be called with bs->aio_context locked. * To be called with bs->aio_context locked.
*/ */
static BlockReopenQueue * GRAPH_RDLOCK static BlockReopenQueue *bdrv_reopen_queue_child(BlockReopenQueue *bs_queue,
bdrv_reopen_queue_child(BlockReopenQueue *bs_queue, BlockDriverState *bs, BlockDriverState *bs,
QDict *options, const BdrvChildClass *klass, QDict *options,
BdrvChildRole role, bool parent_is_format, const BdrvChildClass *klass,
QDict *parent_options, int parent_flags, BdrvChildRole role,
bool keep_old_opts) bool parent_is_format,
QDict *parent_options,
int parent_flags,
bool keep_old_opts)
{ {
assert(bs != NULL); assert(bs != NULL);
@@ -4382,11 +4375,6 @@ bdrv_reopen_queue_child(BlockReopenQueue *bs_queue, BlockDriverState *bs,
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
/*
* Strictly speaking, draining is illegal under GRAPH_RDLOCK. We know that
* we've been called with bdrv_graph_rdlock_main_loop(), though, so it's ok
* in practice.
*/
bdrv_drained_begin(bs); bdrv_drained_begin(bs);
if (bs_queue == NULL) { if (bs_queue == NULL) {
@@ -4528,7 +4516,6 @@ BlockReopenQueue *bdrv_reopen_queue(BlockReopenQueue *bs_queue,
QDict *options, bool keep_old_opts) QDict *options, bool keep_old_opts)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
return bdrv_reopen_queue_child(bs_queue, bs, options, NULL, 0, false, return bdrv_reopen_queue_child(bs_queue, bs, options, NULL, 0, false,
NULL, 0, keep_old_opts); NULL, 0, keep_old_opts);
@@ -4748,20 +4735,18 @@ int bdrv_reopen_set_read_only(BlockDriverState *bs, bool read_only,
* Callers must make sure that their AioContext locking is still correct after * Callers must make sure that their AioContext locking is still correct after
* this. * this.
*/ */
static int GRAPH_UNLOCKED static int bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state, bool is_backing, Transaction *tran,
bool is_backing, Transaction *tran, Error **errp)
Error **errp)
{ {
BlockDriverState *bs = reopen_state->bs; BlockDriverState *bs = reopen_state->bs;
BlockDriverState *new_child_bs; BlockDriverState *new_child_bs;
BlockDriverState *old_child_bs; BlockDriverState *old_child_bs = is_backing ? child_bs(bs->backing) :
child_bs(bs->file);
const char *child_name = is_backing ? "backing" : "file"; const char *child_name = is_backing ? "backing" : "file";
QObject *value; QObject *value;
const char *str; const char *str;
AioContext *ctx, *old_ctx; AioContext *ctx, *old_ctx;
bool has_child;
int ret; int ret;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
@@ -4771,8 +4756,6 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
return 0; return 0;
} }
bdrv_graph_rdlock_main_loop();
switch (qobject_type(value)) { switch (qobject_type(value)) {
case QTYPE_QNULL: case QTYPE_QNULL:
assert(is_backing); /* The 'file' option does not allow a null value */ assert(is_backing); /* The 'file' option does not allow a null value */
@@ -4782,16 +4765,11 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
str = qstring_get_str(qobject_to(QString, value)); str = qstring_get_str(qobject_to(QString, value));
new_child_bs = bdrv_lookup_bs(NULL, str, errp); new_child_bs = bdrv_lookup_bs(NULL, str, errp);
if (new_child_bs == NULL) { if (new_child_bs == NULL) {
ret = -EINVAL; return -EINVAL;
goto out_rdlock; } else if (bdrv_recurse_has_child(new_child_bs, bs)) {
}
has_child = bdrv_recurse_has_child(new_child_bs, bs);
if (has_child) {
error_setg(errp, "Making '%s' a %s child of '%s' would create a " error_setg(errp, "Making '%s' a %s child of '%s' would create a "
"cycle", str, child_name, bs->node_name); "cycle", str, child_name, bs->node_name);
ret = -EINVAL; return -EINVAL;
goto out_rdlock;
} }
break; break;
default: default:
@@ -4802,23 +4780,19 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
g_assert_not_reached(); g_assert_not_reached();
} }
old_child_bs = is_backing ? child_bs(bs->backing) : child_bs(bs->file);
if (old_child_bs == new_child_bs) { if (old_child_bs == new_child_bs) {
ret = 0; return 0;
goto out_rdlock;
} }
if (old_child_bs) { if (old_child_bs) {
if (bdrv_skip_implicit_filters(old_child_bs) == new_child_bs) { if (bdrv_skip_implicit_filters(old_child_bs) == new_child_bs) {
ret = 0; return 0;
goto out_rdlock;
} }
if (old_child_bs->implicit) { if (old_child_bs->implicit) {
error_setg(errp, "Cannot replace implicit %s child of %s", error_setg(errp, "Cannot replace implicit %s child of %s",
child_name, bs->node_name); child_name, bs->node_name);
ret = -EPERM; return -EPERM;
goto out_rdlock;
} }
} }
@@ -4829,8 +4803,7 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
*/ */
error_setg(errp, "'%s' is a %s filter node that does not support a " error_setg(errp, "'%s' is a %s filter node that does not support a "
"%s child", bs->node_name, bs->drv->format_name, child_name); "%s child", bs->node_name, bs->drv->format_name, child_name);
ret = -EINVAL; return -EINVAL;
goto out_rdlock;
} }
if (is_backing) { if (is_backing) {
@@ -4851,7 +4824,6 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
aio_context_acquire(ctx); aio_context_acquire(ctx);
} }
bdrv_graph_rdunlock_main_loop();
bdrv_graph_wrlock(new_child_bs); bdrv_graph_wrlock(new_child_bs);
ret = bdrv_set_file_or_backing_noperm(bs, new_child_bs, is_backing, ret = bdrv_set_file_or_backing_noperm(bs, new_child_bs, is_backing,
@@ -4870,10 +4842,6 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
} }
return ret; return ret;
out_rdlock:
bdrv_graph_rdunlock_main_loop();
return ret;
} }
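Because one column of bdrv_reopen_parse_file_or_backing() takes bdrv_graph_rdlock_main_loop() explicitly, every early error exit in that column has to funnel through a label that drops the lock, while the other column can return directly. A trimmed sketch of that error-path shape, restricted to calls visible in the hunks above (the wrunlock pairing is assumed, not shown):

    bdrv_graph_rdlock_main_loop();

    new_child_bs = bdrv_lookup_bs(NULL, str, errp);
    if (new_child_bs == NULL) {
        ret = -EINVAL;
        goto out_rdlock;
    }
    if (bdrv_recurse_has_child(new_child_bs, bs)) {
        error_setg(errp, "Making '%s' a %s child of '%s' would create a cycle",
                   str, child_name, bs->node_name);
        ret = -EINVAL;
        goto out_rdlock;
    }

    /* success path: swap the read lock for the writer lock */
    bdrv_graph_rdunlock_main_loop();
    bdrv_graph_wrlock(new_child_bs);
    ret = bdrv_set_file_or_backing_noperm(bs, new_child_bs, is_backing,
                                          tran, errp);
    bdrv_graph_wrunlock();  /* assumed pairing, outside the visible hunks */
    return ret;

    out_rdlock:
        bdrv_graph_rdunlock_main_loop();
        return ret;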
/* /*
@@ -4897,9 +4865,9 @@ out_rdlock:
* After calling this function, the transaction @change_child_tran may only be * After calling this function, the transaction @change_child_tran may only be
* completed while holding a writer lock for the graph. * completed while holding a writer lock for the graph.
*/ */
static int GRAPH_UNLOCKED static int bdrv_reopen_prepare(BDRVReopenState *reopen_state,
bdrv_reopen_prepare(BDRVReopenState *reopen_state, BlockReopenQueue *queue, BlockReopenQueue *queue,
Transaction *change_child_tran, Error **errp) Transaction *change_child_tran, Error **errp)
{ {
int ret = -1; int ret = -1;
int old_flags; int old_flags;
@@ -4961,10 +4929,7 @@ bdrv_reopen_prepare(BDRVReopenState *reopen_state, BlockReopenQueue *queue,
* to r/w. Attempting to set to r/w may fail if either BDRV_O_ALLOW_RDWR is * to r/w. Attempting to set to r/w may fail if either BDRV_O_ALLOW_RDWR is
* not set, or if the BDS still has copy_on_read enabled */ * not set, or if the BDS still has copy_on_read enabled */
read_only = !(reopen_state->flags & BDRV_O_RDWR); read_only = !(reopen_state->flags & BDRV_O_RDWR);
bdrv_graph_rdlock_main_loop();
ret = bdrv_can_set_read_only(reopen_state->bs, read_only, true, &local_err); ret = bdrv_can_set_read_only(reopen_state->bs, read_only, true, &local_err);
bdrv_graph_rdunlock_main_loop();
if (local_err) { if (local_err) {
error_propagate(errp, local_err); error_propagate(errp, local_err);
goto error; goto error;
@@ -4987,9 +4952,7 @@ bdrv_reopen_prepare(BDRVReopenState *reopen_state, BlockReopenQueue *queue,
if (local_err != NULL) { if (local_err != NULL) {
error_propagate(errp, local_err); error_propagate(errp, local_err);
} else { } else {
bdrv_graph_rdlock_main_loop();
bdrv_refresh_filename(reopen_state->bs); bdrv_refresh_filename(reopen_state->bs);
bdrv_graph_rdunlock_main_loop();
error_setg(errp, "failed while preparing to reopen image '%s'", error_setg(errp, "failed while preparing to reopen image '%s'",
reopen_state->bs->filename); reopen_state->bs->filename);
} }
@@ -4998,11 +4961,9 @@ bdrv_reopen_prepare(BDRVReopenState *reopen_state, BlockReopenQueue *queue,
} else { } else {
/* It is currently mandatory to have a bdrv_reopen_prepare() /* It is currently mandatory to have a bdrv_reopen_prepare()
* handler for each supported drv. */ * handler for each supported drv. */
bdrv_graph_rdlock_main_loop();
error_setg(errp, "Block format '%s' used by node '%s' " error_setg(errp, "Block format '%s' used by node '%s' "
"does not support reopening files", drv->format_name, "does not support reopening files", drv->format_name,
bdrv_get_device_or_node_name(reopen_state->bs)); bdrv_get_device_or_node_name(reopen_state->bs));
bdrv_graph_rdunlock_main_loop();
ret = -1; ret = -1;
goto error; goto error;
} }
@@ -5014,16 +4975,13 @@ bdrv_reopen_prepare(BDRVReopenState *reopen_state, BlockReopenQueue *queue,
* file or if the image file has a backing file name as part of * file or if the image file has a backing file name as part of
* its metadata. Otherwise the 'backing' option can be omitted. * its metadata. Otherwise the 'backing' option can be omitted.
*/ */
bdrv_graph_rdlock_main_loop();
if (drv->supports_backing && reopen_state->backing_missing && if (drv->supports_backing && reopen_state->backing_missing &&
(reopen_state->bs->backing || reopen_state->bs->backing_file[0])) { (reopen_state->bs->backing || reopen_state->bs->backing_file[0])) {
error_setg(errp, "backing is missing for '%s'", error_setg(errp, "backing is missing for '%s'",
reopen_state->bs->node_name); reopen_state->bs->node_name);
bdrv_graph_rdunlock_main_loop();
ret = -EINVAL; ret = -EINVAL;
goto error; goto error;
} }
bdrv_graph_rdunlock_main_loop();
/* /*
* Allow changing the 'backing' option. The new value can be * Allow changing the 'backing' option. The new value can be
@@ -5051,8 +5009,6 @@ bdrv_reopen_prepare(BDRVReopenState *reopen_state, BlockReopenQueue *queue,
if (qdict_size(reopen_state->options)) { if (qdict_size(reopen_state->options)) {
const QDictEntry *entry = qdict_first(reopen_state->options); const QDictEntry *entry = qdict_first(reopen_state->options);
GRAPH_RDLOCK_GUARD_MAINLOOP();
do { do {
QObject *new = entry->value; QObject *new = entry->value;
QObject *old = qdict_get(reopen_state->bs->options, entry->key); QObject *old = qdict_get(reopen_state->bs->options, entry->key);
@@ -5126,7 +5082,7 @@ error:
* makes them final by swapping the staging BlockDriverState contents into * makes them final by swapping the staging BlockDriverState contents into
* the active BlockDriverState contents. * the active BlockDriverState contents.
*/ */
static void GRAPH_UNLOCKED bdrv_reopen_commit(BDRVReopenState *reopen_state) static void bdrv_reopen_commit(BDRVReopenState *reopen_state)
{ {
BlockDriver *drv; BlockDriver *drv;
BlockDriverState *bs; BlockDriverState *bs;
@@ -5143,8 +5099,6 @@ static void GRAPH_UNLOCKED bdrv_reopen_commit(BDRVReopenState *reopen_state)
drv->bdrv_reopen_commit(reopen_state); drv->bdrv_reopen_commit(reopen_state);
} }
GRAPH_RDLOCK_GUARD_MAINLOOP();
/* set BDS specific flags now */ /* set BDS specific flags now */
qobject_unref(bs->explicit_options); qobject_unref(bs->explicit_options);
qobject_unref(bs->options); qobject_unref(bs->options);
@@ -5166,7 +5120,9 @@ static void GRAPH_UNLOCKED bdrv_reopen_commit(BDRVReopenState *reopen_state)
qdict_del(bs->explicit_options, "backing"); qdict_del(bs->explicit_options, "backing");
qdict_del(bs->options, "backing"); qdict_del(bs->options, "backing");
bdrv_graph_rdlock_main_loop();
bdrv_refresh_limits(bs, NULL, NULL); bdrv_refresh_limits(bs, NULL, NULL);
bdrv_graph_rdunlock_main_loop();
bdrv_refresh_total_sectors(bs, bs->total_sectors); bdrv_refresh_total_sectors(bs, bs->total_sectors);
} }
@@ -5174,7 +5130,7 @@ static void GRAPH_UNLOCKED bdrv_reopen_commit(BDRVReopenState *reopen_state)
* Abort the reopen, and delete and free the staged changes in * Abort the reopen, and delete and free the staged changes in
* reopen_state * reopen_state
*/ */
static void GRAPH_UNLOCKED bdrv_reopen_abort(BDRVReopenState *reopen_state) static void bdrv_reopen_abort(BDRVReopenState *reopen_state)
{ {
BlockDriver *drv; BlockDriver *drv;
@@ -5209,15 +5165,14 @@ static void bdrv_close(BlockDriverState *bs)
bs->drv = NULL; bs->drv = NULL;
} }
bdrv_graph_wrlock(bs); bdrv_graph_wrlock(NULL);
QLIST_FOREACH_SAFE(child, &bs->children, next, next) { QLIST_FOREACH_SAFE(child, &bs->children, next, next) {
bdrv_unref_child(bs, child); bdrv_unref_child(bs, child);
} }
bdrv_graph_wrunlock();
assert(!bs->backing); assert(!bs->backing);
assert(!bs->file); assert(!bs->file);
bdrv_graph_wrunlock();
g_free(bs->opaque); g_free(bs->opaque);
bs->opaque = NULL; bs->opaque = NULL;
qatomic_set(&bs->copy_on_read, 0); qatomic_set(&bs->copy_on_read, 0);
@@ -5422,9 +5377,6 @@ bdrv_replace_node_noperm(BlockDriverState *from,
} }
/* /*
* Switch all parents of @from to point to @to instead. @from and @to must be in
* the same AioContext and both must be drained.
*
* With auto_skip=true bdrv_replace_node_common skips updating from parents * With auto_skip=true bdrv_replace_node_common skips updating from parents
* if it creates a parent-child relation loop or if parent is block-job. * if it creates a parent-child relation loop or if parent is block-job.
* *
@@ -5434,9 +5386,10 @@ bdrv_replace_node_noperm(BlockDriverState *from,
* With @detach_subchain=true @to must be in a backing chain of @from. In this * With @detach_subchain=true @to must be in a backing chain of @from. In this
* case backing link of the cow-parent of @to is removed. * case backing link of the cow-parent of @to is removed.
*/ */
static int GRAPH_WRLOCK static int bdrv_replace_node_common(BlockDriverState *from,
bdrv_replace_node_common(BlockDriverState *from, BlockDriverState *to, BlockDriverState *to,
bool auto_skip, bool detach_subchain, Error **errp) bool auto_skip, bool detach_subchain,
Error **errp)
{ {
Transaction *tran = tran_new(); Transaction *tran = tran_new();
g_autoptr(GSList) refresh_list = NULL; g_autoptr(GSList) refresh_list = NULL;
@@ -5445,10 +5398,6 @@ bdrv_replace_node_common(BlockDriverState *from, BlockDriverState *to,
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
assert(from->quiesce_counter);
assert(to->quiesce_counter);
assert(bdrv_get_aio_context(from) == bdrv_get_aio_context(to));
if (detach_subchain) { if (detach_subchain) {
assert(bdrv_chain_contains(from, to)); assert(bdrv_chain_contains(from, to));
assert(from != to); assert(from != to);
@@ -5460,6 +5409,17 @@ bdrv_replace_node_common(BlockDriverState *from, BlockDriverState *to,
} }
} }
/* Make sure that @from doesn't go away until we have successfully attached
* all of its parents to @to. */
bdrv_ref(from);
assert(qemu_get_current_aio_context() == qemu_get_aio_context());
assert(bdrv_get_aio_context(from) == bdrv_get_aio_context(to));
bdrv_drained_begin(from);
bdrv_drained_begin(to);
bdrv_graph_wrlock(to);
/* /*
* Do the replacement without permission update. * Do the replacement without permission update.
* Replacement may influence the permissions, we should calculate new * Replacement may influence the permissions, we should calculate new
@@ -5488,33 +5448,29 @@ bdrv_replace_node_common(BlockDriverState *from, BlockDriverState *to,
out: out:
tran_finalize(tran, ret); tran_finalize(tran, ret);
bdrv_graph_wrunlock();
bdrv_drained_end(to);
bdrv_drained_end(from);
bdrv_unref(from);
return ret; return ret;
} }
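bdrv_replace_node_common() shows the two approaches side by side: one column annotates the function GRAPH_WRLOCK and merely asserts that the caller already drained both nodes in the same AioContext, while the other column does the referencing, draining and locking itself. The contrast, reduced to the lines visible above:

    /* callee asserts what the caller must have done */
    static int GRAPH_WRLOCK
    bdrv_replace_node_common(BlockDriverState *from, BlockDriverState *to,
                             bool auto_skip, bool detach_subchain, Error **errp)
    {
        GLOBAL_STATE_CODE();

        assert(from->quiesce_counter);
        assert(to->quiesce_counter);
        assert(bdrv_get_aio_context(from) == bdrv_get_aio_context(to));

        /* ... graph manipulation elided ... */
    }

    /* callee does its own setup and teardown */
        bdrv_ref(from);   /* keep @from alive until its parents point to @to */
        bdrv_drained_begin(from);
        bdrv_drained_begin(to);
        bdrv_graph_wrlock(to);
        /* ... graph manipulation elided ... */
        bdrv_graph_wrunlock();
        bdrv_drained_end(to);
        bdrv_drained_end(from);
        bdrv_unref(from);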
int bdrv_replace_node(BlockDriverState *from, BlockDriverState *to, int bdrv_replace_node(BlockDriverState *from, BlockDriverState *to,
Error **errp) Error **errp)
{ {
GLOBAL_STATE_CODE();
return bdrv_replace_node_common(from, to, true, false, errp); return bdrv_replace_node_common(from, to, true, false, errp);
} }
int bdrv_drop_filter(BlockDriverState *bs, Error **errp) int bdrv_drop_filter(BlockDriverState *bs, Error **errp)
{ {
BlockDriverState *child_bs;
int ret;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
bdrv_graph_rdlock_main_loop(); return bdrv_replace_node_common(bs, bdrv_filter_or_cow_bs(bs), true, true,
child_bs = bdrv_filter_or_cow_bs(bs); errp);
bdrv_graph_rdunlock_main_loop();
bdrv_drained_begin(child_bs);
bdrv_graph_wrlock(bs);
ret = bdrv_replace_node_common(bs, child_bs, true, true, errp);
bdrv_graph_wrunlock();
bdrv_drained_end(child_bs);
return ret;
} }
/* /*
@@ -5541,9 +5497,7 @@ int bdrv_append(BlockDriverState *bs_new, BlockDriverState *bs_top,
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
bdrv_graph_rdlock_main_loop();
assert(!bs_new->backing); assert(!bs_new->backing);
bdrv_graph_rdunlock_main_loop();
old_context = bdrv_get_aio_context(bs_top); old_context = bdrv_get_aio_context(bs_top);
bdrv_drained_begin(bs_top); bdrv_drained_begin(bs_top);
@@ -5711,19 +5665,9 @@ BlockDriverState *bdrv_insert_node(BlockDriverState *bs, QDict *options,
goto fail; goto fail;
} }
/*
* Make sure that @bs doesn't go away until we have successfully attached
* all of its parents to @new_node_bs and undrained it again.
*/
bdrv_ref(bs);
bdrv_drained_begin(bs); bdrv_drained_begin(bs);
bdrv_drained_begin(new_node_bs);
bdrv_graph_wrlock(new_node_bs);
ret = bdrv_replace_node(bs, new_node_bs, errp); ret = bdrv_replace_node(bs, new_node_bs, errp);
bdrv_graph_wrunlock();
bdrv_drained_end(new_node_bs);
bdrv_drained_end(bs); bdrv_drained_end(bs);
bdrv_unref(bs);
if (ret < 0) { if (ret < 0) {
error_prepend(errp, "Could not replace node: "); error_prepend(errp, "Could not replace node: ");
@@ -5769,14 +5713,13 @@ int coroutine_fn bdrv_co_check(BlockDriverState *bs,
* image file header * image file header
* -ENOTSUP - format driver doesn't support changing the backing file * -ENOTSUP - format driver doesn't support changing the backing file
*/ */
int coroutine_fn int bdrv_change_backing_file(BlockDriverState *bs, const char *backing_file,
bdrv_co_change_backing_file(BlockDriverState *bs, const char *backing_file, const char *backing_fmt, bool require)
const char *backing_fmt, bool require)
{ {
BlockDriver *drv = bs->drv; BlockDriver *drv = bs->drv;
int ret; int ret;
IO_CODE(); GLOBAL_STATE_CODE();
if (!drv) { if (!drv) {
return -ENOMEDIUM; return -ENOMEDIUM;
@@ -5791,8 +5734,8 @@ bdrv_co_change_backing_file(BlockDriverState *bs, const char *backing_file,
return -EINVAL; return -EINVAL;
} }
if (drv->bdrv_co_change_backing_file != NULL) { if (drv->bdrv_change_backing_file != NULL) {
ret = drv->bdrv_co_change_backing_file(bs, backing_file, backing_fmt); ret = drv->bdrv_change_backing_file(bs, backing_file, backing_fmt);
} else { } else {
ret = -ENOTSUP; ret = -ENOTSUP;
} }
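The change-backing-file hunk pairs a coroutine variant on the left (IO_CODE(), driver callback bdrv_co_change_backing_file) with a synchronous, main-loop-only variant on the right (GLOBAL_STATE_CODE(), bdrv_change_backing_file). The left column, reassembled as an abbreviated sketch; the 'require' validation and the rest of the function are elided because they are not part of the hunk:

    int coroutine_fn
    bdrv_co_change_backing_file(BlockDriverState *bs, const char *backing_file,
                                const char *backing_fmt, bool require)
    {
        BlockDriver *drv = bs->drv;
        int ret;

        IO_CODE();   /* callable from I/O (coroutine) context */

        if (!drv) {
            return -ENOMEDIUM;
        }

        /* ... 'require' validation elided ... */

        if (drv->bdrv_co_change_backing_file != NULL) {
            ret = drv->bdrv_co_change_backing_file(bs, backing_file, backing_fmt);
        } else {
            ret = -ENOTSUP;
        }

        /* ... remainder of the function elided ... */
        return ret;
    }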
@@ -5849,9 +5792,8 @@ BlockDriverState *bdrv_find_base(BlockDriverState *bs)
* between @bs and @base is frozen. @errp is set if that's the case. * between @bs and @base is frozen. @errp is set if that's the case.
* @base must be reachable from @bs, or NULL. * @base must be reachable from @bs, or NULL.
*/ */
static bool GRAPH_RDLOCK bool bdrv_is_backing_chain_frozen(BlockDriverState *bs, BlockDriverState *base,
bdrv_is_backing_chain_frozen(BlockDriverState *bs, BlockDriverState *base, Error **errp)
Error **errp)
{ {
BlockDriverState *i; BlockDriverState *i;
BdrvChild *child; BdrvChild *child;
@@ -5975,15 +5917,14 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
bdrv_ref(top); bdrv_ref(top);
bdrv_drained_begin(base); bdrv_drained_begin(base);
bdrv_graph_wrlock(base);
if (!top->drv || !base->drv) { if (!top->drv || !base->drv) {
goto exit_wrlock; goto exit;
} }
/* Make sure that base is in the backing chain of top */ /* Make sure that base is in the backing chain of top */
if (!bdrv_chain_contains(top, base)) { if (!bdrv_chain_contains(top, base)) {
goto exit_wrlock; goto exit;
} }
/* If 'base' recursively inherits from 'top' then we should set /* If 'base' recursively inherits from 'top' then we should set
@@ -6000,9 +5941,11 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
backing_file_str = base->filename; backing_file_str = base->filename;
} }
bdrv_graph_rdlock_main_loop();
QLIST_FOREACH(c, &top->parents, next_parent) { QLIST_FOREACH(c, &top->parents, next_parent) {
updated_children = g_slist_prepend(updated_children, c); updated_children = g_slist_prepend(updated_children, c);
} }
bdrv_graph_rdunlock_main_loop();
/* /*
* It seems correct to pass detach_subchain=true here, but it triggers * It seems correct to pass detach_subchain=true here, but it triggers
@@ -6015,8 +5958,6 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
* That's a FIXME. * That's a FIXME.
*/ */
bdrv_replace_node_common(top, base, false, false, &local_err); bdrv_replace_node_common(top, base, false, false, &local_err);
bdrv_graph_wrunlock();
if (local_err) { if (local_err) {
error_report_err(local_err); error_report_err(local_err);
goto exit; goto exit;
@@ -6049,10 +5990,6 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
} }
ret = 0; ret = 0;
goto exit;
exit_wrlock:
bdrv_graph_wrunlock();
exit: exit:
bdrv_drained_end(base); bdrv_drained_end(base);
bdrv_unref(top); bdrv_unref(top);
@@ -6271,12 +6208,12 @@ void bdrv_iterate_format(void (*it)(void *opaque, const char *name),
QLIST_FOREACH(drv, &bdrv_drivers, list) { QLIST_FOREACH(drv, &bdrv_drivers, list) {
if (drv->format_name) { if (drv->format_name) {
bool found = false; bool found = false;
int i = count;
if (use_bdrv_whitelist && !bdrv_is_whitelisted(drv, read_only)) { if (use_bdrv_whitelist && !bdrv_is_whitelisted(drv, read_only)) {
continue; continue;
} }
i = count;
while (formats && i && !found) { while (formats && i && !found) {
found = !strcmp(formats[--i], drv->format_name); found = !strcmp(formats[--i], drv->format_name);
} }
@@ -6344,7 +6281,6 @@ BlockDeviceInfoList *bdrv_named_nodes_list(bool flat,
BlockDriverState *bs; BlockDriverState *bs;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
list = NULL; list = NULL;
QTAILQ_FOREACH(bs, &graph_bdrv_states, node_list) { QTAILQ_FOREACH(bs, &graph_bdrv_states, node_list) {
@@ -6615,7 +6551,7 @@ int bdrv_has_zero_init_1(BlockDriverState *bs)
return 1; return 1;
} }
int coroutine_mixed_fn bdrv_has_zero_init(BlockDriverState *bs) int bdrv_has_zero_init(BlockDriverState *bs)
{ {
BlockDriverState *filtered; BlockDriverState *filtered;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
@@ -6730,8 +6666,7 @@ void coroutine_fn bdrv_co_debug_event(BlockDriverState *bs, BlkdebugEvent event)
bs->drv->bdrv_co_debug_event(bs, event); bs->drv->bdrv_co_debug_event(bs, event);
} }
static BlockDriverState * GRAPH_RDLOCK static BlockDriverState *bdrv_find_debug_node(BlockDriverState *bs)
bdrv_find_debug_node(BlockDriverState *bs)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
while (bs && bs->drv && !bs->drv->bdrv_debug_breakpoint) { while (bs && bs->drv && !bs->drv->bdrv_debug_breakpoint) {
@@ -6750,8 +6685,6 @@ int bdrv_debug_breakpoint(BlockDriverState *bs, const char *event,
const char *tag) const char *tag)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
bs = bdrv_find_debug_node(bs); bs = bdrv_find_debug_node(bs);
if (bs) { if (bs) {
return bs->drv->bdrv_debug_breakpoint(bs, event, tag); return bs->drv->bdrv_debug_breakpoint(bs, event, tag);
@@ -6763,8 +6696,6 @@ int bdrv_debug_breakpoint(BlockDriverState *bs, const char *event,
int bdrv_debug_remove_breakpoint(BlockDriverState *bs, const char *tag) int bdrv_debug_remove_breakpoint(BlockDriverState *bs, const char *tag)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
bs = bdrv_find_debug_node(bs); bs = bdrv_find_debug_node(bs);
if (bs) { if (bs) {
return bs->drv->bdrv_debug_remove_breakpoint(bs, tag); return bs->drv->bdrv_debug_remove_breakpoint(bs, tag);
@@ -6776,8 +6707,6 @@ int bdrv_debug_remove_breakpoint(BlockDriverState *bs, const char *tag)
int bdrv_debug_resume(BlockDriverState *bs, const char *tag) int bdrv_debug_resume(BlockDriverState *bs, const char *tag)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
while (bs && (!bs->drv || !bs->drv->bdrv_debug_resume)) { while (bs && (!bs->drv || !bs->drv->bdrv_debug_resume)) {
bs = bdrv_primary_bs(bs); bs = bdrv_primary_bs(bs);
} }
@@ -6792,8 +6721,6 @@ int bdrv_debug_resume(BlockDriverState *bs, const char *tag)
bool bdrv_debug_is_suspended(BlockDriverState *bs, const char *tag) bool bdrv_debug_is_suspended(BlockDriverState *bs, const char *tag)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
while (bs && bs->drv && !bs->drv->bdrv_debug_is_suspended) { while (bs && bs->drv && !bs->drv->bdrv_debug_is_suspended) {
bs = bdrv_primary_bs(bs); bs = bdrv_primary_bs(bs);
} }
@@ -6822,7 +6749,6 @@ BlockDriverState *bdrv_find_backing_image(BlockDriverState *bs,
BlockDriverState *bs_below; BlockDriverState *bs_below;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!bs || !bs->drv || !backing_file) { if (!bs || !bs->drv || !backing_file) {
return NULL; return NULL;
@@ -7034,7 +6960,6 @@ void bdrv_activate_all(Error **errp)
BdrvNextIterator it; BdrvNextIterator it;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) { for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
AioContext *aio_context = bdrv_get_aio_context(bs); AioContext *aio_context = bdrv_get_aio_context(bs);
@@ -7050,8 +6975,7 @@ void bdrv_activate_all(Error **errp)
} }
} }
static bool GRAPH_RDLOCK static bool bdrv_has_bds_parent(BlockDriverState *bs, bool only_active)
bdrv_has_bds_parent(BlockDriverState *bs, bool only_active)
{ {
BdrvChild *parent; BdrvChild *parent;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
@@ -7068,13 +6992,14 @@ bdrv_has_bds_parent(BlockDriverState *bs, bool only_active)
return false; return false;
} }
static int GRAPH_RDLOCK bdrv_inactivate_recurse(BlockDriverState *bs) static int bdrv_inactivate_recurse(BlockDriverState *bs)
{ {
BdrvChild *child, *parent; BdrvChild *child, *parent;
int ret; int ret;
uint64_t cumulative_perms, cumulative_shared_perms; uint64_t cumulative_perms, cumulative_shared_perms;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!bs->drv) { if (!bs->drv) {
return -ENOMEDIUM; return -ENOMEDIUM;
@@ -7140,7 +7065,6 @@ int bdrv_inactivate_all(void)
GSList *aio_ctxs = NULL, *ctx; GSList *aio_ctxs = NULL, *ctx;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) { for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
AioContext *aio_context = bdrv_get_aio_context(bs); AioContext *aio_context = bdrv_get_aio_context(bs);
@@ -7280,7 +7204,6 @@ bool bdrv_op_is_blocked(BlockDriverState *bs, BlockOpType op, Error **errp)
{ {
BdrvOpBlocker *blocker; BdrvOpBlocker *blocker;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
assert((int) op >= 0 && op < BLOCK_OP_TYPE_MAX); assert((int) op >= 0 && op < BLOCK_OP_TYPE_MAX);
if (!QLIST_EMPTY(&bs->op_blockers[op])) { if (!QLIST_EMPTY(&bs->op_blockers[op])) {
blocker = QLIST_FIRST(&bs->op_blockers[op]); blocker = QLIST_FIRST(&bs->op_blockers[op]);
@@ -8128,7 +8051,7 @@ static bool append_strong_runtime_options(QDict *d, BlockDriverState *bs)
/* Note: This function may return false positives; it may return true /* Note: This function may return false positives; it may return true
* even if opening the backing file specified by bs's image header * even if opening the backing file specified by bs's image header
* would result in exactly bs->backing. */ * would result in exactly bs->backing. */
static bool GRAPH_RDLOCK bdrv_backing_overridden(BlockDriverState *bs) static bool bdrv_backing_overridden(BlockDriverState *bs)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
if (bs->backing) { if (bs->backing) {
@@ -8502,8 +8425,8 @@ BdrvChild *bdrv_primary_child(BlockDriverState *bs)
return found; return found;
} }
static BlockDriverState * GRAPH_RDLOCK static BlockDriverState *bdrv_do_skip_filters(BlockDriverState *bs,
bdrv_do_skip_filters(BlockDriverState *bs, bool stop_on_explicit_filter) bool stop_on_explicit_filter)
{ {
BdrvChild *c; BdrvChild *c;


@@ -384,33 +384,31 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
return NULL; return NULL;
} }
bdrv_graph_rdlock_main_loop();
if (!bdrv_is_inserted(bs)) { if (!bdrv_is_inserted(bs)) {
error_setg(errp, "Device is not inserted: %s", error_setg(errp, "Device is not inserted: %s",
bdrv_get_device_name(bs)); bdrv_get_device_name(bs));
goto error_rdlock; return NULL;
} }
if (!bdrv_is_inserted(target)) { if (!bdrv_is_inserted(target)) {
error_setg(errp, "Device is not inserted: %s", error_setg(errp, "Device is not inserted: %s",
bdrv_get_device_name(target)); bdrv_get_device_name(target));
goto error_rdlock; return NULL;
} }
if (compress && !bdrv_supports_compressed_writes(target)) { if (compress && !bdrv_supports_compressed_writes(target)) {
error_setg(errp, "Compression is not supported for this drive %s", error_setg(errp, "Compression is not supported for this drive %s",
bdrv_get_device_name(target)); bdrv_get_device_name(target));
goto error_rdlock; return NULL;
} }
if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) { if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) {
goto error_rdlock; return NULL;
} }
if (bdrv_op_is_blocked(target, BLOCK_OP_TYPE_BACKUP_TARGET, errp)) { if (bdrv_op_is_blocked(target, BLOCK_OP_TYPE_BACKUP_TARGET, errp)) {
goto error_rdlock; return NULL;
} }
bdrv_graph_rdunlock_main_loop();
if (perf->max_workers < 1 || perf->max_workers > INT_MAX) { if (perf->max_workers < 1 || perf->max_workers > INT_MAX) {
error_setg(errp, "max-workers must be between 1 and %d", INT_MAX); error_setg(errp, "max-workers must be between 1 and %d", INT_MAX);
@@ -438,7 +436,6 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
len = bdrv_getlength(bs); len = bdrv_getlength(bs);
if (len < 0) { if (len < 0) {
GRAPH_RDLOCK_GUARD_MAINLOOP();
error_setg_errno(errp, -len, "Unable to get length for '%s'", error_setg_errno(errp, -len, "Unable to get length for '%s'",
bdrv_get_device_or_node_name(bs)); bdrv_get_device_or_node_name(bs));
goto error; goto error;
@@ -446,7 +443,6 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
target_len = bdrv_getlength(target); target_len = bdrv_getlength(target);
if (target_len < 0) { if (target_len < 0) {
GRAPH_RDLOCK_GUARD_MAINLOOP();
error_setg_errno(errp, -target_len, "Unable to get length for '%s'", error_setg_errno(errp, -target_len, "Unable to get length for '%s'",
bdrv_get_device_or_node_name(bs)); bdrv_get_device_or_node_name(bs));
goto error; goto error;
@@ -496,10 +492,8 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
block_copy_set_speed(bcs, speed); block_copy_set_speed(bcs, speed);
/* Required permissions are taken by copy-before-write filter target */ /* Required permissions are taken by copy-before-write filter target */
bdrv_graph_wrlock(target);
block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL, block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
&error_abort); &error_abort);
bdrv_graph_wrunlock();
return &job->common; return &job->common;
@@ -512,8 +506,4 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
} }
return NULL; return NULL;
error_rdlock:
bdrv_graph_rdunlock_main_loop();
return NULL;
} }


@@ -508,8 +508,6 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
goto out; goto out;
} }
bdrv_graph_rdlock_main_loop();
bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED | bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED |
(BDRV_REQ_FUA & bs->file->bs->supported_write_flags); (BDRV_REQ_FUA & bs->file->bs->supported_write_flags);
bs->supported_zero_flags = BDRV_REQ_WRITE_UNCHANGED | bs->supported_zero_flags = BDRV_REQ_WRITE_UNCHANGED |
@@ -522,7 +520,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
if (s->align && (s->align >= INT_MAX || !is_power_of_2(s->align))) { if (s->align && (s->align >= INT_MAX || !is_power_of_2(s->align))) {
error_setg(errp, "Cannot meet constraints with align %" PRIu64, error_setg(errp, "Cannot meet constraints with align %" PRIu64,
s->align); s->align);
goto out_rdlock; goto out;
} }
align = MAX(s->align, bs->file->bs->bl.request_alignment); align = MAX(s->align, bs->file->bs->bl.request_alignment);
@@ -532,7 +530,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
!QEMU_IS_ALIGNED(s->max_transfer, align))) { !QEMU_IS_ALIGNED(s->max_transfer, align))) {
error_setg(errp, "Cannot meet constraints with max-transfer %" PRIu64, error_setg(errp, "Cannot meet constraints with max-transfer %" PRIu64,
s->max_transfer); s->max_transfer);
goto out_rdlock; goto out;
} }
s->opt_write_zero = qemu_opt_get_size(opts, "opt-write-zero", 0); s->opt_write_zero = qemu_opt_get_size(opts, "opt-write-zero", 0);
@@ -541,7 +539,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
!QEMU_IS_ALIGNED(s->opt_write_zero, align))) { !QEMU_IS_ALIGNED(s->opt_write_zero, align))) {
error_setg(errp, "Cannot meet constraints with opt-write-zero %" PRIu64, error_setg(errp, "Cannot meet constraints with opt-write-zero %" PRIu64,
s->opt_write_zero); s->opt_write_zero);
goto out_rdlock; goto out;
} }
s->max_write_zero = qemu_opt_get_size(opts, "max-write-zero", 0); s->max_write_zero = qemu_opt_get_size(opts, "max-write-zero", 0);
@@ -551,7 +549,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
MAX(s->opt_write_zero, align)))) { MAX(s->opt_write_zero, align)))) {
error_setg(errp, "Cannot meet constraints with max-write-zero %" PRIu64, error_setg(errp, "Cannot meet constraints with max-write-zero %" PRIu64,
s->max_write_zero); s->max_write_zero);
goto out_rdlock; goto out;
} }
s->opt_discard = qemu_opt_get_size(opts, "opt-discard", 0); s->opt_discard = qemu_opt_get_size(opts, "opt-discard", 0);
@@ -560,7 +558,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
!QEMU_IS_ALIGNED(s->opt_discard, align))) { !QEMU_IS_ALIGNED(s->opt_discard, align))) {
error_setg(errp, "Cannot meet constraints with opt-discard %" PRIu64, error_setg(errp, "Cannot meet constraints with opt-discard %" PRIu64,
s->opt_discard); s->opt_discard);
goto out_rdlock; goto out;
} }
s->max_discard = qemu_opt_get_size(opts, "max-discard", 0); s->max_discard = qemu_opt_get_size(opts, "max-discard", 0);
@@ -570,14 +568,12 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
MAX(s->opt_discard, align)))) { MAX(s->opt_discard, align)))) {
error_setg(errp, "Cannot meet constraints with max-discard %" PRIu64, error_setg(errp, "Cannot meet constraints with max-discard %" PRIu64,
s->max_discard); s->max_discard);
goto out_rdlock; goto out;
} }
bdrv_debug_event(bs, BLKDBG_NONE); bdrv_debug_event(bs, BLKDBG_NONE);
ret = 0; ret = 0;
out_rdlock:
bdrv_graph_rdunlock_main_loop();
out: out:
if (ret < 0) { if (ret < 0) {
qemu_mutex_destroy(&s->lock); qemu_mutex_destroy(&s->lock);
@@ -750,10 +746,13 @@ blkdebug_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
return bdrv_co_pdiscard(bs->file, offset, bytes); return bdrv_co_pdiscard(bs->file, offset, bytes);
} }
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn blkdebug_co_block_status(BlockDriverState *bs,
blkdebug_co_block_status(BlockDriverState *bs, bool want_zero, int64_t offset, bool want_zero,
int64_t bytes, int64_t *pnum, int64_t *map, int64_t offset,
BlockDriverState **file) int64_t bytes,
int64_t *pnum,
int64_t *map,
BlockDriverState **file)
{ {
int err; int err;
@@ -974,7 +973,7 @@ blkdebug_co_getlength(BlockDriverState *bs)
return bdrv_co_getlength(bs->file->bs); return bdrv_co_getlength(bs->file->bs);
} }
static void GRAPH_RDLOCK blkdebug_refresh_filename(BlockDriverState *bs) static void blkdebug_refresh_filename(BlockDriverState *bs)
{ {
BDRVBlkdebugState *s = bs->opaque; BDRVBlkdebugState *s = bs->opaque;
const QDictEntry *e; const QDictEntry *e;


@@ -13,7 +13,6 @@
#include "block/block_int.h" #include "block/block_int.h"
#include "exec/memory.h" #include "exec/memory.h"
#include "exec/cpu-common.h" /* for qemu_ram_get_fd() */ #include "exec/cpu-common.h" /* for qemu_ram_get_fd() */
#include "qemu/defer-call.h"
#include "qapi/error.h" #include "qapi/error.h"
#include "qemu/error-report.h" #include "qemu/error-report.h"
#include "qapi/qmp/qdict.h" #include "qapi/qmp/qdict.h"
@@ -313,10 +312,10 @@ static void blkio_detach_aio_context(BlockDriverState *bs)
} }
/* /*
* Called by defer_call_end() or immediately if not in a deferred section. * Called by blk_io_unplug() or immediately if not plugged. Called without
* Called without blkio_lock. * blkio_lock.
*/ */
static void blkio_deferred_fn(void *opaque) static void blkio_unplug_fn(void *opaque)
{ {
BDRVBlkioState *s = opaque; BDRVBlkioState *s = opaque;
@@ -333,7 +332,7 @@ static void blkio_submit_io(BlockDriverState *bs)
{ {
BDRVBlkioState *s = bs->opaque; BDRVBlkioState *s = bs->opaque;
defer_call(blkio_deferred_fn, s); blk_io_plug_call(blkio_unplug_fn, s);
} }
static int coroutine_fn static int coroutine_fn
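The blkio hunks above swap one batching helper for another: one column uses the qemu/defer-call.h API (defer_call() plus a callback run at defer_call_end()), the other the blk_io_plug_call()/unplug pair. A sketch of the defer-call side; defer_call_begin() is an assumption, since only defer_call() and defer_call_end() are named in the hunk:

    #include "qemu/defer-call.h"

    static void blkio_submit_io(BlockDriverState *bs)
    {
        BDRVBlkioState *s = bs->opaque;

        /* Coalesce with other pending submissions; blkio_deferred_fn() runs
         * at defer_call_end() time, or immediately if no deferred section
         * is active. */
        defer_call(blkio_deferred_fn, s);
    }

    /* hypothetical caller bracketing several submissions */
    defer_call_begin();
    /* ... queue multiple requests, each ending in blkio_submit_io() ... */
    defer_call_end();   /* the deferred callback fires here */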


@@ -130,13 +130,7 @@ static int coroutine_fn GRAPH_RDLOCK blkreplay_co_flush(BlockDriverState *bs)
static int blkreplay_snapshot_goto(BlockDriverState *bs, static int blkreplay_snapshot_goto(BlockDriverState *bs,
const char *snapshot_id) const char *snapshot_id)
{ {
BlockDriverState *file_bs; return bdrv_snapshot_goto(bs->file->bs, snapshot_id, NULL);
bdrv_graph_rdlock_main_loop();
file_bs = bs->file->bs;
bdrv_graph_rdunlock_main_loop();
return bdrv_snapshot_goto(file_bs, snapshot_id, NULL);
} }
static BlockDriver bdrv_blkreplay = { static BlockDriver bdrv_blkreplay = {


@@ -33,8 +33,8 @@ typedef struct BlkverifyRequest {
uint64_t bytes; uint64_t bytes;
int flags; int flags;
int GRAPH_RDLOCK_PTR (*request_fn)( int (*request_fn)(BdrvChild *, int64_t, int64_t, QEMUIOVector *,
BdrvChild *, int64_t, int64_t, QEMUIOVector *, BdrvRequestFlags); BdrvRequestFlags);
int ret; /* test image result */ int ret; /* test image result */
int raw_ret; /* raw image result */ int raw_ret; /* raw image result */
@@ -170,11 +170,8 @@ static void coroutine_fn blkverify_do_test_req(void *opaque)
BlkverifyRequest *r = opaque; BlkverifyRequest *r = opaque;
BDRVBlkverifyState *s = r->bs->opaque; BDRVBlkverifyState *s = r->bs->opaque;
bdrv_graph_co_rdlock();
r->ret = r->request_fn(s->test_file, r->offset, r->bytes, r->qiov, r->ret = r->request_fn(s->test_file, r->offset, r->bytes, r->qiov,
r->flags); r->flags);
bdrv_graph_co_rdunlock();
r->done++; r->done++;
qemu_coroutine_enter_if_inactive(r->co); qemu_coroutine_enter_if_inactive(r->co);
} }
@@ -183,16 +180,13 @@ static void coroutine_fn blkverify_do_raw_req(void *opaque)
{ {
BlkverifyRequest *r = opaque; BlkverifyRequest *r = opaque;
bdrv_graph_co_rdlock();
r->raw_ret = r->request_fn(r->bs->file, r->offset, r->bytes, r->raw_qiov, r->raw_ret = r->request_fn(r->bs->file, r->offset, r->bytes, r->raw_qiov,
r->flags); r->flags);
bdrv_graph_co_rdunlock();
r->done++; r->done++;
qemu_coroutine_enter_if_inactive(r->co); qemu_coroutine_enter_if_inactive(r->co);
} }
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn
blkverify_co_prwv(BlockDriverState *bs, BlkverifyRequest *r, uint64_t offset, blkverify_co_prwv(BlockDriverState *bs, BlkverifyRequest *r, uint64_t offset,
uint64_t bytes, QEMUIOVector *qiov, QEMUIOVector *raw_qiov, uint64_t bytes, QEMUIOVector *qiov, QEMUIOVector *raw_qiov,
int flags, bool is_write) int flags, bool is_write)
@@ -228,7 +222,7 @@ blkverify_co_prwv(BlockDriverState *bs, BlkverifyRequest *r, uint64_t offset,
return r->ret; return r->ret;
} }
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn
blkverify_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes, blkverify_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
QEMUIOVector *qiov, BdrvRequestFlags flags) QEMUIOVector *qiov, BdrvRequestFlags flags)
{ {
@@ -257,7 +251,7 @@ blkverify_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
return ret; return ret;
} }
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn
blkverify_co_pwritev(BlockDriverState *bs, int64_t offset, int64_t bytes, blkverify_co_pwritev(BlockDriverState *bs, int64_t offset, int64_t bytes,
QEMUIOVector *qiov, BdrvRequestFlags flags) QEMUIOVector *qiov, BdrvRequestFlags flags)
{ {
@@ -288,7 +282,7 @@ blkverify_recurse_can_replace(BlockDriverState *bs,
bdrv_recurse_can_replace(s->test_file->bs, to_replace); bdrv_recurse_can_replace(s->test_file->bs, to_replace);
} }
static void GRAPH_RDLOCK blkverify_refresh_filename(BlockDriverState *bs) static void blkverify_refresh_filename(BlockDriverState *bs)
{ {
BDRVBlkverifyState *s = bs->opaque; BDRVBlkverifyState *s = bs->opaque;


@@ -780,12 +780,11 @@ BlockDriverState *blk_bs(BlockBackend *blk)
return blk->root ? blk->root->bs : NULL; return blk->root ? blk->root->bs : NULL;
} }
static BlockBackend * GRAPH_RDLOCK bdrv_first_blk(BlockDriverState *bs) static BlockBackend *bdrv_first_blk(BlockDriverState *bs)
{ {
BdrvChild *child; BdrvChild *child;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
assert_bdrv_graph_readable();
QLIST_FOREACH(child, &bs->parents, next_parent) { QLIST_FOREACH(child, &bs->parents, next_parent) {
if (child->klass == &child_root) { if (child->klass == &child_root) {
@@ -813,8 +812,6 @@ bool bdrv_is_root_node(BlockDriverState *bs)
BdrvChild *c; BdrvChild *c;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
assert_bdrv_graph_readable();
QLIST_FOREACH(c, &bs->parents, next_parent) { QLIST_FOREACH(c, &bs->parents, next_parent) {
if (c->klass != &child_root) { if (c->klass != &child_root) {
return false; return false;
@@ -931,12 +928,10 @@ int blk_insert_bs(BlockBackend *blk, BlockDriverState *bs, Error **errp)
ThrottleGroupMember *tgm = &blk->public.throttle_group_member; ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
bdrv_ref(bs); bdrv_ref(bs);
bdrv_graph_wrlock(bs);
blk->root = bdrv_root_attach_child(bs, "root", &child_root, blk->root = bdrv_root_attach_child(bs, "root", &child_root,
BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY, BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
blk->perm, blk->shared_perm, blk->perm, blk->shared_perm,
blk, errp); blk, errp);
bdrv_graph_wrunlock();
if (blk->root == NULL) { if (blk->root == NULL) {
return -EPERM; return -EPERM;
} }
@@ -2264,7 +2259,6 @@ void blk_activate(BlockBackend *blk, Error **errp)
if (qemu_in_coroutine()) { if (qemu_in_coroutine()) {
bdrv_co_activate(bs, errp); bdrv_co_activate(bs, errp);
} else { } else {
GRAPH_RDLOCK_GUARD_MAINLOOP();
bdrv_activate(bs, errp); bdrv_activate(bs, errp);
} }
} }
@@ -2390,7 +2384,6 @@ bool blk_op_is_blocked(BlockBackend *blk, BlockOpType op, Error **errp)
{ {
BlockDriverState *bs = blk_bs(blk); BlockDriverState *bs = blk_bs(blk);
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!bs) { if (!bs) {
return false; return false;
@@ -2668,8 +2661,6 @@ int blk_load_vmstate(BlockBackend *blk, uint8_t *buf, int64_t pos, int size)
int blk_probe_blocksizes(BlockBackend *blk, BlockSizes *bsz) int blk_probe_blocksizes(BlockBackend *blk, BlockSizes *bsz)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!blk_is_available(blk)) { if (!blk_is_available(blk)) {
return -ENOMEDIUM; return -ENOMEDIUM;
} }
@@ -2730,7 +2721,6 @@ int blk_commit_all(void)
{ {
BlockBackend *blk = NULL; BlockBackend *blk = NULL;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
while ((blk = blk_all_next(blk)) != NULL) { while ((blk = blk_all_next(blk)) != NULL) {
AioContext *aio_context = blk_get_aio_context(blk); AioContext *aio_context = blk_get_aio_context(blk);
@@ -2911,8 +2901,6 @@ const BdrvChild *blk_root(BlockBackend *blk)
int blk_make_empty(BlockBackend *blk, Error **errp) int blk_make_empty(BlockBackend *blk, Error **errp)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!blk_is_available(blk)) { if (!blk_is_available(blk)) {
error_setg(errp, "No medium inserted"); error_setg(errp, "No medium inserted");
return -ENOMEDIUM; return -ENOMEDIUM;


@@ -313,12 +313,7 @@ static int64_t block_copy_calculate_cluster_size(BlockDriverState *target,
{ {
int ret; int ret;
BlockDriverInfo bdi; BlockDriverInfo bdi;
bool target_does_cow; bool target_does_cow = bdrv_backing_chain_next(target);
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
target_does_cow = bdrv_backing_chain_next(target);
/* /*
* If there is no backing file on the target, we cannot rely on COW if our * If there is no backing file on the target, we cannot rely on COW if our
@@ -360,8 +355,6 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
BdrvDirtyBitmap *copy_bitmap; BdrvDirtyBitmap *copy_bitmap;
bool is_fleecing; bool is_fleecing;
GLOBAL_STATE_CODE();
cluster_size = block_copy_calculate_cluster_size(target->bs, errp); cluster_size = block_copy_calculate_cluster_size(target->bs, errp);
if (cluster_size < 0) { if (cluster_size < 0) {
return NULL; return NULL;
@@ -399,9 +392,7 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
* For more information see commit f8d59dfb40bb and test * For more information see commit f8d59dfb40bb and test
* tests/qemu-iotests/222 * tests/qemu-iotests/222
*/ */
bdrv_graph_rdlock_main_loop();
is_fleecing = bdrv_chain_contains(target->bs, source->bs); is_fleecing = bdrv_chain_contains(target->bs, source->bs);
bdrv_graph_rdunlock_main_loop();
s = g_new(BlockCopyState, 1); s = g_new(BlockCopyState, 1);
*s = (BlockCopyState) { *s = (BlockCopyState) {


@@ -105,12 +105,8 @@ static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
struct bochs_header bochs; struct bochs_header bochs;
int ret; int ret;
GLOBAL_STATE_CODE();
/* No write support yet */ /* No write support yet */
bdrv_graph_rdlock_main_loop();
ret = bdrv_apply_auto_read_only(bs, NULL, errp); ret = bdrv_apply_auto_read_only(bs, NULL, errp);
bdrv_graph_rdunlock_main_loop();
if (ret < 0) { if (ret < 0) {
return ret; return ret;
} }
@@ -120,8 +116,6 @@ static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
return ret; return ret;
} }
GRAPH_RDLOCK_GUARD_MAINLOOP();
ret = bdrv_pread(bs->file, 0, sizeof(bochs), &bochs, 0); ret = bdrv_pread(bs->file, 0, sizeof(bochs), &bochs, 0);
if (ret < 0) { if (ret < 0) {
return ret; return ret;


@@ -67,11 +67,7 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
uint32_t offsets_size, max_compressed_block_size = 1, i; uint32_t offsets_size, max_compressed_block_size = 1, i;
int ret; int ret;
GLOBAL_STATE_CODE();
bdrv_graph_rdlock_main_loop();
ret = bdrv_apply_auto_read_only(bs, NULL, errp); ret = bdrv_apply_auto_read_only(bs, NULL, errp);
bdrv_graph_rdunlock_main_loop();
if (ret < 0) { if (ret < 0) {
return ret; return ret;
} }
@@ -81,8 +77,6 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
return ret; return ret;
} }
GRAPH_RDLOCK_GUARD_MAINLOOP();
/* read header */ /* read header */
ret = bdrv_pread(bs->file, 128, 4, &s->block_size, 0); ret = bdrv_pread(bs->file, 128, 4, &s->block_size, 0);
if (ret < 0) { if (ret < 0) {


@@ -48,10 +48,8 @@ static int commit_prepare(Job *job)
{ {
CommitBlockJob *s = container_of(job, CommitBlockJob, common.job); CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);
bdrv_graph_rdlock_main_loop();
bdrv_unfreeze_backing_chain(s->commit_top_bs, s->base_bs); bdrv_unfreeze_backing_chain(s->commit_top_bs, s->base_bs);
s->chain_frozen = false; s->chain_frozen = false;
bdrv_graph_rdunlock_main_loop();
     /* Remove base node parent that still uses BLK_PERM_WRITE/RESIZE before
      * the normal backing chain can be restored. */
@@ -68,12 +66,9 @@ static void commit_abort(Job *job)
 {
     CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);
     BlockDriverState *top_bs = blk_bs(s->top);
-    BlockDriverState *commit_top_backing_bs;
     if (s->chain_frozen) {
-        bdrv_graph_rdlock_main_loop();
         bdrv_unfreeze_backing_chain(s->commit_top_bs, s->base_bs);
-        bdrv_graph_rdunlock_main_loop();
     }
     /* Make sure commit_top_bs and top stay around until bdrv_replace_node() */
@@ -95,15 +90,8 @@ static void commit_abort(Job *job)
      * XXX Can (or should) we somehow keep 'consistent read' blocked even
      * after the failed/cancelled commit job is gone? If we already wrote
      * something to base, the intermediate images aren't valid any more. */
-    bdrv_graph_rdlock_main_loop();
-    commit_top_backing_bs = s->commit_top_bs->backing->bs;
-    bdrv_graph_rdunlock_main_loop();
-    bdrv_drained_begin(commit_top_backing_bs);
-    bdrv_graph_wrlock(commit_top_backing_bs);
-    bdrv_replace_node(s->commit_top_bs, commit_top_backing_bs, &error_abort);
-    bdrv_graph_wrunlock();
-    bdrv_drained_end(commit_top_backing_bs);
+    bdrv_replace_node(s->commit_top_bs, s->commit_top_bs->backing->bs,
+                      &error_abort);
     bdrv_unref(s->commit_top_bs);
     bdrv_unref(top_bs);
@@ -222,7 +210,7 @@ bdrv_commit_top_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
     return bdrv_co_preadv(bs->backing, offset, bytes, qiov, flags);
 }
-static GRAPH_RDLOCK void bdrv_commit_top_refresh_filename(BlockDriverState *bs)
+static void bdrv_commit_top_refresh_filename(BlockDriverState *bs)
 {
     pstrcpy(bs->exact_filename, sizeof(bs->exact_filename),
             bs->backing->bs->filename);
@@ -267,13 +255,10 @@ void commit_start(const char *job_id, BlockDriverState *bs,
     GLOBAL_STATE_CODE();
     assert(top != bs);
-    bdrv_graph_rdlock_main_loop();
     if (bdrv_skip_filters(top) == bdrv_skip_filters(base)) {
         error_setg(errp, "Invalid files for merge: top and base are the same");
-        bdrv_graph_rdunlock_main_loop();
         return;
     }
-    bdrv_graph_rdunlock_main_loop();
     base_size = bdrv_getlength(base);
     if (base_size < 0) {
@@ -339,7 +324,6 @@ void commit_start(const char *job_id, BlockDriverState *bs,
      * this is the responsibility of the interface (i.e. whoever calls
      * commit_start()).
      */
-    bdrv_graph_wrlock(top);
    s->base_overlay = bdrv_find_overlay(top, base);
     assert(s->base_overlay);
@@ -370,20 +354,16 @@ void commit_start(const char *job_id, BlockDriverState *bs,
         ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
                                  iter_shared_perms, errp);
         if (ret < 0) {
-            bdrv_graph_wrunlock();
             goto fail;
         }
     }
     if (bdrv_freeze_backing_chain(commit_top_bs, base, errp) < 0) {
-        bdrv_graph_wrunlock();
         goto fail;
     }
     s->chain_frozen = true;
     ret = block_job_add_bdrv(&s->common, "base", base, 0, BLK_PERM_ALL, errp);
-    bdrv_graph_wrunlock();
     if (ret < 0) {
         goto fail;
     }
@@ -416,9 +396,7 @@ void commit_start(const char *job_id, BlockDriverState *bs,
 fail:
     if (s->chain_frozen) {
-        bdrv_graph_rdlock_main_loop();
         bdrv_unfreeze_backing_chain(commit_top_bs, base);
-        bdrv_graph_rdunlock_main_loop();
     }
     if (s->base) {
         blk_unref(s->base);
@@ -433,11 +411,7 @@ fail:
     /* commit_top_bs has to be replaced after deleting the block job,
      * otherwise this would fail because of lack of permissions. */
     if (commit_top_bs) {
-        bdrv_drained_begin(top);
-        bdrv_graph_wrlock(top);
         bdrv_replace_node(commit_top_bs, top, &error_abort);
-        bdrv_graph_wrunlock();
-        bdrv_drained_end(top);
     }
 }
@@ -460,7 +434,6 @@ int bdrv_commit(BlockDriverState *bs)
     Error *local_err = NULL;
     GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     if (!drv)
         return -ENOMEDIUM;


@@ -203,7 +203,7 @@ static int coroutine_fn GRAPH_RDLOCK cbw_co_flush(BlockDriverState *bs)
  * It's guaranteed that guest writes will not interact in the region until
  * cbw_snapshot_read_unlock() called.
  */
-static BlockReq * coroutine_fn GRAPH_RDLOCK
+static coroutine_fn BlockReq *
 cbw_snapshot_read_lock(BlockDriverState *bs, int64_t offset, int64_t bytes,
                        int64_t *pnum, BdrvChild **file)
 {
@@ -305,7 +305,7 @@ cbw_co_snapshot_block_status(BlockDriverState *bs,
         return -EACCES;
     }
-    ret = bdrv_co_block_status(child->bs, offset, cur_bytes, pnum, map, file);
+    ret = bdrv_block_status(child->bs, offset, cur_bytes, pnum, map, file);
     if (child == s->target) {
         /*
          * We refer to s->target only for areas that we've written to it.
@@ -335,7 +335,7 @@ cbw_co_pdiscard_snapshot(BlockDriverState *bs, int64_t offset, int64_t bytes)
     return bdrv_co_pdiscard(s->target, offset, bytes);
 }
-static void GRAPH_RDLOCK cbw_refresh_filename(BlockDriverState *bs)
+static void cbw_refresh_filename(BlockDriverState *bs)
 {
     pstrcpy(bs->exact_filename, sizeof(bs->exact_filename),
             bs->file->bs->filename);
@@ -433,8 +433,6 @@ static int cbw_open(BlockDriverState *bs, QDict *options, int flags,
         return -EINVAL;
     }
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     ctx = bdrv_get_aio_context(bs);
     aio_context_acquire(ctx);


@@ -35,8 +35,8 @@ typedef struct BDRVStateCOR {
 } BDRVStateCOR;
-static int GRAPH_UNLOCKED
-cor_open(BlockDriverState *bs, QDict *options, int flags, Error **errp)
+static int cor_open(BlockDriverState *bs, QDict *options, int flags,
+                    Error **errp)
 {
     BlockDriverState *bottom_bs = NULL;
     BDRVStateCOR *state = bs->opaque;
@@ -44,15 +44,11 @@ cor_open(BlockDriverState *bs, QDict *options, int flags, Error **errp)
     const char *bottom_node = qdict_get_try_str(options, "bottom");
     int ret;
-    GLOBAL_STATE_CODE();
     ret = bdrv_open_file_child(NULL, options, "file", bs, errp);
     if (ret < 0) {
         return ret;
     }
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     bs->supported_read_flags = BDRV_REQ_PREFETCH;
     bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED |
@@ -150,11 +146,11 @@ cor_co_preadv_part(BlockDriverState *bs, int64_t offset, int64_t bytes,
         local_flags = flags;
         /* In case of failure, try to copy-on-read anyway */
-        ret = bdrv_co_is_allocated(bs->file->bs, offset, bytes, &n);
+        ret = bdrv_is_allocated(bs->file->bs, offset, bytes, &n);
         if (ret <= 0) {
-            ret = bdrv_co_is_allocated_above(bdrv_backing_chain_next(bs->file->bs),
-                                             state->bottom_bs, true, offset,
-                                             n, &n);
+            ret = bdrv_is_allocated_above(bdrv_backing_chain_next(bs->file->bs),
+                                          state->bottom_bs, true, offset,
+                                          n, &n);
             if (ret > 0 || ret < 0) {
                 local_flags |= BDRV_REQ_COPY_ON_READ;
             }
@@ -231,17 +227,13 @@ cor_co_lock_medium(BlockDriverState *bs, bool locked)
 }
-static void GRAPH_UNLOCKED cor_close(BlockDriverState *bs)
+static void cor_close(BlockDriverState *bs)
 {
     BDRVStateCOR *s = bs->opaque;
-    GLOBAL_STATE_CODE();
     if (s->chain_frozen) {
-        bdrv_graph_rdlock_main_loop();
         s->chain_frozen = false;
         bdrv_unfreeze_backing_chain(bs, s->bottom_bs);
-        bdrv_graph_rdunlock_main_loop();
     }
     bdrv_unref(s->bottom_bs);
@@ -271,15 +263,12 @@ static BlockDriver bdrv_copy_on_read = {
 };
-void no_coroutine_fn bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs)
+void bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs)
 {
     BDRVStateCOR *s = cor_filter_bs->opaque;
-    GLOBAL_STATE_CODE();
     /* unfreeze, as otherwise bdrv_replace_node() will fail */
     if (s->chain_frozen) {
-        GRAPH_RDLOCK_GUARD_MAINLOOP();
         s->chain_frozen = false;
         bdrv_unfreeze_backing_chain(cor_filter_bs, s->bottom_bs);
     }


@@ -27,7 +27,6 @@
 #include "block/block_int.h"
-void no_coroutine_fn GRAPH_UNLOCKED
-bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs);
+void bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs);
 #endif /* BLOCK_COPY_ON_READ_H */


@@ -65,9 +65,6 @@ static int block_crypto_read_func(QCryptoBlock *block,
     BlockDriverState *bs = opaque;
     ssize_t ret;
-    GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     ret = bdrv_pread(bs->file, offset, buflen, buf, 0);
     if (ret < 0) {
         error_setg_errno(errp, -ret, "Could not read encryption header");
@@ -86,9 +83,6 @@ static int block_crypto_write_func(QCryptoBlock *block,
     BlockDriverState *bs = opaque;
     ssize_t ret;
-    GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     ret = bdrv_pwrite(bs->file, offset, buflen, buf, 0);
     if (ret < 0) {
         error_setg_errno(errp, -ret, "Could not write encryption header");
@@ -269,15 +263,11 @@ static int block_crypto_open_generic(QCryptoBlockFormat format,
     unsigned int cflags = 0;
     QDict *cryptoopts = NULL;
-    GLOBAL_STATE_CODE();
     ret = bdrv_open_file_child(NULL, options, "file", bs, errp);
     if (ret < 0) {
         return ret;
     }
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     bs->supported_write_flags = BDRV_REQ_FUA &
         bs->file->bs->supported_write_flags;
@@ -838,7 +828,7 @@ block_crypto_amend_options_generic_luks(BlockDriverState *bs,
                                                    errp);
 }
-static int GRAPH_RDLOCK
+static int
 block_crypto_amend_options_luks(BlockDriverState *bs,
                                 QemuOpts *opts,
                                 BlockDriverAmendStatusCB *status_cb,
@@ -851,6 +841,8 @@ block_crypto_amend_options_luks(BlockDriverState *bs,
     QCryptoBlockAmendOptions *amend_options = NULL;
     int ret = -EINVAL;
+    assume_graph_lock(); /* FIXME */
     assert(crypto);
     assert(crypto->block);


@@ -696,10 +696,8 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags,
     const char *protocol_delimiter;
     int ret;
-    bdrv_graph_rdlock_main_loop();
     ret = bdrv_apply_auto_read_only(bs, "curl driver does not support writes",
                                     errp);
-    bdrv_graph_rdunlock_main_loop();
     if (ret < 0) {
         return ret;
     }


@@ -70,8 +70,7 @@ static int dmg_probe(const uint8_t *buf, int buf_size, const char *filename)
     return 0;
 }
-static int GRAPH_RDLOCK
-read_uint64(BlockDriverState *bs, int64_t offset, uint64_t *result)
+static int read_uint64(BlockDriverState *bs, int64_t offset, uint64_t *result)
 {
     uint64_t buffer;
     int ret;
@@ -85,8 +84,7 @@ read_uint64(BlockDriverState *bs, int64_t offset, uint64_t *result)
     return 0;
 }
-static int GRAPH_RDLOCK
-read_uint32(BlockDriverState *bs, int64_t offset, uint32_t *result)
+static int read_uint32(BlockDriverState *bs, int64_t offset, uint32_t *result)
 {
     uint32_t buffer;
     int ret;
@@ -323,9 +321,8 @@ fail:
     return ret;
 }
-static int GRAPH_RDLOCK
-dmg_read_resource_fork(BlockDriverState *bs, DmgHeaderState *ds,
-                       uint64_t info_begin, uint64_t info_length)
+static int dmg_read_resource_fork(BlockDriverState *bs, DmgHeaderState *ds,
+                                  uint64_t info_begin, uint64_t info_length)
 {
     BDRVDMGState *s = bs->opaque;
     int ret;
@@ -391,9 +388,8 @@ fail:
     return ret;
 }
-static int GRAPH_RDLOCK
-dmg_read_plist_xml(BlockDriverState *bs, DmgHeaderState *ds,
-                   uint64_t info_begin, uint64_t info_length)
+static int dmg_read_plist_xml(BlockDriverState *bs, DmgHeaderState *ds,
+                              uint64_t info_begin, uint64_t info_length)
 {
     BDRVDMGState *s = bs->opaque;
     int ret;
@@ -456,11 +452,7 @@ static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
     int64_t offset;
     int ret;
-    GLOBAL_STATE_CODE();
-    bdrv_graph_rdlock_main_loop();
     ret = bdrv_apply_auto_read_only(bs, NULL, errp);
-    bdrv_graph_rdunlock_main_loop();
     if (ret < 0) {
         return ret;
     }
@@ -469,9 +461,6 @@ static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
     if (ret < 0) {
         return ret;
     }
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     /*
      * NB: if uncompress submodules are absent,
      * ie block_module_load return value == 0, the function pointers


@@ -83,8 +83,6 @@ BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp)
     uint64_t perm;
     int ret;
-    GLOBAL_STATE_CODE();
     if (!id_wellformed(export->id)) {
         error_setg(errp, "Invalid block export id");
         return NULL;
@@ -147,9 +145,7 @@ BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp)
      * access since the export could be available before migration handover.
      * ctx was acquired in the caller.
      */
-    bdrv_graph_rdlock_main_loop();
     bdrv_activate(bs, NULL);
-    bdrv_graph_rdunlock_main_loop();
     perm = BLK_PERM_CONSISTENT_READ;
     if (export->writable) {


@@ -160,6 +160,7 @@ typedef struct BDRVRawState {
     bool has_write_zeroes:1;
     bool use_linux_aio:1;
     bool use_linux_io_uring:1;
+    int64_t *offset; /* offset of zone append operation */
     int page_cache_inconsistent; /* errno from fdatasync failure */
     bool has_fallocate;
     bool needs_alignment;
@@ -2444,13 +2445,12 @@ static bool bdrv_qiov_is_aligned(BlockDriverState *bs, QEMUIOVector *qiov)
     return true;
 }
-static int coroutine_fn raw_co_prw(BlockDriverState *bs, int64_t *offset_ptr,
+static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
                                    uint64_t bytes, QEMUIOVector *qiov, int type)
 {
     BDRVRawState *s = bs->opaque;
     RawPosixAIOData acb;
     int ret;
-    uint64_t offset = *offset_ptr;
     if (fd_open(bs) < 0)
         return -EIO;
@@ -2513,8 +2513,8 @@ out:
             uint64_t *wp = &wps->wp[offset / bs->bl.zone_size];
             if (!BDRV_ZT_IS_CONV(*wp)) {
                 if (type & QEMU_AIO_ZONE_APPEND) {
-                    *offset_ptr = *wp;
-                    trace_zbd_zone_append_complete(bs, *offset_ptr
+                    *s->offset = *wp;
+                    trace_zbd_zone_append_complete(bs, *s->offset
                                                    >> BDRV_SECTOR_BITS);
                 }
                 /* Advance the wp if needed */
@@ -2523,10 +2523,7 @@ out:
                 }
             }
         } else {
-            /*
-             * write and append write are not allowed to cross zone boundaries
-             */
-            update_zones_wp(bs, s->fd, offset, 1);
+            update_zones_wp(bs, s->fd, 0, 1);
         }
         qemu_co_mutex_unlock(&wps->colock);
@@ -2539,14 +2536,14 @@ static int coroutine_fn raw_co_preadv(BlockDriverState *bs, int64_t offset,
                                       int64_t bytes, QEMUIOVector *qiov,
                                       BdrvRequestFlags flags)
 {
-    return raw_co_prw(bs, &offset, bytes, qiov, QEMU_AIO_READ);
+    return raw_co_prw(bs, offset, bytes, qiov, QEMU_AIO_READ);
 }
 static int coroutine_fn raw_co_pwritev(BlockDriverState *bs, int64_t offset,
                                        int64_t bytes, QEMUIOVector *qiov,
                                        BdrvRequestFlags flags)
 {
-    return raw_co_prw(bs, &offset, bytes, qiov, QEMU_AIO_WRITE);
+    return raw_co_prw(bs, offset, bytes, qiov, QEMU_AIO_WRITE);
 }
 static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
@@ -3473,7 +3470,7 @@ static int coroutine_fn raw_co_zone_mgmt(BlockDriverState *bs, BlockZoneOp op,
                                len >> BDRV_SECTOR_BITS);
         ret = raw_thread_pool_submit(handle_aiocb_zone_mgmt, &acb);
         if (ret != 0) {
-            update_zones_wp(bs, s->fd, offset, nrz);
+            update_zones_wp(bs, s->fd, offset, i);
             error_report("ioctl %s failed %d", op_name, ret);
             return ret;
         }
@@ -3509,6 +3506,8 @@ static int coroutine_fn raw_co_zone_append(BlockDriverState *bs,
     int64_t zone_size_mask = bs->bl.zone_size - 1;
     int64_t iov_len = 0;
     int64_t len = 0;
+    BDRVRawState *s = bs->opaque;
+    s->offset = offset;
     if (*offset & zone_size_mask) {
         error_report("sector offset %" PRId64 " is not aligned to zone size "
@@ -3529,7 +3528,7 @@ static int coroutine_fn raw_co_zone_append(BlockDriverState *bs,
     }
     trace_zbd_zone_append(bs, *offset >> BDRV_SECTOR_BITS);
-    return raw_co_prw(bs, offset, len, qiov, QEMU_AIO_ZONE_APPEND);
+    return raw_co_prw(bs, *offset, len, qiov, QEMU_AIO_ZONE_APPEND);
 }
 #endif


@@ -36,8 +36,6 @@ static int compress_open(BlockDriverState *bs, QDict *options, int flags,
         return ret;
     }
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     if (!bs->file->bs->drv || !block_driver_can_compress(bs->file->bs->drv)) {
         error_setg(errp,
                    "Compression is not supported for underlying format: %s",
@@ -99,8 +97,7 @@ compress_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
 }
-static void GRAPH_RDLOCK
-compress_refresh_limits(BlockDriverState *bs, Error **errp)
+static void compress_refresh_limits(BlockDriverState *bs, Error **errp)
 {
     BlockDriverInfo bdi;
     int ret;


@@ -863,13 +863,11 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
     if (ret == -EACCES || ret == -EROFS) {
         /* Try to degrade to read-only, but if it doesn't work, still use the
          * normal error message. */
-        bdrv_graph_rdlock_main_loop();
         if (bdrv_apply_auto_read_only(bs, NULL, NULL) == 0) {
             open_flags = (open_flags & ~O_RDWR) | O_RDONLY;
             s->fd = glfs_open(s->glfs, gconf->path, open_flags);
             ret = s->fd ? 0 : -errno;
         }
-        bdrv_graph_rdunlock_main_loop();
     }
     s->supports_seek_data = qemu_gluster_test_seek(s->fd);


@@ -106,13 +106,12 @@ static uint32_t reader_count(void)
     return rd;
 }
-void no_coroutine_fn bdrv_graph_wrlock(BlockDriverState *bs)
+void bdrv_graph_wrlock(BlockDriverState *bs)
 {
     AioContext *ctx = NULL;
     GLOBAL_STATE_CODE();
     assert(!qatomic_read(&has_writer));
-    assert(!qemu_in_coroutine());
     /*
      * Release only non-mainloop AioContext. The mainloop often relies on the


@@ -42,18 +42,13 @@
 /* Maximum bounce buffer for copy-on-read and write zeroes, in bytes */
 #define MAX_BOUNCE_BUFFER (32768 << BDRV_SECTOR_BITS)
-static void coroutine_fn GRAPH_RDLOCK
-bdrv_parent_cb_resize(BlockDriverState *bs);
+static void bdrv_parent_cb_resize(BlockDriverState *bs);
 static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
     int64_t offset, int64_t bytes, BdrvRequestFlags flags);
-static void GRAPH_RDLOCK
-bdrv_parent_drained_begin(BlockDriverState *bs, BdrvChild *ignore)
+static void bdrv_parent_drained_begin(BlockDriverState *bs, BdrvChild *ignore)
 {
     BdrvChild *c, *next;
-    IO_OR_GS_CODE();
-    assert_bdrv_graph_readable();
     QLIST_FOREACH_SAFE(c, &bs->parents, next_parent, next) {
         if (c == ignore) {
@@ -75,12 +70,9 @@ void bdrv_parent_drained_end_single(BdrvChild *c)
     }
 }
-static void GRAPH_RDLOCK
-bdrv_parent_drained_end(BlockDriverState *bs, BdrvChild *ignore)
+static void bdrv_parent_drained_end(BlockDriverState *bs, BdrvChild *ignore)
 {
     BdrvChild *c;
-    IO_OR_GS_CODE();
-    assert_bdrv_graph_readable();
     QLIST_FOREACH(c, &bs->parents, next_parent) {
         if (c == ignore) {
@@ -92,22 +84,17 @@ bdrv_parent_drained_end(BlockDriverState *bs, BdrvChild *ignore)
 bool bdrv_parent_drained_poll_single(BdrvChild *c)
 {
-    IO_OR_GS_CODE();
     if (c->klass->drained_poll) {
         return c->klass->drained_poll(c);
     }
     return false;
 }
-static bool GRAPH_RDLOCK
-bdrv_parent_drained_poll(BlockDriverState *bs, BdrvChild *ignore,
-                         bool ignore_bds_parents)
+static bool bdrv_parent_drained_poll(BlockDriverState *bs, BdrvChild *ignore,
+                                     bool ignore_bds_parents)
 {
     BdrvChild *c, *next;
     bool busy = false;
-    IO_OR_GS_CODE();
-    assert_bdrv_graph_readable();
     QLIST_FOREACH_SAFE(c, &bs->parents, next_parent, next) {
         if (c == ignore || (ignore_bds_parents && c->klass->parent_is_bds)) {
@@ -127,7 +114,6 @@ void bdrv_parent_drained_begin_single(BdrvChild *c)
     c->quiesced_parent = true;
     if (c->klass->drained_begin) {
-        /* called with rdlock taken, but it doesn't really need it. */
         c->klass->drained_begin(c);
     }
 }
@@ -277,9 +263,6 @@ bool bdrv_drain_poll(BlockDriverState *bs, BdrvChild *ignore_parent,
 static bool bdrv_drain_poll_top_level(BlockDriverState *bs,
                                       BdrvChild *ignore_parent)
 {
-    GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     return bdrv_drain_poll(bs, ignore_parent, false);
 }
@@ -379,7 +362,6 @@ static void bdrv_do_drained_begin(BlockDriverState *bs, BdrvChild *parent,
     /* Stop things in parent-to-child order */
     if (qatomic_fetch_inc(&bs->quiesce_counter) == 0) {
-        GRAPH_RDLOCK_GUARD_MAINLOOP();
         bdrv_parent_drained_begin(bs, parent);
         if (bs->drv && bs->drv->bdrv_drain_begin) {
             bs->drv->bdrv_drain_begin(bs);
@@ -426,16 +408,12 @@ static void bdrv_do_drained_end(BlockDriverState *bs, BdrvChild *parent)
         bdrv_co_yield_to_drain(bs, false, parent, false);
         return;
     }
-    /* At this point, we should be always running in the main loop. */
-    GLOBAL_STATE_CODE();
     assert(bs->quiesce_counter > 0);
     GLOBAL_STATE_CODE();
     /* Re-enable things in child-to-parent order */
     old_quiesce_counter = qatomic_fetch_dec(&bs->quiesce_counter);
     if (old_quiesce_counter == 1) {
-        GRAPH_RDLOCK_GUARD_MAINLOOP();
         if (bs->drv && bs->drv->bdrv_drain_end) {
             bs->drv->bdrv_drain_end(bs);
         }
@@ -459,8 +437,6 @@ void bdrv_drain(BlockDriverState *bs)
 static void bdrv_drain_assert_idle(BlockDriverState *bs)
 {
     BdrvChild *child, *next;
-    GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     assert(qatomic_read(&bs->in_flight) == 0);
     QLIST_FOREACH_SAFE(child, &bs->children, next, next) {
@@ -474,9 +450,7 @@ static bool bdrv_drain_all_poll(void)
 {
     BlockDriverState *bs = NULL;
     bool result = false;
     GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     /* bdrv_drain_poll() can't make changes to the graph and we are holding the
      * main AioContext lock, so iterating bdrv_next_all_states() is safe. */
@@ -1249,8 +1223,8 @@ bdrv_co_do_copy_on_readv(BdrvChild *child, int64_t offset, int64_t bytes,
             ret = 1; /* "already allocated", so nothing will be copied */
             pnum = MIN(align_bytes, max_transfer);
         } else {
-            ret = bdrv_co_is_allocated(bs, align_offset,
-                                       MIN(align_bytes, max_transfer), &pnum);
+            ret = bdrv_is_allocated(bs, align_offset,
+                                    MIN(align_bytes, max_transfer), &pnum);
             if (ret < 0) {
                 /*
                  * Safe to treat errors in querying allocation as if
@@ -1397,7 +1371,7 @@ bdrv_aligned_preadv(BdrvChild *child, BdrvTrackedRequest *req,
         /* The flag BDRV_REQ_COPY_ON_READ has reached its addressee */
         flags &= ~BDRV_REQ_COPY_ON_READ;
-        ret = bdrv_co_is_allocated(bs, offset, bytes, &pnum);
+        ret = bdrv_is_allocated(bs, offset, bytes, &pnum);
         if (ret < 0) {
             goto out;
         }
@@ -2029,7 +2003,7 @@ bdrv_co_write_req_prepare(BdrvChild *child, int64_t offset, int64_t bytes,
     }
 }
-static inline void coroutine_fn GRAPH_RDLOCK
+static inline void coroutine_fn
 bdrv_co_write_req_finish(BdrvChild *child, int64_t offset, int64_t bytes,
                          BdrvTrackedRequest *req, int ret)
 {
@@ -2356,7 +2330,6 @@ int bdrv_flush_all(void)
     int result = 0;
     GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     /*
      * bdrv queue is managed by record/replay,
@@ -2410,9 +2383,9 @@ int bdrv_flush_all(void)
  * set to the host mapping and BDS corresponding to the guest offset.
  */
 static int coroutine_fn GRAPH_RDLOCK
-bdrv_co_do_block_status(BlockDriverState *bs, bool want_zero,
-                        int64_t offset, int64_t bytes,
-                        int64_t *pnum, int64_t *map, BlockDriverState **file)
+bdrv_co_block_status(BlockDriverState *bs, bool want_zero,
+                     int64_t offset, int64_t bytes,
+                     int64_t *pnum, int64_t *map, BlockDriverState **file)
 {
     int64_t total_size;
     int64_t n; /* bytes */
@@ -2571,8 +2544,8 @@ bdrv_co_do_block_status(BlockDriverState *bs, bool want_zero,
     if (ret & BDRV_BLOCK_RAW) {
         assert(ret & BDRV_BLOCK_OFFSET_VALID && local_file);
-        ret = bdrv_co_do_block_status(local_file, want_zero, local_map,
-                                      *pnum, pnum, &local_map, &local_file);
+        ret = bdrv_co_block_status(local_file, want_zero, local_map,
+                                   *pnum, pnum, &local_map, &local_file);
         goto out;
     }
@@ -2599,8 +2572,8 @@ bdrv_co_do_block_status(BlockDriverState *bs, bool want_zero,
         int64_t file_pnum;
         int ret2;
-        ret2 = bdrv_co_do_block_status(local_file, want_zero, local_map,
-                                       *pnum, &file_pnum, NULL, NULL);
+        ret2 = bdrv_co_block_status(local_file, want_zero, local_map,
+                                    *pnum, &file_pnum, NULL, NULL);
         if (ret2 >= 0) {
             /* Ignore errors. This is just providing extra information, it
              * is useful but not necessary.
@@ -2667,8 +2640,7 @@ bdrv_co_common_block_status_above(BlockDriverState *bs,
         return 0;
     }
-    ret = bdrv_co_do_block_status(bs, want_zero, offset, bytes, pnum,
-                                  map, file);
+    ret = bdrv_co_block_status(bs, want_zero, offset, bytes, pnum, map, file);
     ++*depth;
     if (ret < 0 || *pnum == 0 || ret & BDRV_BLOCK_ALLOCATED || bs == base) {
         return ret;
@@ -2684,8 +2656,8 @@ bdrv_co_common_block_status_above(BlockDriverState *bs,
     for (p = bdrv_filter_or_cow_bs(bs); include_base || p != base;
          p = bdrv_filter_or_cow_bs(p))
     {
-        ret = bdrv_co_do_block_status(p, want_zero, offset, bytes, pnum,
-                                      map, file);
+        ret = bdrv_co_block_status(p, want_zero, offset, bytes, pnum, map,
+                                   file);
         ++*depth;
         if (ret < 0) {
             return ret;
@@ -2751,13 +2723,21 @@ int coroutine_fn bdrv_co_block_status_above(BlockDriverState *bs,
                                           bytes, pnum, map, file, NULL);
 }
-int coroutine_fn bdrv_co_block_status(BlockDriverState *bs, int64_t offset,
-                                      int64_t bytes, int64_t *pnum,
-                                      int64_t *map, BlockDriverState **file)
+int bdrv_block_status_above(BlockDriverState *bs, BlockDriverState *base,
+                            int64_t offset, int64_t bytes, int64_t *pnum,
+                            int64_t *map, BlockDriverState **file)
 {
     IO_CODE();
-    return bdrv_co_block_status_above(bs, bdrv_filter_or_cow_bs(bs),
-                                      offset, bytes, pnum, map, file);
+    return bdrv_common_block_status_above(bs, base, false, true, offset, bytes,
+                                          pnum, map, file, NULL);
+}
+
+int bdrv_block_status(BlockDriverState *bs, int64_t offset, int64_t bytes,
+                      int64_t *pnum, int64_t *map, BlockDriverState **file)
+{
+    IO_CODE();
+    return bdrv_block_status_above(bs, bdrv_filter_or_cow_bs(bs),
+                                   offset, bytes, pnum, map, file);
 }
 /*
@@ -2804,6 +2784,45 @@ int coroutine_fn bdrv_co_is_allocated(BlockDriverState *bs, int64_t offset,
     return !!(ret & BDRV_BLOCK_ALLOCATED);
 }
+int bdrv_is_allocated(BlockDriverState *bs, int64_t offset, int64_t bytes,
+                      int64_t *pnum)
+{
+    int ret;
+    int64_t dummy;
+    IO_CODE();
+    ret = bdrv_common_block_status_above(bs, bs, true, false, offset,
+                                         bytes, pnum ? pnum : &dummy, NULL,
+                                         NULL, NULL);
+    if (ret < 0) {
+        return ret;
+    }
+    return !!(ret & BDRV_BLOCK_ALLOCATED);
+}
+
+/* See bdrv_is_allocated_above for documentation */
+int coroutine_fn bdrv_co_is_allocated_above(BlockDriverState *top,
+                                            BlockDriverState *base,
+                                            bool include_base, int64_t offset,
+                                            int64_t bytes, int64_t *pnum)
+{
+    int depth;
+    int ret;
+    IO_CODE();
+    ret = bdrv_co_common_block_status_above(top, base, include_base, false,
+                                            offset, bytes, pnum, NULL, NULL,
+                                            &depth);
+    if (ret < 0) {
+        return ret;
+    }
+    if (ret & BDRV_BLOCK_ALLOCATED) {
+        return depth;
+    }
+    return 0;
+}
+
 /*
  * Given an image chain: ... -> [BASE] -> [INTER1] -> [INTER2] -> [TOP]
  *
@@ -2821,18 +2840,18 @@ int coroutine_fn bdrv_co_is_allocated(BlockDriverState *bs, int64_t offset,
  * words, the result is not necessarily the maximum possible range);
  * but 'pnum' will only be 0 when end of file is reached.
  */
-int coroutine_fn bdrv_co_is_allocated_above(BlockDriverState *bs,
-                                            BlockDriverState *base,
-                                            bool include_base, int64_t offset,
-                                            int64_t bytes, int64_t *pnum)
+int bdrv_is_allocated_above(BlockDriverState *top,
+                            BlockDriverState *base,
+                            bool include_base, int64_t offset,
+                            int64_t bytes, int64_t *pnum)
 {
     int depth;
     int ret;
     IO_CODE();
-    ret = bdrv_co_common_block_status_above(bs, base, include_base, false,
-                                            offset, bytes, pnum, NULL, NULL,
-                                            &depth);
+    ret = bdrv_common_block_status_above(top, base, include_base, false,
+                                         offset, bytes, pnum, NULL, NULL,
+                                         &depth);
     if (ret < 0) {
         return ret;
     }
@@ -3532,13 +3551,9 @@ int coroutine_fn bdrv_co_copy_range(BdrvChild *src, int64_t src_offset,
                                    bytes, read_flags, write_flags);
 }
-static void coroutine_fn GRAPH_RDLOCK
-bdrv_parent_cb_resize(BlockDriverState *bs)
+static void bdrv_parent_cb_resize(BlockDriverState *bs)
 {
     BdrvChild *c;
-    assert_bdrv_graph_readable();
     QLIST_FOREACH(c, &bs->parents, next_parent) {
         if (c->klass->resize) {
             c->klass->resize(c);
@@ -3685,8 +3700,6 @@ out:
 void bdrv_cancel_in_flight(BlockDriverState *bs)
 {
     GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     if (!bs || !bs->drv) {
         return;
     }


@@ -15,7 +15,6 @@
 #include "block/block.h"
 #include "block/raw-aio.h"
 #include "qemu/coroutine.h"
-#include "qemu/defer-call.h"
 #include "qapi/error.h"
 #include "sysemu/block-backend.h"
 #include "trace.h"
@@ -125,9 +124,6 @@ static void luring_process_completions(LuringState *s)
 {
     struct io_uring_cqe *cqes;
     int total_bytes;
-    defer_call_begin();
     /*
     * Request completion callbacks can run the nested event loop.
     * Schedule ourselves so the nested event loop will "see" remaining
@@ -220,10 +216,7 @@ end:
             aio_co_wake(luringcb->co);
         }
     }
     qemu_bh_cancel(s->completion_bh);
-    defer_call_end();
 }
 static int ioq_submit(LuringState *s)
@@ -313,7 +306,7 @@ static void ioq_init(LuringQueue *io_q)
     io_q->blocked = false;
 }
-static void luring_deferred_fn(void *opaque)
+static void luring_unplug_fn(void *opaque)
 {
     LuringState *s = opaque;
     trace_luring_unplug_fn(s, s->io_q.blocked, s->io_q.in_queue,
@@ -374,7 +367,7 @@ static int luring_do_submit(int fd, LuringAIOCB *luringcb, LuringState *s,
             return ret;
         }
-        defer_call(luring_deferred_fn, s);
+        blk_io_plug_call(luring_unplug_fn, s);
     }
     return 0;
 }


@@ -1925,9 +1925,7 @@ static int iscsi_open(BlockDriverState *bs, QDict *options, int flags,
     /* Check the write protect flag of the LUN if we want to write */
     if (iscsilun->type == TYPE_DISK && (flags & BDRV_O_RDWR) &&
         iscsilun->write_protected) {
-        bdrv_graph_rdlock_main_loop();
         ret = bdrv_apply_auto_read_only(bs, "LUN is write protected", errp);
-        bdrv_graph_rdunlock_main_loop();
         if (ret < 0) {
             goto out;
         }


@@ -14,7 +14,6 @@
 #include "block/raw-aio.h"
 #include "qemu/event_notifier.h"
 #include "qemu/coroutine.h"
-#include "qemu/defer-call.h"
 #include "qapi/error.h"
 #include "sysemu/block-backend.h"
@@ -205,8 +204,6 @@ static void qemu_laio_process_completions(LinuxAioState *s)
 {
     struct io_event *events;
-    defer_call_begin();
     /* Reschedule so nested event loops see currently pending completions */
     qemu_bh_schedule(s->completion_bh);
@@ -233,8 +230,6 @@ static void qemu_laio_process_completions(LinuxAioState *s)
      * own `for` loop. If we are the last all counters dropped to zero. */
     s->event_max = 0;
     s->event_idx = 0;
-    defer_call_end();
 }
 static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
@@ -358,7 +353,7 @@ static uint64_t laio_max_batch(LinuxAioState *s, uint64_t dev_max_batch)
     return max_batch;
 }
-static void laio_deferred_fn(void *opaque)
+static void laio_unplug_fn(void *opaque)
 {
     LinuxAioState *s = opaque;
@@ -398,7 +393,7 @@ static int laio_do_submit(int fd, struct qemu_laiocb *laiocb, off_t offset,
     if (s->io_q.in_queue >= laio_max_batch(s, dev_max_batch)) {
         ioq_submit(s);
     } else {
-        defer_call(laio_deferred_fn, s);
+        blk_io_plug_call(laio_unplug_fn, s);
     }
 }


@@ -21,6 +21,7 @@ block_ss.add(files(
   'mirror.c',
   'nbd.c',
   'null.c',
+  'plug.c',
   'preallocate.c',
   'progress_meter.c',
   'qapi.c',


@@ -55,18 +55,10 @@ typedef struct MirrorBlockJob {
     BlockMirrorBackingMode backing_mode;
     /* Whether the target image requires explicit zero-initialization */
     bool zero_target;
-    /*
-     * To be accesssed with atomics. Written only under the BQL (required by the
-     * current implementation of mirror_change()).
-     */
     MirrorCopyMode copy_mode;
     BlockdevOnError on_source_error, on_target_error;
-    /*
-     * To be accessed with atomics.
-     *
-     * Set when the target is synced (dirty bitmap is clean, nothing in flight)
-     * and the job is running in active mode.
-     */
+    /* Set when the target is synced (dirty bitmap is clean, nothing
+     * in flight) and the job is running in active mode */
     bool actively_synced;
     bool should_complete;
     int64_t granularity;
@@ -130,7 +122,7 @@ typedef enum MirrorMethod {
 static BlockErrorAction mirror_error_action(MirrorBlockJob *s, bool read,
                                             int error)
 {
-    qatomic_set(&s->actively_synced, false);
+    s->actively_synced = false;
     if (read) {
         return block_job_error_action(&s->common, s->on_source_error,
                                       true, error);
@@ -479,7 +471,7 @@ static unsigned mirror_perform(MirrorBlockJob *s, int64_t offset,
     return bytes_handled;
 }
-static void coroutine_fn GRAPH_RDLOCK mirror_iteration(MirrorBlockJob *s)
+static void coroutine_fn mirror_iteration(MirrorBlockJob *s)
 {
     BlockDriverState *source = s->mirror_top_bs->backing->bs;
     MirrorOp *pseudo_op;
@@ -567,9 +559,9 @@ static void coroutine_fn GRAPH_RDLOCK mirror_iteration(MirrorBlockJob *s)
         assert(!(offset % s->granularity));
         WITH_GRAPH_RDLOCK_GUARD() {
-            ret = bdrv_co_block_status_above(source, NULL, offset,
-                                             nb_chunks * s->granularity,
-                                             &io_bytes, NULL, NULL);
+            ret = bdrv_block_status_above(source, NULL, offset,
+                                          nb_chunks * s->granularity,
+                                          &io_bytes, NULL, NULL);
         }
         if (ret < 0) {
             io_bytes = MIN(nb_chunks * s->granularity, max_io_bytes);
@@ -678,7 +670,6 @@ static int mirror_exit_common(Job *job)
     s->prepared = true;
     aio_context_acquire(qemu_get_aio_context());
-    bdrv_graph_rdlock_main_loop();
     mirror_top_bs = s->mirror_top_bs;
     bs_opaque = mirror_top_bs->opaque;
@@ -697,8 +688,6 @@ static int mirror_exit_common(Job *job)
     bdrv_ref(mirror_top_bs);
     bdrv_ref(target_bs);
-    bdrv_graph_rdunlock_main_loop();
     /*
      * Remove target parent that still uses BLK_PERM_WRITE/RESIZE before
      * inserting target_bs at s->to_replace, where we might not be able to get
@@ -712,12 +701,12 @@ static int mirror_exit_common(Job *job)
      * these permissions any more means that we can't allow any new requests on
      * mirror_top_bs from now on, so keep it drained. */
     bdrv_drained_begin(mirror_top_bs);
+    bdrv_drained_begin(target_bs);
     bs_opaque->stop = true;
     bdrv_graph_rdlock_main_loop();
     bdrv_child_refresh_perms(mirror_top_bs, mirror_top_bs->backing,
                              &error_abort);
+    bdrv_graph_rdunlock_main_loop();
     if (!abort && s->backing_mode == MIRROR_SOURCE_BACKING_CHAIN) {
         BlockDriverState *backing = s->is_none_mode ? src : s->base;
@@ -740,7 +729,6 @@ static int mirror_exit_common(Job *job)
             local_err = NULL;
         }
     }
-    bdrv_graph_rdunlock_main_loop();
     if (s->to_replace) {
         replace_aio_context = bdrv_get_aio_context(s->to_replace);
@@ -758,13 +746,15 @@ static int mirror_exit_common(Job *job)
         /* The mirror job has no requests in flight any more, but we need to
          * drain potential other users of the BDS before changing the graph. */
         assert(s->in_drain);
-        bdrv_drained_begin(to_replace);
+        bdrv_drained_begin(target_bs);
         /*
          * Cannot use check_to_replace_node() here, because that would
         * check for an op blocker on @to_replace, and we have our own
         * there.
+         *
+         * TODO Pull out the writer lock from bdrv_replace_node() to here
         */
-        bdrv_graph_wrlock(target_bs);
+        bdrv_graph_rdlock_main_loop();
         if (bdrv_recurse_can_replace(src, to_replace)) {
             bdrv_replace_node(to_replace, target_bs, &local_err);
         } else {
@@ -773,8 +763,8 @@ static int mirror_exit_common(Job *job)
                        "would not lead to an abrupt change of visible data",
                        to_replace->node_name, target_bs->node_name);
         }
-        bdrv_graph_wrunlock();
-        bdrv_drained_end(to_replace);
+        bdrv_graph_rdunlock_main_loop();
+        bdrv_drained_end(target_bs);
         if (local_err) {
             error_report_err(local_err);
             ret = -EPERM;
@@ -789,6 +779,7 @@ static int mirror_exit_common(Job *job)
         aio_context_release(replace_aio_context);
     }
     g_free(s->replaces);
+    bdrv_unref(target_bs);
     /*
      * Remove the mirror filter driver from the graph. Before this, get rid of
@@ -796,12 +787,7 @@ static int mirror_exit_common(Job *job)
      * valid.
      */
     block_job_remove_all_bdrv(bjob);
-    bdrv_graph_wrlock(mirror_top_bs);
     bdrv_replace_node(mirror_top_bs, mirror_top_bs->backing->bs, &error_abort);
-    bdrv_graph_wrunlock();
-    bdrv_drained_end(target_bs);
-    bdrv_unref(target_bs);
     bs_opaque->job = NULL;
@@ -839,18 +825,14 @@ static void coroutine_fn mirror_throttle(MirrorBlockJob *s)
     }
 }
-static int coroutine_fn GRAPH_UNLOCKED mirror_dirty_init(MirrorBlockJob *s)
+static int coroutine_fn mirror_dirty_init(MirrorBlockJob *s)
 {
     int64_t offset;
-    BlockDriverState *bs;
+    BlockDriverState *bs = s->mirror_top_bs->backing->bs;
     BlockDriverState *target_bs = blk_bs(s->target);
     int ret;
     int64_t count;
-    bdrv_graph_co_rdlock();
-    bs = s->mirror_top_bs->backing->bs;
-    bdrv_graph_co_rdunlock();
     if (s->zero_target) {
         if (!bdrv_can_write_zeroes_with_unmap(target_bs)) {
             bdrv_set_dirty_bitmap(s->dirty_bitmap, 0, s->bdev_length);
@@ -897,8 +879,8 @@ static int coroutine_fn GRAPH_UNLOCKED mirror_dirty_init(MirrorBlockJob *s)
         }
         WITH_GRAPH_RDLOCK_GUARD() {
-            ret = bdrv_co_is_allocated_above(bs, s->base_overlay, true, offset,
-                                             bytes, &count);
+            ret = bdrv_is_allocated_above(bs, s->base_overlay, true, offset,
+                                          bytes, &count);
         }
         if (ret < 0) {
             return ret;
@@ -930,7 +912,7 @@ static int coroutine_fn mirror_flush(MirrorBlockJob *s)
 static int coroutine_fn mirror_run(Job *job, Error **errp)
 {
     MirrorBlockJob *s = container_of(job, MirrorBlockJob, common.job);
-    BlockDriverState *bs;
+    BlockDriverState *bs = s->mirror_top_bs->backing->bs;
     MirrorBDSOpaque *mirror_top_opaque = s->mirror_top_bs->opaque;
     BlockDriverState *target_bs = blk_bs(s->target);
     bool need_drain = true;
@@ -942,10 +924,6 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
                                  checking for a NULL string */
     int ret = 0;
-    bdrv_graph_co_rdlock();
-    bs = bdrv_filter_bs(s->mirror_top_bs);
-    bdrv_graph_co_rdunlock();
     if (job_is_cancelled(&s->common.job)) {
         goto immediate_exit;
     }
@@ -984,7 +962,7 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
     if (s->bdev_length == 0) {
         /* Transition to the READY state and wait for complete. */
         job_transition_to_ready(&s->common.job);
-        qatomic_set(&s->actively_synced, true);
+        s->actively_synced = true;
         while (!job_cancel_requested(&s->common.job) && !s->should_complete) {
             job_yield(&s->common.job);
         }
@@ -1006,13 +984,13 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
     } else {
         s->target_cluster_size = BDRV_SECTOR_SIZE;
     }
+    bdrv_graph_co_rdunlock();
     if (backing_filename[0] && !bdrv_backing_chain_next(target_bs) &&
         s->granularity < s->target_cluster_size) {
         s->buf_size = MAX(s->buf_size, s->target_cluster_size);
         s->cow_bitmap = bitmap_new(length);
     }
     s->max_iov = MIN(bs->bl.max_iov, target_bs->bl.max_iov);
-    bdrv_graph_co_rdunlock();
     s->buf = qemu_try_blockalign(bs, s->buf_size);
     if (s->buf == NULL) {
@@ -1078,9 +1056,7 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
                 mirror_wait_for_free_in_flight_slot(s);
                 continue;
             } else if (cnt != 0) {
-                bdrv_graph_co_rdlock();
                 mirror_iteration(s);
-                bdrv_graph_co_rdunlock();
             }
         }
@@ -1098,9 +1074,9 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
              * the target in a consistent state.
             */
             job_transition_to_ready(&s->common.job);
-        }
-        if (qatomic_read(&s->copy_mode) != MIRROR_COPY_MODE_BACKGROUND) {
-            qatomic_set(&s->actively_synced, true);
+            if (s->copy_mode != MIRROR_COPY_MODE_BACKGROUND) {
+                s->actively_synced = true;
+            }
         }
         should_complete = s->should_complete ||
@@ -1270,48 +1246,6 @@ static bool commit_active_cancel(Job *job, bool force)
     return force || !job_is_ready(job);
 }
-static void mirror_change(BlockJob *job, BlockJobChangeOptions *opts,
-                          Error **errp)
-{
-    MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
-    BlockJobChangeOptionsMirror *change_opts = &opts->u.mirror;
-    MirrorCopyMode current;
-    /*
-     * The implementation relies on the fact that copy_mode is only written
-     * under the BQL. Otherwise, further synchronization would be required.
-     */
-    GLOBAL_STATE_CODE();
-    if (qatomic_read(&s->copy_mode) == change_opts->copy_mode) {
-        return;
-    }
-    if (change_opts->copy_mode != MIRROR_COPY_MODE_WRITE_BLOCKING) {
-        error_setg(errp, "Change to copy mode '%s' is not implemented",
-                   MirrorCopyMode_str(change_opts->copy_mode));
-        return;
-    }
-    current = qatomic_cmpxchg(&s->copy_mode, MIRROR_COPY_MODE_BACKGROUND,
-                              change_opts->copy_mode);
-    if (current != MIRROR_COPY_MODE_BACKGROUND) {
-        error_setg(errp, "Expected current copy mode '%s', got '%s'",
-                   MirrorCopyMode_str(MIRROR_COPY_MODE_BACKGROUND),
-                   MirrorCopyMode_str(current));
-    }
-}
-static void mirror_query(BlockJob *job, BlockJobInfo *info)
-{
-    MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
-    info->u.mirror = (BlockJobInfoMirror) {
-        .actively_synced = qatomic_read(&s->actively_synced),
-    };
-}
 static const BlockJobDriver mirror_job_driver = {
     .job_driver = {
         .instance_size = sizeof(MirrorBlockJob),
@@ -1326,8 +1260,6 @@ static const BlockJobDriver mirror_job_driver = {
         .cancel = mirror_cancel,
     },
     .drained_poll = mirror_drained_poll,
-    .change = mirror_change,
-    .query = mirror_query,
 };
 static const BlockJobDriver commit_active_job_driver = {
@@ -1446,7 +1378,7 @@ do_sync_target_write(MirrorBlockJob *job, MirrorMethod method,
             bitmap_end = QEMU_ALIGN_UP(offset + bytes, job->granularity);
             bdrv_set_dirty_bitmap(job->dirty_bitmap, bitmap_offset,
                                   bitmap_end - bitmap_offset);
-            qatomic_set(&job->actively_synced, false);
+            job->actively_synced = false;
             action = mirror_error_action(job, false, -ret);
             if (action == BLOCK_ERROR_ACTION_REPORT) {
@@ -1505,8 +1437,7 @@ static void coroutine_fn GRAPH_RDLOCK active_write_settle(MirrorOp *op)
     uint64_t end_chunk = DIV_ROUND_UP(op->offset + op->bytes,
                                       op->s->granularity);
-    if (!--op->s->in_active_write_counter &&
-        qatomic_read(&op->s->actively_synced)) {
+    if (!--op->s->in_active_write_counter && op->s->actively_synced) {
         BdrvChild *source = op->s->mirror_top_bs->backing;
         if (QLIST_FIRST(&source->bs->parents) == source &&
@@ -1532,21 +1463,21 @@ bdrv_mirror_top_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
     return bdrv_co_preadv(bs->backing, offset, bytes, qiov, flags);
 }
-static bool should_copy_to_target(MirrorBDSOpaque *s)
-{
-    return s->job && s->job->ret >= 0 &&
-        !job_is_cancelled(&s->job->common.job) &&
-        qatomic_read(&s->job->copy_mode) == MIRROR_COPY_MODE_WRITE_BLOCKING;
-}
 static int coroutine_fn GRAPH_RDLOCK
 bdrv_mirror_top_do_write(BlockDriverState *bs, MirrorMethod method,
-                         bool copy_to_target, uint64_t offset, uint64_t bytes,
-                         QEMUIOVector *qiov, int flags)
+                         uint64_t offset, uint64_t bytes, QEMUIOVector *qiov,
+                         int flags)
 {
     MirrorOp *op = NULL;
     MirrorBDSOpaque *s = bs->opaque;
     int ret = 0;
+    bool copy_to_target = false;
+    if (s->job) {
+        copy_to_target = s->job->ret >= 0 &&
+                         !job_is_cancelled(&s->job->common.job) &&
+                         s->job->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING;
+    }
     if (copy_to_target) {
         op = active_write_prepare(s->job, offset, bytes);
@@ -1569,11 +1500,6 @@ bdrv_mirror_top_do_write(BlockDriverState *bs, MirrorMethod method,
         abort();
     }
-    if (!copy_to_target && s->job && s->job->dirty_bitmap) {
-        qatomic_set(&s->job->actively_synced, false);
-        bdrv_set_dirty_bitmap(s->job->dirty_bitmap, offset, bytes);
-    }
     if (ret < 0) {
         goto out;
     }
@@ -1593,10 +1519,17 @@ static int coroutine_fn GRAPH_RDLOCK
 bdrv_mirror_top_pwritev(BlockDriverState *bs, int64_t offset, int64_t bytes,
                         QEMUIOVector *qiov, BdrvRequestFlags flags)
 {
+    MirrorBDSOpaque *s = bs->opaque;
     QEMUIOVector bounce_qiov;
     void *bounce_buf;
     int ret = 0;
-    bool copy_to_target = should_copy_to_target(bs->opaque);
+    bool copy_to_target = false;
+    if (s->job) {
+        copy_to_target = s->job->ret >= 0 &&
+                         !job_is_cancelled(&s->job->common.job) &&
+                         s->job->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING;
+    }
     if (copy_to_target) {
         /* The guest might concurrently modify the data to write; but
@@ -1613,8 +1546,8 @@ bdrv_mirror_top_pwritev(BlockDriverState *bs, int64_t offset, int64_t bytes,
         flags &= ~BDRV_REQ_REGISTERED_BUF;
     }
-    ret = bdrv_mirror_top_do_write(bs, MIRROR_METHOD_COPY, copy_to_target,
-                                   offset, bytes, qiov, flags);
+    ret = bdrv_mirror_top_do_write(bs, MIRROR_METHOD_COPY, offset, bytes, qiov,
+                                   flags);
     if (copy_to_target) {
         qemu_iovec_destroy(&bounce_qiov);
@@ -1637,20 +1570,18 @@ static int coroutine_fn GRAPH_RDLOCK
 bdrv_mirror_top_pwrite_zeroes(BlockDriverState *bs, int64_t offset,
                               int64_t bytes, BdrvRequestFlags flags)
 {
-    bool copy_to_target = should_copy_to_target(bs->opaque);
-    return bdrv_mirror_top_do_write(bs, MIRROR_METHOD_ZERO, copy_to_target,
-                                    offset, bytes, NULL, flags);
+    return bdrv_mirror_top_do_write(bs, MIRROR_METHOD_ZERO, offset, bytes, NULL,
+                                    flags);
 }
 static int coroutine_fn GRAPH_RDLOCK
 bdrv_mirror_top_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
 {
-    bool copy_to_target = should_copy_to_target(bs->opaque);
-    return bdrv_mirror_top_do_write(bs, MIRROR_METHOD_DISCARD, copy_to_target,
-                                    offset, bytes, NULL, 0);
+    return bdrv_mirror_top_do_write(bs, MIRROR_METHOD_DISCARD, offset, bytes,
+                                    NULL, 0);
 }
-static void GRAPH_RDLOCK bdrv_mirror_top_refresh_filename(BlockDriverState *bs)
+static void bdrv_mirror_top_refresh_filename(BlockDriverState *bs)
 {
     if (bs->backing == NULL) {
         /* we can be here after failed bdrv_attach_child in
@@ -1760,15 +1691,12 @@ static BlockJob *mirror_start_job(
         buf_size = DEFAULT_MIRROR_BUF_SIZE;
     }
-    bdrv_graph_rdlock_main_loop();
     if (bdrv_skip_filters(bs) == bdrv_skip_filters(target)) {
         error_setg(errp, "Can't mirror node into itself");
-        bdrv_graph_rdunlock_main_loop();
         return NULL;
     }
     target_is_backing = bdrv_chain_contains(bs, target);
-    bdrv_graph_rdunlock_main_loop();
     /* In the case of active commit, add dummy driver to provide consistent
      * reads on the top, while disabling it in the intermediate nodes, and make
@@ -1851,19 +1779,14 @@ static BlockJob *mirror_start_job(
         }
         target_shared_perms |= BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE;
-    } else {
-        bdrv_graph_rdlock_main_loop();
-        if (bdrv_chain_contains(bs, bdrv_skip_filters(target))) {
-            /*
-             * We may want to allow this in the future, but it would
-             * require taking some extra care.
-             */
-            error_setg(errp, "Cannot mirror to a filter on top of a node in "
-                       "the source's backing chain");
-            bdrv_graph_rdunlock_main_loop();
-            goto fail;
-        }
-        bdrv_graph_rdunlock_main_loop();
+    } else if (bdrv_chain_contains(bs, bdrv_skip_filters(target))) {
+        /*
+         * We may want to allow this in the future, but it would
+         * require taking some extra care.
+         */
+        error_setg(errp, "Cannot mirror to a filter on top of a node in the "
+                   "source's backing chain");
+        goto fail;
     }
     s->target = blk_new(s->common.job.aio_context,
@@ -1884,14 +1807,13 @@ static BlockJob *mirror_start_job(
     blk_set_allow_aio_context_change(s->target, true);
     blk_set_disable_request_queuing(s->target, true);
-    bdrv_graph_rdlock_main_loop();
     s->replaces = g_strdup(replaces);
     s->on_source_error = on_source_error;
     s->on_target_error = on_target_error;
     s->is_none_mode = is_none_mode;
     s->backing_mode = backing_mode;
     s->zero_target = zero_target;
-    qatomic_set(&s->copy_mode, copy_mode);
+    s->copy_mode = copy_mode;
     s->base = base;
     s->base_overlay = bdrv_find_overlay(bs, base);
     s->granularity = granularity;
@@ -1900,27 +1822,20 @@ static BlockJob *mirror_start_job(
     if (auto_complete) {
         s->should_complete = true;
     }
-    bdrv_graph_rdunlock_main_loop();
-    s->dirty_bitmap = bdrv_create_dirty_bitmap(s->mirror_top_bs, granularity,
-                                               NULL, errp);
+    s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, NULL, errp);
     if (!s->dirty_bitmap) {
         goto fail;
     }
-    /*
-     * The dirty bitmap is set by bdrv_mirror_top_do_write() when not in active
-     * mode.
-     */
-    bdrv_disable_dirty_bitmap(s->dirty_bitmap);
-    bdrv_graph_wrlock(bs);
+    if (s->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING) {
+        bdrv_disable_dirty_bitmap(s->dirty_bitmap);
+    }
     ret = block_job_add_bdrv(&s->common, "source", bs, 0,
                              BLK_PERM_WRITE_UNCHANGED | BLK_PERM_WRITE |
                              BLK_PERM_CONSISTENT_READ,
                              errp);
     if (ret < 0) {
-        bdrv_graph_wrunlock();
         goto fail;
     }
@@ -1965,17 +1880,14 @@ static BlockJob *mirror_start_job(
         ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
                                  iter_shared_perms, errp);
         if (ret < 0) {
-            bdrv_graph_wrunlock();
             goto fail;
         }
} }
if (bdrv_freeze_backing_chain(mirror_top_bs, target, errp) < 0) { if (bdrv_freeze_backing_chain(mirror_top_bs, target, errp) < 0) {
bdrv_graph_wrunlock();
goto fail; goto fail;
} }
} }
bdrv_graph_wrunlock();
QTAILQ_INIT(&s->ops_in_flight); QTAILQ_INIT(&s->ops_in_flight);
@@ -2000,14 +1912,11 @@ fail:
} }
bs_opaque->stop = true; bs_opaque->stop = true;
bdrv_drained_begin(bs); bdrv_graph_rdlock_main_loop();
bdrv_graph_wrlock(bs);
assert(mirror_top_bs->backing->bs == bs);
bdrv_child_refresh_perms(mirror_top_bs, mirror_top_bs->backing, bdrv_child_refresh_perms(mirror_top_bs, mirror_top_bs->backing,
&error_abort); &error_abort);
bdrv_replace_node(mirror_top_bs, bs, &error_abort); bdrv_graph_rdunlock_main_loop();
bdrv_graph_wrunlock(); bdrv_replace_node(mirror_top_bs, mirror_top_bs->backing->bs, &error_abort);
bdrv_drained_end(bs);
bdrv_unref(mirror_top_bs); bdrv_unref(mirror_top_bs);
@@ -2036,12 +1945,8 @@ void mirror_start(const char *job_id, BlockDriverState *bs,
MirrorSyncMode_str(mode)); MirrorSyncMode_str(mode));
return; return;
} }
bdrv_graph_rdlock_main_loop();
is_none_mode = mode == MIRROR_SYNC_MODE_NONE; is_none_mode = mode == MIRROR_SYNC_MODE_NONE;
base = mode == MIRROR_SYNC_MODE_TOP ? bdrv_backing_chain_next(bs) : NULL; base = mode == MIRROR_SYNC_MODE_TOP ? bdrv_backing_chain_next(bs) : NULL;
bdrv_graph_rdunlock_main_loop();
mirror_start_job(job_id, bs, creation_flags, target, replaces, mirror_start_job(job_id, bs, creation_flags, target, replaces,
speed, granularity, buf_size, backing_mode, zero_target, speed, granularity, buf_size, backing_mode, zero_target,
on_source_error, on_target_error, unmap, NULL, NULL, on_source_error, on_target_error, unmap, NULL, NULL,
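
For orientation: the removed ('-') lines above call a should_copy_to_target() helper, while the added ('+') lines open-code the equivalent check inside bdrv_mirror_top_pwritev(). A minimal compilable sketch of what such a helper presumably reduces to, using stand-in types (the structs below are deliberately stripped down to the fields the check reads; they are not QEMU's real definitions):

/* Sketch only: hypothetical stand-in types, not QEMU's structs. */
#include <stdbool.h>

typedef enum {
    MIRROR_COPY_MODE_BACKGROUND,
    MIRROR_COPY_MODE_WRITE_BLOCKING,
} MirrorCopyMode;

typedef struct MirrorBlockJob {    /* reduced to the fields used by the check */
    int ret;
    bool cancelled;                /* stands in for job_is_cancelled() */
    MirrorCopyMode copy_mode;
} MirrorBlockJob;

typedef struct MirrorBDSOpaque {
    MirrorBlockJob *job;
} MirrorBDSOpaque;

/* Equivalent of the open-coded test in the '+' lines: copy guest writes to
 * the target synchronously only while the job is healthy, not cancelled,
 * and running in write-blocking (active) mode. */
static bool should_copy_to_target(MirrorBDSOpaque *s)
{
    return s->job &&
           s->job->ret >= 0 &&
           !s->job->cancelled &&
           s->job->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING;
}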


@@ -258,38 +258,37 @@ void qmp_block_dirty_bitmap_disable(const char *node, const char *name,
     bdrv_disable_dirty_bitmap(bitmap);
 }
-BdrvDirtyBitmap *block_dirty_bitmap_merge(const char *dst_node,
-                                          const char *dst_bitmap,
+BdrvDirtyBitmap *block_dirty_bitmap_merge(const char *node, const char *target,
                                           BlockDirtyBitmapOrStrList *bms,
                                           HBitmap **backup, Error **errp)
 {
     BlockDriverState *bs;
     BdrvDirtyBitmap *dst, *src;
     BlockDirtyBitmapOrStrList *lst;
-    const char *src_node, *src_bitmap;
     HBitmap *local_backup = NULL;
     GLOBAL_STATE_CODE();
-    dst = block_dirty_bitmap_lookup(dst_node, dst_bitmap, &bs, errp);
+    dst = block_dirty_bitmap_lookup(node, target, &bs, errp);
     if (!dst) {
         return NULL;
     }
     for (lst = bms; lst; lst = lst->next) {
         switch (lst->value->type) {
+            const char *name, *node;
         case QTYPE_QSTRING:
-            src_bitmap = lst->value->u.local;
-            src = bdrv_find_dirty_bitmap(bs, src_bitmap);
+            name = lst->value->u.local;
+            src = bdrv_find_dirty_bitmap(bs, name);
             if (!src) {
-                error_setg(errp, "Dirty bitmap '%s' not found", src_bitmap);
+                error_setg(errp, "Dirty bitmap '%s' not found", name);
                 goto fail;
             }
             break;
         case QTYPE_QDICT:
-            src_node = lst->value->u.external.node;
-            src_bitmap = lst->value->u.external.name;
-            src = block_dirty_bitmap_lookup(src_node, src_bitmap, NULL, errp);
+            node = lst->value->u.external.node;
+            name = lst->value->u.external.name;
+            src = block_dirty_bitmap_lookup(node, name, NULL, errp);
             if (!src) {
                 goto fail;
             }


@@ -144,9 +144,6 @@ void hmp_drive_del(Monitor *mon, const QDict *qdict)
     AioContext *aio_context;
     Error *local_err = NULL;
-    GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     bs = bdrv_find_node(id);
     if (bs) {
         qmp_blockdev_del(id, &local_err);
@@ -206,9 +203,6 @@ void hmp_commit(Monitor *mon, const QDict *qdict)
     BlockBackend *blk;
     int ret;
-    GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     if (!strcmp(device, "all")) {
         ret = blk_commit_all();
     } else {
@@ -849,7 +843,7 @@ void hmp_info_block_jobs(Monitor *mon, const QDict *qdict)
     }
     while (list) {
-        if (list->value->type == JOB_TYPE_STREAM) {
+        if (strcmp(list->value->type, "stream") == 0) {
             monitor_printf(mon, "Streaming device %s: Completed %" PRId64
                            " of %" PRId64 " bytes, speed limit %" PRId64
                            " bytes/s\n",
@@ -861,7 +855,7 @@ void hmp_info_block_jobs(Monitor *mon, const QDict *qdict)
             monitor_printf(mon, "Type %s, device %s: Completed %" PRId64
                            " of %" PRId64 " bytes, speed limit %" PRId64
                            " bytes/s\n",
-                           JobType_str(list->value->type),
+                           list->value->type,
                            list->value->device,
                            list->value->offset,
                            list->value->len,
@@ -902,8 +896,6 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
     SnapshotEntry *snapshot_entry;
     Error *err = NULL;
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     bs = bdrv_all_find_vmstate_bs(NULL, false, NULL, &err);
     if (!bs) {
         error_report_err(err);


@@ -275,8 +275,7 @@ static bool nbd_client_will_reconnect(BDRVNBDState *s)
  * Return failure if the server's advertised options are incompatible with the
  * client's needs.
  */
-static int coroutine_fn GRAPH_RDLOCK
-nbd_handle_updated_info(BlockDriverState *bs, Error **errp)
+static int nbd_handle_updated_info(BlockDriverState *bs, Error **errp)
 {
     BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
     int ret;
@@ -417,8 +416,7 @@ static void coroutine_fn GRAPH_RDLOCK nbd_reconnect_attempt(BDRVNBDState *s)
         reconnect_delay_timer_del(s);
     }
-static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t cookie,
-                                            Error **errp)
+static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t cookie)
 {
     int ret;
     uint64_t ind = COOKIE_TO_INDEX(cookie), ind2;
@@ -459,25 +457,20 @@ static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t cookie,
     /* We are under mutex and cookie is 0. We have to do the dirty work. */
     assert(s->reply.cookie == 0);
-    ret = nbd_receive_reply(s->bs, s->ioc, &s->reply, s->info.mode, errp);
-    if (ret == 0) {
-        ret = -EIO;
-        error_setg(errp, "server dropped connection");
-    }
-    if (ret < 0) {
+    ret = nbd_receive_reply(s->bs, s->ioc, &s->reply, NULL);
+    if (ret <= 0) {
+        ret = ret ? ret : -EIO;
         nbd_channel_error(s, ret);
         return ret;
     }
     if (nbd_reply_is_structured(&s->reply) &&
         s->info.mode < NBD_MODE_STRUCTURED) {
         nbd_channel_error(s, -EINVAL);
-        error_setg(errp, "unexpected structured reply");
         return -EINVAL;
     }
     ind2 = COOKIE_TO_INDEX(s->reply.cookie);
     if (ind2 >= MAX_NBD_REQUESTS || !s->requests[ind2].coroutine) {
         nbd_channel_error(s, -EINVAL);
-        error_setg(errp, "unexpected cookie value");
         return -EINVAL;
     }
     if (s->reply.cookie == cookie) {
@@ -616,17 +609,13 @@ static int nbd_parse_offset_hole_payload(BDRVNBDState *s,
  */
 static int nbd_parse_blockstatus_payload(BDRVNBDState *s,
                                          NBDStructuredReplyChunk *chunk,
-                                         uint8_t *payload, bool wide,
-                                         uint64_t orig_length,
-                                         NBDExtent64 *extent, Error **errp)
+                                         uint8_t *payload, uint64_t orig_length,
+                                         NBDExtent32 *extent, Error **errp)
 {
     uint32_t context_id;
-    uint32_t count;
-    size_t ext_len = wide ? sizeof(*extent) : sizeof(NBDExtent32);
-    size_t pay_len = sizeof(context_id) + wide * sizeof(count) + ext_len;
     /* The server succeeded, so it must have sent [at least] one extent */
-    if (chunk->length < pay_len) {
+    if (chunk->length < sizeof(context_id) + sizeof(*extent)) {
         error_setg(errp, "Protocol error: invalid payload for "
                    "NBD_REPLY_TYPE_BLOCK_STATUS");
         return -EINVAL;
@@ -641,15 +630,8 @@ static int nbd_parse_blockstatus_payload(BDRVNBDState *s,
         return -EINVAL;
     }
-    if (wide) {
-        count = payload_advance32(&payload);
-        extent->length = payload_advance64(&payload);
-        extent->flags = payload_advance64(&payload);
-    } else {
-        count = 0;
-        extent->length = payload_advance32(&payload);
-        extent->flags = payload_advance32(&payload);
-    }
+    extent->length = payload_advance32(&payload);
+    extent->flags = payload_advance32(&payload);
     if (extent->length == 0) {
         error_setg(errp, "Protocol error: server sent status chunk with "
@@ -670,7 +652,7 @@ static int nbd_parse_blockstatus_payload(BDRVNBDState *s,
      * (always a safe status, even if it loses information).
      */
     if (s->info.min_block && !QEMU_IS_ALIGNED(extent->length,
                                               s->info.min_block)) {
         trace_nbd_parse_blockstatus_compliance("extent length is unaligned");
         if (extent->length > s->info.min_block) {
             extent->length = QEMU_ALIGN_DOWN(extent->length,
@@ -684,15 +666,13 @@ static int nbd_parse_blockstatus_payload(BDRVNBDState *s,
     /*
      * We used NBD_CMD_FLAG_REQ_ONE, so the server should not have
      * sent us any more than one extent, nor should it have included
-     * status beyond our request in that extent. Furthermore, a wide
-     * server should have replied with an accurate count (we left
-     * count at 0 for a narrow server). However, it's easy enough to
-     * ignore the server's noncompliance without killing the
+     * status beyond our request in that extent. However, it's easy
+     * enough to ignore the server's noncompliance without killing the
      * connection; just ignore trailing extents, and clamp things to
      * the length of our request.
      */
-    if (count != wide || chunk->length > pay_len) {
-        trace_nbd_parse_blockstatus_compliance("unexpected extent count");
+    if (chunk->length > sizeof(context_id) + sizeof(*extent)) {
+        trace_nbd_parse_blockstatus_compliance("more than one extent");
     }
     if (extent->length > orig_length) {
         extent->length = orig_length;
@@ -862,9 +842,9 @@ static coroutine_fn int nbd_co_do_receive_one_chunk(
     }
     *request_ret = 0;
-    ret = nbd_receive_replies(s, cookie, errp);
+    ret = nbd_receive_replies(s, cookie);
     if (ret < 0) {
-        error_prepend(errp, "Connection closed: ");
+        error_setg(errp, "Connection closed");
         return -EIO;
     }
     assert(s->ioc);
@@ -1138,7 +1118,7 @@ nbd_co_receive_cmdread_reply(BDRVNBDState *s, uint64_t cookie,
 static int coroutine_fn
 nbd_co_receive_blockstatus_reply(BDRVNBDState *s, uint64_t cookie,
-                                 uint64_t length, NBDExtent64 *extent,
+                                 uint64_t length, NBDExtent32 *extent,
                                  int *request_ret, Error **errp)
 {
     NBDReplyChunkIter iter;
@@ -1151,17 +1131,11 @@ nbd_co_receive_blockstatus_reply(BDRVNBDState *s, uint64_t cookie,
     NBD_FOREACH_REPLY_CHUNK(s, iter, cookie, false, NULL, &reply, &payload) {
         int ret;
         NBDStructuredReplyChunk *chunk = &reply.structured;
-        bool wide;
         assert(nbd_reply_is_structured(&reply));
         switch (chunk->type) {
-        case NBD_REPLY_TYPE_BLOCK_STATUS_EXT:
         case NBD_REPLY_TYPE_BLOCK_STATUS:
-            wide = chunk->type == NBD_REPLY_TYPE_BLOCK_STATUS_EXT;
-            if ((s->info.mode >= NBD_MODE_EXTENDED) != wide) {
-                trace_nbd_extended_headers_compliance("block_status");
-            }
             if (received) {
                 nbd_channel_error(s, -EINVAL);
                 error_setg(&local_err, "Several BLOCK_STATUS chunks in reply");
@@ -1169,9 +1143,9 @@ nbd_co_receive_blockstatus_reply(BDRVNBDState *s, uint64_t cookie,
             }
             received = true;
-            ret = nbd_parse_blockstatus_payload(
-                s, &reply.structured, payload, wide,
-                length, extent, &local_err);
+            ret = nbd_parse_blockstatus_payload(s, &reply.structured,
+                                                payload, length, extent,
+                                                &local_err);
             if (ret < 0) {
                 nbd_channel_error(s, ret);
                 nbd_iter_channel_error(&iter, ret, &local_err);
@@ -1401,7 +1375,7 @@ static int coroutine_fn GRAPH_RDLOCK nbd_client_co_block_status(
     int64_t *pnum, int64_t *map, BlockDriverState **file)
 {
     int ret, request_ret;
-    NBDExtent64 extent = { 0 };
+    NBDExtent32 extent = { 0 };
     BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
     Error *local_err = NULL;
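
For reference, the removed pay_len/ext_len computation above implies two block-status payload shapes, one for narrow and one for extended replies. A small sketch of the extent shapes as they can be inferred from the removed lines (field names are taken from the parsing code shown here; this is not quoted from the NBD headers):

#include <stdint.h>

/* Narrow NBD_REPLY_TYPE_BLOCK_STATUS payload:
 *     uint32_t context_id;  then one or more 8-byte extents */
typedef struct NBDExtent32 {
    uint32_t length;
    uint32_t flags;
} NBDExtent32;

/* Extended NBD_REPLY_TYPE_BLOCK_STATUS_EXT payload:
 *     uint32_t context_id;  uint32_t count;  then 16-byte extents */
typedef struct NBDExtent64 {
    uint64_t length;
    uint64_t flags;
} NBDExtent64;

/* The removed pay_len expression reduces to:
 *   narrow: sizeof(context_id) + sizeof(NBDExtent32)                 == 12
 *   wide:   sizeof(context_id) + sizeof(count) + sizeof(NBDExtent64) == 24 */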


@@ -843,7 +843,7 @@ static void nfs_refresh_filename(BlockDriverState *bs)
     }
 }
-static char * GRAPH_RDLOCK nfs_dirname(BlockDriverState *bs, Error **errp)
+static char *nfs_dirname(BlockDriverState *bs, Error **errp)
 {
     NFSClient *client = bs->opaque;


@@ -16,7 +16,6 @@
 #include "qapi/error.h"
 #include "qapi/qmp/qdict.h"
 #include "qapi/qmp/qstring.h"
-#include "qemu/defer-call.h"
 #include "qemu/error-report.h"
 #include "qemu/main-loop.h"
 #include "qemu/module.h"
@@ -417,10 +416,9 @@ static bool nvme_process_completion(NVMeQueuePair *q)
             q->cq_phase = !q->cq_phase;
         }
         cid = le16_to_cpu(c->cid);
-        if (cid == 0 || cid > NVME_NUM_REQS) {
-            warn_report("NVMe: Unexpected CID in completion queue: %" PRIu32
-                        ", should be within: 1..%u inclusively", cid,
-                        NVME_NUM_REQS);
+        if (cid == 0 || cid > NVME_QUEUE_SIZE) {
+            warn_report("NVMe: Unexpected CID in completion queue: %"PRIu32", "
+                        "queue size: %u", cid, NVME_QUEUE_SIZE);
             continue;
         }
         trace_nvme_complete_command(s, q->index, cid);
@@ -478,7 +476,7 @@ static void nvme_trace_command(const NvmeCmd *cmd)
     }
 }
-static void nvme_deferred_fn(void *opaque)
+static void nvme_unplug_fn(void *opaque)
 {
     NVMeQueuePair *q = opaque;
@@ -505,7 +503,7 @@ static void nvme_submit_command(NVMeQueuePair *q, NVMeRequest *req,
     q->need_kick++;
     qemu_mutex_unlock(&q->lock);
-    defer_call(nvme_deferred_fn, q);
+    blk_io_plug_call(nvme_unplug_fn, q);
 }
 static void nvme_admin_cmd_sync_cb(void *opaque, int ret)


@@ -59,10 +59,11 @@ typedef struct ParallelsDirtyBitmapFeature {
 } QEMU_PACKED ParallelsDirtyBitmapFeature;
 /* Given L1 table read bitmap data from the image and populate @bitmap */
-static int GRAPH_RDLOCK
-parallels_load_bitmap_data(BlockDriverState *bs, const uint64_t *l1_table,
-                           uint32_t l1_size, BdrvDirtyBitmap *bitmap,
-                           Error **errp)
+static int parallels_load_bitmap_data(BlockDriverState *bs,
+                                      const uint64_t *l1_table,
+                                      uint32_t l1_size,
+                                      BdrvDirtyBitmap *bitmap,
+                                      Error **errp)
 {
     BDRVParallelsState *s = bs->opaque;
     int ret = 0;
@@ -119,16 +120,17 @@ finish:
  * @data buffer (of @data_size size) is the Dirty bitmaps feature which
  * consists of ParallelsDirtyBitmapFeature followed by L1 table.
  */
-static BdrvDirtyBitmap * GRAPH_RDLOCK
-parallels_load_bitmap(BlockDriverState *bs, uint8_t *data, size_t data_size,
-                      Error **errp)
+static BdrvDirtyBitmap *parallels_load_bitmap(BlockDriverState *bs,
+                                              uint8_t *data,
+                                              size_t data_size,
+                                              Error **errp)
 {
     int ret;
     ParallelsDirtyBitmapFeature bf;
     g_autofree uint64_t *l1_table = NULL;
     BdrvDirtyBitmap *bitmap;
     QemuUUID uuid;
-    char uuidstr[UUID_STR_LEN];
+    char uuidstr[UUID_FMT_LEN + 1];
     int i;
     if (data_size < sizeof(bf)) {
@@ -181,9 +183,8 @@ parallels_load_bitmap(BlockDriverState *bs, uint8_t *data, size_t data_size,
     return bitmap;
 }
-static int GRAPH_RDLOCK
-parallels_parse_format_extension(BlockDriverState *bs, uint8_t *ext_cluster,
-                                 Error **errp)
+static int parallels_parse_format_extension(BlockDriverState *bs,
+                                            uint8_t *ext_cluster, Error **errp)
 {
     BDRVParallelsState *s = bs->opaque;
     int ret;


@@ -200,7 +200,7 @@ static int mark_used(BlockDriverState *bs, unsigned long *bitmap,
  * bitmap anyway, as much as we can. This information will be used for
  * error resolution.
  */
-static int GRAPH_RDLOCK parallels_fill_used_bitmap(BlockDriverState *bs)
+static int parallels_fill_used_bitmap(BlockDriverState *bs)
 {
     BDRVParallelsState *s = bs->opaque;
     int64_t payload_bytes;
@@ -415,10 +415,14 @@ parallels_co_flush_to_os(BlockDriverState *bs)
     return 0;
 }
-static int coroutine_fn GRAPH_RDLOCK
-parallels_co_block_status(BlockDriverState *bs, bool want_zero, int64_t offset,
-                          int64_t bytes, int64_t *pnum, int64_t *map,
-                          BlockDriverState **file)
+static int coroutine_fn parallels_co_block_status(BlockDriverState *bs,
+                                                  bool want_zero,
+                                                  int64_t offset,
+                                                  int64_t bytes,
+                                                  int64_t *pnum,
+                                                  int64_t *map,
+                                                  BlockDriverState **file)
 {
     BDRVParallelsState *s = bs->opaque;
     int count;
@@ -1185,7 +1189,7 @@ static int parallels_probe(const uint8_t *buf, int buf_size,
     return 0;
 }
-static int GRAPH_RDLOCK parallels_update_header(BlockDriverState *bs)
+static int parallels_update_header(BlockDriverState *bs)
 {
     BDRVParallelsState *s = bs->opaque;
     unsigned size = MAX(bdrv_opt_mem_align(bs->file->bs),
@@ -1255,8 +1259,6 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
         return ret;
     }
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     file_nb_sectors = bdrv_nb_sectors(bs->file->bs);
     if (file_nb_sectors < 0) {
         return -EINVAL;
@@ -1364,9 +1366,9 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
     error_setg(&s->migration_blocker, "The Parallels format used by node '%s' "
                "does not support live migration",
                bdrv_get_device_or_node_name(bs));
-    ret = migrate_add_blocker_normal(&s->migration_blocker, errp);
+    ret = migrate_add_blocker(s->migration_blocker, errp);
     if (ret < 0) {
+        error_setg(errp, "Migration blocker error");
         goto fail;
     }
     qemu_co_mutex_init(&s->lock);
@@ -1401,7 +1403,7 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
         ret = bdrv_check(bs, &res, BDRV_FIX_ERRORS | BDRV_FIX_LEAKS);
         if (ret < 0) {
             error_setg_errno(errp, -ret, "Could not repair corrupted image");
-            migrate_del_blocker(&s->migration_blocker);
+            migrate_del_blocker(s->migration_blocker);
             goto fail;
         }
     }
@@ -1418,6 +1420,7 @@ fail:
      */
     parallels_free_used_bitmap(bs);
+    error_free(s->migration_blocker);
     g_free(s->bat_dirty_bmap);
     qemu_vfree(s->header);
     return ret;
@@ -1428,8 +1431,6 @@ static void parallels_close(BlockDriverState *bs)
 {
     BDRVParallelsState *s = bs->opaque;
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     if ((bs->open_flags & BDRV_O_RDWR) && !(bs->open_flags & BDRV_O_INACTIVE)) {
         s->header->inuse = 0;
         parallels_update_header(bs);
@@ -1444,7 +1445,8 @@ static void parallels_close(BlockDriverState *bs)
     g_free(s->bat_dirty_bmap);
     qemu_vfree(s->header);
-    migrate_del_blocker(&s->migration_blocker);
+    migrate_del_blocker(s->migration_blocker);
+    error_free(s->migration_blocker);
 }
 static bool parallels_is_support_dirty_bitmaps(BlockDriverState *bs)


@@ -90,8 +90,7 @@ typedef struct BDRVParallelsState {
     Error *migration_blocker;
 } BDRVParallelsState;
-int GRAPH_RDLOCK
-parallels_read_format_extension(BlockDriverState *bs, int64_t ext_off,
-                                Error **errp);
+int parallels_read_format_extension(BlockDriverState *bs,
+                                    int64_t ext_off, Error **errp);
 #endif

Some files were not shown because too many files have changed in this diff.