Compare commits

..

34 Commits

Author SHA1 Message Date
Fabiano Rosas
1426f4034c tests/qtest/migration: Print migration incoming errors
We currently just assert when incoming migration fails. Let's print
the error message from QMP as well.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 18:28:54 -03:00
Fabiano Rosas
4bc24a1768 tests/qtest: Add a test for fixed-ram with passing of fds
Add a multifd test for fixed-ram with passing of fds into QEMU. This
is how libvirt will consume the feature.

There are a couple of details to the fdset mechanism:

- multifd needs two distinct file descriptors (not duplicated with
  dup()) on the outgoing side so it can enable O_DIRECT only on the
  channels that write with alignment. The dup() system call creates
  file descriptors that share status flags, of which O_DIRECT is one.

  The incoming side doesn't set O_DIRECT, so it can dup() fds and
  therefore needs only one fd in the fdset.

- the open() access mode flags used for the fds passed into QEMU need
  to match the flags QEMU uses to open the file. Currently O_WRONLY
  for src and O_RDONLY for dst.

O_DIRECT is not supported on all systems/filesystems, so run the fdset
test without O_DIRECT if that's the case. The migration code should
still work in that scenario.
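The dup() behavior described above is easy to demonstrate outside QEMU. A minimal sketch (not QEMU code; O_APPEND stands in for O_DIRECT here, since both are file status flags shared across dup() but O_APPEND is accepted by every filesystem):

```c
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* Returns 1 if a status flag set on one fd is visible through its dup(),
 * i.e. the two fds share a single open file description. */
int dup_shares_status_flags(void)
{
    char path[] = "/tmp/flagtestXXXXXX";
    int fd1 = mkstemp(path);
    if (fd1 < 0) {
        return -1;
    }
    unlink(path);

    int fd2 = dup(fd1);                 /* same open file description */
    fcntl(fd1, F_SETFL, fcntl(fd1, F_GETFL) | O_APPEND);

    int shared = (fcntl(fd2, F_GETFL) & O_APPEND) != 0;
    close(fd1);
    close(fd2);
    return shared;
}
```

This is why the outgoing side needs two independently opened fds: setting O_DIRECT on a dup()'d fd would set it on both.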

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 18:28:54 -03:00
Fabiano Rosas
c8a4fbac41 migration: Add support for fdset with multifd + file
Allow multifd to use an fdset when migrating to a file. This is useful
for the scenario where the management layer wants to have control over
the migration file.

By receiving the file descriptors directly, QEMU can delegate some
high level operating system operations to the management layer (such
as mandatory access control).

The management layer might also want to add its own headers before the
migration stream.

Enable the "file:/dev/fdset/#" syntax for the multifd migration with
fixed-ram. The fdset should contain two fds on the source side of the
migration and one fd on the destination side. The two fds must not be
duplicates of each other.

Multifd enables O_DIRECT on the source side using one of the fds and
keeps the other without the flag. None of the fds should have the
O_DIRECT flag already set.

The fdset mechanism also requires that the open() access mode flags be
the same as what QEMU uses internally: WRONLY for the source fds and
RDONLY for the destination fds.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 18:28:54 -03:00
Fabiano Rosas
cb362dd6a2 monitor: fdset: Match against O_DIRECT
We're about to enable the use of O_DIRECT in the migration code and
due to the alignment restrictions imposed by filesystems we need to
make sure the flag is only used when doing aligned IO.

The migration will do parallel IO to different regions of a file, so
we need to use more than one file descriptor. Those cannot be obtained
by duplicating (dup()) since duplicated file descriptors share the
file status flags, including O_DIRECT. If one migration channel does
unaligned IO while another sets O_DIRECT to do aligned IO, the
filesystem would fail the unaligned operation.

The add-fd QMP command along with the fdset code are specifically
designed to allow the user to pass a set of file descriptors with
different access flags into QEMU to be later fetched by code that
needs to alternate between those flags when doing IO.

Extend the fdset matching function to behave the same with the
O_DIRECT flag.
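A sketch of what such a matching function checks (illustrative only, not the actual monitor code; O_APPEND again stands in for O_DIRECT, which some test filesystems reject):

```c
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* Two fds "match" if they have the same access mode and the same value
 * for the extra status flag being compared (O_DIRECT in QEMU's case). */
int fdset_flags_match(int fd_a, int fd_b, int status_flag)
{
    int fa = fcntl(fd_a, F_GETFL);
    int fb = fcntl(fd_b, F_GETFL);
    if (fa < 0 || fb < 0) {
        return -1;
    }
    return (fa & O_ACCMODE) == (fb & O_ACCMODE) &&
           (fa & status_flag) == (fb & status_flag);
}

/* Demo: two O_WRONLY fds match; an O_WRONLY and an O_RDONLY fd don't. */
int fdset_match_demo(void)
{
    char path[] = "/tmp/fdsetXXXXXX";
    int base = mkstemp(path);
    if (base < 0) {
        return -1;
    }
    int w1 = open(path, O_WRONLY);
    int w2 = open(path, O_WRONLY);
    int r1 = open(path, O_RDONLY);
    int ok = fdset_flags_match(w1, w2, O_APPEND) == 1 &&
             fdset_flags_match(w1, r1, O_APPEND) == 0;
    unlink(path);
    close(base);
    close(w1);
    close(w2);
    close(r1);
    return ok;
}
```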

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 17:11:48 -03:00
Fabiano Rosas
eeefd3103f monitor: Extract fdset fd flags comparison into a function
We're about to add one more condition to the flags comparison that
requires an ifdef. Move the code into a separate function now to make
it cleaner after the next patch.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 17:11:48 -03:00
Fabiano Rosas
b4e9a2b235 monitor: Honor QMP request for fd removal immediately
We're currently only removing an fd from the fdset if the VM is
running. This causes a QMP call to "remove-fd" to not actually remove
the fd if the VM happens to be stopped.

While the fd would eventually be removed when monitor_fdset_cleanup()
is called again, the user request should be honored and the fd
actually removed. Calling remove-fd + query-fdset shows a recently
removed fd still present.

The runstate_is_running() check was introduced by commit ebe52b592d
("monitor: Prevent removing fd from set during init"), whose shortlog
indicates it was trying to avoid removing a yet-unduplicated fd too
early.

I don't see why an fd explicitly removed with qmp_remove_fd() should
be subject to runstate_is_running(). I'm assuming this was a mistake
when adding the parentheses around the expression.

Move the runstate_is_running() check to apply only to the
QLIST_EMPTY(dup_fds) side of the expression and ignore it when
mon_fdset_fd->removed has been explicitly set.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 17:11:47 -03:00
Fabiano Rosas
e98d9aaec1 tests/qtest: Add a test for migration with direct-io and multifd
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 17:11:38 -03:00
Fabiano Rosas
cba2c89c13 migration: Add direct-io parameter
Add the direct-io migration parameter that tells the migration code to
use O_DIRECT when opening the migration stream file whenever possible.

This is currently only used for the secondary channels of fixed-ram
migration, which can guarantee that writes are page aligned.

However the parameter could be made to affect other types of
file-based migrations in the future.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 16:58:43 -03:00
Fabiano Rosas
60645076f3 tests/qtest: Add a multifd + fixed-ram migration test
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 15:45:21 -03:00
Fabiano Rosas
01c82ca093 migration/multifd: Support incoming fixed-ram stream format
For the incoming fixed-ram migration we need to read the ramblock
headers, get the pages bitmap and send the host address of each
non-zero page to the multifd channel thread for writing.

To read from the migration file we need a preadv function that can
read into the iovs in segments of contiguous pages because (as in the
writing case) the file offset applies to the entire iovec.

Usage on HMP is:

(qemu) migrate_set_capability multifd on
(qemu) migrate_set_capability fixed-ram on
(qemu) migrate_set_parameter max-bandwidth 0
(qemu) migrate_set_parameter multifd-channels 8
(qemu) migrate_incoming file:migfile
(qemu) info status
(qemu) c

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 15:45:21 -03:00
Fabiano Rosas
ea602b85ed migration/multifd: Support outgoing fixed-ram stream format
The new fixed-ram stream format uses a file transport and puts ram
pages in the migration file at their respective offsets and can be
done in parallel by using the pwritev system call which takes iovecs
and an offset.
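The pwritev() semantics relied on here can be sketched in plain C (illustrative, not QEMU code): a single offset positions the whole iovec, and the buffers land back-to-back from there.

```c
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

/* Write two 8-byte "pages" at file offset 4096 in one call, then read
 * the second one back from its computed position. */
int pwritev_offset_demo(void)
{
    char path[] = "/tmp/pwvXXXXXX";
    int fd = mkstemp(path);
    if (fd < 0) {
        return -1;
    }
    unlink(path);

    char a[8] = "page-a.";
    char b[8] = "page-b.";
    struct iovec iov[2] = {
        { .iov_base = a, .iov_len = sizeof(a) },
        { .iov_base = b, .iov_len = sizeof(b) },
    };

    /* The offset applies to the iovec as a whole... */
    if (pwritev(fd, iov, 2, 4096) != 16) {
        return -1;
    }

    /* ...so the second buffer ends up at 4096 + sizeof(a). */
    char back[8];
    if (pread(fd, back, sizeof(back), 4096 + sizeof(a)) != 8) {
        return -1;
    }
    int ok = memcmp(back, b, sizeof(b)) == 0;
    close(fd);
    return ok;
}
```

Note that, unlike write(), pwritev() does not advance the file offset, which is what lets several channels share one fd.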

Add support for enabling the new format along with multifd to make
use of the threading and page handling already in place.

This requires multifd to stop sending headers and leave the stream
format to the fixed-ram code. When it comes time to write the data, we
need to call a version of qio_channel_write that can take an offset.

Usage on HMP is:

(qemu) stop
(qemu) migrate_set_capability multifd on
(qemu) migrate_set_capability fixed-ram on
(qemu) migrate_set_parameter max-bandwidth 0
(qemu) migrate_set_parameter multifd-channels 8
(qemu) migrate file:migfile

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 15:45:21 -03:00
Fabiano Rosas
65ceb3509b migration/ram: Ignore multifd flush when doing fixed-ram migration
Some functionalities of multifd are incompatible with the 'fixed-ram'
migration format.

The MULTIFD_FLUSH flag in particular is not used because in fixed-ram
there is no synchronicity between migration source and destination, so
there is no need for a sync packet. In fact, fixed-ram disables
packets in multifd as a whole.

Make sure RAM_SAVE_FLAG_MULTIFD_FLUSH is never emitted when fixed-ram
is enabled.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 15:45:20 -03:00
Fabiano Rosas
f33324162e migration/ram: Add a wrapper for fixed-ram shadow bitmap
We'll need to set the shadow_bmap bits from outside ram.c soon and
TARGET_PAGE_BITS is poisoned, so add a wrapper to it.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 15:45:20 -03:00
Fabiano Rosas
3f46b2e302 io: Add a pwritev/preadv version that takes a discontiguous iovec
For the upcoming support for fixed-ram migration with multifd, we need
to be able to accept an iovec array with non-contiguous data.

Add a pwritev and preadv version that splits the array into contiguous
segments before writing. With that we can have the ram code continue
to add pages in any order and the multifd code continue to send large
arrays for reading and writing.
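The splitting described above boils down to detecting where one buffer's end is not the next buffer's start. A minimal sketch of that check (not the actual implementation):

```c
#include <stddef.h>
#include <sys/uio.h>

/* Count the contiguous runs in an iovec array; each run could be
 * written with a single pwritev() at its own file offset. */
size_t iov_contiguous_runs(const struct iovec *iov, size_t n)
{
    size_t runs = n ? 1 : 0;
    for (size_t i = 1; i < n; i++) {
        const char *prev_end =
            (const char *)iov[i - 1].iov_base + iov[i - 1].iov_len;
        if (prev_end != iov[i].iov_base) {
            runs++;
        }
    }
    return runs;
}

/* Demo: two adjacent buffers form one run; a gap starts a second run. */
size_t iov_runs_demo(void)
{
    static char buf[4096];
    struct iovec iov[3] = {
        { .iov_base = buf,        .iov_len = 512 },
        { .iov_base = buf + 512,  .iov_len = 512 },  /* contiguous */
        { .iov_base = buf + 2048, .iov_len = 512 },  /* gap: new run */
    };
    return iov_contiguous_runs(iov, 3);
}
```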

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
Since iovs can be non-contiguous, we'd need a separate array on the
side to carry an extra file offset for each of them, so I'm relying on
the fact that the iovs are all within the same host page and passing
in an encoded offset that takes the host page into account.
2023-11-03 15:45:20 -03:00
Fabiano Rosas
7d9bc61bbe migration/multifd: Add pages to the receiving side
Currently multifd does not need to have knowledge of pages on the
receiving side because all the information needed is within the
packets that come in the stream.

We're about to add support for fixed-ram migration, which cannot use
packets because it expects the ramblock section in the migration file
to contain only the guest page data.

Add a pointer to MultiFDPages in the multifd_recv_state and use the
pages similarly to what we already do on the sending side. The pages
are used to transfer data between the ram migration code in the main
migration thread and the multifd receiving threads.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 15:45:20 -03:00
Fabiano Rosas
28c14413ee migration/multifd: Add incoming QIOChannelFile support
On the receiving side we don't need to differentiate between main
channel and threads, so whichever channel is defined first gets to be
the main one. And since there are no packets, use the atomic channel
count to index into the params array.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 15:45:20 -03:00
Fabiano Rosas
60eca4db23 migration/multifd: Add outgoing QIOChannelFile support
Allow multifd to open file-backed channels. This will be used when
enabling the fixed-ram migration stream format which expects a
seekable transport.

The QIOChannel read and write methods will use the preadv/pwritev
versions which don't update the file offset at each call so we can
reuse the fd without re-opening for every channel.

Note that this is just setup code and multifd cannot yet make use of
the file channels.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 15:45:14 -03:00
Fabiano Rosas
66687953e3 migration/multifd: Allow multifd without packets
For the upcoming support for the new 'fixed-ram' migration stream
format, we cannot use multifd packets because each write into the
ramblock section of the migration file is expected to contain only
guest pages, written at their respective offsets relative to the
ramblock section header.

There is no space for the packet information and the expected gains
from the new approach come partly from being able to write the pages
sequentially without extraneous data in between.

The new format also doesn't need the packets and all necessary
information can be taken from the standard migration headers with some
(future) changes to multifd code.

Use the presence of the fixed-ram capability to decide whether to send
packets. For now this has no effect as fixed-ram cannot yet be enabled
with multifd.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 15:35:38 -03:00
Nikolay Borisov
92ef7c2da4 tests/qtest: migration-test: Add tests for fixed-ram file-based migration
Add basic tests for 'fixed-ram' migration.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 15:35:38 -03:00
Nikolay Borisov
0f985fa252 migration/ram: Add support for 'fixed-ram' migration restore
Add the necessary code to parse the format changes for the 'fixed-ram'
capability.

One of the more notable changes in behavior is that in the 'fixed-ram'
case ram pages are restored in one go rather than constantly looping
through the migration stream.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 15:35:38 -03:00
Nikolay Borisov
5ea00e7789 migration/ram: Add support for 'fixed-ram' outgoing migration
Implement the outgoing migration side for the 'fixed-ram' capability.

A bitmap is introduced to track which pages have been written in the
migration file. Pages are written at a fixed location for every
ramblock. Zero pages are ignored as they'd be zero on the destination
as well.

The migration stream is altered to put the dirty pages for a ramblock
after its header instead of having a sequential stream of pages that
follow the ramblock headers. Since all pages have a fixed location,
RAM_SAVE_FLAG_EOS is no longer generated on every migration iteration.

Without fixed-ram (current):

ramblock 1 header|ramblock 2 header|...|RAM_SAVE_FLAG_EOS|stream of
 pages (iter 1)|RAM_SAVE_FLAG_EOS|stream of pages (iter 2)|...

With fixed-ram (new):

ramblock 1 header|ramblock 1 fixed-ram header|ramblock 1 pages (fixed
 offsets)|ramblock 2 header|ramblock 2 fixed-ram header|ramblock 2
 pages (fixed offsets)|...|RAM_SAVE_FLAG_EOS

where:
 - ramblock header: the generic information for a ramblock, such as
   idstr, used_len, etc.

 - ramblock fixed-ram header: the new information added by this
   feature: bitmap of pages written, bitmap size and offset of pages
   in the migration file.
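As a rough illustration of the sizing involved (hypothetical helper, not the actual header layout): the bitmap in the fixed-ram header needs one bit per page of the ramblock's used length, typically rounded up to whole 64-bit words.

```c
#include <stdint.h>

/* Bytes needed for a page bitmap covering `used_len` bytes of RAM at
 * `page_size` granularity, rounded up to whole 64-bit words. */
uint64_t page_bitmap_bytes(uint64_t used_len, uint64_t page_size)
{
    uint64_t pages = (used_len + page_size - 1) / page_size;
    uint64_t words = (pages + 63) / 64;
    return words * 8;
}
```

For a 1 MiB ramblock with 4 KiB pages that is 256 bits, i.e. 32 bytes.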

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 15:08:08 -03:00
Fabiano Rosas
ae9ef03607 migration: Add fixed-ram URI compatibility check
The fixed-ram migration format needs a channel that supports seeking
to be able to write each page to an arbitrary offset in the migration
stream.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2023-11-03 15:05:04 -03:00
Fabiano Rosas
83b60e1f77 migration/ram: Introduce 'fixed-ram' migration capability
Add a new migration capability 'fixed-ram'.

The core of the feature is to ensure that each RAM page has a specific
offset in the resulting migration stream. The reasons why we'd want
such behavior are:

 - The resulting file will have a bounded size, since pages which are
   dirtied multiple times will always go to a fixed location in the
   file, rather than constantly being added to a sequential
stream. This eliminates cases where a VM with, say, 1G of RAM can
result in a migration file that's tens of GBs, if the workload
constantly redirties memory.

 - It paves the way to implement O_DIRECT-enabled save/restore of the
   migration stream as the pages are ensured to be written at aligned
   offsets.

 - It allows the usage of multifd so we can write RAM pages to the
   migration file in parallel.

For now, enabling the capability has no effect. The next couple of
patches implement the core functionality.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 14:59:51 -03:00
Nikolay Borisov
607b519673 migration/qemu-file: add utility methods for working with seekable channels
Add utility methods that will be needed when implementing 'fixed-ram'
migration capability.

qemu_file_is_seekable
qemu_put_buffer_at
qemu_get_buffer_at
qemu_set_offset
qemu_get_offset

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2023-11-03 14:54:16 -03:00
Nikolay Borisov
75a0a94d57 io: implement io_pwritev/preadv for QIOChannelFile
The upcoming 'fixed-ram' feature will require QEMU to write data to
(and restore from) specific offsets of the migration file.

Add a minimal implementation of pwritev/preadv and expose them via the
io_pwritev and io_preadv interfaces.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2023-11-03 11:57:43 -03:00
Nikolay Borisov
f8c7a93b21 io: Add generic pwritev/preadv interface
Introduce basic pwritev/preadv support in the generic channel layer.
Specific implementation will follow for the file channel as this is
required in order to support migration streams with fixed location of
each ram page.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 11:57:43 -03:00
Nikolay Borisov
e1bf06c06e io: add and implement QIO_CHANNEL_FEATURE_SEEKABLE for channel file
Add a generic QIOChannel feature SEEKABLE which will be used by the
qemu_file* APIs. For the time being this will only be implemented for
file channels.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2023-11-03 11:57:42 -03:00
Fabiano Rosas
2001f4bd65 tests/qtest: Re-enable multifd cancel test
We've found the source of flakiness in this test, so re-enable it.

Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 11:57:42 -03:00
Fabiano Rosas
7dbc58d1b8 migration/multifd: Allow QIOTask error reporting without an object
The only way for the channel backend to report an error to the multifd
core during creation is by setting the QIOTask error. We must allow
the channel backend to set the error even if the QIOChannel has failed
to be created, which means the QIOTask source object would be NULL.

At multifd_new_send_channel_async() move the QOM casting of the
channel until after we have checked for the QIOTask error.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
context: When doing multifd + file, it's possible that we fail to open
the file. I'll use the empty QIOTask to report the error back to
multifd.
2023-11-03 11:57:42 -03:00
Fabiano Rosas
257ff17dce migration/multifd: Stop setting p->ioc before connecting
This is already done at multifd_channel_connect().

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 11:57:42 -03:00
Fabiano Rosas
e392ce4809 migration/multifd: Move p->running into multifd_channel_connect
The multifd_send_thread is only really running when the else branch at
multifd_channel_connect() is taken. For TLS this means the function
needs to be called a second time after the TLS handshake. Setting
p->running before that is misleading and could lead to issues because
multifd_tls_channel_connect() overwrites the channel (p->c) set at
multifd_new_send_channel_async().

Set p->running right before starting the multifd thread to be more
precise.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 11:57:42 -03:00
Fabiano Rosas
677780fd36 migration/multifd: Fix multifd_pages_init argument
The 'size' argument is the number of pages that fit in a multifd
packet. Change it to uint32_t and rename.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 11:57:42 -03:00
Fabiano Rosas
f6834bad8d migration/multifd: Remove QEMUFile from where it is not needed
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 11:57:40 -03:00
Fabiano Rosas
0e1bc6f8f7 migration/multifd: Remove MultiFDPages_t::packet_num
This was introduced by commit 34c55a94b1 ("migration: Create multipage
support") and never used.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-03 11:56:58 -03:00
619 changed files with 10128 additions and 24071 deletions

@@ -165,7 +165,7 @@ cross-win32-system:
     job: win32-fedora-cross-container
   variables:
     IMAGE: fedora-win32-cross
-    EXTRA_CONFIGURE_OPTS: --enable-fdt=internal --disable-plugins
+    EXTRA_CONFIGURE_OPTS: --enable-fdt=internal
     CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu m68k-softmmu
       microblazeel-softmmu mips64el-softmmu nios2-softmmu
   artifacts:

@@ -179,7 +179,7 @@ cross-win64-system:
     job: win64-fedora-cross-container
   variables:
     IMAGE: fedora-win64-cross
-    EXTRA_CONFIGURE_OPTS: --enable-fdt=internal --disable-plugins
+    EXTRA_CONFIGURE_OPTS: --enable-fdt=internal
     CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu
       m68k-softmmu microblazeel-softmmu nios2-softmmu
       or1k-softmmu rx-softmmu sh4eb-softmmu sparc64-softmmu

@@ -72,7 +72,6 @@
   - .\msys64\usr\bin\bash -lc "pacman -Sy --noconfirm --needed
       bison diffutils flex
       git grep make sed
-      $MINGW_TARGET-binutils
       $MINGW_TARGET-capstone
       $MINGW_TARGET-ccache
       $MINGW_TARGET-curl

@@ -30,12 +30,10 @@ malc <av1474@comtv.ru> malc <malc@c046a42c-6fe2-441c-8c8c-71466251a162>
 # Corrupted Author fields
 Aaron Larson <alarson@ddci.com> alarson@ddci.com
 Andreas Färber <andreas.faerber@web.de> Andreas Färber <andreas.faerber>
-fanwenjie <fanwj@mail.ustc.edu.cn> fanwj@mail.ustc.edu.cn <fanwj@mail.ustc.edu.cn>
 Jason Wang <jasowang@redhat.com> Jason Wang <jasowang>
 Marek Dolata <mkdolata@us.ibm.com> mkdolata@us.ibm.com <mkdolata@us.ibm.com>
 Michael Ellerman <mpe@ellerman.id.au> michael@ozlabs.org <michael@ozlabs.org>
 Nick Hudson <hnick@vmware.com> hnick@vmware.com <hnick@vmware.com>
-Timothée Cocault <timothee.cocault@gmail.com> timothee.cocault@gmail.com <timothee.cocault@gmail.com>

 # There is also a:
 # (no author) <(no author)@c046a42c-6fe2-441c-8c8c-71466251a162>

@@ -11,9 +11,6 @@ config OPENGL

 config X11
     bool

-config PIXMAN
-    bool
-
 config SPICE
     bool

@@ -49,6 +46,3 @@ config FUZZ
 config VFIO_USER_SERVER_ALLOWED
     bool
     imply VFIO_USER_SERVER
-
-config HV_BALLOON_POSSIBLE
-    bool

@@ -323,7 +323,7 @@ RISC-V TCG CPUs
 M: Palmer Dabbelt <palmer@dabbelt.com>
 M: Alistair Francis <alistair.francis@wdc.com>
 M: Bin Meng <bin.meng@windriver.com>
-R: Weiwei Li <liwei1518@gmail.com>
+R: Weiwei Li <liweiwei@iscas.ac.cn>
 R: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
 R: Liu Zhiwei <zhiwei_liu@linux.alibaba.com>
 L: qemu-riscv@nongnu.org

@@ -490,7 +490,7 @@ S: Supported
 F: include/sysemu/kvm_xen.h
 F: target/i386/kvm/xen*
 F: hw/i386/kvm/xen*
-F: tests/avocado/kvm_xen_guest.py
+F: tests/avocado/xen_guest.py

 Guest CPU Cores (other accelerators)
 ------------------------------------

@@ -859,10 +859,8 @@ M: Hao Wu <wuhaotsh@google.com>
 L: qemu-arm@nongnu.org
 S: Supported
 F: hw/*/npcm*
-F: hw/sensor/adm1266.c
 F: include/hw/*/npcm*
 F: tests/qtest/npcm*
-F: tests/qtest/adm1266-test.c
 F: pc-bios/npcm7xx_bootrom.bin
 F: roms/vbootrom
 F: docs/system/arm/nuvoton.rst

@@ -1194,7 +1192,6 @@ M: Richard Henderson <richard.henderson@linaro.org>
 R: Helge Deller <deller@gmx.de>
 S: Odd Fixes
 F: configs/devices/hppa-softmmu/default.mak
-F: hw/display/artist.c
 F: hw/hppa/
 F: hw/input/lasips2.c
 F: hw/net/*i82596*

@@ -1539,14 +1536,6 @@ F: hw/pci-host/mv64361.c
 F: hw/pci-host/mv643xx.h
 F: include/hw/pci-host/mv64361.h

-amigaone
-M: BALATON Zoltan <balaton@eik.bme.hu>
-L: qemu-ppc@nongnu.org
-S: Maintained
-F: hw/ppc/amigaone.c
-F: hw/pci-host/articia.c
-F: include/hw/pci-host/articia.h
-
 Virtual Open Firmware (VOF)
 M: Alexey Kardashevskiy <aik@ozlabs.ru>
 R: David Gibson <david@gibson.dropbear.id.au>

@@ -1626,7 +1615,6 @@ F: hw/intc/sh_intc.c
 F: hw/pci-host/sh_pci.c
 F: hw/timer/sh_timer.c
 F: include/hw/sh4/sh_intc.h
-F: include/hw/timer/tmu012.h

 Shix
 R: Yoshinori Sato <ysato@users.sourceforge.jp>

@@ -1784,7 +1772,7 @@ F: include/hw/southbridge/ich9.h
 F: include/hw/southbridge/piix.h
 F: hw/isa/apm.c
 F: include/hw/isa/apm.h
-F: tests/unit/test-x86-topo.c
+F: tests/unit/test-x86-cpuid.c
 F: tests/qtest/test-x86-cpuid-compat.c

 PC Chipset

@@ -1870,7 +1858,6 @@ M: Max Filippov <jcmvbkbc@gmail.com>
 S: Maintained
 F: hw/xtensa/xtfpga.c
 F: hw/net/opencores_eth.c
-F: include/hw/xtensa/mx_pic.h

 Devices
 -------

@@ -2323,15 +2310,6 @@ F: hw/virtio/virtio-mem-pci.h
 F: hw/virtio/virtio-mem-pci.c
 F: include/hw/virtio/virtio-mem.h

-virtio-snd
-M: Gerd Hoffmann <kraxel@redhat.com>
-R: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
-S: Supported
-F: hw/audio/virtio-snd.c
-F: hw/audio/virtio-snd-pci.c
-F: include/hw/audio/virtio-snd.h
-F: docs/system/devices/virtio-snd.rst
-
 nvme
 M: Keith Busch <kbusch@kernel.org>
 M: Klaus Jensen <its@irrelevant.dk>

@@ -2505,7 +2483,6 @@ S: Odd Fixes
 F: hw/display/virtio-gpu*
 F: hw/display/virtio-vga.*
 F: include/hw/virtio/virtio-gpu.h
-F: docs/system/devices/virtio-gpu.rst

 vhost-user-blk
 M: Raphael Norwitz <raphael.norwitz@nutanix.com>

@@ -2608,7 +2585,6 @@ W: https://canbus.pages.fel.cvut.cz/
 F: net/can/*
 F: hw/net/can/*
 F: include/net/can_*.h
-F: docs/system/devices/can.rst

 OpenPIC interrupt controller
 M: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>

@@ -2680,14 +2656,6 @@ F: hw/usb/canokey.c
 F: hw/usb/canokey.h
 F: docs/system/devices/canokey.rst

-Hyper-V Dynamic Memory Protocol
-M: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
-S: Supported
-F: hw/hyperv/hv-balloon*.c
-F: hw/hyperv/hv-balloon*.h
-F: include/hw/hyperv/dynmem-proto.h
-F: include/hw/hyperv/hv-balloon.h
-
 Subsystems
 ----------
 Overall Audio backends

@@ -2942,7 +2910,7 @@ F: gdbstub/*
 F: include/exec/gdbstub.h
 F: include/gdbstub/*
 F: gdb-xml/
-F: tests/tcg/multiarch/gdbstub/*
+F: tests/tcg/multiarch/gdbstub/
 F: scripts/feature_to_c.py
 F: scripts/probe-gdb-support.py

@@ -3164,11 +3132,10 @@ M: Michael Roth <michael.roth@amd.com>
 M: Konstantin Kostiuk <kkostiuk@redhat.com>
 S: Maintained
 F: qga/
-F: contrib/systemd/qemu-guest-agent.service
 F: docs/interop/qemu-ga.rst
 F: docs/interop/qemu-ga-ref.rst
 F: scripts/qemu-guest-agent/
-F: tests/*/test-qga*
+F: tests/unit/test-qga.c
 T: git https://github.com/mdroth/qemu.git qga

 QEMU Guest Agent Win32

@@ -4078,7 +4045,7 @@ F: gitdm.config
 F: contrib/gitdm/*

 Incompatible changes
-R: devel@lists.libvirt.org
+R: libvir-list@redhat.com
 F: docs/about/deprecated.rst

 Build System

@@ -22,6 +22,10 @@ void tlb_set_dirty(CPUState *cpu, vaddr vaddr)
 {
 }

+void tcg_flush_jmp_cache(CPUState *cpu)
+{
+}
+
 int probe_access_flags(CPUArchState *env, vaddr addr, int size,
                        MMUAccessType access_type, int mmu_idx,
                        bool nonfault, void **phost, uintptr_t retaddr)

@@ -24,7 +24,6 @@
 #include "exec/memory.h"
 #include "exec/cpu_ldst.h"
 #include "exec/cputlb.h"
-#include "exec/tb-flush.h"
 #include "exec/memory-internal.h"
 #include "exec/ram_addr.h"
 #include "tcg/tcg.h"

@@ -322,6 +321,21 @@ static void flush_all_helper(CPUState *src, run_on_cpu_func fn,
     }
 }

+void tlb_flush_counts(size_t *pfull, size_t *ppart, size_t *pelide)
+{
+    CPUState *cpu;
+    size_t full = 0, part = 0, elide = 0;
+
+    CPU_FOREACH(cpu) {
+        full += qatomic_read(&cpu->neg.tlb.c.full_flush_count);
+        part += qatomic_read(&cpu->neg.tlb.c.part_flush_count);
+        elide += qatomic_read(&cpu->neg.tlb.c.elide_flush_count);
+    }
+    *pfull = full;
+    *ppart = part;
+    *pelide = elide;
+}
+
 static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
 {
     uint16_t asked = data.host_int;

@@ -2692,7 +2706,7 @@ static uint64_t do_st16_leN(CPUState *cpu, MMULookupPageData *p,
     case MO_ATOM_WITHIN16_PAIR:
         /* Since size > 8, this is the half that must be atomic. */
-        if (!HAVE_CMPXCHG128) {
+        if (!HAVE_ATOMIC128_RW) {
             cpu_loop_exit_atomic(cpu, ra);
         }
         return store_whole_le16(p->haddr, p->size, val_le);

@@ -14,6 +14,8 @@
 extern int64_t max_delay;
 extern int64_t max_advance;

+void dump_exec_info(GString *buf);
+
 /*
  * Return true if CS is not running in parallel with other cpus, either
  * because there are no other cpus or we are within an exclusive context.

@@ -825,7 +825,7 @@ static uint64_t store_whole_le16(void *pv, int size, Int128 val_le)
     int sh = o * 8;
     Int128 m, v;

-    qemu_build_assert(HAVE_CMPXCHG128);
+    qemu_build_assert(HAVE_ATOMIC128_RW);

     /* Like MAKE_64BIT_MASK(0, sz), but larger. */
     if (sz <= 64) {

@@ -887,7 +887,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
             return;
         }
     } else if ((pi & 15) == 7) {
-        if (HAVE_CMPXCHG128) {
+        if (HAVE_ATOMIC128_RW) {
             Int128 v = int128_lshift(int128_make64(val), 56);
             Int128 m = int128_lshift(int128_make64(0xffff), 56);
             store_atom_insert_al16(pv - 7, v, m);

@@ -956,7 +956,7 @@ static void store_atom_4(CPUState *cpu, uintptr_t ra,
             return;
         }
     } else {
-        if (HAVE_CMPXCHG128) {
+        if (HAVE_ATOMIC128_RW) {
             store_whole_le16(pv, 4, int128_make64(cpu_to_le32(val)));
             return;
         }

@@ -1021,7 +1021,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
         }
         break;
     case MO_64:
-        if (HAVE_CMPXCHG128) {
+        if (HAVE_ATOMIC128_RW) {
             store_whole_le16(pv, 8, int128_make64(cpu_to_le64(val)));
             return;
         }

@@ -1076,7 +1076,7 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
         }
         break;
     case -MO_64:
-        if (HAVE_CMPXCHG128) {
+        if (HAVE_ATOMIC128_RW) {
             uint64_t val_le;
             int s2 = pi & 15;
             int s1 = 16 - s2;

@@ -1103,6 +1103,10 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
         }
         break;
     case MO_128:
+        if (HAVE_ATOMIC128_RW) {
+            atomic16_set(pv, val);
+            return;
+        }
         break;
     default:
         g_assert_not_reached();


@@ -8,7 +8,6 @@
 #include "qemu/osdep.h"
 #include "qemu/accel.h"
-#include "qemu/qht.h"
 #include "qapi/error.h"
 #include "qapi/type-helpers.h"
 #include "qapi/qapi-commands-machine.h"
@@ -18,7 +17,6 @@
 #include "sysemu/tcg.h"
 #include "tcg/tcg.h"
 #include "internal-common.h"
-#include "tb-context.h"
 
 static void dump_drift_info(GString *buf)
@@ -52,153 +50,6 @@ static void dump_accel_info(GString *buf)
                            one_insn_per_tb ? "on" : "off");
 }
 
-static void print_qht_statistics(struct qht_stats hst, GString *buf)
-{
-    uint32_t hgram_opts;
-    size_t hgram_bins;
-    char *hgram;
-
-    if (!hst.head_buckets) {
-        return;
-    }
-    g_string_append_printf(buf, "TB hash buckets %zu/%zu "
-                           "(%0.2f%% head buckets used)\n",
-                           hst.used_head_buckets, hst.head_buckets,
-                           (double)hst.used_head_buckets /
-                           hst.head_buckets * 100);
-
-    hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
-    hgram_opts |= QDIST_PR_100X | QDIST_PR_PERCENT;
-    if (qdist_xmax(&hst.occupancy) - qdist_xmin(&hst.occupancy) == 1) {
-        hgram_opts |= QDIST_PR_NODECIMAL;
-    }
-    hgram = qdist_pr(&hst.occupancy, 10, hgram_opts);
-    g_string_append_printf(buf, "TB hash occupancy %0.2f%% avg chain occ. "
-                           "Histogram: %s\n",
-                           qdist_avg(&hst.occupancy) * 100, hgram);
-    g_free(hgram);
-
-    hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
-    hgram_bins = qdist_xmax(&hst.chain) - qdist_xmin(&hst.chain);
-    if (hgram_bins > 10) {
-        hgram_bins = 10;
-    } else {
-        hgram_bins = 0;
-        hgram_opts |= QDIST_PR_NODECIMAL | QDIST_PR_NOBINRANGE;
-    }
-    hgram = qdist_pr(&hst.chain, hgram_bins, hgram_opts);
-    g_string_append_printf(buf, "TB hash avg chain %0.3f buckets. "
-                           "Histogram: %s\n",
-                           qdist_avg(&hst.chain), hgram);
-    g_free(hgram);
-}
-
-struct tb_tree_stats {
-    size_t nb_tbs;
-    size_t host_size;
-    size_t target_size;
-    size_t max_target_size;
-    size_t direct_jmp_count;
-    size_t direct_jmp2_count;
-    size_t cross_page;
-};
-
-static gboolean tb_tree_stats_iter(gpointer key, gpointer value, gpointer data)
-{
-    const TranslationBlock *tb = value;
-    struct tb_tree_stats *tst = data;
-
-    tst->nb_tbs++;
-    tst->host_size += tb->tc.size;
-    tst->target_size += tb->size;
-    if (tb->size > tst->max_target_size) {
-        tst->max_target_size = tb->size;
-    }
-    if (tb->page_addr[1] != -1) {
-        tst->cross_page++;
-    }
-    if (tb->jmp_reset_offset[0] != TB_JMP_OFFSET_INVALID) {
-        tst->direct_jmp_count++;
-        if (tb->jmp_reset_offset[1] != TB_JMP_OFFSET_INVALID) {
-            tst->direct_jmp2_count++;
-        }
-    }
-    return false;
-}
-
-static void tlb_flush_counts(size_t *pfull, size_t *ppart, size_t *pelide)
-{
-    CPUState *cpu;
-    size_t full = 0, part = 0, elide = 0;
-
-    CPU_FOREACH(cpu) {
-        full += qatomic_read(&cpu->neg.tlb.c.full_flush_count);
-        part += qatomic_read(&cpu->neg.tlb.c.part_flush_count);
-        elide += qatomic_read(&cpu->neg.tlb.c.elide_flush_count);
-    }
-    *pfull = full;
-    *ppart = part;
-    *pelide = elide;
-}
-
-static void tcg_dump_info(GString *buf)
-{
-    g_string_append_printf(buf, "[TCG profiler not compiled]\n");
-}
-
-static void dump_exec_info(GString *buf)
-{
-    struct tb_tree_stats tst = {};
-    struct qht_stats hst;
-    size_t nb_tbs, flush_full, flush_part, flush_elide;
-
-    tcg_tb_foreach(tb_tree_stats_iter, &tst);
-    nb_tbs = tst.nb_tbs;
-    /* XXX: avoid using doubles ? */
-    g_string_append_printf(buf, "Translation buffer state:\n");
-    /*
-     * Report total code size including the padding and TB structs;
-     * otherwise users might think "-accel tcg,tb-size" is not honoured.
-     * For avg host size we use the precise numbers from tb_tree_stats though.
-     */
-    g_string_append_printf(buf, "gen code size %zu/%zu\n",
-                           tcg_code_size(), tcg_code_capacity());
-    g_string_append_printf(buf, "TB count %zu\n", nb_tbs);
-    g_string_append_printf(buf, "TB avg target size %zu max=%zu bytes\n",
-                           nb_tbs ? tst.target_size / nb_tbs : 0,
-                           tst.max_target_size);
-    g_string_append_printf(buf, "TB avg host size %zu bytes "
-                           "(expansion ratio: %0.1f)\n",
-                           nb_tbs ? tst.host_size / nb_tbs : 0,
-                           tst.target_size ?
-                           (double)tst.host_size / tst.target_size : 0);
-    g_string_append_printf(buf, "cross page TB count %zu (%zu%%)\n",
-                           tst.cross_page,
-                           nb_tbs ? (tst.cross_page * 100) / nb_tbs : 0);
-    g_string_append_printf(buf, "direct jump count %zu (%zu%%) "
-                           "(2 jumps=%zu %zu%%)\n",
-                           tst.direct_jmp_count,
-                           nb_tbs ? (tst.direct_jmp_count * 100) / nb_tbs : 0,
-                           tst.direct_jmp2_count,
-                           nb_tbs ? (tst.direct_jmp2_count * 100) / nb_tbs : 0);
-
-    qht_statistics_init(&tb_ctx.htable, &hst);
-    print_qht_statistics(hst, buf);
-    qht_statistics_destroy(&hst);
-
-    g_string_append_printf(buf, "\nStatistics:\n");
-    g_string_append_printf(buf, "TB flush count %u\n",
-                           qatomic_read(&tb_ctx.tb_flush_count));
-    g_string_append_printf(buf, "TB invalidate count %u\n",
-                           qatomic_read(&tb_ctx.tb_phys_invalidate_count));
-
-    tlb_flush_counts(&flush_full, &flush_part, &flush_elide);
-    g_string_append_printf(buf, "TLB full flushes %zu\n", flush_full);
-    g_string_append_printf(buf, "TLB partial flushes %zu\n", flush_part);
-    g_string_append_printf(buf, "TLB elided flushes %zu\n", flush_elide);
-    tcg_dump_info(buf);
-}
-
 HumanReadableText *qmp_x_query_jit(Error **errp)
 {
     g_autoptr(GString) buf = g_string_new("");
@@ -215,11 +66,6 @@ HumanReadableText *qmp_x_query_jit(Error **errp)
     return human_readable_text_from_str(buf);
 }
 
-static void tcg_dump_op_count(GString *buf)
-{
-    g_string_append_printf(buf, "[TCG profiler not compiled]\n");
-}
-
 HumanReadableText *qmp_x_query_opcount(Error **errp)
 {
     g_autoptr(GString) buf = g_string_new("");


@@ -34,7 +34,6 @@
 #include "qemu/timer.h"
 #include "exec/exec-all.h"
 #include "exec/hwaddr.h"
-#include "exec/tb-flush.h"
 #include "exec/gdbstub.h"
 
 #include "tcg-accel-ops.h"
@@ -78,13 +77,6 @@ int tcg_cpus_exec(CPUState *cpu)
     return ret;
 }
 
-static void tcg_cpu_reset_hold(CPUState *cpu)
-{
-    tcg_flush_jmp_cache(cpu);
-    tlb_flush(cpu);
-}
-
 /* mask must never be zero, except for A20 change call */
 void tcg_handle_interrupt(CPUState *cpu, int mask)
 {
@@ -213,7 +205,6 @@ static void tcg_accel_ops_init(AccelOpsClass *ops)
         }
     }
 
-    ops->cpu_reset_hold = tcg_cpu_reset_hold;
     ops->supports_guest_debug = tcg_supports_guest_debug;
     ops->insert_breakpoint = tcg_insert_breakpoint;
     ops->remove_breakpoint = tcg_remove_breakpoint;


@@ -645,6 +645,133 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
     cpu_loop_exit_noexc(cpu);
 }
 
+static void print_qht_statistics(struct qht_stats hst, GString *buf)
+{
+    uint32_t hgram_opts;
+    size_t hgram_bins;
+    char *hgram;
+
+    if (!hst.head_buckets) {
+        return;
+    }
+    g_string_append_printf(buf, "TB hash buckets %zu/%zu "
+                           "(%0.2f%% head buckets used)\n",
+                           hst.used_head_buckets, hst.head_buckets,
+                           (double)hst.used_head_buckets /
+                           hst.head_buckets * 100);
+
+    hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
+    hgram_opts |= QDIST_PR_100X | QDIST_PR_PERCENT;
+    if (qdist_xmax(&hst.occupancy) - qdist_xmin(&hst.occupancy) == 1) {
+        hgram_opts |= QDIST_PR_NODECIMAL;
+    }
+    hgram = qdist_pr(&hst.occupancy, 10, hgram_opts);
+    g_string_append_printf(buf, "TB hash occupancy %0.2f%% avg chain occ. "
+                           "Histogram: %s\n",
+                           qdist_avg(&hst.occupancy) * 100, hgram);
+    g_free(hgram);
+
+    hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
+    hgram_bins = qdist_xmax(&hst.chain) - qdist_xmin(&hst.chain);
+    if (hgram_bins > 10) {
+        hgram_bins = 10;
+    } else {
+        hgram_bins = 0;
+        hgram_opts |= QDIST_PR_NODECIMAL | QDIST_PR_NOBINRANGE;
+    }
+    hgram = qdist_pr(&hst.chain, hgram_bins, hgram_opts);
+    g_string_append_printf(buf, "TB hash avg chain %0.3f buckets. "
+                           "Histogram: %s\n",
+                           qdist_avg(&hst.chain), hgram);
+    g_free(hgram);
+}
+
+struct tb_tree_stats {
+    size_t nb_tbs;
+    size_t host_size;
+    size_t target_size;
+    size_t max_target_size;
+    size_t direct_jmp_count;
+    size_t direct_jmp2_count;
+    size_t cross_page;
+};
+
+static gboolean tb_tree_stats_iter(gpointer key, gpointer value, gpointer data)
+{
+    const TranslationBlock *tb = value;
+    struct tb_tree_stats *tst = data;
+
+    tst->nb_tbs++;
+    tst->host_size += tb->tc.size;
+    tst->target_size += tb->size;
+    if (tb->size > tst->max_target_size) {
+        tst->max_target_size = tb->size;
+    }
+    if (tb_page_addr1(tb) != -1) {
+        tst->cross_page++;
+    }
+    if (tb->jmp_reset_offset[0] != TB_JMP_OFFSET_INVALID) {
+        tst->direct_jmp_count++;
+        if (tb->jmp_reset_offset[1] != TB_JMP_OFFSET_INVALID) {
+            tst->direct_jmp2_count++;
+        }
+    }
+    return false;
+}
+
+void dump_exec_info(GString *buf)
+{
+    struct tb_tree_stats tst = {};
+    struct qht_stats hst;
+    size_t nb_tbs, flush_full, flush_part, flush_elide;
+
+    tcg_tb_foreach(tb_tree_stats_iter, &tst);
+    nb_tbs = tst.nb_tbs;
+    /* XXX: avoid using doubles ? */
+    g_string_append_printf(buf, "Translation buffer state:\n");
+    /*
+     * Report total code size including the padding and TB structs;
+     * otherwise users might think "-accel tcg,tb-size" is not honoured.
+     * For avg host size we use the precise numbers from tb_tree_stats though.
+     */
+    g_string_append_printf(buf, "gen code size %zu/%zu\n",
+                           tcg_code_size(), tcg_code_capacity());
+    g_string_append_printf(buf, "TB count %zu\n", nb_tbs);
+    g_string_append_printf(buf, "TB avg target size %zu max=%zu bytes\n",
+                           nb_tbs ? tst.target_size / nb_tbs : 0,
+                           tst.max_target_size);
+    g_string_append_printf(buf, "TB avg host size %zu bytes "
+                           "(expansion ratio: %0.1f)\n",
+                           nb_tbs ? tst.host_size / nb_tbs : 0,
+                           tst.target_size ?
+                           (double)tst.host_size / tst.target_size : 0);
+    g_string_append_printf(buf, "cross page TB count %zu (%zu%%)\n",
+                           tst.cross_page,
+                           nb_tbs ? (tst.cross_page * 100) / nb_tbs : 0);
+    g_string_append_printf(buf, "direct jump count %zu (%zu%%) "
+                           "(2 jumps=%zu %zu%%)\n",
+                           tst.direct_jmp_count,
+                           nb_tbs ? (tst.direct_jmp_count * 100) / nb_tbs : 0,
+                           tst.direct_jmp2_count,
+                           nb_tbs ? (tst.direct_jmp2_count * 100) / nb_tbs : 0);
+
+    qht_statistics_init(&tb_ctx.htable, &hst);
+    print_qht_statistics(hst, buf);
+    qht_statistics_destroy(&hst);
+
+    g_string_append_printf(buf, "\nStatistics:\n");
+    g_string_append_printf(buf, "TB flush count %u\n",
+                           qatomic_read(&tb_ctx.tb_flush_count));
+    g_string_append_printf(buf, "TB invalidate count %u\n",
+                           qatomic_read(&tb_ctx.tb_phys_invalidate_count));
+
+    tlb_flush_counts(&flush_full, &flush_part, &flush_elide);
+    g_string_append_printf(buf, "TLB full flushes %zu\n", flush_full);
+    g_string_append_printf(buf, "TLB partial flushes %zu\n", flush_part);
+    g_string_append_printf(buf, "TLB elided flushes %zu\n", flush_elide);
+    tcg_dump_info(buf);
+}
+
 #else /* CONFIG_USER_ONLY */
 
 void cpu_interrupt(CPUState *cpu, int mask)
@@ -673,3 +800,11 @@ void tcg_flush_jmp_cache(CPUState *cpu)
         qatomic_set(&jc->array[i].tb, NULL);
     }
 }
+
+/* This is a wrapper for common code that can not use CONFIG_SOFTMMU */
+void tcg_flush_softmmu_tlb(CPUState *cs)
+{
+#ifdef CONFIG_SOFTMMU
+    tlb_flush(cs);
+#endif
+}


@@ -14,10 +14,6 @@ void qemu_init_vcpu(CPUState *cpu)
 {
 }
 
-void cpu_exec_reset_hold(CPUState *cpu)
-{
-}
-
 /* User mode emulation does not support record/replay yet. */
 bool replay_exception(void)


@@ -97,10 +97,6 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
         dolog ("WAVE files can not handle 32bit formats\n");
         return -1;
 
-    case AUDIO_FORMAT_F32:
-        dolog("WAVE files can not handle float formats\n");
-        return -1;
-
     default:
         abort();
     }

block.c

@@ -820,17 +820,12 @@ int bdrv_probe_blocksizes(BlockDriverState *bs, BlockSizes *bsz)
 int bdrv_probe_geometry(BlockDriverState *bs, HDGeometry *geo)
 {
     BlockDriver *drv = bs->drv;
-    BlockDriverState *filtered;
+    BlockDriverState *filtered = bdrv_filter_bs(bs);
 
     GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
 
     if (drv && drv->bdrv_probe_geometry) {
         return drv->bdrv_probe_geometry(bs, geo);
-    }
-
-    filtered = bdrv_filter_bs(bs);
-    if (filtered) {
+    } else if (filtered) {
         return bdrv_probe_geometry(filtered, geo);
     }
@@ -1707,14 +1702,12 @@ bdrv_open_driver(BlockDriverState *bs, BlockDriver *drv, const char *node_name,
     return 0;
 
 open_failed:
     bs->drv = NULL;
-
-    bdrv_graph_wrlock(NULL);
     if (bs->file != NULL) {
+        bdrv_graph_wrlock(NULL);
         bdrv_unref_child(bs, bs->file);
+        bdrv_graph_wrunlock();
         assert(!bs->file);
     }
-    bdrv_graph_wrunlock();
-
     g_free(bs->opaque);
     bs->opaque = NULL;
     return ret;
@@ -1856,12 +1849,9 @@ static int bdrv_open_common(BlockDriverState *bs, BlockBackend *file,
     Error *local_err = NULL;
     bool ro;
 
-    GLOBAL_STATE_CODE();
-
-    bdrv_graph_rdlock_main_loop();
     assert(bs->file == NULL);
     assert(options != NULL && bs->options != options);
-    bdrv_graph_rdunlock_main_loop();
+    GLOBAL_STATE_CODE();
 
     opts = qemu_opts_create(&bdrv_runtime_opts, NULL, 0, &error_abort);
     if (!qemu_opts_absorb_qdict(opts, options, errp)) {
@@ -3219,6 +3209,8 @@ BdrvChild *bdrv_root_attach_child(BlockDriverState *child_bs,
     GLOBAL_STATE_CODE();
 
+    bdrv_graph_wrlock(child_bs);
+
     child = bdrv_attach_child_common(child_bs, child_name, child_class,
                                      child_role, perm, shared_perm, opaque,
                                      tran, errp);
@@ -3231,8 +3223,9 @@ BdrvChild *bdrv_root_attach_child(BlockDriverState *child_bs,
 out:
     tran_finalize(tran, ret);
-    bdrv_schedule_unref(child_bs);
+    bdrv_graph_wrunlock();
+    bdrv_unref(child_bs);
 
     return ret < 0 ? NULL : child;
 }
@@ -3537,7 +3530,19 @@ out:
  *
  * If a backing child is already present (i.e. we're detaching a node), that
  * child node must be drained.
+ *
+ * After calling this function, the transaction @tran may only be completed
+ * while holding a writer lock for the graph.
  */
+static int GRAPH_WRLOCK
+bdrv_set_backing_noperm(BlockDriverState *bs,
+                        BlockDriverState *backing_hd,
+                        Transaction *tran, Error **errp)
+{
+    GLOBAL_STATE_CODE();
+    return bdrv_set_file_or_backing_noperm(bs, backing_hd, true, tran, errp);
+}
+
 int bdrv_set_backing_hd_drained(BlockDriverState *bs,
                                 BlockDriverState *backing_hd,
                                 Error **errp)
@@ -3550,8 +3555,9 @@ int bdrv_set_backing_hd_drained(BlockDriverState *bs,
     if (bs->backing) {
         assert(bs->backing->bs->quiesce_counter > 0);
     }
 
-    ret = bdrv_set_file_or_backing_noperm(bs, backing_hd, true, tran, errp);
+    bdrv_graph_wrlock(backing_hd);
+    ret = bdrv_set_backing_noperm(bs, backing_hd, tran, errp);
     if (ret < 0) {
         goto out;
     }
@@ -3559,25 +3565,20 @@ int bdrv_set_backing_hd_drained(BlockDriverState *bs,
     ret = bdrv_refresh_perms(bs, tran, errp);
 out:
     tran_finalize(tran, ret);
+    bdrv_graph_wrunlock();
 
     return ret;
 }
 
 int bdrv_set_backing_hd(BlockDriverState *bs, BlockDriverState *backing_hd,
                         Error **errp)
 {
-    BlockDriverState *drain_bs;
+    BlockDriverState *drain_bs = bs->backing ? bs->backing->bs : bs;
     int ret;
 
     GLOBAL_STATE_CODE();
 
-    bdrv_graph_rdlock_main_loop();
-    drain_bs = bs->backing ? bs->backing->bs : bs;
-    bdrv_graph_rdunlock_main_loop();
-
     bdrv_ref(drain_bs);
     bdrv_drained_begin(drain_bs);
-    bdrv_graph_wrlock(backing_hd);
     ret = bdrv_set_backing_hd_drained(bs, backing_hd, errp);
-    bdrv_graph_wrunlock();
     bdrv_drained_end(drain_bs);
     bdrv_unref(drain_bs);
@@ -3611,7 +3612,6 @@ int bdrv_open_backing_file(BlockDriverState *bs, QDict *parent_options,
     Error *local_err = NULL;
 
     GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
 
     if (bs->backing != NULL) {
         goto free_exit;
@@ -3653,7 +3653,10 @@ int bdrv_open_backing_file(BlockDriverState *bs, QDict *parent_options,
         implicit_backing = !strcmp(bs->auto_backing_file, bs->backing_file);
     }
 
+    bdrv_graph_rdlock_main_loop();
     backing_filename = bdrv_get_full_backing_filename(bs, &local_err);
+    bdrv_graph_rdunlock_main_loop();
+
     if (local_err) {
         ret = -EINVAL;
         error_propagate(errp, local_err);
@@ -3684,7 +3687,9 @@ int bdrv_open_backing_file(BlockDriverState *bs, QDict *parent_options,
     }
 
     if (implicit_backing) {
+        bdrv_graph_rdlock_main_loop();
         bdrv_refresh_filename(backing_hd);
+        bdrv_graph_rdunlock_main_loop();
         pstrcpy(bs->auto_backing_file, sizeof(bs->auto_backing_file),
                 backing_hd->filename);
     }
@@ -4755,8 +4760,8 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
 {
     BlockDriverState *bs = reopen_state->bs;
     BlockDriverState *new_child_bs;
-    BlockDriverState *old_child_bs;
+    BlockDriverState *old_child_bs = is_backing ? child_bs(bs->backing) :
+                                                  child_bs(bs->file);
     const char *child_name = is_backing ? "backing" : "file";
     QObject *value;
     const char *str;
@@ -4771,8 +4776,6 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
         return 0;
     }
 
-    bdrv_graph_rdlock_main_loop();
-
     switch (qobject_type(value)) {
     case QTYPE_QNULL:
         assert(is_backing); /* The 'file' option does not allow a null value */
@@ -4782,16 +4785,17 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
         str = qstring_get_str(qobject_to(QString, value));
         new_child_bs = bdrv_lookup_bs(NULL, str, errp);
         if (new_child_bs == NULL) {
-            ret = -EINVAL;
-            goto out_rdlock;
+            return -EINVAL;
         }
 
+        bdrv_graph_rdlock_main_loop();
         has_child = bdrv_recurse_has_child(new_child_bs, bs);
+        bdrv_graph_rdunlock_main_loop();
+
         if (has_child) {
             error_setg(errp, "Making '%s' a %s child of '%s' would create a "
                        "cycle", str, child_name, bs->node_name);
-            ret = -EINVAL;
-            goto out_rdlock;
+            return -EINVAL;
         }
         break;
     default:
@@ -4802,23 +4806,19 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
         g_assert_not_reached();
     }
 
-    old_child_bs = is_backing ? child_bs(bs->backing) : child_bs(bs->file);
     if (old_child_bs == new_child_bs) {
-        ret = 0;
-        goto out_rdlock;
+        return 0;
     }
 
     if (old_child_bs) {
         if (bdrv_skip_implicit_filters(old_child_bs) == new_child_bs) {
-            ret = 0;
-            goto out_rdlock;
+            return 0;
         }
 
         if (old_child_bs->implicit) {
             error_setg(errp, "Cannot replace implicit %s child of %s",
                        child_name, bs->node_name);
-            ret = -EPERM;
-            goto out_rdlock;
+            return -EPERM;
         }
     }
 
@@ -4829,8 +4829,7 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
          */
         error_setg(errp, "'%s' is a %s filter node that does not support a "
                    "%s child", bs->node_name, bs->drv->format_name, child_name);
-        ret = -EINVAL;
-        goto out_rdlock;
+        return -EINVAL;
     }
 
     if (is_backing) {
@@ -4851,7 +4850,6 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
         aio_context_acquire(ctx);
     }
 
-    bdrv_graph_rdunlock_main_loop();
     bdrv_graph_wrlock(new_child_bs);
 
     ret = bdrv_set_file_or_backing_noperm(bs, new_child_bs, is_backing,
@@ -4870,10 +4868,6 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
     }
 
     return ret;
-
-out_rdlock:
-    bdrv_graph_rdunlock_main_loop();
-    return ret;
 }
 
 /*
@@ -5014,16 +5008,13 @@ bdrv_reopen_prepare(BDRVReopenState *reopen_state, BlockReopenQueue *queue,
      * file or if the image file has a backing file name as part of
      * its metadata. Otherwise the 'backing' option can be omitted.
      */
-    bdrv_graph_rdlock_main_loop();
     if (drv->supports_backing && reopen_state->backing_missing &&
         (reopen_state->bs->backing || reopen_state->bs->backing_file[0])) {
         error_setg(errp, "backing is missing for '%s'",
                    reopen_state->bs->node_name);
-        bdrv_graph_rdunlock_main_loop();
         ret = -EINVAL;
         goto error;
     }
-    bdrv_graph_rdunlock_main_loop();
 
     /*
      * Allow changing the 'backing' option. The new value can be
@@ -5213,11 +5204,10 @@ static void bdrv_close(BlockDriverState *bs)
     QLIST_FOREACH_SAFE(child, &bs->children, next, next) {
         bdrv_unref_child(bs, child);
     }
-    bdrv_graph_wrunlock();
 
     assert(!bs->backing);
     assert(!bs->file);
+    bdrv_graph_wrunlock();
     g_free(bs->opaque);
     bs->opaque = NULL;
     qatomic_set(&bs->copy_on_read, 0);
@@ -5422,9 +5412,6 @@ bdrv_replace_node_noperm(BlockDriverState *from,
 }
 
 /*
- * Switch all parents of @from to point to @to instead. @from and @to must be in
- * the same AioContext and both must be drained.
- *
  * With auto_skip=true bdrv_replace_node_common skips updating from parents
  * if it creates a parent-child relation loop or if parent is block-job.
  *
@@ -5434,9 +5421,10 @@ bdrv_replace_node_noperm(BlockDriverState *from,
 * With @detach_subchain=true @to must be in a backing chain of @from. In this
 * case backing link of the cow-parent of @to is removed.
 */
-static int GRAPH_WRLOCK
-bdrv_replace_node_common(BlockDriverState *from, BlockDriverState *to,
-                         bool auto_skip, bool detach_subchain, Error **errp)
+static int bdrv_replace_node_common(BlockDriverState *from,
+                                    BlockDriverState *to,
+                                    bool auto_skip, bool detach_subchain,
+                                    Error **errp)
 {
     Transaction *tran = tran_new();
     g_autoptr(GSList) refresh_list = NULL;
@@ -5445,10 +5433,6 @@ bdrv_replace_node_common(BlockDriverState *from, BlockDriverState *to,
     GLOBAL_STATE_CODE();
 
-    assert(from->quiesce_counter);
-    assert(to->quiesce_counter);
-    assert(bdrv_get_aio_context(from) == bdrv_get_aio_context(to));
-
     if (detach_subchain) {
         assert(bdrv_chain_contains(from, to));
         assert(from != to);
@@ -5460,6 +5444,17 @@ bdrv_replace_node_common(BlockDriverState *from, BlockDriverState *to,
         }
     }
 
+    /* Make sure that @from doesn't go away until we have successfully attached
+     * all of its parents to @to. */
+    bdrv_ref(from);
+
+    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
+    assert(bdrv_get_aio_context(from) == bdrv_get_aio_context(to));
+    bdrv_drained_begin(from);
+    bdrv_drained_begin(to);
+
+    bdrv_graph_wrlock(to);
+
     /*
      * Do the replacement without permission update.
     * Replacement may influence the permissions, we should calculate new
@@ -5488,33 +5483,29 @@ bdrv_replace_node_common(BlockDriverState *from, BlockDriverState *to,
 out:
     tran_finalize(tran, ret);
 
+    bdrv_graph_wrunlock();
+    bdrv_drained_end(to);
+    bdrv_drained_end(from);
+    bdrv_unref(from);
+
     return ret;
 }
 
 int bdrv_replace_node(BlockDriverState *from, BlockDriverState *to,
                       Error **errp)
 {
-    GLOBAL_STATE_CODE();
-
     return bdrv_replace_node_common(from, to, true, false, errp);
 }
 
 int bdrv_drop_filter(BlockDriverState *bs, Error **errp)
 {
-    BlockDriverState *child_bs;
-    int ret;
-
     GLOBAL_STATE_CODE();
 
-    bdrv_graph_rdlock_main_loop();
-    child_bs = bdrv_filter_or_cow_bs(bs);
-    bdrv_graph_rdunlock_main_loop();
-
-    bdrv_drained_begin(child_bs);
-    bdrv_graph_wrlock(bs);
-    ret = bdrv_replace_node_common(bs, child_bs, true, true, errp);
-    bdrv_graph_wrunlock();
-    bdrv_drained_end(child_bs);
-
-    return ret;
+    return bdrv_replace_node_common(bs, bdrv_filter_or_cow_bs(bs), true, true,
+                                    errp);
 }
 
 /*
@@ -5541,9 +5532,7 @@ int bdrv_append(BlockDriverState *bs_new, BlockDriverState *bs_top,
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
bdrv_graph_rdlock_main_loop();
assert(!bs_new->backing); assert(!bs_new->backing);
bdrv_graph_rdunlock_main_loop();
old_context = bdrv_get_aio_context(bs_top); old_context = bdrv_get_aio_context(bs_top);
bdrv_drained_begin(bs_top); bdrv_drained_begin(bs_top);
@@ -5711,19 +5700,9 @@ BlockDriverState *bdrv_insert_node(BlockDriverState *bs, QDict *options,
goto fail; goto fail;
} }
/*
* Make sure that @bs doesn't go away until we have successfully attached
* all of its parents to @new_node_bs and undrained it again.
*/
bdrv_ref(bs);
bdrv_drained_begin(bs); bdrv_drained_begin(bs);
bdrv_drained_begin(new_node_bs);
bdrv_graph_wrlock(new_node_bs);
ret = bdrv_replace_node(bs, new_node_bs, errp); ret = bdrv_replace_node(bs, new_node_bs, errp);
bdrv_graph_wrunlock();
bdrv_drained_end(new_node_bs);
bdrv_drained_end(bs); bdrv_drained_end(bs);
bdrv_unref(bs);
if (ret < 0) { if (ret < 0) {
error_prepend(errp, "Could not replace node: "); error_prepend(errp, "Could not replace node: ");
@@ -5769,14 +5748,13 @@ int coroutine_fn bdrv_co_check(BlockDriverState *bs,
* image file header * image file header
* -ENOTSUP - format driver doesn't support changing the backing file * -ENOTSUP - format driver doesn't support changing the backing file
*/ */
int coroutine_fn int bdrv_change_backing_file(BlockDriverState *bs, const char *backing_file,
bdrv_co_change_backing_file(BlockDriverState *bs, const char *backing_file, const char *backing_fmt, bool require)
const char *backing_fmt, bool require)
{ {
BlockDriver *drv = bs->drv; BlockDriver *drv = bs->drv;
int ret; int ret;
IO_CODE(); GLOBAL_STATE_CODE();
if (!drv) { if (!drv) {
return -ENOMEDIUM; return -ENOMEDIUM;
@@ -5791,8 +5769,8 @@ bdrv_co_change_backing_file(BlockDriverState *bs, const char *backing_file,
return -EINVAL; return -EINVAL;
} }
if (drv->bdrv_co_change_backing_file != NULL) { if (drv->bdrv_change_backing_file != NULL) {
ret = drv->bdrv_co_change_backing_file(bs, backing_file, backing_fmt); ret = drv->bdrv_change_backing_file(bs, backing_file, backing_fmt);
} else { } else {
ret = -ENOTSUP; ret = -ENOTSUP;
} }
@@ -5849,9 +5827,8 @@ BlockDriverState *bdrv_find_base(BlockDriverState *bs)
* between @bs and @base is frozen. @errp is set if that's the case. * between @bs and @base is frozen. @errp is set if that's the case.
* @base must be reachable from @bs, or NULL. * @base must be reachable from @bs, or NULL.
*/ */
static bool GRAPH_RDLOCK bool bdrv_is_backing_chain_frozen(BlockDriverState *bs, BlockDriverState *base,
bdrv_is_backing_chain_frozen(BlockDriverState *bs, BlockDriverState *base, Error **errp)
Error **errp)
{ {
BlockDriverState *i; BlockDriverState *i;
BdrvChild *child; BdrvChild *child;
@@ -5975,15 +5952,15 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
     bdrv_ref(top);
     bdrv_drained_begin(base);
-    bdrv_graph_wrlock(base);
+    bdrv_graph_rdlock_main_loop();
 
     if (!top->drv || !base->drv) {
-        goto exit_wrlock;
+        goto exit;
     }
 
     /* Make sure that base is in the backing chain of top */
     if (!bdrv_chain_contains(top, base)) {
-        goto exit_wrlock;
+        goto exit;
     }
 
     /* If 'base' recursively inherits from 'top' then we should set
@@ -6015,8 +5992,6 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
      * That's a FIXME.
      */
     bdrv_replace_node_common(top, base, false, false, &local_err);
-    bdrv_graph_wrunlock();
-
     if (local_err) {
         error_report_err(local_err);
         goto exit;
@@ -6049,11 +6024,8 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
     }
 
     ret = 0;
-    goto exit;
 
-exit_wrlock:
-    bdrv_graph_wrunlock();
 exit:
+    bdrv_graph_rdunlock_main_loop();
     bdrv_drained_end(base);
     bdrv_unref(top);
     return ret;
@@ -6615,7 +6587,7 @@ int bdrv_has_zero_init_1(BlockDriverState *bs)
     return 1;
 }
 
-int coroutine_mixed_fn bdrv_has_zero_init(BlockDriverState *bs)
+int bdrv_has_zero_init(BlockDriverState *bs)
 {
     BlockDriverState *filtered;
     GLOBAL_STATE_CODE();
@@ -8128,7 +8100,7 @@ static bool append_strong_runtime_options(QDict *d, BlockDriverState *bs)
 /* Note: This function may return false positives; it may return true
  * even if opening the backing file specified by bs's image header
  * would result in exactly bs->backing. */
-static bool GRAPH_RDLOCK bdrv_backing_overridden(BlockDriverState *bs)
+static bool bdrv_backing_overridden(BlockDriverState *bs)
 {
     GLOBAL_STATE_CODE();
 
     if (bs->backing) {
@@ -8502,8 +8474,8 @@ BdrvChild *bdrv_primary_child(BlockDriverState *bs)
     return found;
 }
 
-static BlockDriverState * GRAPH_RDLOCK
-bdrv_do_skip_filters(BlockDriverState *bs, bool stop_on_explicit_filter)
+static BlockDriverState *bdrv_do_skip_filters(BlockDriverState *bs,
+                                              bool stop_on_explicit_filter)
 {
     BdrvChild *c;

View File

@@ -374,6 +374,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     assert(bs);
     assert(target);
     GLOBAL_STATE_CODE();
+    GRAPH_RDLOCK_GUARD_MAINLOOP();
 
     /* QMP interface protects us from these cases */
     assert(sync_mode != MIRROR_SYNC_MODE_INCREMENTAL);
@@ -384,33 +385,31 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
         return NULL;
     }
 
-    bdrv_graph_rdlock_main_loop();
     if (!bdrv_is_inserted(bs)) {
         error_setg(errp, "Device is not inserted: %s",
                    bdrv_get_device_name(bs));
-        goto error_rdlock;
+        return NULL;
     }
 
     if (!bdrv_is_inserted(target)) {
         error_setg(errp, "Device is not inserted: %s",
                    bdrv_get_device_name(target));
-        goto error_rdlock;
+        return NULL;
     }
 
     if (compress && !bdrv_supports_compressed_writes(target)) {
         error_setg(errp, "Compression is not supported for this drive %s",
                    bdrv_get_device_name(target));
-        goto error_rdlock;
+        return NULL;
     }
 
     if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) {
-        goto error_rdlock;
+        return NULL;
     }
 
     if (bdrv_op_is_blocked(target, BLOCK_OP_TYPE_BACKUP_TARGET, errp)) {
-        goto error_rdlock;
+        return NULL;
     }
-    bdrv_graph_rdunlock_main_loop();
 
     if (perf->max_workers < 1 || perf->max_workers > INT_MAX) {
         error_setg(errp, "max-workers must be between 1 and %d", INT_MAX);
@@ -438,7 +437,6 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     len = bdrv_getlength(bs);
     if (len < 0) {
-        GRAPH_RDLOCK_GUARD_MAINLOOP();
         error_setg_errno(errp, -len, "Unable to get length for '%s'",
                          bdrv_get_device_or_node_name(bs));
         goto error;
@@ -446,7 +444,6 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     target_len = bdrv_getlength(target);
     if (target_len < 0) {
-        GRAPH_RDLOCK_GUARD_MAINLOOP();
         error_setg_errno(errp, -target_len, "Unable to get length for '%s'",
                          bdrv_get_device_or_node_name(bs));
         goto error;
@@ -496,10 +493,8 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     block_copy_set_speed(bcs, speed);
 
     /* Required permissions are taken by copy-before-write filter target */
-    bdrv_graph_wrlock(target);
     block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
                        &error_abort);
-    bdrv_graph_wrunlock();
 
     return &job->common;
@@ -512,8 +507,4 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     }
 
     return NULL;
-
-error_rdlock:
-    bdrv_graph_rdunlock_main_loop();
-    return NULL;
 }

View File

@@ -508,8 +508,6 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
         goto out;
     }
 
-    bdrv_graph_rdlock_main_loop();
-
     bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED |
         (BDRV_REQ_FUA & bs->file->bs->supported_write_flags);
     bs->supported_zero_flags = BDRV_REQ_WRITE_UNCHANGED |
@@ -522,7 +520,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
     if (s->align && (s->align >= INT_MAX || !is_power_of_2(s->align))) {
         error_setg(errp, "Cannot meet constraints with align %" PRIu64,
                    s->align);
-        goto out_rdlock;
+        goto out;
     }
     align = MAX(s->align, bs->file->bs->bl.request_alignment);
@@ -532,7 +530,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
         !QEMU_IS_ALIGNED(s->max_transfer, align))) {
         error_setg(errp, "Cannot meet constraints with max-transfer %" PRIu64,
                    s->max_transfer);
-        goto out_rdlock;
+        goto out;
     }
     s->opt_write_zero = qemu_opt_get_size(opts, "opt-write-zero", 0);
@@ -541,7 +539,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
         !QEMU_IS_ALIGNED(s->opt_write_zero, align))) {
         error_setg(errp, "Cannot meet constraints with opt-write-zero %" PRIu64,
                    s->opt_write_zero);
-        goto out_rdlock;
+        goto out;
     }
     s->max_write_zero = qemu_opt_get_size(opts, "max-write-zero", 0);
@@ -551,7 +549,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
         MAX(s->opt_write_zero, align)))) {
         error_setg(errp, "Cannot meet constraints with max-write-zero %" PRIu64,
                    s->max_write_zero);
-        goto out_rdlock;
+        goto out;
     }
     s->opt_discard = qemu_opt_get_size(opts, "opt-discard", 0);
@@ -560,7 +558,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
         !QEMU_IS_ALIGNED(s->opt_discard, align))) {
         error_setg(errp, "Cannot meet constraints with opt-discard %" PRIu64,
                    s->opt_discard);
-        goto out_rdlock;
+        goto out;
     }
     s->max_discard = qemu_opt_get_size(opts, "max-discard", 0);
@@ -570,14 +568,12 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
         MAX(s->opt_discard, align)))) {
         error_setg(errp, "Cannot meet constraints with max-discard %" PRIu64,
                    s->max_discard);
-        goto out_rdlock;
+        goto out;
     }
 
     bdrv_debug_event(bs, BLKDBG_NONE);
 
     ret = 0;
-out_rdlock:
-    bdrv_graph_rdunlock_main_loop();
 out:
     if (ret < 0) {
         qemu_mutex_destroy(&s->lock);
@@ -750,10 +746,13 @@ blkdebug_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
     return bdrv_co_pdiscard(bs->file, offset, bytes);
 }
 
-static int coroutine_fn GRAPH_RDLOCK
-blkdebug_co_block_status(BlockDriverState *bs, bool want_zero, int64_t offset,
-                         int64_t bytes, int64_t *pnum, int64_t *map,
-                         BlockDriverState **file)
+static int coroutine_fn blkdebug_co_block_status(BlockDriverState *bs,
+                                                 bool want_zero,
+                                                 int64_t offset,
+                                                 int64_t bytes,
+                                                 int64_t *pnum,
+                                                 int64_t *map,
+                                                 BlockDriverState **file)
 {
     int err;
@@ -974,7 +973,7 @@ blkdebug_co_getlength(BlockDriverState *bs)
     return bdrv_co_getlength(bs->file->bs);
 }
 
-static void GRAPH_RDLOCK blkdebug_refresh_filename(BlockDriverState *bs)
+static void blkdebug_refresh_filename(BlockDriverState *bs)
 {
     BDRVBlkdebugState *s = bs->opaque;
     const QDictEntry *e;

View File

@@ -130,13 +130,7 @@ static int coroutine_fn GRAPH_RDLOCK blkreplay_co_flush(BlockDriverState *bs)
 static int blkreplay_snapshot_goto(BlockDriverState *bs,
                                    const char *snapshot_id)
 {
-    BlockDriverState *file_bs;
-
-    bdrv_graph_rdlock_main_loop();
-    file_bs = bs->file->bs;
-    bdrv_graph_rdunlock_main_loop();
-
-    return bdrv_snapshot_goto(file_bs, snapshot_id, NULL);
+    return bdrv_snapshot_goto(bs->file->bs, snapshot_id, NULL);
 }
 
 static BlockDriver bdrv_blkreplay = {

View File

@@ -33,8 +33,8 @@ typedef struct BlkverifyRequest {
     uint64_t bytes;
     int flags;
 
-    int GRAPH_RDLOCK_PTR (*request_fn)(
-        BdrvChild *, int64_t, int64_t, QEMUIOVector *, BdrvRequestFlags);
+    int (*request_fn)(BdrvChild *, int64_t, int64_t, QEMUIOVector *,
+                      BdrvRequestFlags);
 
     int ret;                    /* test image result */
     int raw_ret;                /* raw image result */
@@ -170,11 +170,8 @@ static void coroutine_fn blkverify_do_test_req(void *opaque)
     BlkverifyRequest *r = opaque;
     BDRVBlkverifyState *s = r->bs->opaque;
 
-    bdrv_graph_co_rdlock();
     r->ret = r->request_fn(s->test_file, r->offset, r->bytes, r->qiov,
                            r->flags);
-    bdrv_graph_co_rdunlock();
-
     r->done++;
     qemu_coroutine_enter_if_inactive(r->co);
 }
@@ -183,16 +180,13 @@ static void coroutine_fn blkverify_do_raw_req(void *opaque)
 {
     BlkverifyRequest *r = opaque;
 
-    bdrv_graph_co_rdlock();
     r->raw_ret = r->request_fn(r->bs->file, r->offset, r->bytes, r->raw_qiov,
                                r->flags);
-    bdrv_graph_co_rdunlock();
-
     r->done++;
     qemu_coroutine_enter_if_inactive(r->co);
 }
 
-static int coroutine_fn GRAPH_RDLOCK
+static int coroutine_fn
 blkverify_co_prwv(BlockDriverState *bs, BlkverifyRequest *r, uint64_t offset,
                   uint64_t bytes, QEMUIOVector *qiov, QEMUIOVector *raw_qiov,
                   int flags, bool is_write)
@@ -228,7 +222,7 @@ blkverify_co_prwv(BlockDriverState *bs, BlkverifyRequest *r, uint64_t offset,
     return r->ret;
 }
 
-static int coroutine_fn GRAPH_RDLOCK
+static int coroutine_fn
 blkverify_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
                     QEMUIOVector *qiov, BdrvRequestFlags flags)
 {
@@ -257,7 +251,7 @@ blkverify_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
     return ret;
 }
 
-static int coroutine_fn GRAPH_RDLOCK
+static int coroutine_fn
 blkverify_co_pwritev(BlockDriverState *bs, int64_t offset, int64_t bytes,
                      QEMUIOVector *qiov, BdrvRequestFlags flags)
 {
@@ -288,7 +282,7 @@ blkverify_recurse_can_replace(BlockDriverState *bs,
            bdrv_recurse_can_replace(s->test_file->bs, to_replace);
 }
 
-static void GRAPH_RDLOCK blkverify_refresh_filename(BlockDriverState *bs)
+static void blkverify_refresh_filename(BlockDriverState *bs)
 {
     BDRVBlkverifyState *s = bs->opaque;

View File

@@ -931,12 +931,10 @@ int blk_insert_bs(BlockBackend *blk, BlockDriverState *bs, Error **errp)
     ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
     GLOBAL_STATE_CODE();
     bdrv_ref(bs);
-    bdrv_graph_wrlock(bs);
     blk->root = bdrv_root_attach_child(bs, "root", &child_root,
                                        BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
                                        blk->perm, blk->shared_perm,
                                        blk, errp);
-    bdrv_graph_wrunlock();
     if (blk->root == NULL) {
         return -EPERM;
     }
@@ -2668,8 +2666,6 @@ int blk_load_vmstate(BlockBackend *blk, uint8_t *buf, int64_t pos, int size)
 int blk_probe_blocksizes(BlockBackend *blk, BlockSizes *bsz)
 {
     GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
-
     if (!blk_is_available(blk)) {
         return -ENOMEDIUM;
     }
@@ -2730,7 +2726,6 @@ int blk_commit_all(void)
 {
     BlockBackend *blk = NULL;
     GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
 
     while ((blk = blk_all_next(blk)) != NULL) {
         AioContext *aio_context = blk_get_aio_context(blk);

View File

@@ -313,12 +313,7 @@ static int64_t block_copy_calculate_cluster_size(BlockDriverState *target,
 {
     int ret;
     BlockDriverInfo bdi;
-    bool target_does_cow;
-
-    GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
-
-    target_does_cow = bdrv_backing_chain_next(target);
+    bool target_does_cow = bdrv_backing_chain_next(target);
 
     /*
      * If there is no backing file on the target, we cannot rely on COW if our
@@ -360,8 +355,6 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
     BdrvDirtyBitmap *copy_bitmap;
     bool is_fleecing;
 
-    GLOBAL_STATE_CODE();
-
     cluster_size = block_copy_calculate_cluster_size(target->bs, errp);
     if (cluster_size < 0) {
         return NULL;
@@ -399,9 +392,7 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
      * For more information see commit f8d59dfb40bb and test
      * tests/qemu-iotests/222
      */
-    bdrv_graph_rdlock_main_loop();
     is_fleecing = bdrv_chain_contains(target->bs, source->bs);
-    bdrv_graph_rdunlock_main_loop();
 
     s = g_new(BlockCopyState, 1);
     *s = (BlockCopyState) {

View File

@@ -105,8 +105,6 @@ static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
     struct bochs_header bochs;
     int ret;
 
-    GLOBAL_STATE_CODE();
-
     /* No write support yet */
     bdrv_graph_rdlock_main_loop();
     ret = bdrv_apply_auto_read_only(bs, NULL, errp);
@@ -120,8 +118,6 @@ static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
         return ret;
     }
 
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
-
     ret = bdrv_pread(bs->file, 0, sizeof(bochs), &bochs, 0);
     if (ret < 0) {
         return ret;

View File

@@ -67,8 +67,6 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
     uint32_t offsets_size, max_compressed_block_size = 1, i;
     int ret;
 
-    GLOBAL_STATE_CODE();
-
     bdrv_graph_rdlock_main_loop();
     ret = bdrv_apply_auto_read_only(bs, NULL, errp);
     bdrv_graph_rdunlock_main_loop();
@@ -81,8 +79,6 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
         return ret;
     }
 
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
-
     /* read header */
     ret = bdrv_pread(bs->file, 128, 4, &s->block_size, 0);
     if (ret < 0) {

View File

@@ -48,10 +48,8 @@ static int commit_prepare(Job *job)
 {
     CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);
 
-    bdrv_graph_rdlock_main_loop();
     bdrv_unfreeze_backing_chain(s->commit_top_bs, s->base_bs);
     s->chain_frozen = false;
-    bdrv_graph_rdunlock_main_loop();
 
     /* Remove base node parent that still uses BLK_PERM_WRITE/RESIZE before
      * the normal backing chain can be restored. */
@@ -68,12 +66,9 @@ static void commit_abort(Job *job)
 {
     CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);
     BlockDriverState *top_bs = blk_bs(s->top);
-    BlockDriverState *commit_top_backing_bs;
 
     if (s->chain_frozen) {
-        bdrv_graph_rdlock_main_loop();
         bdrv_unfreeze_backing_chain(s->commit_top_bs, s->base_bs);
-        bdrv_graph_rdunlock_main_loop();
     }
 
     /* Make sure commit_top_bs and top stay around until bdrv_replace_node() */
@@ -95,15 +90,8 @@ static void commit_abort(Job *job)
      * XXX Can (or should) we somehow keep 'consistent read' blocked even
      * after the failed/cancelled commit job is gone? If we already wrote
      * something to base, the intermediate images aren't valid any more. */
-    bdrv_graph_rdlock_main_loop();
-    commit_top_backing_bs = s->commit_top_bs->backing->bs;
-    bdrv_graph_rdunlock_main_loop();
-
-    bdrv_drained_begin(commit_top_backing_bs);
-    bdrv_graph_wrlock(commit_top_backing_bs);
-    bdrv_replace_node(s->commit_top_bs, commit_top_backing_bs, &error_abort);
-    bdrv_graph_wrunlock();
-    bdrv_drained_end(commit_top_backing_bs);
+    bdrv_replace_node(s->commit_top_bs, s->commit_top_bs->backing->bs,
+                      &error_abort);
 
     bdrv_unref(s->commit_top_bs);
     bdrv_unref(top_bs);
@@ -222,7 +210,7 @@ bdrv_commit_top_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
     return bdrv_co_preadv(bs->backing, offset, bytes, qiov, flags);
 }
 
-static GRAPH_RDLOCK void bdrv_commit_top_refresh_filename(BlockDriverState *bs)
+static void bdrv_commit_top_refresh_filename(BlockDriverState *bs)
 {
     pstrcpy(bs->exact_filename, sizeof(bs->exact_filename),
             bs->backing->bs->filename);
@@ -267,13 +255,10 @@ void commit_start(const char *job_id, BlockDriverState *bs,
     GLOBAL_STATE_CODE();
 
     assert(top != bs);
-    bdrv_graph_rdlock_main_loop();
     if (bdrv_skip_filters(top) == bdrv_skip_filters(base)) {
         error_setg(errp, "Invalid files for merge: top and base are the same");
-        bdrv_graph_rdunlock_main_loop();
         return;
     }
-    bdrv_graph_rdunlock_main_loop();
 
     base_size = bdrv_getlength(base);
     if (base_size < 0) {
@@ -339,7 +324,6 @@ void commit_start(const char *job_id, BlockDriverState *bs,
      * this is the responsibility of the interface (i.e. whoever calls
      * commit_start()).
      */
-    bdrv_graph_wrlock(top);
     s->base_overlay = bdrv_find_overlay(top, base);
     assert(s->base_overlay);
@@ -370,20 +354,16 @@ void commit_start(const char *job_id, BlockDriverState *bs,
         ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
                                  iter_shared_perms, errp);
         if (ret < 0) {
-            bdrv_graph_wrunlock();
            goto fail;
         }
     }
 
     if (bdrv_freeze_backing_chain(commit_top_bs, base, errp) < 0) {
-        bdrv_graph_wrunlock();
         goto fail;
     }
     s->chain_frozen = true;
 
     ret = block_job_add_bdrv(&s->common, "base", base, 0, BLK_PERM_ALL, errp);
-    bdrv_graph_wrunlock();
-
     if (ret < 0) {
         goto fail;
     }
@@ -416,9 +396,7 @@ void commit_start(const char *job_id, BlockDriverState *bs,
 
 fail:
     if (s->chain_frozen) {
-        bdrv_graph_rdlock_main_loop();
         bdrv_unfreeze_backing_chain(commit_top_bs, base);
-        bdrv_graph_rdunlock_main_loop();
     }
     if (s->base) {
         blk_unref(s->base);
@@ -433,11 +411,7 @@ fail:
     /* commit_top_bs has to be replaced after deleting the block job,
      * otherwise this would fail because of lack of permissions. */
     if (commit_top_bs) {
-        bdrv_drained_begin(top);
-        bdrv_graph_wrlock(top);
         bdrv_replace_node(commit_top_bs, top, &error_abort);
-        bdrv_graph_wrunlock();
-        bdrv_drained_end(top);
     }
 }

View File

@@ -203,7 +203,7 @@ static int coroutine_fn GRAPH_RDLOCK cbw_co_flush(BlockDriverState *bs)
  * It's guaranteed that guest writes will not interact in the region until
  * cbw_snapshot_read_unlock() called.
  */
-static BlockReq * coroutine_fn GRAPH_RDLOCK
+static coroutine_fn BlockReq *
 cbw_snapshot_read_lock(BlockDriverState *bs, int64_t offset, int64_t bytes,
                        int64_t *pnum, BdrvChild **file)
 {
@@ -335,7 +335,7 @@ cbw_co_pdiscard_snapshot(BlockDriverState *bs, int64_t offset, int64_t bytes)
     return bdrv_co_pdiscard(s->target, offset, bytes);
 }
 
-static void GRAPH_RDLOCK cbw_refresh_filename(BlockDriverState *bs)
+static void cbw_refresh_filename(BlockDriverState *bs)
 {
     pstrcpy(bs->exact_filename, sizeof(bs->exact_filename),
             bs->file->bs->filename);
@@ -433,8 +433,6 @@ static int cbw_open(BlockDriverState *bs, QDict *options, int flags,
         return -EINVAL;
     }
 
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
-
     ctx = bdrv_get_aio_context(bs);
     aio_context_acquire(ctx);

View File

@@ -35,8 +35,8 @@ typedef struct BDRVStateCOR {
 } BDRVStateCOR;
 
-static int GRAPH_UNLOCKED
-cor_open(BlockDriverState *bs, QDict *options, int flags, Error **errp)
+static int cor_open(BlockDriverState *bs, QDict *options, int flags,
+                    Error **errp)
 {
     BlockDriverState *bottom_bs = NULL;
     BDRVStateCOR *state = bs->opaque;
@@ -44,15 +44,11 @@ cor_open(BlockDriverState *bs, QDict *options, int flags, Error **errp)
     const char *bottom_node = qdict_get_try_str(options, "bottom");
     int ret;
 
-    GLOBAL_STATE_CODE();
-
     ret = bdrv_open_file_child(NULL, options, "file", bs, errp);
     if (ret < 0) {
         return ret;
     }
 
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
-
     bs->supported_read_flags = BDRV_REQ_PREFETCH;
 
     bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED |
@@ -231,17 +227,13 @@ cor_co_lock_medium(BlockDriverState *bs, bool locked)
 }
 
-static void GRAPH_UNLOCKED cor_close(BlockDriverState *bs)
+static void cor_close(BlockDriverState *bs)
 {
     BDRVStateCOR *s = bs->opaque;
 
-    GLOBAL_STATE_CODE();
-
     if (s->chain_frozen) {
-        bdrv_graph_rdlock_main_loop();
         s->chain_frozen = false;
         bdrv_unfreeze_backing_chain(bs, s->bottom_bs);
-        bdrv_graph_rdunlock_main_loop();
     }
 
     bdrv_unref(s->bottom_bs);
@@ -271,15 +263,12 @@ static BlockDriver bdrv_copy_on_read = {
 };
 
-void no_coroutine_fn bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs)
+void bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs)
 {
     BDRVStateCOR *s = cor_filter_bs->opaque;
 
-    GLOBAL_STATE_CODE();
-
     /* unfreeze, as otherwise bdrv_replace_node() will fail */
     if (s->chain_frozen) {
-        GRAPH_RDLOCK_GUARD_MAINLOOP();
         s->chain_frozen = false;
         bdrv_unfreeze_backing_chain(cor_filter_bs, s->bottom_bs);
     }

View File

@@ -27,7 +27,6 @@
 #include "block/block_int.h"
 
-void no_coroutine_fn GRAPH_UNLOCKED
-bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs);
+void bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs);
 
 #endif /* BLOCK_COPY_ON_READ_H */

View File

@@ -65,9 +65,6 @@ static int block_crypto_read_func(QCryptoBlock *block,
     BlockDriverState *bs = opaque;
     ssize_t ret;
 
-    GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
-
     ret = bdrv_pread(bs->file, offset, buflen, buf, 0);
     if (ret < 0) {
         error_setg_errno(errp, -ret, "Could not read encryption header");
@@ -86,9 +83,6 @@ static int block_crypto_write_func(QCryptoBlock *block,
     BlockDriverState *bs = opaque;
     ssize_t ret;
 
-    GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
-
     ret = bdrv_pwrite(bs->file, offset, buflen, buf, 0);
     if (ret < 0) {
         error_setg_errno(errp, -ret, "Could not write encryption header");
@@ -269,15 +263,11 @@ static int block_crypto_open_generic(QCryptoBlockFormat format,
     unsigned int cflags = 0;
     QDict *cryptoopts = NULL;
 
-    GLOBAL_STATE_CODE();
-
     ret = bdrv_open_file_child(NULL, options, "file", bs, errp);
     if (ret < 0) {
         return ret;
     }
 
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
-
     bs->supported_write_flags = BDRV_REQ_FUA &
         bs->file->bs->supported_write_flags;

View File

@@ -70,8 +70,7 @@ static int dmg_probe(const uint8_t *buf, int buf_size, const char *filename)
     return 0;
 }
-static int GRAPH_RDLOCK
-read_uint64(BlockDriverState *bs, int64_t offset, uint64_t *result)
+static int read_uint64(BlockDriverState *bs, int64_t offset, uint64_t *result)
 {
     uint64_t buffer;
     int ret;
@@ -85,8 +84,7 @@ read_uint64(BlockDriverState *bs, int64_t offset, uint64_t *result)
     return 0;
 }
-static int GRAPH_RDLOCK
-read_uint32(BlockDriverState *bs, int64_t offset, uint32_t *result)
+static int read_uint32(BlockDriverState *bs, int64_t offset, uint32_t *result)
 {
     uint32_t buffer;
     int ret;
@@ -323,9 +321,8 @@ fail:
     return ret;
 }
-static int GRAPH_RDLOCK
-dmg_read_resource_fork(BlockDriverState *bs, DmgHeaderState *ds,
-                       uint64_t info_begin, uint64_t info_length)
+static int dmg_read_resource_fork(BlockDriverState *bs, DmgHeaderState *ds,
+                                  uint64_t info_begin, uint64_t info_length)
 {
     BDRVDMGState *s = bs->opaque;
     int ret;
@@ -391,9 +388,8 @@ fail:
     return ret;
 }
-static int GRAPH_RDLOCK
-dmg_read_plist_xml(BlockDriverState *bs, DmgHeaderState *ds,
-                   uint64_t info_begin, uint64_t info_length)
+static int dmg_read_plist_xml(BlockDriverState *bs, DmgHeaderState *ds,
+                              uint64_t info_begin, uint64_t info_length)
 {
     BDRVDMGState *s = bs->opaque;
     int ret;
@@ -456,8 +452,6 @@ static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
     int64_t offset;
     int ret;
-    GLOBAL_STATE_CODE();
     bdrv_graph_rdlock_main_loop();
     ret = bdrv_apply_auto_read_only(bs, NULL, errp);
     bdrv_graph_rdunlock_main_loop();
@@ -469,9 +463,6 @@ static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
     if (ret < 0) {
         return ret;
     }
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     /*
      * NB: if uncompress submodules are absent,
      * ie block_module_load return value == 0, the function pointers
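Several hunks above swap explicit `bdrv_graph_rdlock_main_loop()`/`bdrv_graph_rdunlock_main_loop()` pairs for a function-scoped `GRAPH_RDLOCK_GUARD_MAINLOOP()`. As a rough illustration of how such a scope guard can work (this is a hypothetical toy, not QEMU's actual implementation), a guard macro can lean on the compiler's `cleanup` attribute so the unlock runs automatically when the enclosing scope exits:

```c
#include <assert.h>

static int lock_depth;                       /* toy lock state, not QEMU's */

static void graph_rdlock(void)   { lock_depth++; }
static void graph_rdunlock(void) { lock_depth--; }

/* Called by the compiler when the guard variable goes out of scope. */
static void guard_cleanup(int *unused) { (void)unused; graph_rdunlock(); }

/* Take the lock now; release it automatically at end of scope. */
#define GRAPH_RDLOCK_GUARD() \
    __attribute__((cleanup(guard_cleanup))) int guard_ = (graph_rdlock(), 0)

static int open_with_guard(void)
{
    GRAPH_RDLOCK_GUARD();        /* lock is held until this function returns */
    return lock_depth;           /* observed while the guard is live: 1 */
}
```

The appeal, visible in the hunks, is that early-return paths (like the `fail:`/`fail_unlocked:` labels being removed elsewhere in this compare) no longer need a matching unlock on every exit path.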

View File

@@ -160,6 +160,7 @@ typedef struct BDRVRawState {
     bool has_write_zeroes:1;
     bool use_linux_aio:1;
     bool use_linux_io_uring:1;
+    int64_t *offset; /* offset of zone append operation */
     int page_cache_inconsistent; /* errno from fdatasync failure */
     bool has_fallocate;
     bool needs_alignment;
@@ -2444,13 +2445,12 @@ static bool bdrv_qiov_is_aligned(BlockDriverState *bs, QEMUIOVector *qiov)
     return true;
 }
-static int coroutine_fn raw_co_prw(BlockDriverState *bs, int64_t *offset_ptr,
+static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
                                    uint64_t bytes, QEMUIOVector *qiov, int type)
 {
     BDRVRawState *s = bs->opaque;
     RawPosixAIOData acb;
     int ret;
-    uint64_t offset = *offset_ptr;
     if (fd_open(bs) < 0)
         return -EIO;
@@ -2513,8 +2513,8 @@ out:
         uint64_t *wp = &wps->wp[offset / bs->bl.zone_size];
         if (!BDRV_ZT_IS_CONV(*wp)) {
             if (type & QEMU_AIO_ZONE_APPEND) {
-                *offset_ptr = *wp;
-                trace_zbd_zone_append_complete(bs, *offset_ptr
+                *s->offset = *wp;
+                trace_zbd_zone_append_complete(bs, *s->offset
                                                >> BDRV_SECTOR_BITS);
             }
@@ -2523,10 +2523,7 @@ out:
             }
         }
     } else {
-        /*
-         * write and append write are not allowed to cross zone boundaries
-         */
-        update_zones_wp(bs, s->fd, offset, 1);
+        update_zones_wp(bs, s->fd, 0, 1);
     }
     qemu_co_mutex_unlock(&wps->colock);
@@ -2539,14 +2536,14 @@ static int coroutine_fn raw_co_preadv(BlockDriverState *bs, int64_t offset,
                                       int64_t bytes, QEMUIOVector *qiov,
                                       BdrvRequestFlags flags)
 {
-    return raw_co_prw(bs, &offset, bytes, qiov, QEMU_AIO_READ);
+    return raw_co_prw(bs, offset, bytes, qiov, QEMU_AIO_READ);
 }
 static int coroutine_fn raw_co_pwritev(BlockDriverState *bs, int64_t offset,
                                        int64_t bytes, QEMUIOVector *qiov,
                                        BdrvRequestFlags flags)
 {
-    return raw_co_prw(bs, &offset, bytes, qiov, QEMU_AIO_WRITE);
+    return raw_co_prw(bs, offset, bytes, qiov, QEMU_AIO_WRITE);
 }
 static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
@@ -3473,7 +3470,7 @@ static int coroutine_fn raw_co_zone_mgmt(BlockDriverState *bs, BlockZoneOp op,
                              len >> BDRV_SECTOR_BITS);
     ret = raw_thread_pool_submit(handle_aiocb_zone_mgmt, &acb);
     if (ret != 0) {
-        update_zones_wp(bs, s->fd, offset, nrz);
+        update_zones_wp(bs, s->fd, offset, i);
         error_report("ioctl %s failed %d", op_name, ret);
         return ret;
     }
@@ -3509,6 +3506,8 @@ static int coroutine_fn raw_co_zone_append(BlockDriverState *bs,
     int64_t zone_size_mask = bs->bl.zone_size - 1;
     int64_t iov_len = 0;
     int64_t len = 0;
+    BDRVRawState *s = bs->opaque;
+    s->offset = offset;
     if (*offset & zone_size_mask) {
         error_report("sector offset %" PRId64 " is not aligned to zone size "
@@ -3529,7 +3528,7 @@ static int coroutine_fn raw_co_zone_append(BlockDriverState *bs,
     }
     trace_zbd_zone_append(bs, *offset >> BDRV_SECTOR_BITS);
-    return raw_co_prw(bs, offset, len, qiov, QEMU_AIO_ZONE_APPEND);
+    return raw_co_prw(bs, *offset, len, qiov, QEMU_AIO_ZONE_APPEND);
 }
 #endif
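The raw_co_prw hunks above switch between two ways of reporting where a zone append actually landed: through the caller's own `offset_ptr` argument versus through driver-wide state (`s->offset`). A minimal sketch of the out-parameter style (toy names, one fake zone; not QEMU code) shows why it is self-contained per request:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of a zoned device: a single zone with a current write pointer. */
static int64_t zone_wp = 4096;

/*
 * Zone append: the device picks the actual write position. Reporting it
 * back through the caller's pointer keeps each request independent,
 * instead of funneling the result through shared driver state that
 * concurrent requests would have to coordinate on.
 */
static int zone_append(int64_t *offset_ptr, int64_t bytes)
{
    *offset_ptr = zone_wp;   /* where this append actually landed */
    zone_wp += bytes;        /* advance the write pointer */
    return 0;
}
```

With shared state, two in-flight appends could overwrite each other's result slot; with the out-parameter, each caller reads its own.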

View File

@@ -36,8 +36,6 @@ static int compress_open(BlockDriverState *bs, QDict *options, int flags,
         return ret;
     }
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     if (!bs->file->bs->drv || !block_driver_can_compress(bs->file->bs->drv)) {
         error_setg(errp,
                    "Compression is not supported for underlying format: %s",
@@ -99,8 +97,7 @@ compress_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
 }
-static void GRAPH_RDLOCK
-compress_refresh_limits(BlockDriverState *bs, Error **errp)
+static void compress_refresh_limits(BlockDriverState *bs, Error **errp)
 {
     BlockDriverInfo bdi;
     int ret;

View File

@@ -3685,8 +3685,6 @@ out:
 void bdrv_cancel_in_flight(BlockDriverState *bs)
 {
     GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     if (!bs || !bs->drv) {
         return;
     }

View File

@@ -479,7 +479,7 @@ static unsigned mirror_perform(MirrorBlockJob *s, int64_t offset,
     return bytes_handled;
 }
-static void coroutine_fn GRAPH_RDLOCK mirror_iteration(MirrorBlockJob *s)
+static void coroutine_fn mirror_iteration(MirrorBlockJob *s)
 {
     BlockDriverState *source = s->mirror_top_bs->backing->bs;
     MirrorOp *pseudo_op;
@@ -678,7 +678,6 @@ static int mirror_exit_common(Job *job)
     s->prepared = true;
     aio_context_acquire(qemu_get_aio_context());
-    bdrv_graph_rdlock_main_loop();
     mirror_top_bs = s->mirror_top_bs;
     bs_opaque = mirror_top_bs->opaque;
@@ -697,8 +696,6 @@ static int mirror_exit_common(Job *job)
     bdrv_ref(mirror_top_bs);
     bdrv_ref(target_bs);
-    bdrv_graph_rdunlock_main_loop();
     /*
      * Remove target parent that still uses BLK_PERM_WRITE/RESIZE before
      * inserting target_bs at s->to_replace, where we might not be able to get
@@ -712,12 +709,12 @@ static int mirror_exit_common(Job *job)
      * these permissions any more means that we can't allow any new requests on
      * mirror_top_bs from now on, so keep it drained. */
     bdrv_drained_begin(mirror_top_bs);
+    bdrv_drained_begin(target_bs);
     bs_opaque->stop = true;
     bdrv_graph_rdlock_main_loop();
     bdrv_child_refresh_perms(mirror_top_bs, mirror_top_bs->backing,
                              &error_abort);
-    bdrv_graph_rdunlock_main_loop();
     if (!abort && s->backing_mode == MIRROR_SOURCE_BACKING_CHAIN) {
         BlockDriverState *backing = s->is_none_mode ? src : s->base;
@@ -740,7 +737,6 @@ static int mirror_exit_common(Job *job)
             local_err = NULL;
         }
     }
-    bdrv_graph_rdunlock_main_loop();
     if (s->to_replace) {
         replace_aio_context = bdrv_get_aio_context(s->to_replace);
@@ -758,13 +754,15 @@ static int mirror_exit_common(Job *job)
         /* The mirror job has no requests in flight any more, but we need to
          * drain potential other users of the BDS before changing the graph. */
         assert(s->in_drain);
-        bdrv_drained_begin(to_replace);
+        bdrv_drained_begin(target_bs);
         /*
          * Cannot use check_to_replace_node() here, because that would
          * check for an op blocker on @to_replace, and we have our own
          * there.
-         *
-         * TODO Pull out the writer lock from bdrv_replace_node() to here
          */
-        bdrv_graph_wrlock(target_bs);
+        bdrv_graph_rdlock_main_loop();
         if (bdrv_recurse_can_replace(src, to_replace)) {
             bdrv_replace_node(to_replace, target_bs, &local_err);
         } else {
@@ -773,8 +771,8 @@ static int mirror_exit_common(Job *job)
                       "would not lead to an abrupt change of visible data",
                       to_replace->node_name, target_bs->node_name);
         }
-        bdrv_graph_wrunlock();
-        bdrv_drained_end(to_replace);
+        bdrv_graph_rdunlock_main_loop();
+        bdrv_drained_end(target_bs);
         if (local_err) {
             error_report_err(local_err);
             ret = -EPERM;
@@ -789,6 +787,7 @@ static int mirror_exit_common(Job *job)
         aio_context_release(replace_aio_context);
     }
     g_free(s->replaces);
+    bdrv_unref(target_bs);
     /*
      * Remove the mirror filter driver from the graph. Before this, get rid of
@@ -796,12 +795,7 @@ static int mirror_exit_common(Job *job)
      * valid.
      */
     block_job_remove_all_bdrv(bjob);
-    bdrv_graph_wrlock(mirror_top_bs);
     bdrv_replace_node(mirror_top_bs, mirror_top_bs->backing->bs, &error_abort);
-    bdrv_graph_wrunlock();
-    bdrv_drained_end(target_bs);
-    bdrv_unref(target_bs);
     bs_opaque->job = NULL;
@@ -839,18 +833,14 @@ static void coroutine_fn mirror_throttle(MirrorBlockJob *s)
     }
 }
-static int coroutine_fn GRAPH_UNLOCKED mirror_dirty_init(MirrorBlockJob *s)
+static int coroutine_fn mirror_dirty_init(MirrorBlockJob *s)
 {
     int64_t offset;
-    BlockDriverState *bs;
+    BlockDriverState *bs = s->mirror_top_bs->backing->bs;
     BlockDriverState *target_bs = blk_bs(s->target);
     int ret;
     int64_t count;
-    bdrv_graph_co_rdlock();
-    bs = s->mirror_top_bs->backing->bs;
-    bdrv_graph_co_rdunlock();
     if (s->zero_target) {
         if (!bdrv_can_write_zeroes_with_unmap(target_bs)) {
             bdrv_set_dirty_bitmap(s->dirty_bitmap, 0, s->bdev_length);
@@ -930,7 +920,7 @@ static int coroutine_fn mirror_flush(MirrorBlockJob *s)
 static int coroutine_fn mirror_run(Job *job, Error **errp)
 {
     MirrorBlockJob *s = container_of(job, MirrorBlockJob, common.job);
-    BlockDriverState *bs;
+    BlockDriverState *bs = s->mirror_top_bs->backing->bs;
     MirrorBDSOpaque *mirror_top_opaque = s->mirror_top_bs->opaque;
     BlockDriverState *target_bs = blk_bs(s->target);
     bool need_drain = true;
@@ -942,10 +932,6 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
        checking for a NULL string */
     int ret = 0;
-    bdrv_graph_co_rdlock();
-    bs = bdrv_filter_bs(s->mirror_top_bs);
-    bdrv_graph_co_rdunlock();
     if (job_is_cancelled(&s->common.job)) {
         goto immediate_exit;
     }
@@ -1006,13 +992,13 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
     } else {
         s->target_cluster_size = BDRV_SECTOR_SIZE;
     }
-    bdrv_graph_co_rdunlock();
     if (backing_filename[0] && !bdrv_backing_chain_next(target_bs) &&
         s->granularity < s->target_cluster_size) {
         s->buf_size = MAX(s->buf_size, s->target_cluster_size);
         s->cow_bitmap = bitmap_new(length);
     }
     s->max_iov = MIN(bs->bl.max_iov, target_bs->bl.max_iov);
-    bdrv_graph_co_rdunlock();
     s->buf = qemu_try_blockalign(bs, s->buf_size);
     if (s->buf == NULL) {
@@ -1078,9 +1064,7 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
             mirror_wait_for_free_in_flight_slot(s);
             continue;
         } else if (cnt != 0) {
-            bdrv_graph_co_rdlock();
             mirror_iteration(s);
-            bdrv_graph_co_rdunlock();
         }
     }
@@ -1650,7 +1634,7 @@ bdrv_mirror_top_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
                       offset, bytes, NULL, 0);
 }
-static void GRAPH_RDLOCK bdrv_mirror_top_refresh_filename(BlockDriverState *bs)
+static void bdrv_mirror_top_refresh_filename(BlockDriverState *bs)
 {
     if (bs->backing == NULL) {
         /* we can be here after failed bdrv_attach_child in
@@ -1760,15 +1744,12 @@ static BlockJob *mirror_start_job(
         buf_size = DEFAULT_MIRROR_BUF_SIZE;
     }
-    bdrv_graph_rdlock_main_loop();
     if (bdrv_skip_filters(bs) == bdrv_skip_filters(target)) {
         error_setg(errp, "Can't mirror node into itself");
-        bdrv_graph_rdunlock_main_loop();
         return NULL;
     }
     target_is_backing = bdrv_chain_contains(bs, target);
-    bdrv_graph_rdunlock_main_loop();
     /* In the case of active commit, add dummy driver to provide consistent
      * reads on the top, while disabling it in the intermediate nodes, and make
@@ -1851,19 +1832,14 @@ static BlockJob *mirror_start_job(
         }
         target_shared_perms |= BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE;
-    } else {
-        bdrv_graph_rdlock_main_loop();
-        if (bdrv_chain_contains(bs, bdrv_skip_filters(target))) {
-            /*
-             * We may want to allow this in the future, but it would
-             * require taking some extra care.
-             */
-            error_setg(errp, "Cannot mirror to a filter on top of a node in "
-                       "the source's backing chain");
-            bdrv_graph_rdunlock_main_loop();
-            goto fail;
-        }
-        bdrv_graph_rdunlock_main_loop();
+    } else if (bdrv_chain_contains(bs, bdrv_skip_filters(target))) {
+        /*
+         * We may want to allow this in the future, but it would
+         * require taking some extra care.
+         */
+        error_setg(errp, "Cannot mirror to a filter on top of a node in the "
+                   "source's backing chain");
+        goto fail;
     }
     s->target = blk_new(s->common.job.aio_context,
@@ -1884,7 +1860,6 @@ static BlockJob *mirror_start_job(
     blk_set_allow_aio_context_change(s->target, true);
     blk_set_disable_request_queuing(s->target, true);
-    bdrv_graph_rdlock_main_loop();
     s->replaces = g_strdup(replaces);
     s->on_source_error = on_source_error;
     s->on_target_error = on_target_error;
@@ -1900,7 +1875,6 @@ static BlockJob *mirror_start_job(
     if (auto_complete) {
         s->should_complete = true;
     }
-    bdrv_graph_rdunlock_main_loop();
     s->dirty_bitmap = bdrv_create_dirty_bitmap(s->mirror_top_bs, granularity,
                                                NULL, errp);
@@ -1914,13 +1888,11 @@ static BlockJob *mirror_start_job(
      */
     bdrv_disable_dirty_bitmap(s->dirty_bitmap);
-    bdrv_graph_wrlock(bs);
     ret = block_job_add_bdrv(&s->common, "source", bs, 0,
                              BLK_PERM_WRITE_UNCHANGED | BLK_PERM_WRITE |
                              BLK_PERM_CONSISTENT_READ,
                              errp);
     if (ret < 0) {
-        bdrv_graph_wrunlock();
         goto fail;
     }
@@ -1965,17 +1937,14 @@ static BlockJob *mirror_start_job(
             ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
                                      iter_shared_perms, errp);
             if (ret < 0) {
-                bdrv_graph_wrunlock();
                 goto fail;
             }
         }
         if (bdrv_freeze_backing_chain(mirror_top_bs, target, errp) < 0) {
-            bdrv_graph_wrunlock();
             goto fail;
         }
     }
-    bdrv_graph_wrunlock();
     QTAILQ_INIT(&s->ops_in_flight);
@@ -2000,14 +1969,11 @@ fail:
     }
     bs_opaque->stop = true;
-    bdrv_drained_begin(bs);
-    bdrv_graph_wrlock(bs);
-    assert(mirror_top_bs->backing->bs == bs);
+    bdrv_graph_rdlock_main_loop();
     bdrv_child_refresh_perms(mirror_top_bs, mirror_top_bs->backing,
                              &error_abort);
-    bdrv_replace_node(mirror_top_bs, bs, &error_abort);
-    bdrv_graph_wrunlock();
-    bdrv_drained_end(bs);
+    bdrv_graph_rdunlock_main_loop();
+    bdrv_replace_node(mirror_top_bs, mirror_top_bs->backing->bs, &error_abort);
     bdrv_unref(mirror_top_bs);
@@ -2036,12 +2002,8 @@ void mirror_start(const char *job_id, BlockDriverState *bs,
                    MirrorSyncMode_str(mode));
         return;
     }
-    bdrv_graph_rdlock_main_loop();
     is_none_mode = mode == MIRROR_SYNC_MODE_NONE;
     base = mode == MIRROR_SYNC_MODE_TOP ? bdrv_backing_chain_next(bs) : NULL;
-    bdrv_graph_rdunlock_main_loop();
     mirror_start_job(job_id, bs, creation_flags, target, replaces,
                      speed, granularity, buf_size, backing_mode, zero_target,
                      on_source_error, on_target_error, unmap, NULL, NULL,

View File

@@ -206,9 +206,6 @@ void hmp_commit(Monitor *mon, const QDict *qdict)
     BlockBackend *blk;
     int ret;
-    GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     if (!strcmp(device, "all")) {
         ret = blk_commit_all();
     } else {

View File

@@ -417,10 +417,9 @@ static bool nvme_process_completion(NVMeQueuePair *q)
             q->cq_phase = !q->cq_phase;
         }
         cid = le16_to_cpu(c->cid);
-        if (cid == 0 || cid > NVME_NUM_REQS) {
-            warn_report("NVMe: Unexpected CID in completion queue: %" PRIu32
-                        ", should be within: 1..%u inclusively", cid,
-                        NVME_NUM_REQS);
+        if (cid == 0 || cid > NVME_QUEUE_SIZE) {
+            warn_report("NVMe: Unexpected CID in completion queue: %"PRIu32", "
+                        "queue size: %u", cid, NVME_QUEUE_SIZE);
             continue;
         }
         trace_nvme_complete_command(s, q->index, cid);

View File

@@ -59,10 +59,11 @@ typedef struct ParallelsDirtyBitmapFeature {
 } QEMU_PACKED ParallelsDirtyBitmapFeature;
 /* Given L1 table read bitmap data from the image and populate @bitmap */
-static int GRAPH_RDLOCK
-parallels_load_bitmap_data(BlockDriverState *bs, const uint64_t *l1_table,
-                           uint32_t l1_size, BdrvDirtyBitmap *bitmap,
-                           Error **errp)
+static int parallels_load_bitmap_data(BlockDriverState *bs,
+                                      const uint64_t *l1_table,
+                                      uint32_t l1_size,
+                                      BdrvDirtyBitmap *bitmap,
+                                      Error **errp)
 {
     BDRVParallelsState *s = bs->opaque;
     int ret = 0;
@@ -119,16 +120,17 @@ finish:
  * @data buffer (of @data_size size) is the Dirty bitmaps feature which
  * consists of ParallelsDirtyBitmapFeature followed by L1 table.
  */
-static BdrvDirtyBitmap * GRAPH_RDLOCK
-parallels_load_bitmap(BlockDriverState *bs, uint8_t *data, size_t data_size,
-                      Error **errp)
+static BdrvDirtyBitmap *parallels_load_bitmap(BlockDriverState *bs,
+                                              uint8_t *data,
+                                              size_t data_size,
+                                              Error **errp)
 {
     int ret;
     ParallelsDirtyBitmapFeature bf;
     g_autofree uint64_t *l1_table = NULL;
     BdrvDirtyBitmap *bitmap;
     QemuUUID uuid;
-    char uuidstr[UUID_STR_LEN];
+    char uuidstr[UUID_FMT_LEN + 1];
     int i;
     if (data_size < sizeof(bf)) {
@@ -181,9 +183,8 @@ parallels_load_bitmap(BlockDriverState *bs, uint8_t *data, size_t data_size,
     return bitmap;
 }
-static int GRAPH_RDLOCK
-parallels_parse_format_extension(BlockDriverState *bs, uint8_t *ext_cluster,
-                                 Error **errp)
+static int parallels_parse_format_extension(BlockDriverState *bs,
+                                            uint8_t *ext_cluster, Error **errp)
 {
     BDRVParallelsState *s = bs->opaque;
     int ret;

View File

@@ -200,7 +200,7 @@ static int mark_used(BlockDriverState *bs, unsigned long *bitmap,
  * bitmap anyway, as much as we can. This information will be used for
  * error resolution.
  */
-static int GRAPH_RDLOCK parallels_fill_used_bitmap(BlockDriverState *bs)
+static int parallels_fill_used_bitmap(BlockDriverState *bs)
 {
     BDRVParallelsState *s = bs->opaque;
     int64_t payload_bytes;
@@ -415,10 +415,14 @@ parallels_co_flush_to_os(BlockDriverState *bs)
     return 0;
 }
-static int coroutine_fn GRAPH_RDLOCK
-parallels_co_block_status(BlockDriverState *bs, bool want_zero, int64_t offset,
-                          int64_t bytes, int64_t *pnum, int64_t *map,
-                          BlockDriverState **file)
+static int coroutine_fn parallels_co_block_status(BlockDriverState *bs,
+                                                  bool want_zero,
+                                                  int64_t offset,
+                                                  int64_t bytes,
+                                                  int64_t *pnum,
+                                                  int64_t *map,
+                                                  BlockDriverState **file)
 {
     BDRVParallelsState *s = bs->opaque;
     int count;
@@ -1185,7 +1189,7 @@ static int parallels_probe(const uint8_t *buf, int buf_size,
     return 0;
 }
-static int GRAPH_RDLOCK parallels_update_header(BlockDriverState *bs)
+static int parallels_update_header(BlockDriverState *bs)
 {
     BDRVParallelsState *s = bs->opaque;
     unsigned size = MAX(bdrv_opt_mem_align(bs->file->bs),
@@ -1255,8 +1259,6 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
         return ret;
     }
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     file_nb_sectors = bdrv_nb_sectors(bs->file->bs);
     if (file_nb_sectors < 0) {
         return -EINVAL;
@@ -1361,9 +1363,11 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
         bitmap_new(DIV_ROUND_UP(s->header_size, s->bat_dirty_block));
     /* Disable migration until bdrv_activate method is added */
+    bdrv_graph_rdlock_main_loop();
     error_setg(&s->migration_blocker, "The Parallels format used by node '%s' "
                "does not support live migration",
                bdrv_get_device_or_node_name(bs));
+    bdrv_graph_rdunlock_main_loop();
     ret = migrate_add_blocker_normal(&s->migration_blocker, errp);
     if (ret < 0) {
@@ -1428,8 +1432,6 @@ static void parallels_close(BlockDriverState *bs)
 {
     BDRVParallelsState *s = bs->opaque;
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     if ((bs->open_flags & BDRV_O_RDWR) && !(bs->open_flags & BDRV_O_INACTIVE)) {
         s->header->inuse = 0;
         parallels_update_header(bs);

View File

@@ -90,8 +90,7 @@ typedef struct BDRVParallelsState {
     Error *migration_blocker;
 } BDRVParallelsState;
-int GRAPH_RDLOCK
-parallels_read_format_extension(BlockDriverState *bs, int64_t ext_off,
-                                Error **errp);
+int parallels_read_format_extension(BlockDriverState *bs,
+                                    int64_t ext_off, Error **errp);
 #endif

View File

@@ -143,8 +143,6 @@ static int preallocate_open(BlockDriverState *bs, QDict *options, int flags,
     BDRVPreallocateState *s = bs->opaque;
     int ret;
-    GLOBAL_STATE_CODE();
     /*
     * s->data_end and friends should be initialized on permission update.
     * For this to work, mark them invalid.
@@ -157,8 +155,6 @@ static int preallocate_open(BlockDriverState *bs, QDict *options, int flags,
         return ret;
     }
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     if (!preallocate_absorb_opts(&s->opts, options, bs->file->bs, errp)) {
         return -EINVAL;
     }
@@ -173,8 +169,7 @@ static int preallocate_open(BlockDriverState *bs, QDict *options, int flags,
     return 0;
 }
-static int GRAPH_RDLOCK
-preallocate_truncate_to_real_size(BlockDriverState *bs, Error **errp)
+static int preallocate_truncate_to_real_size(BlockDriverState *bs, Error **errp)
 {
     BDRVPreallocateState *s = bs->opaque;
     int ret;
@@ -205,9 +200,6 @@ static void preallocate_close(BlockDriverState *bs)
 {
     BDRVPreallocateState *s = bs->opaque;
-    GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     qemu_bh_cancel(s->drop_resize_bh);
     qemu_bh_delete(s->drop_resize_bh);
@@ -231,9 +223,6 @@ static int preallocate_reopen_prepare(BDRVReopenState *reopen_state,
     PreallocateOpts *opts = g_new0(PreallocateOpts, 1);
     int ret;
-    GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     if (!preallocate_absorb_opts(opts, reopen_state->options,
                                  reopen_state->bs->file->bs, errp)) {
         g_free(opts);
@@ -294,7 +283,7 @@ static bool can_write_resize(uint64_t perm)
     return (perm & BLK_PERM_WRITE) && (perm & BLK_PERM_RESIZE);
 }
-static bool GRAPH_RDLOCK has_prealloc_perms(BlockDriverState *bs)
+static bool has_prealloc_perms(BlockDriverState *bs)
 {
     BDRVPreallocateState *s = bs->opaque;
@@ -510,8 +499,7 @@ preallocate_co_getlength(BlockDriverState *bs)
     return ret;
 }
-static int GRAPH_RDLOCK
-preallocate_drop_resize(BlockDriverState *bs, Error **errp)
+static int preallocate_drop_resize(BlockDriverState *bs, Error **errp)
 {
     BDRVPreallocateState *s = bs->opaque;
     int ret;
@@ -537,16 +525,15 @@ preallocate_drop_resize(BlockDriverState *bs, Error **errp)
      */
     s->data_end = s->file_end = s->zero_start = -EINVAL;
+    bdrv_graph_rdlock_main_loop();
     bdrv_child_refresh_perms(bs, bs->file, NULL);
+    bdrv_graph_rdunlock_main_loop();
     return 0;
 }
 static void preallocate_drop_resize_bh(void *opaque)
 {
-    GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     /*
      * In case of errors, we'll simply keep the exclusive lock on the image
      * indefinitely.
@@ -554,8 +541,8 @@ static void preallocate_drop_resize_bh(void *opaque)
     preallocate_drop_resize(opaque, NULL);
 }
-static void GRAPH_RDLOCK
-preallocate_set_perm(BlockDriverState *bs, uint64_t perm, uint64_t shared)
+static void preallocate_set_perm(BlockDriverState *bs,
+                                 uint64_t perm, uint64_t shared)
 {
     BDRVPreallocateState *s = bs->opaque;

View File

@@ -124,11 +124,9 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
     ret = bdrv_open_file_child(NULL, options, "file", bs, errp);
     if (ret < 0) {
-        goto fail_unlocked;
+        goto fail;
     }
-    bdrv_graph_rdlock_main_loop();
     ret = bdrv_pread(bs->file, 0, sizeof(header), &header, 0);
     if (ret < 0) {
         goto fail;
@@ -303,9 +301,11 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
     }
     /* Disable migration when qcow images are used */
+    bdrv_graph_rdlock_main_loop();
     error_setg(&s->migration_blocker, "The qcow format used by node '%s' "
                "does not support live migration",
                bdrv_get_device_or_node_name(bs));
+    bdrv_graph_rdunlock_main_loop();
     ret = migrate_add_blocker_normal(&s->migration_blocker, errp);
     if (ret < 0) {
@@ -315,12 +315,9 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
     qobject_unref(encryptopts);
     qapi_free_QCryptoBlockOpenOptions(crypto_opts);
     qemu_co_mutex_init(&s->lock);
-    bdrv_graph_rdunlock_main_loop();
     return 0;
 fail:
-    bdrv_graph_rdunlock_main_loop();
-fail_unlocked:
     g_free(s->l1_table);
     qemu_vfree(s->l2_cache);
     g_free(s->cluster_cache);
@@ -1027,7 +1024,7 @@ fail:
     return ret;
 }
-static int GRAPH_RDLOCK qcow_make_empty(BlockDriverState *bs)
+static int qcow_make_empty(BlockDriverState *bs)
 {
     BDRVQcowState *s = bs->opaque;
     uint32_t l1_length = s->l1_size * sizeof(uint64_t);

View File

@@ -105,7 +105,7 @@ static inline bool can_write(BlockDriverState *bs)
     return !bdrv_is_read_only(bs) && !(bdrv_get_flags(bs) & BDRV_O_INACTIVE);
 }
-static int GRAPH_RDLOCK update_header_sync(BlockDriverState *bs)
+static int update_header_sync(BlockDriverState *bs)
 {
     int ret;
@@ -221,9 +220,8 @@ clear_bitmap_table(BlockDriverState *bs, uint64_t *bitmap_table,
     }
 }
-static int GRAPH_RDLOCK
-bitmap_table_load(BlockDriverState *bs, Qcow2BitmapTable *tb,
-                  uint64_t **bitmap_table)
+static int bitmap_table_load(BlockDriverState *bs, Qcow2BitmapTable *tb,
+                             uint64_t **bitmap_table)
 {
     int ret;
     BDRVQcow2State *s = bs->opaque;
@@ -552,9 +551,8 @@ static uint32_t bitmap_list_count(Qcow2BitmapList *bm_list)
  * Get bitmap list from qcow2 image. Actually reads bitmap directory,
  * checks it and convert to bitmap list.
  */
-static Qcow2BitmapList * GRAPH_RDLOCK
-bitmap_list_load(BlockDriverState *bs, uint64_t offset, uint64_t size,
-                 Error **errp)
+static Qcow2BitmapList *bitmap_list_load(BlockDriverState *bs, uint64_t offset,
+                                         uint64_t size, Error **errp)
 {
     int ret;
     BDRVQcow2State *s = bs->opaque;
@@ -963,7 +961,7 @@ static void set_readonly_helper(gpointer bitmap, gpointer value)
  * If header_updated is not NULL then it is set appropriately regardless of
  * the return value.
  */
-bool coroutine_fn
+bool coroutine_fn GRAPH_RDLOCK
 qcow2_load_dirty_bitmaps(BlockDriverState *bs,
                          bool *header_updated, Error **errp)
 {


@@ -391,10 +391,11 @@ fail:
 * If the L2 entry is invalid return -errno and set @type to
 * QCOW2_SUBCLUSTER_INVALID.
 */
-static int GRAPH_RDLOCK
-qcow2_get_subcluster_range_type(BlockDriverState *bs, uint64_t l2_entry,
-uint64_t l2_bitmap, unsigned sc_from,
-QCow2SubclusterType *type)
+static int qcow2_get_subcluster_range_type(BlockDriverState *bs,
+uint64_t l2_entry,
+uint64_t l2_bitmap,
+unsigned sc_from,
+QCow2SubclusterType *type)
 {
 BDRVQcow2State *s = bs->opaque;
 uint32_t val;
@@ -441,10 +442,9 @@ qcow2_get_subcluster_range_type(BlockDriverState *bs, uint64_t l2_entry,
 * On failure return -errno and update @l2_index to point to the
 * invalid entry.
 */
-static int GRAPH_RDLOCK
-count_contiguous_subclusters(BlockDriverState *bs, int nb_clusters,
-unsigned sc_index, uint64_t *l2_slice,
-unsigned *l2_index)
+static int count_contiguous_subclusters(BlockDriverState *bs, int nb_clusters,
+unsigned sc_index, uint64_t *l2_slice,
+unsigned *l2_index)
 {
 BDRVQcow2State *s = bs->opaque;
 int i, count = 0;
@@ -1329,8 +1329,7 @@ calculate_l2_meta(BlockDriverState *bs, uint64_t host_cluster_offset,
 * requires a new allocation (that is, if the cluster is unallocated
 * or has refcount > 1 and therefore cannot be written in-place).
 */
-static bool GRAPH_RDLOCK
-cluster_needs_new_alloc(BlockDriverState *bs, uint64_t l2_entry)
+static bool cluster_needs_new_alloc(BlockDriverState *bs, uint64_t l2_entry)
 {
 switch (qcow2_get_cluster_type(bs, l2_entry)) {
 case QCOW2_CLUSTER_NORMAL:
@@ -1361,9 +1360,9 @@ cluster_needs_new_alloc(BlockDriverState *bs, uint64_t l2_entry)
 * allocated and can be overwritten in-place (this includes clusters
 * of type QCOW2_CLUSTER_ZERO_ALLOC).
 */
-static int GRAPH_RDLOCK
-count_single_write_clusters(BlockDriverState *bs, int nb_clusters,
-uint64_t *l2_slice, int l2_index, bool new_alloc)
+static int count_single_write_clusters(BlockDriverState *bs, int nb_clusters,
+uint64_t *l2_slice, int l2_index,
+bool new_alloc)
 {
 BDRVQcow2State *s = bs->opaque;
 uint64_t l2_entry = get_l2_entry(s, l2_slice, l2_index);
@@ -1984,7 +1983,7 @@ discard_in_l2_slice(BlockDriverState *bs, uint64_t offset, uint64_t nb_clusters,
 /* If we keep the reference, pass on the discard still */
 bdrv_pdiscard(s->data_file, old_l2_entry & L2E_OFFSET_MASK,
 s->cluster_size);
 }
 }
 qcow2_cache_put(s->l2_table_cache, (void **) &l2_slice);
@@ -2062,15 +2061,9 @@ zero_in_l2_slice(BlockDriverState *bs, uint64_t offset,
 QCow2ClusterType type = qcow2_get_cluster_type(bs, old_l2_entry);
 bool unmap = (type == QCOW2_CLUSTER_COMPRESSED) ||
 ((flags & BDRV_REQ_MAY_UNMAP) && qcow2_cluster_is_allocated(type));
-bool keep_reference =
-(s->discard_no_unref && type != QCOW2_CLUSTER_COMPRESSED);
-uint64_t new_l2_entry = old_l2_entry;
+uint64_t new_l2_entry = unmap ? 0 : old_l2_entry;
 uint64_t new_l2_bitmap = old_l2_bitmap;
-if (unmap && !keep_reference) {
-new_l2_entry = 0;
-}
 if (has_subclusters(s)) {
 new_l2_bitmap = QCOW_L2_BITMAP_ALL_ZEROES;
 } else {
@@ -2088,17 +2081,9 @@ zero_in_l2_slice(BlockDriverState *bs, uint64_t offset,
 set_l2_bitmap(s, l2_slice, l2_index + i, new_l2_bitmap);
 }
+/* Then decrease the refcount */
 if (unmap) {
-if (!keep_reference) {
-/* Then decrease the refcount */
-qcow2_free_any_cluster(bs, old_l2_entry, QCOW2_DISCARD_REQUEST);
-} else if (s->discard_passthrough[QCOW2_DISCARD_REQUEST] &&
-(type == QCOW2_CLUSTER_NORMAL ||
-type == QCOW2_CLUSTER_ZERO_ALLOC)) {
-/* If we keep the reference, pass on the discard still */
-bdrv_pdiscard(s->data_file, old_l2_entry & L2E_OFFSET_MASK,
-s->cluster_size);
-}
+qcow2_free_any_cluster(bs, old_l2_entry, QCOW2_DISCARD_REQUEST);
 }
 }


@@ -95,10 +95,9 @@ static int qcow2_probe(const uint8_t *buf, int buf_size, const char *filename)
 }
-static int GRAPH_RDLOCK
-qcow2_crypto_hdr_read_func(QCryptoBlock *block, size_t offset,
-uint8_t *buf, size_t buflen,
-void *opaque, Error **errp)
+static int qcow2_crypto_hdr_read_func(QCryptoBlock *block, size_t offset,
+uint8_t *buf, size_t buflen,
+void *opaque, Error **errp)
 {
 BlockDriverState *bs = opaque;
 BDRVQcow2State *s = bs->opaque;
@@ -157,7 +156,7 @@ qcow2_crypto_hdr_init_func(QCryptoBlock *block, size_t headerlen, void *opaque,
 /* The graph lock must be held when called in coroutine context */
-static int coroutine_mixed_fn GRAPH_RDLOCK
+static int coroutine_mixed_fn
 qcow2_crypto_hdr_write_func(QCryptoBlock *block, size_t offset,
 const uint8_t *buf, size_t buflen,
 void *opaque, Error **errp)
@@ -2030,8 +2029,6 @@ static void qcow2_reopen_commit(BDRVReopenState *state)
 {
 BDRVQcow2State *s = state->bs->opaque;
-GRAPH_RDLOCK_GUARD_MAINLOOP();
 qcow2_update_options_commit(state->bs, state->opaque);
 if (!s->data_file) {
 /*
@@ -2067,8 +2064,6 @@ static void qcow2_reopen_abort(BDRVReopenState *state)
 {
 BDRVQcow2State *s = state->bs->opaque;
-GRAPH_RDLOCK_GUARD_MAINLOOP();
 if (!s->data_file) {
 /*
 * If we don't have an external data file, s->data_file was cleared by
@@ -3160,9 +3155,8 @@ fail:
 return ret;
 }
-static int coroutine_fn GRAPH_RDLOCK
-qcow2_co_change_backing_file(BlockDriverState *bs, const char *backing_file,
-const char *backing_fmt)
+static int qcow2_change_backing_file(BlockDriverState *bs,
+const char *backing_file, const char *backing_fmt)
 {
 BDRVQcow2State *s = bs->opaque;
@@ -3822,11 +3816,8 @@ qcow2_co_create(BlockdevCreateOptions *create_options, Error **errp)
 backing_format = BlockdevDriver_str(qcow2_opts->backing_fmt);
 }
-bdrv_graph_co_rdlock();
-ret = bdrv_co_change_backing_file(blk_bs(blk), qcow2_opts->backing_file,
-backing_format, false);
-bdrv_graph_co_rdunlock();
+ret = bdrv_change_backing_file(blk_bs(blk), qcow2_opts->backing_file,
+backing_format, false);
 if (ret < 0) {
 error_setg_errno(errp, -ret, "Could not assign backing file '%s' "
 "with format '%s'", qcow2_opts->backing_file,
@@ -5231,8 +5222,8 @@ qcow2_co_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
 return 0;
 }
-static ImageInfoSpecific * GRAPH_RDLOCK
-qcow2_get_specific_info(BlockDriverState *bs, Error **errp)
+static ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *bs,
+Error **errp)
 {
 BDRVQcow2State *s = bs->opaque;
 ImageInfoSpecific *spec_info;
@@ -5311,8 +5302,7 @@ qcow2_get_specific_info(BlockDriverState *bs, Error **errp)
 return spec_info;
 }
-static int coroutine_mixed_fn GRAPH_RDLOCK
-qcow2_has_zero_init(BlockDriverState *bs)
+static int coroutine_mixed_fn qcow2_has_zero_init(BlockDriverState *bs)
 {
 BDRVQcow2State *s = bs->opaque;
 bool preallocated;
@@ -6124,64 +6114,64 @@ static const char *const qcow2_strong_runtime_opts[] = {
 };
 BlockDriver bdrv_qcow2 = {
 .format_name = "qcow2",
 .instance_size = sizeof(BDRVQcow2State),
 .bdrv_probe = qcow2_probe,
 .bdrv_open = qcow2_open,
 .bdrv_close = qcow2_close,
 .bdrv_reopen_prepare = qcow2_reopen_prepare,
 .bdrv_reopen_commit = qcow2_reopen_commit,
 .bdrv_reopen_commit_post = qcow2_reopen_commit_post,
 .bdrv_reopen_abort = qcow2_reopen_abort,
 .bdrv_join_options = qcow2_join_options,
 .bdrv_child_perm = bdrv_default_perms,
 .bdrv_co_create_opts = qcow2_co_create_opts,
 .bdrv_co_create = qcow2_co_create,
 .bdrv_has_zero_init = qcow2_has_zero_init,
 .bdrv_co_block_status = qcow2_co_block_status,
 .bdrv_co_preadv_part = qcow2_co_preadv_part,
 .bdrv_co_pwritev_part = qcow2_co_pwritev_part,
 .bdrv_co_flush_to_os = qcow2_co_flush_to_os,
 .bdrv_co_pwrite_zeroes = qcow2_co_pwrite_zeroes,
 .bdrv_co_pdiscard = qcow2_co_pdiscard,
 .bdrv_co_copy_range_from = qcow2_co_copy_range_from,
 .bdrv_co_copy_range_to = qcow2_co_copy_range_to,
 .bdrv_co_truncate = qcow2_co_truncate,
 .bdrv_co_pwritev_compressed_part = qcow2_co_pwritev_compressed_part,
 .bdrv_make_empty = qcow2_make_empty,
 .bdrv_snapshot_create = qcow2_snapshot_create,
 .bdrv_snapshot_goto = qcow2_snapshot_goto,
 .bdrv_snapshot_delete = qcow2_snapshot_delete,
 .bdrv_snapshot_list = qcow2_snapshot_list,
 .bdrv_snapshot_load_tmp = qcow2_snapshot_load_tmp,
 .bdrv_measure = qcow2_measure,
 .bdrv_co_get_info = qcow2_co_get_info,
 .bdrv_get_specific_info = qcow2_get_specific_info,
 .bdrv_co_save_vmstate = qcow2_co_save_vmstate,
 .bdrv_co_load_vmstate = qcow2_co_load_vmstate,
 .is_format = true,
 .supports_backing = true,
-.bdrv_co_change_backing_file = qcow2_co_change_backing_file,
+.bdrv_change_backing_file = qcow2_change_backing_file,
 .bdrv_refresh_limits = qcow2_refresh_limits,
 .bdrv_co_invalidate_cache = qcow2_co_invalidate_cache,
 .bdrv_inactivate = qcow2_inactivate,
 .create_opts = &qcow2_create_opts,
 .amend_opts = &qcow2_amend_opts,
 .strong_runtime_opts = qcow2_strong_runtime_opts,
 .mutable_opts = mutable_opts,
 .bdrv_co_check = qcow2_co_check,
 .bdrv_amend_options = qcow2_amend_options,
 .bdrv_co_amend = qcow2_co_amend,
 .bdrv_detach_aio_context = qcow2_detach_aio_context,
 .bdrv_attach_aio_context = qcow2_attach_aio_context,
 .bdrv_supports_persistent_dirty_bitmap =
 qcow2_supports_persistent_dirty_bitmap,


@@ -641,7 +641,7 @@ static inline void set_l2_bitmap(BDRVQcow2State *s, uint64_t *l2_slice,
 l2_slice[idx + 1] = cpu_to_be64(bitmap);
 }
-static inline bool GRAPH_RDLOCK has_data_file(BlockDriverState *bs)
+static inline bool has_data_file(BlockDriverState *bs)
 {
 BDRVQcow2State *s = bs->opaque;
 return (s->data_file != bs->file);
@@ -709,8 +709,8 @@ static inline int64_t qcow2_vm_state_offset(BDRVQcow2State *s)
 return (int64_t)s->l1_vm_state_index << (s->cluster_bits + s->l2_bits);
 }
-static inline QCow2ClusterType GRAPH_RDLOCK
-qcow2_get_cluster_type(BlockDriverState *bs, uint64_t l2_entry)
+static inline QCow2ClusterType qcow2_get_cluster_type(BlockDriverState *bs,
+uint64_t l2_entry)
 {
 BDRVQcow2State *s = bs->opaque;
@@ -743,7 +743,7 @@ qcow2_get_cluster_type(BlockDriverState *bs, uint64_t l2_entry)
 * (this checks the whole entry and bitmap, not only the bits related
 * to subcluster @sc_index).
 */
-static inline GRAPH_RDLOCK
+static inline
 QCow2SubclusterType qcow2_get_subcluster_type(BlockDriverState *bs,
 uint64_t l2_entry,
 uint64_t l2_bitmap,
@@ -834,9 +834,9 @@ int64_t qcow2_refcount_metadata_size(int64_t clusters, size_t cluster_size,
 int refcount_order, bool generous_increase,
 uint64_t *refblock_count);
-int GRAPH_RDLOCK qcow2_mark_dirty(BlockDriverState *bs);
-int GRAPH_RDLOCK qcow2_mark_corrupt(BlockDriverState *bs);
-int GRAPH_RDLOCK qcow2_update_header(BlockDriverState *bs);
+int qcow2_mark_dirty(BlockDriverState *bs);
+int qcow2_mark_corrupt(BlockDriverState *bs);
+int qcow2_update_header(BlockDriverState *bs);
 void GRAPH_RDLOCK
 qcow2_signal_corruption(BlockDriverState *bs, bool fatal, int64_t offset,
@@ -890,11 +890,10 @@ int GRAPH_RDLOCK qcow2_write_caches(BlockDriverState *bs);
 int coroutine_fn qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
 BdrvCheckMode fix);
-void GRAPH_RDLOCK qcow2_process_discards(BlockDriverState *bs, int ret);
-int GRAPH_RDLOCK
-qcow2_check_metadata_overlap(BlockDriverState *bs, int ign, int64_t offset,
-int64_t size);
+void qcow2_process_discards(BlockDriverState *bs, int ret);
+int qcow2_check_metadata_overlap(BlockDriverState *bs, int ign, int64_t offset,
+int64_t size);
 int GRAPH_RDLOCK
 qcow2_pre_write_overlap_check(BlockDriverState *bs, int ign, int64_t offset,
 int64_t size, bool data_file);
@@ -940,9 +939,8 @@ qcow2_alloc_host_offset(BlockDriverState *bs, uint64_t offset,
 int coroutine_fn GRAPH_RDLOCK
 qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs, uint64_t offset,
 int compressed_size, uint64_t *host_offset);
-void GRAPH_RDLOCK
-qcow2_parse_compressed_l2_entry(BlockDriverState *bs, uint64_t l2_entry,
-uint64_t *coffset, int *csize);
+void qcow2_parse_compressed_l2_entry(BlockDriverState *bs, uint64_t l2_entry,
+uint64_t *coffset, int *csize);
 int coroutine_fn GRAPH_RDLOCK
 qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m);
@@ -974,12 +972,11 @@ int GRAPH_RDLOCK
 qcow2_snapshot_delete(BlockDriverState *bs, const char *snapshot_id,
 const char *name, Error **errp);
-int GRAPH_RDLOCK
-qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab);
-int GRAPH_RDLOCK
-qcow2_snapshot_load_tmp(BlockDriverState *bs, const char *snapshot_id,
-const char *name, Error **errp);
+int qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab);
+int qcow2_snapshot_load_tmp(BlockDriverState *bs,
+const char *snapshot_id,
+const char *name,
+Error **errp);
 void qcow2_free_snapshots(BlockDriverState *bs);
 int coroutine_fn GRAPH_RDLOCK
@@ -995,9 +992,8 @@ qcow2_check_fix_snapshot_table(BlockDriverState *bs, BdrvCheckResult *result,
 BdrvCheckMode fix);
 /* qcow2-cache.c functions */
-Qcow2Cache * GRAPH_RDLOCK
-qcow2_cache_create(BlockDriverState *bs, int num_tables, unsigned table_size);
+Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables,
+unsigned table_size);
 int qcow2_cache_destroy(Qcow2Cache *c);
 void qcow2_cache_entry_mark_dirty(Qcow2Cache *c, void *table);
@@ -1023,24 +1019,17 @@ void *qcow2_cache_is_table_offset(Qcow2Cache *c, uint64_t offset);
 void qcow2_cache_discard(Qcow2Cache *c, void *table);
 /* qcow2-bitmap.c functions */
-int coroutine_fn GRAPH_RDLOCK
+int coroutine_fn
 qcow2_check_bitmaps_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
 void **refcount_table,
 int64_t *refcount_table_size);
 bool coroutine_fn GRAPH_RDLOCK
-qcow2_load_dirty_bitmaps(BlockDriverState *bs, bool *header_updated,
-Error **errp);
+qcow2_load_dirty_bitmaps(BlockDriverState *bs, bool *header_updated, Error **errp);
-bool GRAPH_RDLOCK
-qcow2_get_bitmap_info_list(BlockDriverState *bs,
-Qcow2BitmapInfoList **info_list, Error **errp);
+bool qcow2_get_bitmap_info_list(BlockDriverState *bs,
+Qcow2BitmapInfoList **info_list, Error **errp);
 int GRAPH_RDLOCK qcow2_reopen_bitmaps_rw(BlockDriverState *bs, Error **errp);
 int GRAPH_RDLOCK qcow2_reopen_bitmaps_ro(BlockDriverState *bs, Error **errp);
-int coroutine_fn GRAPH_RDLOCK
-qcow2_truncate_bitmaps_check(BlockDriverState *bs, Error **errp);
+int coroutine_fn qcow2_truncate_bitmaps_check(BlockDriverState *bs, Error **errp);
 bool GRAPH_RDLOCK
 qcow2_store_persistent_dirty_bitmaps(BlockDriverState *bs, bool release_stored,


@@ -612,7 +612,7 @@ static int bdrv_qed_reopen_prepare(BDRVReopenState *state,
 return 0;
 }
-static void GRAPH_RDLOCK bdrv_qed_do_close(BlockDriverState *bs)
+static void bdrv_qed_close(BlockDriverState *bs)
 {
 BDRVQEDState *s = bs->opaque;
@@ -631,14 +631,6 @@ static void GRAPH_RDLOCK bdrv_qed_do_close(BlockDriverState *bs)
 qemu_vfree(s->l1_table);
 }
-static void GRAPH_UNLOCKED bdrv_qed_close(BlockDriverState *bs)
-{
-GLOBAL_STATE_CODE();
-GRAPH_RDLOCK_GUARD_MAINLOOP();
-bdrv_qed_do_close(bs);
-}
 static int coroutine_fn GRAPH_UNLOCKED
 bdrv_qed_co_create(BlockdevCreateOptions *opts, Error **errp)
 {
@@ -1146,7 +1138,7 @@ out:
 /**
 * Check if the QED_F_NEED_CHECK bit should be set during allocating write
 */
-static bool GRAPH_RDLOCK qed_should_set_need_check(BDRVQEDState *s)
+static bool qed_should_set_need_check(BDRVQEDState *s)
 {
 /* The flush before L2 update path ensures consistency */
 if (s->bs->backing) {
@@ -1451,10 +1443,12 @@ bdrv_qed_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset, int64_t bytes,
 QED_AIOCB_WRITE | QED_AIOCB_ZERO);
 }
-static int coroutine_fn GRAPH_RDLOCK
-bdrv_qed_co_truncate(BlockDriverState *bs, int64_t offset, bool exact,
-PreallocMode prealloc, BdrvRequestFlags flags,
-Error **errp)
+static int coroutine_fn bdrv_qed_co_truncate(BlockDriverState *bs,
+int64_t offset,
+bool exact,
+PreallocMode prealloc,
+BdrvRequestFlags flags,
+Error **errp)
 {
 BDRVQEDState *s = bs->opaque;
 uint64_t old_image_size;
@@ -1504,9 +1498,9 @@ bdrv_qed_co_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
 return 0;
 }
-static int coroutine_fn GRAPH_RDLOCK
-bdrv_qed_co_change_backing_file(BlockDriverState *bs, const char *backing_file,
-const char *backing_fmt)
+static int bdrv_qed_change_backing_file(BlockDriverState *bs,
+const char *backing_file,
+const char *backing_fmt)
 {
 BDRVQEDState *s = bs->opaque;
 QEDHeader new_header, le_header;
@@ -1568,7 +1562,7 @@ bdrv_qed_co_change_backing_file(BlockDriverState *bs, const char *backing_file,
 }
 /* Write new header */
-ret = bdrv_co_pwrite_sync(bs->file, 0, buffer_len, buffer, 0);
+ret = bdrv_pwrite_sync(bs->file, 0, buffer_len, buffer, 0);
 g_free(buffer);
 if (ret == 0) {
 memcpy(&s->header, &new_header, sizeof(new_header));
@@ -1582,7 +1576,7 @@ bdrv_qed_co_invalidate_cache(BlockDriverState *bs, Error **errp)
 BDRVQEDState *s = bs->opaque;
 int ret;
-bdrv_qed_do_close(bs);
+bdrv_qed_close(bs);
 bdrv_qed_init_state(bs);
 qemu_co_mutex_lock(&s->table_lock);
@@ -1642,34 +1636,34 @@ static QemuOptsList qed_create_opts = {
 };
 static BlockDriver bdrv_qed = {
 .format_name = "qed",
 .instance_size = sizeof(BDRVQEDState),
 .create_opts = &qed_create_opts,
 .is_format = true,
 .supports_backing = true,
 .bdrv_probe = bdrv_qed_probe,
 .bdrv_open = bdrv_qed_open,
 .bdrv_close = bdrv_qed_close,
 .bdrv_reopen_prepare = bdrv_qed_reopen_prepare,
 .bdrv_child_perm = bdrv_default_perms,
 .bdrv_co_create = bdrv_qed_co_create,
 .bdrv_co_create_opts = bdrv_qed_co_create_opts,
 .bdrv_has_zero_init = bdrv_has_zero_init_1,
 .bdrv_co_block_status = bdrv_qed_co_block_status,
 .bdrv_co_readv = bdrv_qed_co_readv,
 .bdrv_co_writev = bdrv_qed_co_writev,
 .bdrv_co_pwrite_zeroes = bdrv_qed_co_pwrite_zeroes,
 .bdrv_co_truncate = bdrv_qed_co_truncate,
 .bdrv_co_getlength = bdrv_qed_co_getlength,
 .bdrv_co_get_info = bdrv_qed_co_get_info,
 .bdrv_refresh_limits = bdrv_qed_refresh_limits,
-.bdrv_co_change_backing_file = bdrv_qed_co_change_backing_file,
+.bdrv_change_backing_file = bdrv_qed_change_backing_file,
 .bdrv_co_invalidate_cache = bdrv_qed_co_invalidate_cache,
 .bdrv_co_check = bdrv_qed_co_check,
 .bdrv_detach_aio_context = bdrv_qed_detach_aio_context,
 .bdrv_attach_aio_context = bdrv_qed_attach_aio_context,
 .bdrv_drain_begin = bdrv_qed_drain_begin,
 };
 static void bdrv_qed_init(void)


@@ -185,7 +185,7 @@ enum {
 /**
 * Header functions
 */
-int GRAPH_RDLOCK qed_write_header_sync(BDRVQEDState *s);
+int qed_write_header_sync(BDRVQEDState *s);
 /**
 * L2 cache functions


@@ -95,9 +95,9 @@ end:
 return ret;
 }
-static int GRAPH_RDLOCK
-raw_apply_options(BlockDriverState *bs, BDRVRawState *s, uint64_t offset,
-bool has_size, uint64_t size, Error **errp)
+static int raw_apply_options(BlockDriverState *bs, BDRVRawState *s,
+uint64_t offset, bool has_size, uint64_t size,
+Error **errp)
 {
 int64_t real_size = 0;
@@ -145,9 +145,6 @@ static int raw_reopen_prepare(BDRVReopenState *reopen_state,
 uint64_t offset, size;
 int ret;
-GLOBAL_STATE_CODE();
-GRAPH_RDLOCK_GUARD_MAINLOOP();
 assert(reopen_state != NULL);
 assert(reopen_state->bs != NULL);
@@ -282,10 +279,11 @@ fail:
 return ret;
 }
-static int coroutine_fn GRAPH_RDLOCK
-raw_co_block_status(BlockDriverState *bs, bool want_zero, int64_t offset,
-int64_t bytes, int64_t *pnum, int64_t *map,
-BlockDriverState **file)
+static int coroutine_fn raw_co_block_status(BlockDriverState *bs,
+bool want_zero, int64_t offset,
+int64_t bytes, int64_t *pnum,
+int64_t *map,
+BlockDriverState **file)
 {
 BDRVRawState *s = bs->opaque;
 *pnum = bytes;
@@ -399,7 +397,7 @@ raw_co_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
 return bdrv_co_get_info(bs->file->bs, bdi);
 }
-static void GRAPH_RDLOCK raw_refresh_limits(BlockDriverState *bs, Error **errp)
+static void raw_refresh_limits(BlockDriverState *bs, Error **errp)
 {
 bs->bl.has_variable_length = bs->file->bs->bl.has_variable_length;
@@ -454,7 +452,7 @@ raw_co_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
 return bdrv_co_ioctl(bs->file->bs, req, buf);
 }
-static int GRAPH_RDLOCK raw_has_zero_init(BlockDriverState *bs)
+static int raw_has_zero_init(BlockDriverState *bs)
 {
 return bdrv_has_zero_init(bs->file->bs);
 }
@@ -476,8 +474,6 @@ static int raw_open(BlockDriverState *bs, QDict *options, int flags,
 BdrvChildRole file_role;
 int ret;
-GLOBAL_STATE_CODE();
 ret = raw_read_options(options, &offset, &has_size, &size, errp);
 if (ret < 0) {
 return ret;
@@ -495,8 +491,6 @@ static int raw_open(BlockDriverState *bs, QDict *options, int flags,
 bdrv_open_child(NULL, options, "file", bs, &child_of_bds,
 file_role, false, errp);
-GRAPH_RDLOCK_GUARD_MAINLOOP();
 if (!bs->file) {
 return -EINVAL;
 }
@@ -511,7 +505,9 @@ static int raw_open(BlockDriverState *bs, QDict *options, int flags,
 BDRV_REQ_ZERO_WRITE;
 if (bs->probed && !bdrv_is_read_only(bs)) {
+bdrv_graph_rdlock_main_loop();
 bdrv_refresh_filename(bs->file->bs);
+bdrv_graph_rdunlock_main_loop();
 fprintf(stderr,
 "WARNING: Image format was not specified for '%s' and probing "
 "guessed raw.\n"
@@ -547,8 +543,7 @@ static int raw_probe(const uint8_t *buf, int buf_size, const char *filename)
 return 1;
 }
-static int GRAPH_RDLOCK
-raw_probe_blocksizes(BlockDriverState *bs, BlockSizes *bsz)
+static int raw_probe_blocksizes(BlockDriverState *bs, BlockSizes *bsz)
 {
 BDRVRawState *s = bs->opaque;
 int ret;
@@ -565,8 +560,7 @@ raw_probe_blocksizes(BlockDriverState *bs, BlockSizes *bsz)
 return 0;
 }
-static int GRAPH_RDLOCK
-raw_probe_geometry(BlockDriverState *bs, HDGeometry *geo)
+static int raw_probe_geometry(BlockDriverState *bs, HDGeometry *geo)
 {
 BDRVRawState *s = bs->opaque;
 if (s->offset || s->has_size) {
@@ -616,7 +610,7 @@ static const char *const raw_strong_runtime_opts[] = {
 NULL
 };
-static void GRAPH_RDLOCK raw_cancel_in_flight(BlockDriverState *bs)
+static void raw_cancel_in_flight(BlockDriverState *bs)
 {
 bdrv_cancel_in_flight(bs->file->bs);
 }


@@ -311,7 +311,7 @@ static void GRAPH_UNLOCKED
 secondary_do_checkpoint(BlockDriverState *bs, Error **errp)
 {
     BDRVReplicationState *s = bs->opaque;
-    BdrvChild *active_disk;
+    BdrvChild *active_disk = bs->file;
     Error *local_err = NULL;
     int ret;
@@ -328,7 +328,6 @@ secondary_do_checkpoint(BlockDriverState *bs, Error **errp)
         return;
     }
-    active_disk = bs->file;
     if (!active_disk->bs->drv) {
         error_setg(errp, "Active disk %s is ejected",
                    active_disk->bs->node_name);
@@ -364,9 +363,6 @@ static void reopen_backing_file(BlockDriverState *bs, bool writable,
     BdrvChild *hidden_disk, *secondary_disk;
     BlockReopenQueue *reopen_queue = NULL;
-    GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     /*
      * s->hidden_disk and s->secondary_disk may not be set yet, as they will
      * only be set after the children are writable.
@@ -500,11 +496,9 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
     case REPLICATION_MODE_PRIMARY:
         break;
     case REPLICATION_MODE_SECONDARY:
-        bdrv_graph_rdlock_main_loop();
         active_disk = bs->file;
         if (!active_disk || !active_disk->bs || !active_disk->bs->backing) {
             error_setg(errp, "Active disk doesn't have backing file");
-            bdrv_graph_rdunlock_main_loop();
             aio_context_release(aio_context);
             return;
         }
@@ -512,11 +506,11 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
         hidden_disk = active_disk->bs->backing;
         if (!hidden_disk->bs || !hidden_disk->bs->backing) {
             error_setg(errp, "Hidden disk doesn't have backing file");
-            bdrv_graph_rdunlock_main_loop();
             aio_context_release(aio_context);
             return;
         }
+        bdrv_graph_rdlock_main_loop();
         secondary_disk = hidden_disk->bs->backing;
         if (!secondary_disk->bs || !bdrv_has_blk(secondary_disk->bs)) {
             error_setg(errp, "The secondary disk doesn't have block backend");
@@ -756,13 +750,11 @@ static void replication_stop(ReplicationState *rs, bool failover, Error **errp)
         return;
     }
-    bdrv_graph_rdlock_main_loop();
     s->stage = BLOCK_REPLICATION_FAILOVER;
     s->commit_job = commit_active_start(
                             NULL, bs->file->bs, s->secondary_disk->bs,
                             JOB_INTERNAL, 0, BLOCKDEV_ON_ERROR_REPORT,
                             NULL, replication_done, bs, true, errp);
-    bdrv_graph_rdunlock_main_loop();
     break;
 default:
     aio_context_release(aio_context);


@@ -73,7 +73,7 @@ snapshot_access_co_pwritev_part(BlockDriverState *bs,
 }
-static void GRAPH_RDLOCK snapshot_access_refresh_filename(BlockDriverState *bs)
+static void snapshot_access_refresh_filename(BlockDriverState *bs)
 {
     pstrcpy(bs->exact_filename, sizeof(bs->exact_filename),
             bs->file->bs->filename);
@@ -85,9 +85,6 @@ static int snapshot_access_open(BlockDriverState *bs, QDict *options, int flags,
     bdrv_open_child(NULL, options, "file", bs, &child_of_bds,
                     BDRV_CHILD_DATA | BDRV_CHILD_PRIMARY,
                     false, errp);
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     if (!bs->file) {
         return -EINVAL;
     }


@@ -53,20 +53,13 @@ static int coroutine_fn stream_populate(BlockBackend *blk,
 static int stream_prepare(Job *job)
 {
     StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
-    BlockDriverState *unfiltered_bs;
-    BlockDriverState *unfiltered_bs_cow;
+    BlockDriverState *unfiltered_bs = bdrv_skip_filters(s->target_bs);
+    BlockDriverState *unfiltered_bs_cow = bdrv_cow_bs(unfiltered_bs);
     BlockDriverState *base;
     BlockDriverState *unfiltered_base;
     Error *local_err = NULL;
     int ret = 0;
-    GLOBAL_STATE_CODE();
-    bdrv_graph_rdlock_main_loop();
-    unfiltered_bs = bdrv_skip_filters(s->target_bs);
-    unfiltered_bs_cow = bdrv_cow_bs(unfiltered_bs);
-    bdrv_graph_rdunlock_main_loop();
     /* We should drop filter at this point, as filter hold the backing chain */
     bdrv_cor_filter_drop(s->cor_filter_bs);
     s->cor_filter_bs = NULL;
@@ -85,12 +78,10 @@ static int stream_prepare(Job *job)
         bdrv_drained_begin(unfiltered_bs_cow);
     }
-    bdrv_graph_rdlock_main_loop();
     base = bdrv_filter_or_cow_bs(s->above_base);
     unfiltered_base = bdrv_skip_filters(base);
-    bdrv_graph_rdunlock_main_loop();
-    if (unfiltered_bs_cow) {
+    if (bdrv_cow_child(unfiltered_bs)) {
         const char *base_id = NULL, *base_fmt = NULL;
         if (unfiltered_base) {
             base_id = s->backing_file_str ?: unfiltered_base->filename;
@@ -99,9 +90,7 @@ static int stream_prepare(Job *job)
         }
     }
-    bdrv_graph_wrlock(base);
     bdrv_set_backing_hd_drained(unfiltered_bs, base, &local_err);
-    bdrv_graph_wrunlock();
     /*
      * This call will do I/O, so the graph can change again from here on.
@@ -149,19 +138,18 @@ static void stream_clean(Job *job)
 static int coroutine_fn stream_run(Job *job, Error **errp)
 {
     StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
-    BlockDriverState *unfiltered_bs;
+    BlockDriverState *unfiltered_bs = bdrv_skip_filters(s->target_bs);
     int64_t len;
     int64_t offset = 0;
     int error = 0;
     int64_t n = 0; /* bytes */
-    WITH_GRAPH_RDLOCK_GUARD() {
-        unfiltered_bs = bdrv_skip_filters(s->target_bs);
-        if (unfiltered_bs == s->base_overlay) {
-            /* Nothing to stream */
-            return 0;
-        }
+    if (unfiltered_bs == s->base_overlay) {
+        /* Nothing to stream */
+        return 0;
+    }
+    WITH_GRAPH_RDLOCK_GUARD() {
         len = bdrv_co_getlength(s->target_bs);
         if (len < 0) {
             return len;
@@ -268,8 +256,6 @@ void stream_start(const char *job_id, BlockDriverState *bs,
     assert(!(base && bottom));
     assert(!(backing_file_str && bottom));
-    bdrv_graph_rdlock_main_loop();
     if (bottom) {
         /*
          * New simple interface. The code is written in terms of old interface
@@ -286,7 +272,7 @@ void stream_start(const char *job_id, BlockDriverState *bs,
         if (!base_overlay) {
             error_setg(errp, "'%s' is not in the backing chain of '%s'",
                        base->node_name, bs->node_name);
-            goto out_rdlock;
+            return;
         }
     /*
@@ -308,7 +294,7 @@ void stream_start(const char *job_id, BlockDriverState *bs,
     if (bs_read_only) {
         /* Hold the chain during reopen */
         if (bdrv_freeze_backing_chain(bs, above_base, errp) < 0) {
-            goto out_rdlock;
+            return;
         }
         ret = bdrv_reopen_set_read_only(bs, false, errp);
@@ -317,12 +303,10 @@ void stream_start(const char *job_id, BlockDriverState *bs,
         bdrv_unfreeze_backing_chain(bs, above_base);
         if (ret < 0) {
-            goto out_rdlock;
+            return;
         }
     }
-    bdrv_graph_rdunlock_main_loop();
     opts = qdict_new();
     qdict_put_str(opts, "driver", "copy-on-read");
@@ -366,10 +350,8 @@ void stream_start(const char *job_id, BlockDriverState *bs,
      * already have our own plans. Also don't allow resize as the image size is
      * queried only at the job start and then cached.
      */
-    bdrv_graph_wrlock(bs);
     if (block_job_add_bdrv(&s->common, "active node", bs, 0,
                            basic_flags | BLK_PERM_WRITE, errp)) {
-        bdrv_graph_wrunlock();
         goto fail;
     }
@@ -389,11 +371,9 @@ void stream_start(const char *job_id, BlockDriverState *bs,
         ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
                                  basic_flags, errp);
         if (ret < 0) {
-            bdrv_graph_wrunlock();
             goto fail;
         }
     }
-    bdrv_graph_wrunlock();
     s->base_overlay = base_overlay;
     s->above_base = above_base;
@@ -417,8 +397,4 @@ fail:
     if (bs_read_only) {
         bdrv_reopen_set_read_only(bs, true, NULL);
     }
-    return;
-out_rdlock:
-    bdrv_graph_rdunlock_main_loop();
 }


@@ -84,9 +84,6 @@ static int throttle_open(BlockDriverState *bs, QDict *options,
     if (ret < 0) {
         return ret;
     }
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     bs->supported_write_flags = bs->file->bs->supported_write_flags |
                                 BDRV_REQ_WRITE_UNCHANGED;
     bs->supported_zero_flags = bs->file->bs->supported_zero_flags |


@@ -239,7 +239,7 @@ static void vdi_header_to_le(VdiHeader *header)
 static void vdi_header_print(VdiHeader *header)
 {
-    char uuidstr[UUID_STR_LEN];
+    char uuidstr[37];
     QemuUUID uuid;
     logout("text %s", header->text);
     logout("signature 0x%08x\n", header->signature);
@@ -383,8 +383,6 @@ static int vdi_open(BlockDriverState *bs, QDict *options, int flags,
         return ret;
     }
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     logout("\n");
     ret = bdrv_pread(bs->file, 0, sizeof(header), &header, 0);
@@ -494,9 +492,11 @@ static int vdi_open(BlockDriverState *bs, QDict *options, int flags,
     }
     /* Disable migration when vdi images are used */
+    bdrv_graph_rdlock_main_loop();
     error_setg(&s->migration_blocker, "The vdi format used by node '%s' "
                "does not support live migration",
                bdrv_get_device_or_node_name(bs));
+    bdrv_graph_rdunlock_main_loop();
     ret = migrate_add_blocker_normal(&s->migration_blocker, errp);
     if (ret < 0) {
@@ -520,10 +520,11 @@ static int vdi_reopen_prepare(BDRVReopenState *state,
     return 0;
 }
-static int coroutine_fn GRAPH_RDLOCK
-vdi_co_block_status(BlockDriverState *bs, bool want_zero, int64_t offset,
-                    int64_t bytes, int64_t *pnum, int64_t *map,
-                    BlockDriverState **file)
+static int coroutine_fn vdi_co_block_status(BlockDriverState *bs,
+                                            bool want_zero,
+                                            int64_t offset, int64_t bytes,
+                                            int64_t *pnum, int64_t *map,
+                                            BlockDriverState **file)
 {
     BDRVVdiState *s = (BDRVVdiState *)bs->opaque;
     size_t bmap_index = offset / s->block_size;
@@ -989,7 +990,7 @@ static void vdi_close(BlockDriverState *bs)
     migrate_del_blocker(&s->migration_blocker);
 }
-static int GRAPH_RDLOCK vdi_has_zero_init(BlockDriverState *bs)
+static int vdi_has_zero_init(BlockDriverState *bs)
 {
     BDRVVdiState *s = bs->opaque;


@@ -55,9 +55,8 @@ static const MSGUID zero_guid = { 0 };
 /* Allow peeking at the hdr entry at the beginning of the current
  * read index, without advancing the read index */
-static int GRAPH_RDLOCK
-vhdx_log_peek_hdr(BlockDriverState *bs, VHDXLogEntries *log,
-                  VHDXLogEntryHeader *hdr)
+static int vhdx_log_peek_hdr(BlockDriverState *bs, VHDXLogEntries *log,
+                             VHDXLogEntryHeader *hdr)
 {
     int ret = 0;
     uint64_t offset;
@@ -108,7 +107,7 @@ static int vhdx_log_inc_idx(uint32_t idx, uint64_t length)
 /* Reset the log to empty */
-static void GRAPH_RDLOCK vhdx_log_reset(BlockDriverState *bs, BDRVVHDXState *s)
+static void vhdx_log_reset(BlockDriverState *bs, BDRVVHDXState *s)
 {
     MSGUID guid = { 0 };
     s->log.read = s->log.write = 0;
@@ -128,10 +127,9 @@ static void GRAPH_RDLOCK vhdx_log_reset(BlockDriverState *bs, BDRVVHDXState *s)
  * not modified.
  *
  * 0 is returned on success, -errno otherwise. */
-static int GRAPH_RDLOCK
-vhdx_log_read_sectors(BlockDriverState *bs, VHDXLogEntries *log,
-                      uint32_t *sectors_read, void *buffer,
-                      uint32_t num_sectors, bool peek)
+static int vhdx_log_read_sectors(BlockDriverState *bs, VHDXLogEntries *log,
+                                 uint32_t *sectors_read, void *buffer,
+                                 uint32_t num_sectors, bool peek)
 {
     int ret = 0;
     uint64_t offset;
@@ -335,9 +333,9 @@ static int vhdx_compute_desc_sectors(uint32_t desc_cnt)
  * will allocate all the space for buffer, which must be NULL when
  * passed into this function. Each descriptor will also be validated,
  * and error returned if any are invalid. */
-static int GRAPH_RDLOCK
-vhdx_log_read_desc(BlockDriverState *bs, BDRVVHDXState *s, VHDXLogEntries *log,
-                   VHDXLogDescEntries **buffer, bool convert_endian)
+static int vhdx_log_read_desc(BlockDriverState *bs, BDRVVHDXState *s,
+                              VHDXLogEntries *log, VHDXLogDescEntries **buffer,
+                              bool convert_endian)
 {
     int ret = 0;
     uint32_t desc_sectors;
@@ -414,9 +412,8 @@ exit:
  * For a zero descriptor, it may describe multiple sectors to fill with zeroes.
  * In this case, it should be noted that zeroes are written to disk, and the
  * image file is not extended as a sparse file. */
-static int GRAPH_RDLOCK
-vhdx_log_flush_desc(BlockDriverState *bs, VHDXLogDescriptor *desc,
-                    VHDXLogDataSector *data)
+static int vhdx_log_flush_desc(BlockDriverState *bs, VHDXLogDescriptor *desc,
+                               VHDXLogDataSector *data)
 {
     int ret = 0;
     uint64_t seq, file_offset;
@@ -487,8 +484,8 @@ exit:
  * file, and then set the log to 'empty' status once complete.
  *
  * The log entries should be validate prior to flushing */
-static int GRAPH_RDLOCK
-vhdx_log_flush(BlockDriverState *bs, BDRVVHDXState *s, VHDXLogSequence *logs)
+static int vhdx_log_flush(BlockDriverState *bs, BDRVVHDXState *s,
+                          VHDXLogSequence *logs)
 {
     int ret = 0;
     int i;
@@ -587,10 +584,9 @@ exit:
     return ret;
 }
-static int GRAPH_RDLOCK
-vhdx_validate_log_entry(BlockDriverState *bs, BDRVVHDXState *s,
-                        VHDXLogEntries *log, uint64_t seq,
-                        bool *valid, VHDXLogEntryHeader *entry)
+static int vhdx_validate_log_entry(BlockDriverState *bs, BDRVVHDXState *s,
+                                   VHDXLogEntries *log, uint64_t seq,
+                                   bool *valid, VHDXLogEntryHeader *entry)
 {
     int ret = 0;
     VHDXLogEntryHeader hdr;
@@ -667,8 +663,8 @@ free_and_exit:
 /* Search through the log circular buffer, and find the valid, active
  * log sequence, if any exists
  * */
-static int GRAPH_RDLOCK
-vhdx_log_search(BlockDriverState *bs, BDRVVHDXState *s, VHDXLogSequence *logs)
+static int vhdx_log_search(BlockDriverState *bs, BDRVVHDXState *s,
+                           VHDXLogSequence *logs)
 {
     int ret = 0;
     uint32_t tail;


@@ -353,9 +353,8 @@ exit:
  *
  * - non-current header is updated with largest sequence number
  */
-static int GRAPH_RDLOCK
-vhdx_update_header(BlockDriverState *bs, BDRVVHDXState *s,
-                   bool generate_data_write_guid, MSGUID *log_guid)
+static int vhdx_update_header(BlockDriverState *bs, BDRVVHDXState *s,
+                              bool generate_data_write_guid, MSGUID *log_guid)
 {
     int ret = 0;
     int hdr_idx = 0;
@@ -417,8 +416,8 @@ int vhdx_update_headers(BlockDriverState *bs, BDRVVHDXState *s,
 }
 /* opens the specified header block from the VHDX file header section */
-static void GRAPH_RDLOCK
-vhdx_parse_header(BlockDriverState *bs, BDRVVHDXState *s, Error **errp)
+static void vhdx_parse_header(BlockDriverState *bs, BDRVVHDXState *s,
+                              Error **errp)
 {
     int ret;
     VHDXHeader *header1;
@@ -518,8 +517,7 @@ exit:
 }
-static int GRAPH_RDLOCK
-vhdx_open_region_tables(BlockDriverState *bs, BDRVVHDXState *s)
+static int vhdx_open_region_tables(BlockDriverState *bs, BDRVVHDXState *s)
 {
     int ret = 0;
     uint8_t *buffer;
@@ -636,8 +634,7 @@ fail:
  * Also, if the File Parameters indicate this is a differencing file,
  * we must also look for the Parent Locator metadata item.
  */
-static int GRAPH_RDLOCK
-vhdx_parse_metadata(BlockDriverState *bs, BDRVVHDXState *s)
+static int vhdx_parse_metadata(BlockDriverState *bs, BDRVVHDXState *s)
 {
     int ret = 0;
     uint8_t *buffer;
@@ -888,8 +885,7 @@ static void vhdx_calc_bat_entries(BDRVVHDXState *s)
 }
-static int coroutine_mixed_fn GRAPH_RDLOCK
-vhdx_check_bat_entries(BlockDriverState *bs, int *errcnt)
+static int vhdx_check_bat_entries(BlockDriverState *bs, int *errcnt)
 {
     BDRVVHDXState *s = bs->opaque;
     int64_t image_file_size = bdrv_getlength(bs->file->bs);
@@ -1699,7 +1695,7 @@ exit:
  * Fixed images: default state of the BAT is fully populated, with
  * file offsets and state PAYLOAD_BLOCK_FULLY_PRESENT.
  */
-static int coroutine_fn GRAPH_UNLOCKED
+static int coroutine_fn
 vhdx_create_bat(BlockBackend *blk, BDRVVHDXState *s,
                 uint64_t image_size, VHDXImageType type,
                 bool use_zero_blocks, uint64_t file_offset,
@@ -1712,7 +1708,6 @@ vhdx_create_bat(BlockBackend *blk, BDRVVHDXState *s,
     uint64_t unused;
     int block_state;
     VHDXSectorInfo sinfo;
-    bool has_zero_init;
     assert(s->bat == NULL);
@@ -1742,13 +1737,9 @@ vhdx_create_bat(BlockBackend *blk, BDRVVHDXState *s,
         goto exit;
     }
-    bdrv_graph_co_rdlock();
-    has_zero_init = bdrv_has_zero_init(blk_bs(blk));
-    bdrv_graph_co_rdunlock();
     if (type == VHDX_TYPE_FIXED ||
         use_zero_blocks ||
-        has_zero_init == 0) {
+        bdrv_has_zero_init(blk_bs(blk)) == 0) {
         /* for a fixed file, the default BAT entry is not zero */
         s->bat = g_try_malloc0(length);
         if (length && s->bat == NULL) {
@@ -1791,7 +1782,7 @@ exit:
  * to create the BAT itself, we will also cause the BAT to be
  * created.
  */
-static int coroutine_fn GRAPH_UNLOCKED
+static int coroutine_fn
 vhdx_create_new_region_table(BlockBackend *blk, uint64_t image_size,
                              uint32_t block_size, uint32_t sector_size,
                              uint32_t log_size, bool use_zero_blocks,
@@ -2167,9 +2158,9 @@ fail:
  * r/w and any log has already been replayed, so there is nothing (currently)
  * for us to do here
  */
-static int coroutine_fn GRAPH_RDLOCK
-vhdx_co_check(BlockDriverState *bs, BdrvCheckResult *result,
-              BdrvCheckMode fix)
+static int coroutine_fn vhdx_co_check(BlockDriverState *bs,
                                       BdrvCheckResult *result,
                                       BdrvCheckMode fix)
 {
     BDRVVHDXState *s = bs->opaque;
@@ -2182,7 +2173,7 @@ vhdx_co_check(BlockDriverState *bs, BdrvCheckResult *result,
     return 0;
 }
-static int GRAPH_RDLOCK vhdx_has_zero_init(BlockDriverState *bs)
+static int vhdx_has_zero_init(BlockDriverState *bs)
 {
     BDRVVHDXState *s = bs->opaque;
     int state;


@@ -401,9 +401,8 @@ typedef struct BDRVVHDXState {
 void vhdx_guid_generate(MSGUID *guid);
-int GRAPH_RDLOCK
-vhdx_update_headers(BlockDriverState *bs, BDRVVHDXState *s, bool rw,
-                    MSGUID *log_guid);
+int vhdx_update_headers(BlockDriverState *bs, BDRVVHDXState *s, bool rw,
+                        MSGUID *log_guid);
 uint32_t vhdx_update_checksum(uint8_t *buf, size_t size, int crc_offset);
 uint32_t vhdx_checksum_calc(uint32_t crc, uint8_t *buf, size_t size,
@@ -449,8 +448,6 @@ void vhdx_metadata_header_le_import(VHDXMetadataTableHeader *hdr);
 void vhdx_metadata_header_le_export(VHDXMetadataTableHeader *hdr);
 void vhdx_metadata_entry_le_import(VHDXMetadataTableEntry *e);
 void vhdx_metadata_entry_le_export(VHDXMetadataTableEntry *e);
-int GRAPH_RDLOCK
-vhdx_user_visible_write(BlockDriverState *bs, BDRVVHDXState *s);
+int vhdx_user_visible_write(BlockDriverState *bs, BDRVVHDXState *s);
 #endif


@@ -300,8 +300,7 @@ static void vmdk_free_last_extent(BlockDriverState *bs)
 }
 /* Return -ve errno, or 0 on success and write CID into *pcid. */
-static int GRAPH_RDLOCK
-vmdk_read_cid(BlockDriverState *bs, int parent, uint32_t *pcid)
+static int vmdk_read_cid(BlockDriverState *bs, int parent, uint32_t *pcid)
 {
     char *desc;
     uint32_t cid;
@@ -381,7 +380,7 @@ out:
     return ret;
 }
-static int coroutine_fn GRAPH_RDLOCK vmdk_is_cid_valid(BlockDriverState *bs)
+static int coroutine_fn vmdk_is_cid_valid(BlockDriverState *bs)
 {
     BDRVVmdkState *s = bs->opaque;
     uint32_t cur_pcid;
@@ -416,9 +415,6 @@ static int vmdk_reopen_prepare(BDRVReopenState *state,
     BDRVVmdkReopenState *rs;
     int i;
-    GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     assert(state != NULL);
     assert(state->bs != NULL);
     assert(state->opaque == NULL);
@@ -455,9 +451,6 @@ static void vmdk_reopen_commit(BDRVReopenState *state)
     BDRVVmdkReopenState *rs = state->opaque;
     int i;
-    GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     for (i = 0; i < s->num_extents; i++) {
         if (rs->extents_using_bs_file[i]) {
             s->extents[i].file = state->bs->file;
@@ -472,7 +465,7 @@ static void vmdk_reopen_abort(BDRVReopenState *state)
     vmdk_reopen_clean(state);
 }
-static int GRAPH_RDLOCK vmdk_parent_open(BlockDriverState *bs)
+static int vmdk_parent_open(BlockDriverState *bs)
 {
     char *p_name;
     char *desc;
@@ -2554,10 +2547,7 @@ vmdk_co_do_create(int64_t size,
         ret = -EINVAL;
         goto exit;
     }
-    bdrv_graph_co_rdlock();
     ret = vmdk_read_cid(blk_bs(backing), 0, &parent_cid);
-    bdrv_graph_co_rdunlock();
     blk_co_unref(backing);
     if (ret) {
         error_setg(errp, "Failed to read parent CID");
@@ -2904,7 +2894,7 @@ vmdk_co_get_allocated_file_size(BlockDriverState *bs)
     return ret;
 }
-static int GRAPH_RDLOCK vmdk_has_zero_init(BlockDriverState *bs)
+static int vmdk_has_zero_init(BlockDriverState *bs)
 {
     int i;
     BDRVVmdkState *s = bs->opaque;
@@ -3054,9 +3044,8 @@ vmdk_co_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
     return 0;
 }
-static void GRAPH_RDLOCK
-vmdk_gather_child_options(BlockDriverState *bs, QDict *target,
-                          bool backing_overridden)
+static void vmdk_gather_child_options(BlockDriverState *bs, QDict *target,
+                                      bool backing_overridden)
 {
     /* No children but file and backing can be explicitly specified (TODO) */
     qdict_put(target, "file",


@@ -238,8 +238,6 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
         return ret;
     }
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
     opts = qemu_opts_create(&vpc_runtime_opts, NULL, 0, &error_abort);
     if (!qemu_opts_absorb_qdict(opts, options, errp)) {
         ret = -EINVAL;
@@ -448,9 +446,11 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
     }
     /* Disable migration when VHD images are used */
+    bdrv_graph_rdlock_main_loop();
     error_setg(&s->migration_blocker, "The vpc format used by node '%s' "
                "does not support live migration",
                bdrv_get_device_or_node_name(bs));
+    bdrv_graph_rdunlock_main_loop();
     ret = migrate_add_blocker_normal(&s->migration_blocker, errp);
     if (ret < 0) {
@@ -1170,7 +1170,7 @@ fail:
 }
-static int GRAPH_RDLOCK vpc_has_zero_init(BlockDriverState *bs)
+static int vpc_has_zero_init(BlockDriverState *bs)
 {
     BDRVVPCState *s = bs->opaque;


@@ -255,13 +255,13 @@ void drive_check_orphaned(void)
* Ignore default drives, because we create certain default * Ignore default drives, because we create certain default
* drives unconditionally, then leave them unclaimed. Not the * drives unconditionally, then leave them unclaimed. Not the
* users fault. * users fault.
* Ignore IF_VIRTIO or IF_XEN, because it gets desugared into * Ignore IF_VIRTIO, because it gets desugared into -device,
* -device, so we can leave failing to -device. * so we can leave failing to -device.
* Ignore IF_NONE, because leaving unclaimed IF_NONE remains * Ignore IF_NONE, because leaving unclaimed IF_NONE remains
* available for device_add is a feature. * available for device_add is a feature.
*/ */
if (dinfo->is_default || dinfo->type == IF_VIRTIO if (dinfo->is_default || dinfo->type == IF_VIRTIO
|| dinfo->type == IF_XEN || dinfo->type == IF_NONE) { || dinfo->type == IF_NONE) {
continue; continue;
} }
if (!blk_get_attached_dev(blk)) { if (!blk_get_attached_dev(blk)) {
@@ -977,15 +977,6 @@ DriveInfo *drive_new(QemuOpts *all_opts, BlockInterfaceType block_default_type,
qemu_opt_set(devopts, "driver", "virtio-blk", &error_abort); qemu_opt_set(devopts, "driver", "virtio-blk", &error_abort);
qemu_opt_set(devopts, "drive", qdict_get_str(bs_opts, "id"), qemu_opt_set(devopts, "drive", qdict_get_str(bs_opts, "id"),
&error_abort); &error_abort);
} else if (type == IF_XEN) {
QemuOpts *devopts;
devopts = qemu_opts_create(qemu_find_opts("device"), NULL, 0,
&error_abort);
qemu_opt_set(devopts, "driver",
(media == MEDIA_CDROM) ? "xen-cdrom" : "xen-disk",
&error_abort);
qemu_opt_set(devopts, "drive", qdict_get_str(bs_opts, "id"),
&error_abort);
} }
filename = qemu_opt_get(legacy_opts, "file"); filename = qemu_opt_get(legacy_opts, "file");
@@ -1610,12 +1601,7 @@ static void external_snapshot_abort(void *opaque)
aio_context_acquire(aio_context); aio_context_acquire(aio_context);
} }
bdrv_drained_begin(state->new_bs);
bdrv_graph_wrlock(state->old_bs);
bdrv_replace_node(state->new_bs, state->old_bs, &error_abort); bdrv_replace_node(state->new_bs, state->old_bs, &error_abort);
bdrv_graph_wrunlock();
bdrv_drained_end(state->new_bs);
bdrv_unref(state->old_bs); /* bdrv_replace_node() ref'ed old_bs */ bdrv_unref(state->old_bs); /* bdrv_replace_node() ref'ed old_bs */
aio_context_release(aio_context); aio_context_release(aio_context);
@@ -1715,6 +1701,7 @@ static void drive_backup_action(DriveBackup *backup,
bdrv_graph_rdunlock_main_loop(); bdrv_graph_rdunlock_main_loop();
goto out; goto out;
} }
bdrv_graph_rdunlock_main_loop();
flags = bs->open_flags | BDRV_O_RDWR; flags = bs->open_flags | BDRV_O_RDWR;
@@ -1739,7 +1726,6 @@ static void drive_backup_action(DriveBackup *backup,
flags |= BDRV_O_NO_BACKING; flags |= BDRV_O_NO_BACKING;
set_backing_hd = true; set_backing_hd = true;
} }
bdrv_graph_rdunlock_main_loop();
size = bdrv_getlength(bs); size = bdrv_getlength(bs);
if (size < 0) { if (size < 0) {
@@ -1751,10 +1737,10 @@ static void drive_backup_action(DriveBackup *backup,
assert(format); assert(format);
if (source) { if (source) {
/* Implicit filters should not appear in the filename */ /* Implicit filters should not appear in the filename */
BlockDriverState *explicit_backing; BlockDriverState *explicit_backing =
bdrv_skip_implicit_filters(source);
bdrv_graph_rdlock_main_loop(); bdrv_graph_rdlock_main_loop();
explicit_backing = bdrv_skip_implicit_filters(source);
bdrv_refresh_filename(explicit_backing); bdrv_refresh_filename(explicit_backing);
bdrv_graph_rdunlock_main_loop(); bdrv_graph_rdunlock_main_loop();
@@ -2455,12 +2441,11 @@ void qmp_block_stream(const char *job_id, const char *device,
     aio_context = bdrv_get_aio_context(bs);
     aio_context_acquire(aio_context);
-    bdrv_graph_rdlock_main_loop();
     if (base) {
         base_bs = bdrv_find_backing_image(bs, base);
         if (base_bs == NULL) {
             error_setg(errp, "Can't find '%s' in the backing chain", base);
-            goto out_rdlock;
+            goto out;
         }
         assert(bdrv_get_aio_context(base_bs) == aio_context);
     }
@@ -2468,36 +2453,38 @@ void qmp_block_stream(const char *job_id, const char *device,
     if (base_node) {
         base_bs = bdrv_lookup_bs(NULL, base_node, errp);
         if (!base_bs) {
-            goto out_rdlock;
+            goto out;
         }
         if (bs == base_bs || !bdrv_chain_contains(bs, base_bs)) {
             error_setg(errp, "Node '%s' is not a backing image of '%s'",
                        base_node, device);
-            goto out_rdlock;
+            goto out;
         }
         assert(bdrv_get_aio_context(base_bs) == aio_context);
+        bdrv_graph_rdlock_main_loop();
         bdrv_refresh_filename(base_bs);
+        bdrv_graph_rdunlock_main_loop();
     }
     if (bottom) {
         bottom_bs = bdrv_lookup_bs(NULL, bottom, errp);
         if (!bottom_bs) {
-            goto out_rdlock;
+            goto out;
         }
         if (!bottom_bs->drv) {
             error_setg(errp, "Node '%s' is not open", bottom);
-            goto out_rdlock;
+            goto out;
         }
         if (bottom_bs->drv->is_filter) {
             error_setg(errp, "Node '%s' is a filter, use a non-filter node "
                        "as 'bottom'", bottom);
-            goto out_rdlock;
+            goto out;
         }
         if (!bdrv_chain_contains(bs, bottom_bs)) {
             error_setg(errp, "Node '%s' is not in a chain starting from '%s'",
                        bottom, device);
-            goto out_rdlock;
+            goto out;
         }
         assert(bdrv_get_aio_context(bottom_bs) == aio_context);
     }
@@ -2506,11 +2493,13 @@ void qmp_block_stream(const char *job_id, const char *device,
      * Check for op blockers in the whole chain between bs and base (or bottom)
      */
     iter_end = bottom ? bdrv_filter_or_cow_bs(bottom_bs) : base_bs;
+    bdrv_graph_rdlock_main_loop();
     for (iter = bs; iter && iter != iter_end;
          iter = bdrv_filter_or_cow_bs(iter))
     {
         if (bdrv_op_is_blocked(iter, BLOCK_OP_TYPE_STREAM, errp)) {
-            goto out_rdlock;
+            bdrv_graph_rdunlock_main_loop();
+            goto out;
         }
     }
     bdrv_graph_rdunlock_main_loop();
@@ -2542,11 +2531,6 @@ void qmp_block_stream(const char *job_id, const char *device,
 out:
     aio_context_release(aio_context);
-    return;
-
-out_rdlock:
-    bdrv_graph_rdunlock_main_loop();
-    aio_context_release(aio_context);
 }

 void qmp_block_commit(const char *job_id, const char *device,
@@ -3061,6 +3045,7 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp)
         bdrv_graph_rdunlock_main_loop();
         return;
     }
+    bdrv_graph_rdunlock_main_loop();
     aio_context = bdrv_get_aio_context(bs);
     aio_context_acquire(aio_context);
@@ -3082,7 +3067,6 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp)
     if (arg->sync == MIRROR_SYNC_MODE_NONE) {
         target_backing_bs = bs;
     }
-    bdrv_graph_rdunlock_main_loop();
     size = bdrv_getlength(bs);
@@ -3115,18 +3099,16 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp)
         bdrv_img_create(arg->target, format,
                         NULL, NULL, NULL, size, flags, false, &local_err);
     } else {
-        BlockDriverState *explicit_backing;
+        /* Implicit filters should not appear in the filename */
+        BlockDriverState *explicit_backing =
+            bdrv_skip_implicit_filters(target_backing_bs);
         switch (arg->mode) {
         case NEW_IMAGE_MODE_EXISTING:
             break;
         case NEW_IMAGE_MODE_ABSOLUTE_PATHS:
-            /*
-             * Create new image with backing file.
-             * Implicit filters should not appear in the filename.
-             */
+            /* create new image with backing file */
             bdrv_graph_rdlock_main_loop();
-            explicit_backing = bdrv_skip_implicit_filters(target_backing_bs);
             bdrv_refresh_filename(explicit_backing);
             bdrv_graph_rdunlock_main_loop();
@@ -3165,11 +3147,9 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp)
         return;
     }
-    bdrv_graph_rdlock_main_loop();
     zero_target = (arg->sync == MIRROR_SYNC_MODE_FULL &&
                    (arg->mode == NEW_IMAGE_MODE_EXISTING ||
                     !bdrv_has_zero_init(target_bs)));
-    bdrv_graph_rdunlock_main_loop();
     /* Honor bdrv_try_change_aio_context() context acquisition requirements. */
@@ -3446,38 +3426,38 @@ void qmp_change_backing_file(const char *device,
     aio_context = bdrv_get_aio_context(bs);
     aio_context_acquire(aio_context);
-    bdrv_graph_rdlock_main_loop();
     image_bs = bdrv_lookup_bs(NULL, image_node_name, &local_err);
     if (local_err) {
         error_propagate(errp, local_err);
-        goto out_rdlock;
+        goto out;
     }
     if (!image_bs) {
         error_setg(errp, "image file not found");
-        goto out_rdlock;
+        goto out;
     }
     if (bdrv_find_base(image_bs) == image_bs) {
         error_setg(errp, "not allowing backing file change on an image "
                          "without a backing file");
-        goto out_rdlock;
+        goto out;
     }
     /* even though we are not necessarily operating on bs, we need it to
      * determine if block ops are currently prohibited on the chain */
+    bdrv_graph_rdlock_main_loop();
     if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_CHANGE, errp)) {
-        goto out_rdlock;
+        bdrv_graph_rdunlock_main_loop();
+        goto out;
     }
+    bdrv_graph_rdunlock_main_loop();
     /* final sanity check */
     if (!bdrv_chain_contains(bs, image_bs)) {
         error_setg(errp, "'%s' and image file are not in the same chain",
                    device);
-        goto out_rdlock;
+        goto out;
     }
-    bdrv_graph_rdunlock_main_loop();
     /* if not r/w, reopen to make r/w */
     ro = bdrv_is_read_only(image_bs);
@@ -3505,11 +3485,6 @@ void qmp_change_backing_file(const char *device,
 out:
     aio_context_release(aio_context);
-    return;
-
-out_rdlock:
-    bdrv_graph_rdunlock_main_loop();
-    aio_context_release(aio_context);
 }

 void qmp_blockdev_add(BlockdevOptions *options, Error **errp)


@@ -513,8 +513,7 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
     BlockJob *job;
     int ret;
     GLOBAL_STATE_CODE();
-
-    bdrv_graph_wrlock(bs);
+    GRAPH_RDLOCK_GUARD_MAINLOOP();

     if (job_id == NULL && !(flags & JOB_INTERNAL)) {
         job_id = bdrv_get_device_name(bs);
@@ -523,7 +522,6 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
     job = job_create(job_id, &driver->job_driver, txn, bdrv_get_aio_context(bs),
                      flags, cb, opaque, errp);
     if (job == NULL) {
-        bdrv_graph_wrunlock();
         return NULL;
     }
@@ -563,11 +561,9 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
         goto fail;
     }
-    bdrv_graph_wrunlock();
     return job;

 fail:
-    bdrv_graph_wrunlock();
     job_early_fail(&job->job);
     return NULL;
 }


@@ -118,7 +118,7 @@ void fork_end(int child)
      */
     CPU_FOREACH_SAFE(cpu, next_cpu) {
         if (cpu != thread_cpu) {
-            QTAILQ_REMOVE_RCU(&cpus_queue, cpu, node);
+            QTAILQ_REMOVE_RCU(&cpus, cpu, node);
         }
     }
     mmap_fork_end(child);


@@ -14,7 +14,6 @@ CONFIG_SAM460EX=y
 CONFIG_MAC_OLDWORLD=y
 CONFIG_MAC_NEWWORLD=y
-CONFIG_AMIGAONE=y
 CONFIG_PEGASOS2=y

 # For PReP


@@ -1,5 +1,4 @@
 TARGET_ARCH=hppa
-TARGET_ABI32=y
 TARGET_SYSTBL_ABI=common,32
 TARGET_SYSTBL=syscall.tbl
 TARGET_BIG_ENDIAN=y


@@ -1,4 +1,3 @@
 # Default configuration for loongarch64-linux-user
 TARGET_ARCH=loongarch64
 TARGET_BASE_ARCH=loongarch
-TARGET_XML_FILES=gdb-xml/loongarch-base64.xml gdb-xml/loongarch-fpu.xml

configure (vendored)

@@ -309,7 +309,6 @@ fi
 ar="${AR-${cross_prefix}ar}"
 as="${AS-${cross_prefix}as}"
 ccas="${CCAS-$cc}"
-dlltool="${DLLTOOL-${cross_prefix}dlltool}"
 objcopy="${OBJCOPY-${cross_prefix}objcopy}"
 ld="${LD-${cross_prefix}ld}"
 ranlib="${RANLIB-${cross_prefix}ranlib}"
@@ -1011,9 +1010,9 @@ if test "$targetos" = "bogus"; then
 fi

 # test for any invalid configuration combinations
-if test "$targetos" = "windows" && ! has "$dlltool"; then
+if test "$targetos" = "windows"; then
     if test "$plugins" = "yes"; then
-        error_exit "TCG plugins requires dlltool to build on Windows platforms"
+        error_exit "TCG plugins not currently supported on Windows platforms"
     fi
     plugins="no"
 fi
@@ -1660,15 +1659,9 @@ echo "SRC_PATH=$source_path/contrib/plugins" >> contrib/plugins/$config_host_mak
 echo "PKG_CONFIG=${pkg_config}" >> contrib/plugins/$config_host_mak
 echo "CC=$cc $CPU_CFLAGS" >> contrib/plugins/$config_host_mak
 echo "CFLAGS=${CFLAGS-$default_cflags} $EXTRA_CFLAGS" >> contrib/plugins/$config_host_mak
-if test "$targetos" = windows; then
-  echo "DLLTOOL=$dlltool" >> contrib/plugins/$config_host_mak
-fi
 if test "$targetos" = darwin; then
   echo "CONFIG_DARWIN=y" >> contrib/plugins/$config_host_mak
 fi
-if test "$targetos" = windows; then
-  echo "CONFIG_WIN32=y" >> contrib/plugins/$config_host_mak
-fi

 # tests/tcg configuration
 (config_host_mak=tests/tcg/config-host.mak
@@ -1771,7 +1764,6 @@ if test "$skip_meson" = no; then
   test -n "$cxx" && echo "cpp = [$(meson_quote $cxx $CPU_CFLAGS)]" >> $cross
   test -n "$objcc" && echo "objc = [$(meson_quote $objcc $CPU_CFLAGS)]" >> $cross
   echo "ar = [$(meson_quote $ar)]" >> $cross
-  echo "dlltool = [$(meson_quote $dlltool)]" >> $cross
   echo "nm = [$(meson_quote $nm)]" >> $cross
   echo "pkgconfig = [$(meson_quote $pkg_config)]" >> $cross
   echo "pkg-config = [$(meson_quote $pkg_config)]" >> $cross
@@ -1877,7 +1869,6 @@ preserve_env CC
 preserve_env CFLAGS
 preserve_env CXX
 preserve_env CXXFLAGS
-preserve_env DLLTOOL
 preserve_env LD
 preserve_env LDFLAGS
 preserve_env LD_LIBRARY_PATH


@@ -12,18 +12,15 @@ amd.com AMD
 aspeedtech.com ASPEED Technology Inc.
 baidu.com Baidu
 bytedance.com ByteDance
-cestc.cn Cestc
 cmss.chinamobile.com China Mobile
 citrix.com Citrix
 crudebyte.com Crudebyte
 chinatelecom.cn China Telecom
-daynix.com Daynix
 eldorado.org.br Instituto de Pesquisas Eldorado
 fb.com Facebook
 fujitsu.com Fujitsu
 google.com Google
 greensocs.com GreenSocs
-hisilicon.com Huawei
 huawei.com Huawei
 ibm.com IBM
 igalia.com Igalia
@@ -41,7 +38,6 @@ proxmox.com Proxmox
 quicinc.com Qualcomm Innovation Center
 redhat.com Red Hat
 rev.ng rev.ng Labs
-rivosinc.com Rivos Inc
 rt-rk.com RT-RK
 samsung.com Samsung
 siemens.com Siemens


@@ -17,25 +17,12 @@ NAMES += execlog
 NAMES += hotblocks
 NAMES += hotpages
 NAMES += howvec
-# The lockstep example communicates using unix sockets,
-# and can't be easily made to work on windows.
-ifneq ($(CONFIG_WIN32),y)
 NAMES += lockstep
-endif
 NAMES += hwprofile
 NAMES += cache
 NAMES += drcov
-ifeq ($(CONFIG_WIN32),y)
-SO_SUFFIX := .dll
-LDLIBS += $(shell $(PKG_CONFIG) --libs glib-2.0)
-else
-SO_SUFFIX := .so
-endif
-
-SONAMES := $(addsuffix $(SO_SUFFIX),$(addprefix lib,$(NAMES)))
+SONAMES := $(addsuffix .so,$(addprefix lib,$(NAMES)))

 # The main QEMU uses Glib extensively so it's perfectly fine to use it
 # in plugins (which many example do).
@@ -48,20 +35,15 @@ all: $(SONAMES)
 %.o: %.c
 	$(CC) $(CFLAGS) $(PLUGIN_CFLAGS) -c -o $@ $<
-ifeq ($(CONFIG_WIN32),y)
-lib%$(SO_SUFFIX): %.o win32_linker.o ../../plugins/qemu_plugin_api.lib
-	$(CC) -shared -o $@ $^ $(LDLIBS)
-else ifeq ($(CONFIG_DARWIN),y)
-lib%$(SO_SUFFIX): %.o
+lib%.so: %.o
+ifeq ($(CONFIG_DARWIN),y)
 	$(CC) -bundle -Wl,-undefined,dynamic_lookup -o $@ $^ $(LDLIBS)
 else
-lib%$(SO_SUFFIX): %.o
 	$(CC) -shared -o $@ $^ $(LDLIBS)
 endif

 clean:
-	rm -f *.o *$(SO_SUFFIX) *.d
+	rm -f *.o *.so *.d
 	rm -Rf .libs

 .PHONY: all clean


@@ -1,34 +0,0 @@
-/*
- * Copyright (C) 2023, Greg Manning <gmanning@rapitasystems.com>
- *
- * This hook, __pfnDliFailureHook2, is documented in the microsoft documentation here:
- * https://learn.microsoft.com/en-us/cpp/build/reference/error-handling-and-notification
- * It gets called when a delay-loaded DLL encounters various errors.
- * We handle the specific case of a DLL looking for a "qemu.exe",
- * and give it the running executable (regardless of what it is named).
- *
- * This work is licensed under the terms of the GNU LGPL, version 2 or later.
- * See the COPYING.LIB file in the top-level directory.
- */
-#include <windows.h>
-#include <delayimp.h>
-
-FARPROC WINAPI dll_failure_hook(unsigned dliNotify, PDelayLoadInfo pdli);
-PfnDliHook __pfnDliFailureHook2 = dll_failure_hook;
-
-FARPROC WINAPI dll_failure_hook(unsigned dliNotify, PDelayLoadInfo pdli) {
-    if (dliNotify == dliFailLoadLib) {
-        /* If the failing request was for qemu.exe, ... */
-        if (strcmp(pdli->szDll, "qemu.exe") == 0) {
-            /* Then pass back a pointer to the top level module. */
-            HMODULE top = GetModuleHandle(NULL);
-            return (FARPROC) top;
-        }
-    }
-    /* Otherwise we can't do anything special. */
-    return 0;
-}


@@ -73,7 +73,7 @@ static int cpu_get_free_index(void)
     return max_cpu_index;
 }

-CPUTailQ cpus_queue = QTAILQ_HEAD_INITIALIZER(cpus_queue);
+CPUTailQ cpus = QTAILQ_HEAD_INITIALIZER(cpus);
 static unsigned int cpu_list_generation_id;

 unsigned int cpu_list_generation_id_get(void)
@@ -90,7 +90,7 @@ void cpu_list_add(CPUState *cpu)
     } else {
         assert(!cpu_index_auto_assigned);
     }
-    QTAILQ_INSERT_TAIL_RCU(&cpus_queue, cpu, node);
+    QTAILQ_INSERT_TAIL_RCU(&cpus, cpu, node);
     cpu_list_generation_id++;
 }
@@ -102,7 +102,7 @@ void cpu_list_remove(CPUState *cpu)
         return;
     }
-    QTAILQ_REMOVE_RCU(&cpus_queue, cpu, node);
+    QTAILQ_REMOVE_RCU(&cpus, cpu, node);
     cpu->cpu_index = UNASSIGNED_CPU_INDEX;
     cpu_list_generation_id++;
 }


@@ -42,6 +42,7 @@
 #include "hw/core/accel-cpu.h"
 #include "trace/trace-root.h"
 #include "qemu/accel.h"
+#include "qemu/plugin.h"

 uintptr_t qemu_host_page_size;
 intptr_t qemu_host_page_mask;
@@ -130,18 +131,23 @@ const VMStateDescription vmstate_cpu_common = {
 };
 #endif

-bool cpu_exec_realizefn(CPUState *cpu, Error **errp)
+void cpu_exec_realizefn(CPUState *cpu, Error **errp)
 {
     /* cache the cpu class for the hotpath */
     cpu->cc = CPU_GET_CLASS(cpu);
     if (!accel_cpu_common_realize(cpu, errp)) {
-        return false;
+        return;
     }
     /* Wait until cpu initialization complete before exposing cpu. */
     cpu_list_add(cpu);
+
+    /* Plugin initialization must wait until cpu_index assigned. */
+    if (tcg_enabled()) {
+        qemu_plugin_vcpu_init_hook(cpu);
+    }
 #ifdef CONFIG_USER_ONLY
     assert(qdev_get_vmsd(DEVICE(cpu)) == NULL ||
            qdev_get_vmsd(DEVICE(cpu))->unmigratable);
@@ -153,8 +159,6 @@ bool cpu_exec_realizefn(CPUState *cpu, Error **errp)
         vmstate_register(NULL, cpu->cpu_index, cpu->cc->sysemu_ops->legacy_vmsd, cpu);
     }
 #endif /* CONFIG_USER_ONLY */
-
-    return true;
 }

 void cpu_exec_unrealizefn(CPUState *cpu)
@@ -170,6 +174,11 @@ void cpu_exec_unrealizefn(CPUState *cpu)
     }
 #endif

+    /* Call the plugin hook before clearing cpu->cpu_index in cpu_list_remove */
+    if (tcg_enabled()) {
+        qemu_plugin_vcpu_exit_hook(cpu);
+    }
     cpu_list_remove(cpu);
     /*
      * Now that the vCPU has been removed from the RCU list, we can call


@@ -88,13 +88,15 @@ static QCryptoAkCipherRSAKey *qcrypto_builtin_rsa_public_key_parse(
         goto error;
     }
     if (seq_length != 0) {
-        error_setg(errp, "Invalid RSA public key");
         goto error;
     }

     return rsa;

 error:
+    if (errp && !*errp) {
+        error_setg(errp, "Invalid RSA public key");
+    }
     qcrypto_akcipher_rsakey_free(rsa);
     return NULL;
 }
@@ -167,13 +169,15 @@ static QCryptoAkCipherRSAKey *qcrypto_builtin_rsa_private_key_parse(
         return rsa;
     }
     if (seq_length != 0) {
-        error_setg(errp, "Invalid RSA private key");
         goto error;
     }
     return rsa;

 error:
+    if (errp && !*errp) {
+        error_setg(errp, "Invalid RSA private key");
+    }
     qcrypto_akcipher_rsakey_free(rsa);
     return NULL;
 }
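The hunk above centralizes a generic fallback message at the `error:` label while keeping more specific messages at their failure sites, setting the fallback only if no error was set yet. A minimal self-contained C sketch of that pattern, using a hypothetical `Error` type and a hypothetical `parse_key` function rather than QEMU's actual API:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for QEMU's Error object. */
typedef struct Error { char msg[128]; } Error;

/* Set an error only if the caller wants one and none is set yet. */
static void error_setg(Error **errp, const char *msg)
{
    if (!errp || *errp) {
        return;
    }
    *errp = malloc(sizeof(Error));
    snprintf((*errp)->msg, sizeof((*errp)->msg), "%s", msg);
}

/* Reports the first failure precisely; the error label supplies a
 * generic message only when nothing more specific was recorded. */
static int parse_key(int trailing_bytes, int bad_field, Error **errp)
{
    if (bad_field) {
        error_setg(errp, "Invalid RSA public key: bad field");
        goto error;
    }
    if (trailing_bytes != 0) {
        goto error;   /* no message here; fallback fires below */
    }
    return 0;

error:
    if (errp && !*errp) {
        error_setg(errp, "Invalid RSA public key");
    }
    return -1;
}
```

The `if (errp && !*errp)` guard is what makes the fallback safe: a specific message set earlier is never overwritten, and callers passing `errp == NULL` are tolerated.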


@@ -862,47 +862,6 @@ typedef enum {
     rv_op_fltq_q = 831,
     rv_op_fleq_h = 832,
     rv_op_fltq_h = 833,
-    rv_op_vaesdf_vv = 834,
-    rv_op_vaesdf_vs = 835,
-    rv_op_vaesdm_vv = 836,
-    rv_op_vaesdm_vs = 837,
-    rv_op_vaesef_vv = 838,
-    rv_op_vaesef_vs = 839,
-    rv_op_vaesem_vv = 840,
-    rv_op_vaesem_vs = 841,
-    rv_op_vaeskf1_vi = 842,
-    rv_op_vaeskf2_vi = 843,
-    rv_op_vaesz_vs = 844,
-    rv_op_vandn_vv = 845,
-    rv_op_vandn_vx = 846,
-    rv_op_vbrev_v = 847,
-    rv_op_vbrev8_v = 848,
-    rv_op_vclmul_vv = 849,
-    rv_op_vclmul_vx = 850,
-    rv_op_vclmulh_vv = 851,
-    rv_op_vclmulh_vx = 852,
-    rv_op_vclz_v = 853,
-    rv_op_vcpop_v = 854,
-    rv_op_vctz_v = 855,
-    rv_op_vghsh_vv = 856,
-    rv_op_vgmul_vv = 857,
-    rv_op_vrev8_v = 858,
-    rv_op_vrol_vv = 859,
-    rv_op_vrol_vx = 860,
-    rv_op_vror_vv = 861,
-    rv_op_vror_vx = 862,
-    rv_op_vror_vi = 863,
-    rv_op_vsha2ch_vv = 864,
-    rv_op_vsha2cl_vv = 865,
-    rv_op_vsha2ms_vv = 866,
-    rv_op_vsm3c_vi = 867,
-    rv_op_vsm3me_vv = 868,
-    rv_op_vsm4k_vi = 869,
-    rv_op_vsm4r_vv = 870,
-    rv_op_vsm4r_vs = 871,
-    rv_op_vwsll_vv = 872,
-    rv_op_vwsll_vx = 873,
-    rv_op_vwsll_vi = 874,
 } rv_op;

 /* register names */
@@ -2049,47 +2008,6 @@ const rv_opcode_data rvi_opcode_data[] = {
     { "fltq.q", rv_codec_r, rv_fmt_rd_frs1_frs2, NULL, 0, 0, 0 },
     { "fleq.h", rv_codec_r, rv_fmt_rd_frs1_frs2, NULL, 0, 0, 0 },
     { "fltq.h", rv_codec_r, rv_fmt_rd_frs1_frs2, NULL, 0, 0, 0 },
-    { "vaesdf.vv", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
-    { "vaesdf.vs", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
-    { "vaesdm.vv", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
-    { "vaesdm.vs", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
-    { "vaesef.vv", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
-    { "vaesef.vs", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
-    { "vaesem.vv", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
-    { "vaesem.vs", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
-    { "vaeskf1.vi", rv_codec_v_i, rv_fmt_vd_vs2_uimm, NULL, 0, 0, 0 },
-    { "vaeskf2.vi", rv_codec_v_i, rv_fmt_vd_vs2_uimm, NULL, 0, 0, 0 },
-    { "vaesz.vs", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
-    { "vandn.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1_vm, NULL, 0, 0, 0 },
-    { "vandn.vx", rv_codec_v_r, rv_fmt_vd_vs2_rs1_vm, NULL, 0, 0, 0 },
-    { "vbrev.v", rv_codec_v_r, rv_fmt_vd_vs2_vm, NULL, 0, 0, 0 },
-    { "vbrev8.v", rv_codec_v_r, rv_fmt_vd_vs2_vm, NULL, 0, 0, 0 },
-    { "vclmul.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1_vm, NULL, 0, 0, 0 },
-    { "vclmul.vx", rv_codec_v_r, rv_fmt_vd_vs2_rs1_vm, NULL, 0, 0, 0 },
-    { "vclmulh.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1_vm, NULL, 0, 0, 0 },
-    { "vclmulh.vx", rv_codec_v_r, rv_fmt_vd_vs2_rs1_vm, NULL, 0, 0, 0 },
-    { "vclz.v", rv_codec_v_r, rv_fmt_vd_vs2_vm, NULL, 0, 0, 0 },
-    { "vcpop.v", rv_codec_v_r, rv_fmt_vd_vs2_vm, NULL, 0, 0, 0 },
-    { "vctz.v", rv_codec_v_r, rv_fmt_vd_vs2_vm, NULL, 0, 0, 0 },
-    { "vghsh.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1, NULL, 0, 0, 0 },
-    { "vgmul.vv", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
-    { "vrev8.v", rv_codec_v_r, rv_fmt_vd_vs2_vm, NULL, 0, 0, 0 },
-    { "vrol.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1_vm, NULL, 0, 0, 0 },
-    { "vrol.vx", rv_codec_v_r, rv_fmt_vd_vs2_rs1_vm, NULL, 0, 0, 0 },
-    { "vror.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1_vm, NULL, 0, 0, 0 },
-    { "vror.vx", rv_codec_v_r, rv_fmt_vd_vs2_rs1_vm, NULL, 0, 0, 0 },
-    { "vror.vi", rv_codec_vror_vi, rv_fmt_vd_vs2_uimm_vm, NULL, 0, 0, 0 },
-    { "vsha2ch.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1, NULL, 0, 0, 0 },
-    { "vsha2cl.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1, NULL, 0, 0, 0 },
-    { "vsha2ms.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1, NULL, 0, 0, 0 },
-    { "vsm3c.vi", rv_codec_v_i, rv_fmt_vd_vs2_uimm, NULL, 0, 0, 0 },
-    { "vsm3me.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1, NULL, 0, 0, 0 },
-    { "vsm4k.vi", rv_codec_v_i, rv_fmt_vd_vs2_uimm, NULL, 0, 0, 0 },
-    { "vsm4r.vv", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
-    { "vsm4r.vs", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
-    { "vwsll.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1_vm, NULL, 0, 0, 0 },
-    { "vwsll.vx", rv_codec_v_r, rv_fmt_vd_vs2_rs1_vm, NULL, 0, 0, 0 },
-    { "vwsll.vi", rv_codec_v_i, rv_fmt_vd_vs2_uimm_vm, NULL, 0, 0, 0 },
 };

 /* CSR names */
@@ -3136,12 +3054,12 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
             }
             break;
         case 89:
             switch (((inst >> 12) & 0b111)) {
             case 0: op = rv_op_fmvp_d_x; break;
             }
             break;
         case 91:
             switch (((inst >> 12) & 0b111)) {
             case 0: op = rv_op_fmvp_q_x; break;
             }
             break;
@@ -3258,7 +3176,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
         case 0:
             switch ((inst >> 26) & 0b111111) {
             case 0: op = rv_op_vadd_vv; break;
-            case 1: op = rv_op_vandn_vv; break;
             case 2: op = rv_op_vsub_vv; break;
             case 4: op = rv_op_vminu_vv; break;
             case 5: op = rv_op_vmin_vv; break;
@@ -3281,8 +3198,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
                 }
                 break;
             case 19: op = rv_op_vmsbc_vvm; break;
-            case 20: op = rv_op_vror_vv; break;
-            case 21: op = rv_op_vrol_vv; break;
             case 23:
                 if (((inst >> 20) & 0b111111) == 32)
                     op = rv_op_vmv_v_v;
@@ -3311,7 +3226,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
             case 47: op = rv_op_vnclip_wv; break;
             case 48: op = rv_op_vwredsumu_vs; break;
             case 49: op = rv_op_vwredsum_vs; break;
-            case 53: op = rv_op_vwsll_vv; break;
             }
             break;
         case 1:
@@ -3409,8 +3323,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
             case 9: op = rv_op_vaadd_vv; break;
             case 10: op = rv_op_vasubu_vv; break;
             case 11: op = rv_op_vasub_vv; break;
-            case 12: op = rv_op_vclmul_vv; break;
-            case 13: op = rv_op_vclmulh_vv; break;
             case 16:
                 switch ((inst >> 15) & 0b11111) {
                 case 0: if ((inst >> 25) & 1) op = rv_op_vmv_x_s; break;
@@ -3426,12 +3338,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
                 case 5: op = rv_op_vsext_vf4; break;
                 case 6: op = rv_op_vzext_vf2; break;
                 case 7: op = rv_op_vsext_vf2; break;
-                case 8: op = rv_op_vbrev8_v; break;
-                case 9: op = rv_op_vrev8_v; break;
-                case 10: op = rv_op_vbrev_v; break;
-                case 12: op = rv_op_vclz_v; break;
-                case 13: op = rv_op_vctz_v; break;
-                case 14: op = rv_op_vcpop_v; break;
                 }
                 break;
             case 20:
@@ -3500,7 +3406,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
                 }
                 break;
             case 17: op = rv_op_vmadc_vim; break;
-            case 20: case 21: op = rv_op_vror_vi; break;
             case 23:
                 if (((inst >> 20) & 0b111111) == 32)
                     op = rv_op_vmv_v_i;
@@ -3532,13 +3437,11 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
             case 45: op = rv_op_vnsra_wi; break;
             case 46: op = rv_op_vnclipu_wi; break;
             case 47: op = rv_op_vnclip_wi; break;
-            case 53: op = rv_op_vwsll_vi; break;
             }
             break;
         case 4:
             switch ((inst >> 26) & 0b111111) {
             case 0: op = rv_op_vadd_vx; break;
-            case 1: op = rv_op_vandn_vx; break;
             case 2: op = rv_op_vsub_vx; break;
             case 3: op = rv_op_vrsub_vx; break;
             case 4: op = rv_op_vminu_vx; break;
@@ -3563,8 +3466,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
                 }
                 break;
             case 19: op = rv_op_vmsbc_vxm; break;
-            case 20: op = rv_op_vror_vx; break;
-            case 21: op = rv_op_vrol_vx; break;
             case 23:
                 if (((inst >> 20) & 0b111111) == 32)
                     op = rv_op_vmv_v_x;
@@ -3593,7 +3494,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
             case 45: op = rv_op_vnsra_wx; break;
             case 46: op = rv_op_vnclipu_wx; break;
             case 47: op = rv_op_vnclip_wx; break;
-            case 53: op = rv_op_vwsll_vx; break;
             }
             break;
         case 5:
@@ -3654,8 +3554,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
             case 9: op = rv_op_vaadd_vx; break;
            case 10: op = rv_op_vasubu_vx; break;
             case 11: op = rv_op_vasub_vx; break;
-            case 12: op = rv_op_vclmul_vx; break;
-            case 13: op = rv_op_vclmulh_vx; break;
             case 14: op = rv_op_vslide1up_vx; break;
             case 15: op = rv_op_vslide1down_vx; break;
             case 16:
@@ -3788,41 +3686,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
case 7: op = rv_op_csrrci; break; case 7: op = rv_op_csrrci; break;
} }
break; break;
case 29:
if (((inst >> 25) & 1) == 1 && ((inst >> 12) & 0b111) == 2) {
switch ((inst >> 26) & 0b111111) {
case 32: op = rv_op_vsm3me_vv; break;
case 33: op = rv_op_vsm4k_vi; break;
case 34: op = rv_op_vaeskf1_vi; break;
case 40:
switch ((inst >> 15) & 0b11111) {
case 0: op = rv_op_vaesdm_vv; break;
case 1: op = rv_op_vaesdf_vv; break;
case 2: op = rv_op_vaesem_vv; break;
case 3: op = rv_op_vaesef_vv; break;
case 16: op = rv_op_vsm4r_vv; break;
case 17: op = rv_op_vgmul_vv; break;
}
break;
case 41:
switch ((inst >> 15) & 0b11111) {
case 0: op = rv_op_vaesdm_vs; break;
case 1: op = rv_op_vaesdf_vs; break;
case 2: op = rv_op_vaesem_vs; break;
case 3: op = rv_op_vaesef_vs; break;
case 7: op = rv_op_vaesz_vs; break;
case 16: op = rv_op_vsm4r_vs; break;
}
break;
case 42: op = rv_op_vaeskf2_vi; break;
case 43: op = rv_op_vsm3c_vi; break;
case 44: op = rv_op_vghsh_vv; break;
case 45: op = rv_op_vsha2ms_vv; break;
case 46: op = rv_op_vsha2ch_vv; break;
case 47: op = rv_op_vsha2cl_vv; break;
}
}
break;
case 30:
switch (((inst >> 22) & 0b1111111000) |
((inst >> 12) & 0b0000000111)) {
@@ -4148,12 +4011,6 @@ static uint32_t operand_vzimm10(rv_inst inst)
return (inst << 34) >> 54;
}
static uint32_t operand_vzimm6(rv_inst inst)
{
return ((inst << 37) >> 63) << 5 |
((inst << 44) >> 59);
}
static uint32_t operand_bs(rv_inst inst)
{
return (inst << 32) >> 62;
@@ -4536,12 +4393,6 @@ static void decode_inst_operands(rv_decode *dec, rv_isa isa)
dec->imm = operand_vimm(inst);
dec->vm = operand_vm(inst);
break;
case rv_codec_vror_vi:
dec->rd = operand_rd(inst);
dec->rs2 = operand_rs2(inst);
dec->imm = operand_vzimm6(inst);
dec->vm = operand_vm(inst);
break;
case rv_codec_vsetvli:
dec->rd = operand_rd(inst);
dec->rs1 = operand_rs1(inst);
@@ -4579,7 +4430,7 @@ static void decode_inst_operands(rv_decode *dec, rv_isa isa)
break;
case rv_codec_zcmt_jt:
dec->imm = operand_tbl_index(inst);
break;
case rv_codec_fli:
dec->rd = operand_rd(inst);
dec->imm = operand_rs1(inst);
@@ -4826,7 +4677,7 @@ static void format_inst(char *buf, size_t buflen, size_t tab, rv_decode *dec)
append(buf, tmp, buflen);
break;
case 'u':
snprintf(tmp, sizeof(tmp), "%u", ((uint32_t)dec->imm & 0b111111)); snprintf(tmp, sizeof(tmp), "%u", ((uint32_t)dec->imm & 0b11111));
append(buf, tmp, buflen);
break;
case 'j':


@@ -152,7 +152,6 @@ typedef enum {
rv_codec_v_i,
rv_codec_vsetvli,
rv_codec_vsetivli,
rv_codec_vror_vi,
rv_codec_zcb_ext,
rv_codec_zcb_mul,
rv_codec_zcb_lb,
@@ -275,7 +274,6 @@ enum {
#define rv_fmt_vd_vs2_fs1_vm "O\tD,F,4m"
#define rv_fmt_vd_vs2_imm_vl "O\tD,F,il"
#define rv_fmt_vd_vs2_imm_vm "O\tD,F,im"
#define rv_fmt_vd_vs2_uimm "O\tD,F,u"
#define rv_fmt_vd_vs2_uimm_vm "O\tD,F,um"
#define rv_fmt_vd_vs1_vs2_vm "O\tD,E,Fm"
#define rv_fmt_vd_rs1_vs2_vm "O\tD,1,Fm"


@@ -413,18 +413,6 @@ Specifying the iSCSI password in plain text on the command line using the
used instead, to refer to a ``--object secret...`` instance that provides
a password via a file, or encrypted.
CPU device properties
'''''''''''''''''''''
``pmu-num=n`` on RISC-V CPUs (since 8.2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In order to support more flexible counter configurations this has been replaced
by a ``pmu-mask`` property. If the set of counters is contiguous then the mask
can be calculated with ``((2 ^ n) - 1) << 3``. The least significant three bits
must be left clear.
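The mask arithmetic in the note can be checked with a few lines of Python (the ``^`` in the note denotes exponentiation, not XOR; the helper name is ours):

```python
def pmu_mask(n):
    """pmu-mask value for n contiguous counters: ((2 ** n) - 1) << 3.
    Shifting left by 3 keeps the three fixed bits (0-2) clear."""
    return ((1 << n) - 1) << 3

# e.g. four contiguous programmable counters:
assert pmu_mask(4) == 0x78   # 0b1111000, least significant three bits clear
```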
Backwards compatibility
-----------------------


@@ -11,7 +11,6 @@ generated from in-code annotations to function prototypes.
loads-stores
memory
modules
pci
qom-api
qdev-api
ui


@@ -572,6 +572,27 @@ Others (especially either older devices or system devices which for
some reason don't have a bus concept) make use of the ``instance id``
for otherwise identically named devices.
Fixed-ram format
----------------
When the ``fixed-ram`` capability is enabled, a slightly different
stream format is used for the RAM section. Instead of having a
sequential stream of pages that follow the RAMBlock headers, the dirty
pages for a RAMBlock follow its header. This ensures that each RAM
page has a fixed offset in the resulting migration file.
The ``fixed-ram`` capability must be enabled in both source and
destination with:
``migrate_set_capability fixed-ram on``
Since pages are written to their relative offsets and out of order
(due to the memory dirtying patterns), streaming channels such as
sockets are not supported. A seekable channel such as a file is
required. This can be verified in the QIOChannel by the presence of
the QIO_CHANNEL_FEATURE_SEEKABLE feature. In more practical terms, this
migration format requires the ``file:`` URI when migrating.
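As a rough illustration of why a seekable channel is needed (names and the layout helper are hypothetical, not QEMU's actual on-disk format), a page's fixed location can be derived from its RAMBlock's position in the file:

```python
PAGE_SIZE = 4096  # assumed page size for this illustration

def page_file_offset(ramblock_file_offset, page_index, page_size=PAGE_SIZE):
    """With fixed-ram, a dirty page is always written at the same offset
    relative to its RAMBlock's region in the file, so a page that is
    dirtied again simply overwrites its earlier copy in place; a
    streaming (non-seekable) channel cannot do that."""
    return ramblock_file_offset + page_index * page_size
```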
Return path
-----------
@@ -594,77 +615,6 @@ path.
Return path - opened by main thread, written by main thread AND postcopy
thread (protected by rp_mutex)
Dirty limit
=====================
The dirty limit, short for dirty page rate upper limit, is a new capability
introduced in the 8.1 QEMU release that uses a new algorithm based on the KVM
dirty ring to throttle down the guest during live migration.
The algorithm framework is as follows:
::
------------------------------------------------------------------------------
main --------------> throttle thread ------------> PREPARE(1) <--------
thread \ | |
\ | |
\ V |
-\ CALCULATE(2) |
\ | |
\ | |
\ V |
\ SET PENALTY(3) -----
-\ |
\ |
\ V
-> virtual CPU thread -------> ACCEPT PENALTY(4)
------------------------------------------------------------------------------
When the qmp command qmp_set_vcpu_dirty_limit is called for the first time,
the QEMU main thread starts the throttle thread. The throttle thread, once
launched, executes the loop, which consists of three steps:
- PREPARE (1)
The entire work of PREPARE (1) is preparation for the second stage,
CALCULATE(2), as the name implies. It involves preparing the dirty
page rate value and the corresponding upper limit of the VM:
The dirty page rate is calculated via the KVM dirty ring mechanism,
which tells QEMU how many dirty pages a virtual CPU has had since the
last KVM_EXIT_DIRTY_RING_FULL exception; the dirty page rate upper
limit is specified by the caller, so it is fetched directly.
- CALCULATE (2)
Calculate a suitable sleep period for each virtual CPU, which will be
used to determine the penalty for the target virtual CPU. The
computation must be done carefully in order to reduce the dirty page
rate progressively down to the upper limit without oscillation. To
achieve this, two strategies are provided: the first is to add or
subtract sleep time based on the ratio of the current dirty page rate
to the limit, which is used when the current dirty page rate is far
from the limit; the second is to add or subtract a fixed time when
the current dirty page rate is close to the limit.
- SET PENALTY (3)
Set the sleep time for each virtual CPU that should be penalized based
on the results of the calculation supplied by step CALCULATE (2).
After completing the three above stages, the throttle thread loops back
to step PREPARE (1) until the dirty limit is reached.
On the other hand, each virtual CPU thread reads the sleep duration and
sleeps in the path of the KVM_EXIT_DIRTY_RING_FULL exception handler, that
is ACCEPT PENALTY (4). Virtual CPUs running write-heavy workloads will
naturally exit to this path and be penalized, whereas virtual CPUs doing
mostly reads will not.
In summary, thanks to the KVM dirty ring technology, the dirty limit
algorithm will restrict virtual CPUs as needed to keep their dirty page
rate inside the limit. This leads to more steady reading performance during
live migration and can aid in improving large guest responsiveness.
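A simplified model of the CALCULATE (2) step might look like the following (thresholds and step sizes are invented for illustration; the real heuristics live in QEMU's dirty-limit code):

```python
FIXED_STEP_US = 100   # small fixed adjustment near the limit (made up)
FAR = 1.5             # "far from the limit" ratio threshold (made up)

def next_sleep_us(sleep_us, rate, limit):
    """One CALCULATE(2) iteration: scale the vCPU sleep time by the
    rate/limit ratio while far from the limit, and nudge it by a fixed
    step when close, so the rate converges without oscillation."""
    if rate > limit:
        if rate > limit * FAR:
            return max(int(sleep_us * rate / limit), FIXED_STEP_US)
        return sleep_us + FIXED_STEP_US
    if rate < limit / FAR and sleep_us > 0:
        return int(sleep_us * rate / limit)
    return max(sleep_us - FIXED_STEP_US, 0)
```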
Postcopy
========


@@ -1,8 +0,0 @@
=============
PCI subsystem
=============
API Reference
-------------
.. kernel-doc:: include/hw/pci/pci.h


@@ -108,43 +108,6 @@ A vring state description
:num: a 32-bit number
A vring descriptor index for split virtqueues
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-------------+---------------------+
| vring index | index in avail ring |
+-------------+---------------------+
:vring index: 32-bit index of the respective virtqueue
:index in avail ring: 32-bit value, of which currently only the lower 16
bits are used:
- Bits 0–15: Index of the next *Available Ring* descriptor that the
back-end will process. This is a free-running index that is not
wrapped by the ring size.
- Bits 16–31: Reserved (set to zero)
Vring descriptor indices for packed virtqueues
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-------------+--------------------+
| vring index | descriptor indices |
+-------------+--------------------+
:vring index: 32-bit index of the respective virtqueue
:descriptor indices: 32-bit value:
- Bits 0–14: Index of the next *Available Ring* descriptor that the
back-end will process. This is a free-running index that is not
wrapped by the ring size.
- Bit 15: Driver (Available) Ring Wrap Counter
- Bits 16–30: Index of the entry in the *Used Ring* where the back-end
will place the next descriptor. This is a free-running index that
is not wrapped by the ring size.
- Bit 31: Device (Used) Ring Wrap Counter
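The two payload layouts above can be packed like this (a sketch; the helper names are ours, not part of the protocol):

```python
def split_index_payload(next_avail):
    """Split virtqueue: bits 0-15 carry the free-running next
    Available Ring index; bits 16-31 are reserved as zero."""
    return next_avail & 0xFFFF

def packed_indices_payload(next_avail, avail_wrap, next_used, used_wrap):
    """Packed virtqueue: bits 0-14 avail index, bit 15 avail wrap
    counter, bits 16-30 used index, bit 31 used wrap counter."""
    return ((next_avail & 0x7FFF)
            | (avail_wrap & 1) << 15
            | (next_used & 0x7FFF) << 16
            | (used_wrap & 1) << 31)
```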
A vring address description
^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -322,32 +285,6 @@ VhostUserShared
:UUID: 16 bytes UUID, whose first three components (a 32-bit value, then
two 16-bit values) are stored in big endian.
Device state transfer parameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+--------------------+-----------------+
| transfer direction | migration phase |
+--------------------+-----------------+
:transfer direction: a 32-bit enum, describing the direction in which
the state is transferred:
- 0: Save: Transfer the state from the back-end to the front-end,
which happens on the source side of migration
- 1: Load: Transfer the state from the front-end to the back-end,
which happens on the destination side of migration
:migration phase: a 32-bit enum, describing the state in which the VM
guest and devices are:
- 0: Stopped (in the period after the transfer of memory-mapped
regions before switch-over to the destination): The VM guest is
stopped, and the vhost-user device is suspended (see
:ref:`Suspended device state <suspended_device_state>`).
In the future, additional phases might be added e.g. to allow
iterative migration while the device is running.
C structure
-----------
@@ -407,7 +344,6 @@ in the ancillary data:
* ``VHOST_USER_SET_VRING_ERR``
* ``VHOST_USER_SET_BACKEND_REQ_FD`` (previous name ``VHOST_USER_SET_SLAVE_REQ_FD``)
* ``VHOST_USER_SET_INFLIGHT_FD`` (if ``VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD``)
* ``VHOST_USER_SET_DEVICE_STATE_FD``
If *front-end* is unable to send the full message or receives a wrong
reply it will close the connection. An optional reconnection mechanism
@@ -438,50 +374,35 @@ negotiation.
Ring states
-----------
Rings have two independent states: started/stopped, and enabled/disabled. Rings can be in one of three states:
* While a ring is stopped, the back-end must not process the ring at * stopped: the back-end must not process the ring at all.
all, regardless of whether it is enabled or disabled. The
enabled/disabled state should still be tracked, though, so it can come
into effect once the ring is started.
* started and disabled: The back-end must process the ring without * started but disabled: the back-end must process the ring without
causing any side effects. For example, for a networking device, causing any side effects. For example, for a networking device,
in the disabled state the back-end must not supply any new RX packets, in the disabled state the back-end must not supply any new RX packets,
but must process and discard any TX packets. but must process and discard any TX packets.
* started and enabled: The back-end must process the ring normally, i.e. * started and enabled.
process all requests and execute them.
Each ring is initialized in a stopped and disabled state. The back-end Each ring is initialized in a stopped state. The back-end must start
must start a ring upon receiving a kick (that is, detecting that file ring upon receiving a kick (that is, detecting that file descriptor is
descriptor is readable) on the descriptor specified by readable) on the descriptor specified by ``VHOST_USER_SET_VRING_KICK``
``VHOST_USER_SET_VRING_KICK`` or receiving the in-band message or receiving the in-band message ``VHOST_USER_VRING_KICK`` if negotiated,
``VHOST_USER_VRING_KICK`` if negotiated, and stop a ring upon receiving and stop ring upon receiving ``VHOST_USER_GET_VRING_BASE``.
``VHOST_USER_GET_VRING_BASE``.
Rings can be enabled or disabled by ``VHOST_USER_SET_VRING_ENABLE``. Rings can be enabled or disabled by ``VHOST_USER_SET_VRING_ENABLE``.
In addition, upon receiving a ``VHOST_USER_SET_FEATURES`` message from If ``VHOST_USER_F_PROTOCOL_FEATURES`` has not been negotiated, the
the front-end without ``VHOST_USER_F_PROTOCOL_FEATURES`` set, the ring starts directly in the enabled state.
back-end must enable all rings immediately.
If ``VHOST_USER_F_PROTOCOL_FEATURES`` has been negotiated, the ring is
initialized in a disabled state and is enabled by
``VHOST_USER_SET_VRING_ENABLE`` with parameter 1.
While processing the rings (whether they are enabled or not), the back-end
must support changing some configuration aspects on the fly.
.. _suspended_device_state:
Suspended device state
^^^^^^^^^^^^^^^^^^^^^^
While all vrings are stopped, the device is *suspended*. In addition to
not processing any vring (because they are stopped), the device must:
* not write to any guest memory regions,
* not send any notifications to the guest,
* not send any messages to the front-end,
* still process and reply to messages from the front-end.
Multiple queue support
----------------------
@@ -569,8 +490,7 @@ ancillary data, it may be used to inform the front-end that the log has
been modified.
Once the source has finished migration, rings will be stopped by the
source (:ref:`Suspended device state <suspended_device_state>`). No source. No further update must be done before rings are restarted.
further update must be done before rings are restarted.
In postcopy migration the back-end is started before all the memory has
been received from the source host, and care must be taken to avoid
@@ -582,80 +502,6 @@ it performs WAKE ioctl's on the userfaultfd to wake the stalled
back-end. The front-end indicates support for this via the
``VHOST_USER_PROTOCOL_F_PAGEFAULT`` feature.
.. _migrating_backend_state:
Migrating back-end state
^^^^^^^^^^^^^^^^^^^^^^^^
Migrating device state involves transferring the state from one
back-end, called the source, to another back-end, called the
destination. After migration, the destination transparently resumes
operation without requiring the driver to re-initialize the device at
the VIRTIO level. If the migration fails, then the source can
transparently resume operation until another migration attempt is made.
Generally, the front-end is connected to a virtual machine guest (which
contains the driver), which has its own state to transfer between source
and destination, and therefore will have an implementation-specific
mechanism to do so. The ``VHOST_USER_PROTOCOL_F_DEVICE_STATE`` feature
provides functionality to have the front-end include the back-end's
state in this transfer operation so the back-end does not need to
implement its own mechanism, and so the virtual machine may have its
complete state, including vhost-user devices' states, contained within a
single stream of data.
To do this, the back-end state is transferred from back-end to front-end
on the source side, and vice versa on the destination side. This
transfer happens over a channel that is negotiated using the
``VHOST_USER_SET_DEVICE_STATE_FD`` message. This message has two
parameters:
* Direction of transfer: On the source, the data is saved, transferring
it from the back-end to the front-end. On the destination, the data
is loaded, transferring it from the front-end to the back-end.
* Migration phase: Currently, the only supported phase is the period
after the transfer of memory-mapped regions before switch-over to the
destination, when both the source and destination devices are
suspended (:ref:`Suspended device state <suspended_device_state>`).
In the future, additional phases might be supported to allow iterative
migration while the device is running.
The nature of the channel is implementation-defined, but it must
generally behave like a pipe: The writing end will write all the data it
has into it, signalling the end of data by closing its end. The reading
end must read all of this data (until encountering the end of file) and
process it.
* When saving, the writing end is the source back-end, and the reading
end is the source front-end. After reading the state data from the
channel, the source front-end must transfer it to the destination
front-end through an implementation-defined mechanism.
* When loading, the writing end is the destination front-end, and the
reading end is the destination back-end. After reading the state data
from the channel, the destination back-end must deserialize its
internal state from that data and set itself up to allow the driver to
seamlessly resume operation on the VIRTIO level.
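The pipe-like contract described above (the writer writes everything and closes its end; the reader reads until end of file) can be modelled in a few lines; the payload here is a placeholder, not real device state:

```python
import os

def transfer_state(state: bytes) -> bytes:
    """Writer end: write all the data, then close to signal end of
    transfer. Reader end: read until EOF, then process what arrived."""
    r, w = os.pipe()
    os.write(w, state)   # fine for a small payload; real code would loop
    os.close(w)          # closing the writing end signals end of data
    chunks = []
    while chunk := os.read(r, 4096):
        chunks.append(chunk)
    os.close(r)
    return b"".join(chunks)
```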
Seamlessly resuming operation means that the migration must be
transparent to the guest driver, which operates on the VIRTIO level.
This driver will not perform any re-initialization steps, but continue
to use the device as if no migration had occurred. The vhost-user
front-end, however, will re-initialize the vhost state on the
destination, following the usual protocol for establishing a connection
to a vhost-user back-end: This includes, for example, setting up memory
mappings and kick and call FDs as necessary, negotiating protocol
features, or setting the initial vring base indices (to the same value
as on the source side, so that operation can resume).
Both on the source and on the destination side, after the respective
front-end has seen all data transferred (when the transfer FD has been
closed), it sends the ``VHOST_USER_CHECK_DEVICE_STATE`` message to
verify that data transfer was successful in the back-end, too. The
back-end responds once it knows whether the transfer and processing was
successful or not.
Memory access
-------------
@@ -1050,7 +896,6 @@ Protocol features
#define VHOST_USER_PROTOCOL_F_STATUS 16
#define VHOST_USER_PROTOCOL_F_XEN_MMAP 17
#define VHOST_USER_PROTOCOL_F_SHARED_OBJECT 18
#define VHOST_USER_PROTOCOL_F_DEVICE_STATE 19
Front-end message types
-----------------------
@@ -1197,54 +1042,18 @@ Front-end message types
``VHOST_USER_SET_VRING_BASE``
:id: 10
:equivalent ioctl: ``VHOST_SET_VRING_BASE``
:request payload: vring descriptor index/indices :request payload: vring state description
:reply payload: N/A
Sets the next index to use for descriptors in this vring: Sets the base offset in the available vring.
* For a split virtqueue, sets only the next descriptor index to
process in the *Available Ring*. The device is supposed to read the
next index in the *Used Ring* from the respective vring structure in
guest memory.
* For a packed virtqueue, both indices are supplied, as they are not
explicitly available in memory.
Consequently, the payload type is specific to the type of virt queue
(*a vring descriptor index for split virtqueues* vs. *vring descriptor
indices for packed virtqueues*).
``VHOST_USER_GET_VRING_BASE``
:id: 11
:equivalent ioctl: ``VHOST_USER_GET_VRING_BASE``
:request payload: vring state description
:reply payload: vring descriptor index/indices :reply payload: vring state description
Stops the vring and returns the current descriptor index or indices: Get the available vring base offset.
* For a split virtqueue, returns only the 16-bit next descriptor
index to process in the *Available Ring*. Note that this may
differ from the available ring index in the vring structure in
memory, which points to where the driver will put new available
descriptors. For the *Used Ring*, the device only needs the next
descriptor index at which to put new descriptors, which is the
value in the vring structure in memory, so this value is not
covered by this message.
* For a packed virtqueue, neither index is explicitly available to
read from memory, so both indices (as maintained by the device) are
returned.
Consequently, the payload type is specific to the type of virt queue
(*a vring descriptor index for split virtqueues* vs. *vring descriptor
indices for packed virtqueues*).
When and as long as all of a device's vrings are stopped, it is
*suspended*, see :ref:`Suspended device state
<suspended_device_state>`.
The request payload's *num* field is currently reserved and must be
set to 0.
``VHOST_USER_SET_VRING_KICK``
:id: 12
@@ -1655,76 +1464,6 @@ Front-end message types
the requested UUID. Back-end will reply passing the fd when the operation
is successful, or no fd otherwise.
``VHOST_USER_SET_DEVICE_STATE_FD``
:id: 42
:equivalent ioctl: N/A
:request payload: device state transfer parameters
:reply payload: ``u64``
Front-end and back-end negotiate a channel over which to transfer the
back-end's internal state during migration. Either side (front-end or
back-end) may create the channel. The nature of this channel is not
restricted or defined in this document, but whichever side creates it
must create a file descriptor that is provided to the respectively
other side, allowing access to the channel. This FD must behave as
follows:
* For the writing end, it must allow writing the whole back-end state
sequentially. Closing the file descriptor signals the end of
transfer.
* For the reading end, it must allow reading the whole back-end state
sequentially. The end of file signals the end of the transfer.
For example, the channel may be a pipe, in which case the two ends of
the pipe fulfill these requirements respectively.
Initially, the front-end creates a channel along with such an FD. It
passes the FD to the back-end as ancillary data of a
``VHOST_USER_SET_DEVICE_STATE_FD`` message. The back-end may create a
different transfer channel, passing the respective FD back to the
front-end as ancillary data of the reply. If so, the front-end must
then discard its channel and use the one provided by the back-end.
Whether the back-end should decide to use its own channel is decided
based on efficiency: If the channel is a pipe, both ends will most
likely need to copy data into and out of it. Any channel that allows
for more efficient processing on at least one end, e.g. through
zero-copy, is considered more efficient and thus preferred. If the
back-end can provide such a channel, it should decide to use it.
The request payload contains parameters for the subsequent data
transfer, as described in the :ref:`Migrating back-end state
<migrating_backend_state>` section.
The value returned is both an indication for success, and whether a
file descriptor for a back-end-provided channel is returned: Bits 0–7
are 0 on success, and non-zero on error. Bit 8 is the invalid FD
flag; this flag is set when there is no file descriptor returned.
When this flag is not set, the front-end must use the returned file
descriptor as its end of the transfer channel. The back-end must not
both indicate an error and return a file descriptor.
Using this function requires prior negotiation of the
``VHOST_USER_PROTOCOL_F_DEVICE_STATE`` feature.
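Decoding the returned ``u64`` per the bit layout above (the helper name is ours, not part of the protocol):

```python
def parse_device_state_fd_reply(value):
    """Bits 0-7: error code (0 = success); bit 8: invalid-FD flag,
    set when the back-end did not return a channel FD of its own."""
    error = value & 0xFF
    fd_returned = ((value >> 8) & 1) == 0
    return error, fd_returned
```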
``VHOST_USER_CHECK_DEVICE_STATE``
:id: 43
:equivalent ioctl: N/A
:request payload: N/A
:reply payload: ``u64``
After transferring the back-end's internal state during migration (see
the :ref:`Migrating back-end state <migrating_backend_state>`
section), check whether the back-end was able to successfully fully
process the state.
The value returned indicates success or error; 0 is success, any
non-zero value is an error.
Using this function requires prior negotiation of the
``VHOST_USER_PROTOCOL_F_DEVICE_STATE`` feature.
Back-end message types
----------------------


@@ -58,9 +58,6 @@ Other differences between the hardware and the QEMU model:
``vexpress-a15``, and have IRQs from 40 upwards. If a dtb is
provided on the command line then QEMU will edit it to include
suitable entries describing these transports for the guest.
- QEMU does not currently support either dynamic or static remapping
of the area of memory at address 0: it is always mapped to alias
the first flash bank
Booting a Linux kernel
----------------------


@@ -93,7 +93,6 @@ Emulated Devices
devices/vhost-user.rst
devices/virtio-gpu.rst
devices/virtio-pmem.rst
devices/virtio-snd.rst
devices/vhost-user-rng.rst
devices/canokey.rst
devices/usb-u2f.rst


@@ -1,49 +0,0 @@
virtio sound
============
This document explains the setup and usage of the Virtio sound device.
The Virtio sound device is a paravirtualized sound card device.
Linux kernel support
--------------------
Virtio sound requires a guest Linux kernel built with the
``CONFIG_SND_VIRTIO`` option.
Description
-----------
Virtio sound implements capture and playback from inside a guest using the
configured audio backend of the host machine.
Device properties
-----------------
The Virtio sound device can be configured with the following properties:
* ``jacks`` number of physical jacks (Unimplemented).
* ``streams`` number of PCM streams. At the moment, no stream configuration is supported: the first one will always be a playback stream, an optional second will always be a capture stream. Adding more will cycle stream directions from playback to capture.
* ``chmaps`` number of channel maps (Unimplemented).
All streams are stereo and have the default channel positions ``Front left, right``.
Examples
--------
Add an audio device and an audio backend at once with ``-audio`` and ``model=virtio``:
* pulseaudio: ``-audio driver=pa,model=virtio``
or ``-audio driver=pa,model=virtio,server=/run/user/1000/pulse/native``
* sdl: ``-audio driver=sdl,model=virtio``
* coreaudio: ``-audio driver=coreaudio,model=virtio``
etc.
To specifically add virtualized sound devices, you have to specify a PCI device
and an audio backend listed with ``-audio driver=help`` that works on your host
machine, e.g.:
::
-device virtio-sound-pci,audiodev=my_audiodev \
-audiodev alsa,id=my_audiodev


@@ -15,24 +15,46 @@ Setup
-----
Xen mode is enabled by setting the ``xen-version`` property of the KVM
accelerator, for example for Xen 4.17: accelerator, for example for Xen 4.10:
.. parsed-literal::
|qemu_system| --accel kvm,xen-version=0x40011,kernel-irqchip=split |qemu_system| --accel kvm,xen-version=0x4000a,kernel-irqchip=split
Additionally, virtual APIC support can be advertised to the guest through the
``xen-vapic`` CPU flag:
.. parsed-literal::
|qemu_system| --accel kvm,xen-version=0x40011,kernel-irqchip=split --cpu host,+xen-vapic |qemu_system| --accel kvm,xen-version=0x4000a,kernel-irqchip=split --cpu host,+xen_vapic
When Xen support is enabled, QEMU changes hypervisor identification (CPUID
0x40000000..0x4000000A) to Xen. The KVM identification and features are not
advertised to a Xen guest. If Hyper-V is also enabled, the Xen identification
moves to leaves 0x40000100..0x4000010A.
The Xen platform device is enabled automatically for a Xen guest. This allows
a guest to unplug all emulated devices, in order to use Xen PV block and network
drivers instead. Under Xen, the boot disk is typically available both via IDE
emulation, and as a PV block device. Guest bootloaders typically use IDE to load
the guest kernel, which then unplugs the IDE and continues with the Xen PV block
device.
This configuration can be achieved as follows:
.. parsed-literal::
|qemu_system| -M pc --accel kvm,xen-version=0x4000a,kernel-irqchip=split \\
-drive file=${GUEST_IMAGE},if=none,id=disk,file.locking=off -device xen-disk,drive=disk,vdev=xvda \\
-drive file=${GUEST_IMAGE},index=2,media=disk,file.locking=off,if=ide
It is necessary to use the pc machine type, as the q35 machine uses AHCI instead
of legacy IDE, and AHCI disks are not unplugged through the Xen PV unplug
mechanism.
VirtIO devices can also be used; Linux guests may need to be dissuaded from
unplugging them by adding ``xen_emul_unplug=never`` on their command line.
Properties
----------
@@ -41,10 +63,7 @@ The following properties exist on the KVM accelerator object:
``xen-version``
This property contains the Xen version in ``XENVER_version`` form, with the
major version in the top 16 bits and the minor version in the low 16 bits.
Setting this property enables the Xen guest support. If Xen version 4.5 or Setting this property enables the Xen guest support.
greater is specified, the HVM leaf in Xen CPUID is populated. Xen version
4.6 enables the vCPU ID in CPUID, and version 4.17 advertises vCPU upcall
vector support to the guest.
``xen-evtchn-max-pirq``
Xen PIRQs represent an emulated physical interrupt, either GSI or MSI, which
@@ -64,78 +83,8 @@ The following properties exist on the KVM accelerator object:
through simultaneous grants. For guests with large numbers of PV devices and
high throughput, it may be desirable to increase this value.
Xen paravirtual devices OS requirements
----------------------- ---------------
The Xen PCI platform device is enabled automatically for a Xen guest. This
allows a guest to unplug all emulated devices, in order to use paravirtual
block and network drivers instead.
Those paravirtual Xen block, network (and console) devices can be created
through the command line, and/or hot-plugged.
To provide a Xen console device, define a character device and then a device
of type ``xen-console`` to connect to it. For the Xen console equivalent of
the handy ``-serial mon:stdio`` option, for example:
.. parsed-literal::
-chardev stdio,mux=on,id=char0,signal=off -mon char0 \\
-device xen-console,chardev=char0
The Xen network device is ``xen-net-device``, which becomes the default NIC
model for emulated Xen guests, meaning that just the default NIC provided
by QEMU should automatically work and present a Xen network device to the
guest.
Disks can be configured with '``-drive file=${GUEST_IMAGE},if=xen``' and will
appear to the guest as ``xvda`` onwards.
Under Xen, the boot disk is typically available both via IDE emulation, and
as a PV block device. Guest bootloaders typically use IDE to load the guest
kernel, which then unplugs the IDE and continues with the Xen PV block device.
This configuration can be achieved as follows:
.. parsed-literal::
|qemu_system| --accel kvm,xen-version=0x40011,kernel-irqchip=split \\
-drive file=${GUEST_IMAGE},if=xen \\
-drive file=${GUEST_IMAGE},file.locking=off,if=ide
VirtIO devices can also be used; Linux guests may need to be dissuaded from
unplugging them by adding '``xen_emul_unplug=never``' on their command line.
Booting Xen PV guests
---------------------
Booting PV guest kernels is possible by using the Xen PV shim (a version of Xen
itself, designed to run inside a Xen HVM guest and provide memory management
services for one guest alone).
The Xen binary is provided as the ``-kernel`` and the guest kernel itself (or
PV Grub image) as the ``-initrd`` image, which actually just means the first
multiboot "module". For example:
.. parsed-literal::
|qemu_system| --accel kvm,xen-version=0x40011,kernel-irqchip=split \\
-chardev stdio,id=char0 -device xen-console,chardev=char0 \\
-display none -m 1G -kernel xen -initrd bzImage \\
-append "pv-shim console=xen,pv -- console=hvc0 root=/dev/xvda1" \\
-drive file=${GUEST_IMAGE},if=xen
The Xen image must be built with the ``CONFIG_XEN_GUEST`` and ``CONFIG_PV_SHIM``
options, and as of Xen 4.17, Xen's PV shim mode does not support using a serial
port; it must have a Xen console or it will panic.
The example above provides the guest kernel command line after a separator
(" ``--`` ") on the Xen command line, and does not provide the guest kernel
with an actual initramfs, which would need to be listed as a second multiboot
module. For more complicated alternatives, see the command line
documentation for the ``-initrd`` option.
Host OS requirements
--------------------
The minimal Xen support in the KVM accelerator requires the host to be running
Linux v5.12 or newer. Later versions add optimisations: Linux v5.17 added


@@ -12,7 +12,7 @@ Supported devices
The ``virt`` machine supports the following devices:
* Up to 512 generic RV32GC/RV64GC cores, with optional extensions * Up to 8 generic RV32GC/RV64GC cores, with optional extensions
* Core Local Interruptor (CLINT)
* Platform-Level Interrupt Controller (PLIC)
* CFI parallel NOR flash memory


@@ -19,7 +19,6 @@ void hmp_dump_guest_memory(Monitor *mon, const QDict *qdict)
bool paging = qdict_get_try_bool(qdict, "paging", false);
bool zlib = qdict_get_try_bool(qdict, "zlib", false);
bool lzo = qdict_get_try_bool(qdict, "lzo", false);
bool raw = qdict_get_try_bool(qdict, "raw", false);
bool snappy = qdict_get_try_bool(qdict, "snappy", false);
const char *file = qdict_get_str(qdict, "filename");
bool has_begin = qdict_haskey(qdict, "begin");
@@ -41,28 +40,16 @@ void hmp_dump_guest_memory(Monitor *mon, const QDict *qdict)
dump_format = DUMP_GUEST_MEMORY_FORMAT_WIN_DMP;
}
if (zlib && raw) { if (zlib) {
if (raw) { dump_format = DUMP_GUEST_MEMORY_FORMAT_KDUMP_ZLIB;
dump_format = DUMP_GUEST_MEMORY_FORMAT_KDUMP_RAW_ZLIB;
} else {
dump_format = DUMP_GUEST_MEMORY_FORMAT_KDUMP_ZLIB;
}
}
if (lzo) {
if (raw) { dump_format = DUMP_GUEST_MEMORY_FORMAT_KDUMP_LZO;
dump_format = DUMP_GUEST_MEMORY_FORMAT_KDUMP_RAW_LZO;
} else {
dump_format = DUMP_GUEST_MEMORY_FORMAT_KDUMP_LZO;
}
}
if (snappy) {
if (raw) { dump_format = DUMP_GUEST_MEMORY_FORMAT_KDUMP_SNAPPY;
dump_format = DUMP_GUEST_MEMORY_FORMAT_KDUMP_RAW_SNAPPY;
} else {
dump_format = DUMP_GUEST_MEMORY_FORMAT_KDUMP_SNAPPY;
}
}
if (has_begin) {


@@ -100,7 +100,7 @@ static int dump_cleanup(DumpState *s)
memory_mapping_list_free(&s->list);
close(s->fd);
g_free(s->guest_note);
g_clear_pointer(&s->string_table_buf, g_array_unref); g_array_unref(s->string_table_buf);
s->guest_note = NULL;
if (s->resume) {
if (s->detached) {
@@ -809,15 +809,11 @@ static void create_vmcore(DumpState *s, Error **errp)
dump_end(s, errp);
}
static int write_start_flat_header(DumpState *s) static int write_start_flat_header(int fd)
{
MakedumpfileHeader *mh;
int ret = 0;
if (s->kdump_raw) {
return 0;
}
QEMU_BUILD_BUG_ON(sizeof *mh > MAX_SIZE_MDF_HEADER);
mh = g_malloc0(MAX_SIZE_MDF_HEADER);
@@ -828,7 +824,7 @@ static int write_start_flat_header(DumpState *s)
mh->version = cpu_to_be64(VERSION_FLAT_HEADER);
size_t written_size;
written_size = qemu_write_full(s->fd, mh, MAX_SIZE_MDF_HEADER); written_size = qemu_write_full(fd, mh, MAX_SIZE_MDF_HEADER);
if (written_size != MAX_SIZE_MDF_HEADER) {
ret = -1;
}
@@ -837,19 +833,15 @@ static int write_start_flat_header(DumpState *s)
return ret;
}
static int write_end_flat_header(DumpState *s) static int write_end_flat_header(int fd)
{
MakedumpfileDataHeader mdh;
if (s->kdump_raw) {
return 0;
}
mdh.offset = END_FLAG_FLAT_HEADER;
mdh.buf_size = END_FLAG_FLAT_HEADER;
size_t written_size;
written_size = qemu_write_full(s->fd, &mdh, sizeof(mdh)); written_size = qemu_write_full(fd, &mdh, sizeof(mdh));
if (written_size != sizeof(mdh)) {
return -1;
}
@@ -857,28 +849,20 @@ static int write_end_flat_header(DumpState *s)
return 0;
}
static int write_buffer(DumpState *s, off_t offset, const void *buf, size_t size) static int write_buffer(int fd, off_t offset, const void *buf, size_t size)
{
size_t written_size;
MakedumpfileDataHeader mdh;
off_t seek_loc;
if (s->kdump_raw) { mdh.offset = cpu_to_be64(offset);
seek_loc = lseek(s->fd, offset, SEEK_SET); mdh.buf_size = cpu_to_be64(size);
if (seek_loc == (off_t) -1) {
return -1;
}
} else {
mdh.offset = cpu_to_be64(offset);
mdh.buf_size = cpu_to_be64(size);
written_size = qemu_write_full(s->fd, &mdh, sizeof(mdh)); written_size = qemu_write_full(fd, &mdh, sizeof(mdh));
if (written_size != sizeof(mdh)) {
return -1;
}
}
written_size = qemu_write_full(s->fd, buf, size); written_size = qemu_write_full(fd, buf, size);
if (written_size != size) {
return -1;
}
@@ -998,7 +982,7 @@ static void create_header32(DumpState *s, Error **errp)
#endif
dh->status = cpu_to_dump32(s, status);
if (write_buffer(s, 0, dh, size) < 0) { if (write_buffer(s->fd, 0, dh, size) < 0) {
error_setg(errp, "dump: failed to write disk dump header");
goto out;
}
@@ -1028,7 +1012,7 @@ static void create_header32(DumpState *s, Error **errp)
kh->offset_note = cpu_to_dump64(s, offset_note);
kh->note_size = cpu_to_dump32(s, s->note_size);
if (write_buffer(s, DISKDUMP_HEADER_BLOCKS * if (write_buffer(s->fd, DISKDUMP_HEADER_BLOCKS *
block_size, kh, size) < 0) {
error_setg(errp, "dump: failed to write kdump sub header");
goto out;
@@ -1043,7 +1027,7 @@ static void create_header32(DumpState *s, Error **errp)
if (*errp) {
goto out;
}
if (write_buffer(s, offset_note, s->note_buf, if (write_buffer(s->fd, offset_note, s->note_buf,
s->note_size) < 0) {
error_setg(errp, "dump: failed to write notes");
goto out;
@@ -1109,7 +1093,7 @@ static void create_header64(DumpState *s, Error **errp)
#endif
dh->status = cpu_to_dump32(s, status);
if (write_buffer(s, 0, dh, size) < 0) { if (write_buffer(s->fd, 0, dh, size) < 0) {
error_setg(errp, "dump: failed to write disk dump header");
goto out;
}
@@ -1139,7 +1123,7 @@ static void create_header64(DumpState *s, Error **errp)
kh->offset_note = cpu_to_dump64(s, offset_note);
kh->note_size = cpu_to_dump64(s, s->note_size);
if (write_buffer(s, DISKDUMP_HEADER_BLOCKS * if (write_buffer(s->fd, DISKDUMP_HEADER_BLOCKS *
block_size, kh, size) < 0) {
error_setg(errp, "dump: failed to write kdump sub header");
goto out;
@@ -1155,7 +1139,7 @@ static void create_header64(DumpState *s, Error **errp)
goto out;
}
if (write_buffer(s, offset_note, s->note_buf, if (write_buffer(s->fd, offset_note, s->note_buf,
s->note_size) < 0) {
error_setg(errp, "dump: failed to write notes");
goto out;
@@ -1220,7 +1204,7 @@ static int set_dump_bitmap(uint64_t last_pfn, uint64_t pfn, bool value,
while (old_offset < new_offset) {
/* calculate the offset and write dump_bitmap */
offset_bitmap1 = s->offset_dump_bitmap + old_offset;
if (write_buffer(s, offset_bitmap1, buf, if (write_buffer(s->fd, offset_bitmap1, buf,
bitmap_bufsize) < 0) {
return -1;
}
@@ -1228,7 +1212,7 @@ static int set_dump_bitmap(uint64_t last_pfn, uint64_t pfn, bool value,
/* dump level 1 is chosen, so 1st and 2nd bitmap are same */
offset_bitmap2 = s->offset_dump_bitmap + s->len_dump_bitmap +
old_offset;
if (write_buffer(s, offset_bitmap2, buf, if (write_buffer(s->fd, offset_bitmap2, buf,
bitmap_bufsize) < 0) {
return -1;
}
@@ -1396,7 +1380,7 @@ out:
static void prepare_data_cache(DataCache *data_cache, DumpState *s,
off_t offset)
{
data_cache->state = s; data_cache->fd = s->fd;
data_cache->data_size = 0;
data_cache->buf_size = 4 * dump_bitmap_get_bufsize(s);
data_cache->buf = g_malloc0(data_cache->buf_size);
@@ -1415,11 +1399,11 @@ static int write_cache(DataCache *dc, const void *buf, size_t size,
/*
* if flag_sync is set, synchronize data in dc->buf into vmcore.
* otherwise check if the space is enough for caching data in buf, if not,
* write the data in dc->buf to dc->state->fd and reset dc->buf * write the data in dc->buf to dc->fd and reset dc->buf
*/
if ((!flag_sync && dc->data_size + size > dc->buf_size) ||
(flag_sync && dc->data_size > 0)) {
if (write_buffer(dc->state, dc->offset, dc->buf, dc->data_size) < 0) { if (write_buffer(dc->fd, dc->offset, dc->buf, dc->data_size) < 0) {
return -1;
}
@@ -1660,7 +1644,7 @@ static void create_kdump_vmcore(DumpState *s, Error **errp)
* +------------------------------------------+
*/
ret = write_start_flat_header(s); ret = write_start_flat_header(s->fd);
if (ret < 0) {
error_setg(errp, "dump: failed to write start flat header");
return;
@@ -1681,13 +1665,33 @@ static void create_kdump_vmcore(DumpState *s, Error **errp)
return;
}
ret = write_end_flat_header(s); ret = write_end_flat_header(s->fd);
if (ret < 0) {
error_setg(errp, "dump: failed to write end flat header");
return;
}
}
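write_buffer() above frames each chunk in makedumpfile's flattened format: a big-endian (offset, size) header followed by the payload, with a trailer whose fields are END_FLAG_FLAT_HEADER. A hypothetical standalone encoder for one data chunk (the function name is illustrative, not QEMU code):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical encoder for one makedumpfile flat-format chunk, as
 * written by write_buffer(): 8-byte big-endian target offset, 8-byte
 * big-endian payload size, then the payload itself. Returns the total
 * number of bytes produced in 'out'. */
static size_t flat_chunk_encode(uint8_t *out, uint64_t offset,
                                const uint8_t *buf, uint64_t size)
{
    for (int i = 0; i < 8; i++) {
        out[i] = (uint8_t)(offset >> (56 - 8 * i));   /* cpu_to_be64 */
        out[8 + i] = (uint8_t)(size >> (56 - 8 * i));
    }
    memcpy(out + 16, buf, size);
    return 16 + size;
}
```

With kdump_raw set, this framing is skipped entirely and the payload is written at its real offset via lseek(), which is why raw mode requires a seekable file.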
static int validate_start_block(DumpState *s)
{
GuestPhysBlock *block;
if (!dump_has_filter(s)) {
return 0;
}
QTAILQ_FOREACH(block, &s->guest_phys_blocks.head, next) {
/* This block is out of the range */
if (block->target_start >= s->filter_area_begin + s->filter_area_length ||
block->target_end <= s->filter_area_begin) {
continue;
}
return 0;
}
return -1;
}
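The loop above is a standard half-open interval overlap test: the filter window is accepted only if it intersects at least one guest physical block. A minimal standalone model of that predicate (hypothetical name, not QEMU code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of the per-block test in validate_start_block():
 * the dump filter window [begin, begin + length) overlaps a guest
 * physical block [start, end) unless the block lies entirely before
 * or entirely after the window. */
static bool filter_hits_block(uint64_t begin, uint64_t length,
                              uint64_t start, uint64_t end)
{
    /* Negation of the 'continue' condition in the loop above. */
    return !(start >= begin + length || end <= begin);
}
```

validate_start_block() returns -1, rejecting the 'begin' parameter, only when this predicate is false for every block.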
static void get_max_mapnr(DumpState *s)
{
GuestPhysBlock *last_block;
@@ -1771,8 +1775,7 @@ static void vmcoreinfo_update_phys_base(DumpState *s)
static void dump_init(DumpState *s, int fd, bool has_format,
DumpGuestMemoryFormat format, bool paging, bool has_filter,
int64_t begin, int64_t length, bool kdump_raw, int64_t begin, int64_t length, Error **errp)
Error **errp)
{
ERRP_GUARD();
VMCoreInfoState *vmci = vmcoreinfo_find();
@@ -1783,7 +1786,6 @@ static void dump_init(DumpState *s, int fd, bool has_format,
s->has_format = has_format;
s->format = format;
s->written_size = 0;
s->kdump_raw = kdump_raw;
/* kdump-compressed is conflict with paging and filter */
if (has_format && format != DUMP_GUEST_MEMORY_FORMAT_ELF) {
@@ -1808,7 +1810,7 @@ static void dump_init(DumpState *s, int fd, bool has_format,
s->fd = fd;
if (has_filter && !length) {
error_setg(errp, "parameter 'length' expects a non-zero size"); error_setg(errp, QERR_INVALID_PARAMETER, "length");
goto cleanup;
}
s->filter_area_begin = begin;
@@ -1837,6 +1839,12 @@ static void dump_init(DumpState *s, int fd, bool has_format,
goto cleanup;
}
/* Is the filter filtering everything? */
if (validate_start_block(s) == -1) {
error_setg(errp, QERR_INVALID_PARAMETER, "begin");
goto cleanup;
}
/* get dump info: endian, class and architecture.
* If the target architecture is not supported, cpu_get_dump_info() will
* return -1.
@@ -2053,19 +2061,17 @@ DumpQueryResult *qmp_query_dump(Error **errp)
return result;
}
void qmp_dump_guest_memory(bool paging, const char *protocol, void qmp_dump_guest_memory(bool paging, const char *file,
bool has_detach, bool detach,
bool has_begin, int64_t begin, bool has_begin, int64_t begin, bool has_length,
bool has_length, int64_t length, int64_t length, bool has_format,
bool has_format, DumpGuestMemoryFormat format, DumpGuestMemoryFormat format, Error **errp)
Error **errp)
{
ERRP_GUARD();
const char *p;
int fd; int fd = -1;
DumpState *s;
bool detach_p = false;
bool kdump_raw = false;
if (runstate_check(RUN_STATE_INMIGRATE)) {
error_setg(errp, "Dump not allowed during incoming migration.");
@@ -2079,29 +2085,6 @@ void qmp_dump_guest_memory(bool paging, const char *protocol,
return;
}
/*
* externally, we represent kdump-raw-* as separate formats, but internally
* they are handled the same, except for the "raw" flag
*/
if (has_format) {
switch (format) {
case DUMP_GUEST_MEMORY_FORMAT_KDUMP_RAW_ZLIB:
format = DUMP_GUEST_MEMORY_FORMAT_KDUMP_ZLIB;
kdump_raw = true;
break;
case DUMP_GUEST_MEMORY_FORMAT_KDUMP_RAW_LZO:
format = DUMP_GUEST_MEMORY_FORMAT_KDUMP_LZO;
kdump_raw = true;
break;
case DUMP_GUEST_MEMORY_FORMAT_KDUMP_RAW_SNAPPY:
format = DUMP_GUEST_MEMORY_FORMAT_KDUMP_SNAPPY;
kdump_raw = true;
break;
default:
break;
}
}
/*
* kdump-compressed format need the whole memory dumped, so paging or
* filter is not supported here.
@@ -2144,24 +2127,25 @@ void qmp_dump_guest_memory(bool paging, const char *protocol,
return;
}
if (strstart(protocol, "fd:", &p)) { #if !defined(WIN32)
if (strstart(file, "fd:", &p)) {
fd = monitor_get_fd(monitor_cur(), p, errp);
if (fd == -1) {
return;
}
} else if (strstart(protocol, "file:", &p)) { }
fd = qemu_create(p, O_WRONLY | O_TRUNC | O_BINARY, S_IRUSR, errp); #endif
if (strstart(file, "file:", &p)) {
fd = qemu_open_old(p, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR);
if (fd < 0) {
error_setg_file_open(errp, errno, p);
return;
}
} else {
error_setg(errp,
"parameter 'protocol' must start with 'file:' or 'fd:'");
return;
}
if (kdump_raw && lseek(fd, 0, SEEK_CUR) == (off_t) -1) {
close(fd); if (fd == -1) {
error_setg(errp, "kdump-raw formats require a seekable file"); error_setg(errp, QERR_INVALID_PARAMETER, "protocol");
return;
}
@@ -2184,7 +2168,7 @@ void qmp_dump_guest_memory(bool paging, const char *protocol,
dump_state_prepare(s);
dump_init(s, fd, has_format, format, paging, has_begin,
begin, length, kdump_raw, errp); begin, length, errp);
if (*errp) {
qatomic_set(&s->status, DUMP_STATUS_FAILED);
return;
@@ -2212,18 +2196,15 @@ DumpGuestMemoryCapability *qmp_query_dump_guest_memory_capability(Error **errp)
/* kdump-zlib is always available */
QAPI_LIST_APPEND(tail, DUMP_GUEST_MEMORY_FORMAT_KDUMP_ZLIB);
QAPI_LIST_APPEND(tail, DUMP_GUEST_MEMORY_FORMAT_KDUMP_RAW_ZLIB);
/* add new item if kdump-lzo is available */
#ifdef CONFIG_LZO
QAPI_LIST_APPEND(tail, DUMP_GUEST_MEMORY_FORMAT_KDUMP_LZO);
QAPI_LIST_APPEND(tail, DUMP_GUEST_MEMORY_FORMAT_KDUMP_RAW_LZO);
#endif
/* add new item if kdump-snappy is available */
#ifdef CONFIG_SNAPPY
QAPI_LIST_APPEND(tail, DUMP_GUEST_MEMORY_FORMAT_KDUMP_SNAPPY);
QAPI_LIST_APPEND(tail, DUMP_GUEST_MEMORY_FORMAT_KDUMP_RAW_SNAPPY);
#endif
if (win_dump_available(NULL)) {


@@ -76,7 +76,7 @@
<reg name="q8" bitsize="128" type="neon_q"/>
<reg name="q9" bitsize="128" type="neon_q"/>
<reg name="q10" bitsize="128" type="neon_q"/>
<reg name="q11" bitsize="128" type="neon_q"/> <reg name="q10" bitsize="128" type="neon_q"/>
<reg name="q12" bitsize="128" type="neon_q"/>
<reg name="q13" bitsize="128" type="neon_q"/>
<reg name="q14" bitsize="128" type="neon_q"/>


@@ -422,84 +422,6 @@ static const char *get_feature_xml(const char *p, const char **newp,
return NULL;
}
void gdb_feature_builder_init(GDBFeatureBuilder *builder, GDBFeature *feature,
const char *name, const char *xmlname,
int base_reg)
{
char *header = g_markup_printf_escaped(
"<?xml version=\"1.0\"?>"
"<!DOCTYPE feature SYSTEM \"gdb-target.dtd\">"
"<feature name=\"%s\">",
name);
builder->feature = feature;
builder->xml = g_ptr_array_new();
g_ptr_array_add(builder->xml, header);
builder->base_reg = base_reg;
feature->xmlname = xmlname;
feature->num_regs = 0;
}
void gdb_feature_builder_append_tag(const GDBFeatureBuilder *builder,
const char *format, ...)
{
va_list ap;
va_start(ap, format);
g_ptr_array_add(builder->xml, g_markup_vprintf_escaped(format, ap));
va_end(ap);
}
void gdb_feature_builder_append_reg(const GDBFeatureBuilder *builder,
const char *name,
int bitsize,
int regnum,
const char *type,
const char *group)
{
if (builder->feature->num_regs < regnum) {
builder->feature->num_regs = regnum;
}
if (group) {
gdb_feature_builder_append_tag(
builder,
"<reg name=\"%s\" bitsize=\"%d\" regnum=\"%d\" type=\"%s\" group=\"%s\"/>",
name, bitsize, builder->base_reg + regnum, type, group);
} else {
gdb_feature_builder_append_tag(
builder,
"<reg name=\"%s\" bitsize=\"%d\" regnum=\"%d\" type=\"%s\"/>",
name, bitsize, builder->base_reg + regnum, type);
}
}
void gdb_feature_builder_end(const GDBFeatureBuilder *builder)
{
g_ptr_array_add(builder->xml, (void *)"</feature>");
g_ptr_array_add(builder->xml, NULL);
builder->feature->xml = g_strjoinv(NULL, (void *)builder->xml->pdata);
for (guint i = 0; i < builder->xml->len - 2; i++) {
g_free(g_ptr_array_index(builder->xml, i));
}
g_ptr_array_free(builder->xml, TRUE);
}
const GDBFeature *gdb_find_static_feature(const char *xmlname)
{
const GDBFeature *feature;
for (feature = gdb_static_features; feature->xmlname; feature++) {
if (!strcmp(feature->xmlname, xmlname)) {
return feature;
}
}
g_assert_not_reached();
}
static int gdb_read_register(CPUState *cpu, GByteArray *buf, int reg)
{
CPUClass *cc = CPU_GET_CLASS(cpu);


@@ -252,7 +252,6 @@ SRST
ERST
#ifdef CONFIG_PIXMAN
{
.name = "screendump",
.args_type = "filename:F,format:-fs,device:s?,head:i?",
@@ -268,7 +267,6 @@ SRST
``screendump`` *filename*
Save screen into PPM image *filename*.
ERST
#endif
{
.name = "logfile",
@@ -1087,16 +1085,14 @@ ERST
{
.name = "dump-guest-memory",
.args_type = "paging:-p,detach:-d,windmp:-w,zlib:-z,lzo:-l,snappy:-s,raw:-R,filename:F,begin:l?,length:l?", .args_type = "paging:-p,detach:-d,windmp:-w,zlib:-z,lzo:-l,snappy:-s,filename:F,begin:l?,length:l?",
.params = "[-p] [-d] [-z|-l|-s|-w] [-R] filename [begin length]", .params = "[-p] [-d] [-z|-l|-s|-w] filename [begin length]",
.help = "dump guest memory into file 'filename'.\n\t\t\t"
"-p: do paging to get guest's memory mapping.\n\t\t\t"
"-d: return immediately (do not wait for completion).\n\t\t\t"
"-z: dump in kdump-compressed format, with zlib compression.\n\t\t\t"
"-l: dump in kdump-compressed format, with lzo compression.\n\t\t\t"
"-s: dump in kdump-compressed format, with snappy compression.\n\t\t\t"
"-R: when using kdump (-z, -l, -s), use raw rather than makedumpfile-flattened\n\t\t\t"
" format\n\t\t\t"
"-w: dump in Windows crashdump format (can be used instead of ELF-dump converting),\n\t\t\t"
" for Windows x86 and x64 guests with vmcoreinfo driver only.\n\t\t\t"
"begin: the starting physical address.\n\t\t\t"
@@ -1119,9 +1115,6 @@ SRST
dump in kdump-compressed format, with lzo compression.
``-s``
dump in kdump-compressed format, with snappy compression.
``-R``
when using kdump (-z, -l, -s), use raw rather than makedumpfile-flattened
format
``-w``
dump in Windows crashdump format (can be used instead of ELF-dump converting),
for Windows x64 guests with vmcoreinfo driver only


@@ -1,52 +0,0 @@
/*
* SPDX-License-Identifier: GPL-2.0-or-later
* Load/store for 128-bit atomic operations, LoongArch version.
*
* See docs/devel/atomics.rst for discussion about the guarantees each
* atomic primitive is meant to provide.
*/
#ifndef LOONGARCH_ATOMIC128_LDST_H
#define LOONGARCH_ATOMIC128_LDST_H
#include "host/cpuinfo.h"
#include "tcg/debug-assert.h"
#define HAVE_ATOMIC128_RO likely(cpuinfo & CPUINFO_LSX)
#define HAVE_ATOMIC128_RW HAVE_ATOMIC128_RO
/*
* As of gcc 13 and clang 16, there is no compiler support for LSX at all.
* Use inline assembly throughout.
*/
static inline Int128 atomic16_read_ro(const Int128 *ptr)
{
uint64_t l, h;
tcg_debug_assert(HAVE_ATOMIC128_RO);
asm("vld $vr0, %2, 0\n\t"
"vpickve2gr.d %0, $vr0, 0\n\t"
"vpickve2gr.d %1, $vr0, 1"
: "=r"(l), "=r"(h) : "r"(ptr), "m"(*ptr) : "f0");
return int128_make128(l, h);
}
static inline Int128 atomic16_read_rw(Int128 *ptr)
{
return atomic16_read_ro(ptr);
}
static inline void atomic16_set(Int128 *ptr, Int128 val)
{
uint64_t l = int128_getlo(val), h = int128_gethi(val);
tcg_debug_assert(HAVE_ATOMIC128_RW);
asm("vinsgr2vr.d $vr0, %1, 0\n\t"
"vinsgr2vr.d $vr0, %2, 1\n\t"
"vst $vr0, %3, 0"
: "=m"(*ptr) : "r"(l), "r"(h), "r"(ptr) : "f0");
}
#endif /* LOONGARCH_ATOMIC128_LDST_H */


@@ -1,21 +0,0 @@
/*
* SPDX-License-Identifier: GPL-2.0-or-later
* Host specific cpu identification for LoongArch
*/
#ifndef HOST_CPUINFO_H
#define HOST_CPUINFO_H
#define CPUINFO_ALWAYS (1u << 0) /* so cpuinfo is nonzero */
#define CPUINFO_LSX (1u << 1)
/* Initialized with a constructor. */
extern unsigned cpuinfo;
/*
* We cannot rely on constructor ordering, so other constructors must
* use the function interface rather than the variable above.
*/
unsigned cpuinfo_init(void);
#endif /* HOST_CPUINFO_H */


@@ -1,39 +0,0 @@
/*
* SPDX-License-Identifier: GPL-2.0-or-later
* Atomic extract 64 from 128-bit, LoongArch version.
*
* Copyright (C) 2023 Linaro, Ltd.
*/
#ifndef LOONGARCH_LOAD_EXTRACT_AL16_AL8_H
#define LOONGARCH_LOAD_EXTRACT_AL16_AL8_H
#include "host/cpuinfo.h"
#include "tcg/debug-assert.h"
/**
* load_atom_extract_al16_or_al8:
* @pv: host address
* @s: object size in bytes, @s <= 8.
*
* Load @s bytes from @pv, when pv % s != 0. If [p, p+s-1] does not
* cross an 16-byte boundary then the access must be 16-byte atomic,
* otherwise the access must be 8-byte atomic.
*/
static inline uint64_t load_atom_extract_al16_or_al8(void *pv, int s)
{
uintptr_t pi = (uintptr_t)pv;
Int128 *ptr_align = (Int128 *)(pi & ~7);
int shr = (pi & 7) * 8;
uint64_t l, h;
tcg_debug_assert(HAVE_ATOMIC128_RO);
asm("vld $vr0, %2, 0\n\t"
"vpickve2gr.d %0, $vr0, 0\n\t"
"vpickve2gr.d %1, $vr0, 1"
: "=r"(l), "=r"(h) : "r"(ptr_align), "m"(*ptr_align) : "f0");
return (l >> shr) | (h << (-shr & 63));
}
#endif /* LOONGARCH_LOAD_EXTRACT_AL16_AL8_H */
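The final line of the function above combines the two 64-bit lanes to extract the unaligned value: shift the low lane right by the byte offset and OR in the bits carried over from the high lane. A portable scalar sketch of that arithmetic (illustrative model only; the real code performs the 16-byte load with LSX, and its callers guarantee a non-zero offset):

```c
#include <stdint.h>

/* Scalar model of the (l >> shr) | (h << (-shr & 63)) extraction:
 * buf16 is a 16-byte buffer in little-endian lane order, off is the
 * byte offset (0..7) of the value within it. The shr == 0 case is
 * handled explicitly here to avoid an undefined 64-bit shift. */
static uint64_t extract_al16_or_al8_model(const uint8_t buf16[16], int off)
{
    uint64_t l = 0, h = 0;
    for (int i = 0; i < 8; i++) {
        l |= (uint64_t)buf16[i] << (8 * i);       /* low 8-byte lane */
        h |= (uint64_t)buf16[8 + i] << (8 * i);   /* high 8-byte lane */
    }
    int shr = (off & 7) * 8;
    return shr ? (l >> shr) | (h << (64 - shr)) : l;
}
```

For shr in 8..56, `64 - shr` equals the `-shr & 63` written in the assembly version.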


@@ -1,12 +0,0 @@
/*
* SPDX-License-Identifier: GPL-2.0-or-later
* Atomic store insert into 128-bit, LoongArch version.
*/
#ifndef LOONGARCH_STORE_INSERT_AL16_H
#define LOONGARCH_STORE_INSERT_AL16_H
void store_atom_insert_al16(Int128 *ps, Int128 val, Int128 msk)
QEMU_ERROR("unsupported atomic");
#endif /* LOONGARCH_STORE_INSERT_AL16_H */


@@ -738,10 +738,6 @@ static AddressSpace *typhoon_pci_dma_iommu(PCIBus *bus, void *opaque, int devfn)
return &s->pchip.iommu_as;
}
static const PCIIOMMUOps typhoon_iommu_ops = {
.get_address_space = typhoon_pci_dma_iommu,
};
static void typhoon_set_irq(void *opaque, int irq, int level)
{
TyphoonState *s = opaque;
@@ -901,7 +897,7 @@ PCIBus *typhoon_init(MemoryRegion *ram, qemu_irq *p_isa_irq,
"iommu-typhoon", UINT64_MAX);
address_space_init(&s->pchip.iommu_as, MEMORY_REGION(&s->pchip.iommu),
"pchip0-pci");
pci_setup_iommu(b, &typhoon_iommu_ops, s); pci_setup_iommu(b, typhoon_pci_dma_iommu, s);
/* Pchip0 PCI special/interrupt acknowledge, 0x801.F800.0000, 64MB. */
memory_region_init_io(&s->pchip.reg_iack, OBJECT(s), &alpha_pci_iack_ops,


@@ -450,7 +450,7 @@ config STM32F405_SOC
config XLNX_ZYNQMP_ARM
bool
default y if PIXMAN default y
depends on TCG && AARCH64
select AHCI
select ARM_GIC
@@ -463,7 +463,6 @@ config XLNX_ZYNQMP_ARM
select XILINX_AXI
select XILINX_SPIPS
select XLNX_CSU_DMA
select XLNX_DISPLAYPORT
select XLNX_ZYNQMP select XLNX_ZYNQMP
select XLNX_ZDMA select XLNX_ZDMA
select USB_DWC3 select USB_DWC3
@@ -484,14 +483,12 @@ config XLNX_VERSAL
select XLNX_EFUSE_VERSAL select XLNX_EFUSE_VERSAL
select XLNX_USB_SUBSYS select XLNX_USB_SUBSYS
select XLNX_VERSAL_TRNG select XLNX_VERSAL_TRNG
select XLNX_CSU_DMA
config NPCM7XX config NPCM7XX
bool bool
default y default y
depends on TCG && ARM depends on TCG && ARM
select A9MPCORE select A9MPCORE
select ADM1266
select ADM1272 select ADM1272
select ARM_GIC select ARM_GIC
select SMBUS select SMBUS


@@ -48,7 +48,6 @@
 #include "qemu/units.h"
 #include "qemu/cutils.h"
 #include "qapi/error.h"
-#include "qapi/qmp/qlist.h"
 #include "qemu/error-report.h"
 #include "hw/arm/boot.h"
 #include "hw/arm/armv7m.h"
@@ -462,7 +461,6 @@ static MemoryRegion *make_scc(MPS2TZMachineState *mms, void *opaque,
     MPS2SCC *scc = opaque;
     DeviceState *sccdev;
     MPS2TZMachineClass *mmc = MPS2TZ_MACHINE_GET_CLASS(mms);
-    QList *oscclk;
     uint32_t i;
 
     object_initialize_child(OBJECT(mms), "scc", scc, TYPE_MPS2_SCC);
@@ -471,13 +469,11 @@ static MemoryRegion *make_scc(MPS2TZMachineState *mms, void *opaque,
     qdev_prop_set_uint32(sccdev, "scc-cfg4", 0x2);
     qdev_prop_set_uint32(sccdev, "scc-aid", 0x00200008);
     qdev_prop_set_uint32(sccdev, "scc-id", mmc->scc_id);
-    oscclk = qlist_new();
+    qdev_prop_set_uint32(sccdev, "len-oscclk", mmc->len_oscclk);
     for (i = 0; i < mmc->len_oscclk; i++) {
-        qlist_append_int(oscclk, mmc->oscclk[i]);
+        g_autofree char *propname = g_strdup_printf("oscclk[%u]", i);
+        qdev_prop_set_uint32(sccdev, propname, mmc->oscclk[i]);
     }
-    qdev_prop_set_array(sccdev, "oscclk", oscclk);
     sysbus_realize(SYS_BUS_DEVICE(scc), &error_fatal);
     return sysbus_mmio_get_region(SYS_BUS_DEVICE(sccdev), 0);
 }


@@ -48,7 +48,6 @@
 #include "net/net.h"
 #include "hw/watchdog/cmsdk-apb-watchdog.h"
 #include "hw/qdev-clock.h"
-#include "qapi/qmp/qlist.h"
 #include "qom/object.h"
 
 typedef enum MPS2FPGAType {
@@ -139,7 +138,6 @@ static void mps2_common_init(MachineState *machine)
     MemoryRegion *system_memory = get_system_memory();
     MachineClass *mc = MACHINE_GET_CLASS(machine);
     DeviceState *armv7m, *sccdev;
-    QList *oscclk;
     int i;
 
     if (strcmp(machine->cpu_type, mc->default_cpu_type) != 0) {
@@ -404,12 +402,10 @@ static void mps2_common_init(MachineState *machine)
     qdev_prop_set_uint32(sccdev, "scc-aid", 0x00200008);
     qdev_prop_set_uint32(sccdev, "scc-id", mmc->scc_id);
     /* All these FPGA images have the same OSCCLK configuration */
-    oscclk = qlist_new();
-    qlist_append_int(oscclk, 50000000);
-    qlist_append_int(oscclk, 24576000);
-    qlist_append_int(oscclk, 25000000);
-    qdev_prop_set_array(sccdev, "oscclk", oscclk);
+    qdev_prop_set_uint32(sccdev, "len-oscclk", 3);
+    qdev_prop_set_uint32(sccdev, "oscclk[0]", 50000000);
+    qdev_prop_set_uint32(sccdev, "oscclk[1]", 24576000);
+    qdev_prop_set_uint32(sccdev, "oscclk[2]", 25000000);
     sysbus_realize(SYS_BUS_DEVICE(&mms->scc), &error_fatal);
     sysbus_mmio_map(SYS_BUS_DEVICE(sccdev), 0, 0x4002f000);
 
     object_initialize_child(OBJECT(mms), "fpgaio",


@@ -48,7 +48,6 @@
 #include "hw/char/pl011.h"
 #include "hw/watchdog/sbsa_gwdt.h"
 #include "net/net.h"
-#include "qapi/qmp/qlist.h"
 #include "qom/object.h"
 
 #define RAMLIMIT_GB 8192
@@ -438,7 +437,6 @@ static void create_gic(SBSAMachineState *sms, MemoryRegion *mem)
     SysBusDevice *gicbusdev;
     const char *gictype;
     uint32_t redist0_capacity, redist0_count;
-    QList *redist_region_count;
     int i;
 
     gictype = gicv3_class_name();
@@ -457,9 +455,8 @@ static void create_gic(SBSAMachineState *sms, MemoryRegion *mem)
         sbsa_ref_memmap[SBSA_GIC_REDIST].size / GICV3_REDIST_SIZE;
     redist0_count = MIN(smp_cpus, redist0_capacity);
 
-    redist_region_count = qlist_new();
-    qlist_append_int(redist_region_count, redist0_count);
-    qdev_prop_set_array(sms->gic, "redist-region-count", redist_region_count);
+    qdev_prop_set_uint32(sms->gic, "len-redist-region-count", 1);
+    qdev_prop_set_uint32(sms->gic, "redist-region-count[0]", redist0_count);
 
     object_property_set_link(OBJECT(sms->gic), "sysmem",
                              OBJECT(mem), &error_fatal);


@@ -605,10 +605,6 @@ static AddressSpace *smmu_find_add_as(PCIBus *bus, void *opaque, int devfn)
     return &sdev->as;
 }
 
-static const PCIIOMMUOps smmu_ops = {
-    .get_address_space = smmu_find_add_as,
-};
-
 IOMMUMemoryRegion *smmu_iommu_mr(SMMUState *s, uint32_t sid)
 {
     uint8_t bus_n, devfn;
@@ -665,7 +661,7 @@ static void smmu_base_realize(DeviceState *dev, Error **errp)
     s->smmu_pcibus_by_busptr = g_hash_table_new(NULL, NULL);
     if (s->primary_bus) {
-        pci_setup_iommu(s->primary_bus, &smmu_ops, s);
+        pci_setup_iommu(s->primary_bus, smmu_find_add_as, s);
     } else {
         error_setg(errp, "SMMU is not attached to any PCI bus!");
     }


@@ -43,7 +43,6 @@
 #include "hw/cpu/a15mpcore.h"
 #include "hw/i2c/arm_sbcon_i2c.h"
 #include "hw/sd/sd.h"
-#include "qapi/qmp/qlist.h"
 #include "qom/object.h"
 #include "audio/audio.h"
@@ -178,6 +177,7 @@ struct VexpressMachineState {
     MemoryRegion vram;
     MemoryRegion sram;
     MemoryRegion flashalias;
+    MemoryRegion lowram;
     MemoryRegion a15sram;
     bool secure;
     bool virt;
@@ -276,6 +276,7 @@ static void a9_daughterboard_init(VexpressMachineState *vms,
 {
     MachineState *machine = MACHINE(vms);
     MemoryRegion *sysmem = get_system_memory();
+    ram_addr_t low_ram_size;
 
     if (ram_size > 0x40000000) {
         /* 1GB is the maximum the address space permits */
@@ -283,11 +284,17 @@ static void a9_daughterboard_init(VexpressMachineState *vms,
         exit(1);
     }
 
-    /*
-     * RAM is from 0x60000000 upwards. The bottom 64MB of the
-     * address space should in theory be remappable to various
-     * things including ROM or RAM; we always map the flash there.
-     */
+    low_ram_size = ram_size;
+    if (low_ram_size > 0x4000000) {
+        low_ram_size = 0x4000000;
+    }
+    /* RAM is from 0x60000000 upwards. The bottom 64MB of the
+     * address space should in theory be remappable to various
+     * things including ROM or RAM; we always map the RAM there.
+     */
+    memory_region_init_alias(&vms->lowram, NULL, "vexpress.lowmem",
+                             machine->ram, 0, low_ram_size);
+    memory_region_add_subregion(sysmem, 0x0, &vms->lowram);
     memory_region_add_subregion(sysmem, 0x60000000, machine->ram);
 
     /* 0x1e000000 A9MPCore (SCU) private memory region */
@@ -545,7 +552,6 @@ static void vexpress_common_init(MachineState *machine)
     ram_addr_t vram_size, sram_size;
     MemoryRegion *sysmem = get_system_memory();
     const hwaddr *map = daughterboard->motherboard_map;
-    QList *db_voltage, *db_clock;
     int i;
 
     daughterboard->init(vms, machine->ram_size, machine->cpu_type, pic);
@@ -586,19 +592,20 @@ static void vexpress_common_init(MachineState *machine)
     sysctl = qdev_new("realview_sysctl");
     qdev_prop_set_uint32(sysctl, "sys_id", sys_id);
     qdev_prop_set_uint32(sysctl, "proc_id", daughterboard->proc_id);
-
-    db_voltage = qlist_new();
+    qdev_prop_set_uint32(sysctl, "len-db-voltage",
+                         daughterboard->num_voltage_sensors);
     for (i = 0; i < daughterboard->num_voltage_sensors; i++) {
-        qlist_append_int(db_voltage, daughterboard->voltages[i]);
+        char *propname = g_strdup_printf("db-voltage[%d]", i);
+        qdev_prop_set_uint32(sysctl, propname, daughterboard->voltages[i]);
+        g_free(propname);
     }
-    qdev_prop_set_array(sysctl, "db-voltage", db_voltage);
-
-    db_clock = qlist_new();
+    qdev_prop_set_uint32(sysctl, "len-db-clock",
+                         daughterboard->num_clocks);
     for (i = 0; i < daughterboard->num_clocks; i++) {
-        qlist_append_int(db_clock, daughterboard->clocks[i]);
+        char *propname = g_strdup_printf("db-clock[%d]", i);
+        qdev_prop_set_uint32(sysctl, propname, daughterboard->clocks[i]);
+        g_free(propname);
     }
-    qdev_prop_set_array(sysctl, "db-clock", db_clock);
     sysbus_realize_and_unref(SYS_BUS_DEVICE(sysctl), &error_fatal);
     sysbus_mmio_map(SYS_BUS_DEVICE(sysctl), 0, map[VE_SYSREGS]);


@@ -482,7 +482,7 @@ build_spcr(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
     build_append_int_noprefix(table_data, 3, 1); /* ARM PL011 UART */
     build_append_int_noprefix(table_data, 0, 3); /* Reserved */
     /* Base Address */
-    build_append_gas(table_data, AML_AS_SYSTEM_MEMORY, 32, 0, 3,
+    build_append_gas(table_data, AML_AS_SYSTEM_MEMORY, 8, 0, 1,
                      vms->memmap[VIRT_UART].base);
     /* Interrupt Type */
     build_append_int_noprefix(table_data,
@@ -673,7 +673,7 @@ build_dbg2(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
     build_append_int_noprefix(table_data, 34, 2);
     /* BaseAddressRegister[] */
-    build_append_gas(table_data, AML_AS_SYSTEM_MEMORY, 32, 0, 3,
+    build_append_gas(table_data, AML_AS_SYSTEM_MEMORY, 8, 0, 1,
                      vms->memmap[VIRT_UART].base);
     /* AddressSize[] */


@@ -69,7 +69,6 @@
 #include "hw/firmware/smbios.h"
 #include "qapi/visitor.h"
 #include "qapi/qapi-visit-common.h"
-#include "qapi/qmp/qlist.h"
 #include "standard-headers/linux/input.h"
 #include "hw/arm/smmuv3.h"
 #include "hw/acpi/acpi.h"
@@ -632,8 +631,7 @@ static void fdt_add_pmu_nodes(const VirtMachineState *vms)
         qemu_fdt_setprop(ms->fdt, "/pmu", "compatible",
                          compat, sizeof(compat));
         qemu_fdt_setprop_cells(ms->fdt, "/pmu", "interrupts",
-                               GIC_FDT_IRQ_TYPE_PPI,
-                               INTID_TO_PPI(VIRTUAL_PMU_IRQ), irqflags);
+                               GIC_FDT_IRQ_TYPE_PPI, VIRTUAL_PMU_IRQ, irqflags);
     }
 }
@@ -753,23 +751,14 @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
     }
 
     if (vms->gic_version != VIRT_GIC_VERSION_2) {
-        QList *redist_region_count;
         uint32_t redist0_capacity = virt_redist_capacity(vms, VIRT_GIC_REDIST);
         uint32_t redist0_count = MIN(smp_cpus, redist0_capacity);
 
         nb_redist_regions = virt_gicv3_redist_region_count(vms);
-
-        redist_region_count = qlist_new();
-        qlist_append_int(redist_region_count, redist0_count);
-        if (nb_redist_regions == 2) {
-            uint32_t redist1_capacity =
-                virt_redist_capacity(vms, VIRT_HIGH_GIC_REDIST2);
-            qlist_append_int(redist_region_count,
-                             MIN(smp_cpus - redist0_count, redist1_capacity));
-        }
-        qdev_prop_set_array(vms->gic, "redist-region-count",
-                            redist_region_count);
+        qdev_prop_set_uint32(vms->gic, "len-redist-region-count",
+                             nb_redist_regions);
+        qdev_prop_set_uint32(vms->gic, "redist-region-count[0]", redist0_count);
 
         if (!kvm_irqchip_in_kernel()) {
             if (vms->tcg_its) {
@@ -778,6 +767,14 @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
                 qdev_prop_set_bit(vms->gic, "has-lpi", true);
             }
         }
+
+        if (nb_redist_regions == 2) {
+            uint32_t redist1_capacity =
+                virt_redist_capacity(vms, VIRT_HIGH_GIC_REDIST2);
+            qdev_prop_set_uint32(vms->gic, "redist-region-count[1]",
+                                 MIN(smp_cpus - redist0_count, redist1_capacity));
+        }
     } else {
         if (!kvm_irqchip_in_kernel()) {
             qdev_prop_set_bit(vms->gic, "has-virtualization-extensions",
@@ -2750,7 +2747,6 @@ static void virt_machine_device_pre_plug_cb(HotplugHandler *hotplug_dev,
         virtio_md_pci_pre_plug(VIRTIO_MD_PCI(dev), MACHINE(hotplug_dev), errp);
     } else if (object_dynamic_cast(OBJECT(dev), TYPE_VIRTIO_IOMMU_PCI)) {
         hwaddr db_start = 0, db_end = 0;
-        QList *reserved_regions;
         char *resv_prop_str;
 
         if (vms->iommu != VIRT_IOMMU_NONE) {
@@ -2777,9 +2773,9 @@ static void virt_machine_device_pre_plug_cb(HotplugHandler *hotplug_dev,
                                             db_start, db_end,
                                             VIRTIO_IOMMU_RESV_MEM_T_MSI);
-            reserved_regions = qlist_new();
-            qlist_append_str(reserved_regions, resv_prop_str);
-            qdev_prop_set_array(dev, "reserved-regions", reserved_regions);
+            object_property_set_uint(OBJECT(dev), "len-reserved-regions", 1, errp);
+            object_property_set_str(OBJECT(dev), "reserved-regions[0]",
+                                    resv_prop_str, errp);
             g_free(resv_prop_str);
         }
     }

Some files were not shown because too many files have changed in this diff.