Compare commits

..

46 Commits

Author SHA1 Message Date
Fabiano Rosas
981624a51b tests/qtest: Add a test for migration with direct-io and multifd
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:56 -03:00
Fabiano Rosas
a401debe32 migration: Add direct-io parameter
Add the direct-io migration parameter that tells the migration code to
use O_DIRECT when opening the migration stream file whenever possible.

This is currently only used for the secondary channels of fixed-ram
migration, which can guarantee that writes are page aligned.

However the parameter could be made to affect other types of
file-based migrations in the future.
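
Usage on HMP would look like (illustrative; assuming the parameter is
exposed like other boolean migration parameters):

(qemu) migrate_set_capability multifd on
(qemu) migrate_set_capability fixed-ram on
(qemu) migrate_set_parameter direct-io on
(qemu) migrate file:migfile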

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:55 -03:00
Fabiano Rosas
1bdcab53b0 tests/qtest: Add a multifd + fixed-ram migration test
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:55 -03:00
Fabiano Rosas
4439e376a2 migration/multifd: Support incoming fixed-ram stream format
For the incoming fixed-ram migration we need to read the ramblock
headers, get the pages bitmap and send the host address of each
non-zero page to the multifd channel thread for writing.

To read from the migration file we need a preadv function that can
read into the iovs in segments of contiguous pages because (as in the
writing case) the file offset applies to the entire iovec.
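
A rough sketch of the per-ramblock receive loop, with hypothetical
helper names:

    /* Walk the pages bitmap and hand every written page to a multifd
     * thread, which reads it from its fixed offset in the file. */
    for (bit = find_first_bit(bitmap, nbits); bit < nbits;
         bit = find_next_bit(bitmap, nbits, bit + 1)) {
        multifd_recv_queue_page(block, (ram_addr_t)bit << TARGET_PAGE_BITS);
    }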

Usage on HMP is:

(qemu) migrate_set_capability multifd on
(qemu) migrate_set_capability fixed-ram on
(qemu) migrate_set_parameter max-bandwidth 0
(qemu) migrate_set_parameter multifd-channels 8
(qemu) migrate_incoming file:migfile
(qemu) info status
(qemu) c

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:55 -03:00
Fabiano Rosas
27ec0aea2b migration/multifd: Support outgoing fixed-ram stream format
The new fixed-ram stream format uses a file transport and puts ram
pages in the migration file at their respective offsets. Writing can be
done in parallel by using the pwritev system call, which takes iovecs
and an offset.

Add support for enabling the new format along with multifd to make use
of the threading and page handling already in place.

This requires multifd to stop sending headers and leave the stream
format to the fixed-ram code. When it comes time to write the data, we
need to call a version of qio_channel_write that can take an offset.

Usage on HMP is:

(qemu) stop
(qemu) migrate_set_capability multifd on
(qemu) migrate_set_capability fixed-ram on
(qemu) migrate_set_parameter max-bandwidth 0
(qemu) migrate_set_parameter multifd-channels 8
(qemu) migrate file:migfile

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:55 -03:00
Fabiano Rosas
216f1c0799 migration/ram: Ignore multifd flush when doing fixed-ram migration
Some functionalities of multifd are incompatible with the 'fixed-ram'
migration format.

The MULTIFD_FLUSH flag in particular is not used because in fixed-ram
there is no synchronicity between migration source and destination, so
there is no need for a sync packet. In fact, fixed-ram disables
packets in multifd as a whole.

Make sure RAM_SAVE_FLAG_MULTIFD_FLUSH is never emitted when fixed-ram
is enabled.
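
A sketch of the guard at the point where the flag would be emitted
(the fixed-ram helper name is an assumption):

    if (migrate_multifd() && !migrate_fixed_ram()) {
        qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_FLUSH);
    }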

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:55 -03:00
Fabiano Rosas
7bfc72b82c migration/ram: Add a wrapper for fixed-ram shadow bitmap
We'll need to set the shadow_bmap bits from outside ram.c soon and
TARGET_PAGE_BITS is poisoned, so add a wrapper to it.
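
A minimal sketch of such a wrapper (name and exact form are assumptions):

    /* Lives in ram.c, where TARGET_PAGE_BITS is usable, so callers in
     * target-independent code don't have to touch it. */
    void ramblock_set_shadow_bmap(RAMBlock *block, ram_addr_t offset)
    {
        set_bit(offset >> TARGET_PAGE_BITS, block->shadow_bmap);
    }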

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:55 -03:00
Fabiano Rosas
296b174351 io: Add a pwritev/preadv version that takes a discontiguous iovec
For the upcoming support to fixed-ram migration with multifd, we need
to be able to accept an iovec array with non-contiguous data.

Add a pwritev and preadv version that splits the array into contiguous
segments before writing. With that we can have the ram code continue
to add pages in any order and the multifd code continue to send large
arrays for reading and writing.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
Since iovs can be non-contiguous, we'd need a separate array on the
side to carry an extra file offset for each of them, so I'm relying on
the fact that iovs are all within a same host page and passing in an
encoded offset that takes the host page into account.
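
A rough sketch of the splitting idea (standalone, with assumed names;
the real code plugs into the QIOChannel layer):

    /* Write each run of memory-contiguous iovs with one pwritev() call,
     * keeping every run at its own offset in the file. */
    static ssize_t pwritev_split(int fd, const struct iovec *iov,
                                 size_t niov, off_t base)
    {
        ssize_t done = 0;
        size_t i = 0;

        while (i < niov) {
            size_t j = i + 1;
            size_t len = iov[i].iov_len;

            while (j < niov &&
                   (char *)iov[j].iov_base == (char *)iov[i].iov_base + len) {
                len += iov[j].iov_len;
                j++;
            }

            /* Assume a run's file offset mirrors its offset in memory
             * relative to the first iov. */
            off_t off = base + ((char *)iov[i].iov_base -
                                (char *)iov[0].iov_base);
            ssize_t ret = pwritev(fd, &iov[i], j - i, off);
            if (ret < 0) {
                return ret;
            }
            done += ret;
            i = j;
        }
        return done;
    }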
2023-10-10 16:03:55 -03:00
Fabiano Rosas
bd006aa37a migration/multifd: Add pages to the receiving side
Currently multifd does not need to have knowledge of pages on the
receiving side because all the information needed is within the
packets that come in the stream.

We're about to add support to fixed-ram migration, which cannot use
packets because it expects the ramblock section in the migration file
to contain only the guest pages data.

Add a pointer to MultiFDPages in the multifd_recv_state and use the
pages similarly to what we already do on the sending side. The pages
are used to transfer data between the ram migration code in the main
migration thread and the multifd receiving threads.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:55 -03:00
Fabiano Rosas
3d66673424 migration/multifd: Add incoming QIOChannelFile support
On the receiving side we don't need to differentiate between main
channel and threads, so whichever channel is defined first gets to be
the main one. And since there are no packets, use the atomic channel
count to index into the params array.
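
A minimal sketch of that indexing:

    /* No packets means no channel id on the wire; take the next free
     * slot as each channel connects. */
    id = qatomic_read(&multifd_recv_state->count);
    p = &multifd_recv_state->params[id];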

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:55 -03:00
Fabiano Rosas
4c84a5b854 migration/multifd: Add outgoing QIOChannelFile support
Allow multifd to open file-backed channels. This will be used when
enabling the fixed-ram migration stream format which expects a
seekable transport.

The QIOChannel read and write methods will use the preadv/pwritev
versions which don't update the file offset at each call so we can
reuse the fd without re-opening for every channel.

Note that this is just setup code and multifd cannot yet make use of
the file channels.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:55 -03:00
Fabiano Rosas
ac0f5a5c61 migration/multifd: Allow multifd without packets
For the upcoming support to the new 'fixed-ram' migration stream
format, we cannot use multifd packets because each write into the
ramblock section in the migration file is expected to contain only the
guest pages. They are written at their respective offsets relative to
the ramblock section header.

There is no space for the packet information and the expected gains
from the new approach come partly from being able to write the pages
sequentially without extraneous data in between.

The new format also doesn't need the packets and all necessary
information can be taken from the standard migration headers with some
(future) changes to multifd code.

Use the presence of the fixed-ram capability to decide whether to send
packets. For now this has no effect as fixed-ram cannot yet be enabled
with multifd.
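
A one-line sketch of the decision (the field name is an assumption):

    p->use_packets = !migrate_fixed_ram();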

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:55 -03:00
Fabiano Rosas
7960c99f7d migration/multifd: Extract sem_done waiting into a function
This helps document the intent of the loop via the function name and
we can reuse this in the future.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:55 -03:00
Fabiano Rosas
e017872d55 migration/multifd: Decouple control flow from the SYNC packet
We currently have the sem_sync semaphore that is used:

1) on the sending side, to know when the multifd_send_thread has
   finished sending the MULTIFD_FLAG_SYNC packet;

  This is unnecessary. Multifd sends packets (not pages) one by one
  and completion is already bound by both the channels_ready and sem
  semaphores. The SYNC packet has nothing special that would require
  it to have a separate semaphore on the sending side.

2) on the receiving side, to know when the multifd_recv_thread has
   finished receiving the MULTIFD_FLAG_SYNC packet;

  This is unnecessary because the multifd_recv_state->sem_sync
  semaphore already does the same thing. We care that the SYNC arrived
  from the source, knowing that the SYNC has been received by the recv
  thread doesn't add anything.

3) on both sending and receiving sides, to wait for the multifd threads
   to finish before cleaning up;

   This happens because multifd_send_sync_main() blocks
   ram_save_complete() from finishing until the semaphore is
   posted. This is surprising and not documented.

Clarify the above situation by renaming 'sem_sync' to 'sem_done' and
making the #3 usage the main one. Stop tracking the SYNC packet on
source (#1) and leave multifd_recv_state->sem_sync untouched on the
destination (#2).

Due to the 'channels_ready' and 'sem' semaphores, we always send
packets in lockstep with switching MultiFDSendParams, so
p->pending_job is always either 1 or 0. The thread has no knowledge of
whether it will have more to send once it posts to
channels_ready. Send it on an extra loop so it sees no pending_job and
releases the semaphore.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:55 -03:00
Fabiano Rosas
1ed6e0fea1 migration/multifd: Move channels_ready semaphore
Commit d2026ee117 ("multifd: Fix the number of channels ready") moved
the "post" of channels_ready to the start of the multifd_send_thread()
loop and added a missing "wait" at multifd_send_sync_main(). While it
does work, the placement of the wait goes against what the rest of the
code does.

The sequence at multifd_send_thread() is:

    qemu_sem_post(&multifd_send_state->channels_ready);
    qemu_sem_wait(&p->sem);
    <work>
    if (flags & MULTIFD_FLAG_SYNC) {
        qemu_sem_post(&p->sem_sync);
    }

Which means that the sending thread makes itself available
(channels_ready) and waits for more work (sem). So the sequence in the
migration thread should be to check if any channel is available
(channels_ready), give it some work and set it off (sem):

    qemu_sem_wait(&multifd_send_state->channels_ready);
    <enqueue work>
    qemu_sem_post(&p->sem);
    if (flags & MULTIFD_FLAG_SYNC) {
        qemu_sem_wait(&p->sem_sync);
    }

The reason there's no deadlock today is that the migration thread
enqueues the SYNC packet right before the wait on channels_ready and
we end up taking advantage of the out-of-order post to sem:

        ...
        qemu_sem_post(&p->sem);
    }
    for (i = 0; i < migrate_multifd_channels(); i++) {
        MultiFDSendParams *p = &multifd_send_state->params[i];

        qemu_sem_wait(&multifd_send_state->channels_ready);
        trace_multifd_send_sync_main_wait(p->id);
        qemu_sem_wait(&p->sem_sync);
	...

Move the channels_ready wait before the sem post to keep the sequence
consistent. Also fix the error path to post to channels_ready and
sem_sync in the correct order.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:55 -03:00
Fabiano Rosas
5c256aa813 migration/multifd: Remove direct "socket" references
We're about to enable support for other transports in multifd, so
remove direct references to sockets.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:55 -03:00
Fabiano Rosas
a7c11aaf8f migration: Add completion tracepoint
Add a completion tracepoint that provides basic stats for
debugging. It displays throughput (MB/s and pages/s) and total time (ms).

Usage:
  $QEMU ... -trace migration_status

Output:
  migration_status 1506 MB/s, 436725 pages/s, 8698 ms

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:55 -03:00
Nikolay Borisov
6b7fc46f44 tests/qtest: migration-test: Add tests for fixed-ram file-based migration
Add basic tests for 'fixed-ram' migration.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:55 -03:00
Nikolay Borisov
2e9b519644 migration/ram: Add support for 'fixed-ram' migration restore
Add the necessary code to parse the format changes for the 'fixed-ram'
capability.

One of the more notable changes in behavior is that in the 'fixed-ram'
case ram pages are restored in one go rather than constantly looping
through the migration stream.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
(farosas) reused more of the common code by making the fixed-ram
function take only one ramblock and calling it from inside
parse_ramblock.
2023-10-10 16:03:55 -03:00
Nikolay Borisov
3f8359b3aa migration/ram: Add support for 'fixed-ram' outgoing migration
Implement the outgoing migration side for the 'fixed-ram' capability.

A bitmap is introduced to track which pages have been written in the
migration file. Pages are written at a fixed location for every
ramblock. Zero pages are ignored as they'd be zero on the destination
as well.

The migration stream is altered to put the dirty pages for a ramblock
after its header instead of having a sequential stream of pages that
follow the ramblock headers. Since all pages have a fixed location,
RAM_SAVE_FLAG_EOS is no longer generated on every migration iteration.

Without fixed-ram (current):

ramblock 1 header|ramblock 2 header|...|RAM_SAVE_FLAG_EOS|stream of
 pages (iter 1)|RAM_SAVE_FLAG_EOS|stream of pages (iter 2)|...

With fixed-ram (new):

ramblock 1 header|ramblock 1 fixed-ram header|ramblock 1 pages (fixed
 offsets)|ramblock 2 header|ramblock 2 fixed-ram header|ramblock 2
 pages (fixed offsets)|...|RAM_SAVE_FLAG_EOS

where:
 - ramblock header: the generic information for a ramblock, such as
   idstr, used_len, etc.

 - ramblock fixed-ram header: the new information added by this
   feature: bitmap of pages written, bitmap size and offset of pages
   in the migration file.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:55 -03:00
Fabiano Rosas
733c2311f9 migration/ram: Introduce 'fixed-ram' migration capability
Add a new migration capability 'fixed-ram'.

The core of the feature is to ensure that each ram page has a specific
offset in the resulting migration stream. The reasons why we'd want
such behavior are twofold:

 - When doing a 'fixed-ram' migration the resulting file will have a
   bounded size, since pages which are dirtied multiple times will
   always go to a fixed location in the file, rather than constantly
   being added to a sequential stream. This eliminates cases where a vm
   with, say, 1G of ram can result in a migration file that's 10s of
   GBs, provided that the workload constantly redirties memory.

 - It paves the way to implement DIRECT_IO-enabled save/restore of the
   migration stream as the pages are ensured to be written at aligned
   offsets.

For now, enabling the capability has no effect. The next couple of
patches implement the core functionality.
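
Enabling it on HMP (illustrative):

(qemu) migrate_set_capability fixed-ram on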

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:52 -03:00
Fabiano Rosas
8b424732bd migration: fixed-ram: Add URI compatibility check
The fixed-ram migration format needs a channel that supports seeking
to be able to write each page to an arbitrary offset in the migration
stream.
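
A sketch of the check (helper names assumed):

    if (migrate_fixed_ram() &&
        !qio_channel_has_feature(ioc, QIO_CHANNEL_FEATURE_SEEKABLE)) {
        error_setg(errp, "fixed-ram migration requires a seekable channel");
    }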

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:52 -03:00
Nikolay Borisov
aeab71c90c migration/qemu-file: add utility methods for working with seekable channels
Add utility methods that will be needed when implementing the
'fixed-ram' migration capability.

qemu_file_is_seekable
qemu_put_buffer_at
qemu_get_buffer_at
qemu_set_offset
qemu_get_offset
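
Illustrative use of the buffer helpers (signatures assumed):

    /* Write a page at its fixed location, then read it back on the
     * destination from the same offset. */
    qemu_put_buffer_at(file, page, TARGET_PAGE_SIZE, offset);
    qemu_get_buffer_at(file, page, TARGET_PAGE_SIZE, offset);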

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
fixed total_transferred accounting

restructured to use qio_channel_file_preadv instead of the _full
variant
2023-10-10 16:03:52 -03:00
Nikolay Borisov
61e1dc63de io: implement io_pwritev/preadv for QIOChannelFile
The upcoming 'fixed-ram' feature will require qemu to write data to
(and restore from) specific offsets of the migration file.

Add a minimal implementation of pwritev/preadv and expose them via the
io_pwritev and io_preadv interfaces.
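
A minimal sketch of the write side (error handling trimmed):

    static ssize_t qio_channel_file_pwritev(QIOChannel *ioc,
                                            const struct iovec *iov,
                                            size_t niov, off_t offset,
                                            Error **errp)
    {
        QIOChannelFile *fioc = QIO_CHANNEL_FILE(ioc);
        ssize_t ret = pwritev(fioc->fd, iov, niov, offset);

        if (ret < 0) {
            error_setg_errno(errp, errno, "Unable to write to file");
        }
        return ret;
    }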

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2023-10-10 16:03:52 -03:00
Nikolay Borisov
f9e7e60b36 io: Add generic pwritev/preadv interface
Introduce basic pwritev/preadv support in the generic channel layer.
A specific implementation will follow for the file channel, as this is
required in order to support migration streams with a fixed location
for each ram page.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:52 -03:00
Nikolay Borisov
082acdd3b8 io: add and implement QIO_CHANNEL_FEATURE_SEEKABLE for channel file
Add a generic QIOChannel feature SEEKABLE which will be used by the
qemu_file* APIs. For the time being this will only be implemented for
file channels.
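
A sketch of how the feature might be probed when a file channel is
created:

    /* Pipes and character devices are not seekable even though they
     * come in as fds; check before advertising the feature. */
    if (lseek(fioc->fd, 0, SEEK_CUR) != (off_t)-1) {
        qio_channel_set_feature(QIO_CHANNEL(fioc),
                                QIO_CHANNEL_FEATURE_SEEKABLE);
    }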

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2023-10-10 16:03:52 -03:00
Fabiano Rosas
bdd2f44ca6 tests/qtest: File migration auto-pause tests
Adapt the file migration tests to take into account the auto-pause
feature.

The test currently has a flag 'stop_src' that is used to know if the
test itself should stop the VM. Add a new flag 'auto_pause' to let
QEMU stop the VM instead. The two in combination allow us to
migrate an already stopped VM and check that it is still stopped on the
destination (auto-pause in effect restoring the original state).

By adding a more precise tracking of migration state changes, we can
also make sure that auto-pause is actually stopping the VM right after
qmp_migrate(), as opposed to the vm_stop() that happens at
migration_complete().

When resuming the destination a similar situation occurs: we use
'stop_src' to have a stopped VM and check that the destination does
not get a "resume" event.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:33 -03:00
Fabiano Rosas
7d246857f2 migration: Run "file:" migration with a stopped VM
The file migration is asynchronous, so it benefits from being done
with a stopped VM. Allow the file migration to take advantage of the
auto-pause capability.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:33 -03:00
Fabiano Rosas
f31d26ed67 migration: Add auto-pause capability
Add a capability that allows the management layer to delegate to QEMU
the decision of whether to pause a VM and perform a non-live
migration. Depending on the type of migration being performed, this
could bring performance benefits.

Note that the capability is enabled by default but at this moment no
migration scheme is making use of it.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:33 -03:00
Fabiano Rosas
1a69ed8fb1 migration: Introduce global_state_store_once
There are some situations during migration when we want to change the
runstate of the VM, but don't actually want the new runstate to be put
on the wire to be restored on the destination VM. In those cases, the
pattern is to use global_state_store() to save the state for migration
before changing it.

One scenario where this happens is when switching the source VM into
the FINISH_MIGRATE state. This state only makes sense on the source
VM. Another situation is when pausing the source VM prior to migration
completion.

We are about to introduce a third scenario when the whole migration
should be performed with a paused VM. In this case we will want to
save the VM runstate at the very start of the migration and that state
will be the one restored on the destination regardless of all the
runstate changes that happen in between.

To achieve that we need to make sure that the other two calls to
global_state_store() do not overwrite the state that is to be
migrated.

Introduce a version of global_state_store() that only saves the state
if no other state has already been saved.
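
A sketch of the idea (the guard flag is an assumption):

    void global_state_store_once(void)
    {
        if (global_state.saved) {
            return;
        }
        global_state.saved = true;
        global_state_store();
    }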

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:33 -03:00
Fabiano Rosas
322a7e0b68 migration: Return the saved state from global_state_store
There is a pattern of calling runstate_get() to store the current
runstate and calling global_state_store() to save the current runstate
for migration. Since global_state_store() also calls runstate_get(),
make it return the runstate instead.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:33 -03:00
Fabiano Rosas
d04ea45102 tests/qtest: Allow waiting for migration events
Add support for waiting for a migration state change event to
happen. This can help disambiguate between runstate changes that
happen during VM lifecycle.

Specifically, the next couple of patches want to know whether STOP
events happened at the migration start or end. Add the "setup" and
"active" migration states for that purpose.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:32 -03:00
Fabiano Rosas
02b502d253 tests/qtest: Move QTestMigrationState to libqtest
Move the QTestMigrationState into QTestState so we don't have to pass
it around to the wait_for_* helpers anymore. Since QTestState is
private to libqtest.c, move the migration state struct to libqtest.h
and add a getter.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:32 -03:00
Steve Sistare
89767de55c tests/qtest: migration events
Define a state object to capture events seen by migration tests, to allow
more events to be captured in a subsequent patch, and simplify event
checking in wait_for_migration_pass.  No functional change.

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
2023-10-10 16:03:32 -03:00
Fabiano Rosas
61d4f74ecb tests/qtest: migration-test: Add tests for file-based migration
Add basic tests for file-based migration.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:32 -03:00
Fabiano Rosas
68eff73a4f tests/qtest: migration: Add support for negative testing of qmp_migrate
There is currently no way to write a test for errors that happen in
qmp_migrate before the migration has started.

Add a version of qmp_migrate that ensures an error happens. To make
use of it, a test needs to set MigrateCommon.result to
MIG_TEST_QMP_ERROR.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:32 -03:00
Fabiano Rosas
e2c4395aa5 migration: Set migration status early in incoming side
We are sending a migration event of MIGRATION_STATUS_SETUP at
qemu_start_incoming_migration but never actually setting the state.

This creates a window between qmp_migrate_incoming and
process_incoming_migration_co where the migration status is still
MIGRATION_STATUS_NONE. Calling query-migrate during this time will
return an empty response even though the incoming migration command
has already been issued.

Commit 7cf1fe6d68 ("migration: Add migration events on target side")
has added support to the 'events' capability to the incoming part of
migration, but chose to send the SETUP event without setting the
state. I'm assuming this was a mistake.

This introduces a change in behavior: any QMP client waiting for the
SETUP event will hang, unless it has previously enabled the 'events'
capability. Having the capability enabled is sufficient to continue to
receive the event.
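
A sketch of the fix, using the usual state-change helper:

    migrate_set_state(&mis->state, MIGRATION_STATUS_NONE,
                      MIGRATION_STATUS_SETUP);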

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:32 -03:00
Fabiano Rosas
905af99677 tests/qtest: migration: Use migrate_incoming_qmp where appropriate
Use the new migrate_incoming_qmp helper in the places that currently
open-code calling migrate-incoming.

Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:32 -03:00
Fabiano Rosas
b1d2461984 tests/qtest: migration: Add migrate_incoming_qmp helper
File-based migration requires the target to initiate its migration after
the source has finished writing out the data in the file. Currently
there's no easy way to initiate 'migrate-incoming'; allow this by
introducing a migrate_incoming_qmp helper, similar to migrate_qmp.

Also make sure migration events are enabled and wait for the incoming
migration to start before returning. This avoids a race when querying
the migration status too soon after issuing the command.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:32 -03:00
Fabiano Rosas
948cb60563 tests/qtest: migration: Expose migrate_set_capability
The following patch will make use of this function from within
migrate-helpers.c, so move it there.

Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 16:03:32 -03:00
Fabiano Rosas
b00a856b9f migration/ram: Merge save_zero_page functions
We don't need to do this in two pieces. One single function makes it
easier to grasp, especially since it removes the indirection in the
return value handling.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 15:53:05 -03:00
Fabiano Rosas
e82c306e54 migration/ram: Move xbzrle zero page handling into save_zero_page
It makes a bit more sense to have the zero page handling of xbzrle
right where we save the zero page.

Also invert the exit condition to remove one level of indentation
which makes the next patch easier to grasp.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 15:53:05 -03:00
Fabiano Rosas
83ba1324df migration/ram: Stop passing QEMUFile around in save_zero_page
We don't need the QEMUFile when we're already passing the
PageSearchStatus.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 15:53:05 -03:00
Fabiano Rosas
0f2a1a3c30 migration/ram: Remove RAMState from xbzrle_cache_zero_page
'rs' is not used in that function. It's a leftover from commit
9360447d34 ("ram: Use MigrationStats for statistics").

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 15:53:05 -03:00
Nikolay Borisov
be29a05eb0 migration/ram: Refactor precopy ram loading code
Extract the ramblock parsing code into a routine that operates on the
sequence of headers from the stream and another that parses the
individual ramblock. This makes ram_load_precopy() easier to
comprehend.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 15:44:15 -03:00
Fabiano Rosas
a39011878e tests/qtest: Re-enable multifd cancel test
We've found the source of flakiness in this test, so re-enable it.

Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-10-10 14:51:14 -03:00
1351 changed files with 29750 additions and 58374 deletions

F: include/hw/virtio/virtio-mem.h F: include/hw/virtio/virtio-mem.h
virtio-snd
M: Gerd Hoffmann <kraxel@redhat.com>
R: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
S: Supported
F: hw/audio/virtio-snd.c
F: hw/audio/virtio-snd-pci.c
F: include/hw/audio/virtio-snd.h
F: docs/system/devices/virtio-snd.rst
nvme nvme
M: Keith Busch <kbusch@kernel.org> M: Keith Busch <kbusch@kernel.org>
M: Klaus Jensen <its@irrelevant.dk> M: Klaus Jensen <its@irrelevant.dk>
@@ -2374,7 +2296,6 @@ S: Maintained
F: hw/net/vmxnet* F: hw/net/vmxnet*
F: hw/scsi/vmw_pvscsi* F: hw/scsi/vmw_pvscsi*
F: tests/qtest/vmxnet3-test.c F: tests/qtest/vmxnet3-test.c
F: docs/specs/vwm_pvscsi-spec.rst
Rocker Rocker
M: Jiri Pirko <jiri@resnulli.us> M: Jiri Pirko <jiri@resnulli.us>
@@ -2459,7 +2380,7 @@ S: Orphan
R: Ani Sinha <ani@anisinha.ca> R: Ani Sinha <ani@anisinha.ca>
F: hw/acpi/vmgenid.c F: hw/acpi/vmgenid.c
F: include/hw/acpi/vmgenid.h F: include/hw/acpi/vmgenid.h
F: docs/specs/vmgenid.rst F: docs/specs/vmgenid.txt
F: tests/qtest/vmgenid-test.c F: tests/qtest/vmgenid-test.c
LED LED
@@ -2491,7 +2412,6 @@ F: hw/display/vga*
F: hw/display/bochs-display.c F: hw/display/bochs-display.c
F: include/hw/display/vga.h F: include/hw/display/vga.h
F: include/hw/display/bochs-vbe.h F: include/hw/display/bochs-vbe.h
F: docs/specs/standard-vga.rst
ramfb ramfb
M: Gerd Hoffmann <kraxel@redhat.com> M: Gerd Hoffmann <kraxel@redhat.com>
@@ -2505,7 +2425,6 @@ S: Odd Fixes
F: hw/display/virtio-gpu* F: hw/display/virtio-gpu*
F: hw/display/virtio-vga.* F: hw/display/virtio-vga.*
F: include/hw/virtio/virtio-gpu.h F: include/hw/virtio/virtio-gpu.h
F: docs/system/devices/virtio-gpu.rst
vhost-user-blk vhost-user-blk
M: Raphael Norwitz <raphael.norwitz@nutanix.com> M: Raphael Norwitz <raphael.norwitz@nutanix.com>
@@ -2546,18 +2465,9 @@ PIIX4 South Bridge (i82371AB)
M: Hervé Poussineau <hpoussin@reactos.org> M: Hervé Poussineau <hpoussin@reactos.org>
M: Philippe Mathieu-Daudé <philmd@linaro.org> M: Philippe Mathieu-Daudé <philmd@linaro.org>
S: Maintained S: Maintained
F: hw/isa/piix.c F: hw/isa/piix4.c
F: include/hw/southbridge/piix.h F: include/hw/southbridge/piix.h
VIA South Bridges (VT82C686B, VT8231)
M: BALATON Zoltan <balaton@eik.bme.hu>
M: Philippe Mathieu-Daudé <philmd@linaro.org>
R: Jiaxun Yang <jiaxun.yang@flygoat.com>
S: Maintained
F: hw/isa/vt82c686.c
F: hw/usb/vt82c686-uhci-pci.c
F: include/hw/isa/vt82c686.h
Firmware configuration (fw_cfg) Firmware configuration (fw_cfg)
M: Philippe Mathieu-Daudé <philmd@linaro.org> M: Philippe Mathieu-Daudé <philmd@linaro.org>
R: Gerd Hoffmann <kraxel@redhat.com> R: Gerd Hoffmann <kraxel@redhat.com>
@@ -2608,7 +2518,6 @@ W: https://canbus.pages.fel.cvut.cz/
F: net/can/* F: net/can/*
F: hw/net/can/* F: hw/net/can/*
F: include/net/can_*.h F: include/net/can_*.h
F: docs/system/devices/can.rst
OpenPIC interrupt controller OpenPIC interrupt controller
M: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> M: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
@@ -2652,7 +2561,7 @@ M: Halil Pasic <pasic@linux.ibm.com>
M: Christian Borntraeger <borntraeger@linux.ibm.com> M: Christian Borntraeger <borntraeger@linux.ibm.com>
S: Supported S: Supported
F: hw/s390x/storage-keys.h F: hw/s390x/storage-keys.h
F: hw/s390x/s390-skeys*.c F: hw/390x/s390-skeys*.c
L: qemu-s390x@nongnu.org L: qemu-s390x@nongnu.org
S390 storage attribute device S390 storage attribute device
@@ -2660,7 +2569,7 @@ M: Halil Pasic <pasic@linux.ibm.com>
M: Christian Borntraeger <borntraeger@linux.ibm.com> M: Christian Borntraeger <borntraeger@linux.ibm.com>
S: Supported S: Supported
F: hw/s390x/storage-attributes.h F: hw/s390x/storage-attributes.h
F: hw/s390x/s390-stattrib*.c F: hw/s390/s390-stattrib*.c
L: qemu-s390x@nongnu.org L: qemu-s390x@nongnu.org
S390 floating interrupt controller S390 floating interrupt controller
@@ -2680,14 +2589,6 @@ F: hw/usb/canokey.c
F: hw/usb/canokey.h F: hw/usb/canokey.h
F: docs/system/devices/canokey.rst F: docs/system/devices/canokey.rst
Hyper-V Dynamic Memory Protocol
M: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
S: Supported
F: hw/hyperv/hv-balloon*.c
F: hw/hyperv/hv-balloon*.h
F: include/hw/hyperv/dynmem-proto.h
F: include/hw/hyperv/hv-balloon.h
Subsystems Subsystems
---------- ----------
Overall Audio backends Overall Audio backends
@@ -2791,13 +2692,12 @@ S: Supported
F: util/async.c F: util/async.c
F: util/aio-*.c F: util/aio-*.c
F: util/aio-*.h F: util/aio-*.h
F: util/defer-call.c
F: util/fdmon-*.c F: util/fdmon-*.c
F: block/io.c F: block/io.c
F: block/plug.c
F: migration/block* F: migration/block*
F: include/block/aio.h F: include/block/aio.h
F: include/block/aio-wait.h F: include/block/aio-wait.h
F: include/qemu/defer-call.h
F: scripts/qemugdb/aio.py F: scripts/qemugdb/aio.py
F: tests/unit/test-fdmon-epoll.c F: tests/unit/test-fdmon-epoll.c
T: git https://github.com/stefanha/qemu.git block T: git https://github.com/stefanha/qemu.git block
@@ -2916,7 +2816,6 @@ F: include/sysemu/dump.h
F: qapi/dump.json F: qapi/dump.json
F: scripts/dump-guest-memory.py F: scripts/dump-guest-memory.py
F: stubs/dump.c F: stubs/dump.c
F: docs/specs/vmcoreinfo.rst
Error reporting Error reporting
M: Markus Armbruster <armbru@redhat.com> M: Markus Armbruster <armbru@redhat.com>
@@ -2942,8 +2841,8 @@ F: gdbstub/*
F: include/exec/gdbstub.h F: include/exec/gdbstub.h
F: include/gdbstub/* F: include/gdbstub/*
F: gdb-xml/ F: gdb-xml/
F: tests/tcg/multiarch/gdbstub/* F: tests/tcg/multiarch/gdbstub/
F: scripts/feature_to_c.py F: scripts/feature_to_c.sh
F: scripts/probe-gdb-support.py F: scripts/probe-gdb-support.py
Memory API Memory API
@@ -2977,7 +2876,6 @@ F: hw/mem/pc-dimm.c
F: include/hw/mem/memory-device.h F: include/hw/mem/memory-device.h
F: include/hw/mem/nvdimm.h F: include/hw/mem/nvdimm.h
F: include/hw/mem/pc-dimm.h F: include/hw/mem/pc-dimm.h
F: stubs/memory_device.c
F: docs/nvdimm.txt F: docs/nvdimm.txt
SPICE SPICE
@@ -3015,7 +2913,7 @@ F: include/qemu/main-loop.h
F: include/sysemu/runstate.h F: include/sysemu/runstate.h
F: include/sysemu/runstate-action.h F: include/sysemu/runstate-action.h
F: util/main-loop.c F: util/main-loop.c
F: util/qemu-timer*.c F: util/qemu-timer.c
F: system/vl.c F: system/vl.c
F: system/main.c F: system/main.c
F: system/cpus.c F: system/cpus.c
@@ -3164,11 +3062,10 @@ M: Michael Roth <michael.roth@amd.com>
M: Konstantin Kostiuk <kkostiuk@redhat.com> M: Konstantin Kostiuk <kkostiuk@redhat.com>
S: Maintained S: Maintained
F: qga/ F: qga/
F: contrib/systemd/qemu-guest-agent.service
F: docs/interop/qemu-ga.rst F: docs/interop/qemu-ga.rst
F: docs/interop/qemu-ga-ref.rst F: docs/interop/qemu-ga-ref.rst
F: scripts/qemu-guest-agent/ F: scripts/qemu-guest-agent/
F: tests/*/test-qga* F: tests/unit/test-qga.c
T: git https://github.com/mdroth/qemu.git qga T: git https://github.com/mdroth/qemu.git qga
QEMU Guest Agent Win32 QEMU Guest Agent Win32
@@ -3231,7 +3128,6 @@ M: Laurent Vivier <lvivier@redhat.com>
R: Paolo Bonzini <pbonzini@redhat.com> R: Paolo Bonzini <pbonzini@redhat.com>
S: Maintained S: Maintained
F: system/qtest.c F: system/qtest.c
F: include/sysemu/qtest.h
F: accel/qtest/ F: accel/qtest/
F: tests/qtest/ F: tests/qtest/
F: docs/devel/qgraph.rst F: docs/devel/qgraph.rst
@@ -3510,12 +3406,6 @@ M: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
S: Maintained S: Maintained
F: contrib/elf2dmp/ F: contrib/elf2dmp/
Overall sensors
M: Philippe Mathieu-Daudé <philmd@linaro.org>
S: Odd Fixes
F: hw/sensor
F: include/hw/sensor
I2C and SMBus I2C and SMBus
M: Corey Minyard <cminyard@mvista.com> M: Corey Minyard <cminyard@mvista.com>
S: Maintained S: Maintained
@@ -3681,7 +3571,7 @@ M: Alistair Francis <Alistair.Francis@wdc.com>
L: qemu-riscv@nongnu.org L: qemu-riscv@nongnu.org
S: Maintained S: Maintained
F: tcg/riscv/ F: tcg/riscv/
F: disas/riscv.[ch] F: disas/riscv.c
S390 TCG target S390 TCG target
M: Richard Henderson <richard.henderson@linaro.org> M: Richard Henderson <richard.henderson@linaro.org>
@@ -3953,7 +3843,7 @@ F: docs/block-replication.txt
PVRDMA PVRDMA
M: Yuval Shaia <yuval.shaia.ml@gmail.com> M: Yuval Shaia <yuval.shaia.ml@gmail.com>
M: Marcel Apfelbaum <marcel.apfelbaum@gmail.com> M: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
S: Odd Fixes S: Maintained
F: hw/rdma/* F: hw/rdma/*
F: hw/rdma/vmw/* F: hw/rdma/vmw/*
F: docs/pvrdma.txt F: docs/pvrdma.txt
@@ -4001,7 +3891,6 @@ M: Jason Wang <jasowang@redhat.com>
R: Andrew Melnychenko <andrew@daynix.com> R: Andrew Melnychenko <andrew@daynix.com>
R: Yuri Benditovich <yuri.benditovich@daynix.com> R: Yuri Benditovich <yuri.benditovich@daynix.com>
S: Maintained S: Maintained
F: docs/devel/ebpf_rss.rst
F: ebpf/* F: ebpf/*
F: tools/ebpf/* F: tools/ebpf/*
@@ -4018,7 +3907,6 @@ F: .github/workflows/lockdown.yml
F: .gitlab-ci.yml F: .gitlab-ci.yml
F: .gitlab-ci.d/ F: .gitlab-ci.d/
F: .travis.yml F: .travis.yml
F: docs/devel/ci*
F: scripts/ci/ F: scripts/ci/
F: tests/docker/ F: tests/docker/
F: tests/vm/ F: tests/vm/
@@ -4078,7 +3966,7 @@ F: gitdm.config
F: contrib/gitdm/* F: contrib/gitdm/*
Incompatible changes Incompatible changes
R: devel@lists.libvirt.org R: libvir-list@redhat.com
F: docs/about/deprecated.rst F: docs/about/deprecated.rst
Build System Build System


@@ -283,13 +283,6 @@ include $(SRC_PATH)/tests/vm/Makefile.include
print-help-run = printf " %-30s - %s\\n" "$1" "$2" print-help-run = printf " %-30s - %s\\n" "$1" "$2"
print-help = @$(call print-help-run,$1,$2) print-help = @$(call print-help-run,$1,$2)
.PHONY: update-linux-vdso
update-linux-vdso:
@for m in $(SRC_PATH)/linux-user/*/Makefile.vdso; do \
$(MAKE) $(SUBDIR_MAKEFLAGS) -C $$(dirname $$m) -f Makefile.vdso \
SRC_PATH=$(SRC_PATH) BUILD_DIR=$(BUILD_DIR); \
done
.PHONY: help .PHONY: help
help: help:
@echo 'Generic targets:' @echo 'Generic targets:'
@@ -310,9 +303,6 @@ endif
$(call print-help,distclean,Remove all generated files) $(call print-help,distclean,Remove all generated files)
$(call print-help,dist,Build a distributable tarball) $(call print-help,dist,Build a distributable tarball)
@echo '' @echo ''
@echo 'Linux-user targets:'
$(call print-help,update-linux-vdso,Build linux-user vdso images)
@echo ''
@echo 'Test targets:' @echo 'Test targets:'
$(call print-help,check,Run all tests (check-help for details)) $(call print-help,check,Run all tests (check-help for details))
$(call print-help,bench,Run all benchmarks) $(call print-help,bench,Run all benchmarks)


@@ -90,6 +90,8 @@ bool kvm_kernel_irqchip;
bool kvm_split_irqchip; bool kvm_split_irqchip;
bool kvm_async_interrupts_allowed; bool kvm_async_interrupts_allowed;
bool kvm_halt_in_kernel_allowed; bool kvm_halt_in_kernel_allowed;
bool kvm_eventfds_allowed;
bool kvm_irqfds_allowed;
bool kvm_resamplefds_allowed; bool kvm_resamplefds_allowed;
bool kvm_msi_via_irqfd_allowed; bool kvm_msi_via_irqfd_allowed;
bool kvm_gsi_routing_allowed; bool kvm_gsi_routing_allowed;
@@ -97,6 +99,8 @@ bool kvm_gsi_direct_mapping;
bool kvm_allowed; bool kvm_allowed;
bool kvm_readonly_mem_allowed; bool kvm_readonly_mem_allowed;
bool kvm_vm_attributes_allowed; bool kvm_vm_attributes_allowed;
bool kvm_direct_msi_allowed;
bool kvm_ioeventfd_any_length_allowed;
bool kvm_msi_use_devid; bool kvm_msi_use_devid;
bool kvm_has_guest_debug; bool kvm_has_guest_debug;
static int kvm_sstep_flags; static int kvm_sstep_flags;
@@ -107,9 +111,6 @@ static const KVMCapabilityInfo kvm_required_capabilites[] = {
KVM_CAP_INFO(USER_MEMORY), KVM_CAP_INFO(USER_MEMORY),
KVM_CAP_INFO(DESTROY_MEMORY_REGION_WORKS), KVM_CAP_INFO(DESTROY_MEMORY_REGION_WORKS),
KVM_CAP_INFO(JOIN_MEMORY_REGIONS_WORKS), KVM_CAP_INFO(JOIN_MEMORY_REGIONS_WORKS),
KVM_CAP_INFO(INTERNAL_ERROR_DATA),
KVM_CAP_INFO(IOEVENTFD),
KVM_CAP_INFO(IOEVENTFD_ANY_LENGTH),
KVM_CAP_LAST_INFO KVM_CAP_LAST_INFO
}; };
@@ -173,31 +174,13 @@ void kvm_resample_fd_notify(int gsi)
} }
} }
unsigned int kvm_get_max_memslots(void) int kvm_get_max_memslots(void)
{ {
KVMState *s = KVM_STATE(current_accel()); KVMState *s = KVM_STATE(current_accel());
return s->nr_slots; return s->nr_slots;
} }
unsigned int kvm_get_free_memslots(void)
{
unsigned int used_slots = 0;
KVMState *s = kvm_state;
int i;
kvm_slots_lock();
for (i = 0; i < s->nr_as; i++) {
if (!s->as[i].ml) {
continue;
}
used_slots = MAX(used_slots, s->as[i].ml->nr_used_slots);
}
kvm_slots_unlock();
return s->nr_slots - used_slots;
}
/* Called with KVMMemoryListener.slots_lock held */ /* Called with KVMMemoryListener.slots_lock held */
static KVMSlot *kvm_get_free_slot(KVMMemoryListener *kml) static KVMSlot *kvm_get_free_slot(KVMMemoryListener *kml)
{ {
@@ -213,6 +196,19 @@ static KVMSlot *kvm_get_free_slot(KVMMemoryListener *kml)
return NULL; return NULL;
} }
bool kvm_has_free_slot(MachineState *ms)
{
KVMState *s = KVM_STATE(ms->accelerator);
bool result;
KVMMemoryListener *kml = &s->memory_listener;
kvm_slots_lock();
result = !!kvm_get_free_slot(kml);
kvm_slots_unlock();
return result;
}
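Note: the two sides of this hunk expose memslot availability differently. One counts how many slots remain free per address space (kvm_get_free_memslots), the other only answers whether at least one slot is free (kvm_has_free_slot). A minimal sketch of that difference, using a hypothetical fixed-size slot array instead of the real KVMMemoryListener bookkeeping:

#include <stdbool.h>
#include <stddef.h>

#define NR_SLOTS 32

struct slot { bool in_use; };
static struct slot slots[NR_SLOTS];

/* "how many are left": needed when callers want to size things up front */
static unsigned int get_free_memslots(void)
{
    unsigned int used = 0;
    for (size_t i = 0; i < NR_SLOTS; i++) {
        used += slots[i].in_use;
    }
    return NR_SLOTS - used;
}

/* "is there at least one": enough for a simple yes/no capability check */
static bool has_free_slot(void)
{
    for (size_t i = 0; i < NR_SLOTS; i++) {
        if (!slots[i].in_use) {
            return true;
        }
    }
    return false;
}

int main(void)
{
    slots[0].in_use = true;
    return (has_free_slot() && get_free_memslots() == NR_SLOTS - 1) ? 0 : 1;
}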
/* Called with KVMMemoryListener.slots_lock held */ /* Called with KVMMemoryListener.slots_lock held */
static KVMSlot *kvm_alloc_slot(KVMMemoryListener *kml) static KVMSlot *kvm_alloc_slot(KVMMemoryListener *kml)
{ {
@@ -1105,6 +1101,13 @@ static void kvm_coalesce_pio_del(MemoryListener *listener,
} }
} }
static MemoryListener kvm_coalesced_pio_listener = {
.name = "kvm-coalesced-pio",
.coalesced_io_add = kvm_coalesce_pio_add,
.coalesced_io_del = kvm_coalesce_pio_del,
.priority = MEMORY_LISTENER_PRIORITY_MIN,
};
int kvm_check_extension(KVMState *s, unsigned int extension) int kvm_check_extension(KVMState *s, unsigned int extension)
{ {
int ret; int ret;
@@ -1246,6 +1249,43 @@ static int kvm_set_ioeventfd_pio(int fd, uint16_t addr, uint16_t val,
} }
static int kvm_check_many_ioeventfds(void)
{
/* Userspace can use ioeventfd for io notification. This requires a host
* that supports eventfd(2) and an I/O thread; since eventfd does not
* support SIGIO it cannot interrupt the vcpu.
*
* Older kernels have a 6 device limit on the KVM io bus. Find out so we
* can avoid creating too many ioeventfds.
*/
#if defined(CONFIG_EVENTFD)
int ioeventfds[7];
int i, ret = 0;
for (i = 0; i < ARRAY_SIZE(ioeventfds); i++) {
ioeventfds[i] = eventfd(0, EFD_CLOEXEC);
if (ioeventfds[i] < 0) {
break;
}
ret = kvm_set_ioeventfd_pio(ioeventfds[i], 0, i, true, 2, true);
if (ret < 0) {
close(ioeventfds[i]);
break;
}
}
/* Decide whether many devices are supported or not */
ret = i == ARRAY_SIZE(ioeventfds);
while (i-- > 0) {
kvm_set_ioeventfd_pio(ioeventfds[i], 0, i, false, 2, true);
close(ioeventfds[i]);
}
return ret;
#else
return 0;
#endif
}
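Note: the kvm_check_many_ioeventfds() body above probes the old 6-device limit on the KVM io bus (its own comment spells this out) by creating up to seven eventfds, registering each as a PIO ioeventfd, and then tearing everything down again. A stripped-down, Linux-only sketch of the same probe-and-cleanup shape; the real code additionally wires each fd into KVM via kvm_set_ioeventfd_pio, which is omitted here:

#include <stdbool.h>
#include <sys/eventfd.h>
#include <unistd.h>

#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

static bool probe_many_eventfds(void)
{
    int fds[7];
    size_t i;

    for (i = 0; i < ARRAY_SIZE(fds); i++) {
        fds[i] = eventfd(0, EFD_CLOEXEC);
        if (fds[i] < 0) {
            break;                      /* host ran out of fds or lacks eventfd */
        }
        /* the real probe also registers fds[i] with KVM at this point */
    }

    bool ok = (i == ARRAY_SIZE(fds));   /* all seven probes succeeded */

    while (i-- > 0) {
        close(fds[i]);                  /* undo the probe regardless of outcome */
    }
    return ok;
}

int main(void)
{
    return probe_many_eventfds() ? 0 : 1;
}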
static const KVMCapabilityInfo * static const KVMCapabilityInfo *
kvm_check_extension_list(KVMState *s, const KVMCapabilityInfo *list) kvm_check_extension_list(KVMState *s, const KVMCapabilityInfo *list)
{ {
@@ -1347,7 +1387,6 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
} }
start_addr += slot_size; start_addr += slot_size;
size -= slot_size; size -= slot_size;
kml->nr_used_slots--;
} while (size); } while (size);
return; return;
} }
@@ -1373,7 +1412,6 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
ram_start_offset += slot_size; ram_start_offset += slot_size;
ram += slot_size; ram += slot_size;
size -= slot_size; size -= slot_size;
kml->nr_used_slots++;
} while (size); } while (size);
} }
@@ -1761,8 +1799,6 @@ void kvm_memory_listener_register(KVMState *s, KVMMemoryListener *kml,
static MemoryListener kvm_io_listener = { static MemoryListener kvm_io_listener = {
.name = "kvm-io", .name = "kvm-io",
.coalesced_io_add = kvm_coalesce_pio_add,
.coalesced_io_del = kvm_coalesce_pio_del,
.eventfd_add = kvm_io_ioeventfd_add, .eventfd_add = kvm_io_ioeventfd_add,
.eventfd_del = kvm_io_ioeventfd_del, .eventfd_del = kvm_io_ioeventfd_del,
.priority = MEMORY_LISTENER_PRIORITY_DEV_BACKEND, .priority = MEMORY_LISTENER_PRIORITY_DEV_BACKEND,
@@ -1804,7 +1840,7 @@ static void clear_gsi(KVMState *s, unsigned int gsi)
void kvm_init_irq_routing(KVMState *s) void kvm_init_irq_routing(KVMState *s)
{ {
int gsi_count; int gsi_count, i;
gsi_count = kvm_check_extension(s, KVM_CAP_IRQ_ROUTING) - 1; gsi_count = kvm_check_extension(s, KVM_CAP_IRQ_ROUTING) - 1;
if (gsi_count > 0) { if (gsi_count > 0) {
@@ -1816,6 +1852,12 @@ void kvm_init_irq_routing(KVMState *s)
s->irq_routes = g_malloc0(sizeof(*s->irq_routes)); s->irq_routes = g_malloc0(sizeof(*s->irq_routes));
s->nr_allocated_irq_routes = 0; s->nr_allocated_irq_routes = 0;
if (!kvm_direct_msi_allowed) {
for (i = 0; i < KVM_MSI_HASHTAB_SIZE; i++) {
QTAILQ_INIT(&s->msi_hashtab[i]);
}
}
kvm_arch_init_irq_routing(s); kvm_arch_init_irq_routing(s);
} }
@@ -1935,10 +1977,41 @@ void kvm_irqchip_change_notify(void)
notifier_list_notify(&kvm_irqchip_change_notifiers, NULL); notifier_list_notify(&kvm_irqchip_change_notifiers, NULL);
} }
static unsigned int kvm_hash_msi(uint32_t data)
{
/* This is optimized for IA32 MSI layout. However, no other arch shall
* repeat the mistake of not providing a direct MSI injection API. */
return data & 0xff;
}
static void kvm_flush_dynamic_msi_routes(KVMState *s)
{
KVMMSIRoute *route, *next;
unsigned int hash;
for (hash = 0; hash < KVM_MSI_HASHTAB_SIZE; hash++) {
QTAILQ_FOREACH_SAFE(route, &s->msi_hashtab[hash], entry, next) {
kvm_irqchip_release_virq(s, route->kroute.gsi);
QTAILQ_REMOVE(&s->msi_hashtab[hash], route, entry);
g_free(route);
}
}
}
static int kvm_irqchip_get_virq(KVMState *s) static int kvm_irqchip_get_virq(KVMState *s)
{ {
int next_virq; int next_virq;
/*
* PIC and IOAPIC share the first 16 GSI numbers, thus the available
* GSI numbers are more than the number of IRQ route. Allocating a GSI
* number can succeed even though a new route entry cannot be added.
* When this happens, flush dynamic MSI entries to free IRQ route entries.
*/
if (!kvm_direct_msi_allowed && s->irq_routes->nr == s->gsi_count) {
kvm_flush_dynamic_msi_routes(s);
}
/* Return the lowest unused GSI in the bitmap */ /* Return the lowest unused GSI in the bitmap */
next_virq = find_first_zero_bit(s->used_gsi_bitmap, s->gsi_count); next_virq = find_first_zero_bit(s->used_gsi_bitmap, s->gsi_count);
if (next_virq >= s->gsi_count) { if (next_virq >= s->gsi_count) {
@@ -1948,10 +2021,27 @@ static int kvm_irqchip_get_virq(KVMState *s)
} }
} }
static KVMMSIRoute *kvm_lookup_msi_route(KVMState *s, MSIMessage msg)
{
unsigned int hash = kvm_hash_msi(msg.data);
KVMMSIRoute *route;
QTAILQ_FOREACH(route, &s->msi_hashtab[hash], entry) {
if (route->kroute.u.msi.address_lo == (uint32_t)msg.address &&
route->kroute.u.msi.address_hi == (msg.address >> 32) &&
route->kroute.u.msi.data == le32_to_cpu(msg.data)) {
return route;
}
}
return NULL;
}
int kvm_irqchip_send_msi(KVMState *s, MSIMessage msg) int kvm_irqchip_send_msi(KVMState *s, MSIMessage msg)
{ {
struct kvm_msi msi; struct kvm_msi msi;
KVMMSIRoute *route;
if (kvm_direct_msi_allowed) {
msi.address_lo = (uint32_t)msg.address; msi.address_lo = (uint32_t)msg.address;
msi.address_hi = msg.address >> 32; msi.address_hi = msg.address >> 32;
msi.data = le32_to_cpu(msg.data); msi.data = le32_to_cpu(msg.data);
@@ -1959,6 +2049,35 @@ int kvm_irqchip_send_msi(KVMState *s, MSIMessage msg)
memset(msi.pad, 0, sizeof(msi.pad)); memset(msi.pad, 0, sizeof(msi.pad));
return kvm_vm_ioctl(s, KVM_SIGNAL_MSI, &msi); return kvm_vm_ioctl(s, KVM_SIGNAL_MSI, &msi);
}
route = kvm_lookup_msi_route(s, msg);
if (!route) {
int virq;
virq = kvm_irqchip_get_virq(s);
if (virq < 0) {
return virq;
}
route = g_new0(KVMMSIRoute, 1);
route->kroute.gsi = virq;
route->kroute.type = KVM_IRQ_ROUTING_MSI;
route->kroute.flags = 0;
route->kroute.u.msi.address_lo = (uint32_t)msg.address;
route->kroute.u.msi.address_hi = msg.address >> 32;
route->kroute.u.msi.data = le32_to_cpu(msg.data);
kvm_add_routing_entry(s, &route->kroute);
kvm_irqchip_commit_routes(s);
QTAILQ_INSERT_TAIL(&s->msi_hashtab[kvm_hash_msi(msg.data)], route,
entry);
}
assert(route->kroute.type == KVM_IRQ_ROUTING_MSI);
return kvm_set_irq(s, route->kroute.gsi, 1);
} }
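Note: one side of the kvm_irqchip_send_msi() hunk falls back, when direct MSI injection (KVM_CAP_SIGNAL_MSI) is unavailable, to caching one routing entry per MSI message in a small hash table keyed on the low 8 bits of msg.data. A self-contained sketch of that lookup-or-insert pattern; types and names are simplified, and the real code stores full kvm_irq_routing_entry structures and commits them to the kernel:

#include <stdint.h>
#include <stdlib.h>

#define MSI_HASHTAB_SIZE 256

struct msi_route {
    uint64_t addr;
    uint32_t data;
    int gsi;
    struct msi_route *next;
};

static struct msi_route *msi_hashtab[MSI_HASHTAB_SIZE];

static unsigned int hash_msi(uint32_t data)
{
    return data & 0xff;                 /* matches kvm_hash_msi() in the hunk */
}

static struct msi_route *lookup_or_add(uint64_t addr, uint32_t data,
                                       int (*alloc_gsi)(void))
{
    unsigned int h = hash_msi(data);
    struct msi_route *r;

    for (r = msi_hashtab[h]; r; r = r->next) {
        if (r->addr == addr && r->data == data) {
            return r;                   /* reuse the cached GSI for this message */
        }
    }

    r = calloc(1, sizeof(*r));
    if (!r) {
        return NULL;
    }
    r->addr = addr;
    r->data = data;
    r->gsi  = alloc_gsi();              /* in the hunk: kvm_irqchip_get_virq() */
    r->next = msi_hashtab[h];
    msi_hashtab[h] = r;
    return r;
}

static int next_gsi = 24;
static int alloc_gsi_stub(void) { return next_gsi++; }

int main(void)
{
    struct msi_route *a = lookup_or_add(0xfee00000, 0x4021, alloc_gsi_stub);
    struct msi_route *b = lookup_or_add(0xfee00000, 0x4021, alloc_gsi_stub);
    return (a && a == b) ? 0 : 1;       /* second lookup reuses the cached route */
}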
int kvm_irqchip_add_msi_route(KVMRouteChange *c, int vector, PCIDevice *dev) int kvm_irqchip_add_msi_route(KVMRouteChange *c, int vector, PCIDevice *dev)
@@ -2085,6 +2204,10 @@ static int kvm_irqchip_assign_irqfd(KVMState *s, EventNotifier *event,
} }
} }
if (!kvm_irqfds_enabled()) {
return -ENOSYS;
}
return kvm_vm_ioctl(s, KVM_IRQFD, &irqfd); return kvm_vm_ioctl(s, KVM_IRQFD, &irqfd);
} }
@@ -2245,11 +2368,6 @@ static void kvm_irqchip_create(KVMState *s)
return; return;
} }
if (kvm_check_extension(s, KVM_CAP_IRQFD) <= 0) {
fprintf(stderr, "kvm: irqfd not implemented\n");
exit(1);
}
/* First probe and see if there's a arch-specific hook to create the /* First probe and see if there's a arch-specific hook to create the
* in-kernel irqchip for us */ * in-kernel irqchip for us */
ret = kvm_arch_irqchip_create(s); ret = kvm_arch_irqchip_create(s);
@@ -2524,8 +2642,22 @@ static int kvm_init(MachineState *ms)
#ifdef KVM_CAP_VCPU_EVENTS #ifdef KVM_CAP_VCPU_EVENTS
s->vcpu_events = kvm_check_extension(s, KVM_CAP_VCPU_EVENTS); s->vcpu_events = kvm_check_extension(s, KVM_CAP_VCPU_EVENTS);
#endif #endif
s->robust_singlestep =
kvm_check_extension(s, KVM_CAP_X86_ROBUST_SINGLESTEP);
#ifdef KVM_CAP_DEBUGREGS
s->debugregs = kvm_check_extension(s, KVM_CAP_DEBUGREGS);
#endif
s->max_nested_state_len = kvm_check_extension(s, KVM_CAP_NESTED_STATE); s->max_nested_state_len = kvm_check_extension(s, KVM_CAP_NESTED_STATE);
#ifdef KVM_CAP_IRQ_ROUTING
kvm_direct_msi_allowed = (kvm_check_extension(s, KVM_CAP_SIGNAL_MSI) > 0);
#endif
s->intx_set_mask = kvm_check_extension(s, KVM_CAP_PCI_2_3);
s->irq_set_ioctl = KVM_IRQ_LINE; s->irq_set_ioctl = KVM_IRQ_LINE;
if (kvm_check_extension(s, KVM_CAP_IRQ_INJECT_STATUS)) { if (kvm_check_extension(s, KVM_CAP_IRQ_INJECT_STATUS)) {
s->irq_set_ioctl = KVM_IRQ_LINE_STATUS; s->irq_set_ioctl = KVM_IRQ_LINE_STATUS;
@@ -2534,12 +2666,21 @@ static int kvm_init(MachineState *ms)
kvm_readonly_mem_allowed = kvm_readonly_mem_allowed =
(kvm_check_extension(s, KVM_CAP_READONLY_MEM) > 0); (kvm_check_extension(s, KVM_CAP_READONLY_MEM) > 0);
kvm_eventfds_allowed =
(kvm_check_extension(s, KVM_CAP_IOEVENTFD) > 0);
kvm_irqfds_allowed =
(kvm_check_extension(s, KVM_CAP_IRQFD) > 0);
kvm_resamplefds_allowed = kvm_resamplefds_allowed =
(kvm_check_extension(s, KVM_CAP_IRQFD_RESAMPLE) > 0); (kvm_check_extension(s, KVM_CAP_IRQFD_RESAMPLE) > 0);
kvm_vm_attributes_allowed = kvm_vm_attributes_allowed =
(kvm_check_extension(s, KVM_CAP_VM_ATTRIBUTES) > 0); (kvm_check_extension(s, KVM_CAP_VM_ATTRIBUTES) > 0);
kvm_ioeventfd_any_length_allowed =
(kvm_check_extension(s, KVM_CAP_IOEVENTFD_ANY_LENGTH) > 0);
#ifdef KVM_CAP_SET_GUEST_DEBUG #ifdef KVM_CAP_SET_GUEST_DEBUG
kvm_has_guest_debug = kvm_has_guest_debug =
(kvm_check_extension(s, KVM_CAP_SET_GUEST_DEBUG) > 0); (kvm_check_extension(s, KVM_CAP_SET_GUEST_DEBUG) > 0);
@@ -2576,15 +2717,23 @@ static int kvm_init(MachineState *ms)
kvm_irqchip_create(s); kvm_irqchip_create(s);
} }
if (kvm_eventfds_allowed) {
s->memory_listener.listener.eventfd_add = kvm_mem_ioeventfd_add; s->memory_listener.listener.eventfd_add = kvm_mem_ioeventfd_add;
s->memory_listener.listener.eventfd_del = kvm_mem_ioeventfd_del; s->memory_listener.listener.eventfd_del = kvm_mem_ioeventfd_del;
}
s->memory_listener.listener.coalesced_io_add = kvm_coalesce_mmio_region; s->memory_listener.listener.coalesced_io_add = kvm_coalesce_mmio_region;
s->memory_listener.listener.coalesced_io_del = kvm_uncoalesce_mmio_region; s->memory_listener.listener.coalesced_io_del = kvm_uncoalesce_mmio_region;
kvm_memory_listener_register(s, &s->memory_listener, kvm_memory_listener_register(s, &s->memory_listener,
&address_space_memory, 0, "kvm-memory"); &address_space_memory, 0, "kvm-memory");
if (kvm_eventfds_allowed) {
memory_listener_register(&kvm_io_listener, memory_listener_register(&kvm_io_listener,
&address_space_io); &address_space_io);
}
memory_listener_register(&kvm_coalesced_pio_listener,
&address_space_io);
s->many_ioeventfds = kvm_check_many_ioeventfds();
s->sync_mmu = !!kvm_vm_check_extension(kvm_state, KVM_CAP_SYNC_MMU); s->sync_mmu = !!kvm_vm_check_extension(kvm_state, KVM_CAP_SYNC_MMU);
if (!s->sync_mmu) { if (!s->sync_mmu) {
@@ -2638,15 +2787,17 @@ static void kvm_handle_io(uint16_t port, MemTxAttrs attrs, void *data, int direc
static int kvm_handle_internal_error(CPUState *cpu, struct kvm_run *run) static int kvm_handle_internal_error(CPUState *cpu, struct kvm_run *run)
{ {
int i;
fprintf(stderr, "KVM internal error. Suberror: %d\n", fprintf(stderr, "KVM internal error. Suberror: %d\n",
run->internal.suberror); run->internal.suberror);
if (kvm_check_extension(kvm_state, KVM_CAP_INTERNAL_ERROR_DATA)) {
int i;
for (i = 0; i < run->internal.ndata; ++i) { for (i = 0; i < run->internal.ndata; ++i) {
fprintf(stderr, "extra data[%d]: 0x%016"PRIx64"\n", fprintf(stderr, "extra data[%d]: 0x%016"PRIx64"\n",
i, (uint64_t)run->internal.data[i]); i, (uint64_t)run->internal.data[i]);
} }
}
if (run->internal.suberror == KVM_INTERNAL_ERROR_EMULATION) { if (run->internal.suberror == KVM_INTERNAL_ERROR_EMULATION) {
fprintf(stderr, "emulation failure\n"); fprintf(stderr, "emulation failure\n");
if (!kvm_arch_stop_on_emulation_error(cpu)) { if (!kvm_arch_stop_on_emulation_error(cpu)) {
@@ -3139,11 +3290,29 @@ int kvm_has_vcpu_events(void)
return kvm_state->vcpu_events; return kvm_state->vcpu_events;
} }
int kvm_has_robust_singlestep(void)
{
return kvm_state->robust_singlestep;
}
int kvm_has_debugregs(void)
{
return kvm_state->debugregs;
}
int kvm_max_nested_state_length(void) int kvm_max_nested_state_length(void)
{ {
return kvm_state->max_nested_state_len; return kvm_state->max_nested_state_len;
} }
int kvm_has_many_ioeventfds(void)
{
if (!kvm_enabled()) {
return 0;
}
return kvm_state->many_ioeventfds;
}
int kvm_has_gsi_routing(void) int kvm_has_gsi_routing(void)
{ {
#ifdef KVM_CAP_IRQ_ROUTING #ifdef KVM_CAP_IRQ_ROUTING
@@ -3153,6 +3322,11 @@ int kvm_has_gsi_routing(void)
#endif #endif
} }
int kvm_has_intx_set_mask(void)
{
return kvm_state->intx_set_mask;
}
bool kvm_arm_supports_user_irq(void) bool kvm_arm_supports_user_irq(void)
{ {
return kvm_check_extension(kvm_state, KVM_CAP_ARM_USER_IRQ); return kvm_check_extension(kvm_state, KVM_CAP_ARM_USER_IRQ);
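Note: most of the feature flags toggled in the kvm_init() hunks above (kvm_eventfds_allowed from KVM_CAP_IOEVENTFD, kvm_irqfds_allowed from KVM_CAP_IRQFD, kvm_direct_msi_allowed from KVM_CAP_SIGNAL_MSI, and so on) come from the same KVM_CHECK_EXTENSION pattern: a positive ioctl return means the capability is present, and some capabilities return a count. A minimal standalone probe, assuming a Linux host with /dev/kvm available:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0) {
        perror("open /dev/kvm");
        return 1;
    }

    /* >0 means the capability exists; 0 or negative means it does not */
    int irqfd  = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_IRQFD);
    int sigmsi = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_SIGNAL_MSI);

    printf("KVM_CAP_IRQFD: %d, KVM_CAP_SIGNAL_MSI: %d\n", irqfd, sigmsi);
    close(kvm);
    return 0;
}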


@@ -17,13 +17,17 @@
KVMState *kvm_state; KVMState *kvm_state;
bool kvm_kernel_irqchip; bool kvm_kernel_irqchip;
bool kvm_async_interrupts_allowed; bool kvm_async_interrupts_allowed;
bool kvm_eventfds_allowed;
bool kvm_irqfds_allowed;
bool kvm_resamplefds_allowed; bool kvm_resamplefds_allowed;
bool kvm_msi_via_irqfd_allowed; bool kvm_msi_via_irqfd_allowed;
bool kvm_gsi_routing_allowed; bool kvm_gsi_routing_allowed;
bool kvm_gsi_direct_mapping; bool kvm_gsi_direct_mapping;
bool kvm_allowed; bool kvm_allowed;
bool kvm_readonly_mem_allowed; bool kvm_readonly_mem_allowed;
bool kvm_ioeventfd_any_length_allowed;
bool kvm_msi_use_devid; bool kvm_msi_use_devid;
bool kvm_direct_msi_allowed;
void kvm_flush_coalesced_mmio_buffer(void) void kvm_flush_coalesced_mmio_buffer(void)
{ {
@@ -38,6 +42,11 @@ bool kvm_has_sync_mmu(void)
return false; return false;
} }
int kvm_has_many_ioeventfds(void)
{
return 0;
}
int kvm_on_sigbus_vcpu(CPUState *cpu, int code, void *addr) int kvm_on_sigbus_vcpu(CPUState *cpu, int code, void *addr)
{ {
return 1; return 1;
@@ -83,6 +92,11 @@ void kvm_irqchip_change_notify(void)
{ {
} }
int kvm_irqchip_add_adapter_route(KVMState *s, AdapterInfo *adapter)
{
return -ENOSYS;
}
int kvm_irqchip_add_irqfd_notifier_gsi(KVMState *s, EventNotifier *n, int kvm_irqchip_add_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
EventNotifier *rn, int virq) EventNotifier *rn, int virq)
{ {
@@ -95,14 +109,9 @@ int kvm_irqchip_remove_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
return -ENOSYS; return -ENOSYS;
} }
unsigned int kvm_get_max_memslots(void) bool kvm_has_free_slot(MachineState *ms)
{ {
return 0; return false;
}
unsigned int kvm_get_free_memslots(void)
{
return 0;
} }
void kvm_init_cpu_signals(CPUState *cpu) void kvm_init_cpu_signals(CPUState *cpu)


@@ -22,6 +22,10 @@ void tlb_set_dirty(CPUState *cpu, vaddr vaddr)
{ {
} }
void tcg_flush_jmp_cache(CPUState *cpu)
{
}
int probe_access_flags(CPUArchState *env, vaddr addr, int size, int probe_access_flags(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx, MMUAccessType access_type, int mmu_idx,
bool nonfault, void **phost, uintptr_t retaddr) bool nonfault, void **phost, uintptr_t retaddr)


@@ -24,7 +24,6 @@
#include "exec/memory.h" #include "exec/memory.h"
#include "exec/cpu_ldst.h" #include "exec/cpu_ldst.h"
#include "exec/cputlb.h" #include "exec/cputlb.h"
#include "exec/tb-flush.h"
#include "exec/memory-internal.h" #include "exec/memory-internal.h"
#include "exec/ram_addr.h" #include "exec/ram_addr.h"
#include "tcg/tcg.h" #include "tcg/tcg.h"
@@ -322,6 +321,21 @@ static void flush_all_helper(CPUState *src, run_on_cpu_func fn,
} }
} }
void tlb_flush_counts(size_t *pfull, size_t *ppart, size_t *pelide)
{
CPUState *cpu;
size_t full = 0, part = 0, elide = 0;
CPU_FOREACH(cpu) {
full += qatomic_read(&cpu->neg.tlb.c.full_flush_count);
part += qatomic_read(&cpu->neg.tlb.c.part_flush_count);
elide += qatomic_read(&cpu->neg.tlb.c.elide_flush_count);
}
*pfull = full;
*ppart = part;
*pelide = elide;
}
static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data) static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
{ {
uint16_t asked = data.host_int; uint16_t asked = data.host_int;
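Note: tlb_flush_counts() above only sums per-vCPU flush counters with relaxed qatomic_read() loads; what the two sides of this compare disagree on is where it lives (cputlb.c here, versus a static copy in the monitor/translation code further down). A generic sketch of that read-side aggregation pattern, with C11 atomics standing in for qatomic_read():

#include <stdatomic.h>
#include <stddef.h>

#define NR_CPUS 8

struct tlb_stats {
    atomic_size_t full_flush;
    atomic_size_t part_flush;
    atomic_size_t elide_flush;
};

static struct tlb_stats per_cpu[NR_CPUS];

static void flush_counts(size_t *pfull, size_t *ppart, size_t *pelide)
{
    size_t full = 0, part = 0, elide = 0;

    for (size_t i = 0; i < NR_CPUS; i++) {
        /* relaxed loads: this is a rough monitor-style snapshot, not a barrier */
        full  += atomic_load_explicit(&per_cpu[i].full_flush, memory_order_relaxed);
        part  += atomic_load_explicit(&per_cpu[i].part_flush, memory_order_relaxed);
        elide += atomic_load_explicit(&per_cpu[i].elide_flush, memory_order_relaxed);
    }
    *pfull = full;
    *ppart = part;
    *pelide = elide;
}

int main(void)
{
    size_t f, p, e;
    atomic_fetch_add(&per_cpu[0].full_flush, 1);
    flush_counts(&f, &p, &e);
    return f == 1 ? 0 : 1;
}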
@@ -2692,7 +2706,7 @@ static uint64_t do_st16_leN(CPUState *cpu, MMULookupPageData *p,
case MO_ATOM_WITHIN16_PAIR: case MO_ATOM_WITHIN16_PAIR:
/* Since size > 8, this is the half that must be atomic. */ /* Since size > 8, this is the half that must be atomic. */
if (!HAVE_CMPXCHG128) { if (!HAVE_ATOMIC128_RW) {
cpu_loop_exit_atomic(cpu, ra); cpu_loop_exit_atomic(cpu, ra);
} }
return store_whole_le16(p->haddr, p->size, val_le); return store_whole_le16(p->haddr, p->size, val_le);


@@ -14,6 +14,8 @@
extern int64_t max_delay; extern int64_t max_delay;
extern int64_t max_advance; extern int64_t max_advance;
void dump_exec_info(GString *buf);
/* /*
* Return true if CS is not running in parallel with other cpus, either * Return true if CS is not running in parallel with other cpus, either
* because there are no other cpus or we are within an exclusive context. * because there are no other cpus or we are within an exclusive context.


@@ -825,7 +825,7 @@ static uint64_t store_whole_le16(void *pv, int size, Int128 val_le)
int sh = o * 8; int sh = o * 8;
Int128 m, v; Int128 m, v;
qemu_build_assert(HAVE_CMPXCHG128); qemu_build_assert(HAVE_ATOMIC128_RW);
/* Like MAKE_64BIT_MASK(0, sz), but larger. */ /* Like MAKE_64BIT_MASK(0, sz), but larger. */
if (sz <= 64) { if (sz <= 64) {
@@ -887,7 +887,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
return; return;
} }
} else if ((pi & 15) == 7) { } else if ((pi & 15) == 7) {
if (HAVE_CMPXCHG128) { if (HAVE_ATOMIC128_RW) {
Int128 v = int128_lshift(int128_make64(val), 56); Int128 v = int128_lshift(int128_make64(val), 56);
Int128 m = int128_lshift(int128_make64(0xffff), 56); Int128 m = int128_lshift(int128_make64(0xffff), 56);
store_atom_insert_al16(pv - 7, v, m); store_atom_insert_al16(pv - 7, v, m);
@@ -956,7 +956,7 @@ static void store_atom_4(CPUState *cpu, uintptr_t ra,
return; return;
} }
} else { } else {
if (HAVE_CMPXCHG128) { if (HAVE_ATOMIC128_RW) {
store_whole_le16(pv, 4, int128_make64(cpu_to_le32(val))); store_whole_le16(pv, 4, int128_make64(cpu_to_le32(val)));
return; return;
} }
@@ -1021,7 +1021,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
} }
break; break;
case MO_64: case MO_64:
if (HAVE_CMPXCHG128) { if (HAVE_ATOMIC128_RW) {
store_whole_le16(pv, 8, int128_make64(cpu_to_le64(val))); store_whole_le16(pv, 8, int128_make64(cpu_to_le64(val)));
return; return;
} }
@@ -1076,7 +1076,7 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
} }
break; break;
case -MO_64: case -MO_64:
if (HAVE_CMPXCHG128) { if (HAVE_ATOMIC128_RW) {
uint64_t val_le; uint64_t val_le;
int s2 = pi & 15; int s2 = pi & 15;
int s1 = 16 - s2; int s1 = 16 - s2;
@@ -1103,6 +1103,10 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
} }
break; break;
case MO_128: case MO_128:
if (HAVE_ATOMIC128_RW) {
atomic16_set(pv, val);
return;
}
break; break;
default: default:
g_assert_not_reached(); g_assert_not_reached();
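Note: the store_atom_* hunks above differ only in which predicate gates the 16-byte-wide fast path (HAVE_ATOMIC128_RW versus HAVE_CMPXCHG128); on both sides, when the host cannot perform the access atomically the helper bails out (cpu_loop_exit_atomic() in the earlier hunks) and the operation is redone under a serialized fallback. A schematic of that dispatch with placeholder names; this is not the QEMU code, just the control-flow shape:

#include <stdbool.h>
#include <stdint.h>

typedef struct { uint64_t lo, hi; } int128_t;       /* stand-in for Int128 */

/* placeholder for the host capability probe (HAVE_ATOMIC128_RW and friends) */
static bool have_atomic128_rw(void) { return false; }

/* placeholder fast path: a real implementation would be a single atomic store */
static void atomic16_set_host(void *pv, int128_t val)
{
    int128_t *p = pv;
    *p = val;
}

/* placeholder slow path: in QEMU this is a restart under exclusive execution */
static void store_16_serial(void *pv, int128_t val)
{
    int128_t *p = pv;
    *p = val;
}

static void store_atom_16_sketch(void *pv, int128_t val)
{
    if (have_atomic128_rw()) {
        atomic16_set_host(pv, val);     /* host can store 16 bytes atomically */
        return;
    }
    store_16_serial(pv, val);           /* otherwise fall back to the slow path */
}

int main(void)
{
    int128_t buf = {0, 0};
    int128_t v = {0x1122334455667788ull, 0x99aabbccddeeff00ull};
    store_atom_16_sketch(&buf, v);
    return (buf.lo == v.lo && buf.hi == v.hi) ? 0 : 1;
}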


@@ -8,7 +8,6 @@
#include "qemu/osdep.h" #include "qemu/osdep.h"
#include "qemu/accel.h" #include "qemu/accel.h"
#include "qemu/qht.h"
#include "qapi/error.h" #include "qapi/error.h"
#include "qapi/type-helpers.h" #include "qapi/type-helpers.h"
#include "qapi/qapi-commands-machine.h" #include "qapi/qapi-commands-machine.h"
@@ -18,7 +17,6 @@
#include "sysemu/tcg.h" #include "sysemu/tcg.h"
#include "tcg/tcg.h" #include "tcg/tcg.h"
#include "internal-common.h" #include "internal-common.h"
#include "tb-context.h"
static void dump_drift_info(GString *buf) static void dump_drift_info(GString *buf)
@@ -52,153 +50,6 @@ static void dump_accel_info(GString *buf)
one_insn_per_tb ? "on" : "off"); one_insn_per_tb ? "on" : "off");
} }
static void print_qht_statistics(struct qht_stats hst, GString *buf)
{
uint32_t hgram_opts;
size_t hgram_bins;
char *hgram;
if (!hst.head_buckets) {
return;
}
g_string_append_printf(buf, "TB hash buckets %zu/%zu "
"(%0.2f%% head buckets used)\n",
hst.used_head_buckets, hst.head_buckets,
(double)hst.used_head_buckets /
hst.head_buckets * 100);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_opts |= QDIST_PR_100X | QDIST_PR_PERCENT;
if (qdist_xmax(&hst.occupancy) - qdist_xmin(&hst.occupancy) == 1) {
hgram_opts |= QDIST_PR_NODECIMAL;
}
hgram = qdist_pr(&hst.occupancy, 10, hgram_opts);
g_string_append_printf(buf, "TB hash occupancy %0.2f%% avg chain occ. "
"Histogram: %s\n",
qdist_avg(&hst.occupancy) * 100, hgram);
g_free(hgram);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_bins = qdist_xmax(&hst.chain) - qdist_xmin(&hst.chain);
if (hgram_bins > 10) {
hgram_bins = 10;
} else {
hgram_bins = 0;
hgram_opts |= QDIST_PR_NODECIMAL | QDIST_PR_NOBINRANGE;
}
hgram = qdist_pr(&hst.chain, hgram_bins, hgram_opts);
g_string_append_printf(buf, "TB hash avg chain %0.3f buckets. "
"Histogram: %s\n",
qdist_avg(&hst.chain), hgram);
g_free(hgram);
}
struct tb_tree_stats {
size_t nb_tbs;
size_t host_size;
size_t target_size;
size_t max_target_size;
size_t direct_jmp_count;
size_t direct_jmp2_count;
size_t cross_page;
};
static gboolean tb_tree_stats_iter(gpointer key, gpointer value, gpointer data)
{
const TranslationBlock *tb = value;
struct tb_tree_stats *tst = data;
tst->nb_tbs++;
tst->host_size += tb->tc.size;
tst->target_size += tb->size;
if (tb->size > tst->max_target_size) {
tst->max_target_size = tb->size;
}
if (tb->page_addr[1] != -1) {
tst->cross_page++;
}
if (tb->jmp_reset_offset[0] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp_count++;
if (tb->jmp_reset_offset[1] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp2_count++;
}
}
return false;
}
static void tlb_flush_counts(size_t *pfull, size_t *ppart, size_t *pelide)
{
CPUState *cpu;
size_t full = 0, part = 0, elide = 0;
CPU_FOREACH(cpu) {
full += qatomic_read(&cpu->neg.tlb.c.full_flush_count);
part += qatomic_read(&cpu->neg.tlb.c.part_flush_count);
elide += qatomic_read(&cpu->neg.tlb.c.elide_flush_count);
}
*pfull = full;
*ppart = part;
*pelide = elide;
}
static void tcg_dump_info(GString *buf)
{
g_string_append_printf(buf, "[TCG profiler not compiled]\n");
}
static void dump_exec_info(GString *buf)
{
struct tb_tree_stats tst = {};
struct qht_stats hst;
size_t nb_tbs, flush_full, flush_part, flush_elide;
tcg_tb_foreach(tb_tree_stats_iter, &tst);
nb_tbs = tst.nb_tbs;
/* XXX: avoid using doubles ? */
g_string_append_printf(buf, "Translation buffer state:\n");
/*
* Report total code size including the padding and TB structs;
* otherwise users might think "-accel tcg,tb-size" is not honoured.
* For avg host size we use the precise numbers from tb_tree_stats though.
*/
g_string_append_printf(buf, "gen code size %zu/%zu\n",
tcg_code_size(), tcg_code_capacity());
g_string_append_printf(buf, "TB count %zu\n", nb_tbs);
g_string_append_printf(buf, "TB avg target size %zu max=%zu bytes\n",
nb_tbs ? tst.target_size / nb_tbs : 0,
tst.max_target_size);
g_string_append_printf(buf, "TB avg host size %zu bytes "
"(expansion ratio: %0.1f)\n",
nb_tbs ? tst.host_size / nb_tbs : 0,
tst.target_size ?
(double)tst.host_size / tst.target_size : 0);
g_string_append_printf(buf, "cross page TB count %zu (%zu%%)\n",
tst.cross_page,
nb_tbs ? (tst.cross_page * 100) / nb_tbs : 0);
g_string_append_printf(buf, "direct jump count %zu (%zu%%) "
"(2 jumps=%zu %zu%%)\n",
tst.direct_jmp_count,
nb_tbs ? (tst.direct_jmp_count * 100) / nb_tbs : 0,
tst.direct_jmp2_count,
nb_tbs ? (tst.direct_jmp2_count * 100) / nb_tbs : 0);
qht_statistics_init(&tb_ctx.htable, &hst);
print_qht_statistics(hst, buf);
qht_statistics_destroy(&hst);
g_string_append_printf(buf, "\nStatistics:\n");
g_string_append_printf(buf, "TB flush count %u\n",
qatomic_read(&tb_ctx.tb_flush_count));
g_string_append_printf(buf, "TB invalidate count %u\n",
qatomic_read(&tb_ctx.tb_phys_invalidate_count));
tlb_flush_counts(&flush_full, &flush_part, &flush_elide);
g_string_append_printf(buf, "TLB full flushes %zu\n", flush_full);
g_string_append_printf(buf, "TLB partial flushes %zu\n", flush_part);
g_string_append_printf(buf, "TLB elided flushes %zu\n", flush_elide);
tcg_dump_info(buf);
}
HumanReadableText *qmp_x_query_jit(Error **errp) HumanReadableText *qmp_x_query_jit(Error **errp)
{ {
g_autoptr(GString) buf = g_string_new(""); g_autoptr(GString) buf = g_string_new("");
@@ -215,11 +66,6 @@ HumanReadableText *qmp_x_query_jit(Error **errp)
return human_readable_text_from_str(buf); return human_readable_text_from_str(buf);
} }
static void tcg_dump_op_count(GString *buf)
{
g_string_append_printf(buf, "[TCG profiler not compiled]\n");
}
HumanReadableText *qmp_x_query_opcount(Error **errp) HumanReadableText *qmp_x_query_opcount(Error **errp)
{ {
g_autoptr(GString) buf = g_string_new(""); g_autoptr(GString) buf = g_string_new("");


@@ -327,7 +327,8 @@ static TCGOp *copy_st_ptr(TCGOp **begin_op, TCGOp *op)
return op; return op;
} }
static TCGOp *copy_call(TCGOp **begin_op, TCGOp *op, void *func, int *cb_idx) static TCGOp *copy_call(TCGOp **begin_op, TCGOp *op, void *empty_func,
void *func, int *cb_idx)
{ {
TCGOp *old_op; TCGOp *old_op;
int func_idx; int func_idx;
@@ -371,7 +372,8 @@ static TCGOp *append_udata_cb(const struct qemu_plugin_dyn_cb *cb,
} }
/* call */ /* call */
op = copy_call(&begin_op, op, cb->f.vcpu_udata, cb_idx); op = copy_call(&begin_op, op, HELPER(plugin_vcpu_udata_cb),
cb->f.vcpu_udata, cb_idx);
return op; return op;
} }
@@ -418,7 +420,8 @@ static TCGOp *append_mem_cb(const struct qemu_plugin_dyn_cb *cb,
if (type == PLUGIN_GEN_CB_MEM) { if (type == PLUGIN_GEN_CB_MEM) {
/* call */ /* call */
op = copy_call(&begin_op, op, cb->f.vcpu_udata, cb_idx); op = copy_call(&begin_op, op, HELPER(plugin_vcpu_mem_cb),
cb->f.vcpu_udata, cb_idx);
} }
return op; return op;
@@ -863,14 +866,10 @@ void plugin_gen_insn_end(void)
* do any clean-up here and make sure things are reset in * do any clean-up here and make sure things are reset in
* plugin_gen_tb_start. * plugin_gen_tb_start.
*/ */
void plugin_gen_tb_end(CPUState *cpu, size_t num_insns) void plugin_gen_tb_end(CPUState *cpu)
{ {
struct qemu_plugin_tb *ptb = tcg_ctx->plugin_tb; struct qemu_plugin_tb *ptb = tcg_ctx->plugin_tb;
/* translator may have removed instructions, update final count */
g_assert(num_insns <= ptb->n);
ptb->n = num_insns;
/* collect instrumentation requests */ /* collect instrumentation requests */
qemu_plugin_tb_trans_cb(cpu, ptb); qemu_plugin_tb_trans_cb(cpu, ptb);
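Note: one side of the plugin_gen_tb_end() hunk takes the final instruction count from the translator and clamps the per-TB record to it, because the translation loop may abandon instructions after they were already announced to plugins. A tiny sketch of that clamp, with a hypothetical struct mirroring the lines in the hunk:

#include <assert.h>
#include <stddef.h>

struct plugin_tb {
    size_t n;                    /* instructions announced during translation */
};

static void plugin_tb_end(struct plugin_tb *ptb, size_t num_insns)
{
    /* the translator can only drop trailing instructions, never add new ones */
    assert(num_insns <= ptb->n);
    ptb->n = num_insns;          /* instrument only what was really emitted */
}

int main(void)
{
    struct plugin_tb tb = { .n = 5 };
    plugin_tb_end(&tb, 4);       /* translator dropped the last instruction */
    return tb.n == 4 ? 0 : 1;
}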


@@ -34,7 +34,6 @@
#include "qemu/timer.h" #include "qemu/timer.h"
#include "exec/exec-all.h" #include "exec/exec-all.h"
#include "exec/hwaddr.h" #include "exec/hwaddr.h"
#include "exec/tb-flush.h"
#include "exec/gdbstub.h" #include "exec/gdbstub.h"
#include "tcg-accel-ops.h" #include "tcg-accel-ops.h"
@@ -78,13 +77,6 @@ int tcg_cpus_exec(CPUState *cpu)
return ret; return ret;
} }
static void tcg_cpu_reset_hold(CPUState *cpu)
{
tcg_flush_jmp_cache(cpu);
tlb_flush(cpu);
}
/* mask must never be zero, except for A20 change call */ /* mask must never be zero, except for A20 change call */
void tcg_handle_interrupt(CPUState *cpu, int mask) void tcg_handle_interrupt(CPUState *cpu, int mask)
{ {
@@ -213,7 +205,6 @@ static void tcg_accel_ops_init(AccelOpsClass *ops)
} }
} }
ops->cpu_reset_hold = tcg_cpu_reset_hold;
ops->supports_guest_debug = tcg_supports_guest_debug; ops->supports_guest_debug = tcg_supports_guest_debug;
ops->insert_breakpoint = tcg_insert_breakpoint; ops->insert_breakpoint = tcg_insert_breakpoint;
ops->remove_breakpoint = tcg_remove_breakpoint; ops->remove_breakpoint = tcg_remove_breakpoint;


@@ -645,6 +645,133 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
cpu_loop_exit_noexc(cpu); cpu_loop_exit_noexc(cpu);
} }
static void print_qht_statistics(struct qht_stats hst, GString *buf)
{
uint32_t hgram_opts;
size_t hgram_bins;
char *hgram;
if (!hst.head_buckets) {
return;
}
g_string_append_printf(buf, "TB hash buckets %zu/%zu "
"(%0.2f%% head buckets used)\n",
hst.used_head_buckets, hst.head_buckets,
(double)hst.used_head_buckets /
hst.head_buckets * 100);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_opts |= QDIST_PR_100X | QDIST_PR_PERCENT;
if (qdist_xmax(&hst.occupancy) - qdist_xmin(&hst.occupancy) == 1) {
hgram_opts |= QDIST_PR_NODECIMAL;
}
hgram = qdist_pr(&hst.occupancy, 10, hgram_opts);
g_string_append_printf(buf, "TB hash occupancy %0.2f%% avg chain occ. "
"Histogram: %s\n",
qdist_avg(&hst.occupancy) * 100, hgram);
g_free(hgram);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_bins = qdist_xmax(&hst.chain) - qdist_xmin(&hst.chain);
if (hgram_bins > 10) {
hgram_bins = 10;
} else {
hgram_bins = 0;
hgram_opts |= QDIST_PR_NODECIMAL | QDIST_PR_NOBINRANGE;
}
hgram = qdist_pr(&hst.chain, hgram_bins, hgram_opts);
g_string_append_printf(buf, "TB hash avg chain %0.3f buckets. "
"Histogram: %s\n",
qdist_avg(&hst.chain), hgram);
g_free(hgram);
}
struct tb_tree_stats {
size_t nb_tbs;
size_t host_size;
size_t target_size;
size_t max_target_size;
size_t direct_jmp_count;
size_t direct_jmp2_count;
size_t cross_page;
};
static gboolean tb_tree_stats_iter(gpointer key, gpointer value, gpointer data)
{
const TranslationBlock *tb = value;
struct tb_tree_stats *tst = data;
tst->nb_tbs++;
tst->host_size += tb->tc.size;
tst->target_size += tb->size;
if (tb->size > tst->max_target_size) {
tst->max_target_size = tb->size;
}
if (tb_page_addr1(tb) != -1) {
tst->cross_page++;
}
if (tb->jmp_reset_offset[0] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp_count++;
if (tb->jmp_reset_offset[1] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp2_count++;
}
}
return false;
}
void dump_exec_info(GString *buf)
{
struct tb_tree_stats tst = {};
struct qht_stats hst;
size_t nb_tbs, flush_full, flush_part, flush_elide;
tcg_tb_foreach(tb_tree_stats_iter, &tst);
nb_tbs = tst.nb_tbs;
/* XXX: avoid using doubles ? */
g_string_append_printf(buf, "Translation buffer state:\n");
/*
* Report total code size including the padding and TB structs;
* otherwise users might think "-accel tcg,tb-size" is not honoured.
* For avg host size we use the precise numbers from tb_tree_stats though.
*/
g_string_append_printf(buf, "gen code size %zu/%zu\n",
tcg_code_size(), tcg_code_capacity());
g_string_append_printf(buf, "TB count %zu\n", nb_tbs);
g_string_append_printf(buf, "TB avg target size %zu max=%zu bytes\n",
nb_tbs ? tst.target_size / nb_tbs : 0,
tst.max_target_size);
g_string_append_printf(buf, "TB avg host size %zu bytes "
"(expansion ratio: %0.1f)\n",
nb_tbs ? tst.host_size / nb_tbs : 0,
tst.target_size ?
(double)tst.host_size / tst.target_size : 0);
g_string_append_printf(buf, "cross page TB count %zu (%zu%%)\n",
tst.cross_page,
nb_tbs ? (tst.cross_page * 100) / nb_tbs : 0);
g_string_append_printf(buf, "direct jump count %zu (%zu%%) "
"(2 jumps=%zu %zu%%)\n",
tst.direct_jmp_count,
nb_tbs ? (tst.direct_jmp_count * 100) / nb_tbs : 0,
tst.direct_jmp2_count,
nb_tbs ? (tst.direct_jmp2_count * 100) / nb_tbs : 0);
qht_statistics_init(&tb_ctx.htable, &hst);
print_qht_statistics(hst, buf);
qht_statistics_destroy(&hst);
g_string_append_printf(buf, "\nStatistics:\n");
g_string_append_printf(buf, "TB flush count %u\n",
qatomic_read(&tb_ctx.tb_flush_count));
g_string_append_printf(buf, "TB invalidate count %u\n",
qatomic_read(&tb_ctx.tb_phys_invalidate_count));
tlb_flush_counts(&flush_full, &flush_part, &flush_elide);
g_string_append_printf(buf, "TLB full flushes %zu\n", flush_full);
g_string_append_printf(buf, "TLB partial flushes %zu\n", flush_part);
g_string_append_printf(buf, "TLB elided flushes %zu\n", flush_elide);
tcg_dump_info(buf);
}
#else /* CONFIG_USER_ONLY */ #else /* CONFIG_USER_ONLY */
void cpu_interrupt(CPUState *cpu, int mask) void cpu_interrupt(CPUState *cpu, int mask)
@@ -673,3 +800,11 @@ void tcg_flush_jmp_cache(CPUState *cpu)
qatomic_set(&jc->array[i].tb, NULL); qatomic_set(&jc->array[i].tb, NULL);
} }
} }
/* This is a wrapper for common code that can not use CONFIG_SOFTMMU */
void tcg_flush_softmmu_tlb(CPUState *cs)
{
#ifdef CONFIG_SOFTMMU
tlb_flush(cs);
#endif
}


@@ -158,7 +158,6 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
} else { } else {
plugin_enabled = plugin_gen_tb_start(cpu, db, false); plugin_enabled = plugin_gen_tb_start(cpu, db, false);
} }
db->plugin_enabled = plugin_enabled;
while (true) { while (true) {
*max_insns = ++db->num_insns; *max_insns = ++db->num_insns;
@@ -210,7 +209,7 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
gen_tb_end(tb, cflags, icount_start_insn, db->num_insns); gen_tb_end(tb, cflags, icount_start_insn, db->num_insns);
if (plugin_enabled) { if (plugin_enabled) {
plugin_gen_tb_end(cpu, db->num_insns); plugin_gen_tb_end(cpu);
} }
/* The disas_log hook may use these values rather than recompute. */ /* The disas_log hook may use these values rather than recompute. */


@@ -14,10 +14,6 @@ void qemu_init_vcpu(CPUState *cpu)
{ {
} }
void cpu_exec_reset_hold(CPUState *cpu)
{
}
/* User mode emulation does not support record/replay yet. */ /* User mode emulation does not support record/replay yet. */
bool replay_exception(void) bool replay_exception(void)


@@ -1781,7 +1781,7 @@ static AudioState *audio_init(Audiodev *dev, Error **errp)
QTAILQ_INSERT_TAIL(&audio_states, s, list); QTAILQ_INSERT_TAIL(&audio_states, s, list);
QLIST_INIT (&s->card_head); QLIST_INIT (&s->card_head);
vmstate_register_any(NULL, &vmstate_audio, s); vmstate_register (NULL, 0, &vmstate_audio, s);
return s; return s;
out: out:
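Note: this audio hunk, and the dbus-vmstate and TPM emulator hunks further down, all toggle between the vmstate_register_any() shorthand and an explicit vmstate_register() call with a spelled-out instance id (0 here, VMSTATE_INSTANCE_ID_ANY in the other two). As far as the diff shows, the shorthand just hardwires the wildcard id. A sketch of that relationship with simplified types and a placeholder wildcard value, not the real QEMU signatures:

#include <stdio.h>

#define INSTANCE_ID_ANY  (-1)    /* wildcard value chosen only for this sketch */

typedef struct VMStateDescription { const char *name; } VMStateDescription;

/* stand-in for the real registration core */
static int vmstate_register_with_id(void *obj, int instance_id,
                                    const VMStateDescription *vmsd, void *opaque)
{
    (void)obj; (void)opaque;
    printf("register %s, instance id %d\n", vmsd->name, instance_id);
    return 0;
}

/* the "_any" form: let the core pick or ignore the instance id */
static int vmstate_register_any_sketch(void *obj,
                                       const VMStateDescription *vmsd,
                                       void *opaque)
{
    return vmstate_register_with_id(obj, INSTANCE_ID_ANY, vmsd, opaque);
}

int main(void)
{
    VMStateDescription vmsd = { .name = "demo" };
    return vmstate_register_any_sketch(NULL, &vmsd, NULL);
}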


@@ -97,10 +97,6 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
dolog ("WAVE files can not handle 32bit formats\n"); dolog ("WAVE files can not handle 32bit formats\n");
return -1; return -1;
case AUDIO_FORMAT_F32:
dolog("WAVE files can not handle float formats\n");
return -1;
default: default:
abort(); abort();
} }


@@ -426,7 +426,8 @@ dbus_vmstate_complete(UserCreatable *uc, Error **errp)
return; return;
} }
if (vmstate_register_any(VMSTATE_IF(self), &dbus_vmstate, self) < 0) { if (vmstate_register(VMSTATE_IF(self), VMSTATE_INSTANCE_ID_ANY,
&dbus_vmstate, self) < 0) {
error_setg(errp, "Failed to register vmstate"); error_setg(errp, "Failed to register vmstate");
} }
} }


@@ -534,8 +534,11 @@ static int tpm_emulator_block_migration(TPMEmulator *tpm_emu)
error_setg(&tpm_emu->migration_blocker, error_setg(&tpm_emu->migration_blocker,
"Migration disabled: TPM emulator does not support " "Migration disabled: TPM emulator does not support "
"migration"); "migration");
if (migrate_add_blocker(&tpm_emu->migration_blocker, &err) < 0) { if (migrate_add_blocker(tpm_emu->migration_blocker, &err) < 0) {
error_report_err(err); error_report_err(err);
error_free(tpm_emu->migration_blocker);
tpm_emu->migration_blocker = NULL;
return -1; return -1;
} }
} }
@@ -975,7 +978,8 @@ static void tpm_emulator_inst_init(Object *obj)
qemu_add_vm_change_state_handler(tpm_emulator_vm_state_change, qemu_add_vm_change_state_handler(tpm_emulator_vm_state_change,
tpm_emu); tpm_emu);
vmstate_register_any(NULL, &vmstate_tpm_emulator, obj); vmstate_register(NULL, VMSTATE_INSTANCE_ID_ANY,
&vmstate_tpm_emulator, obj);
} }
/* /*
@@ -1012,7 +1016,10 @@ static void tpm_emulator_inst_finalize(Object *obj)
qapi_free_TPMEmulatorOptions(tpm_emu->options); qapi_free_TPMEmulatorOptions(tpm_emu->options);
migrate_del_blocker(&tpm_emu->migration_blocker); if (tpm_emu->migration_blocker) {
migrate_del_blocker(tpm_emu->migration_blocker);
error_free(tpm_emu->migration_blocker);
}
tpm_sized_buffer_reset(&state_blobs->volatil); tpm_sized_buffer_reset(&state_blobs->volatil);
tpm_sized_buffer_reset(&state_blobs->permanent); tpm_sized_buffer_reset(&state_blobs->permanent);

block.c

@@ -279,8 +279,7 @@ bool bdrv_is_read_only(BlockDriverState *bs)
return !(bs->open_flags & BDRV_O_RDWR); return !(bs->open_flags & BDRV_O_RDWR);
} }
static int GRAPH_RDLOCK static int bdrv_can_set_read_only(BlockDriverState *bs, bool read_only,
bdrv_can_set_read_only(BlockDriverState *bs, bool read_only,
bool ignore_allow_rdw, Error **errp) bool ignore_allow_rdw, Error **errp)
{ {
IO_CODE(); IO_CODE();
@@ -372,8 +371,7 @@ char *bdrv_get_full_backing_filename_from_filename(const char *backed,
* setting @errp. In all other cases, NULL will only be returned with * setting @errp. In all other cases, NULL will only be returned with
* @errp set. * @errp set.
*/ */
static char * GRAPH_RDLOCK static char *bdrv_make_absolute_filename(BlockDriverState *relative_to,
bdrv_make_absolute_filename(BlockDriverState *relative_to,
const char *filename, Error **errp) const char *filename, Error **errp)
{ {
char *dir, *full_name; char *dir, *full_name;
@@ -820,17 +818,12 @@ int bdrv_probe_blocksizes(BlockDriverState *bs, BlockSizes *bsz)
int bdrv_probe_geometry(BlockDriverState *bs, HDGeometry *geo) int bdrv_probe_geometry(BlockDriverState *bs, HDGeometry *geo)
{ {
BlockDriver *drv = bs->drv; BlockDriver *drv = bs->drv;
BlockDriverState *filtered; BlockDriverState *filtered = bdrv_filter_bs(bs);
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (drv && drv->bdrv_probe_geometry) { if (drv && drv->bdrv_probe_geometry) {
return drv->bdrv_probe_geometry(bs, geo); return drv->bdrv_probe_geometry(bs, geo);
} } else if (filtered) {
filtered = bdrv_filter_bs(bs);
if (filtered) {
return bdrv_probe_geometry(filtered, geo); return bdrv_probe_geometry(filtered, geo);
} }
@@ -1199,19 +1192,19 @@ static char *bdrv_child_get_parent_desc(BdrvChild *c)
return g_strdup_printf("node '%s'", bdrv_get_node_name(parent)); return g_strdup_printf("node '%s'", bdrv_get_node_name(parent));
} }
static void GRAPH_RDLOCK bdrv_child_cb_drained_begin(BdrvChild *child) static void bdrv_child_cb_drained_begin(BdrvChild *child)
{ {
BlockDriverState *bs = child->opaque; BlockDriverState *bs = child->opaque;
bdrv_do_drained_begin_quiesce(bs, NULL); bdrv_do_drained_begin_quiesce(bs, NULL);
} }
static bool GRAPH_RDLOCK bdrv_child_cb_drained_poll(BdrvChild *child) static bool bdrv_child_cb_drained_poll(BdrvChild *child)
{ {
BlockDriverState *bs = child->opaque; BlockDriverState *bs = child->opaque;
return bdrv_drain_poll(bs, NULL, false); return bdrv_drain_poll(bs, NULL, false);
} }
static void GRAPH_RDLOCK bdrv_child_cb_drained_end(BdrvChild *child) static void bdrv_child_cb_drained_end(BdrvChild *child)
{ {
BlockDriverState *bs = child->opaque; BlockDriverState *bs = child->opaque;
bdrv_drained_end(bs); bdrv_drained_end(bs);
@@ -1257,7 +1250,7 @@ static void bdrv_temp_snapshot_options(int *child_flags, QDict *child_options,
*child_flags &= ~BDRV_O_NATIVE_AIO; *child_flags &= ~BDRV_O_NATIVE_AIO;
} }
static void GRAPH_WRLOCK bdrv_backing_attach(BdrvChild *c) static void bdrv_backing_attach(BdrvChild *c)
{ {
BlockDriverState *parent = c->opaque; BlockDriverState *parent = c->opaque;
BlockDriverState *backing_hd = c->bs; BlockDriverState *backing_hd = c->bs;
@@ -1707,14 +1700,12 @@ bdrv_open_driver(BlockDriverState *bs, BlockDriver *drv, const char *node_name,
return 0; return 0;
open_failed: open_failed:
bs->drv = NULL; bs->drv = NULL;
bdrv_graph_wrlock(NULL);
if (bs->file != NULL) { if (bs->file != NULL) {
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, bs->file); bdrv_unref_child(bs, bs->file);
bdrv_graph_wrunlock();
assert(!bs->file); assert(!bs->file);
} }
bdrv_graph_wrunlock();
g_free(bs->opaque); g_free(bs->opaque);
bs->opaque = NULL; bs->opaque = NULL;
return ret; return ret;
@@ -1856,12 +1847,9 @@ static int bdrv_open_common(BlockDriverState *bs, BlockBackend *file,
Error *local_err = NULL; Error *local_err = NULL;
bool ro; bool ro;
GLOBAL_STATE_CODE();
bdrv_graph_rdlock_main_loop();
assert(bs->file == NULL); assert(bs->file == NULL);
assert(options != NULL && bs->options != options); assert(options != NULL && bs->options != options);
bdrv_graph_rdunlock_main_loop(); GLOBAL_STATE_CODE();
opts = qemu_opts_create(&bdrv_runtime_opts, NULL, 0, &error_abort); opts = qemu_opts_create(&bdrv_runtime_opts, NULL, 0, &error_abort);
if (!qemu_opts_absorb_qdict(opts, options, errp)) { if (!qemu_opts_absorb_qdict(opts, options, errp)) {
@@ -1886,10 +1874,7 @@ static int bdrv_open_common(BlockDriverState *bs, BlockBackend *file,
} }
if (file != NULL) { if (file != NULL) {
bdrv_graph_rdlock_main_loop();
bdrv_refresh_filename(blk_bs(file)); bdrv_refresh_filename(blk_bs(file));
bdrv_graph_rdunlock_main_loop();
filename = blk_bs(file)->filename; filename = blk_bs(file)->filename;
} else { } else {
/* /*
@@ -1916,9 +1901,7 @@ static int bdrv_open_common(BlockDriverState *bs, BlockBackend *file,
if (use_bdrv_whitelist && !bdrv_is_whitelisted(drv, ro)) { if (use_bdrv_whitelist && !bdrv_is_whitelisted(drv, ro)) {
if (!ro && bdrv_is_whitelisted(drv, true)) { if (!ro && bdrv_is_whitelisted(drv, true)) {
bdrv_graph_rdlock_main_loop();
ret = bdrv_apply_auto_read_only(bs, NULL, NULL); ret = bdrv_apply_auto_read_only(bs, NULL, NULL);
bdrv_graph_rdunlock_main_loop();
} else { } else {
ret = -ENOTSUP; ret = -ENOTSUP;
} }
@@ -2983,8 +2966,6 @@ static void bdrv_child_free(BdrvChild *child)
{ {
assert(!child->bs); assert(!child->bs);
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
assert(!child->next.le_prev); /* not in children list */ assert(!child->next.le_prev); /* not in children list */
g_free(child->name); g_free(child->name);
@@ -3219,6 +3200,8 @@ BdrvChild *bdrv_root_attach_child(BlockDriverState *child_bs,
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
bdrv_graph_wrlock(child_bs);
child = bdrv_attach_child_common(child_bs, child_name, child_class, child = bdrv_attach_child_common(child_bs, child_name, child_class,
child_role, perm, shared_perm, opaque, child_role, perm, shared_perm, opaque,
tran, errp); tran, errp);
@@ -3231,8 +3214,9 @@ BdrvChild *bdrv_root_attach_child(BlockDriverState *child_bs,
out: out:
tran_finalize(tran, ret); tran_finalize(tran, ret);
bdrv_graph_wrunlock();
bdrv_schedule_unref(child_bs); bdrv_unref(child_bs);
return ret < 0 ? NULL : child; return ret < 0 ? NULL : child;
} }
@@ -3537,7 +3521,19 @@ out:
* *
* If a backing child is already present (i.e. we're detaching a node), that * If a backing child is already present (i.e. we're detaching a node), that
* child node must be drained. * child node must be drained.
*
* After calling this function, the transaction @tran may only be completed
* while holding a writer lock for the graph.
*/ */
static int GRAPH_WRLOCK
bdrv_set_backing_noperm(BlockDriverState *bs,
BlockDriverState *backing_hd,
Transaction *tran, Error **errp)
{
GLOBAL_STATE_CODE();
return bdrv_set_file_or_backing_noperm(bs, backing_hd, true, tran, errp);
}
int bdrv_set_backing_hd_drained(BlockDriverState *bs, int bdrv_set_backing_hd_drained(BlockDriverState *bs,
BlockDriverState *backing_hd, BlockDriverState *backing_hd,
Error **errp) Error **errp)
@@ -3550,8 +3546,9 @@ int bdrv_set_backing_hd_drained(BlockDriverState *bs,
if (bs->backing) { if (bs->backing) {
assert(bs->backing->bs->quiesce_counter > 0); assert(bs->backing->bs->quiesce_counter > 0);
} }
bdrv_graph_wrlock(backing_hd);
ret = bdrv_set_file_or_backing_noperm(bs, backing_hd, true, tran, errp); ret = bdrv_set_backing_noperm(bs, backing_hd, tran, errp);
if (ret < 0) { if (ret < 0) {
goto out; goto out;
} }
@@ -3559,25 +3556,20 @@ int bdrv_set_backing_hd_drained(BlockDriverState *bs,
ret = bdrv_refresh_perms(bs, tran, errp); ret = bdrv_refresh_perms(bs, tran, errp);
out: out:
tran_finalize(tran, ret); tran_finalize(tran, ret);
bdrv_graph_wrunlock();
return ret; return ret;
} }
int bdrv_set_backing_hd(BlockDriverState *bs, BlockDriverState *backing_hd, int bdrv_set_backing_hd(BlockDriverState *bs, BlockDriverState *backing_hd,
Error **errp) Error **errp)
{ {
BlockDriverState *drain_bs; BlockDriverState *drain_bs = bs->backing ? bs->backing->bs : bs;
int ret; int ret;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
bdrv_graph_rdlock_main_loop();
drain_bs = bs->backing ? bs->backing->bs : bs;
bdrv_graph_rdunlock_main_loop();
bdrv_ref(drain_bs); bdrv_ref(drain_bs);
bdrv_drained_begin(drain_bs); bdrv_drained_begin(drain_bs);
bdrv_graph_wrlock(backing_hd);
ret = bdrv_set_backing_hd_drained(bs, backing_hd, errp); ret = bdrv_set_backing_hd_drained(bs, backing_hd, errp);
bdrv_graph_wrunlock();
bdrv_drained_end(drain_bs); bdrv_drained_end(drain_bs);
bdrv_unref(drain_bs); bdrv_unref(drain_bs);
@@ -3611,7 +3603,6 @@ int bdrv_open_backing_file(BlockDriverState *bs, QDict *parent_options,
Error *local_err = NULL; Error *local_err = NULL;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (bs->backing != NULL) { if (bs->backing != NULL) {
goto free_exit; goto free_exit;
@@ -4323,8 +4314,8 @@ static int bdrv_reset_options_allowed(BlockDriverState *bs,
/* /*
* Returns true if @child can be reached recursively from @bs * Returns true if @child can be reached recursively from @bs
*/ */
-static bool GRAPH_RDLOCK
-bdrv_recurse_has_child(BlockDriverState *bs, BlockDriverState *child)
+static bool bdrv_recurse_has_child(BlockDriverState *bs,
+BlockDriverState *child)
{ {
BdrvChild *c; BdrvChild *c;
@@ -4365,11 +4356,14 @@ bdrv_recurse_has_child(BlockDriverState *bs, BlockDriverState *child)
* *
* To be called with bs->aio_context locked. * To be called with bs->aio_context locked.
*/ */
-static BlockReopenQueue * GRAPH_RDLOCK
-bdrv_reopen_queue_child(BlockReopenQueue *bs_queue, BlockDriverState *bs,
-QDict *options, const BdrvChildClass *klass,
-BdrvChildRole role, bool parent_is_format,
-QDict *parent_options, int parent_flags,
-bool keep_old_opts)
+static BlockReopenQueue *bdrv_reopen_queue_child(BlockReopenQueue *bs_queue,
+BlockDriverState *bs,
+QDict *options,
+const BdrvChildClass *klass,
+BdrvChildRole role,
+bool parent_is_format,
+QDict *parent_options,
+int parent_flags,
+bool keep_old_opts)
{ {
assert(bs != NULL); assert(bs != NULL);
@@ -4382,11 +4376,6 @@ bdrv_reopen_queue_child(BlockReopenQueue *bs_queue, BlockDriverState *bs,
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
/*
* Strictly speaking, draining is illegal under GRAPH_RDLOCK. We know that
* we've been called with bdrv_graph_rdlock_main_loop(), though, so it's ok
* in practice.
*/
bdrv_drained_begin(bs); bdrv_drained_begin(bs);
if (bs_queue == NULL) { if (bs_queue == NULL) {
@@ -4528,7 +4517,6 @@ BlockReopenQueue *bdrv_reopen_queue(BlockReopenQueue *bs_queue,
QDict *options, bool keep_old_opts) QDict *options, bool keep_old_opts)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
return bdrv_reopen_queue_child(bs_queue, bs, options, NULL, 0, false, return bdrv_reopen_queue_child(bs_queue, bs, options, NULL, 0, false,
NULL, 0, keep_old_opts); NULL, 0, keep_old_opts);
@@ -4748,20 +4736,18 @@ int bdrv_reopen_set_read_only(BlockDriverState *bs, bool read_only,
* Callers must make sure that their AioContext locking is still correct after * Callers must make sure that their AioContext locking is still correct after
* this. * this.
*/ */
-static int GRAPH_UNLOCKED
-bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
+static int bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
bool is_backing, Transaction *tran,
Error **errp)
{
BlockDriverState *bs = reopen_state->bs;
BlockDriverState *new_child_bs;
-BlockDriverState *old_child_bs;
+BlockDriverState *old_child_bs = is_backing ? child_bs(bs->backing) :
+child_bs(bs->file);
const char *child_name = is_backing ? "backing" : "file"; const char *child_name = is_backing ? "backing" : "file";
QObject *value; QObject *value;
const char *str; const char *str;
AioContext *ctx, *old_ctx; AioContext *ctx, *old_ctx;
bool has_child;
int ret; int ret;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
@@ -4771,8 +4757,6 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
return 0; return 0;
} }
bdrv_graph_rdlock_main_loop();
switch (qobject_type(value)) { switch (qobject_type(value)) {
case QTYPE_QNULL: case QTYPE_QNULL:
assert(is_backing); /* The 'file' option does not allow a null value */ assert(is_backing); /* The 'file' option does not allow a null value */
@@ -4782,16 +4766,11 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
str = qstring_get_str(qobject_to(QString, value)); str = qstring_get_str(qobject_to(QString, value));
new_child_bs = bdrv_lookup_bs(NULL, str, errp); new_child_bs = bdrv_lookup_bs(NULL, str, errp);
if (new_child_bs == NULL) { if (new_child_bs == NULL) {
ret = -EINVAL; return -EINVAL;
goto out_rdlock; } else if (bdrv_recurse_has_child(new_child_bs, bs)) {
}
has_child = bdrv_recurse_has_child(new_child_bs, bs);
if (has_child) {
error_setg(errp, "Making '%s' a %s child of '%s' would create a " error_setg(errp, "Making '%s' a %s child of '%s' would create a "
"cycle", str, child_name, bs->node_name); "cycle", str, child_name, bs->node_name);
ret = -EINVAL; return -EINVAL;
goto out_rdlock;
} }
break; break;
default: default:
@@ -4802,23 +4781,19 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
g_assert_not_reached(); g_assert_not_reached();
} }
old_child_bs = is_backing ? child_bs(bs->backing) : child_bs(bs->file);
if (old_child_bs == new_child_bs) { if (old_child_bs == new_child_bs) {
ret = 0; return 0;
goto out_rdlock;
} }
if (old_child_bs) { if (old_child_bs) {
if (bdrv_skip_implicit_filters(old_child_bs) == new_child_bs) { if (bdrv_skip_implicit_filters(old_child_bs) == new_child_bs) {
ret = 0; return 0;
goto out_rdlock;
} }
if (old_child_bs->implicit) { if (old_child_bs->implicit) {
error_setg(errp, "Cannot replace implicit %s child of %s", error_setg(errp, "Cannot replace implicit %s child of %s",
child_name, bs->node_name); child_name, bs->node_name);
ret = -EPERM; return -EPERM;
goto out_rdlock;
} }
} }
@@ -4829,8 +4804,7 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
*/ */
error_setg(errp, "'%s' is a %s filter node that does not support a " error_setg(errp, "'%s' is a %s filter node that does not support a "
"%s child", bs->node_name, bs->drv->format_name, child_name); "%s child", bs->node_name, bs->drv->format_name, child_name);
ret = -EINVAL; return -EINVAL;
goto out_rdlock;
} }
if (is_backing) { if (is_backing) {
@@ -4851,7 +4825,6 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
aio_context_acquire(ctx); aio_context_acquire(ctx);
} }
bdrv_graph_rdunlock_main_loop();
bdrv_graph_wrlock(new_child_bs); bdrv_graph_wrlock(new_child_bs);
ret = bdrv_set_file_or_backing_noperm(bs, new_child_bs, is_backing, ret = bdrv_set_file_or_backing_noperm(bs, new_child_bs, is_backing,
@@ -4870,10 +4843,6 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
} }
return ret; return ret;
out_rdlock:
bdrv_graph_rdunlock_main_loop();
return ret;
} }
/* /*
@@ -4897,8 +4866,8 @@ out_rdlock:
* After calling this function, the transaction @change_child_tran may only be * After calling this function, the transaction @change_child_tran may only be
* completed while holding a writer lock for the graph. * completed while holding a writer lock for the graph.
*/ */
-static int GRAPH_UNLOCKED
-bdrv_reopen_prepare(BDRVReopenState *reopen_state, BlockReopenQueue *queue,
-Transaction *change_child_tran, Error **errp)
+static int bdrv_reopen_prepare(BDRVReopenState *reopen_state,
+BlockReopenQueue *queue,
+Transaction *change_child_tran, Error **errp)
{ {
int ret = -1; int ret = -1;
@@ -4961,10 +4930,7 @@ bdrv_reopen_prepare(BDRVReopenState *reopen_state, BlockReopenQueue *queue,
* to r/w. Attempting to set to r/w may fail if either BDRV_O_ALLOW_RDWR is * to r/w. Attempting to set to r/w may fail if either BDRV_O_ALLOW_RDWR is
* not set, or if the BDS still has copy_on_read enabled */ * not set, or if the BDS still has copy_on_read enabled */
read_only = !(reopen_state->flags & BDRV_O_RDWR); read_only = !(reopen_state->flags & BDRV_O_RDWR);
bdrv_graph_rdlock_main_loop();
ret = bdrv_can_set_read_only(reopen_state->bs, read_only, true, &local_err); ret = bdrv_can_set_read_only(reopen_state->bs, read_only, true, &local_err);
bdrv_graph_rdunlock_main_loop();
if (local_err) { if (local_err) {
error_propagate(errp, local_err); error_propagate(errp, local_err);
goto error; goto error;
@@ -4987,9 +4953,7 @@ bdrv_reopen_prepare(BDRVReopenState *reopen_state, BlockReopenQueue *queue,
if (local_err != NULL) { if (local_err != NULL) {
error_propagate(errp, local_err); error_propagate(errp, local_err);
} else { } else {
bdrv_graph_rdlock_main_loop();
bdrv_refresh_filename(reopen_state->bs); bdrv_refresh_filename(reopen_state->bs);
bdrv_graph_rdunlock_main_loop();
error_setg(errp, "failed while preparing to reopen image '%s'", error_setg(errp, "failed while preparing to reopen image '%s'",
reopen_state->bs->filename); reopen_state->bs->filename);
} }
@@ -4998,11 +4962,9 @@ bdrv_reopen_prepare(BDRVReopenState *reopen_state, BlockReopenQueue *queue,
} else { } else {
/* It is currently mandatory to have a bdrv_reopen_prepare() /* It is currently mandatory to have a bdrv_reopen_prepare()
* handler for each supported drv. */ * handler for each supported drv. */
bdrv_graph_rdlock_main_loop();
error_setg(errp, "Block format '%s' used by node '%s' " error_setg(errp, "Block format '%s' used by node '%s' "
"does not support reopening files", drv->format_name, "does not support reopening files", drv->format_name,
bdrv_get_device_or_node_name(reopen_state->bs)); bdrv_get_device_or_node_name(reopen_state->bs));
bdrv_graph_rdunlock_main_loop();
ret = -1; ret = -1;
goto error; goto error;
} }
@@ -5014,16 +4976,13 @@ bdrv_reopen_prepare(BDRVReopenState *reopen_state, BlockReopenQueue *queue,
* file or if the image file has a backing file name as part of * file or if the image file has a backing file name as part of
* its metadata. Otherwise the 'backing' option can be omitted. * its metadata. Otherwise the 'backing' option can be omitted.
*/ */
bdrv_graph_rdlock_main_loop();
if (drv->supports_backing && reopen_state->backing_missing && if (drv->supports_backing && reopen_state->backing_missing &&
(reopen_state->bs->backing || reopen_state->bs->backing_file[0])) { (reopen_state->bs->backing || reopen_state->bs->backing_file[0])) {
error_setg(errp, "backing is missing for '%s'", error_setg(errp, "backing is missing for '%s'",
reopen_state->bs->node_name); reopen_state->bs->node_name);
bdrv_graph_rdunlock_main_loop();
ret = -EINVAL; ret = -EINVAL;
goto error; goto error;
} }
bdrv_graph_rdunlock_main_loop();
/* /*
* Allow changing the 'backing' option. The new value can be * Allow changing the 'backing' option. The new value can be
@@ -5051,8 +5010,6 @@ bdrv_reopen_prepare(BDRVReopenState *reopen_state, BlockReopenQueue *queue,
if (qdict_size(reopen_state->options)) { if (qdict_size(reopen_state->options)) {
const QDictEntry *entry = qdict_first(reopen_state->options); const QDictEntry *entry = qdict_first(reopen_state->options);
GRAPH_RDLOCK_GUARD_MAINLOOP();
do { do {
QObject *new = entry->value; QObject *new = entry->value;
QObject *old = qdict_get(reopen_state->bs->options, entry->key); QObject *old = qdict_get(reopen_state->bs->options, entry->key);
@@ -5126,7 +5083,7 @@ error:
* makes them final by swapping the staging BlockDriverState contents into * makes them final by swapping the staging BlockDriverState contents into
* the active BlockDriverState contents. * the active BlockDriverState contents.
*/ */
static void GRAPH_UNLOCKED bdrv_reopen_commit(BDRVReopenState *reopen_state) static void bdrv_reopen_commit(BDRVReopenState *reopen_state)
{ {
BlockDriver *drv; BlockDriver *drv;
BlockDriverState *bs; BlockDriverState *bs;
@@ -5143,8 +5100,6 @@ static void GRAPH_UNLOCKED bdrv_reopen_commit(BDRVReopenState *reopen_state)
drv->bdrv_reopen_commit(reopen_state); drv->bdrv_reopen_commit(reopen_state);
} }
GRAPH_RDLOCK_GUARD_MAINLOOP();
/* set BDS specific flags now */ /* set BDS specific flags now */
qobject_unref(bs->explicit_options); qobject_unref(bs->explicit_options);
qobject_unref(bs->options); qobject_unref(bs->options);
@@ -5166,7 +5121,9 @@ static void GRAPH_UNLOCKED bdrv_reopen_commit(BDRVReopenState *reopen_state)
qdict_del(bs->explicit_options, "backing"); qdict_del(bs->explicit_options, "backing");
qdict_del(bs->options, "backing"); qdict_del(bs->options, "backing");
bdrv_graph_rdlock_main_loop();
bdrv_refresh_limits(bs, NULL, NULL); bdrv_refresh_limits(bs, NULL, NULL);
bdrv_graph_rdunlock_main_loop();
bdrv_refresh_total_sectors(bs, bs->total_sectors); bdrv_refresh_total_sectors(bs, bs->total_sectors);
} }
@@ -5174,7 +5131,7 @@ static void GRAPH_UNLOCKED bdrv_reopen_commit(BDRVReopenState *reopen_state)
* Abort the reopen, and delete and free the staged changes in * Abort the reopen, and delete and free the staged changes in
* reopen_state * reopen_state
*/ */
static void GRAPH_UNLOCKED bdrv_reopen_abort(BDRVReopenState *reopen_state) static void bdrv_reopen_abort(BDRVReopenState *reopen_state)
{ {
BlockDriver *drv; BlockDriver *drv;
@@ -5209,15 +5166,14 @@ static void bdrv_close(BlockDriverState *bs)
bs->drv = NULL; bs->drv = NULL;
} }
bdrv_graph_wrlock(bs); bdrv_graph_wrlock(NULL);
QLIST_FOREACH_SAFE(child, &bs->children, next, next) { QLIST_FOREACH_SAFE(child, &bs->children, next, next) {
bdrv_unref_child(bs, child); bdrv_unref_child(bs, child);
} }
bdrv_graph_wrunlock();
assert(!bs->backing); assert(!bs->backing);
assert(!bs->file); assert(!bs->file);
bdrv_graph_wrunlock();
g_free(bs->opaque); g_free(bs->opaque);
bs->opaque = NULL; bs->opaque = NULL;
qatomic_set(&bs->copy_on_read, 0); qatomic_set(&bs->copy_on_read, 0);
@@ -5422,9 +5378,6 @@ bdrv_replace_node_noperm(BlockDriverState *from,
} }
/* /*
* Switch all parents of @from to point to @to instead. @from and @to must be in
* the same AioContext and both must be drained.
*
* With auto_skip=true bdrv_replace_node_common skips updating from parents * With auto_skip=true bdrv_replace_node_common skips updating from parents
* if it creates a parent-child relation loop or if parent is block-job. * if it creates a parent-child relation loop or if parent is block-job.
* *
@@ -5434,9 +5387,10 @@ bdrv_replace_node_noperm(BlockDriverState *from,
* With @detach_subchain=true @to must be in a backing chain of @from. In this * With @detach_subchain=true @to must be in a backing chain of @from. In this
* case backing link of the cow-parent of @to is removed. * case backing link of the cow-parent of @to is removed.
*/ */
-static int GRAPH_WRLOCK
-bdrv_replace_node_common(BlockDriverState *from, BlockDriverState *to,
-bool auto_skip, bool detach_subchain, Error **errp)
+static int bdrv_replace_node_common(BlockDriverState *from,
+BlockDriverState *to,
+bool auto_skip, bool detach_subchain,
+Error **errp)
{ {
Transaction *tran = tran_new(); Transaction *tran = tran_new();
g_autoptr(GSList) refresh_list = NULL; g_autoptr(GSList) refresh_list = NULL;
@@ -5445,10 +5399,6 @@ bdrv_replace_node_common(BlockDriverState *from, BlockDriverState *to,
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
assert(from->quiesce_counter);
assert(to->quiesce_counter);
assert(bdrv_get_aio_context(from) == bdrv_get_aio_context(to));
if (detach_subchain) { if (detach_subchain) {
assert(bdrv_chain_contains(from, to)); assert(bdrv_chain_contains(from, to));
assert(from != to); assert(from != to);
@@ -5460,6 +5410,17 @@ bdrv_replace_node_common(BlockDriverState *from, BlockDriverState *to,
} }
} }
/* Make sure that @from doesn't go away until we have successfully attached
* all of its parents to @to. */
bdrv_ref(from);
assert(qemu_get_current_aio_context() == qemu_get_aio_context());
assert(bdrv_get_aio_context(from) == bdrv_get_aio_context(to));
bdrv_drained_begin(from);
bdrv_drained_begin(to);
bdrv_graph_wrlock(to);
/* /*
* Do the replacement without permission update. * Do the replacement without permission update.
* Replacement may influence the permissions, we should calculate new * Replacement may influence the permissions, we should calculate new
@@ -5488,33 +5449,29 @@ bdrv_replace_node_common(BlockDriverState *from, BlockDriverState *to,
out: out:
tran_finalize(tran, ret); tran_finalize(tran, ret);
bdrv_graph_wrunlock();
bdrv_drained_end(to);
bdrv_drained_end(from);
bdrv_unref(from);
return ret; return ret;
} }
int bdrv_replace_node(BlockDriverState *from, BlockDriverState *to, int bdrv_replace_node(BlockDriverState *from, BlockDriverState *to,
Error **errp) Error **errp)
{ {
GLOBAL_STATE_CODE();
return bdrv_replace_node_common(from, to, true, false, errp); return bdrv_replace_node_common(from, to, true, false, errp);
} }
int bdrv_drop_filter(BlockDriverState *bs, Error **errp) int bdrv_drop_filter(BlockDriverState *bs, Error **errp)
{ {
-BlockDriverState *child_bs;
-int ret;
GLOBAL_STATE_CODE();
-bdrv_graph_rdlock_main_loop();
-child_bs = bdrv_filter_or_cow_bs(bs);
-bdrv_graph_rdunlock_main_loop();
-bdrv_drained_begin(child_bs);
-bdrv_graph_wrlock(bs);
-ret = bdrv_replace_node_common(bs, child_bs, true, true, errp);
-bdrv_graph_wrunlock();
-bdrv_drained_end(child_bs);
-return ret;
+return bdrv_replace_node_common(bs, bdrv_filter_or_cow_bs(bs), true, true,
+errp);
} }
/* /*
@@ -5541,9 +5498,7 @@ int bdrv_append(BlockDriverState *bs_new, BlockDriverState *bs_top,
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
bdrv_graph_rdlock_main_loop();
assert(!bs_new->backing); assert(!bs_new->backing);
bdrv_graph_rdunlock_main_loop();
old_context = bdrv_get_aio_context(bs_top); old_context = bdrv_get_aio_context(bs_top);
bdrv_drained_begin(bs_top); bdrv_drained_begin(bs_top);
@@ -5711,19 +5666,9 @@ BlockDriverState *bdrv_insert_node(BlockDriverState *bs, QDict *options,
goto fail; goto fail;
} }
/*
* Make sure that @bs doesn't go away until we have successfully attached
* all of its parents to @new_node_bs and undrained it again.
*/
bdrv_ref(bs);
bdrv_drained_begin(bs); bdrv_drained_begin(bs);
bdrv_drained_begin(new_node_bs);
bdrv_graph_wrlock(new_node_bs);
ret = bdrv_replace_node(bs, new_node_bs, errp); ret = bdrv_replace_node(bs, new_node_bs, errp);
bdrv_graph_wrunlock();
bdrv_drained_end(new_node_bs);
bdrv_drained_end(bs); bdrv_drained_end(bs);
bdrv_unref(bs);
if (ret < 0) { if (ret < 0) {
error_prepend(errp, "Could not replace node: "); error_prepend(errp, "Could not replace node: ");
@@ -5769,14 +5714,13 @@ int coroutine_fn bdrv_co_check(BlockDriverState *bs,
* image file header * image file header
* -ENOTSUP - format driver doesn't support changing the backing file * -ENOTSUP - format driver doesn't support changing the backing file
*/ */
-int coroutine_fn
-bdrv_co_change_backing_file(BlockDriverState *bs, const char *backing_file,
+int bdrv_change_backing_file(BlockDriverState *bs, const char *backing_file,
const char *backing_fmt, bool require)
{
BlockDriver *drv = bs->drv;
int ret;
-IO_CODE();
+GLOBAL_STATE_CODE();
if (!drv) {
return -ENOMEDIUM;
@@ -5791,8 +5735,8 @@ bdrv_co_change_backing_file(BlockDriverState *bs, const char *backing_file,
return -EINVAL;
}
-if (drv->bdrv_co_change_backing_file != NULL) {
-ret = drv->bdrv_co_change_backing_file(bs, backing_file, backing_fmt);
+if (drv->bdrv_change_backing_file != NULL) {
+ret = drv->bdrv_change_backing_file(bs, backing_file, backing_fmt);
} else { } else {
ret = -ENOTSUP; ret = -ENOTSUP;
} }
@@ -5849,8 +5793,7 @@ BlockDriverState *bdrv_find_base(BlockDriverState *bs)
* between @bs and @base is frozen. @errp is set if that's the case. * between @bs and @base is frozen. @errp is set if that's the case.
* @base must be reachable from @bs, or NULL. * @base must be reachable from @bs, or NULL.
*/ */
-static bool GRAPH_RDLOCK
-bdrv_is_backing_chain_frozen(BlockDriverState *bs, BlockDriverState *base,
+bool bdrv_is_backing_chain_frozen(BlockDriverState *bs, BlockDriverState *base,
Error **errp)
{ {
BlockDriverState *i; BlockDriverState *i;
@@ -5975,15 +5918,14 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
bdrv_ref(top); bdrv_ref(top);
bdrv_drained_begin(base); bdrv_drained_begin(base);
bdrv_graph_wrlock(base);
if (!top->drv || !base->drv) { if (!top->drv || !base->drv) {
goto exit_wrlock; goto exit;
} }
/* Make sure that base is in the backing chain of top */ /* Make sure that base is in the backing chain of top */
if (!bdrv_chain_contains(top, base)) { if (!bdrv_chain_contains(top, base)) {
goto exit_wrlock; goto exit;
} }
/* If 'base' recursively inherits from 'top' then we should set /* If 'base' recursively inherits from 'top' then we should set
@@ -6000,9 +5942,11 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
backing_file_str = base->filename; backing_file_str = base->filename;
} }
bdrv_graph_rdlock_main_loop();
QLIST_FOREACH(c, &top->parents, next_parent) { QLIST_FOREACH(c, &top->parents, next_parent) {
updated_children = g_slist_prepend(updated_children, c); updated_children = g_slist_prepend(updated_children, c);
} }
bdrv_graph_rdunlock_main_loop();
/* /*
* It seems correct to pass detach_subchain=true here, but it triggers * It seems correct to pass detach_subchain=true here, but it triggers
@@ -6015,8 +5959,6 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
* That's a FIXME. * That's a FIXME.
*/ */
bdrv_replace_node_common(top, base, false, false, &local_err); bdrv_replace_node_common(top, base, false, false, &local_err);
bdrv_graph_wrunlock();
if (local_err) { if (local_err) {
error_report_err(local_err); error_report_err(local_err);
goto exit; goto exit;
@@ -6049,10 +5991,6 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
} }
ret = 0; ret = 0;
goto exit;
exit_wrlock:
bdrv_graph_wrunlock();
exit: exit:
bdrv_drained_end(base); bdrv_drained_end(base);
bdrv_unref(top); bdrv_unref(top);
@@ -6344,7 +6282,6 @@ BlockDeviceInfoList *bdrv_named_nodes_list(bool flat,
BlockDriverState *bs; BlockDriverState *bs;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
list = NULL; list = NULL;
QTAILQ_FOREACH(bs, &graph_bdrv_states, node_list) { QTAILQ_FOREACH(bs, &graph_bdrv_states, node_list) {
@@ -6615,7 +6552,7 @@ int bdrv_has_zero_init_1(BlockDriverState *bs)
return 1; return 1;
} }
int coroutine_mixed_fn bdrv_has_zero_init(BlockDriverState *bs) int bdrv_has_zero_init(BlockDriverState *bs)
{ {
BlockDriverState *filtered; BlockDriverState *filtered;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
@@ -6730,8 +6667,7 @@ void coroutine_fn bdrv_co_debug_event(BlockDriverState *bs, BlkdebugEvent event)
bs->drv->bdrv_co_debug_event(bs, event); bs->drv->bdrv_co_debug_event(bs, event);
} }
-static BlockDriverState * GRAPH_RDLOCK
-bdrv_find_debug_node(BlockDriverState *bs)
+static BlockDriverState *bdrv_find_debug_node(BlockDriverState *bs)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
while (bs && bs->drv && !bs->drv->bdrv_debug_breakpoint) { while (bs && bs->drv && !bs->drv->bdrv_debug_breakpoint) {
@@ -6750,8 +6686,6 @@ int bdrv_debug_breakpoint(BlockDriverState *bs, const char *event,
const char *tag) const char *tag)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
bs = bdrv_find_debug_node(bs); bs = bdrv_find_debug_node(bs);
if (bs) { if (bs) {
return bs->drv->bdrv_debug_breakpoint(bs, event, tag); return bs->drv->bdrv_debug_breakpoint(bs, event, tag);
@@ -6763,8 +6697,6 @@ int bdrv_debug_breakpoint(BlockDriverState *bs, const char *event,
int bdrv_debug_remove_breakpoint(BlockDriverState *bs, const char *tag) int bdrv_debug_remove_breakpoint(BlockDriverState *bs, const char *tag)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
bs = bdrv_find_debug_node(bs); bs = bdrv_find_debug_node(bs);
if (bs) { if (bs) {
return bs->drv->bdrv_debug_remove_breakpoint(bs, tag); return bs->drv->bdrv_debug_remove_breakpoint(bs, tag);
@@ -6776,8 +6708,6 @@ int bdrv_debug_remove_breakpoint(BlockDriverState *bs, const char *tag)
int bdrv_debug_resume(BlockDriverState *bs, const char *tag) int bdrv_debug_resume(BlockDriverState *bs, const char *tag)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
while (bs && (!bs->drv || !bs->drv->bdrv_debug_resume)) { while (bs && (!bs->drv || !bs->drv->bdrv_debug_resume)) {
bs = bdrv_primary_bs(bs); bs = bdrv_primary_bs(bs);
} }
@@ -6792,8 +6722,6 @@ int bdrv_debug_resume(BlockDriverState *bs, const char *tag)
bool bdrv_debug_is_suspended(BlockDriverState *bs, const char *tag) bool bdrv_debug_is_suspended(BlockDriverState *bs, const char *tag)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
while (bs && bs->drv && !bs->drv->bdrv_debug_is_suspended) { while (bs && bs->drv && !bs->drv->bdrv_debug_is_suspended) {
bs = bdrv_primary_bs(bs); bs = bdrv_primary_bs(bs);
} }
@@ -6822,7 +6750,6 @@ BlockDriverState *bdrv_find_backing_image(BlockDriverState *bs,
BlockDriverState *bs_below; BlockDriverState *bs_below;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!bs || !bs->drv || !backing_file) { if (!bs || !bs->drv || !backing_file) {
return NULL; return NULL;
@@ -7034,7 +6961,6 @@ void bdrv_activate_all(Error **errp)
BdrvNextIterator it; BdrvNextIterator it;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) { for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
AioContext *aio_context = bdrv_get_aio_context(bs); AioContext *aio_context = bdrv_get_aio_context(bs);
@@ -7050,8 +6976,7 @@ void bdrv_activate_all(Error **errp)
} }
} }
-static bool GRAPH_RDLOCK
-bdrv_has_bds_parent(BlockDriverState *bs, bool only_active)
+static bool bdrv_has_bds_parent(BlockDriverState *bs, bool only_active)
{ {
BdrvChild *parent; BdrvChild *parent;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
@@ -7068,13 +6993,14 @@ bdrv_has_bds_parent(BlockDriverState *bs, bool only_active)
return false; return false;
} }
static int GRAPH_RDLOCK bdrv_inactivate_recurse(BlockDriverState *bs) static int bdrv_inactivate_recurse(BlockDriverState *bs)
{ {
BdrvChild *child, *parent; BdrvChild *child, *parent;
int ret; int ret;
uint64_t cumulative_perms, cumulative_shared_perms; uint64_t cumulative_perms, cumulative_shared_perms;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!bs->drv) { if (!bs->drv) {
return -ENOMEDIUM; return -ENOMEDIUM;
@@ -7140,7 +7066,6 @@ int bdrv_inactivate_all(void)
GSList *aio_ctxs = NULL, *ctx; GSList *aio_ctxs = NULL, *ctx;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) { for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
AioContext *aio_context = bdrv_get_aio_context(bs); AioContext *aio_context = bdrv_get_aio_context(bs);
@@ -7280,7 +7205,6 @@ bool bdrv_op_is_blocked(BlockDriverState *bs, BlockOpType op, Error **errp)
{ {
BdrvOpBlocker *blocker; BdrvOpBlocker *blocker;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
assert((int) op >= 0 && op < BLOCK_OP_TYPE_MAX); assert((int) op >= 0 && op < BLOCK_OP_TYPE_MAX);
if (!QLIST_EMPTY(&bs->op_blockers[op])) { if (!QLIST_EMPTY(&bs->op_blockers[op])) {
blocker = QLIST_FIRST(&bs->op_blockers[op]); blocker = QLIST_FIRST(&bs->op_blockers[op]);
@@ -8128,7 +8052,7 @@ static bool append_strong_runtime_options(QDict *d, BlockDriverState *bs)
/* Note: This function may return false positives; it may return true /* Note: This function may return false positives; it may return true
* even if opening the backing file specified by bs's image header * even if opening the backing file specified by bs's image header
* would result in exactly bs->backing. */ * would result in exactly bs->backing. */
static bool GRAPH_RDLOCK bdrv_backing_overridden(BlockDriverState *bs) static bool bdrv_backing_overridden(BlockDriverState *bs)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
if (bs->backing) { if (bs->backing) {
@@ -8502,8 +8426,8 @@ BdrvChild *bdrv_primary_child(BlockDriverState *bs)
return found; return found;
} }
-static BlockDriverState * GRAPH_RDLOCK
-bdrv_do_skip_filters(BlockDriverState *bs, bool stop_on_explicit_filter)
+static BlockDriverState *bdrv_do_skip_filters(BlockDriverState *bs,
+bool stop_on_explicit_filter)
{ {
BdrvChild *c; BdrvChild *c;
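The hunks above differ mainly in GRAPH_RDLOCK/GRAPH_WRLOCK annotations and in where GRAPH_RDLOCK_GUARD_MAINLOOP() or explicit bdrv_graph_rdlock_main_loop()/bdrv_graph_rdunlock_main_loop() pairs sit around block-graph accesses. As a rough standalone sketch of the scoped-guard idiom such *_GUARD() macros typically expand to (this is not QEMU code; the lock, macro, and functions below are invented for illustration, built on pthread_rwlock_t and the GCC/Clang cleanup attribute):

/* Standalone illustration only -- not QEMU's graph lock. */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t graph_lock = PTHREAD_RWLOCK_INITIALIZER;

static void rd_guard_release(int *held)
{
    if (*held) {
        pthread_rwlock_unlock(&graph_lock);
    }
}

/* Hold the read lock until the end of the enclosing scope. */
#define RDLOCK_GUARD() \
    __attribute__((cleanup(rd_guard_release))) int rd_guard_held_ = \
        (pthread_rwlock_rdlock(&graph_lock), 1)

static int node_count = 42;  /* stands in for shared graph state */

static int query_graph(void)
{
    RDLOCK_GUARD();          /* released automatically on every return path */
    return node_count;
}

int main(void)
{
    printf("nodes: %d\n", query_graph());
    return 0;
}

The scoped form cannot leak the lock on an early return, which is the practical difference between the GRAPH_RDLOCK_GUARD_MAINLOOP() lines and the paired lock/unlock calls with goto-unlock labels visible in these hunks.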


@@ -384,33 +384,31 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
return NULL; return NULL;
} }
bdrv_graph_rdlock_main_loop();
if (!bdrv_is_inserted(bs)) { if (!bdrv_is_inserted(bs)) {
error_setg(errp, "Device is not inserted: %s", error_setg(errp, "Device is not inserted: %s",
bdrv_get_device_name(bs)); bdrv_get_device_name(bs));
goto error_rdlock; return NULL;
} }
if (!bdrv_is_inserted(target)) { if (!bdrv_is_inserted(target)) {
error_setg(errp, "Device is not inserted: %s", error_setg(errp, "Device is not inserted: %s",
bdrv_get_device_name(target)); bdrv_get_device_name(target));
goto error_rdlock; return NULL;
} }
if (compress && !bdrv_supports_compressed_writes(target)) { if (compress && !bdrv_supports_compressed_writes(target)) {
error_setg(errp, "Compression is not supported for this drive %s", error_setg(errp, "Compression is not supported for this drive %s",
bdrv_get_device_name(target)); bdrv_get_device_name(target));
goto error_rdlock; return NULL;
} }
if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) { if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) {
goto error_rdlock; return NULL;
} }
if (bdrv_op_is_blocked(target, BLOCK_OP_TYPE_BACKUP_TARGET, errp)) { if (bdrv_op_is_blocked(target, BLOCK_OP_TYPE_BACKUP_TARGET, errp)) {
goto error_rdlock; return NULL;
} }
bdrv_graph_rdunlock_main_loop();
if (perf->max_workers < 1 || perf->max_workers > INT_MAX) { if (perf->max_workers < 1 || perf->max_workers > INT_MAX) {
error_setg(errp, "max-workers must be between 1 and %d", INT_MAX); error_setg(errp, "max-workers must be between 1 and %d", INT_MAX);
@@ -438,7 +436,6 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
len = bdrv_getlength(bs); len = bdrv_getlength(bs);
if (len < 0) { if (len < 0) {
GRAPH_RDLOCK_GUARD_MAINLOOP();
error_setg_errno(errp, -len, "Unable to get length for '%s'", error_setg_errno(errp, -len, "Unable to get length for '%s'",
bdrv_get_device_or_node_name(bs)); bdrv_get_device_or_node_name(bs));
goto error; goto error;
@@ -446,7 +443,6 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
target_len = bdrv_getlength(target); target_len = bdrv_getlength(target);
if (target_len < 0) { if (target_len < 0) {
GRAPH_RDLOCK_GUARD_MAINLOOP();
error_setg_errno(errp, -target_len, "Unable to get length for '%s'", error_setg_errno(errp, -target_len, "Unable to get length for '%s'",
bdrv_get_device_or_node_name(bs)); bdrv_get_device_or_node_name(bs));
goto error; goto error;
@@ -496,10 +492,8 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
block_copy_set_speed(bcs, speed); block_copy_set_speed(bcs, speed);
/* Required permissions are taken by copy-before-write filter target */ /* Required permissions are taken by copy-before-write filter target */
bdrv_graph_wrlock(target);
block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL, block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
&error_abort); &error_abort);
bdrv_graph_wrunlock();
return &job->common; return &job->common;
@@ -512,8 +506,4 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
} }
return NULL; return NULL;
error_rdlock:
bdrv_graph_rdunlock_main_loop();
return NULL;
} }


@@ -508,8 +508,6 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
goto out; goto out;
} }
bdrv_graph_rdlock_main_loop();
bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED | bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED |
(BDRV_REQ_FUA & bs->file->bs->supported_write_flags); (BDRV_REQ_FUA & bs->file->bs->supported_write_flags);
bs->supported_zero_flags = BDRV_REQ_WRITE_UNCHANGED | bs->supported_zero_flags = BDRV_REQ_WRITE_UNCHANGED |
@@ -522,7 +520,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
if (s->align && (s->align >= INT_MAX || !is_power_of_2(s->align))) { if (s->align && (s->align >= INT_MAX || !is_power_of_2(s->align))) {
error_setg(errp, "Cannot meet constraints with align %" PRIu64, error_setg(errp, "Cannot meet constraints with align %" PRIu64,
s->align); s->align);
goto out_rdlock; goto out;
} }
align = MAX(s->align, bs->file->bs->bl.request_alignment); align = MAX(s->align, bs->file->bs->bl.request_alignment);
@@ -532,7 +530,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
!QEMU_IS_ALIGNED(s->max_transfer, align))) { !QEMU_IS_ALIGNED(s->max_transfer, align))) {
error_setg(errp, "Cannot meet constraints with max-transfer %" PRIu64, error_setg(errp, "Cannot meet constraints with max-transfer %" PRIu64,
s->max_transfer); s->max_transfer);
goto out_rdlock; goto out;
} }
s->opt_write_zero = qemu_opt_get_size(opts, "opt-write-zero", 0); s->opt_write_zero = qemu_opt_get_size(opts, "opt-write-zero", 0);
@@ -541,7 +539,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
!QEMU_IS_ALIGNED(s->opt_write_zero, align))) { !QEMU_IS_ALIGNED(s->opt_write_zero, align))) {
error_setg(errp, "Cannot meet constraints with opt-write-zero %" PRIu64, error_setg(errp, "Cannot meet constraints with opt-write-zero %" PRIu64,
s->opt_write_zero); s->opt_write_zero);
goto out_rdlock; goto out;
} }
s->max_write_zero = qemu_opt_get_size(opts, "max-write-zero", 0); s->max_write_zero = qemu_opt_get_size(opts, "max-write-zero", 0);
@@ -551,7 +549,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
MAX(s->opt_write_zero, align)))) { MAX(s->opt_write_zero, align)))) {
error_setg(errp, "Cannot meet constraints with max-write-zero %" PRIu64, error_setg(errp, "Cannot meet constraints with max-write-zero %" PRIu64,
s->max_write_zero); s->max_write_zero);
goto out_rdlock; goto out;
} }
s->opt_discard = qemu_opt_get_size(opts, "opt-discard", 0); s->opt_discard = qemu_opt_get_size(opts, "opt-discard", 0);
@@ -560,7 +558,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
!QEMU_IS_ALIGNED(s->opt_discard, align))) { !QEMU_IS_ALIGNED(s->opt_discard, align))) {
error_setg(errp, "Cannot meet constraints with opt-discard %" PRIu64, error_setg(errp, "Cannot meet constraints with opt-discard %" PRIu64,
s->opt_discard); s->opt_discard);
goto out_rdlock; goto out;
} }
s->max_discard = qemu_opt_get_size(opts, "max-discard", 0); s->max_discard = qemu_opt_get_size(opts, "max-discard", 0);
@@ -570,14 +568,12 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
MAX(s->opt_discard, align)))) { MAX(s->opt_discard, align)))) {
error_setg(errp, "Cannot meet constraints with max-discard %" PRIu64, error_setg(errp, "Cannot meet constraints with max-discard %" PRIu64,
s->max_discard); s->max_discard);
goto out_rdlock; goto out;
} }
bdrv_debug_event(bs, BLKDBG_NONE); bdrv_debug_event(bs, BLKDBG_NONE);
ret = 0; ret = 0;
out_rdlock:
bdrv_graph_rdunlock_main_loop();
out: out:
if (ret < 0) { if (ret < 0) {
qemu_mutex_destroy(&s->lock); qemu_mutex_destroy(&s->lock);
@@ -750,9 +746,12 @@ blkdebug_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
return bdrv_co_pdiscard(bs->file, offset, bytes); return bdrv_co_pdiscard(bs->file, offset, bytes);
} }
-static int coroutine_fn GRAPH_RDLOCK
-blkdebug_co_block_status(BlockDriverState *bs, bool want_zero, int64_t offset,
-int64_t bytes, int64_t *pnum, int64_t *map,
-BlockDriverState **file)
+static int coroutine_fn blkdebug_co_block_status(BlockDriverState *bs,
+bool want_zero,
+int64_t offset,
+int64_t bytes,
+int64_t *pnum,
+int64_t *map,
+BlockDriverState **file)
{ {
int err; int err;
@@ -974,7 +973,7 @@ blkdebug_co_getlength(BlockDriverState *bs)
return bdrv_co_getlength(bs->file->bs); return bdrv_co_getlength(bs->file->bs);
} }
static void GRAPH_RDLOCK blkdebug_refresh_filename(BlockDriverState *bs) static void blkdebug_refresh_filename(BlockDriverState *bs)
{ {
BDRVBlkdebugState *s = bs->opaque; BDRVBlkdebugState *s = bs->opaque;
const QDictEntry *e; const QDictEntry *e;
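The blkdebug_open() checks above repeat one pattern: every size option must be a power of two and a multiple of the request alignment, otherwise the open fails. A standalone sketch of those two predicates under that assumption (is_power_of_2 and is_aligned here are local stand-ins, not QEMU's helpers):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool is_power_of_2(uint64_t v)
{
    return v && !(v & (v - 1));
}

/* Only valid when align is a power of two. */
static bool is_aligned(uint64_t value, uint64_t align)
{
    return (value & (align - 1)) == 0;
}

int main(void)
{
    uint64_t align = 4096, max_transfer = 65536;

    if (!is_power_of_2(align)) {
        fprintf(stderr, "align must be a power of two\n");
        return 1;
    }
    if (max_transfer && !is_aligned(max_transfer, align)) {
        fprintf(stderr, "max-transfer must be a multiple of align\n");
        return 1;
    }
    printf("constraints OK\n");
    return 0;
}

Applying the same test to max-transfer, opt/max-write-zero, and opt/max-discard keeps every injected limit expressible as a whole number of aligned requests.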


@@ -13,7 +13,6 @@
#include "block/block_int.h" #include "block/block_int.h"
#include "exec/memory.h" #include "exec/memory.h"
#include "exec/cpu-common.h" /* for qemu_ram_get_fd() */ #include "exec/cpu-common.h" /* for qemu_ram_get_fd() */
#include "qemu/defer-call.h"
#include "qapi/error.h" #include "qapi/error.h"
#include "qemu/error-report.h" #include "qemu/error-report.h"
#include "qapi/qmp/qdict.h" #include "qapi/qmp/qdict.h"
@@ -313,10 +312,10 @@ static void blkio_detach_aio_context(BlockDriverState *bs)
} }
/* /*
- * Called by defer_call_end() or immediately if not in a deferred section.
- * Called without blkio_lock.
+ * Called by blk_io_unplug() or immediately if not plugged. Called without
+ * blkio_lock.
*/
-static void blkio_deferred_fn(void *opaque)
+static void blkio_unplug_fn(void *opaque)
{ {
BDRVBlkioState *s = opaque; BDRVBlkioState *s = opaque;
@@ -333,7 +332,7 @@ static void blkio_submit_io(BlockDriverState *bs)
{ {
BDRVBlkioState *s = bs->opaque; BDRVBlkioState *s = bs->opaque;
-defer_call(blkio_deferred_fn, s);
+blk_io_plug_call(blkio_unplug_fn, s);
} }
static int coroutine_fn static int coroutine_fn
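Both sides of the blkio_submit_io() hunk above rely on the same batching idea, only under different names (blk_io_plug_call()/blkio_unplug_fn versus defer_call()/blkio_deferred_fn): while requests are being queued, record the submission callback once and run it when the batch ends. A minimal standalone sketch of that idea, not the QEMU implementation (the single pending slot and all names below are simplifications for illustration):

#include <stdio.h>

typedef void (*DeferredFn)(void *opaque);

static DeferredFn pending_fn;
static void *pending_opaque;
static int batch_depth;

static void defer_begin(void) { batch_depth++; }

static void defer(DeferredFn fn, void *opaque)
{
    if (batch_depth == 0) {
        fn(opaque);               /* not in a batch: run immediately */
        return;
    }
    pending_fn = fn;              /* coalesce repeated calls within a batch */
    pending_opaque = opaque;
}

static void defer_end(void)
{
    if (--batch_depth == 0 && pending_fn) {
        DeferredFn fn = pending_fn;
        pending_fn = NULL;
        fn(pending_opaque);
    }
}

static void submit_queued_io(void *opaque)
{
    printf("submitting queued requests for %s\n", (const char *)opaque);
}

int main(void)
{
    defer_begin();
    defer(submit_queued_io, "disk0");   /* queued, not yet submitted */
    defer(submit_queued_io, "disk0");   /* coalesced with the first call */
    defer_end();                        /* one submission for the whole batch */
    return 0;
}

Coalescing the callback lets several queued requests share a single doorbell (virtqueue kick, io_uring submission, and so on) instead of paying one per request.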


@@ -130,13 +130,7 @@ static int coroutine_fn GRAPH_RDLOCK blkreplay_co_flush(BlockDriverState *bs)
static int blkreplay_snapshot_goto(BlockDriverState *bs, static int blkreplay_snapshot_goto(BlockDriverState *bs,
const char *snapshot_id) const char *snapshot_id)
{ {
-BlockDriverState *file_bs;
-bdrv_graph_rdlock_main_loop();
-file_bs = bs->file->bs;
-bdrv_graph_rdunlock_main_loop();
-return bdrv_snapshot_goto(file_bs, snapshot_id, NULL);
+return bdrv_snapshot_goto(bs->file->bs, snapshot_id, NULL);
} }
static BlockDriver bdrv_blkreplay = { static BlockDriver bdrv_blkreplay = {


@@ -33,8 +33,8 @@ typedef struct BlkverifyRequest {
uint64_t bytes; uint64_t bytes;
int flags; int flags;
-int GRAPH_RDLOCK_PTR (*request_fn)(
-BdrvChild *, int64_t, int64_t, QEMUIOVector *, BdrvRequestFlags);
+int (*request_fn)(BdrvChild *, int64_t, int64_t, QEMUIOVector *,
+BdrvRequestFlags);
int ret; /* test image result */ int ret; /* test image result */
int raw_ret; /* raw image result */ int raw_ret; /* raw image result */
@@ -170,11 +170,8 @@ static void coroutine_fn blkverify_do_test_req(void *opaque)
BlkverifyRequest *r = opaque; BlkverifyRequest *r = opaque;
BDRVBlkverifyState *s = r->bs->opaque; BDRVBlkverifyState *s = r->bs->opaque;
bdrv_graph_co_rdlock();
r->ret = r->request_fn(s->test_file, r->offset, r->bytes, r->qiov, r->ret = r->request_fn(s->test_file, r->offset, r->bytes, r->qiov,
r->flags); r->flags);
bdrv_graph_co_rdunlock();
r->done++; r->done++;
qemu_coroutine_enter_if_inactive(r->co); qemu_coroutine_enter_if_inactive(r->co);
} }
@@ -183,16 +180,13 @@ static void coroutine_fn blkverify_do_raw_req(void *opaque)
{ {
BlkverifyRequest *r = opaque; BlkverifyRequest *r = opaque;
bdrv_graph_co_rdlock();
r->raw_ret = r->request_fn(r->bs->file, r->offset, r->bytes, r->raw_qiov, r->raw_ret = r->request_fn(r->bs->file, r->offset, r->bytes, r->raw_qiov,
r->flags); r->flags);
bdrv_graph_co_rdunlock();
r->done++; r->done++;
qemu_coroutine_enter_if_inactive(r->co); qemu_coroutine_enter_if_inactive(r->co);
} }
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn
blkverify_co_prwv(BlockDriverState *bs, BlkverifyRequest *r, uint64_t offset, blkverify_co_prwv(BlockDriverState *bs, BlkverifyRequest *r, uint64_t offset,
uint64_t bytes, QEMUIOVector *qiov, QEMUIOVector *raw_qiov, uint64_t bytes, QEMUIOVector *qiov, QEMUIOVector *raw_qiov,
int flags, bool is_write) int flags, bool is_write)
@@ -228,7 +222,7 @@ blkverify_co_prwv(BlockDriverState *bs, BlkverifyRequest *r, uint64_t offset,
return r->ret; return r->ret;
} }
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn
blkverify_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes, blkverify_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
QEMUIOVector *qiov, BdrvRequestFlags flags) QEMUIOVector *qiov, BdrvRequestFlags flags)
{ {
@@ -257,7 +251,7 @@ blkverify_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
return ret; return ret;
} }
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn
blkverify_co_pwritev(BlockDriverState *bs, int64_t offset, int64_t bytes, blkverify_co_pwritev(BlockDriverState *bs, int64_t offset, int64_t bytes,
QEMUIOVector *qiov, BdrvRequestFlags flags) QEMUIOVector *qiov, BdrvRequestFlags flags)
{ {
@@ -288,7 +282,7 @@ blkverify_recurse_can_replace(BlockDriverState *bs,
bdrv_recurse_can_replace(s->test_file->bs, to_replace); bdrv_recurse_can_replace(s->test_file->bs, to_replace);
} }
static void GRAPH_RDLOCK blkverify_refresh_filename(BlockDriverState *bs) static void blkverify_refresh_filename(BlockDriverState *bs)
{ {
BDRVBlkverifyState *s = bs->opaque; BDRVBlkverifyState *s = bs->opaque;
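The request_fn plumbing above is what lets blkverify issue the same I/O against its raw child and its test child and compare the outcomes. A standalone sketch of that duplicate-and-compare shape, with in-memory buffers standing in for the two children (none of the names below are QEMU APIs):

#include <stdio.h>
#include <string.h>

#define SECTOR 512

static unsigned char raw_img[SECTOR];
static unsigned char test_img[SECTOR];

/* Returns 0 on success, mimicking a read from one backend. */
static int read_backend(const unsigned char *img, unsigned char *buf,
                        size_t off, size_t len)
{
    memcpy(buf, img + off, len);
    return 0;
}

static int verified_read(unsigned char *buf, size_t off, size_t len)
{
    unsigned char other[SECTOR];
    int r1 = read_backend(test_img, buf, off, len);
    int r2 = read_backend(raw_img, other, off, len);

    if (r1 || r2) {
        return r1 ? r1 : r2;
    }
    if (memcmp(buf, other, len) != 0) {
        fprintf(stderr, "contents mismatch at offset %zu\n", off);
        return -1;
    }
    return 0;
}

int main(void)
{
    unsigned char buf[SECTOR];

    memset(raw_img, 0xaa, sizeof(raw_img));
    memset(test_img, 0xaa, sizeof(test_img));
    return verified_read(buf, 0, SECTOR) ? 1 : 0;
}

In the real driver the comparison applies to read data, while writes are simply mirrored to both children.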


@@ -780,12 +780,11 @@ BlockDriverState *blk_bs(BlockBackend *blk)
return blk->root ? blk->root->bs : NULL; return blk->root ? blk->root->bs : NULL;
} }
static BlockBackend * GRAPH_RDLOCK bdrv_first_blk(BlockDriverState *bs) static BlockBackend *bdrv_first_blk(BlockDriverState *bs)
{ {
BdrvChild *child; BdrvChild *child;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
assert_bdrv_graph_readable();
QLIST_FOREACH(child, &bs->parents, next_parent) { QLIST_FOREACH(child, &bs->parents, next_parent) {
if (child->klass == &child_root) { if (child->klass == &child_root) {
@@ -813,8 +812,6 @@ bool bdrv_is_root_node(BlockDriverState *bs)
BdrvChild *c; BdrvChild *c;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
assert_bdrv_graph_readable();
QLIST_FOREACH(c, &bs->parents, next_parent) { QLIST_FOREACH(c, &bs->parents, next_parent) {
if (c->klass != &child_root) { if (c->klass != &child_root) {
return false; return false;
@@ -931,12 +928,10 @@ int blk_insert_bs(BlockBackend *blk, BlockDriverState *bs, Error **errp)
ThrottleGroupMember *tgm = &blk->public.throttle_group_member; ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
bdrv_ref(bs); bdrv_ref(bs);
bdrv_graph_wrlock(bs);
blk->root = bdrv_root_attach_child(bs, "root", &child_root, blk->root = bdrv_root_attach_child(bs, "root", &child_root,
BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY, BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
blk->perm, blk->shared_perm, blk->perm, blk->shared_perm,
blk, errp); blk, errp);
bdrv_graph_wrunlock();
if (blk->root == NULL) { if (blk->root == NULL) {
return -EPERM; return -EPERM;
} }
@@ -2264,7 +2259,6 @@ void blk_activate(BlockBackend *blk, Error **errp)
if (qemu_in_coroutine()) { if (qemu_in_coroutine()) {
bdrv_co_activate(bs, errp); bdrv_co_activate(bs, errp);
} else { } else {
GRAPH_RDLOCK_GUARD_MAINLOOP();
bdrv_activate(bs, errp); bdrv_activate(bs, errp);
} }
} }
@@ -2390,7 +2384,6 @@ bool blk_op_is_blocked(BlockBackend *blk, BlockOpType op, Error **errp)
{ {
BlockDriverState *bs = blk_bs(blk); BlockDriverState *bs = blk_bs(blk);
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!bs) { if (!bs) {
return false; return false;
@@ -2668,8 +2661,6 @@ int blk_load_vmstate(BlockBackend *blk, uint8_t *buf, int64_t pos, int size)
int blk_probe_blocksizes(BlockBackend *blk, BlockSizes *bsz) int blk_probe_blocksizes(BlockBackend *blk, BlockSizes *bsz)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!blk_is_available(blk)) { if (!blk_is_available(blk)) {
return -ENOMEDIUM; return -ENOMEDIUM;
} }
@@ -2730,7 +2721,6 @@ int blk_commit_all(void)
{ {
BlockBackend *blk = NULL; BlockBackend *blk = NULL;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
while ((blk = blk_all_next(blk)) != NULL) { while ((blk = blk_all_next(blk)) != NULL) {
AioContext *aio_context = blk_get_aio_context(blk); AioContext *aio_context = blk_get_aio_context(blk);
@@ -2911,8 +2901,6 @@ const BdrvChild *blk_root(BlockBackend *blk)
int blk_make_empty(BlockBackend *blk, Error **errp) int blk_make_empty(BlockBackend *blk, Error **errp)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!blk_is_available(blk)) { if (!blk_is_available(blk)) {
error_setg(errp, "No medium inserted"); error_setg(errp, "No medium inserted");
return -ENOMEDIUM; return -ENOMEDIUM;


@@ -313,12 +313,7 @@ static int64_t block_copy_calculate_cluster_size(BlockDriverState *target,
{ {
int ret; int ret;
BlockDriverInfo bdi; BlockDriverInfo bdi;
-bool target_does_cow;
-GLOBAL_STATE_CODE();
-GRAPH_RDLOCK_GUARD_MAINLOOP();
-target_does_cow = bdrv_backing_chain_next(target);
+bool target_does_cow = bdrv_backing_chain_next(target);
/* /*
* If there is no backing file on the target, we cannot rely on COW if our * If there is no backing file on the target, we cannot rely on COW if our
@@ -360,8 +355,6 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
BdrvDirtyBitmap *copy_bitmap; BdrvDirtyBitmap *copy_bitmap;
bool is_fleecing; bool is_fleecing;
GLOBAL_STATE_CODE();
cluster_size = block_copy_calculate_cluster_size(target->bs, errp); cluster_size = block_copy_calculate_cluster_size(target->bs, errp);
if (cluster_size < 0) { if (cluster_size < 0) {
return NULL; return NULL;
@@ -399,9 +392,7 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
* For more information see commit f8d59dfb40bb and test * For more information see commit f8d59dfb40bb and test
* tests/qemu-iotests/222 * tests/qemu-iotests/222
*/ */
bdrv_graph_rdlock_main_loop();
is_fleecing = bdrv_chain_contains(target->bs, source->bs); is_fleecing = bdrv_chain_contains(target->bs, source->bs);
bdrv_graph_rdunlock_main_loop();
s = g_new(BlockCopyState, 1); s = g_new(BlockCopyState, 1);
*s = (BlockCopyState) { *s = (BlockCopyState) {


@@ -105,12 +105,8 @@ static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
struct bochs_header bochs; struct bochs_header bochs;
int ret; int ret;
GLOBAL_STATE_CODE();
/* No write support yet */ /* No write support yet */
bdrv_graph_rdlock_main_loop();
ret = bdrv_apply_auto_read_only(bs, NULL, errp); ret = bdrv_apply_auto_read_only(bs, NULL, errp);
bdrv_graph_rdunlock_main_loop();
if (ret < 0) { if (ret < 0) {
return ret; return ret;
} }
@@ -120,8 +116,6 @@ static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
return ret; return ret;
} }
GRAPH_RDLOCK_GUARD_MAINLOOP();
ret = bdrv_pread(bs->file, 0, sizeof(bochs), &bochs, 0); ret = bdrv_pread(bs->file, 0, sizeof(bochs), &bochs, 0);
if (ret < 0) { if (ret < 0) {
return ret; return ret;


@@ -67,11 +67,7 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
uint32_t offsets_size, max_compressed_block_size = 1, i; uint32_t offsets_size, max_compressed_block_size = 1, i;
int ret; int ret;
GLOBAL_STATE_CODE();
bdrv_graph_rdlock_main_loop();
ret = bdrv_apply_auto_read_only(bs, NULL, errp); ret = bdrv_apply_auto_read_only(bs, NULL, errp);
bdrv_graph_rdunlock_main_loop();
if (ret < 0) { if (ret < 0) {
return ret; return ret;
} }
@@ -81,8 +77,6 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
return ret; return ret;
} }
GRAPH_RDLOCK_GUARD_MAINLOOP();
/* read header */ /* read header */
ret = bdrv_pread(bs->file, 128, 4, &s->block_size, 0); ret = bdrv_pread(bs->file, 128, 4, &s->block_size, 0);
if (ret < 0) { if (ret < 0) {


@@ -48,10 +48,8 @@ static int commit_prepare(Job *job)
{ {
CommitBlockJob *s = container_of(job, CommitBlockJob, common.job); CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);
bdrv_graph_rdlock_main_loop();
bdrv_unfreeze_backing_chain(s->commit_top_bs, s->base_bs); bdrv_unfreeze_backing_chain(s->commit_top_bs, s->base_bs);
s->chain_frozen = false; s->chain_frozen = false;
bdrv_graph_rdunlock_main_loop();
/* Remove base node parent that still uses BLK_PERM_WRITE/RESIZE before /* Remove base node parent that still uses BLK_PERM_WRITE/RESIZE before
* the normal backing chain can be restored. */ * the normal backing chain can be restored. */
@@ -68,12 +66,9 @@ static void commit_abort(Job *job)
{ {
CommitBlockJob *s = container_of(job, CommitBlockJob, common.job); CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);
BlockDriverState *top_bs = blk_bs(s->top); BlockDriverState *top_bs = blk_bs(s->top);
BlockDriverState *commit_top_backing_bs;
if (s->chain_frozen) { if (s->chain_frozen) {
bdrv_graph_rdlock_main_loop();
bdrv_unfreeze_backing_chain(s->commit_top_bs, s->base_bs); bdrv_unfreeze_backing_chain(s->commit_top_bs, s->base_bs);
bdrv_graph_rdunlock_main_loop();
} }
/* Make sure commit_top_bs and top stay around until bdrv_replace_node() */ /* Make sure commit_top_bs and top stay around until bdrv_replace_node() */
@@ -95,15 +90,8 @@ static void commit_abort(Job *job)
* XXX Can (or should) we somehow keep 'consistent read' blocked even * XXX Can (or should) we somehow keep 'consistent read' blocked even
* after the failed/cancelled commit job is gone? If we already wrote * after the failed/cancelled commit job is gone? If we already wrote
* something to base, the intermediate images aren't valid any more. */ * something to base, the intermediate images aren't valid any more. */
-bdrv_graph_rdlock_main_loop();
-commit_top_backing_bs = s->commit_top_bs->backing->bs;
-bdrv_graph_rdunlock_main_loop();
-bdrv_drained_begin(commit_top_backing_bs);
-bdrv_graph_wrlock(commit_top_backing_bs);
-bdrv_replace_node(s->commit_top_bs, commit_top_backing_bs, &error_abort);
-bdrv_graph_wrunlock();
-bdrv_drained_end(commit_top_backing_bs);
+bdrv_replace_node(s->commit_top_bs, s->commit_top_bs->backing->bs,
+&error_abort);
bdrv_unref(s->commit_top_bs); bdrv_unref(s->commit_top_bs);
bdrv_unref(top_bs); bdrv_unref(top_bs);
@@ -222,7 +210,7 @@ bdrv_commit_top_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
return bdrv_co_preadv(bs->backing, offset, bytes, qiov, flags); return bdrv_co_preadv(bs->backing, offset, bytes, qiov, flags);
} }
static GRAPH_RDLOCK void bdrv_commit_top_refresh_filename(BlockDriverState *bs) static void bdrv_commit_top_refresh_filename(BlockDriverState *bs)
{ {
pstrcpy(bs->exact_filename, sizeof(bs->exact_filename), pstrcpy(bs->exact_filename, sizeof(bs->exact_filename),
bs->backing->bs->filename); bs->backing->bs->filename);
@@ -267,13 +255,10 @@ void commit_start(const char *job_id, BlockDriverState *bs,
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
assert(top != bs); assert(top != bs);
bdrv_graph_rdlock_main_loop();
if (bdrv_skip_filters(top) == bdrv_skip_filters(base)) { if (bdrv_skip_filters(top) == bdrv_skip_filters(base)) {
error_setg(errp, "Invalid files for merge: top and base are the same"); error_setg(errp, "Invalid files for merge: top and base are the same");
bdrv_graph_rdunlock_main_loop();
return; return;
} }
bdrv_graph_rdunlock_main_loop();
base_size = bdrv_getlength(base); base_size = bdrv_getlength(base);
if (base_size < 0) { if (base_size < 0) {
@@ -339,7 +324,6 @@ void commit_start(const char *job_id, BlockDriverState *bs,
* this is the responsibility of the interface (i.e. whoever calls * this is the responsibility of the interface (i.e. whoever calls
* commit_start()). * commit_start()).
*/ */
bdrv_graph_wrlock(top);
s->base_overlay = bdrv_find_overlay(top, base); s->base_overlay = bdrv_find_overlay(top, base);
assert(s->base_overlay); assert(s->base_overlay);
@@ -370,20 +354,16 @@ void commit_start(const char *job_id, BlockDriverState *bs,
ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0, ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
iter_shared_perms, errp); iter_shared_perms, errp);
if (ret < 0) { if (ret < 0) {
bdrv_graph_wrunlock();
goto fail; goto fail;
} }
} }
if (bdrv_freeze_backing_chain(commit_top_bs, base, errp) < 0) { if (bdrv_freeze_backing_chain(commit_top_bs, base, errp) < 0) {
bdrv_graph_wrunlock();
goto fail; goto fail;
} }
s->chain_frozen = true; s->chain_frozen = true;
ret = block_job_add_bdrv(&s->common, "base", base, 0, BLK_PERM_ALL, errp); ret = block_job_add_bdrv(&s->common, "base", base, 0, BLK_PERM_ALL, errp);
bdrv_graph_wrunlock();
if (ret < 0) { if (ret < 0) {
goto fail; goto fail;
} }
@@ -416,9 +396,7 @@ void commit_start(const char *job_id, BlockDriverState *bs,
fail: fail:
if (s->chain_frozen) { if (s->chain_frozen) {
bdrv_graph_rdlock_main_loop();
bdrv_unfreeze_backing_chain(commit_top_bs, base); bdrv_unfreeze_backing_chain(commit_top_bs, base);
bdrv_graph_rdunlock_main_loop();
} }
if (s->base) { if (s->base) {
blk_unref(s->base); blk_unref(s->base);
@@ -433,11 +411,7 @@ fail:
/* commit_top_bs has to be replaced after deleting the block job, /* commit_top_bs has to be replaced after deleting the block job,
* otherwise this would fail because of lack of permissions. */ * otherwise this would fail because of lack of permissions. */
if (commit_top_bs) { if (commit_top_bs) {
bdrv_drained_begin(top);
bdrv_graph_wrlock(top);
bdrv_replace_node(commit_top_bs, top, &error_abort); bdrv_replace_node(commit_top_bs, top, &error_abort);
bdrv_graph_wrunlock();
bdrv_drained_end(top);
} }
} }
@@ -460,7 +434,6 @@ int bdrv_commit(BlockDriverState *bs)
Error *local_err = NULL; Error *local_err = NULL;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!drv) if (!drv)
return -ENOMEDIUM; return -ENOMEDIUM;


@@ -203,7 +203,7 @@ static int coroutine_fn GRAPH_RDLOCK cbw_co_flush(BlockDriverState *bs)
* It's guaranteed that guest writes will not interact in the region until * It's guaranteed that guest writes will not interact in the region until
* cbw_snapshot_read_unlock() called. * cbw_snapshot_read_unlock() called.
*/ */
static BlockReq * coroutine_fn GRAPH_RDLOCK static coroutine_fn BlockReq *
cbw_snapshot_read_lock(BlockDriverState *bs, int64_t offset, int64_t bytes, cbw_snapshot_read_lock(BlockDriverState *bs, int64_t offset, int64_t bytes,
int64_t *pnum, BdrvChild **file) int64_t *pnum, BdrvChild **file)
{ {
@@ -305,7 +305,7 @@ cbw_co_snapshot_block_status(BlockDriverState *bs,
return -EACCES; return -EACCES;
} }
ret = bdrv_co_block_status(child->bs, offset, cur_bytes, pnum, map, file); ret = bdrv_block_status(child->bs, offset, cur_bytes, pnum, map, file);
if (child == s->target) { if (child == s->target) {
/* /*
* We refer to s->target only for areas that we've written to it. * We refer to s->target only for areas that we've written to it.
@@ -335,7 +335,7 @@ cbw_co_pdiscard_snapshot(BlockDriverState *bs, int64_t offset, int64_t bytes)
return bdrv_co_pdiscard(s->target, offset, bytes); return bdrv_co_pdiscard(s->target, offset, bytes);
} }
static void GRAPH_RDLOCK cbw_refresh_filename(BlockDriverState *bs) static void cbw_refresh_filename(BlockDriverState *bs)
{ {
pstrcpy(bs->exact_filename, sizeof(bs->exact_filename), pstrcpy(bs->exact_filename, sizeof(bs->exact_filename),
bs->file->bs->filename); bs->file->bs->filename);
@@ -433,8 +433,6 @@ static int cbw_open(BlockDriverState *bs, QDict *options, int flags,
return -EINVAL; return -EINVAL;
} }
GRAPH_RDLOCK_GUARD_MAINLOOP();
ctx = bdrv_get_aio_context(bs); ctx = bdrv_get_aio_context(bs);
aio_context_acquire(ctx); aio_context_acquire(ctx);


@@ -35,8 +35,8 @@ typedef struct BDRVStateCOR {
} BDRVStateCOR; } BDRVStateCOR;
static int GRAPH_UNLOCKED static int cor_open(BlockDriverState *bs, QDict *options, int flags,
cor_open(BlockDriverState *bs, QDict *options, int flags, Error **errp) Error **errp)
{ {
BlockDriverState *bottom_bs = NULL; BlockDriverState *bottom_bs = NULL;
BDRVStateCOR *state = bs->opaque; BDRVStateCOR *state = bs->opaque;
@@ -44,15 +44,11 @@ cor_open(BlockDriverState *bs, QDict *options, int flags, Error **errp)
const char *bottom_node = qdict_get_try_str(options, "bottom"); const char *bottom_node = qdict_get_try_str(options, "bottom");
int ret; int ret;
GLOBAL_STATE_CODE();
ret = bdrv_open_file_child(NULL, options, "file", bs, errp); ret = bdrv_open_file_child(NULL, options, "file", bs, errp);
if (ret < 0) { if (ret < 0) {
return ret; return ret;
} }
GRAPH_RDLOCK_GUARD_MAINLOOP();
bs->supported_read_flags = BDRV_REQ_PREFETCH; bs->supported_read_flags = BDRV_REQ_PREFETCH;
bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED | bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED |
@@ -150,9 +146,9 @@ cor_co_preadv_part(BlockDriverState *bs, int64_t offset, int64_t bytes,
local_flags = flags; local_flags = flags;
/* In case of failure, try to copy-on-read anyway */ /* In case of failure, try to copy-on-read anyway */
ret = bdrv_co_is_allocated(bs->file->bs, offset, bytes, &n); ret = bdrv_is_allocated(bs->file->bs, offset, bytes, &n);
if (ret <= 0) { if (ret <= 0) {
ret = bdrv_co_is_allocated_above(bdrv_backing_chain_next(bs->file->bs), ret = bdrv_is_allocated_above(bdrv_backing_chain_next(bs->file->bs),
state->bottom_bs, true, offset, state->bottom_bs, true, offset,
n, &n); n, &n);
if (ret > 0 || ret < 0) { if (ret > 0 || ret < 0) {
@@ -231,17 +227,13 @@ cor_co_lock_medium(BlockDriverState *bs, bool locked)
} }
static void GRAPH_UNLOCKED cor_close(BlockDriverState *bs) static void cor_close(BlockDriverState *bs)
{ {
BDRVStateCOR *s = bs->opaque; BDRVStateCOR *s = bs->opaque;
GLOBAL_STATE_CODE();
if (s->chain_frozen) { if (s->chain_frozen) {
bdrv_graph_rdlock_main_loop();
s->chain_frozen = false; s->chain_frozen = false;
bdrv_unfreeze_backing_chain(bs, s->bottom_bs); bdrv_unfreeze_backing_chain(bs, s->bottom_bs);
bdrv_graph_rdunlock_main_loop();
} }
bdrv_unref(s->bottom_bs); bdrv_unref(s->bottom_bs);
@@ -271,15 +263,12 @@ static BlockDriver bdrv_copy_on_read = {
}; };
void no_coroutine_fn bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs) void bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs)
{ {
BDRVStateCOR *s = cor_filter_bs->opaque; BDRVStateCOR *s = cor_filter_bs->opaque;
GLOBAL_STATE_CODE();
/* unfreeze, as otherwise bdrv_replace_node() will fail */ /* unfreeze, as otherwise bdrv_replace_node() will fail */
if (s->chain_frozen) { if (s->chain_frozen) {
GRAPH_RDLOCK_GUARD_MAINLOOP();
s->chain_frozen = false; s->chain_frozen = false;
bdrv_unfreeze_backing_chain(cor_filter_bs, s->bottom_bs); bdrv_unfreeze_backing_chain(cor_filter_bs, s->bottom_bs);
} }


@@ -27,7 +27,6 @@
#include "block/block_int.h" #include "block/block_int.h"
void no_coroutine_fn GRAPH_UNLOCKED void bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs);
bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs);
#endif /* BLOCK_COPY_ON_READ_H */ #endif /* BLOCK_COPY_ON_READ_H */


@@ -65,9 +65,6 @@ static int block_crypto_read_func(QCryptoBlock *block,
BlockDriverState *bs = opaque; BlockDriverState *bs = opaque;
ssize_t ret; ssize_t ret;
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
ret = bdrv_pread(bs->file, offset, buflen, buf, 0); ret = bdrv_pread(bs->file, offset, buflen, buf, 0);
if (ret < 0) { if (ret < 0) {
error_setg_errno(errp, -ret, "Could not read encryption header"); error_setg_errno(errp, -ret, "Could not read encryption header");
@@ -86,9 +83,6 @@ static int block_crypto_write_func(QCryptoBlock *block,
BlockDriverState *bs = opaque; BlockDriverState *bs = opaque;
ssize_t ret; ssize_t ret;
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
ret = bdrv_pwrite(bs->file, offset, buflen, buf, 0); ret = bdrv_pwrite(bs->file, offset, buflen, buf, 0);
if (ret < 0) { if (ret < 0) {
error_setg_errno(errp, -ret, "Could not write encryption header"); error_setg_errno(errp, -ret, "Could not write encryption header");
@@ -269,15 +263,11 @@ static int block_crypto_open_generic(QCryptoBlockFormat format,
unsigned int cflags = 0; unsigned int cflags = 0;
QDict *cryptoopts = NULL; QDict *cryptoopts = NULL;
GLOBAL_STATE_CODE();
ret = bdrv_open_file_child(NULL, options, "file", bs, errp); ret = bdrv_open_file_child(NULL, options, "file", bs, errp);
if (ret < 0) { if (ret < 0) {
return ret; return ret;
} }
GRAPH_RDLOCK_GUARD_MAINLOOP();
bs->supported_write_flags = BDRV_REQ_FUA & bs->supported_write_flags = BDRV_REQ_FUA &
bs->file->bs->supported_write_flags; bs->file->bs->supported_write_flags;
@@ -838,7 +828,7 @@ block_crypto_amend_options_generic_luks(BlockDriverState *bs,
errp); errp);
} }
static int GRAPH_RDLOCK static int
block_crypto_amend_options_luks(BlockDriverState *bs, block_crypto_amend_options_luks(BlockDriverState *bs,
QemuOpts *opts, QemuOpts *opts,
BlockDriverAmendStatusCB *status_cb, BlockDriverAmendStatusCB *status_cb,
@@ -851,6 +841,8 @@ block_crypto_amend_options_luks(BlockDriverState *bs,
QCryptoBlockAmendOptions *amend_options = NULL; QCryptoBlockAmendOptions *amend_options = NULL;
int ret = -EINVAL; int ret = -EINVAL;
assume_graph_lock(); /* FIXME */
assert(crypto); assert(crypto);
assert(crypto->block); assert(crypto->block);


@@ -696,10 +696,8 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags,
const char *protocol_delimiter; const char *protocol_delimiter;
int ret; int ret;
bdrv_graph_rdlock_main_loop();
ret = bdrv_apply_auto_read_only(bs, "curl driver does not support writes", ret = bdrv_apply_auto_read_only(bs, "curl driver does not support writes",
errp); errp);
bdrv_graph_rdunlock_main_loop();
if (ret < 0) { if (ret < 0) {
return ret; return ret;
} }


@@ -70,8 +70,7 @@ static int dmg_probe(const uint8_t *buf, int buf_size, const char *filename)
return 0; return 0;
} }
static int GRAPH_RDLOCK static int read_uint64(BlockDriverState *bs, int64_t offset, uint64_t *result)
read_uint64(BlockDriverState *bs, int64_t offset, uint64_t *result)
{ {
uint64_t buffer; uint64_t buffer;
int ret; int ret;
@@ -85,8 +84,7 @@ read_uint64(BlockDriverState *bs, int64_t offset, uint64_t *result)
return 0; return 0;
} }
static int GRAPH_RDLOCK static int read_uint32(BlockDriverState *bs, int64_t offset, uint32_t *result)
read_uint32(BlockDriverState *bs, int64_t offset, uint32_t *result)
{ {
uint32_t buffer; uint32_t buffer;
int ret; int ret;
@@ -323,8 +321,7 @@ fail:
return ret; return ret;
} }
static int GRAPH_RDLOCK static int dmg_read_resource_fork(BlockDriverState *bs, DmgHeaderState *ds,
dmg_read_resource_fork(BlockDriverState *bs, DmgHeaderState *ds,
uint64_t info_begin, uint64_t info_length) uint64_t info_begin, uint64_t info_length)
{ {
BDRVDMGState *s = bs->opaque; BDRVDMGState *s = bs->opaque;
@@ -391,8 +388,7 @@ fail:
return ret; return ret;
} }
static int GRAPH_RDLOCK static int dmg_read_plist_xml(BlockDriverState *bs, DmgHeaderState *ds,
dmg_read_plist_xml(BlockDriverState *bs, DmgHeaderState *ds,
uint64_t info_begin, uint64_t info_length) uint64_t info_begin, uint64_t info_length)
{ {
BDRVDMGState *s = bs->opaque; BDRVDMGState *s = bs->opaque;
@@ -456,11 +452,7 @@ static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
int64_t offset; int64_t offset;
int ret; int ret;
GLOBAL_STATE_CODE();
bdrv_graph_rdlock_main_loop();
ret = bdrv_apply_auto_read_only(bs, NULL, errp); ret = bdrv_apply_auto_read_only(bs, NULL, errp);
bdrv_graph_rdunlock_main_loop();
if (ret < 0) { if (ret < 0) {
return ret; return ret;
} }
@@ -469,9 +461,6 @@ static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
if (ret < 0) { if (ret < 0) {
return ret; return ret;
} }
GRAPH_RDLOCK_GUARD_MAINLOOP();
/* /*
* NB: if uncompress submodules are absent, * NB: if uncompress submodules are absent,
* ie block_module_load return value == 0, the function pointers * ie block_module_load return value == 0, the function pointers


@@ -83,8 +83,6 @@ BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp)
uint64_t perm; uint64_t perm;
int ret; int ret;
GLOBAL_STATE_CODE();
if (!id_wellformed(export->id)) { if (!id_wellformed(export->id)) {
error_setg(errp, "Invalid block export id"); error_setg(errp, "Invalid block export id");
return NULL; return NULL;
@@ -147,9 +145,7 @@ BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp)
* access since the export could be available before migration handover. * access since the export could be available before migration handover.
* ctx was acquired in the caller. * ctx was acquired in the caller.
*/ */
bdrv_graph_rdlock_main_loop();
bdrv_activate(bs, NULL); bdrv_activate(bs, NULL);
bdrv_graph_rdunlock_main_loop();
perm = BLK_PERM_CONSISTENT_READ; perm = BLK_PERM_CONSISTENT_READ;
if (export->writable) { if (export->writable) {


@@ -160,6 +160,7 @@ typedef struct BDRVRawState {
bool has_write_zeroes:1; bool has_write_zeroes:1;
bool use_linux_aio:1; bool use_linux_aio:1;
bool use_linux_io_uring:1; bool use_linux_io_uring:1;
int64_t *offset; /* offset of zone append operation */
int page_cache_inconsistent; /* errno from fdatasync failure */ int page_cache_inconsistent; /* errno from fdatasync failure */
bool has_fallocate; bool has_fallocate;
bool needs_alignment; bool needs_alignment;
@@ -2444,13 +2445,12 @@ static bool bdrv_qiov_is_aligned(BlockDriverState *bs, QEMUIOVector *qiov)
return true; return true;
} }
static int coroutine_fn raw_co_prw(BlockDriverState *bs, int64_t *offset_ptr, static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
uint64_t bytes, QEMUIOVector *qiov, int type) uint64_t bytes, QEMUIOVector *qiov, int type)
{ {
BDRVRawState *s = bs->opaque; BDRVRawState *s = bs->opaque;
RawPosixAIOData acb; RawPosixAIOData acb;
int ret; int ret;
uint64_t offset = *offset_ptr;
if (fd_open(bs) < 0) if (fd_open(bs) < 0)
return -EIO; return -EIO;
@@ -2513,8 +2513,8 @@ out:
uint64_t *wp = &wps->wp[offset / bs->bl.zone_size]; uint64_t *wp = &wps->wp[offset / bs->bl.zone_size];
if (!BDRV_ZT_IS_CONV(*wp)) { if (!BDRV_ZT_IS_CONV(*wp)) {
if (type & QEMU_AIO_ZONE_APPEND) { if (type & QEMU_AIO_ZONE_APPEND) {
*offset_ptr = *wp; *s->offset = *wp;
trace_zbd_zone_append_complete(bs, *offset_ptr trace_zbd_zone_append_complete(bs, *s->offset
>> BDRV_SECTOR_BITS); >> BDRV_SECTOR_BITS);
} }
/* Advance the wp if needed */ /* Advance the wp if needed */
@@ -2523,10 +2523,7 @@ out:
} }
} }
} else { } else {
/*
* write and append write are not allowed to cross zone boundaries
*/
update_zones_wp(bs, s->fd, offset, 1); update_zones_wp(bs, s->fd, 0, 1);
} }
qemu_co_mutex_unlock(&wps->colock); qemu_co_mutex_unlock(&wps->colock);
@@ -2539,14 +2536,14 @@ static int coroutine_fn raw_co_preadv(BlockDriverState *bs, int64_t offset,
int64_t bytes, QEMUIOVector *qiov, int64_t bytes, QEMUIOVector *qiov,
BdrvRequestFlags flags) BdrvRequestFlags flags)
{ {
return raw_co_prw(bs, &offset, bytes, qiov, QEMU_AIO_READ); return raw_co_prw(bs, offset, bytes, qiov, QEMU_AIO_READ);
} }
static int coroutine_fn raw_co_pwritev(BlockDriverState *bs, int64_t offset, static int coroutine_fn raw_co_pwritev(BlockDriverState *bs, int64_t offset,
int64_t bytes, QEMUIOVector *qiov, int64_t bytes, QEMUIOVector *qiov,
BdrvRequestFlags flags) BdrvRequestFlags flags)
{ {
return raw_co_prw(bs, &offset, bytes, qiov, QEMU_AIO_WRITE); return raw_co_prw(bs, offset, bytes, qiov, QEMU_AIO_WRITE);
} }
static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs) static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
@@ -3473,7 +3470,7 @@ static int coroutine_fn raw_co_zone_mgmt(BlockDriverState *bs, BlockZoneOp op,
len >> BDRV_SECTOR_BITS); len >> BDRV_SECTOR_BITS);
ret = raw_thread_pool_submit(handle_aiocb_zone_mgmt, &acb); ret = raw_thread_pool_submit(handle_aiocb_zone_mgmt, &acb);
if (ret != 0) { if (ret != 0) {
update_zones_wp(bs, s->fd, offset, nrz); update_zones_wp(bs, s->fd, offset, i);
error_report("ioctl %s failed %d", op_name, ret); error_report("ioctl %s failed %d", op_name, ret);
return ret; return ret;
} }
@@ -3509,6 +3506,8 @@ static int coroutine_fn raw_co_zone_append(BlockDriverState *bs,
int64_t zone_size_mask = bs->bl.zone_size - 1; int64_t zone_size_mask = bs->bl.zone_size - 1;
int64_t iov_len = 0; int64_t iov_len = 0;
int64_t len = 0; int64_t len = 0;
BDRVRawState *s = bs->opaque;
s->offset = offset;
if (*offset & zone_size_mask) { if (*offset & zone_size_mask) {
error_report("sector offset %" PRId64 " is not aligned to zone size " error_report("sector offset %" PRId64 " is not aligned to zone size "
@@ -3529,7 +3528,7 @@ static int coroutine_fn raw_co_zone_append(BlockDriverState *bs,
} }
trace_zbd_zone_append(bs, *offset >> BDRV_SECTOR_BITS); trace_zbd_zone_append(bs, *offset >> BDRV_SECTOR_BITS);
return raw_co_prw(bs, offset, len, qiov, QEMU_AIO_ZONE_APPEND); return raw_co_prw(bs, *offset, len, qiov, QEMU_AIO_ZONE_APPEND);
} }
#endif #endif


@@ -36,8 +36,6 @@ static int compress_open(BlockDriverState *bs, QDict *options, int flags,
return ret; return ret;
} }
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!bs->file->bs->drv || !block_driver_can_compress(bs->file->bs->drv)) { if (!bs->file->bs->drv || !block_driver_can_compress(bs->file->bs->drv)) {
error_setg(errp, error_setg(errp,
"Compression is not supported for underlying format: %s", "Compression is not supported for underlying format: %s",
@@ -99,8 +97,7 @@ compress_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
} }
static void GRAPH_RDLOCK static void compress_refresh_limits(BlockDriverState *bs, Error **errp)
compress_refresh_limits(BlockDriverState *bs, Error **errp)
{ {
BlockDriverInfo bdi; BlockDriverInfo bdi;
int ret; int ret;


@@ -863,13 +863,11 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
if (ret == -EACCES || ret == -EROFS) { if (ret == -EACCES || ret == -EROFS) {
/* Try to degrade to read-only, but if it doesn't work, still use the /* Try to degrade to read-only, but if it doesn't work, still use the
* normal error message. */ * normal error message. */
bdrv_graph_rdlock_main_loop();
if (bdrv_apply_auto_read_only(bs, NULL, NULL) == 0) { if (bdrv_apply_auto_read_only(bs, NULL, NULL) == 0) {
open_flags = (open_flags & ~O_RDWR) | O_RDONLY; open_flags = (open_flags & ~O_RDWR) | O_RDONLY;
s->fd = glfs_open(s->glfs, gconf->path, open_flags); s->fd = glfs_open(s->glfs, gconf->path, open_flags);
ret = s->fd ? 0 : -errno; ret = s->fd ? 0 : -errno;
} }
bdrv_graph_rdunlock_main_loop();
} }
s->supports_seek_data = qemu_gluster_test_seek(s->fd); s->supports_seek_data = qemu_gluster_test_seek(s->fd);


@@ -106,13 +106,12 @@ static uint32_t reader_count(void)
return rd; return rd;
} }
void no_coroutine_fn bdrv_graph_wrlock(BlockDriverState *bs) void bdrv_graph_wrlock(BlockDriverState *bs)
{ {
AioContext *ctx = NULL; AioContext *ctx = NULL;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
assert(!qatomic_read(&has_writer)); assert(!qatomic_read(&has_writer));
assert(!qemu_in_coroutine());
/* /*
* Release only non-mainloop AioContext. The mainloop often relies on the * Release only non-mainloop AioContext. The mainloop often relies on the


@@ -42,18 +42,13 @@
/* Maximum bounce buffer for copy-on-read and write zeroes, in bytes */ /* Maximum bounce buffer for copy-on-read and write zeroes, in bytes */
#define MAX_BOUNCE_BUFFER (32768 << BDRV_SECTOR_BITS) #define MAX_BOUNCE_BUFFER (32768 << BDRV_SECTOR_BITS)
static void coroutine_fn GRAPH_RDLOCK static void bdrv_parent_cb_resize(BlockDriverState *bs);
bdrv_parent_cb_resize(BlockDriverState *bs);
static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs, static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
int64_t offset, int64_t bytes, BdrvRequestFlags flags); int64_t offset, int64_t bytes, BdrvRequestFlags flags);
static void GRAPH_RDLOCK static void bdrv_parent_drained_begin(BlockDriverState *bs, BdrvChild *ignore)
bdrv_parent_drained_begin(BlockDriverState *bs, BdrvChild *ignore)
{ {
BdrvChild *c, *next; BdrvChild *c, *next;
IO_OR_GS_CODE();
assert_bdrv_graph_readable();
QLIST_FOREACH_SAFE(c, &bs->parents, next_parent, next) { QLIST_FOREACH_SAFE(c, &bs->parents, next_parent, next) {
if (c == ignore) { if (c == ignore) {
@@ -75,12 +70,9 @@ void bdrv_parent_drained_end_single(BdrvChild *c)
} }
} }
static void GRAPH_RDLOCK static void bdrv_parent_drained_end(BlockDriverState *bs, BdrvChild *ignore)
bdrv_parent_drained_end(BlockDriverState *bs, BdrvChild *ignore)
{ {
BdrvChild *c; BdrvChild *c;
IO_OR_GS_CODE();
assert_bdrv_graph_readable();
QLIST_FOREACH(c, &bs->parents, next_parent) { QLIST_FOREACH(c, &bs->parents, next_parent) {
if (c == ignore) { if (c == ignore) {
@@ -92,22 +84,17 @@ bdrv_parent_drained_end(BlockDriverState *bs, BdrvChild *ignore)
bool bdrv_parent_drained_poll_single(BdrvChild *c) bool bdrv_parent_drained_poll_single(BdrvChild *c)
{ {
IO_OR_GS_CODE();
if (c->klass->drained_poll) { if (c->klass->drained_poll) {
return c->klass->drained_poll(c); return c->klass->drained_poll(c);
} }
return false; return false;
} }
static bool GRAPH_RDLOCK static bool bdrv_parent_drained_poll(BlockDriverState *bs, BdrvChild *ignore,
bdrv_parent_drained_poll(BlockDriverState *bs, BdrvChild *ignore,
bool ignore_bds_parents) bool ignore_bds_parents)
{ {
BdrvChild *c, *next; BdrvChild *c, *next;
bool busy = false; bool busy = false;
IO_OR_GS_CODE();
assert_bdrv_graph_readable();
QLIST_FOREACH_SAFE(c, &bs->parents, next_parent, next) { QLIST_FOREACH_SAFE(c, &bs->parents, next_parent, next) {
if (c == ignore || (ignore_bds_parents && c->klass->parent_is_bds)) { if (c == ignore || (ignore_bds_parents && c->klass->parent_is_bds)) {
@@ -127,7 +114,6 @@ void bdrv_parent_drained_begin_single(BdrvChild *c)
c->quiesced_parent = true; c->quiesced_parent = true;
if (c->klass->drained_begin) { if (c->klass->drained_begin) {
/* called with rdlock taken, but it doesn't really need it. */
c->klass->drained_begin(c); c->klass->drained_begin(c);
} }
} }
@@ -277,9 +263,6 @@ bool bdrv_drain_poll(BlockDriverState *bs, BdrvChild *ignore_parent,
static bool bdrv_drain_poll_top_level(BlockDriverState *bs, static bool bdrv_drain_poll_top_level(BlockDriverState *bs,
BdrvChild *ignore_parent) BdrvChild *ignore_parent)
{ {
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
return bdrv_drain_poll(bs, ignore_parent, false); return bdrv_drain_poll(bs, ignore_parent, false);
} }
@@ -379,7 +362,6 @@ static void bdrv_do_drained_begin(BlockDriverState *bs, BdrvChild *parent,
/* Stop things in parent-to-child order */ /* Stop things in parent-to-child order */
if (qatomic_fetch_inc(&bs->quiesce_counter) == 0) { if (qatomic_fetch_inc(&bs->quiesce_counter) == 0) {
GRAPH_RDLOCK_GUARD_MAINLOOP();
bdrv_parent_drained_begin(bs, parent); bdrv_parent_drained_begin(bs, parent);
if (bs->drv && bs->drv->bdrv_drain_begin) { if (bs->drv && bs->drv->bdrv_drain_begin) {
bs->drv->bdrv_drain_begin(bs); bs->drv->bdrv_drain_begin(bs);
@@ -426,16 +408,12 @@ static void bdrv_do_drained_end(BlockDriverState *bs, BdrvChild *parent)
bdrv_co_yield_to_drain(bs, false, parent, false); bdrv_co_yield_to_drain(bs, false, parent, false);
return; return;
} }
/* At this point, we should be always running in the main loop. */
GLOBAL_STATE_CODE();
assert(bs->quiesce_counter > 0); assert(bs->quiesce_counter > 0);
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
/* Re-enable things in child-to-parent order */ /* Re-enable things in child-to-parent order */
old_quiesce_counter = qatomic_fetch_dec(&bs->quiesce_counter); old_quiesce_counter = qatomic_fetch_dec(&bs->quiesce_counter);
if (old_quiesce_counter == 1) { if (old_quiesce_counter == 1) {
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (bs->drv && bs->drv->bdrv_drain_end) { if (bs->drv && bs->drv->bdrv_drain_end) {
bs->drv->bdrv_drain_end(bs); bs->drv->bdrv_drain_end(bs);
} }
@@ -459,8 +437,6 @@ void bdrv_drain(BlockDriverState *bs)
static void bdrv_drain_assert_idle(BlockDriverState *bs) static void bdrv_drain_assert_idle(BlockDriverState *bs)
{ {
BdrvChild *child, *next; BdrvChild *child, *next;
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
assert(qatomic_read(&bs->in_flight) == 0); assert(qatomic_read(&bs->in_flight) == 0);
QLIST_FOREACH_SAFE(child, &bs->children, next, next) { QLIST_FOREACH_SAFE(child, &bs->children, next, next) {
@@ -474,9 +450,7 @@ static bool bdrv_drain_all_poll(void)
{ {
BlockDriverState *bs = NULL; BlockDriverState *bs = NULL;
bool result = false; bool result = false;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
/* bdrv_drain_poll() can't make changes to the graph and we are holding the /* bdrv_drain_poll() can't make changes to the graph and we are holding the
* main AioContext lock, so iterating bdrv_next_all_states() is safe. */ * main AioContext lock, so iterating bdrv_next_all_states() is safe. */
@@ -1249,7 +1223,7 @@ bdrv_co_do_copy_on_readv(BdrvChild *child, int64_t offset, int64_t bytes,
ret = 1; /* "already allocated", so nothing will be copied */ ret = 1; /* "already allocated", so nothing will be copied */
pnum = MIN(align_bytes, max_transfer); pnum = MIN(align_bytes, max_transfer);
} else { } else {
ret = bdrv_co_is_allocated(bs, align_offset, ret = bdrv_is_allocated(bs, align_offset,
MIN(align_bytes, max_transfer), &pnum); MIN(align_bytes, max_transfer), &pnum);
if (ret < 0) { if (ret < 0) {
/* /*
@@ -1397,7 +1371,7 @@ bdrv_aligned_preadv(BdrvChild *child, BdrvTrackedRequest *req,
/* The flag BDRV_REQ_COPY_ON_READ has reached its addressee */ /* The flag BDRV_REQ_COPY_ON_READ has reached its addressee */
flags &= ~BDRV_REQ_COPY_ON_READ; flags &= ~BDRV_REQ_COPY_ON_READ;
ret = bdrv_co_is_allocated(bs, offset, bytes, &pnum); ret = bdrv_is_allocated(bs, offset, bytes, &pnum);
if (ret < 0) { if (ret < 0) {
goto out; goto out;
} }
@@ -2029,7 +2003,7 @@ bdrv_co_write_req_prepare(BdrvChild *child, int64_t offset, int64_t bytes,
} }
} }
static inline void coroutine_fn GRAPH_RDLOCK static inline void coroutine_fn
bdrv_co_write_req_finish(BdrvChild *child, int64_t offset, int64_t bytes, bdrv_co_write_req_finish(BdrvChild *child, int64_t offset, int64_t bytes,
BdrvTrackedRequest *req, int ret) BdrvTrackedRequest *req, int ret)
{ {
@@ -2356,7 +2330,6 @@ int bdrv_flush_all(void)
int result = 0; int result = 0;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
/* /*
* bdrv queue is managed by record/replay, * bdrv queue is managed by record/replay,
@@ -2410,7 +2383,7 @@ int bdrv_flush_all(void)
* set to the host mapping and BDS corresponding to the guest offset. * set to the host mapping and BDS corresponding to the guest offset.
*/ */
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn GRAPH_RDLOCK
bdrv_co_do_block_status(BlockDriverState *bs, bool want_zero, bdrv_co_block_status(BlockDriverState *bs, bool want_zero,
int64_t offset, int64_t bytes, int64_t offset, int64_t bytes,
int64_t *pnum, int64_t *map, BlockDriverState **file) int64_t *pnum, int64_t *map, BlockDriverState **file)
{ {
@@ -2571,7 +2544,7 @@ bdrv_co_do_block_status(BlockDriverState *bs, bool want_zero,
if (ret & BDRV_BLOCK_RAW) { if (ret & BDRV_BLOCK_RAW) {
assert(ret & BDRV_BLOCK_OFFSET_VALID && local_file); assert(ret & BDRV_BLOCK_OFFSET_VALID && local_file);
ret = bdrv_co_do_block_status(local_file, want_zero, local_map, ret = bdrv_co_block_status(local_file, want_zero, local_map,
*pnum, pnum, &local_map, &local_file); *pnum, pnum, &local_map, &local_file);
goto out; goto out;
} }
@@ -2599,7 +2572,7 @@ bdrv_co_do_block_status(BlockDriverState *bs, bool want_zero,
int64_t file_pnum; int64_t file_pnum;
int ret2; int ret2;
ret2 = bdrv_co_do_block_status(local_file, want_zero, local_map, ret2 = bdrv_co_block_status(local_file, want_zero, local_map,
*pnum, &file_pnum, NULL, NULL); *pnum, &file_pnum, NULL, NULL);
if (ret2 >= 0) { if (ret2 >= 0) {
/* Ignore errors. This is just providing extra information, it /* Ignore errors. This is just providing extra information, it
@@ -2667,8 +2640,7 @@ bdrv_co_common_block_status_above(BlockDriverState *bs,
return 0; return 0;
} }
ret = bdrv_co_do_block_status(bs, want_zero, offset, bytes, pnum, ret = bdrv_co_block_status(bs, want_zero, offset, bytes, pnum, map, file);
map, file);
++*depth; ++*depth;
if (ret < 0 || *pnum == 0 || ret & BDRV_BLOCK_ALLOCATED || bs == base) { if (ret < 0 || *pnum == 0 || ret & BDRV_BLOCK_ALLOCATED || bs == base) {
return ret; return ret;
@@ -2684,8 +2656,8 @@ bdrv_co_common_block_status_above(BlockDriverState *bs,
for (p = bdrv_filter_or_cow_bs(bs); include_base || p != base; for (p = bdrv_filter_or_cow_bs(bs); include_base || p != base;
p = bdrv_filter_or_cow_bs(p)) p = bdrv_filter_or_cow_bs(p))
{ {
ret = bdrv_co_do_block_status(p, want_zero, offset, bytes, pnum, ret = bdrv_co_block_status(p, want_zero, offset, bytes, pnum, map,
map, file); file);
++*depth; ++*depth;
if (ret < 0) { if (ret < 0) {
return ret; return ret;
@@ -2751,12 +2723,20 @@ int coroutine_fn bdrv_co_block_status_above(BlockDriverState *bs,
bytes, pnum, map, file, NULL); bytes, pnum, map, file, NULL);
} }
int coroutine_fn bdrv_co_block_status(BlockDriverState *bs, int64_t offset, int bdrv_block_status_above(BlockDriverState *bs, BlockDriverState *base,
int64_t bytes, int64_t *pnum, int64_t offset, int64_t bytes, int64_t *pnum,
int64_t *map, BlockDriverState **file) int64_t *map, BlockDriverState **file)
{ {
IO_CODE(); IO_CODE();
return bdrv_co_block_status_above(bs, bdrv_filter_or_cow_bs(bs), return bdrv_common_block_status_above(bs, base, false, true, offset, bytes,
pnum, map, file, NULL);
}
int bdrv_block_status(BlockDriverState *bs, int64_t offset, int64_t bytes,
int64_t *pnum, int64_t *map, BlockDriverState **file)
{
IO_CODE();
return bdrv_block_status_above(bs, bdrv_filter_or_cow_bs(bs),
offset, bytes, pnum, map, file); offset, bytes, pnum, map, file);
} }
@@ -2804,6 +2784,45 @@ int coroutine_fn bdrv_co_is_allocated(BlockDriverState *bs, int64_t offset,
return !!(ret & BDRV_BLOCK_ALLOCATED); return !!(ret & BDRV_BLOCK_ALLOCATED);
} }
int bdrv_is_allocated(BlockDriverState *bs, int64_t offset, int64_t bytes,
int64_t *pnum)
{
int ret;
int64_t dummy;
IO_CODE();
ret = bdrv_common_block_status_above(bs, bs, true, false, offset,
bytes, pnum ? pnum : &dummy, NULL,
NULL, NULL);
if (ret < 0) {
return ret;
}
return !!(ret & BDRV_BLOCK_ALLOCATED);
}
/* See bdrv_is_allocated_above for documentation */
int coroutine_fn bdrv_co_is_allocated_above(BlockDriverState *top,
BlockDriverState *base,
bool include_base, int64_t offset,
int64_t bytes, int64_t *pnum)
{
int depth;
int ret;
IO_CODE();
ret = bdrv_co_common_block_status_above(top, base, include_base, false,
offset, bytes, pnum, NULL, NULL,
&depth);
if (ret < 0) {
return ret;
}
if (ret & BDRV_BLOCK_ALLOCATED) {
return depth;
}
return 0;
}
/* /*
* Given an image chain: ... -> [BASE] -> [INTER1] -> [INTER2] -> [TOP] * Given an image chain: ... -> [BASE] -> [INTER1] -> [INTER2] -> [TOP]
* *
@@ -2821,7 +2840,7 @@ int coroutine_fn bdrv_co_is_allocated(BlockDriverState *bs, int64_t offset,
* words, the result is not necessarily the maximum possible range); * words, the result is not necessarily the maximum possible range);
* but 'pnum' will only be 0 when end of file is reached. * but 'pnum' will only be 0 when end of file is reached.
*/ */
int coroutine_fn bdrv_co_is_allocated_above(BlockDriverState *bs, int bdrv_is_allocated_above(BlockDriverState *top,
BlockDriverState *base, BlockDriverState *base,
bool include_base, int64_t offset, bool include_base, int64_t offset,
int64_t bytes, int64_t *pnum) int64_t bytes, int64_t *pnum)
@@ -2830,7 +2849,7 @@ int coroutine_fn bdrv_co_is_allocated_above(BlockDriverState *bs,
int ret; int ret;
IO_CODE(); IO_CODE();
ret = bdrv_co_common_block_status_above(bs, base, include_base, false, ret = bdrv_common_block_status_above(top, base, include_base, false,
offset, bytes, pnum, NULL, NULL, offset, bytes, pnum, NULL, NULL,
&depth); &depth);
if (ret < 0) { if (ret < 0) {
@@ -3532,13 +3551,9 @@ int coroutine_fn bdrv_co_copy_range(BdrvChild *src, int64_t src_offset,
bytes, read_flags, write_flags); bytes, read_flags, write_flags);
} }
static void coroutine_fn GRAPH_RDLOCK static void bdrv_parent_cb_resize(BlockDriverState *bs)
bdrv_parent_cb_resize(BlockDriverState *bs)
{ {
BdrvChild *c; BdrvChild *c;
assert_bdrv_graph_readable();
QLIST_FOREACH(c, &bs->parents, next_parent) { QLIST_FOREACH(c, &bs->parents, next_parent) {
if (c->klass->resize) { if (c->klass->resize) {
c->klass->resize(c); c->klass->resize(c);
@@ -3685,8 +3700,6 @@ out:
void bdrv_cancel_in_flight(BlockDriverState *bs) void bdrv_cancel_in_flight(BlockDriverState *bs)
{ {
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!bs || !bs->drv) { if (!bs || !bs->drv) {
return; return;
} }


@@ -15,7 +15,6 @@
#include "block/block.h" #include "block/block.h"
#include "block/raw-aio.h" #include "block/raw-aio.h"
#include "qemu/coroutine.h" #include "qemu/coroutine.h"
#include "qemu/defer-call.h"
#include "qapi/error.h" #include "qapi/error.h"
#include "sysemu/block-backend.h" #include "sysemu/block-backend.h"
#include "trace.h" #include "trace.h"
@@ -125,9 +124,6 @@ static void luring_process_completions(LuringState *s)
{ {
struct io_uring_cqe *cqes; struct io_uring_cqe *cqes;
int total_bytes; int total_bytes;
defer_call_begin();
/* /*
* Request completion callbacks can run the nested event loop. * Request completion callbacks can run the nested event loop.
* Schedule ourselves so the nested event loop will "see" remaining * Schedule ourselves so the nested event loop will "see" remaining
@@ -220,10 +216,7 @@ end:
aio_co_wake(luringcb->co); aio_co_wake(luringcb->co);
} }
} }
qemu_bh_cancel(s->completion_bh); qemu_bh_cancel(s->completion_bh);
defer_call_end();
} }
static int ioq_submit(LuringState *s) static int ioq_submit(LuringState *s)
@@ -313,7 +306,7 @@ static void ioq_init(LuringQueue *io_q)
io_q->blocked = false; io_q->blocked = false;
} }
static void luring_deferred_fn(void *opaque) static void luring_unplug_fn(void *opaque)
{ {
LuringState *s = opaque; LuringState *s = opaque;
trace_luring_unplug_fn(s, s->io_q.blocked, s->io_q.in_queue, trace_luring_unplug_fn(s, s->io_q.blocked, s->io_q.in_queue,
@@ -374,7 +367,7 @@ static int luring_do_submit(int fd, LuringAIOCB *luringcb, LuringState *s,
return ret; return ret;
} }
defer_call(luring_deferred_fn, s); blk_io_plug_call(luring_unplug_fn, s);
} }
return 0; return 0;
} }


@@ -1925,9 +1925,7 @@ static int iscsi_open(BlockDriverState *bs, QDict *options, int flags,
/* Check the write protect flag of the LUN if we want to write */ /* Check the write protect flag of the LUN if we want to write */
if (iscsilun->type == TYPE_DISK && (flags & BDRV_O_RDWR) && if (iscsilun->type == TYPE_DISK && (flags & BDRV_O_RDWR) &&
iscsilun->write_protected) { iscsilun->write_protected) {
bdrv_graph_rdlock_main_loop();
ret = bdrv_apply_auto_read_only(bs, "LUN is write protected", errp); ret = bdrv_apply_auto_read_only(bs, "LUN is write protected", errp);
bdrv_graph_rdunlock_main_loop();
if (ret < 0) { if (ret < 0) {
goto out; goto out;
} }


@@ -14,7 +14,6 @@
#include "block/raw-aio.h" #include "block/raw-aio.h"
#include "qemu/event_notifier.h" #include "qemu/event_notifier.h"
#include "qemu/coroutine.h" #include "qemu/coroutine.h"
#include "qemu/defer-call.h"
#include "qapi/error.h" #include "qapi/error.h"
#include "sysemu/block-backend.h" #include "sysemu/block-backend.h"
@@ -205,8 +204,6 @@ static void qemu_laio_process_completions(LinuxAioState *s)
{ {
struct io_event *events; struct io_event *events;
defer_call_begin();
/* Reschedule so nested event loops see currently pending completions */ /* Reschedule so nested event loops see currently pending completions */
qemu_bh_schedule(s->completion_bh); qemu_bh_schedule(s->completion_bh);
@@ -233,8 +230,6 @@ static void qemu_laio_process_completions(LinuxAioState *s)
* own `for` loop. If we are the last all counters dropped to zero. */ * own `for` loop. If we are the last all counters dropped to zero. */
s->event_max = 0; s->event_max = 0;
s->event_idx = 0; s->event_idx = 0;
defer_call_end();
} }
static void qemu_laio_process_completions_and_submit(LinuxAioState *s) static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
@@ -358,7 +353,7 @@ static uint64_t laio_max_batch(LinuxAioState *s, uint64_t dev_max_batch)
return max_batch; return max_batch;
} }
static void laio_deferred_fn(void *opaque) static void laio_unplug_fn(void *opaque)
{ {
LinuxAioState *s = opaque; LinuxAioState *s = opaque;
@@ -398,7 +393,7 @@ static int laio_do_submit(int fd, struct qemu_laiocb *laiocb, off_t offset,
if (s->io_q.in_queue >= laio_max_batch(s, dev_max_batch)) { if (s->io_q.in_queue >= laio_max_batch(s, dev_max_batch)) {
ioq_submit(s); ioq_submit(s);
} else { } else {
defer_call(laio_deferred_fn, s); blk_io_plug_call(laio_unplug_fn, s);
} }
} }


@@ -21,6 +21,7 @@ block_ss.add(files(
'mirror.c', 'mirror.c',
'nbd.c', 'nbd.c',
'null.c', 'null.c',
'plug.c',
'preallocate.c', 'preallocate.c',
'progress_meter.c', 'progress_meter.c',
'qapi.c', 'qapi.c',


@@ -55,18 +55,10 @@ typedef struct MirrorBlockJob {
BlockMirrorBackingMode backing_mode; BlockMirrorBackingMode backing_mode;
/* Whether the target image requires explicit zero-initialization */ /* Whether the target image requires explicit zero-initialization */
bool zero_target; bool zero_target;
/*
* To be accessed with atomics. Written only under the BQL (required by the
* current implementation of mirror_change()).
*/
MirrorCopyMode copy_mode; MirrorCopyMode copy_mode;
BlockdevOnError on_source_error, on_target_error; BlockdevOnError on_source_error, on_target_error;
/*
* To be accessed with atomics.
*
* Set when the target is synced (dirty bitmap is clean, nothing in flight)
* and the job is running in active mode.
*/
/* Set when the target is synced (dirty bitmap is clean, nothing
* in flight) and the job is running in active mode */
bool actively_synced; bool actively_synced;
bool should_complete; bool should_complete;
int64_t granularity; int64_t granularity;
@@ -130,7 +122,7 @@ typedef enum MirrorMethod {
static BlockErrorAction mirror_error_action(MirrorBlockJob *s, bool read, static BlockErrorAction mirror_error_action(MirrorBlockJob *s, bool read,
int error) int error)
{ {
qatomic_set(&s->actively_synced, false); s->actively_synced = false;
if (read) { if (read) {
return block_job_error_action(&s->common, s->on_source_error, return block_job_error_action(&s->common, s->on_source_error,
true, error); true, error);
@@ -479,7 +471,7 @@ static unsigned mirror_perform(MirrorBlockJob *s, int64_t offset,
return bytes_handled; return bytes_handled;
} }
static void coroutine_fn GRAPH_RDLOCK mirror_iteration(MirrorBlockJob *s) static void coroutine_fn mirror_iteration(MirrorBlockJob *s)
{ {
BlockDriverState *source = s->mirror_top_bs->backing->bs; BlockDriverState *source = s->mirror_top_bs->backing->bs;
MirrorOp *pseudo_op; MirrorOp *pseudo_op;
@@ -567,7 +559,7 @@ static void coroutine_fn GRAPH_RDLOCK mirror_iteration(MirrorBlockJob *s)
assert(!(offset % s->granularity)); assert(!(offset % s->granularity));
WITH_GRAPH_RDLOCK_GUARD() { WITH_GRAPH_RDLOCK_GUARD() {
ret = bdrv_co_block_status_above(source, NULL, offset, ret = bdrv_block_status_above(source, NULL, offset,
nb_chunks * s->granularity, nb_chunks * s->granularity,
&io_bytes, NULL, NULL); &io_bytes, NULL, NULL);
} }
@@ -678,7 +670,6 @@ static int mirror_exit_common(Job *job)
s->prepared = true; s->prepared = true;
aio_context_acquire(qemu_get_aio_context()); aio_context_acquire(qemu_get_aio_context());
bdrv_graph_rdlock_main_loop();
mirror_top_bs = s->mirror_top_bs; mirror_top_bs = s->mirror_top_bs;
bs_opaque = mirror_top_bs->opaque; bs_opaque = mirror_top_bs->opaque;
@@ -697,8 +688,6 @@ static int mirror_exit_common(Job *job)
bdrv_ref(mirror_top_bs); bdrv_ref(mirror_top_bs);
bdrv_ref(target_bs); bdrv_ref(target_bs);
bdrv_graph_rdunlock_main_loop();
/* /*
* Remove target parent that still uses BLK_PERM_WRITE/RESIZE before * Remove target parent that still uses BLK_PERM_WRITE/RESIZE before
* inserting target_bs at s->to_replace, where we might not be able to get * inserting target_bs at s->to_replace, where we might not be able to get
@@ -712,12 +701,12 @@ static int mirror_exit_common(Job *job)
* these permissions any more means that we can't allow any new requests on * these permissions any more means that we can't allow any new requests on
* mirror_top_bs from now on, so keep it drained. */ * mirror_top_bs from now on, so keep it drained. */
bdrv_drained_begin(mirror_top_bs); bdrv_drained_begin(mirror_top_bs);
bdrv_drained_begin(target_bs);
bs_opaque->stop = true; bs_opaque->stop = true;
bdrv_graph_rdlock_main_loop(); bdrv_graph_rdlock_main_loop();
bdrv_child_refresh_perms(mirror_top_bs, mirror_top_bs->backing, bdrv_child_refresh_perms(mirror_top_bs, mirror_top_bs->backing,
&error_abort); &error_abort);
bdrv_graph_rdunlock_main_loop();
if (!abort && s->backing_mode == MIRROR_SOURCE_BACKING_CHAIN) { if (!abort && s->backing_mode == MIRROR_SOURCE_BACKING_CHAIN) {
BlockDriverState *backing = s->is_none_mode ? src : s->base; BlockDriverState *backing = s->is_none_mode ? src : s->base;
@@ -740,7 +729,6 @@ static int mirror_exit_common(Job *job)
local_err = NULL; local_err = NULL;
} }
} }
bdrv_graph_rdunlock_main_loop();
if (s->to_replace) { if (s->to_replace) {
replace_aio_context = bdrv_get_aio_context(s->to_replace); replace_aio_context = bdrv_get_aio_context(s->to_replace);
@@ -758,13 +746,15 @@ static int mirror_exit_common(Job *job)
/* The mirror job has no requests in flight any more, but we need to /* The mirror job has no requests in flight any more, but we need to
* drain potential other users of the BDS before changing the graph. */ * drain potential other users of the BDS before changing the graph. */
assert(s->in_drain); assert(s->in_drain);
bdrv_drained_begin(to_replace); bdrv_drained_begin(target_bs);
/* /*
* Cannot use check_to_replace_node() here, because that would * Cannot use check_to_replace_node() here, because that would
* check for an op blocker on @to_replace, and we have our own * check for an op blocker on @to_replace, and we have our own
* there. * there.
*
* TODO Pull out the writer lock from bdrv_replace_node() to here
*/ */
bdrv_graph_wrlock(target_bs); bdrv_graph_rdlock_main_loop();
if (bdrv_recurse_can_replace(src, to_replace)) { if (bdrv_recurse_can_replace(src, to_replace)) {
bdrv_replace_node(to_replace, target_bs, &local_err); bdrv_replace_node(to_replace, target_bs, &local_err);
} else { } else {
@@ -773,8 +763,8 @@ static int mirror_exit_common(Job *job)
"would not lead to an abrupt change of visible data", "would not lead to an abrupt change of visible data",
to_replace->node_name, target_bs->node_name); to_replace->node_name, target_bs->node_name);
} }
bdrv_graph_wrunlock(); bdrv_graph_rdunlock_main_loop();
bdrv_drained_end(to_replace); bdrv_drained_end(target_bs);
if (local_err) { if (local_err) {
error_report_err(local_err); error_report_err(local_err);
ret = -EPERM; ret = -EPERM;
@@ -789,6 +779,7 @@ static int mirror_exit_common(Job *job)
aio_context_release(replace_aio_context); aio_context_release(replace_aio_context);
} }
g_free(s->replaces); g_free(s->replaces);
bdrv_unref(target_bs);
/* /*
* Remove the mirror filter driver from the graph. Before this, get rid of * Remove the mirror filter driver from the graph. Before this, get rid of
@@ -796,12 +787,7 @@ static int mirror_exit_common(Job *job)
* valid. * valid.
*/ */
block_job_remove_all_bdrv(bjob); block_job_remove_all_bdrv(bjob);
bdrv_graph_wrlock(mirror_top_bs);
bdrv_replace_node(mirror_top_bs, mirror_top_bs->backing->bs, &error_abort); bdrv_replace_node(mirror_top_bs, mirror_top_bs->backing->bs, &error_abort);
bdrv_graph_wrunlock();
bdrv_drained_end(target_bs);
bdrv_unref(target_bs);
bs_opaque->job = NULL; bs_opaque->job = NULL;
@@ -839,18 +825,14 @@ static void coroutine_fn mirror_throttle(MirrorBlockJob *s)
} }
} }
static int coroutine_fn GRAPH_UNLOCKED mirror_dirty_init(MirrorBlockJob *s) static int coroutine_fn mirror_dirty_init(MirrorBlockJob *s)
{ {
int64_t offset; int64_t offset;
BlockDriverState *bs; BlockDriverState *bs = s->mirror_top_bs->backing->bs;
BlockDriverState *target_bs = blk_bs(s->target); BlockDriverState *target_bs = blk_bs(s->target);
int ret; int ret;
int64_t count; int64_t count;
bdrv_graph_co_rdlock();
bs = s->mirror_top_bs->backing->bs;
bdrv_graph_co_rdunlock();
if (s->zero_target) { if (s->zero_target) {
if (!bdrv_can_write_zeroes_with_unmap(target_bs)) { if (!bdrv_can_write_zeroes_with_unmap(target_bs)) {
bdrv_set_dirty_bitmap(s->dirty_bitmap, 0, s->bdev_length); bdrv_set_dirty_bitmap(s->dirty_bitmap, 0, s->bdev_length);
@@ -897,7 +879,7 @@ static int coroutine_fn GRAPH_UNLOCKED mirror_dirty_init(MirrorBlockJob *s)
} }
WITH_GRAPH_RDLOCK_GUARD() { WITH_GRAPH_RDLOCK_GUARD() {
ret = bdrv_co_is_allocated_above(bs, s->base_overlay, true, offset, ret = bdrv_is_allocated_above(bs, s->base_overlay, true, offset,
bytes, &count); bytes, &count);
} }
if (ret < 0) { if (ret < 0) {
@@ -930,7 +912,7 @@ static int coroutine_fn mirror_flush(MirrorBlockJob *s)
static int coroutine_fn mirror_run(Job *job, Error **errp) static int coroutine_fn mirror_run(Job *job, Error **errp)
{ {
MirrorBlockJob *s = container_of(job, MirrorBlockJob, common.job); MirrorBlockJob *s = container_of(job, MirrorBlockJob, common.job);
BlockDriverState *bs; BlockDriverState *bs = s->mirror_top_bs->backing->bs;
MirrorBDSOpaque *mirror_top_opaque = s->mirror_top_bs->opaque; MirrorBDSOpaque *mirror_top_opaque = s->mirror_top_bs->opaque;
BlockDriverState *target_bs = blk_bs(s->target); BlockDriverState *target_bs = blk_bs(s->target);
bool need_drain = true; bool need_drain = true;
@@ -942,10 +924,6 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
checking for a NULL string */ checking for a NULL string */
int ret = 0; int ret = 0;
bdrv_graph_co_rdlock();
bs = bdrv_filter_bs(s->mirror_top_bs);
bdrv_graph_co_rdunlock();
if (job_is_cancelled(&s->common.job)) { if (job_is_cancelled(&s->common.job)) {
goto immediate_exit; goto immediate_exit;
} }
@@ -984,7 +962,7 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
if (s->bdev_length == 0) { if (s->bdev_length == 0) {
/* Transition to the READY state and wait for complete. */ /* Transition to the READY state and wait for complete. */
job_transition_to_ready(&s->common.job); job_transition_to_ready(&s->common.job);
qatomic_set(&s->actively_synced, true); s->actively_synced = true;
while (!job_cancel_requested(&s->common.job) && !s->should_complete) { while (!job_cancel_requested(&s->common.job) && !s->should_complete) {
job_yield(&s->common.job); job_yield(&s->common.job);
} }
@@ -1006,13 +984,13 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
} else { } else {
s->target_cluster_size = BDRV_SECTOR_SIZE; s->target_cluster_size = BDRV_SECTOR_SIZE;
} }
bdrv_graph_co_rdunlock();
if (backing_filename[0] && !bdrv_backing_chain_next(target_bs) && if (backing_filename[0] && !bdrv_backing_chain_next(target_bs) &&
s->granularity < s->target_cluster_size) { s->granularity < s->target_cluster_size) {
s->buf_size = MAX(s->buf_size, s->target_cluster_size); s->buf_size = MAX(s->buf_size, s->target_cluster_size);
s->cow_bitmap = bitmap_new(length); s->cow_bitmap = bitmap_new(length);
} }
s->max_iov = MIN(bs->bl.max_iov, target_bs->bl.max_iov); s->max_iov = MIN(bs->bl.max_iov, target_bs->bl.max_iov);
bdrv_graph_co_rdunlock();
s->buf = qemu_try_blockalign(bs, s->buf_size); s->buf = qemu_try_blockalign(bs, s->buf_size);
if (s->buf == NULL) { if (s->buf == NULL) {
@@ -1078,9 +1056,7 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
mirror_wait_for_free_in_flight_slot(s); mirror_wait_for_free_in_flight_slot(s);
continue; continue;
} else if (cnt != 0) { } else if (cnt != 0) {
bdrv_graph_co_rdlock();
mirror_iteration(s); mirror_iteration(s);
bdrv_graph_co_rdunlock();
} }
} }
@@ -1098,9 +1074,9 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
* the target in a consistent state. * the target in a consistent state.
*/ */
job_transition_to_ready(&s->common.job); job_transition_to_ready(&s->common.job);
if (s->copy_mode != MIRROR_COPY_MODE_BACKGROUND) {
s->actively_synced = true;
} }
if (qatomic_read(&s->copy_mode) != MIRROR_COPY_MODE_BACKGROUND) {
qatomic_set(&s->actively_synced, true);
} }
should_complete = s->should_complete || should_complete = s->should_complete ||
@@ -1270,48 +1246,6 @@ static bool commit_active_cancel(Job *job, bool force)
return force || !job_is_ready(job); return force || !job_is_ready(job);
} }
static void mirror_change(BlockJob *job, BlockJobChangeOptions *opts,
Error **errp)
{
MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
BlockJobChangeOptionsMirror *change_opts = &opts->u.mirror;
MirrorCopyMode current;
/*
* The implementation relies on the fact that copy_mode is only written
* under the BQL. Otherwise, further synchronization would be required.
*/
GLOBAL_STATE_CODE();
if (qatomic_read(&s->copy_mode) == change_opts->copy_mode) {
return;
}
if (change_opts->copy_mode != MIRROR_COPY_MODE_WRITE_BLOCKING) {
error_setg(errp, "Change to copy mode '%s' is not implemented",
MirrorCopyMode_str(change_opts->copy_mode));
return;
}
current = qatomic_cmpxchg(&s->copy_mode, MIRROR_COPY_MODE_BACKGROUND,
change_opts->copy_mode);
if (current != MIRROR_COPY_MODE_BACKGROUND) {
error_setg(errp, "Expected current copy mode '%s', got '%s'",
MirrorCopyMode_str(MIRROR_COPY_MODE_BACKGROUND),
MirrorCopyMode_str(current));
}
}
static void mirror_query(BlockJob *job, BlockJobInfo *info)
{
MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
info->u.mirror = (BlockJobInfoMirror) {
.actively_synced = qatomic_read(&s->actively_synced),
};
}
static const BlockJobDriver mirror_job_driver = { static const BlockJobDriver mirror_job_driver = {
.job_driver = { .job_driver = {
.instance_size = sizeof(MirrorBlockJob), .instance_size = sizeof(MirrorBlockJob),
@@ -1326,8 +1260,6 @@ static const BlockJobDriver mirror_job_driver = {
.cancel = mirror_cancel, .cancel = mirror_cancel,
}, },
.drained_poll = mirror_drained_poll, .drained_poll = mirror_drained_poll,
.change = mirror_change,
.query = mirror_query,
}; };
static const BlockJobDriver commit_active_job_driver = { static const BlockJobDriver commit_active_job_driver = {
@@ -1446,7 +1378,7 @@ do_sync_target_write(MirrorBlockJob *job, MirrorMethod method,
bitmap_end = QEMU_ALIGN_UP(offset + bytes, job->granularity); bitmap_end = QEMU_ALIGN_UP(offset + bytes, job->granularity);
bdrv_set_dirty_bitmap(job->dirty_bitmap, bitmap_offset, bdrv_set_dirty_bitmap(job->dirty_bitmap, bitmap_offset,
bitmap_end - bitmap_offset); bitmap_end - bitmap_offset);
qatomic_set(&job->actively_synced, false); job->actively_synced = false;
action = mirror_error_action(job, false, -ret); action = mirror_error_action(job, false, -ret);
if (action == BLOCK_ERROR_ACTION_REPORT) { if (action == BLOCK_ERROR_ACTION_REPORT) {
@@ -1505,8 +1437,7 @@ static void coroutine_fn GRAPH_RDLOCK active_write_settle(MirrorOp *op)
uint64_t end_chunk = DIV_ROUND_UP(op->offset + op->bytes, uint64_t end_chunk = DIV_ROUND_UP(op->offset + op->bytes,
op->s->granularity); op->s->granularity);
if (!--op->s->in_active_write_counter && if (!--op->s->in_active_write_counter && op->s->actively_synced) {
qatomic_read(&op->s->actively_synced)) {
BdrvChild *source = op->s->mirror_top_bs->backing; BdrvChild *source = op->s->mirror_top_bs->backing;
if (QLIST_FIRST(&source->bs->parents) == source && if (QLIST_FIRST(&source->bs->parents) == source &&
@@ -1532,21 +1463,21 @@ bdrv_mirror_top_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
return bdrv_co_preadv(bs->backing, offset, bytes, qiov, flags); return bdrv_co_preadv(bs->backing, offset, bytes, qiov, flags);
} }
static bool should_copy_to_target(MirrorBDSOpaque *s)
{
return s->job && s->job->ret >= 0 &&
!job_is_cancelled(&s->job->common.job) &&
qatomic_read(&s->job->copy_mode) == MIRROR_COPY_MODE_WRITE_BLOCKING;
}
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn GRAPH_RDLOCK
bdrv_mirror_top_do_write(BlockDriverState *bs, MirrorMethod method, bdrv_mirror_top_do_write(BlockDriverState *bs, MirrorMethod method,
bool copy_to_target, uint64_t offset, uint64_t bytes, uint64_t offset, uint64_t bytes, QEMUIOVector *qiov,
QEMUIOVector *qiov, int flags) int flags)
{ {
MirrorOp *op = NULL; MirrorOp *op = NULL;
MirrorBDSOpaque *s = bs->opaque; MirrorBDSOpaque *s = bs->opaque;
int ret = 0; int ret = 0;
bool copy_to_target = false;
if (s->job) {
copy_to_target = s->job->ret >= 0 &&
!job_is_cancelled(&s->job->common.job) &&
s->job->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING;
}
if (copy_to_target) { if (copy_to_target) {
op = active_write_prepare(s->job, offset, bytes); op = active_write_prepare(s->job, offset, bytes);
@@ -1569,11 +1500,6 @@ bdrv_mirror_top_do_write(BlockDriverState *bs, MirrorMethod method,
abort(); abort();
} }
if (!copy_to_target && s->job && s->job->dirty_bitmap) {
qatomic_set(&s->job->actively_synced, false);
bdrv_set_dirty_bitmap(s->job->dirty_bitmap, offset, bytes);
}
if (ret < 0) { if (ret < 0) {
goto out; goto out;
} }
@@ -1593,10 +1519,17 @@ static int coroutine_fn GRAPH_RDLOCK
bdrv_mirror_top_pwritev(BlockDriverState *bs, int64_t offset, int64_t bytes, bdrv_mirror_top_pwritev(BlockDriverState *bs, int64_t offset, int64_t bytes,
QEMUIOVector *qiov, BdrvRequestFlags flags) QEMUIOVector *qiov, BdrvRequestFlags flags)
{ {
MirrorBDSOpaque *s = bs->opaque;
QEMUIOVector bounce_qiov; QEMUIOVector bounce_qiov;
void *bounce_buf; void *bounce_buf;
int ret = 0; int ret = 0;
bool copy_to_target = should_copy_to_target(bs->opaque); bool copy_to_target = false;
if (s->job) {
copy_to_target = s->job->ret >= 0 &&
!job_is_cancelled(&s->job->common.job) &&
s->job->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING;
}
if (copy_to_target) { if (copy_to_target) {
/* The guest might concurrently modify the data to write; but /* The guest might concurrently modify the data to write; but
@@ -1613,8 +1546,8 @@ bdrv_mirror_top_pwritev(BlockDriverState *bs, int64_t offset, int64_t bytes,
flags &= ~BDRV_REQ_REGISTERED_BUF; flags &= ~BDRV_REQ_REGISTERED_BUF;
} }
ret = bdrv_mirror_top_do_write(bs, MIRROR_METHOD_COPY, copy_to_target, ret = bdrv_mirror_top_do_write(bs, MIRROR_METHOD_COPY, offset, bytes, qiov,
offset, bytes, qiov, flags); flags);
if (copy_to_target) { if (copy_to_target) {
qemu_iovec_destroy(&bounce_qiov); qemu_iovec_destroy(&bounce_qiov);
@@ -1637,20 +1570,18 @@ static int coroutine_fn GRAPH_RDLOCK
bdrv_mirror_top_pwrite_zeroes(BlockDriverState *bs, int64_t offset, bdrv_mirror_top_pwrite_zeroes(BlockDriverState *bs, int64_t offset,
int64_t bytes, BdrvRequestFlags flags) int64_t bytes, BdrvRequestFlags flags)
{ {
-    bool copy_to_target = should_copy_to_target(bs->opaque);
-    return bdrv_mirror_top_do_write(bs, MIRROR_METHOD_ZERO, copy_to_target,
-                                    offset, bytes, NULL, flags);
+    return bdrv_mirror_top_do_write(bs, MIRROR_METHOD_ZERO, offset, bytes, NULL,
+                                    flags);
 }
 
 static int coroutine_fn GRAPH_RDLOCK
 bdrv_mirror_top_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
 {
-    bool copy_to_target = should_copy_to_target(bs->opaque);
-    return bdrv_mirror_top_do_write(bs, MIRROR_METHOD_DISCARD, copy_to_target,
-                                    offset, bytes, NULL, 0);
+    return bdrv_mirror_top_do_write(bs, MIRROR_METHOD_DISCARD, offset, bytes,
+                                    NULL, 0);
 }
 
-static void GRAPH_RDLOCK bdrv_mirror_top_refresh_filename(BlockDriverState *bs)
+static void bdrv_mirror_top_refresh_filename(BlockDriverState *bs)
 {
     if (bs->backing == NULL) {
         /* we can be here after failed bdrv_attach_child in
@@ -1760,15 +1691,12 @@ static BlockJob *mirror_start_job(
         buf_size = DEFAULT_MIRROR_BUF_SIZE;
     }
 
-    bdrv_graph_rdlock_main_loop();
     if (bdrv_skip_filters(bs) == bdrv_skip_filters(target)) {
         error_setg(errp, "Can't mirror node into itself");
-        bdrv_graph_rdunlock_main_loop();
         return NULL;
     }
 
     target_is_backing = bdrv_chain_contains(bs, target);
-    bdrv_graph_rdunlock_main_loop();
 
     /* In the case of active commit, add dummy driver to provide consistent
      * reads on the top, while disabling it in the intermediate nodes, and make
@@ -1851,20 +1779,15 @@ static BlockJob *mirror_start_job(
         }
 
         target_shared_perms |= BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE;
-    } else {
-        bdrv_graph_rdlock_main_loop();
-        if (bdrv_chain_contains(bs, bdrv_skip_filters(target))) {
-            /*
-             * We may want to allow this in the future, but it would
-             * require taking some extra care.
-             */
-            error_setg(errp, "Cannot mirror to a filter on top of a node in "
-                       "the source's backing chain");
-            bdrv_graph_rdunlock_main_loop();
-            goto fail;
-        }
-        bdrv_graph_rdunlock_main_loop();
-    }
+    } else if (bdrv_chain_contains(bs, bdrv_skip_filters(target))) {
+        /*
+         * We may want to allow this in the future, but it would
+         * require taking some extra care.
+         */
+        error_setg(errp, "Cannot mirror to a filter on top of a node in the "
+                   "source's backing chain");
+        goto fail;
+    }
 
     s->target = blk_new(s->common.job.aio_context,
                         target_perms, target_shared_perms);
@@ -1884,14 +1807,13 @@ static BlockJob *mirror_start_job(
     blk_set_allow_aio_context_change(s->target, true);
     blk_set_disable_request_queuing(s->target, true);
 
-    bdrv_graph_rdlock_main_loop();
     s->replaces = g_strdup(replaces);
     s->on_source_error = on_source_error;
     s->on_target_error = on_target_error;
     s->is_none_mode = is_none_mode;
     s->backing_mode = backing_mode;
     s->zero_target = zero_target;
-    qatomic_set(&s->copy_mode, copy_mode);
+    s->copy_mode = copy_mode;
     s->base = base;
     s->base_overlay = bdrv_find_overlay(bs, base);
     s->granularity = granularity;
@@ -1900,27 +1822,20 @@
     if (auto_complete) {
         s->should_complete = true;
     }
-    bdrv_graph_rdunlock_main_loop();
 
-    s->dirty_bitmap = bdrv_create_dirty_bitmap(s->mirror_top_bs, granularity,
-                                               NULL, errp);
+    s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, NULL, errp);
     if (!s->dirty_bitmap) {
         goto fail;
     }
-
-    /*
-     * The dirty bitmap is set by bdrv_mirror_top_do_write() when not in active
-     * mode.
-     */
-    bdrv_disable_dirty_bitmap(s->dirty_bitmap);
+    if (s->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING) {
+        bdrv_disable_dirty_bitmap(s->dirty_bitmap);
+    }
 
-    bdrv_graph_wrlock(bs);
     ret = block_job_add_bdrv(&s->common, "source", bs, 0,
                              BLK_PERM_WRITE_UNCHANGED | BLK_PERM_WRITE |
                              BLK_PERM_CONSISTENT_READ,
                              errp);
     if (ret < 0) {
-        bdrv_graph_wrunlock();
         goto fail;
     }
@@ -1965,17 +1880,14 @@ static BlockJob *mirror_start_job(
             ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
                                      iter_shared_perms, errp);
             if (ret < 0) {
-                bdrv_graph_wrunlock();
                 goto fail;
             }
         }
 
         if (bdrv_freeze_backing_chain(mirror_top_bs, target, errp) < 0) {
-            bdrv_graph_wrunlock();
            goto fail;
         }
     }
-    bdrv_graph_wrunlock();
 
     QTAILQ_INIT(&s->ops_in_flight);
@@ -2000,14 +1912,11 @@ fail:
     }
 
     bs_opaque->stop = true;
-    bdrv_drained_begin(bs);
-    bdrv_graph_wrlock(bs);
-    assert(mirror_top_bs->backing->bs == bs);
+    bdrv_graph_rdlock_main_loop();
     bdrv_child_refresh_perms(mirror_top_bs, mirror_top_bs->backing,
                              &error_abort);
-    bdrv_replace_node(mirror_top_bs, bs, &error_abort);
-    bdrv_graph_wrunlock();
-    bdrv_drained_end(bs);
+    bdrv_graph_rdunlock_main_loop();
+    bdrv_replace_node(mirror_top_bs, mirror_top_bs->backing->bs, &error_abort);
 
     bdrv_unref(mirror_top_bs);
@@ -2036,12 +1945,8 @@ void mirror_start(const char *job_id, BlockDriverState *bs,
                    MirrorSyncMode_str(mode));
         return;
     }
 
-    bdrv_graph_rdlock_main_loop();
     is_none_mode = mode == MIRROR_SYNC_MODE_NONE;
     base = mode == MIRROR_SYNC_MODE_TOP ? bdrv_backing_chain_next(bs) : NULL;
-    bdrv_graph_rdunlock_main_loop();
 
     mirror_start_job(job_id, bs, creation_flags, target, replaces,
                      speed, granularity, buf_size, backing_mode, zero_target,
                      on_source_error, on_target_error, unmap, NULL, NULL,

View File

@@ -144,9 +144,6 @@ void hmp_drive_del(Monitor *mon, const QDict *qdict)
     AioContext *aio_context;
     Error *local_err = NULL;
 
-    GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
-
     bs = bdrv_find_node(id);
     if (bs) {
         qmp_blockdev_del(id, &local_err);
@@ -206,9 +203,6 @@ void hmp_commit(Monitor *mon, const QDict *qdict)
     BlockBackend *blk;
     int ret;
 
-    GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
-
     if (!strcmp(device, "all")) {
         ret = blk_commit_all();
     } else {
@@ -849,7 +843,7 @@ void hmp_info_block_jobs(Monitor *mon, const QDict *qdict)
     }
 
     while (list) {
-        if (list->value->type == JOB_TYPE_STREAM) {
+        if (strcmp(list->value->type, "stream") == 0) {
             monitor_printf(mon, "Streaming device %s: Completed %" PRId64
                            " of %" PRId64 " bytes, speed limit %" PRId64
                            " bytes/s\n",
@@ -861,7 +855,7 @@ void hmp_info_block_jobs(Monitor *mon, const QDict *qdict)
             monitor_printf(mon, "Type %s, device %s: Completed %" PRId64
                            " of %" PRId64 " bytes, speed limit %" PRId64
                            " bytes/s\n",
-                           JobType_str(list->value->type),
+                           list->value->type,
                            list->value->device,
                            list->value->offset,
                            list->value->len,
@@ -902,8 +896,6 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
     SnapshotEntry *snapshot_entry;
     Error *err = NULL;
 
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
-
     bs = bdrv_all_find_vmstate_bs(NULL, false, NULL, &err);
     if (!bs) {
         error_report_err(err);

View File

@@ -275,8 +275,7 @@ static bool nbd_client_will_reconnect(BDRVNBDState *s)
  * Return failure if the server's advertised options are incompatible with the
  * client's needs.
  */
-static int coroutine_fn GRAPH_RDLOCK
-nbd_handle_updated_info(BlockDriverState *bs, Error **errp)
+static int nbd_handle_updated_info(BlockDriverState *bs, Error **errp)
 {
     BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
     int ret;

View File

@@ -843,7 +843,7 @@ static void nfs_refresh_filename(BlockDriverState *bs)
     }
 }
 
-static char * GRAPH_RDLOCK nfs_dirname(BlockDriverState *bs, Error **errp)
+static char *nfs_dirname(BlockDriverState *bs, Error **errp)
 {
     NFSClient *client = bs->opaque;

View File

@@ -16,7 +16,6 @@
 #include "qapi/error.h"
 #include "qapi/qmp/qdict.h"
 #include "qapi/qmp/qstring.h"
-#include "qemu/defer-call.h"
 #include "qemu/error-report.h"
 #include "qemu/main-loop.h"
 #include "qemu/module.h"
@@ -417,10 +416,9 @@ static bool nvme_process_completion(NVMeQueuePair *q)
             q->cq_phase = !q->cq_phase;
         }
         cid = le16_to_cpu(c->cid);
-        if (cid == 0 || cid > NVME_NUM_REQS) {
-            warn_report("NVMe: Unexpected CID in completion queue: %" PRIu32
-                        ", should be within: 1..%u inclusively", cid,
-                        NVME_NUM_REQS);
+        if (cid == 0 || cid > NVME_QUEUE_SIZE) {
+            warn_report("NVMe: Unexpected CID in completion queue: %"PRIu32", "
+                        "queue size: %u", cid, NVME_QUEUE_SIZE);
             continue;
         }
         trace_nvme_complete_command(s, q->index, cid);
@@ -478,7 +476,7 @@ static void nvme_trace_command(const NvmeCmd *cmd)
     }
 }
 
-static void nvme_deferred_fn(void *opaque)
+static void nvme_unplug_fn(void *opaque)
 {
     NVMeQueuePair *q = opaque;
@@ -505,7 +503,7 @@ static void nvme_submit_command(NVMeQueuePair *q, NVMeRequest *req,
     q->need_kick++;
     qemu_mutex_unlock(&q->lock);
 
-    defer_call(nvme_deferred_fn, q);
+    blk_io_plug_call(nvme_unplug_fn, q);
 }
 
 static void nvme_admin_cmd_sync_cb(void *opaque, int ret)

View File

@@ -59,9 +59,10 @@ typedef struct ParallelsDirtyBitmapFeature {
} QEMU_PACKED ParallelsDirtyBitmapFeature; } QEMU_PACKED ParallelsDirtyBitmapFeature;
/* Given L1 table read bitmap data from the image and populate @bitmap */ /* Given L1 table read bitmap data from the image and populate @bitmap */
static int GRAPH_RDLOCK static int parallels_load_bitmap_data(BlockDriverState *bs,
parallels_load_bitmap_data(BlockDriverState *bs, const uint64_t *l1_table, const uint64_t *l1_table,
uint32_t l1_size, BdrvDirtyBitmap *bitmap, uint32_t l1_size,
BdrvDirtyBitmap *bitmap,
Error **errp) Error **errp)
{ {
BDRVParallelsState *s = bs->opaque; BDRVParallelsState *s = bs->opaque;
@@ -119,8 +120,9 @@ finish:
* @data buffer (of @data_size size) is the Dirty bitmaps feature which * @data buffer (of @data_size size) is the Dirty bitmaps feature which
* consists of ParallelsDirtyBitmapFeature followed by L1 table. * consists of ParallelsDirtyBitmapFeature followed by L1 table.
*/ */
static BdrvDirtyBitmap * GRAPH_RDLOCK static BdrvDirtyBitmap *parallels_load_bitmap(BlockDriverState *bs,
parallels_load_bitmap(BlockDriverState *bs, uint8_t *data, size_t data_size, uint8_t *data,
size_t data_size,
Error **errp) Error **errp)
{ {
int ret; int ret;
@@ -128,7 +130,7 @@ parallels_load_bitmap(BlockDriverState *bs, uint8_t *data, size_t data_size,
g_autofree uint64_t *l1_table = NULL; g_autofree uint64_t *l1_table = NULL;
BdrvDirtyBitmap *bitmap; BdrvDirtyBitmap *bitmap;
QemuUUID uuid; QemuUUID uuid;
char uuidstr[UUID_STR_LEN]; char uuidstr[UUID_FMT_LEN + 1];
int i; int i;
if (data_size < sizeof(bf)) { if (data_size < sizeof(bf)) {
@@ -181,9 +183,8 @@ parallels_load_bitmap(BlockDriverState *bs, uint8_t *data, size_t data_size,
return bitmap; return bitmap;
} }
static int GRAPH_RDLOCK static int parallels_parse_format_extension(BlockDriverState *bs,
parallels_parse_format_extension(BlockDriverState *bs, uint8_t *ext_cluster, uint8_t *ext_cluster, Error **errp)
Error **errp)
{ {
BDRVParallelsState *s = bs->opaque; BDRVParallelsState *s = bs->opaque;
int ret; int ret;

View File

@@ -200,7 +200,7 @@ static int mark_used(BlockDriverState *bs, unsigned long *bitmap,
* bitmap anyway, as much as we can. This information will be used for * bitmap anyway, as much as we can. This information will be used for
* error resolution. * error resolution.
*/ */
static int GRAPH_RDLOCK parallels_fill_used_bitmap(BlockDriverState *bs) static int parallels_fill_used_bitmap(BlockDriverState *bs)
{ {
BDRVParallelsState *s = bs->opaque; BDRVParallelsState *s = bs->opaque;
int64_t payload_bytes; int64_t payload_bytes;
@@ -415,9 +415,13 @@ parallels_co_flush_to_os(BlockDriverState *bs)
return 0; return 0;
} }
static int coroutine_fn GRAPH_RDLOCK
parallels_co_block_status(BlockDriverState *bs, bool want_zero, int64_t offset, static int coroutine_fn parallels_co_block_status(BlockDriverState *bs,
int64_t bytes, int64_t *pnum, int64_t *map, bool want_zero,
int64_t offset,
int64_t bytes,
int64_t *pnum,
int64_t *map,
BlockDriverState **file) BlockDriverState **file)
{ {
BDRVParallelsState *s = bs->opaque; BDRVParallelsState *s = bs->opaque;
@@ -1185,7 +1189,7 @@ static int parallels_probe(const uint8_t *buf, int buf_size,
return 0; return 0;
} }
static int GRAPH_RDLOCK parallels_update_header(BlockDriverState *bs) static int parallels_update_header(BlockDriverState *bs)
{ {
BDRVParallelsState *s = bs->opaque; BDRVParallelsState *s = bs->opaque;
unsigned size = MAX(bdrv_opt_mem_align(bs->file->bs), unsigned size = MAX(bdrv_opt_mem_align(bs->file->bs),
@@ -1255,8 +1259,6 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
return ret; return ret;
} }
GRAPH_RDLOCK_GUARD_MAINLOOP();
file_nb_sectors = bdrv_nb_sectors(bs->file->bs); file_nb_sectors = bdrv_nb_sectors(bs->file->bs);
if (file_nb_sectors < 0) { if (file_nb_sectors < 0) {
return -EINVAL; return -EINVAL;
@@ -1364,9 +1366,9 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
error_setg(&s->migration_blocker, "The Parallels format used by node '%s' " error_setg(&s->migration_blocker, "The Parallels format used by node '%s' "
"does not support live migration", "does not support live migration",
bdrv_get_device_or_node_name(bs)); bdrv_get_device_or_node_name(bs));
ret = migrate_add_blocker(s->migration_blocker, errp);
ret = migrate_add_blocker_normal(&s->migration_blocker, errp);
if (ret < 0) { if (ret < 0) {
error_setg(errp, "Migration blocker error");
goto fail; goto fail;
} }
qemu_co_mutex_init(&s->lock); qemu_co_mutex_init(&s->lock);
@@ -1401,7 +1403,7 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
ret = bdrv_check(bs, &res, BDRV_FIX_ERRORS | BDRV_FIX_LEAKS); ret = bdrv_check(bs, &res, BDRV_FIX_ERRORS | BDRV_FIX_LEAKS);
if (ret < 0) { if (ret < 0) {
error_setg_errno(errp, -ret, "Could not repair corrupted image"); error_setg_errno(errp, -ret, "Could not repair corrupted image");
migrate_del_blocker(&s->migration_blocker); migrate_del_blocker(s->migration_blocker);
goto fail; goto fail;
} }
} }
@@ -1418,6 +1420,7 @@ fail:
*/ */
parallels_free_used_bitmap(bs); parallels_free_used_bitmap(bs);
error_free(s->migration_blocker);
g_free(s->bat_dirty_bmap); g_free(s->bat_dirty_bmap);
qemu_vfree(s->header); qemu_vfree(s->header);
return ret; return ret;
@@ -1428,8 +1431,6 @@ static void parallels_close(BlockDriverState *bs)
{ {
BDRVParallelsState *s = bs->opaque; BDRVParallelsState *s = bs->opaque;
GRAPH_RDLOCK_GUARD_MAINLOOP();
if ((bs->open_flags & BDRV_O_RDWR) && !(bs->open_flags & BDRV_O_INACTIVE)) { if ((bs->open_flags & BDRV_O_RDWR) && !(bs->open_flags & BDRV_O_INACTIVE)) {
s->header->inuse = 0; s->header->inuse = 0;
parallels_update_header(bs); parallels_update_header(bs);
@@ -1444,7 +1445,8 @@ static void parallels_close(BlockDriverState *bs)
g_free(s->bat_dirty_bmap); g_free(s->bat_dirty_bmap);
qemu_vfree(s->header); qemu_vfree(s->header);
migrate_del_blocker(&s->migration_blocker); migrate_del_blocker(s->migration_blocker);
error_free(s->migration_blocker);
} }
static bool parallels_is_support_dirty_bitmaps(BlockDriverState *bs) static bool parallels_is_support_dirty_bitmaps(BlockDriverState *bs)

View File

@@ -90,8 +90,7 @@ typedef struct BDRVParallelsState {
     Error *migration_blocker;
 } BDRVParallelsState;
 
-int GRAPH_RDLOCK
-parallels_read_format_extension(BlockDriverState *bs, int64_t ext_off,
-                                Error **errp);
+int parallels_read_format_extension(BlockDriverState *bs,
+                                    int64_t ext_off, Error **errp);
 
 #endif

block/plug.c (new file, 159 lines)
View File

@@ -0,0 +1,159 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Block I/O plugging
*
* Copyright Red Hat.
*
* This API defers a function call within a blk_io_plug()/blk_io_unplug()
* section, allowing multiple calls to batch up. This is a performance
* optimization that is used in the block layer to submit several I/O requests
* at once instead of individually:
*
* blk_io_plug(); <-- start of plugged region
* ...
* blk_io_plug_call(my_func, my_obj); <-- deferred my_func(my_obj) call
* blk_io_plug_call(my_func, my_obj); <-- another
* blk_io_plug_call(my_func, my_obj); <-- another
* ...
* blk_io_unplug(); <-- end of plugged region, my_func(my_obj) is called once
*
* This code is actually generic and not tied to the block layer. If another
* subsystem needs this functionality, it could be renamed.
*/
#include "qemu/osdep.h"
#include "qemu/coroutine-tls.h"
#include "qemu/notify.h"
#include "qemu/thread.h"
#include "sysemu/block-backend.h"
/* A function call that has been deferred until unplug() */
typedef struct {
void (*fn)(void *);
void *opaque;
} UnplugFn;
/* Per-thread state */
typedef struct {
unsigned count; /* how many times has plug() been called? */
GArray *unplug_fns; /* functions to call at unplug time */
} Plug;
/* Use get_ptr_plug() to fetch this thread-local value */
QEMU_DEFINE_STATIC_CO_TLS(Plug, plug);
/* Called at thread cleanup time */
static void blk_io_plug_atexit(Notifier *n, void *value)
{
Plug *plug = get_ptr_plug();
g_array_free(plug->unplug_fns, TRUE);
}
/* This won't involve coroutines, so use __thread */
static __thread Notifier blk_io_plug_atexit_notifier;
/**
* blk_io_plug_call:
* @fn: a function pointer to be invoked
* @opaque: a user-defined argument to @fn()
*
* Call @fn(@opaque) immediately if not within a blk_io_plug()/blk_io_unplug()
* section.
*
* Otherwise defer the call until the end of the outermost
* blk_io_plug()/blk_io_unplug() section in this thread. If the same
* @fn/@opaque pair has already been deferred, it will only be called once upon
* blk_io_unplug() so that accumulated calls are batched into a single call.
*
* The caller must ensure that @opaque is not freed before @fn() is invoked.
*/
void blk_io_plug_call(void (*fn)(void *), void *opaque)
{
Plug *plug = get_ptr_plug();
/* Call immediately if we're not plugged */
if (plug->count == 0) {
fn(opaque);
return;
}
GArray *array = plug->unplug_fns;
if (!array) {
array = g_array_new(FALSE, FALSE, sizeof(UnplugFn));
plug->unplug_fns = array;
blk_io_plug_atexit_notifier.notify = blk_io_plug_atexit;
qemu_thread_atexit_add(&blk_io_plug_atexit_notifier);
}
UnplugFn *fns = (UnplugFn *)array->data;
UnplugFn new_fn = {
.fn = fn,
.opaque = opaque,
};
/*
* There won't be many, so do a linear search. If this becomes a bottleneck
* then a binary search (glib 2.62+) or different data structure could be
* used.
*/
for (guint i = 0; i < array->len; i++) {
if (memcmp(&fns[i], &new_fn, sizeof(new_fn)) == 0) {
return; /* already exists */
}
}
g_array_append_val(array, new_fn);
}
/**
* blk_io_plug: Defer blk_io_plug_call() functions until blk_io_unplug()
*
* blk_io_plug/unplug are thread-local operations. This means that multiple
* threads can simultaneously call plug/unplug, but the caller must ensure that
* each unplug() is called in the same thread of the matching plug().
*
* Nesting is supported. blk_io_plug_call() functions are only called at the
* outermost blk_io_unplug().
*/
void blk_io_plug(void)
{
Plug *plug = get_ptr_plug();
assert(plug->count < UINT32_MAX);
plug->count++;
}
/**
* blk_io_unplug: Run any pending blk_io_plug_call() functions
*
* There must have been a matching blk_io_plug() call in the same thread prior
* to this blk_io_unplug() call.
*/
void blk_io_unplug(void)
{
Plug *plug = get_ptr_plug();
assert(plug->count > 0);
if (--plug->count > 0) {
return;
}
GArray *array = plug->unplug_fns;
if (!array) {
return;
}
UnplugFn *fns = (UnplugFn *)array->data;
for (guint i = 0; i < array->len; i++) {
fns[i].fn(fns[i].opaque);
}
/*
* This resets the array without freeing memory so that appending is cheap
* in the future.
*/
g_array_set_size(array, 0);
}
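
The doc comments in this file fully describe the contract: a call made outside a plugged region runs immediately, repeated fn/opaque pairs inside a region are coalesced, and deferred functions run once at the outermost blk_io_unplug(). Below is a minimal usage sketch; MyQueue, my_queue_kick() and my_queue_submit() are hypothetical stand-ins (the real users are drivers such as block/nvme.c), and only blk_io_plug(), blk_io_plug_call() and blk_io_unplug() come from this API.

#include "qemu/osdep.h"
#include "sysemu/block-backend.h"   /* blk_io_plug(), blk_io_plug_call(), blk_io_unplug() */

/* Sketch only: this queue type and its functions are hypothetical */
typedef struct {
    unsigned queued;                 /* requests queued since the last kick */
} MyQueue;

/* Deferred callback: one doorbell write covers the whole batch */
static void my_queue_kick(void *opaque)
{
    MyQueue *q = opaque;
    /* ... single MMIO doorbell write for q->queued requests ... */
    q->queued = 0;
}

/* Per-request submission: the kick is deferred and deduplicated */
static void my_queue_submit(MyQueue *q)
{
    q->queued++;
    blk_io_plug_call(my_queue_kick, q);
}

/* Caller submitting a burst of requests */
static void my_submit_burst(MyQueue *q, unsigned nreq)
{
    blk_io_plug();                   /* start of plugged region */
    for (unsigned i = 0; i < nreq; i++) {
        my_queue_submit(q);
    }
    blk_io_unplug();                 /* my_queue_kick(q) runs once here */
}

The deduplication on the fn/opaque pair is what makes the per-request blk_io_plug_call() cheap: however many requests are queued inside the loop, the doorbell write happens only once at unplug time.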

View File

@@ -143,8 +143,6 @@ static int preallocate_open(BlockDriverState *bs, QDict *options, int flags,
BDRVPreallocateState *s = bs->opaque; BDRVPreallocateState *s = bs->opaque;
int ret; int ret;
GLOBAL_STATE_CODE();
/* /*
* s->data_end and friends should be initialized on permission update. * s->data_end and friends should be initialized on permission update.
* For this to work, mark them invalid. * For this to work, mark them invalid.
@@ -157,8 +155,6 @@ static int preallocate_open(BlockDriverState *bs, QDict *options, int flags,
return ret; return ret;
} }
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!preallocate_absorb_opts(&s->opts, options, bs->file->bs, errp)) { if (!preallocate_absorb_opts(&s->opts, options, bs->file->bs, errp)) {
return -EINVAL; return -EINVAL;
} }
@@ -173,8 +169,7 @@ static int preallocate_open(BlockDriverState *bs, QDict *options, int flags,
return 0; return 0;
} }
static int GRAPH_RDLOCK static int preallocate_truncate_to_real_size(BlockDriverState *bs, Error **errp)
preallocate_truncate_to_real_size(BlockDriverState *bs, Error **errp)
{ {
BDRVPreallocateState *s = bs->opaque; BDRVPreallocateState *s = bs->opaque;
int ret; int ret;
@@ -205,9 +200,6 @@ static void preallocate_close(BlockDriverState *bs)
{ {
BDRVPreallocateState *s = bs->opaque; BDRVPreallocateState *s = bs->opaque;
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
qemu_bh_cancel(s->drop_resize_bh); qemu_bh_cancel(s->drop_resize_bh);
qemu_bh_delete(s->drop_resize_bh); qemu_bh_delete(s->drop_resize_bh);
@@ -231,9 +223,6 @@ static int preallocate_reopen_prepare(BDRVReopenState *reopen_state,
PreallocateOpts *opts = g_new0(PreallocateOpts, 1); PreallocateOpts *opts = g_new0(PreallocateOpts, 1);
int ret; int ret;
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!preallocate_absorb_opts(opts, reopen_state->options, if (!preallocate_absorb_opts(opts, reopen_state->options,
reopen_state->bs->file->bs, errp)) { reopen_state->bs->file->bs, errp)) {
g_free(opts); g_free(opts);
@@ -294,7 +283,7 @@ static bool can_write_resize(uint64_t perm)
return (perm & BLK_PERM_WRITE) && (perm & BLK_PERM_RESIZE); return (perm & BLK_PERM_WRITE) && (perm & BLK_PERM_RESIZE);
} }
static bool GRAPH_RDLOCK has_prealloc_perms(BlockDriverState *bs) static bool has_prealloc_perms(BlockDriverState *bs)
{ {
BDRVPreallocateState *s = bs->opaque; BDRVPreallocateState *s = bs->opaque;
@@ -510,8 +499,7 @@ preallocate_co_getlength(BlockDriverState *bs)
return ret; return ret;
} }
static int GRAPH_RDLOCK static int preallocate_drop_resize(BlockDriverState *bs, Error **errp)
preallocate_drop_resize(BlockDriverState *bs, Error **errp)
{ {
BDRVPreallocateState *s = bs->opaque; BDRVPreallocateState *s = bs->opaque;
int ret; int ret;
@@ -537,16 +525,15 @@ preallocate_drop_resize(BlockDriverState *bs, Error **errp)
*/ */
s->data_end = s->file_end = s->zero_start = -EINVAL; s->data_end = s->file_end = s->zero_start = -EINVAL;
bdrv_graph_rdlock_main_loop();
bdrv_child_refresh_perms(bs, bs->file, NULL); bdrv_child_refresh_perms(bs, bs->file, NULL);
bdrv_graph_rdunlock_main_loop();
return 0; return 0;
} }
static void preallocate_drop_resize_bh(void *opaque) static void preallocate_drop_resize_bh(void *opaque)
{ {
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
/* /*
* In case of errors, we'll simply keep the exclusive lock on the image * In case of errors, we'll simply keep the exclusive lock on the image
* indefinitely. * indefinitely.
@@ -554,8 +541,8 @@ static void preallocate_drop_resize_bh(void *opaque)
preallocate_drop_resize(opaque, NULL); preallocate_drop_resize(opaque, NULL);
} }
static void GRAPH_RDLOCK static void preallocate_set_perm(BlockDriverState *bs,
preallocate_set_perm(BlockDriverState *bs, uint64_t perm, uint64_t shared) uint64_t perm, uint64_t shared)
{ {
BDRVPreallocateState *s = bs->opaque; BDRVPreallocateState *s = bs->opaque;

View File

@@ -169,16 +169,14 @@ void qmp_blockdev_close_tray(const char *device,
} }
} }
static void GRAPH_UNLOCKED static void blockdev_remove_medium(const char *device, const char *id,
blockdev_remove_medium(const char *device, const char *id, Error **errp) Error **errp)
{ {
BlockBackend *blk; BlockBackend *blk;
BlockDriverState *bs; BlockDriverState *bs;
AioContext *aio_context; AioContext *aio_context;
bool has_attached_device; bool has_attached_device;
GLOBAL_STATE_CODE();
blk = qmp_get_blk(device, id, errp); blk = qmp_get_blk(device, id, errp);
if (!blk) { if (!blk) {
return; return;
@@ -207,12 +205,9 @@ blockdev_remove_medium(const char *device, const char *id, Error **errp)
aio_context = bdrv_get_aio_context(bs); aio_context = bdrv_get_aio_context(bs);
aio_context_acquire(aio_context); aio_context_acquire(aio_context);
bdrv_graph_rdlock_main_loop();
if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_EJECT, errp)) { if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_EJECT, errp)) {
bdrv_graph_rdunlock_main_loop();
goto out; goto out;
} }
bdrv_graph_rdunlock_main_loop();
blk_remove_bs(blk); blk_remove_bs(blk);
@@ -237,7 +232,6 @@ static void qmp_blockdev_insert_anon_medium(BlockBackend *blk,
BlockDriverState *bs, Error **errp) BlockDriverState *bs, Error **errp)
{ {
Error *local_err = NULL; Error *local_err = NULL;
AioContext *ctx;
bool has_device; bool has_device;
int ret; int ret;
@@ -259,11 +253,7 @@ static void qmp_blockdev_insert_anon_medium(BlockBackend *blk,
return; return;
} }
ctx = bdrv_get_aio_context(bs);
aio_context_acquire(ctx);
ret = blk_insert_bs(blk, bs, errp); ret = blk_insert_bs(blk, bs, errp);
aio_context_release(ctx);
if (ret < 0) { if (ret < 0) {
return; return;
} }
@@ -289,8 +279,6 @@ static void blockdev_insert_medium(const char *device, const char *id,
BlockBackend *blk; BlockBackend *blk;
BlockDriverState *bs; BlockDriverState *bs;
GRAPH_RDLOCK_GUARD_MAINLOOP();
blk = qmp_get_blk(device, id, errp); blk = qmp_get_blk(device, id, errp);
if (!blk) { if (!blk) {
return; return;

View File

@@ -225,8 +225,9 @@ int bdrv_query_snapshot_info_list(BlockDriverState *bs,
  * Helper function for other query info functions. Store information about @bs
  * in @info, setting @errp on error.
  */
-static void GRAPH_RDLOCK
-bdrv_do_query_node_info(BlockDriverState *bs, BlockNodeInfo *info, Error **errp)
+static void bdrv_do_query_node_info(BlockDriverState *bs,
+                                    BlockNodeInfo *info,
+                                    Error **errp)
 {
     int64_t size;
     const char *backing_filename;
@@ -422,8 +423,8 @@ fail:
 }
 
 /* @p_info will be set only on success. */
-static void GRAPH_RDLOCK
-bdrv_query_info(BlockBackend *blk, BlockInfo **p_info, Error **errp)
+static void bdrv_query_info(BlockBackend *blk, BlockInfo **p_info,
+                            Error **errp)
 {
     BlockInfo *info = g_malloc0(sizeof(*info));
     BlockDriverState *bs = blk_bs(blk);
@@ -671,8 +672,6 @@ BlockInfoList *qmp_query_block(Error **errp)
     BlockBackend *blk;
     Error *local_err = NULL;
 
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
-
     for (blk = blk_all_next(NULL); blk; blk = blk_all_next(blk)) {
         BlockInfoList *info;

View File

@@ -124,11 +124,9 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
ret = bdrv_open_file_child(NULL, options, "file", bs, errp); ret = bdrv_open_file_child(NULL, options, "file", bs, errp);
if (ret < 0) { if (ret < 0) {
goto fail_unlocked; goto fail;
} }
bdrv_graph_rdlock_main_loop();
ret = bdrv_pread(bs->file, 0, sizeof(header), &header, 0); ret = bdrv_pread(bs->file, 0, sizeof(header), &header, 0);
if (ret < 0) { if (ret < 0) {
goto fail; goto fail;
@@ -306,21 +304,18 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
error_setg(&s->migration_blocker, "The qcow format used by node '%s' " error_setg(&s->migration_blocker, "The qcow format used by node '%s' "
"does not support live migration", "does not support live migration",
bdrv_get_device_or_node_name(bs)); bdrv_get_device_or_node_name(bs));
ret = migrate_add_blocker(s->migration_blocker, errp);
ret = migrate_add_blocker_normal(&s->migration_blocker, errp);
if (ret < 0) { if (ret < 0) {
error_free(s->migration_blocker);
goto fail; goto fail;
} }
qobject_unref(encryptopts); qobject_unref(encryptopts);
qapi_free_QCryptoBlockOpenOptions(crypto_opts); qapi_free_QCryptoBlockOpenOptions(crypto_opts);
qemu_co_mutex_init(&s->lock); qemu_co_mutex_init(&s->lock);
bdrv_graph_rdunlock_main_loop();
return 0; return 0;
fail: fail:
bdrv_graph_rdunlock_main_loop();
fail_unlocked:
g_free(s->l1_table); g_free(s->l1_table);
qemu_vfree(s->l2_cache); qemu_vfree(s->l2_cache);
g_free(s->cluster_cache); g_free(s->cluster_cache);
@@ -804,7 +799,8 @@ static void qcow_close(BlockDriverState *bs)
g_free(s->cluster_cache); g_free(s->cluster_cache);
g_free(s->cluster_data); g_free(s->cluster_data);
migrate_del_blocker(&s->migration_blocker); migrate_del_blocker(s->migration_blocker);
error_free(s->migration_blocker);
} }
static int coroutine_fn GRAPH_UNLOCKED static int coroutine_fn GRAPH_UNLOCKED
@@ -1027,7 +1023,7 @@ fail:
return ret; return ret;
} }
static int GRAPH_RDLOCK qcow_make_empty(BlockDriverState *bs) static int qcow_make_empty(BlockDriverState *bs)
{ {
BDRVQcowState *s = bs->opaque; BDRVQcowState *s = bs->opaque;
uint32_t l1_length = s->l1_size * sizeof(uint64_t); uint32_t l1_length = s->l1_size * sizeof(uint64_t);

View File

@@ -105,7 +105,7 @@ static inline bool can_write(BlockDriverState *bs)
return !bdrv_is_read_only(bs) && !(bdrv_get_flags(bs) & BDRV_O_INACTIVE); return !bdrv_is_read_only(bs) && !(bdrv_get_flags(bs) & BDRV_O_INACTIVE);
} }
static int GRAPH_RDLOCK update_header_sync(BlockDriverState *bs) static int update_header_sync(BlockDriverState *bs)
{ {
int ret; int ret;
@@ -156,9 +156,10 @@ static int64_t get_bitmap_bytes_needed(int64_t len, uint32_t granularity)
return DIV_ROUND_UP(num_bits, 8); return DIV_ROUND_UP(num_bits, 8);
} }
static int GRAPH_RDLOCK static int check_constraints_on_bitmap(BlockDriverState *bs,
check_constraints_on_bitmap(BlockDriverState *bs, const char *name, const char *name,
uint32_t granularity, Error **errp) uint32_t granularity,
Error **errp)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
int granularity_bits = ctz32(granularity); int granularity_bits = ctz32(granularity);
@@ -203,8 +204,7 @@ check_constraints_on_bitmap(BlockDriverState *bs, const char *name,
return 0; return 0;
} }
static void GRAPH_RDLOCK static void clear_bitmap_table(BlockDriverState *bs, uint64_t *bitmap_table,
clear_bitmap_table(BlockDriverState *bs, uint64_t *bitmap_table,
uint32_t bitmap_table_size) uint32_t bitmap_table_size)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
@@ -221,8 +221,7 @@ clear_bitmap_table(BlockDriverState *bs, uint64_t *bitmap_table,
} }
} }
static int GRAPH_RDLOCK static int bitmap_table_load(BlockDriverState *bs, Qcow2BitmapTable *tb,
bitmap_table_load(BlockDriverState *bs, Qcow2BitmapTable *tb,
uint64_t **bitmap_table) uint64_t **bitmap_table)
{ {
int ret; int ret;
@@ -260,8 +259,7 @@ fail:
return ret; return ret;
} }
static int GRAPH_RDLOCK static int free_bitmap_clusters(BlockDriverState *bs, Qcow2BitmapTable *tb)
free_bitmap_clusters(BlockDriverState *bs, Qcow2BitmapTable *tb)
{ {
int ret; int ret;
uint64_t *bitmap_table; uint64_t *bitmap_table;
@@ -552,9 +550,8 @@ static uint32_t bitmap_list_count(Qcow2BitmapList *bm_list)
* Get bitmap list from qcow2 image. Actually reads bitmap directory, * Get bitmap list from qcow2 image. Actually reads bitmap directory,
* checks it and convert to bitmap list. * checks it and convert to bitmap list.
*/ */
static Qcow2BitmapList * GRAPH_RDLOCK static Qcow2BitmapList *bitmap_list_load(BlockDriverState *bs, uint64_t offset,
bitmap_list_load(BlockDriverState *bs, uint64_t offset, uint64_t size, uint64_t size, Error **errp)
Error **errp)
{ {
int ret; int ret;
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
@@ -733,8 +730,7 @@ out:
* Store bitmap list to qcow2 image as a bitmap directory. * Store bitmap list to qcow2 image as a bitmap directory.
* Everything is checked. * Everything is checked.
*/ */
static int GRAPH_RDLOCK static int bitmap_list_store(BlockDriverState *bs, Qcow2BitmapList *bm_list,
bitmap_list_store(BlockDriverState *bs, Qcow2BitmapList *bm_list,
uint64_t *offset, uint64_t *size, bool in_place) uint64_t *offset, uint64_t *size, bool in_place)
{ {
int ret; int ret;
@@ -833,8 +829,7 @@ fail:
* Bitmap List end * Bitmap List end
*/ */
static int GRAPH_RDLOCK static int update_ext_header_and_dir_in_place(BlockDriverState *bs,
update_ext_header_and_dir_in_place(BlockDriverState *bs,
Qcow2BitmapList *bm_list) Qcow2BitmapList *bm_list)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
@@ -882,8 +877,8 @@ update_ext_header_and_dir_in_place(BlockDriverState *bs,
*/ */
} }
static int GRAPH_RDLOCK static int update_ext_header_and_dir(BlockDriverState *bs,
update_ext_header_and_dir(BlockDriverState *bs, Qcow2BitmapList *bm_list) Qcow2BitmapList *bm_list)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
int ret; int ret;
@@ -963,7 +958,7 @@ static void set_readonly_helper(gpointer bitmap, gpointer value)
* If header_updated is not NULL then it is set appropriately regardless of * If header_updated is not NULL then it is set appropriately regardless of
* the return value. * the return value.
*/ */
bool coroutine_fn bool coroutine_fn GRAPH_RDLOCK
qcow2_load_dirty_bitmaps(BlockDriverState *bs, qcow2_load_dirty_bitmaps(BlockDriverState *bs,
bool *header_updated, Error **errp) bool *header_updated, Error **errp)
{ {
@@ -1276,8 +1271,8 @@ out:
/* store_bitmap_data() /* store_bitmap_data()
* Store bitmap to image, filling bitmap table accordingly. * Store bitmap to image, filling bitmap table accordingly.
*/ */
static uint64_t * GRAPH_RDLOCK static uint64_t *store_bitmap_data(BlockDriverState *bs,
store_bitmap_data(BlockDriverState *bs, BdrvDirtyBitmap *bitmap, BdrvDirtyBitmap *bitmap,
uint32_t *bitmap_table_size, Error **errp) uint32_t *bitmap_table_size, Error **errp)
{ {
int ret; int ret;
@@ -1375,8 +1370,7 @@ fail:
* Store bm->dirty_bitmap to qcow2. * Store bm->dirty_bitmap to qcow2.
* Set bm->table_offset and bm->table_size accordingly. * Set bm->table_offset and bm->table_size accordingly.
*/ */
static int GRAPH_RDLOCK static int store_bitmap(BlockDriverState *bs, Qcow2Bitmap *bm, Error **errp)
store_bitmap(BlockDriverState *bs, Qcow2Bitmap *bm, Error **errp)
{ {
int ret; int ret;
uint64_t *tb; uint64_t *tb;

View File

@@ -163,8 +163,7 @@ int qcow2_cache_destroy(Qcow2Cache *c)
     return 0;
 }
 
-static int GRAPH_RDLOCK
-qcow2_cache_flush_dependency(BlockDriverState *bs, Qcow2Cache *c)
+static int qcow2_cache_flush_dependency(BlockDriverState *bs, Qcow2Cache *c)
 {
     int ret;
@@ -179,8 +178,7 @@ qcow2_cache_flush_dependency(BlockDriverState *bs, Qcow2Cache *c)
     return 0;
 }
 
-static int GRAPH_RDLOCK
-qcow2_cache_entry_flush(BlockDriverState *bs, Qcow2Cache *c, int i)
+static int qcow2_cache_entry_flush(BlockDriverState *bs, Qcow2Cache *c, int i)
 {
     BDRVQcow2State *s = bs->opaque;
     int ret = 0;
@@ -320,9 +318,8 @@ int qcow2_cache_empty(BlockDriverState *bs, Qcow2Cache *c)
     return 0;
 }
 
-static int GRAPH_RDLOCK
-qcow2_cache_do_get(BlockDriverState *bs, Qcow2Cache *c, uint64_t offset,
-                   void **table, bool read_from_disk)
+static int qcow2_cache_do_get(BlockDriverState *bs, Qcow2Cache *c,
+                              uint64_t offset, void **table, bool read_from_disk)
 {
     BDRVQcow2State *s = bs->opaque;
     int i;

View File

@@ -207,8 +207,7 @@ int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
* the cache is used; otherwise the L2 slice is loaded from the image * the cache is used; otherwise the L2 slice is loaded from the image
* file. * file.
*/ */
static int GRAPH_RDLOCK static int l2_load(BlockDriverState *bs, uint64_t offset,
l2_load(BlockDriverState *bs, uint64_t offset,
uint64_t l2_offset, uint64_t **l2_slice) uint64_t l2_offset, uint64_t **l2_slice)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
@@ -270,7 +269,7 @@ int qcow2_write_l1_entry(BlockDriverState *bs, int l1_index)
* *
*/ */
static int GRAPH_RDLOCK l2_allocate(BlockDriverState *bs, int l1_index) static int l2_allocate(BlockDriverState *bs, int l1_index)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
uint64_t old_l2_offset; uint64_t old_l2_offset;
@@ -391,9 +390,10 @@ fail:
* If the L2 entry is invalid return -errno and set @type to * If the L2 entry is invalid return -errno and set @type to
* QCOW2_SUBCLUSTER_INVALID. * QCOW2_SUBCLUSTER_INVALID.
*/ */
static int GRAPH_RDLOCK static int qcow2_get_subcluster_range_type(BlockDriverState *bs,
qcow2_get_subcluster_range_type(BlockDriverState *bs, uint64_t l2_entry, uint64_t l2_entry,
uint64_t l2_bitmap, unsigned sc_from, uint64_t l2_bitmap,
unsigned sc_from,
QCow2SubclusterType *type) QCow2SubclusterType *type)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
@@ -441,8 +441,7 @@ qcow2_get_subcluster_range_type(BlockDriverState *bs, uint64_t l2_entry,
* On failure return -errno and update @l2_index to point to the * On failure return -errno and update @l2_index to point to the
* invalid entry. * invalid entry.
*/ */
static int GRAPH_RDLOCK static int count_contiguous_subclusters(BlockDriverState *bs, int nb_clusters,
count_contiguous_subclusters(BlockDriverState *bs, int nb_clusters,
unsigned sc_index, uint64_t *l2_slice, unsigned sc_index, uint64_t *l2_slice,
unsigned *l2_index) unsigned *l2_index)
{ {
@@ -752,9 +751,9 @@ fail:
* *
* Returns 0 on success, -errno in failure case * Returns 0 on success, -errno in failure case
*/ */
static int GRAPH_RDLOCK static int get_cluster_table(BlockDriverState *bs, uint64_t offset,
get_cluster_table(BlockDriverState *bs, uint64_t offset, uint64_t **new_l2_slice,
uint64_t **new_l2_slice, int *new_l2_index) int *new_l2_index)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
unsigned int l2_index; unsigned int l2_index;
@@ -1156,10 +1155,11 @@ void coroutine_fn qcow2_alloc_cluster_abort(BlockDriverState *bs, QCowL2Meta *m)
* *
* Returns 0 on success, -errno on failure. * Returns 0 on success, -errno on failure.
*/ */
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn calculate_l2_meta(BlockDriverState *bs,
calculate_l2_meta(BlockDriverState *bs, uint64_t host_cluster_offset, uint64_t host_cluster_offset,
uint64_t guest_offset, unsigned bytes, uint64_t *l2_slice, uint64_t guest_offset, unsigned bytes,
QCowL2Meta **m, bool keep_old) uint64_t *l2_slice, QCowL2Meta **m,
bool keep_old)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
int sc_index, l2_index = offset_to_l2_slice_index(s, guest_offset); int sc_index, l2_index = offset_to_l2_slice_index(s, guest_offset);
@@ -1329,8 +1329,7 @@ calculate_l2_meta(BlockDriverState *bs, uint64_t host_cluster_offset,
* requires a new allocation (that is, if the cluster is unallocated * requires a new allocation (that is, if the cluster is unallocated
* or has refcount > 1 and therefore cannot be written in-place). * or has refcount > 1 and therefore cannot be written in-place).
*/ */
static bool GRAPH_RDLOCK static bool cluster_needs_new_alloc(BlockDriverState *bs, uint64_t l2_entry)
cluster_needs_new_alloc(BlockDriverState *bs, uint64_t l2_entry)
{ {
switch (qcow2_get_cluster_type(bs, l2_entry)) { switch (qcow2_get_cluster_type(bs, l2_entry)) {
case QCOW2_CLUSTER_NORMAL: case QCOW2_CLUSTER_NORMAL:
@@ -1361,9 +1360,9 @@ cluster_needs_new_alloc(BlockDriverState *bs, uint64_t l2_entry)
* allocated and can be overwritten in-place (this includes clusters * allocated and can be overwritten in-place (this includes clusters
* of type QCOW2_CLUSTER_ZERO_ALLOC). * of type QCOW2_CLUSTER_ZERO_ALLOC).
*/ */
static int GRAPH_RDLOCK static int count_single_write_clusters(BlockDriverState *bs, int nb_clusters,
count_single_write_clusters(BlockDriverState *bs, int nb_clusters, uint64_t *l2_slice, int l2_index,
uint64_t *l2_slice, int l2_index, bool new_alloc) bool new_alloc)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
uint64_t l2_entry = get_l2_entry(s, l2_slice, l2_index); uint64_t l2_entry = get_l2_entry(s, l2_slice, l2_index);
@@ -1491,9 +1490,9 @@ static int coroutine_fn handle_dependencies(BlockDriverState *bs,
* *
* -errno: in error cases * -errno: in error cases
*/ */
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn handle_copied(BlockDriverState *bs,
handle_copied(BlockDriverState *bs, uint64_t guest_offset, uint64_t guest_offset, uint64_t *host_offset, uint64_t *bytes,
uint64_t *host_offset, uint64_t *bytes, QCowL2Meta **m) QCowL2Meta **m)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
int l2_index; int l2_index;
@@ -1601,9 +1600,10 @@ out:
* function has been waiting for another request and the allocation must be * function has been waiting for another request and the allocation must be
* restarted, but the whole request should not be failed. * restarted, but the whole request should not be failed.
*/ */
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn do_alloc_cluster_offset(BlockDriverState *bs,
do_alloc_cluster_offset(BlockDriverState *bs, uint64_t guest_offset, uint64_t guest_offset,
uint64_t *host_offset, uint64_t *nb_clusters) uint64_t *host_offset,
uint64_t *nb_clusters)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
@@ -1658,9 +1658,9 @@ do_alloc_cluster_offset(BlockDriverState *bs, uint64_t guest_offset,
* *
* -errno: in error cases * -errno: in error cases
*/ */
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn handle_alloc(BlockDriverState *bs,
handle_alloc(BlockDriverState *bs, uint64_t guest_offset, uint64_t guest_offset, uint64_t *host_offset, uint64_t *bytes,
uint64_t *host_offset, uint64_t *bytes, QCowL2Meta **m) QCowL2Meta **m)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
int l2_index; int l2_index;
@@ -1898,8 +1898,8 @@ again:
* all clusters in the same L2 slice) and returns the number of discarded * all clusters in the same L2 slice) and returns the number of discarded
* clusters. * clusters.
*/ */
static int GRAPH_RDLOCK static int discard_in_l2_slice(BlockDriverState *bs, uint64_t offset,
discard_in_l2_slice(BlockDriverState *bs, uint64_t offset, uint64_t nb_clusters, uint64_t nb_clusters,
enum qcow2_discard_type type, bool full_discard) enum qcow2_discard_type type, bool full_discard)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
@@ -2037,7 +2037,7 @@ fail:
* all clusters in the same L2 slice) and returns the number of zeroed * all clusters in the same L2 slice) and returns the number of zeroed
* clusters. * clusters.
*/ */
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn
zero_in_l2_slice(BlockDriverState *bs, uint64_t offset, zero_in_l2_slice(BlockDriverState *bs, uint64_t offset,
uint64_t nb_clusters, int flags) uint64_t nb_clusters, int flags)
{ {
@@ -2062,15 +2062,9 @@ zero_in_l2_slice(BlockDriverState *bs, uint64_t offset,
QCow2ClusterType type = qcow2_get_cluster_type(bs, old_l2_entry); QCow2ClusterType type = qcow2_get_cluster_type(bs, old_l2_entry);
bool unmap = (type == QCOW2_CLUSTER_COMPRESSED) || bool unmap = (type == QCOW2_CLUSTER_COMPRESSED) ||
((flags & BDRV_REQ_MAY_UNMAP) && qcow2_cluster_is_allocated(type)); ((flags & BDRV_REQ_MAY_UNMAP) && qcow2_cluster_is_allocated(type));
bool keep_reference = uint64_t new_l2_entry = unmap ? 0 : old_l2_entry;
(s->discard_no_unref && type != QCOW2_CLUSTER_COMPRESSED);
uint64_t new_l2_entry = old_l2_entry;
uint64_t new_l2_bitmap = old_l2_bitmap; uint64_t new_l2_bitmap = old_l2_bitmap;
if (unmap && !keep_reference) {
new_l2_entry = 0;
}
if (has_subclusters(s)) { if (has_subclusters(s)) {
new_l2_bitmap = QCOW_L2_BITMAP_ALL_ZEROES; new_l2_bitmap = QCOW_L2_BITMAP_ALL_ZEROES;
} else { } else {
@@ -2088,17 +2082,9 @@ zero_in_l2_slice(BlockDriverState *bs, uint64_t offset,
set_l2_bitmap(s, l2_slice, l2_index + i, new_l2_bitmap); set_l2_bitmap(s, l2_slice, l2_index + i, new_l2_bitmap);
} }
if (unmap) {
if (!keep_reference) {
/* Then decrease the refcount */ /* Then decrease the refcount */
if (unmap) {
qcow2_free_any_cluster(bs, old_l2_entry, QCOW2_DISCARD_REQUEST); qcow2_free_any_cluster(bs, old_l2_entry, QCOW2_DISCARD_REQUEST);
} else if (s->discard_passthrough[QCOW2_DISCARD_REQUEST] &&
(type == QCOW2_CLUSTER_NORMAL ||
type == QCOW2_CLUSTER_ZERO_ALLOC)) {
/* If we keep the reference, pass on the discard still */
bdrv_pdiscard(s->data_file, old_l2_entry & L2E_OFFSET_MASK,
s->cluster_size);
}
} }
} }
@@ -2107,7 +2093,7 @@ zero_in_l2_slice(BlockDriverState *bs, uint64_t offset,
return nb_clusters; return nb_clusters;
} }
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn
zero_l2_subclusters(BlockDriverState *bs, uint64_t offset, zero_l2_subclusters(BlockDriverState *bs, uint64_t offset,
unsigned nb_subclusters) unsigned nb_subclusters)
{ {
@@ -2245,8 +2231,7 @@ fail:
* status_cb(). l1_entries contains the total number of L1 entries and * status_cb(). l1_entries contains the total number of L1 entries and
* *visited_l1_entries counts all visited L1 entries. * *visited_l1_entries counts all visited L1 entries.
*/ */
static int GRAPH_RDLOCK static int expand_zero_clusters_in_l1(BlockDriverState *bs, uint64_t *l1_table,
expand_zero_clusters_in_l1(BlockDriverState *bs, uint64_t *l1_table,
int l1_size, int64_t *visited_l1_entries, int l1_size, int64_t *visited_l1_entries,
int64_t l1_entries, int64_t l1_entries,
BlockDriverAmendStatusCB *status_cb, BlockDriverAmendStatusCB *status_cb,

View File

@@ -229,8 +229,8 @@ static void set_refcount_ro6(void *refcount_array, uint64_t index,
} }
static int GRAPH_RDLOCK static int load_refcount_block(BlockDriverState *bs,
load_refcount_block(BlockDriverState *bs, int64_t refcount_block_offset, int64_t refcount_block_offset,
void **refcount_block) void **refcount_block)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
@@ -302,9 +302,8 @@ static int in_same_refcount_block(BDRVQcow2State *s, uint64_t offset_a,
* *
* Returns 0 on success or -errno in error case * Returns 0 on success or -errno in error case
*/ */
static int GRAPH_RDLOCK static int alloc_refcount_block(BlockDriverState *bs,
alloc_refcount_block(BlockDriverState *bs, int64_t cluster_index, int64_t cluster_index, void **refcount_block)
void **refcount_block)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
unsigned int refcount_table_index; unsigned int refcount_table_index;
@@ -807,9 +806,12 @@ found:
/* XXX: cache several refcount block clusters ? */ /* XXX: cache several refcount block clusters ? */
/* @addend is the absolute value of the addend; if @decrease is set, @addend /* @addend is the absolute value of the addend; if @decrease is set, @addend
* will be subtracted from the current refcount, otherwise it will be added */ * will be subtracted from the current refcount, otherwise it will be added */
static int GRAPH_RDLOCK static int update_refcount(BlockDriverState *bs,
update_refcount(BlockDriverState *bs, int64_t offset, int64_t length, int64_t offset,
uint64_t addend, bool decrease, enum qcow2_discard_type type) int64_t length,
uint64_t addend,
bool decrease,
enum qcow2_discard_type type)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
int64_t start, last, cluster_offset; int64_t start, last, cluster_offset;
@@ -965,8 +967,8 @@ int qcow2_update_cluster_refcount(BlockDriverState *bs,
/* return < 0 if error */ /* return < 0 if error */
static int64_t GRAPH_RDLOCK static int64_t alloc_clusters_noref(BlockDriverState *bs, uint64_t size,
alloc_clusters_noref(BlockDriverState *bs, uint64_t size, uint64_t max) uint64_t max)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
uint64_t i, nb_clusters, refcount; uint64_t i, nb_clusters, refcount;
@@ -2300,7 +2302,7 @@ calculate_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
* Compares the actual reference count for each cluster in the image against the * Compares the actual reference count for each cluster in the image against the
* refcount as reported by the refcount structures on-disk. * refcount as reported by the refcount structures on-disk.
*/ */
static void coroutine_fn GRAPH_RDLOCK static void coroutine_fn
compare_refcounts(BlockDriverState *bs, BdrvCheckResult *res, compare_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix, bool *rebuild, BdrvCheckMode fix, bool *rebuild,
int64_t *highest_cluster, int64_t *highest_cluster,
@@ -3101,8 +3103,7 @@ int qcow2_pre_write_overlap_check(BlockDriverState *bs, int ign, int64_t offset,
* *
* @allocated should be set to true if a new cluster has been allocated. * @allocated should be set to true if a new cluster has been allocated.
*/ */
typedef int /* GRAPH_RDLOCK_PTR */ typedef int (RefblockFinishOp)(BlockDriverState *bs, uint64_t **reftable,
(RefblockFinishOp)(BlockDriverState *bs, uint64_t **reftable,
uint64_t reftable_index, uint64_t *reftable_size, uint64_t reftable_index, uint64_t *reftable_size,
void *refblock, bool refblock_empty, void *refblock, bool refblock_empty,
bool *allocated, Error **errp); bool *allocated, Error **errp);
@@ -3112,8 +3113,7 @@ typedef int /* GRAPH_RDLOCK_PTR */
* it is not empty) and inserts its offset into the new reftable. The size of * it is not empty) and inserts its offset into the new reftable. The size of
* this new reftable is increased as required. * this new reftable is increased as required.
*/ */
static int GRAPH_RDLOCK static int alloc_refblock(BlockDriverState *bs, uint64_t **reftable,
alloc_refblock(BlockDriverState *bs, uint64_t **reftable,
uint64_t reftable_index, uint64_t *reftable_size, uint64_t reftable_index, uint64_t *reftable_size,
void *refblock, bool refblock_empty, bool *allocated, void *refblock, bool refblock_empty, bool *allocated,
Error **errp) Error **errp)
@@ -3166,8 +3166,7 @@ alloc_refblock(BlockDriverState *bs, uint64_t **reftable,
* offset specified by the new reftable's entry. It does not modify the new * offset specified by the new reftable's entry. It does not modify the new
* reftable or change any refcounts. * reftable or change any refcounts.
*/ */
static int GRAPH_RDLOCK static int flush_refblock(BlockDriverState *bs, uint64_t **reftable,
flush_refblock(BlockDriverState *bs, uint64_t **reftable,
uint64_t reftable_index, uint64_t *reftable_size, uint64_t reftable_index, uint64_t *reftable_size,
void *refblock, bool refblock_empty, bool *allocated, void *refblock, bool refblock_empty, bool *allocated,
Error **errp) Error **errp)
@@ -3211,8 +3210,7 @@ flush_refblock(BlockDriverState *bs, uint64_t **reftable,
* *
* @allocated is set to true if a new cluster has been allocated. * @allocated is set to true if a new cluster has been allocated.
*/ */
static int GRAPH_RDLOCK static int walk_over_reftable(BlockDriverState *bs, uint64_t **new_reftable,
walk_over_reftable(BlockDriverState *bs, uint64_t **new_reftable,
uint64_t *new_reftable_index, uint64_t *new_reftable_index,
uint64_t *new_reftable_size, uint64_t *new_reftable_size,
void *new_refblock, int new_refblock_size, void *new_refblock, int new_refblock_size,
@@ -3547,8 +3545,8 @@ done:
return ret; return ret;
} }
static int64_t coroutine_fn GRAPH_RDLOCK static int64_t coroutine_fn get_refblock_offset(BlockDriverState *bs,
get_refblock_offset(BlockDriverState *bs, uint64_t offset) uint64_t offset)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
uint32_t index = offset_to_reftable_index(s, offset); uint32_t index = offset_to_reftable_index(s, offset);
@@ -3567,7 +3565,7 @@ get_refblock_offset(BlockDriverState *bs, uint64_t offset)
return covering_refblock_offset; return covering_refblock_offset;
} }
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn
qcow2_discard_refcount_block(BlockDriverState *bs, uint64_t discard_block_offs) qcow2_discard_refcount_block(BlockDriverState *bs, uint64_t discard_block_offs)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;

View File

@@ -95,8 +95,7 @@ static int qcow2_probe(const uint8_t *buf, int buf_size, const char *filename)
} }
static int GRAPH_RDLOCK static int qcow2_crypto_hdr_read_func(QCryptoBlock *block, size_t offset,
qcow2_crypto_hdr_read_func(QCryptoBlock *block, size_t offset,
uint8_t *buf, size_t buflen, uint8_t *buf, size_t buflen,
void *opaque, Error **errp) void *opaque, Error **errp)
{ {
@@ -157,7 +156,7 @@ qcow2_crypto_hdr_init_func(QCryptoBlock *block, size_t headerlen, void *opaque,
/* The graph lock must be held when called in coroutine context */ /* The graph lock must be held when called in coroutine context */
static int coroutine_mixed_fn GRAPH_RDLOCK static int coroutine_mixed_fn
qcow2_crypto_hdr_write_func(QCryptoBlock *block, size_t offset, qcow2_crypto_hdr_write_func(QCryptoBlock *block, size_t offset,
const uint8_t *buf, size_t buflen, const uint8_t *buf, size_t buflen,
void *opaque, Error **errp) void *opaque, Error **errp)
@@ -537,7 +536,7 @@ int qcow2_mark_dirty(BlockDriverState *bs)
* function when there are no pending requests, it does not guard against * function when there are no pending requests, it does not guard against
* concurrent requests dirtying the image. * concurrent requests dirtying the image.
*/ */
static int GRAPH_RDLOCK qcow2_mark_clean(BlockDriverState *bs) static int qcow2_mark_clean(BlockDriverState *bs)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
@@ -571,8 +570,7 @@ int qcow2_mark_corrupt(BlockDriverState *bs)
* Marks the image as consistent, i.e., unsets the corrupt bit, and flushes * Marks the image as consistent, i.e., unsets the corrupt bit, and flushes
* before if necessary. * before if necessary.
*/ */
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn qcow2_mark_consistent(BlockDriverState *bs)
qcow2_mark_consistent(BlockDriverState *bs)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
@@ -982,9 +980,10 @@ typedef struct Qcow2ReopenState {
QCryptoBlockOpenOptions *crypto_opts; /* Disk encryption runtime options */ QCryptoBlockOpenOptions *crypto_opts; /* Disk encryption runtime options */
} Qcow2ReopenState; } Qcow2ReopenState;
static int GRAPH_RDLOCK static int qcow2_update_options_prepare(BlockDriverState *bs,
qcow2_update_options_prepare(BlockDriverState *bs, Qcow2ReopenState *r, Qcow2ReopenState *r,
QDict *options, int flags, Error **errp) QDict *options, int flags,
Error **errp)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
QemuOpts *opts = NULL; QemuOpts *opts = NULL;
@@ -1261,7 +1260,7 @@ static void qcow2_update_options_abort(BlockDriverState *bs,
qapi_free_QCryptoBlockOpenOptions(r->crypto_opts); qapi_free_QCryptoBlockOpenOptions(r->crypto_opts);
} }
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn
qcow2_update_options(BlockDriverState *bs, QDict *options, int flags, qcow2_update_options(BlockDriverState *bs, QDict *options, int flags,
Error **errp) Error **errp)
{ {
@@ -1970,17 +1969,13 @@ static void qcow2_refresh_limits(BlockDriverState *bs, Error **errp)
bs->bl.pdiscard_alignment = s->cluster_size; bs->bl.pdiscard_alignment = s->cluster_size;
} }
static int GRAPH_UNLOCKED static int qcow2_reopen_prepare(BDRVReopenState *state,
qcow2_reopen_prepare(BDRVReopenState *state,BlockReopenQueue *queue, BlockReopenQueue *queue, Error **errp)
Error **errp)
{ {
BDRVQcow2State *s = state->bs->opaque; BDRVQcow2State *s = state->bs->opaque;
Qcow2ReopenState *r; Qcow2ReopenState *r;
int ret; int ret;
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
r = g_new0(Qcow2ReopenState, 1); r = g_new0(Qcow2ReopenState, 1);
state->opaque = r; state->opaque = r;
@@ -2030,8 +2025,6 @@ static void qcow2_reopen_commit(BDRVReopenState *state)
{ {
BDRVQcow2State *s = state->bs->opaque; BDRVQcow2State *s = state->bs->opaque;
GRAPH_RDLOCK_GUARD_MAINLOOP();
qcow2_update_options_commit(state->bs, state->opaque); qcow2_update_options_commit(state->bs, state->opaque);
if (!s->data_file) { if (!s->data_file) {
/* /*
@@ -2045,8 +2038,6 @@ static void qcow2_reopen_commit(BDRVReopenState *state)
static void qcow2_reopen_commit_post(BDRVReopenState *state) static void qcow2_reopen_commit_post(BDRVReopenState *state)
{ {
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (state->flags & BDRV_O_RDWR) { if (state->flags & BDRV_O_RDWR) {
Error *local_err = NULL; Error *local_err = NULL;
@@ -2067,8 +2058,6 @@ static void qcow2_reopen_abort(BDRVReopenState *state)
{ {
BDRVQcow2State *s = state->bs->opaque; BDRVQcow2State *s = state->bs->opaque;
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!s->data_file) { if (!s->data_file) {
/* /*
* If we don't have an external data file, s->data_file was cleared by * If we don't have an external data file, s->data_file was cleared by
@@ -2742,7 +2731,7 @@ fail_nometa:
return ret; return ret;
} }
static int GRAPH_RDLOCK qcow2_inactivate(BlockDriverState *bs) static int qcow2_inactivate(BlockDriverState *bs)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
int ret, result = 0; int ret, result = 0;
@@ -2777,8 +2766,7 @@ static int GRAPH_RDLOCK qcow2_inactivate(BlockDriverState *bs)
return result; return result;
} }
static void coroutine_mixed_fn GRAPH_RDLOCK static void qcow2_do_close(BlockDriverState *bs, bool close_data_file)
qcow2_do_close(BlockDriverState *bs, bool close_data_file)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
qemu_vfree(s->l1_table); qemu_vfree(s->l1_table);
@@ -2805,24 +2793,18 @@ qcow2_do_close(BlockDriverState *bs, bool close_data_file)
g_free(s->image_backing_format); g_free(s->image_backing_format);
if (close_data_file && has_data_file(bs)) { if (close_data_file && has_data_file(bs)) {
GLOBAL_STATE_CODE();
bdrv_graph_rdunlock_main_loop();
bdrv_graph_wrlock(NULL); bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, s->data_file); bdrv_unref_child(bs, s->data_file);
bdrv_graph_wrunlock(); bdrv_graph_wrunlock();
s->data_file = NULL; s->data_file = NULL;
bdrv_graph_rdlock_main_loop();
} }
qcow2_refcount_close(bs); qcow2_refcount_close(bs);
qcow2_free_snapshots(bs); qcow2_free_snapshots(bs);
} }
static void GRAPH_UNLOCKED qcow2_close(BlockDriverState *bs) static void qcow2_close(BlockDriverState *bs)
{ {
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
qcow2_do_close(bs, true); qcow2_do_close(bs, true);
} }
@@ -3160,9 +3142,8 @@ fail:
return ret; return ret;
} }
static int coroutine_fn GRAPH_RDLOCK static int qcow2_change_backing_file(BlockDriverState *bs,
qcow2_co_change_backing_file(BlockDriverState *bs, const char *backing_file, const char *backing_file, const char *backing_fmt)
const char *backing_fmt)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
@@ -3822,11 +3803,8 @@ qcow2_co_create(BlockdevCreateOptions *create_options, Error **errp)
backing_format = BlockdevDriver_str(qcow2_opts->backing_fmt); backing_format = BlockdevDriver_str(qcow2_opts->backing_fmt);
} }
bdrv_graph_co_rdlock(); ret = bdrv_change_backing_file(blk_bs(blk), qcow2_opts->backing_file,
ret = bdrv_co_change_backing_file(blk_bs(blk), qcow2_opts->backing_file,
backing_format, false); backing_format, false);
bdrv_graph_co_rdunlock();
if (ret < 0) { if (ret < 0) {
error_setg_errno(errp, -ret, "Could not assign backing file '%s' " error_setg_errno(errp, -ret, "Could not assign backing file '%s' "
"with format '%s'", qcow2_opts->backing_file, "with format '%s'", qcow2_opts->backing_file,
@@ -4013,8 +3991,7 @@ finish:
} }
static bool coroutine_fn GRAPH_RDLOCK static bool is_zero(BlockDriverState *bs, int64_t offset, int64_t bytes)
is_zero(BlockDriverState *bs, int64_t offset, int64_t bytes)
{ {
int64_t nr; int64_t nr;
int res; int res;
@@ -4035,7 +4012,7 @@ is_zero(BlockDriverState *bs, int64_t offset, int64_t bytes)
* backing file. So, we need a loop. * backing file. So, we need a loop.
*/ */
do { do {
res = bdrv_co_block_status_above(bs, NULL, offset, bytes, &nr, NULL, NULL); res = bdrv_block_status_above(bs, NULL, offset, bytes, &nr, NULL, NULL);
offset += nr; offset += nr;
bytes -= nr; bytes -= nr;
} while (res >= 0 && (res & BDRV_BLOCK_ZERO) && nr && bytes); } while (res >= 0 && (res & BDRV_BLOCK_ZERO) && nr && bytes);
@@ -4099,8 +4076,8 @@ qcow2_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset, int64_t bytes,
return ret; return ret;
} }
static int coroutine_fn GRAPH_RDLOCK static coroutine_fn int qcow2_co_pdiscard(BlockDriverState *bs,
qcow2_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes) int64_t offset, int64_t bytes)
{ {
int ret; int ret;
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
@@ -4845,7 +4822,7 @@ fail:
return ret; return ret;
} }
static int GRAPH_RDLOCK make_completely_empty(BlockDriverState *bs) static int make_completely_empty(BlockDriverState *bs)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
Error *local_err = NULL; Error *local_err = NULL;
@@ -4996,7 +4973,7 @@ fail:
return ret; return ret;
} }
static int GRAPH_RDLOCK qcow2_make_empty(BlockDriverState *bs) static int qcow2_make_empty(BlockDriverState *bs)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
uint64_t offset, end_offset; uint64_t offset, end_offset;
@@ -5040,7 +5017,7 @@ static int GRAPH_RDLOCK qcow2_make_empty(BlockDriverState *bs)
return ret; return ret;
} }
static coroutine_fn GRAPH_RDLOCK int qcow2_co_flush_to_os(BlockDriverState *bs) static coroutine_fn int qcow2_co_flush_to_os(BlockDriverState *bs)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
int ret; int ret;
@@ -5231,8 +5208,8 @@ qcow2_co_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
return 0; return 0;
} }
static ImageInfoSpecific * GRAPH_RDLOCK static ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *bs,
qcow2_get_specific_info(BlockDriverState *bs, Error **errp) Error **errp)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
ImageInfoSpecific *spec_info; ImageInfoSpecific *spec_info;
@@ -5311,8 +5288,7 @@ qcow2_get_specific_info(BlockDriverState *bs, Error **errp)
return spec_info; return spec_info;
} }
static int coroutine_mixed_fn GRAPH_RDLOCK static int coroutine_mixed_fn qcow2_has_zero_init(BlockDriverState *bs)
qcow2_has_zero_init(BlockDriverState *bs)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
bool preallocated; bool preallocated;
@@ -5390,7 +5366,7 @@ qcow2_co_load_vmstate(BlockDriverState *bs, QEMUIOVector *qiov, int64_t pos)
return bs->drv->bdrv_co_preadv_part(bs, offset, qiov->size, qiov, 0, 0); return bs->drv->bdrv_co_preadv_part(bs, offset, qiov->size, qiov, 0, 0);
} }
static int GRAPH_RDLOCK qcow2_has_compressed_clusters(BlockDriverState *bs) static int qcow2_has_compressed_clusters(BlockDriverState *bs)
{ {
int64_t offset = 0; int64_t offset = 0;
int64_t bytes = bdrv_getlength(bs); int64_t bytes = bdrv_getlength(bs);
@@ -5426,8 +5402,7 @@ static int GRAPH_RDLOCK qcow2_has_compressed_clusters(BlockDriverState *bs)
* Downgrades an image's version. To achieve this, any incompatible features * Downgrades an image's version. To achieve this, any incompatible features
* have to be removed. * have to be removed.
*/ */
static int GRAPH_RDLOCK static int qcow2_downgrade(BlockDriverState *bs, int target_version,
qcow2_downgrade(BlockDriverState *bs, int target_version,
BlockDriverAmendStatusCB *status_cb, void *cb_opaque, BlockDriverAmendStatusCB *status_cb, void *cb_opaque,
Error **errp) Error **errp)
{ {
@@ -5537,8 +5512,7 @@ qcow2_downgrade(BlockDriverState *bs, int target_version,
* features of older versions, some things may have to be presented * features of older versions, some things may have to be presented
* differently. * differently.
*/ */
static int GRAPH_RDLOCK static int qcow2_upgrade(BlockDriverState *bs, int target_version,
qcow2_upgrade(BlockDriverState *bs, int target_version,
BlockDriverAmendStatusCB *status_cb, void *cb_opaque, BlockDriverAmendStatusCB *status_cb, void *cb_opaque,
Error **errp) Error **errp)
{ {
@@ -5666,10 +5640,11 @@ static void qcow2_amend_helper_cb(BlockDriverState *bs,
info->original_cb_opaque); info->original_cb_opaque);
} }
static int GRAPH_RDLOCK static int qcow2_amend_options(BlockDriverState *bs, QemuOpts *opts,
qcow2_amend_options(BlockDriverState *bs, QemuOpts *opts, BlockDriverAmendStatusCB *status_cb,
BlockDriverAmendStatusCB *status_cb, void *cb_opaque, void *cb_opaque,
bool force, Error **errp) bool force,
Error **errp)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
int old_version = s->qcow_version, new_version = old_version; int old_version = s->qcow_version, new_version = old_version;
@@ -6166,7 +6141,7 @@ BlockDriver bdrv_qcow2 = {
.is_format = true, .is_format = true,
.supports_backing = true, .supports_backing = true,
.bdrv_co_change_backing_file = qcow2_co_change_backing_file, .bdrv_change_backing_file = qcow2_change_backing_file,
.bdrv_refresh_limits = qcow2_refresh_limits, .bdrv_refresh_limits = qcow2_refresh_limits,
.bdrv_co_invalidate_cache = qcow2_co_invalidate_cache, .bdrv_co_invalidate_cache = qcow2_co_invalidate_cache,
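
The qcow2.c hunks above revolve around the graph-locking annotations (GRAPH_RDLOCK, GRAPH_UNLOCKED) and the main-loop guard macro. As a hedged illustration only — the driver, function and field names below are made up and are not part of this diff — the pattern they implement looks roughly like this:

#include "qemu/osdep.h"
#include "qemu/main-loop.h"
#include "block/block_int.h"
#include "block/graph-lock.h"

/*
 * Hypothetical helper: the GRAPH_RDLOCK annotation documents (and, with
 * clang thread-safety analysis, enforces) that the caller must already
 * hold the block-graph read lock when this runs.
 */
static bool GRAPH_RDLOCK demo_has_backing(BlockDriverState *bs)
{
    /* bs->backing is graph state, so it may only be read under the lock */
    return bs->backing != NULL;
}

/*
 * Hypothetical main-loop entry point (e.g. a .bdrv_close implementation,
 * as in qcow2_close above): assert main-loop context and take the read
 * lock for the whole scope, which makes calling GRAPH_RDLOCK helpers legal.
 */
static void demo_close(BlockDriverState *bs)
{
    GLOBAL_STATE_CODE();
    GRAPH_RDLOCK_GUARD_MAINLOOP();

    if (demo_has_backing(bs)) {
        /* ... flush metadata that still references the backing file ... */
    }
}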


@@ -641,7 +641,7 @@ static inline void set_l2_bitmap(BDRVQcow2State *s, uint64_t *l2_slice,
l2_slice[idx + 1] = cpu_to_be64(bitmap); l2_slice[idx + 1] = cpu_to_be64(bitmap);
} }
static inline bool GRAPH_RDLOCK has_data_file(BlockDriverState *bs) static inline bool has_data_file(BlockDriverState *bs)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
return (s->data_file != bs->file); return (s->data_file != bs->file);
@@ -709,8 +709,8 @@ static inline int64_t qcow2_vm_state_offset(BDRVQcow2State *s)
return (int64_t)s->l1_vm_state_index << (s->cluster_bits + s->l2_bits); return (int64_t)s->l1_vm_state_index << (s->cluster_bits + s->l2_bits);
} }
static inline QCow2ClusterType GRAPH_RDLOCK static inline QCow2ClusterType qcow2_get_cluster_type(BlockDriverState *bs,
qcow2_get_cluster_type(BlockDriverState *bs, uint64_t l2_entry) uint64_t l2_entry)
{ {
BDRVQcow2State *s = bs->opaque; BDRVQcow2State *s = bs->opaque;
@@ -743,7 +743,7 @@ qcow2_get_cluster_type(BlockDriverState *bs, uint64_t l2_entry)
* (this checks the whole entry and bitmap, not only the bits related * (this checks the whole entry and bitmap, not only the bits related
* to subcluster @sc_index). * to subcluster @sc_index).
*/ */
static inline GRAPH_RDLOCK static inline
QCow2SubclusterType qcow2_get_subcluster_type(BlockDriverState *bs, QCow2SubclusterType qcow2_get_subcluster_type(BlockDriverState *bs,
uint64_t l2_entry, uint64_t l2_entry,
uint64_t l2_bitmap, uint64_t l2_bitmap,
@@ -834,12 +834,11 @@ int64_t qcow2_refcount_metadata_size(int64_t clusters, size_t cluster_size,
int refcount_order, bool generous_increase, int refcount_order, bool generous_increase,
uint64_t *refblock_count); uint64_t *refblock_count);
int GRAPH_RDLOCK qcow2_mark_dirty(BlockDriverState *bs); int qcow2_mark_dirty(BlockDriverState *bs);
int GRAPH_RDLOCK qcow2_mark_corrupt(BlockDriverState *bs); int qcow2_mark_corrupt(BlockDriverState *bs);
int GRAPH_RDLOCK qcow2_update_header(BlockDriverState *bs); int qcow2_update_header(BlockDriverState *bs);
void GRAPH_RDLOCK void qcow2_signal_corruption(BlockDriverState *bs, bool fatal, int64_t offset,
qcow2_signal_corruption(BlockDriverState *bs, bool fatal, int64_t offset,
int64_t size, const char *message_format, ...) int64_t size, const char *message_format, ...)
G_GNUC_PRINTF(5, 6); G_GNUC_PRINTF(5, 6);
@@ -852,208 +851,165 @@ int qcow2_validate_table(BlockDriverState *bs, uint64_t offset,
int coroutine_fn GRAPH_RDLOCK qcow2_refcount_init(BlockDriverState *bs); int coroutine_fn GRAPH_RDLOCK qcow2_refcount_init(BlockDriverState *bs);
void qcow2_refcount_close(BlockDriverState *bs); void qcow2_refcount_close(BlockDriverState *bs);
int GRAPH_RDLOCK qcow2_get_refcount(BlockDriverState *bs, int64_t cluster_index, int qcow2_get_refcount(BlockDriverState *bs, int64_t cluster_index,
uint64_t *refcount); uint64_t *refcount);
int GRAPH_RDLOCK int qcow2_update_cluster_refcount(BlockDriverState *bs, int64_t cluster_index,
qcow2_update_cluster_refcount(BlockDriverState *bs, int64_t cluster_index,
uint64_t addend, bool decrease, uint64_t addend, bool decrease,
enum qcow2_discard_type type); enum qcow2_discard_type type);
int64_t GRAPH_RDLOCK int64_t qcow2_refcount_area(BlockDriverState *bs, uint64_t offset,
qcow2_refcount_area(BlockDriverState *bs, uint64_t offset,
uint64_t additional_clusters, bool exact_size, uint64_t additional_clusters, bool exact_size,
int new_refblock_index, int new_refblock_index,
uint64_t new_refblock_offset); uint64_t new_refblock_offset);
int64_t GRAPH_RDLOCK int64_t qcow2_alloc_clusters(BlockDriverState *bs, uint64_t size);
qcow2_alloc_clusters(BlockDriverState *bs, uint64_t size); int64_t coroutine_fn qcow2_alloc_clusters_at(BlockDriverState *bs, uint64_t offset,
int64_t GRAPH_RDLOCK coroutine_fn
qcow2_alloc_clusters_at(BlockDriverState *bs, uint64_t offset,
int64_t nb_clusters); int64_t nb_clusters);
int64_t coroutine_fn GRAPH_RDLOCK qcow2_alloc_bytes(BlockDriverState *bs, int size); int64_t coroutine_fn GRAPH_RDLOCK qcow2_alloc_bytes(BlockDriverState *bs, int size);
void GRAPH_RDLOCK qcow2_free_clusters(BlockDriverState *bs, void qcow2_free_clusters(BlockDriverState *bs,
int64_t offset, int64_t size, int64_t offset, int64_t size,
enum qcow2_discard_type type); enum qcow2_discard_type type);
void GRAPH_RDLOCK void qcow2_free_any_cluster(BlockDriverState *bs, uint64_t l2_entry,
qcow2_free_any_cluster(BlockDriverState *bs, uint64_t l2_entry,
enum qcow2_discard_type type); enum qcow2_discard_type type);
int GRAPH_RDLOCK int qcow2_update_snapshot_refcount(BlockDriverState *bs,
qcow2_update_snapshot_refcount(BlockDriverState *bs, int64_t l1_table_offset, int64_t l1_table_offset, int l1_size, int addend);
int l1_size, int addend);
int GRAPH_RDLOCK qcow2_flush_caches(BlockDriverState *bs); int qcow2_flush_caches(BlockDriverState *bs);
int GRAPH_RDLOCK qcow2_write_caches(BlockDriverState *bs); int qcow2_write_caches(BlockDriverState *bs);
int coroutine_fn qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res, int coroutine_fn qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix); BdrvCheckMode fix);
void GRAPH_RDLOCK qcow2_process_discards(BlockDriverState *bs, int ret); void qcow2_process_discards(BlockDriverState *bs, int ret);
int GRAPH_RDLOCK int qcow2_check_metadata_overlap(BlockDriverState *bs, int ign, int64_t offset,
qcow2_check_metadata_overlap(BlockDriverState *bs, int ign, int64_t offset,
int64_t size); int64_t size);
int GRAPH_RDLOCK int qcow2_pre_write_overlap_check(BlockDriverState *bs, int ign, int64_t offset,
qcow2_pre_write_overlap_check(BlockDriverState *bs, int ign, int64_t offset,
int64_t size, bool data_file); int64_t size, bool data_file);
int coroutine_fn qcow2_inc_refcounts_imrt(BlockDriverState *bs, BdrvCheckResult *res, int coroutine_fn qcow2_inc_refcounts_imrt(BlockDriverState *bs, BdrvCheckResult *res,
void **refcount_table, void **refcount_table,
int64_t *refcount_table_size, int64_t *refcount_table_size,
int64_t offset, int64_t size); int64_t offset, int64_t size);
int GRAPH_RDLOCK int qcow2_change_refcount_order(BlockDriverState *bs, int refcount_order,
qcow2_change_refcount_order(BlockDriverState *bs, int refcount_order,
BlockDriverAmendStatusCB *status_cb, BlockDriverAmendStatusCB *status_cb,
void *cb_opaque, Error **errp); void *cb_opaque, Error **errp);
int coroutine_fn GRAPH_RDLOCK qcow2_shrink_reftable(BlockDriverState *bs); int coroutine_fn GRAPH_RDLOCK qcow2_shrink_reftable(BlockDriverState *bs);
int64_t coroutine_fn qcow2_get_last_cluster(BlockDriverState *bs, int64_t size);
int64_t coroutine_fn GRAPH_RDLOCK
qcow2_get_last_cluster(BlockDriverState *bs, int64_t size);
int coroutine_fn GRAPH_RDLOCK int coroutine_fn GRAPH_RDLOCK
qcow2_detect_metadata_preallocation(BlockDriverState *bs); qcow2_detect_metadata_preallocation(BlockDriverState *bs);
/* qcow2-cluster.c functions */ /* qcow2-cluster.c functions */
int GRAPH_RDLOCK int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size, bool exact_size); bool exact_size);
int coroutine_fn GRAPH_RDLOCK int coroutine_fn GRAPH_RDLOCK
qcow2_shrink_l1_table(BlockDriverState *bs, uint64_t max_size); qcow2_shrink_l1_table(BlockDriverState *bs, uint64_t max_size);
int GRAPH_RDLOCK qcow2_write_l1_entry(BlockDriverState *bs, int l1_index); int qcow2_write_l1_entry(BlockDriverState *bs, int l1_index);
int qcow2_encrypt_sectors(BDRVQcow2State *s, int64_t sector_num, int qcow2_encrypt_sectors(BDRVQcow2State *s, int64_t sector_num,
uint8_t *buf, int nb_sectors, bool enc, Error **errp); uint8_t *buf, int nb_sectors, bool enc, Error **errp);
int GRAPH_RDLOCK int qcow2_get_host_offset(BlockDriverState *bs, uint64_t offset,
qcow2_get_host_offset(BlockDriverState *bs, uint64_t offset,
unsigned int *bytes, uint64_t *host_offset, unsigned int *bytes, uint64_t *host_offset,
QCow2SubclusterType *subcluster_type); QCow2SubclusterType *subcluster_type);
int coroutine_fn qcow2_alloc_host_offset(BlockDriverState *bs, uint64_t offset,
int coroutine_fn GRAPH_RDLOCK unsigned int *bytes,
qcow2_alloc_host_offset(BlockDriverState *bs, uint64_t offset, uint64_t *host_offset, QCowL2Meta **m);
unsigned int *bytes, uint64_t *host_offset,
QCowL2Meta **m);
int coroutine_fn GRAPH_RDLOCK int coroutine_fn GRAPH_RDLOCK
qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs, uint64_t offset, qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs, uint64_t offset,
int compressed_size, uint64_t *host_offset); int compressed_size, uint64_t *host_offset);
void GRAPH_RDLOCK void qcow2_parse_compressed_l2_entry(BlockDriverState *bs, uint64_t l2_entry,
qcow2_parse_compressed_l2_entry(BlockDriverState *bs, uint64_t l2_entry,
uint64_t *coffset, int *csize); uint64_t *coffset, int *csize);
int coroutine_fn GRAPH_RDLOCK int coroutine_fn GRAPH_RDLOCK
qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m); qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m);
void coroutine_fn GRAPH_RDLOCK void coroutine_fn qcow2_alloc_cluster_abort(BlockDriverState *bs, QCowL2Meta *m);
qcow2_alloc_cluster_abort(BlockDriverState *bs, QCowL2Meta *m); int qcow2_cluster_discard(BlockDriverState *bs, uint64_t offset,
uint64_t bytes, enum qcow2_discard_type type,
int GRAPH_RDLOCK bool full_discard);
qcow2_cluster_discard(BlockDriverState *bs, uint64_t offset, uint64_t bytes,
enum qcow2_discard_type type, bool full_discard);
int coroutine_fn GRAPH_RDLOCK int coroutine_fn GRAPH_RDLOCK
qcow2_subcluster_zeroize(BlockDriverState *bs, uint64_t offset, uint64_t bytes, qcow2_subcluster_zeroize(BlockDriverState *bs, uint64_t offset, uint64_t bytes,
int flags); int flags);
int GRAPH_RDLOCK int qcow2_expand_zero_clusters(BlockDriverState *bs,
qcow2_expand_zero_clusters(BlockDriverState *bs,
BlockDriverAmendStatusCB *status_cb, BlockDriverAmendStatusCB *status_cb,
void *cb_opaque); void *cb_opaque);
/* qcow2-snapshot.c functions */ /* qcow2-snapshot.c functions */
int GRAPH_RDLOCK int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info);
qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info); int qcow2_snapshot_goto(BlockDriverState *bs, const char *snapshot_id);
int qcow2_snapshot_delete(BlockDriverState *bs,
int GRAPH_RDLOCK const char *snapshot_id,
qcow2_snapshot_goto(BlockDriverState *bs, const char *snapshot_id); const char *name,
Error **errp);
int GRAPH_RDLOCK int qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab);
qcow2_snapshot_delete(BlockDriverState *bs, const char *snapshot_id, int qcow2_snapshot_load_tmp(BlockDriverState *bs,
const char *name, Error **errp); const char *snapshot_id,
const char *name,
int GRAPH_RDLOCK Error **errp);
qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab);
int GRAPH_RDLOCK
qcow2_snapshot_load_tmp(BlockDriverState *bs, const char *snapshot_id,
const char *name, Error **errp);
void qcow2_free_snapshots(BlockDriverState *bs); void qcow2_free_snapshots(BlockDriverState *bs);
int coroutine_fn GRAPH_RDLOCK int coroutine_fn GRAPH_RDLOCK
qcow2_read_snapshots(BlockDriverState *bs, Error **errp); qcow2_read_snapshots(BlockDriverState *bs, Error **errp);
int GRAPH_RDLOCK qcow2_write_snapshots(BlockDriverState *bs); int qcow2_write_snapshots(BlockDriverState *bs);
int coroutine_fn GRAPH_RDLOCK int coroutine_fn GRAPH_RDLOCK
qcow2_check_read_snapshot_table(BlockDriverState *bs, BdrvCheckResult *result, qcow2_check_read_snapshot_table(BlockDriverState *bs, BdrvCheckResult *result,
BdrvCheckMode fix); BdrvCheckMode fix);
int coroutine_fn GRAPH_RDLOCK int coroutine_fn qcow2_check_fix_snapshot_table(BlockDriverState *bs,
qcow2_check_fix_snapshot_table(BlockDriverState *bs, BdrvCheckResult *result, BdrvCheckResult *result,
BdrvCheckMode fix); BdrvCheckMode fix);
/* qcow2-cache.c functions */ /* qcow2-cache.c functions */
Qcow2Cache * GRAPH_RDLOCK Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables,
qcow2_cache_create(BlockDriverState *bs, int num_tables, unsigned table_size); unsigned table_size);
int qcow2_cache_destroy(Qcow2Cache *c); int qcow2_cache_destroy(Qcow2Cache *c);
void qcow2_cache_entry_mark_dirty(Qcow2Cache *c, void *table); void qcow2_cache_entry_mark_dirty(Qcow2Cache *c, void *table);
int GRAPH_RDLOCK qcow2_cache_flush(BlockDriverState *bs, Qcow2Cache *c); int qcow2_cache_flush(BlockDriverState *bs, Qcow2Cache *c);
int GRAPH_RDLOCK qcow2_cache_write(BlockDriverState *bs, Qcow2Cache *c); int qcow2_cache_write(BlockDriverState *bs, Qcow2Cache *c);
int GRAPH_RDLOCK qcow2_cache_set_dependency(BlockDriverState *bs, Qcow2Cache *c, int qcow2_cache_set_dependency(BlockDriverState *bs, Qcow2Cache *c,
Qcow2Cache *dependency); Qcow2Cache *dependency);
void qcow2_cache_depends_on_flush(Qcow2Cache *c); void qcow2_cache_depends_on_flush(Qcow2Cache *c);
void qcow2_cache_clean_unused(Qcow2Cache *c); void qcow2_cache_clean_unused(Qcow2Cache *c);
int GRAPH_RDLOCK qcow2_cache_empty(BlockDriverState *bs, Qcow2Cache *c); int qcow2_cache_empty(BlockDriverState *bs, Qcow2Cache *c);
int GRAPH_RDLOCK int qcow2_cache_get(BlockDriverState *bs, Qcow2Cache *c, uint64_t offset,
qcow2_cache_get(BlockDriverState *bs, Qcow2Cache *c, uint64_t offset,
void **table); void **table);
int qcow2_cache_get_empty(BlockDriverState *bs, Qcow2Cache *c, uint64_t offset,
int GRAPH_RDLOCK
qcow2_cache_get_empty(BlockDriverState *bs, Qcow2Cache *c, uint64_t offset,
void **table); void **table);
void qcow2_cache_put(Qcow2Cache *c, void **table); void qcow2_cache_put(Qcow2Cache *c, void **table);
void *qcow2_cache_is_table_offset(Qcow2Cache *c, uint64_t offset); void *qcow2_cache_is_table_offset(Qcow2Cache *c, uint64_t offset);
void qcow2_cache_discard(Qcow2Cache *c, void *table); void qcow2_cache_discard(Qcow2Cache *c, void *table);
/* qcow2-bitmap.c functions */ /* qcow2-bitmap.c functions */
int coroutine_fn GRAPH_RDLOCK int coroutine_fn
qcow2_check_bitmaps_refcounts(BlockDriverState *bs, BdrvCheckResult *res, qcow2_check_bitmaps_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
void **refcount_table, void **refcount_table,
int64_t *refcount_table_size); int64_t *refcount_table_size);
bool coroutine_fn GRAPH_RDLOCK bool coroutine_fn GRAPH_RDLOCK
qcow2_load_dirty_bitmaps(BlockDriverState *bs, bool *header_updated, qcow2_load_dirty_bitmaps(BlockDriverState *bs, bool *header_updated, Error **errp);
Error **errp); bool qcow2_get_bitmap_info_list(BlockDriverState *bs,
bool GRAPH_RDLOCK
qcow2_get_bitmap_info_list(BlockDriverState *bs,
Qcow2BitmapInfoList **info_list, Error **errp); Qcow2BitmapInfoList **info_list, Error **errp);
int qcow2_reopen_bitmaps_rw(BlockDriverState *bs, Error **errp);
int GRAPH_RDLOCK qcow2_reopen_bitmaps_rw(BlockDriverState *bs, Error **errp); int coroutine_fn qcow2_truncate_bitmaps_check(BlockDriverState *bs, Error **errp);
int GRAPH_RDLOCK qcow2_reopen_bitmaps_ro(BlockDriverState *bs, Error **errp); bool qcow2_store_persistent_dirty_bitmaps(BlockDriverState *bs,
bool release_stored, Error **errp);
int coroutine_fn GRAPH_RDLOCK int qcow2_reopen_bitmaps_ro(BlockDriverState *bs, Error **errp);
qcow2_truncate_bitmaps_check(BlockDriverState *bs, Error **errp); bool coroutine_fn qcow2_co_can_store_new_dirty_bitmap(BlockDriverState *bs,
const char *name,
bool GRAPH_RDLOCK uint32_t granularity,
qcow2_store_persistent_dirty_bitmaps(BlockDriverState *bs, bool release_stored,
Error **errp); Error **errp);
int coroutine_fn qcow2_co_remove_persistent_dirty_bitmap(BlockDriverState *bs,
bool coroutine_fn GRAPH_RDLOCK const char *name,
qcow2_co_can_store_new_dirty_bitmap(BlockDriverState *bs, const char *name,
uint32_t granularity, Error **errp);
int coroutine_fn GRAPH_RDLOCK
qcow2_co_remove_persistent_dirty_bitmap(BlockDriverState *bs, const char *name,
Error **errp); Error **errp);
bool qcow2_supports_persistent_dirty_bitmap(BlockDriverState *bs); bool qcow2_supports_persistent_dirty_bitmap(BlockDriverState *bs);
uint64_t qcow2_get_persistent_dirty_bitmap_size(BlockDriverState *bs, uint64_t qcow2_get_persistent_dirty_bitmap_size(BlockDriverState *bs,
uint32_t cluster_size); uint32_t cluster_size);


@@ -612,7 +612,7 @@ static int bdrv_qed_reopen_prepare(BDRVReopenState *state,
return 0; return 0;
} }
static void GRAPH_RDLOCK bdrv_qed_do_close(BlockDriverState *bs) static void bdrv_qed_close(BlockDriverState *bs)
{ {
BDRVQEDState *s = bs->opaque; BDRVQEDState *s = bs->opaque;
@@ -631,14 +631,6 @@ static void GRAPH_RDLOCK bdrv_qed_do_close(BlockDriverState *bs)
qemu_vfree(s->l1_table); qemu_vfree(s->l1_table);
} }
static void GRAPH_UNLOCKED bdrv_qed_close(BlockDriverState *bs)
{
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
bdrv_qed_do_close(bs);
}
static int coroutine_fn GRAPH_UNLOCKED static int coroutine_fn GRAPH_UNLOCKED
bdrv_qed_co_create(BlockdevCreateOptions *opts, Error **errp) bdrv_qed_co_create(BlockdevCreateOptions *opts, Error **errp)
{ {
@@ -1146,7 +1138,7 @@ out:
/** /**
* Check if the QED_F_NEED_CHECK bit should be set during allocating write * Check if the QED_F_NEED_CHECK bit should be set during allocating write
*/ */
static bool GRAPH_RDLOCK qed_should_set_need_check(BDRVQEDState *s) static bool qed_should_set_need_check(BDRVQEDState *s)
{ {
/* The flush before L2 update path ensures consistency */ /* The flush before L2 update path ensures consistency */
if (s->bs->backing) { if (s->bs->backing) {
@@ -1451,9 +1443,11 @@ bdrv_qed_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset, int64_t bytes,
QED_AIOCB_WRITE | QED_AIOCB_ZERO); QED_AIOCB_WRITE | QED_AIOCB_ZERO);
} }
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn bdrv_qed_co_truncate(BlockDriverState *bs,
bdrv_qed_co_truncate(BlockDriverState *bs, int64_t offset, bool exact, int64_t offset,
PreallocMode prealloc, BdrvRequestFlags flags, bool exact,
PreallocMode prealloc,
BdrvRequestFlags flags,
Error **errp) Error **errp)
{ {
BDRVQEDState *s = bs->opaque; BDRVQEDState *s = bs->opaque;
@@ -1504,8 +1498,8 @@ bdrv_qed_co_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
return 0; return 0;
} }
static int coroutine_fn GRAPH_RDLOCK static int bdrv_qed_change_backing_file(BlockDriverState *bs,
bdrv_qed_co_change_backing_file(BlockDriverState *bs, const char *backing_file, const char *backing_file,
const char *backing_fmt) const char *backing_fmt)
{ {
BDRVQEDState *s = bs->opaque; BDRVQEDState *s = bs->opaque;
@@ -1568,7 +1562,7 @@ bdrv_qed_co_change_backing_file(BlockDriverState *bs, const char *backing_file,
} }
/* Write new header */ /* Write new header */
ret = bdrv_co_pwrite_sync(bs->file, 0, buffer_len, buffer, 0); ret = bdrv_pwrite_sync(bs->file, 0, buffer_len, buffer, 0);
g_free(buffer); g_free(buffer);
if (ret == 0) { if (ret == 0) {
memcpy(&s->header, &new_header, sizeof(new_header)); memcpy(&s->header, &new_header, sizeof(new_header));
@@ -1582,7 +1576,7 @@ bdrv_qed_co_invalidate_cache(BlockDriverState *bs, Error **errp)
BDRVQEDState *s = bs->opaque; BDRVQEDState *s = bs->opaque;
int ret; int ret;
bdrv_qed_do_close(bs); bdrv_qed_close(bs);
bdrv_qed_init_state(bs); bdrv_qed_init_state(bs);
qemu_co_mutex_lock(&s->table_lock); qemu_co_mutex_lock(&s->table_lock);
@@ -1664,7 +1658,7 @@ static BlockDriver bdrv_qed = {
.bdrv_co_getlength = bdrv_qed_co_getlength, .bdrv_co_getlength = bdrv_qed_co_getlength,
.bdrv_co_get_info = bdrv_qed_co_get_info, .bdrv_co_get_info = bdrv_qed_co_get_info,
.bdrv_refresh_limits = bdrv_qed_refresh_limits, .bdrv_refresh_limits = bdrv_qed_refresh_limits,
.bdrv_co_change_backing_file = bdrv_qed_co_change_backing_file, .bdrv_change_backing_file = bdrv_qed_change_backing_file,
.bdrv_co_invalidate_cache = bdrv_qed_co_invalidate_cache, .bdrv_co_invalidate_cache = bdrv_qed_co_invalidate_cache,
.bdrv_co_check = bdrv_qed_co_check, .bdrv_co_check = bdrv_qed_co_check,
.bdrv_detach_aio_context = bdrv_qed_detach_aio_context, .bdrv_detach_aio_context = bdrv_qed_detach_aio_context,


@@ -185,7 +185,7 @@ enum {
/** /**
* Header functions * Header functions
*/ */
int GRAPH_RDLOCK qed_write_header_sync(BDRVQEDState *s); int qed_write_header_sync(BDRVQEDState *s);
/** /**
* L2 cache functions * L2 cache functions


@@ -206,7 +206,7 @@ static void quorum_report_bad(QuorumOpType type, uint64_t offset,
end_sector - start_sector); end_sector - start_sector);
} }
static void GRAPH_RDLOCK quorum_report_failure(QuorumAIOCB *acb) static void quorum_report_failure(QuorumAIOCB *acb)
{ {
const char *reference = bdrv_get_device_or_node_name(acb->bs); const char *reference = bdrv_get_device_or_node_name(acb->bs);
int64_t start_sector = acb->offset / BDRV_SECTOR_SIZE; int64_t start_sector = acb->offset / BDRV_SECTOR_SIZE;
@@ -219,7 +219,7 @@ static void GRAPH_RDLOCK quorum_report_failure(QuorumAIOCB *acb)
static int quorum_vote_error(QuorumAIOCB *acb); static int quorum_vote_error(QuorumAIOCB *acb);
static bool GRAPH_RDLOCK quorum_has_too_much_io_failed(QuorumAIOCB *acb) static bool quorum_has_too_much_io_failed(QuorumAIOCB *acb)
{ {
BDRVQuorumState *s = acb->bs->opaque; BDRVQuorumState *s = acb->bs->opaque;


@@ -95,9 +95,9 @@ end:
return ret; return ret;
} }
static int GRAPH_RDLOCK static int raw_apply_options(BlockDriverState *bs, BDRVRawState *s,
raw_apply_options(BlockDriverState *bs, BDRVRawState *s, uint64_t offset, uint64_t offset, bool has_size, uint64_t size,
bool has_size, uint64_t size, Error **errp) Error **errp)
{ {
int64_t real_size = 0; int64_t real_size = 0;
@@ -145,9 +145,6 @@ static int raw_reopen_prepare(BDRVReopenState *reopen_state,
uint64_t offset, size; uint64_t offset, size;
int ret; int ret;
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
assert(reopen_state != NULL); assert(reopen_state != NULL);
assert(reopen_state->bs != NULL); assert(reopen_state->bs != NULL);
@@ -282,9 +279,10 @@ fail:
return ret; return ret;
} }
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn raw_co_block_status(BlockDriverState *bs,
raw_co_block_status(BlockDriverState *bs, bool want_zero, int64_t offset, bool want_zero, int64_t offset,
int64_t bytes, int64_t *pnum, int64_t *map, int64_t bytes, int64_t *pnum,
int64_t *map,
BlockDriverState **file) BlockDriverState **file)
{ {
BDRVRawState *s = bs->opaque; BDRVRawState *s = bs->opaque;
@@ -399,7 +397,7 @@ raw_co_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
return bdrv_co_get_info(bs->file->bs, bdi); return bdrv_co_get_info(bs->file->bs, bdi);
} }
static void GRAPH_RDLOCK raw_refresh_limits(BlockDriverState *bs, Error **errp) static void raw_refresh_limits(BlockDriverState *bs, Error **errp)
{ {
bs->bl.has_variable_length = bs->file->bs->bl.has_variable_length; bs->bl.has_variable_length = bs->file->bs->bl.has_variable_length;
@@ -454,7 +452,7 @@ raw_co_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
return bdrv_co_ioctl(bs->file->bs, req, buf); return bdrv_co_ioctl(bs->file->bs, req, buf);
} }
static int GRAPH_RDLOCK raw_has_zero_init(BlockDriverState *bs) static int raw_has_zero_init(BlockDriverState *bs)
{ {
return bdrv_has_zero_init(bs->file->bs); return bdrv_has_zero_init(bs->file->bs);
} }
@@ -476,8 +474,6 @@ static int raw_open(BlockDriverState *bs, QDict *options, int flags,
BdrvChildRole file_role; BdrvChildRole file_role;
int ret; int ret;
GLOBAL_STATE_CODE();
ret = raw_read_options(options, &offset, &has_size, &size, errp); ret = raw_read_options(options, &offset, &has_size, &size, errp);
if (ret < 0) { if (ret < 0) {
return ret; return ret;
@@ -495,8 +491,6 @@ static int raw_open(BlockDriverState *bs, QDict *options, int flags,
bdrv_open_child(NULL, options, "file", bs, &child_of_bds, bdrv_open_child(NULL, options, "file", bs, &child_of_bds,
file_role, false, errp); file_role, false, errp);
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!bs->file) { if (!bs->file) {
return -EINVAL; return -EINVAL;
} }
@@ -547,8 +541,7 @@ static int raw_probe(const uint8_t *buf, int buf_size, const char *filename)
return 1; return 1;
} }
static int GRAPH_RDLOCK static int raw_probe_blocksizes(BlockDriverState *bs, BlockSizes *bsz)
raw_probe_blocksizes(BlockDriverState *bs, BlockSizes *bsz)
{ {
BDRVRawState *s = bs->opaque; BDRVRawState *s = bs->opaque;
int ret; int ret;
@@ -565,8 +558,7 @@ raw_probe_blocksizes(BlockDriverState *bs, BlockSizes *bsz)
return 0; return 0;
} }
static int GRAPH_RDLOCK static int raw_probe_geometry(BlockDriverState *bs, HDGeometry *geo)
raw_probe_geometry(BlockDriverState *bs, HDGeometry *geo)
{ {
BDRVRawState *s = bs->opaque; BDRVRawState *s = bs->opaque;
if (s->offset || s->has_size) { if (s->offset || s->has_size) {
@@ -616,7 +608,7 @@ static const char *const raw_strong_runtime_opts[] = {
NULL NULL
}; };
static void GRAPH_RDLOCK raw_cancel_in_flight(BlockDriverState *bs) static void raw_cancel_in_flight(BlockDriverState *bs)
{ {
bdrv_cancel_in_flight(bs->file->bs); bdrv_cancel_in_flight(bs->file->bs);
} }


@@ -1168,9 +1168,7 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags,
/* If we are using an rbd snapshot, we must be r/o, otherwise /* If we are using an rbd snapshot, we must be r/o, otherwise
* leave as-is */ * leave as-is */
if (s->snap != NULL) { if (s->snap != NULL) {
bdrv_graph_rdlock_main_loop();
r = bdrv_apply_auto_read_only(bs, "rbd snapshots are read-only", errp); r = bdrv_apply_auto_read_only(bs, "rbd snapshots are read-only", errp);
bdrv_graph_rdunlock_main_loop();
if (r < 0) { if (r < 0) {
goto failed_post_open; goto failed_post_open;
} }
@@ -1210,8 +1208,6 @@ static int qemu_rbd_reopen_prepare(BDRVReopenState *state,
BDRVRBDState *s = state->bs->opaque; BDRVRBDState *s = state->bs->opaque;
int ret = 0; int ret = 0;
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (s->snap && state->flags & BDRV_O_RDWR) { if (s->snap && state->flags & BDRV_O_RDWR) {
error_setg(errp, error_setg(errp,
"Cannot change node '%s' to r/w when using RBD snapshot", "Cannot change node '%s' to r/w when using RBD snapshot",


@@ -276,7 +276,7 @@ replication_co_writev(BlockDriverState *bs, int64_t sector_num,
while (remaining_sectors > 0) { while (remaining_sectors > 0) {
int64_t count; int64_t count;
ret = bdrv_co_is_allocated_above(top->bs, base->bs, false, ret = bdrv_is_allocated_above(top->bs, base->bs, false,
sector_num * BDRV_SECTOR_SIZE, sector_num * BDRV_SECTOR_SIZE,
remaining_sectors * BDRV_SECTOR_SIZE, remaining_sectors * BDRV_SECTOR_SIZE,
&count); &count);
@@ -307,16 +307,13 @@ out:
return ret; return ret;
} }
static void GRAPH_UNLOCKED static void secondary_do_checkpoint(BlockDriverState *bs, Error **errp)
secondary_do_checkpoint(BlockDriverState *bs, Error **errp)
{ {
BDRVReplicationState *s = bs->opaque; BDRVReplicationState *s = bs->opaque;
BdrvChild *active_disk; BdrvChild *active_disk = bs->file;
Error *local_err = NULL; Error *local_err = NULL;
int ret; int ret;
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!s->backup_job) { if (!s->backup_job) {
error_setg(errp, "Backup job was cancelled unexpectedly"); error_setg(errp, "Backup job was cancelled unexpectedly");
return; return;
@@ -328,7 +325,6 @@ secondary_do_checkpoint(BlockDriverState *bs, Error **errp)
return; return;
} }
active_disk = bs->file;
if (!active_disk->bs->drv) { if (!active_disk->bs->drv) {
error_setg(errp, "Active disk %s is ejected", error_setg(errp, "Active disk %s is ejected",
active_disk->bs->node_name); active_disk->bs->node_name);
@@ -364,9 +360,6 @@ static void reopen_backing_file(BlockDriverState *bs, bool writable,
BdrvChild *hidden_disk, *secondary_disk; BdrvChild *hidden_disk, *secondary_disk;
BlockReopenQueue *reopen_queue = NULL; BlockReopenQueue *reopen_queue = NULL;
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
/* /*
* s->hidden_disk and s->secondary_disk may not be set yet, as they will * s->hidden_disk and s->secondary_disk may not be set yet, as they will
* only be set after the children are writable. * only be set after the children are writable.
@@ -434,8 +427,7 @@ static void backup_job_completed(void *opaque, int ret)
backup_job_cleanup(bs); backup_job_cleanup(bs);
} }
static bool GRAPH_RDLOCK static bool check_top_bs(BlockDriverState *top_bs, BlockDriverState *bs)
check_top_bs(BlockDriverState *top_bs, BlockDriverState *bs)
{ {
BdrvChild *child; BdrvChild *child;
@@ -466,8 +458,6 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
Error *local_err = NULL; Error *local_err = NULL;
BackupPerf perf = { .use_copy_range = true, .max_workers = 1 }; BackupPerf perf = { .use_copy_range = true, .max_workers = 1 };
GLOBAL_STATE_CODE();
aio_context = bdrv_get_aio_context(bs); aio_context = bdrv_get_aio_context(bs);
aio_context_acquire(aio_context); aio_context_acquire(aio_context);
s = bs->opaque; s = bs->opaque;
@@ -500,11 +490,9 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
case REPLICATION_MODE_PRIMARY: case REPLICATION_MODE_PRIMARY:
break; break;
case REPLICATION_MODE_SECONDARY: case REPLICATION_MODE_SECONDARY:
bdrv_graph_rdlock_main_loop();
active_disk = bs->file; active_disk = bs->file;
if (!active_disk || !active_disk->bs || !active_disk->bs->backing) { if (!active_disk || !active_disk->bs || !active_disk->bs->backing) {
error_setg(errp, "Active disk doesn't have backing file"); error_setg(errp, "Active disk doesn't have backing file");
bdrv_graph_rdunlock_main_loop();
aio_context_release(aio_context); aio_context_release(aio_context);
return; return;
} }
@@ -512,7 +500,6 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
hidden_disk = active_disk->bs->backing; hidden_disk = active_disk->bs->backing;
if (!hidden_disk->bs || !hidden_disk->bs->backing) { if (!hidden_disk->bs || !hidden_disk->bs->backing) {
error_setg(errp, "Hidden disk doesn't have backing file"); error_setg(errp, "Hidden disk doesn't have backing file");
bdrv_graph_rdunlock_main_loop();
aio_context_release(aio_context); aio_context_release(aio_context);
return; return;
} }
@@ -520,11 +507,9 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
secondary_disk = hidden_disk->bs->backing; secondary_disk = hidden_disk->bs->backing;
if (!secondary_disk->bs || !bdrv_has_blk(secondary_disk->bs)) { if (!secondary_disk->bs || !bdrv_has_blk(secondary_disk->bs)) {
error_setg(errp, "The secondary disk doesn't have block backend"); error_setg(errp, "The secondary disk doesn't have block backend");
bdrv_graph_rdunlock_main_loop();
aio_context_release(aio_context); aio_context_release(aio_context);
return; return;
} }
bdrv_graph_rdunlock_main_loop();
/* verify the length */ /* verify the length */
active_length = bdrv_getlength(active_disk->bs); active_length = bdrv_getlength(active_disk->bs);
@@ -541,16 +526,13 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
/* Must be true, or the bdrv_getlength() calls would have failed */ /* Must be true, or the bdrv_getlength() calls would have failed */
assert(active_disk->bs->drv && hidden_disk->bs->drv); assert(active_disk->bs->drv && hidden_disk->bs->drv);
bdrv_graph_rdlock_main_loop();
if (!active_disk->bs->drv->bdrv_make_empty || if (!active_disk->bs->drv->bdrv_make_empty ||
!hidden_disk->bs->drv->bdrv_make_empty) { !hidden_disk->bs->drv->bdrv_make_empty) {
error_setg(errp, error_setg(errp,
"Active disk or hidden disk doesn't support make_empty"); "Active disk or hidden disk doesn't support make_empty");
aio_context_release(aio_context); aio_context_release(aio_context);
bdrv_graph_rdunlock_main_loop();
return; return;
} }
bdrv_graph_rdunlock_main_loop();
/* reopen the backing file in r/w mode */ /* reopen the backing file in r/w mode */
reopen_backing_file(bs, true, &local_err); reopen_backing_file(bs, true, &local_err);
@@ -584,6 +566,8 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
return; return;
} }
bdrv_graph_wrunlock();
/* start backup job now */ /* start backup job now */
error_setg(&s->blocker, error_setg(&s->blocker,
"Block device is in use by internal backup job"); "Block device is in use by internal backup job");
@@ -592,7 +576,6 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
if (!top_bs || !bdrv_is_root_node(top_bs) || if (!top_bs || !bdrv_is_root_node(top_bs) ||
!check_top_bs(top_bs, bs)) { !check_top_bs(top_bs, bs)) {
error_setg(errp, "No top_bs or it is invalid"); error_setg(errp, "No top_bs or it is invalid");
bdrv_graph_wrunlock();
reopen_backing_file(bs, false, NULL); reopen_backing_file(bs, false, NULL);
aio_context_release(aio_context); aio_context_release(aio_context);
return; return;
@@ -600,8 +583,6 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
bdrv_op_block_all(top_bs, s->blocker); bdrv_op_block_all(top_bs, s->blocker);
bdrv_op_unblock(top_bs, BLOCK_OP_TYPE_DATAPLANE, s->blocker); bdrv_op_unblock(top_bs, BLOCK_OP_TYPE_DATAPLANE, s->blocker);
bdrv_graph_wrunlock();
s->backup_job = backup_job_create( s->backup_job = backup_job_create(
NULL, s->secondary_disk->bs, s->hidden_disk->bs, NULL, s->secondary_disk->bs, s->hidden_disk->bs,
0, MIRROR_SYNC_MODE_NONE, NULL, 0, false, NULL, 0, MIRROR_SYNC_MODE_NONE, NULL, 0, false, NULL,
@@ -756,13 +737,11 @@ static void replication_stop(ReplicationState *rs, bool failover, Error **errp)
return; return;
} }
bdrv_graph_rdlock_main_loop();
s->stage = BLOCK_REPLICATION_FAILOVER; s->stage = BLOCK_REPLICATION_FAILOVER;
s->commit_job = commit_active_start( s->commit_job = commit_active_start(
NULL, bs->file->bs, s->secondary_disk->bs, NULL, bs->file->bs, s->secondary_disk->bs,
JOB_INTERNAL, 0, BLOCKDEV_ON_ERROR_REPORT, JOB_INTERNAL, 0, BLOCKDEV_ON_ERROR_REPORT,
NULL, replication_done, bs, true, errp); NULL, replication_done, bs, true, errp);
bdrv_graph_rdunlock_main_loop();
break; break;
default: default:
aio_context_release(aio_context); aio_context_release(aio_context);
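
Several hunks above (qcow2_do_close, replication_start) follow the same choreography when code running under the main-loop graph read lock needs to modify the graph: drop the read lock, take the write lock around the modification, then re-take the read lock. A rough sketch with a hypothetical helper name — not code from this series:

/*
 * Hypothetical helper: detach a child node while the caller holds the
 * main-loop graph read lock. The read lock cannot be upgraded in place,
 * so it is dropped around the write-locked section and re-acquired
 * before returning, leaving the caller's locking state unchanged.
 */
static void demo_detach_child(BlockDriverState *bs, BdrvChild *child)
{
    GLOBAL_STATE_CODE();

    bdrv_graph_rdunlock_main_loop();

    bdrv_graph_wrlock(NULL);        /* exclusive lock for graph changes */
    bdrv_unref_child(bs, child);    /* the actual graph modification */
    bdrv_graph_wrunlock();

    bdrv_graph_rdlock_main_loop();  /* restore the caller's read lock */
}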


@@ -73,7 +73,7 @@ snapshot_access_co_pwritev_part(BlockDriverState *bs,
} }
static void GRAPH_RDLOCK snapshot_access_refresh_filename(BlockDriverState *bs) static void snapshot_access_refresh_filename(BlockDriverState *bs)
{ {
pstrcpy(bs->exact_filename, sizeof(bs->exact_filename), pstrcpy(bs->exact_filename, sizeof(bs->exact_filename),
bs->file->bs->filename); bs->file->bs->filename);
@@ -85,9 +85,6 @@ static int snapshot_access_open(BlockDriverState *bs, QDict *options, int flags,
bdrv_open_child(NULL, options, "file", bs, &child_of_bds, bdrv_open_child(NULL, options, "file", bs, &child_of_bds,
BDRV_CHILD_DATA | BDRV_CHILD_PRIMARY, BDRV_CHILD_DATA | BDRV_CHILD_PRIMARY,
false, errp); false, errp);
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!bs->file) { if (!bs->file) {
return -EINVAL; return -EINVAL;
} }


@@ -155,15 +155,11 @@ bool bdrv_snapshot_find_by_id_and_name(BlockDriverState *bs,
* back if the given BDS does not support snapshots. * back if the given BDS does not support snapshots.
* Return NULL if there is no BDS to (safely) fall back to. * Return NULL if there is no BDS to (safely) fall back to.
*/ */
static BdrvChild * GRAPH_RDLOCK static BdrvChild *bdrv_snapshot_fallback_child(BlockDriverState *bs)
bdrv_snapshot_fallback_child(BlockDriverState *bs)
{ {
BdrvChild *fallback = bdrv_primary_child(bs); BdrvChild *fallback = bdrv_primary_child(bs);
BdrvChild *child; BdrvChild *child;
GLOBAL_STATE_CODE();
assert_bdrv_graph_readable();
/* We allow fallback only to primary child */ /* We allow fallback only to primary child */
if (!fallback) { if (!fallback) {
return NULL; return NULL;
@@ -186,10 +182,8 @@ bdrv_snapshot_fallback_child(BlockDriverState *bs)
return fallback; return fallback;
} }
static BlockDriverState * GRAPH_RDLOCK static BlockDriverState *bdrv_snapshot_fallback(BlockDriverState *bs)
bdrv_snapshot_fallback(BlockDriverState *bs)
{ {
GLOBAL_STATE_CODE();
return child_bs(bdrv_snapshot_fallback_child(bs)); return child_bs(bdrv_snapshot_fallback_child(bs));
} }
@@ -260,10 +254,7 @@ int bdrv_snapshot_goto(BlockDriverState *bs,
return ret; return ret;
} }
bdrv_graph_rdlock_main_loop();
fallback = bdrv_snapshot_fallback_child(bs); fallback = bdrv_snapshot_fallback_child(bs);
bdrv_graph_rdunlock_main_loop();
if (fallback) { if (fallback) {
QDict *options; QDict *options;
QDict *file_options; QDict *file_options;
@@ -311,10 +302,7 @@ int bdrv_snapshot_goto(BlockDriverState *bs,
* respective option (with the qdict_put_str() call above). * respective option (with the qdict_put_str() call above).
* Assert that .bdrv_open() has attached the right BDS as primary child. * Assert that .bdrv_open() has attached the right BDS as primary child.
*/ */
bdrv_graph_rdlock_main_loop();
assert(bdrv_primary_bs(bs) == fallback_bs); assert(bdrv_primary_bs(bs) == fallback_bs);
bdrv_graph_rdunlock_main_loop();
bdrv_unref(fallback_bs); bdrv_unref(fallback_bs);
return ret; return ret;
} }
@@ -386,12 +374,10 @@ int bdrv_snapshot_delete(BlockDriverState *bs,
int bdrv_snapshot_list(BlockDriverState *bs, int bdrv_snapshot_list(BlockDriverState *bs,
QEMUSnapshotInfo **psn_info) QEMUSnapshotInfo **psn_info)
{ {
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
BlockDriver *drv = bs->drv; BlockDriver *drv = bs->drv;
BlockDriverState *fallback_bs = bdrv_snapshot_fallback(bs); BlockDriverState *fallback_bs = bdrv_snapshot_fallback(bs);
GLOBAL_STATE_CODE();
if (!drv) { if (!drv) {
return -ENOMEDIUM; return -ENOMEDIUM;
} }
@@ -432,7 +418,6 @@ int bdrv_snapshot_load_tmp(BlockDriverState *bs,
BlockDriver *drv = bs->drv; BlockDriver *drv = bs->drv;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!drv) { if (!drv) {
error_setg(errp, QERR_DEVICE_HAS_NO_MEDIUM, bdrv_get_device_name(bs)); error_setg(errp, QERR_DEVICE_HAS_NO_MEDIUM, bdrv_get_device_name(bs));
@@ -477,9 +462,9 @@ int bdrv_snapshot_load_tmp_by_id_or_name(BlockDriverState *bs,
} }
static int GRAPH_RDLOCK static int bdrv_all_get_snapshot_devices(bool has_devices, strList *devices,
bdrv_all_get_snapshot_devices(bool has_devices, strList *devices, GList **all_bdrvs,
GList **all_bdrvs, Error **errp) Error **errp)
{ {
g_autoptr(GList) bdrvs = NULL; g_autoptr(GList) bdrvs = NULL;
@@ -511,11 +496,8 @@ bdrv_all_get_snapshot_devices(bool has_devices, strList *devices,
} }
static bool GRAPH_RDLOCK bdrv_all_snapshots_includes_bs(BlockDriverState *bs) static bool bdrv_all_snapshots_includes_bs(BlockDriverState *bs)
{ {
GLOBAL_STATE_CODE();
assert_bdrv_graph_readable();
if (!bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) { if (!bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) {
return false; return false;
} }
@@ -536,7 +518,6 @@ bool bdrv_all_can_snapshot(bool has_devices, strList *devices,
GList *iterbdrvs; GList *iterbdrvs;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) { if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) {
return false; return false;
@@ -573,7 +554,6 @@ int bdrv_all_delete_snapshot(const char *name,
GList *iterbdrvs; GList *iterbdrvs;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) { if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) {
return -1; return -1;
@@ -613,15 +593,10 @@ int bdrv_all_goto_snapshot(const char *name,
{ {
g_autoptr(GList) bdrvs = NULL; g_autoptr(GList) bdrvs = NULL;
GList *iterbdrvs; GList *iterbdrvs;
int ret;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
bdrv_graph_rdlock_main_loop(); if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) {
ret = bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp);
bdrv_graph_rdunlock_main_loop();
if (ret < 0) {
return -1; return -1;
} }
@@ -630,22 +605,15 @@ int bdrv_all_goto_snapshot(const char *name,
BlockDriverState *bs = iterbdrvs->data; BlockDriverState *bs = iterbdrvs->data;
AioContext *ctx = bdrv_get_aio_context(bs); AioContext *ctx = bdrv_get_aio_context(bs);
int ret = 0; int ret = 0;
bool all_snapshots_includes_bs;
aio_context_acquire(ctx); aio_context_acquire(ctx);
bdrv_graph_rdlock_main_loop(); if (devices || bdrv_all_snapshots_includes_bs(bs)) {
all_snapshots_includes_bs = bdrv_all_snapshots_includes_bs(bs);
bdrv_graph_rdunlock_main_loop();
if (devices || all_snapshots_includes_bs) {
ret = bdrv_snapshot_goto(bs, name, errp); ret = bdrv_snapshot_goto(bs, name, errp);
} }
aio_context_release(ctx); aio_context_release(ctx);
if (ret < 0) { if (ret < 0) {
bdrv_graph_rdlock_main_loop();
error_prepend(errp, "Could not load snapshot '%s' on '%s': ", error_prepend(errp, "Could not load snapshot '%s' on '%s': ",
name, bdrv_get_device_or_node_name(bs)); name, bdrv_get_device_or_node_name(bs));
bdrv_graph_rdunlock_main_loop();
return -1; return -1;
} }
@@ -663,7 +631,6 @@ int bdrv_all_has_snapshot(const char *name,
GList *iterbdrvs; GList *iterbdrvs;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) { if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) {
return -1; return -1;
@@ -706,9 +673,7 @@ int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
{ {
g_autoptr(GList) bdrvs = NULL; g_autoptr(GList) bdrvs = NULL;
GList *iterbdrvs; GList *iterbdrvs;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) { if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) {
return -1; return -1;
@@ -750,7 +715,6 @@ BlockDriverState *bdrv_all_find_vmstate_bs(const char *vmstate_bs,
GList *iterbdrvs; GList *iterbdrvs;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) { if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) {
return NULL; return NULL;


@@ -53,20 +53,13 @@ static int coroutine_fn stream_populate(BlockBackend *blk,
static int stream_prepare(Job *job) static int stream_prepare(Job *job)
{ {
StreamBlockJob *s = container_of(job, StreamBlockJob, common.job); StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
BlockDriverState *unfiltered_bs; BlockDriverState *unfiltered_bs = bdrv_skip_filters(s->target_bs);
BlockDriverState *unfiltered_bs_cow; BlockDriverState *unfiltered_bs_cow = bdrv_cow_bs(unfiltered_bs);
BlockDriverState *base; BlockDriverState *base;
BlockDriverState *unfiltered_base; BlockDriverState *unfiltered_base;
Error *local_err = NULL; Error *local_err = NULL;
int ret = 0; int ret = 0;
GLOBAL_STATE_CODE();
bdrv_graph_rdlock_main_loop();
unfiltered_bs = bdrv_skip_filters(s->target_bs);
unfiltered_bs_cow = bdrv_cow_bs(unfiltered_bs);
bdrv_graph_rdunlock_main_loop();
/* We should drop filter at this point, as filter hold the backing chain */ /* We should drop filter at this point, as filter hold the backing chain */
bdrv_cor_filter_drop(s->cor_filter_bs); bdrv_cor_filter_drop(s->cor_filter_bs);
s->cor_filter_bs = NULL; s->cor_filter_bs = NULL;
@@ -85,12 +78,10 @@ static int stream_prepare(Job *job)
bdrv_drained_begin(unfiltered_bs_cow); bdrv_drained_begin(unfiltered_bs_cow);
} }
bdrv_graph_rdlock_main_loop();
base = bdrv_filter_or_cow_bs(s->above_base); base = bdrv_filter_or_cow_bs(s->above_base);
unfiltered_base = bdrv_skip_filters(base); unfiltered_base = bdrv_skip_filters(base);
bdrv_graph_rdunlock_main_loop();
if (unfiltered_bs_cow) { if (bdrv_cow_child(unfiltered_bs)) {
const char *base_id = NULL, *base_fmt = NULL; const char *base_id = NULL, *base_fmt = NULL;
if (unfiltered_base) { if (unfiltered_base) {
base_id = s->backing_file_str ?: unfiltered_base->filename; base_id = s->backing_file_str ?: unfiltered_base->filename;
@@ -99,9 +90,7 @@ static int stream_prepare(Job *job)
} }
} }
bdrv_graph_wrlock(base);
bdrv_set_backing_hd_drained(unfiltered_bs, base, &local_err); bdrv_set_backing_hd_drained(unfiltered_bs, base, &local_err);
bdrv_graph_wrunlock();
/* /*
* This call will do I/O, so the graph can change again from here on. * This call will do I/O, so the graph can change again from here on.
@@ -149,19 +138,18 @@ static void stream_clean(Job *job)
static int coroutine_fn stream_run(Job *job, Error **errp) static int coroutine_fn stream_run(Job *job, Error **errp)
{ {
StreamBlockJob *s = container_of(job, StreamBlockJob, common.job); StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
BlockDriverState *unfiltered_bs; BlockDriverState *unfiltered_bs = bdrv_skip_filters(s->target_bs);
int64_t len; int64_t len;
int64_t offset = 0; int64_t offset = 0;
int error = 0; int error = 0;
int64_t n = 0; /* bytes */ int64_t n = 0; /* bytes */
WITH_GRAPH_RDLOCK_GUARD() {
unfiltered_bs = bdrv_skip_filters(s->target_bs);
if (unfiltered_bs == s->base_overlay) { if (unfiltered_bs == s->base_overlay) {
/* Nothing to stream */ /* Nothing to stream */
return 0; return 0;
} }
WITH_GRAPH_RDLOCK_GUARD() {
len = bdrv_co_getlength(s->target_bs); len = bdrv_co_getlength(s->target_bs);
if (len < 0) { if (len < 0) {
return len; return len;
@@ -184,7 +172,7 @@ static int coroutine_fn stream_run(Job *job, Error **errp)
copy = false; copy = false;
WITH_GRAPH_RDLOCK_GUARD() { WITH_GRAPH_RDLOCK_GUARD() {
ret = bdrv_co_is_allocated(unfiltered_bs, offset, STREAM_CHUNK, &n); ret = bdrv_is_allocated(unfiltered_bs, offset, STREAM_CHUNK, &n);
if (ret == 1) { if (ret == 1) {
/* Allocated in the top, no need to copy. */ /* Allocated in the top, no need to copy. */
} else if (ret >= 0) { } else if (ret >= 0) {
@@ -192,7 +180,7 @@ static int coroutine_fn stream_run(Job *job, Error **errp)
* Copy if allocated in the intermediate images. Limit to the * Copy if allocated in the intermediate images. Limit to the
* known-unallocated area [offset, offset+n*BDRV_SECTOR_SIZE). * known-unallocated area [offset, offset+n*BDRV_SECTOR_SIZE).
*/ */
ret = bdrv_co_is_allocated_above(bdrv_cow_bs(unfiltered_bs), ret = bdrv_is_allocated_above(bdrv_cow_bs(unfiltered_bs),
s->base_overlay, true, s->base_overlay, true,
offset, n, &n); offset, n, &n);
/* Finish early if end of backing file has been reached */ /* Finish early if end of backing file has been reached */
@@ -268,8 +256,6 @@ void stream_start(const char *job_id, BlockDriverState *bs,
assert(!(base && bottom)); assert(!(base && bottom));
assert(!(backing_file_str && bottom)); assert(!(backing_file_str && bottom));
bdrv_graph_rdlock_main_loop();
if (bottom) { if (bottom) {
/* /*
* New simple interface. The code is written in terms of old interface * New simple interface. The code is written in terms of old interface
@@ -286,7 +272,7 @@ void stream_start(const char *job_id, BlockDriverState *bs,
if (!base_overlay) { if (!base_overlay) {
error_setg(errp, "'%s' is not in the backing chain of '%s'", error_setg(errp, "'%s' is not in the backing chain of '%s'",
base->node_name, bs->node_name); base->node_name, bs->node_name);
goto out_rdlock; return;
} }
/* /*
@@ -308,7 +294,7 @@ void stream_start(const char *job_id, BlockDriverState *bs,
if (bs_read_only) { if (bs_read_only) {
/* Hold the chain during reopen */ /* Hold the chain during reopen */
if (bdrv_freeze_backing_chain(bs, above_base, errp) < 0) { if (bdrv_freeze_backing_chain(bs, above_base, errp) < 0) {
goto out_rdlock; return;
} }
ret = bdrv_reopen_set_read_only(bs, false, errp); ret = bdrv_reopen_set_read_only(bs, false, errp);
@@ -317,12 +303,10 @@ void stream_start(const char *job_id, BlockDriverState *bs,
bdrv_unfreeze_backing_chain(bs, above_base); bdrv_unfreeze_backing_chain(bs, above_base);
if (ret < 0) { if (ret < 0) {
goto out_rdlock; return;
} }
} }
bdrv_graph_rdunlock_main_loop();
opts = qdict_new(); opts = qdict_new();
qdict_put_str(opts, "driver", "copy-on-read"); qdict_put_str(opts, "driver", "copy-on-read");
@@ -366,10 +350,8 @@ void stream_start(const char *job_id, BlockDriverState *bs,
* already have our own plans. Also don't allow resize as the image size is * already have our own plans. Also don't allow resize as the image size is
* queried only at the job start and then cached. * queried only at the job start and then cached.
*/ */
bdrv_graph_wrlock(bs);
if (block_job_add_bdrv(&s->common, "active node", bs, 0, if (block_job_add_bdrv(&s->common, "active node", bs, 0,
basic_flags | BLK_PERM_WRITE, errp)) { basic_flags | BLK_PERM_WRITE, errp)) {
bdrv_graph_wrunlock();
goto fail; goto fail;
} }
@@ -389,11 +371,9 @@ void stream_start(const char *job_id, BlockDriverState *bs,
ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0, ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
basic_flags, errp); basic_flags, errp);
if (ret < 0) { if (ret < 0) {
bdrv_graph_wrunlock();
goto fail; goto fail;
} }
} }
bdrv_graph_wrunlock();
s->base_overlay = base_overlay; s->base_overlay = base_overlay;
s->above_base = above_base; s->above_base = above_base;
@@ -417,8 +397,4 @@ fail:
if (bs_read_only) { if (bs_read_only) {
bdrv_reopen_set_read_only(bs, true, NULL); bdrv_reopen_set_read_only(bs, true, NULL);
} }
return;
out_rdlock:
bdrv_graph_rdunlock_main_loop();
} }
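
In coroutine context (as in stream_run above), the graph read lock is taken per block with WITH_GRAPH_RDLOCK_GUARD() rather than for the whole function. A minimal sketch, assuming a hypothetical job coroutine; only the guard macro and bdrv_co_getlength() are taken from the hunks above:

/*
 * Hypothetical coroutine: graph-reading calls are confined to a
 * WITH_GRAPH_RDLOCK_GUARD() block, which holds the graph read lock only
 * for the statements inside the braces.
 */
static int coroutine_fn demo_job_run(BlockDriverState *bs)
{
    int64_t len;

    WITH_GRAPH_RDLOCK_GUARD() {
        len = bdrv_co_getlength(bs);    /* needs the graph read lock */
    }
    if (len < 0) {
        return len;                     /* propagate -errno */
    }

    /* ... main copy loop runs without holding the lock ... */
    return 0;
}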


@@ -84,9 +84,6 @@ static int throttle_open(BlockDriverState *bs, QDict *options,
if (ret < 0) { if (ret < 0) {
return ret; return ret;
} }
GRAPH_RDLOCK_GUARD_MAINLOOP();
bs->supported_write_flags = bs->file->bs->supported_write_flags | bs->supported_write_flags = bs->file->bs->supported_write_flags |
BDRV_REQ_WRITE_UNCHANGED; BDRV_REQ_WRITE_UNCHANGED;
bs->supported_zero_flags = bs->file->bs->supported_zero_flags | bs->supported_zero_flags = bs->file->bs->supported_zero_flags |


@@ -239,7 +239,7 @@ static void vdi_header_to_le(VdiHeader *header)
static void vdi_header_print(VdiHeader *header) static void vdi_header_print(VdiHeader *header)
{ {
char uuidstr[UUID_STR_LEN]; char uuidstr[37];
QemuUUID uuid; QemuUUID uuid;
logout("text %s", header->text); logout("text %s", header->text);
logout("signature 0x%08x\n", header->signature); logout("signature 0x%08x\n", header->signature);
@@ -383,8 +383,6 @@ static int vdi_open(BlockDriverState *bs, QDict *options, int flags,
return ret; return ret;
} }
GRAPH_RDLOCK_GUARD_MAINLOOP();
logout("\n"); logout("\n");
ret = bdrv_pread(bs->file, 0, sizeof(header), &header, 0); ret = bdrv_pread(bs->file, 0, sizeof(header), &header, 0);
@@ -497,9 +495,9 @@ static int vdi_open(BlockDriverState *bs, QDict *options, int flags,
error_setg(&s->migration_blocker, "The vdi format used by node '%s' " error_setg(&s->migration_blocker, "The vdi format used by node '%s' "
"does not support live migration", "does not support live migration",
bdrv_get_device_or_node_name(bs)); bdrv_get_device_or_node_name(bs));
ret = migrate_add_blocker(s->migration_blocker, errp);
ret = migrate_add_blocker_normal(&s->migration_blocker, errp);
if (ret < 0) { if (ret < 0) {
error_free(s->migration_blocker);
goto fail_free_bmap; goto fail_free_bmap;
} }
@@ -520,9 +518,10 @@ static int vdi_reopen_prepare(BDRVReopenState *state,
return 0; return 0;
} }
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn vdi_co_block_status(BlockDriverState *bs,
vdi_co_block_status(BlockDriverState *bs, bool want_zero, int64_t offset, bool want_zero,
int64_t bytes, int64_t *pnum, int64_t *map, int64_t offset, int64_t bytes,
int64_t *pnum, int64_t *map,
BlockDriverState **file) BlockDriverState **file)
{ {
BDRVVdiState *s = (BDRVVdiState *)bs->opaque; BDRVVdiState *s = (BDRVVdiState *)bs->opaque;
@@ -986,10 +985,11 @@ static void vdi_close(BlockDriverState *bs)
qemu_vfree(s->bmap); qemu_vfree(s->bmap);
migrate_del_blocker(&s->migration_blocker); migrate_del_blocker(s->migration_blocker);
error_free(s->migration_blocker);
} }
static int GRAPH_RDLOCK vdi_has_zero_init(BlockDriverState *bs) static int vdi_has_zero_init(BlockDriverState *bs)
{ {
BDRVVdiState *s = bs->opaque; BDRVVdiState *s = bs->opaque;


@@ -55,8 +55,7 @@ static const MSGUID zero_guid = { 0 };
/* Allow peeking at the hdr entry at the beginning of the current /* Allow peeking at the hdr entry at the beginning of the current
* read index, without advancing the read index */ * read index, without advancing the read index */
static int GRAPH_RDLOCK static int vhdx_log_peek_hdr(BlockDriverState *bs, VHDXLogEntries *log,
vhdx_log_peek_hdr(BlockDriverState *bs, VHDXLogEntries *log,
VHDXLogEntryHeader *hdr) VHDXLogEntryHeader *hdr)
{ {
int ret = 0; int ret = 0;
@@ -108,7 +107,7 @@ static int vhdx_log_inc_idx(uint32_t idx, uint64_t length)
/* Reset the log to empty */ /* Reset the log to empty */
static void GRAPH_RDLOCK vhdx_log_reset(BlockDriverState *bs, BDRVVHDXState *s) static void vhdx_log_reset(BlockDriverState *bs, BDRVVHDXState *s)
{ {
MSGUID guid = { 0 }; MSGUID guid = { 0 };
s->log.read = s->log.write = 0; s->log.read = s->log.write = 0;
@@ -128,8 +127,7 @@ static void GRAPH_RDLOCK vhdx_log_reset(BlockDriverState *bs, BDRVVHDXState *s)
* not modified. * not modified.
* *
* 0 is returned on success, -errno otherwise. */ * 0 is returned on success, -errno otherwise. */
static int GRAPH_RDLOCK static int vhdx_log_read_sectors(BlockDriverState *bs, VHDXLogEntries *log,
vhdx_log_read_sectors(BlockDriverState *bs, VHDXLogEntries *log,
uint32_t *sectors_read, void *buffer, uint32_t *sectors_read, void *buffer,
uint32_t num_sectors, bool peek) uint32_t num_sectors, bool peek)
{ {
@@ -335,9 +333,9 @@ static int vhdx_compute_desc_sectors(uint32_t desc_cnt)
* will allocate all the space for buffer, which must be NULL when * will allocate all the space for buffer, which must be NULL when
* passed into this function. Each descriptor will also be validated, * passed into this function. Each descriptor will also be validated,
* and error returned if any are invalid. */ * and error returned if any are invalid. */
static int GRAPH_RDLOCK static int vhdx_log_read_desc(BlockDriverState *bs, BDRVVHDXState *s,
vhdx_log_read_desc(BlockDriverState *bs, BDRVVHDXState *s, VHDXLogEntries *log, VHDXLogEntries *log, VHDXLogDescEntries **buffer,
VHDXLogDescEntries **buffer, bool convert_endian) bool convert_endian)
{ {
int ret = 0; int ret = 0;
uint32_t desc_sectors; uint32_t desc_sectors;
@@ -414,8 +412,7 @@ exit:
* For a zero descriptor, it may describe multiple sectors to fill with zeroes. * For a zero descriptor, it may describe multiple sectors to fill with zeroes.
* In this case, it should be noted that zeroes are written to disk, and the * In this case, it should be noted that zeroes are written to disk, and the
* image file is not extended as a sparse file. */ * image file is not extended as a sparse file. */
static int GRAPH_RDLOCK static int vhdx_log_flush_desc(BlockDriverState *bs, VHDXLogDescriptor *desc,
vhdx_log_flush_desc(BlockDriverState *bs, VHDXLogDescriptor *desc,
VHDXLogDataSector *data) VHDXLogDataSector *data)
{ {
int ret = 0; int ret = 0;
@@ -487,8 +484,8 @@ exit:
* file, and then set the log to 'empty' status once complete. * file, and then set the log to 'empty' status once complete.
* *
* The log entries should be validate prior to flushing */ * The log entries should be validate prior to flushing */
static int GRAPH_RDLOCK static int vhdx_log_flush(BlockDriverState *bs, BDRVVHDXState *s,
vhdx_log_flush(BlockDriverState *bs, BDRVVHDXState *s, VHDXLogSequence *logs) VHDXLogSequence *logs)
{ {
int ret = 0; int ret = 0;
int i; int i;
@@ -587,8 +584,7 @@ exit:
return ret; return ret;
} }
static int GRAPH_RDLOCK static int vhdx_validate_log_entry(BlockDriverState *bs, BDRVVHDXState *s,
vhdx_validate_log_entry(BlockDriverState *bs, BDRVVHDXState *s,
VHDXLogEntries *log, uint64_t seq, VHDXLogEntries *log, uint64_t seq,
bool *valid, VHDXLogEntryHeader *entry) bool *valid, VHDXLogEntryHeader *entry)
{ {
@@ -667,8 +663,8 @@ free_and_exit:
/* Search through the log circular buffer, and find the valid, active /* Search through the log circular buffer, and find the valid, active
* log sequence, if any exists * log sequence, if any exists
* */ * */
static int GRAPH_RDLOCK static int vhdx_log_search(BlockDriverState *bs, BDRVVHDXState *s,
vhdx_log_search(BlockDriverState *bs, BDRVVHDXState *s, VHDXLogSequence *logs) VHDXLogSequence *logs)
{ {
int ret = 0; int ret = 0;
uint32_t tail; uint32_t tail;


@@ -353,8 +353,7 @@ exit:
* *
* - non-current header is updated with largest sequence number * - non-current header is updated with largest sequence number
*/ */
static int GRAPH_RDLOCK static int vhdx_update_header(BlockDriverState *bs, BDRVVHDXState *s,
vhdx_update_header(BlockDriverState *bs, BDRVVHDXState *s,
bool generate_data_write_guid, MSGUID *log_guid) bool generate_data_write_guid, MSGUID *log_guid)
{ {
int ret = 0; int ret = 0;
@@ -417,8 +416,8 @@ int vhdx_update_headers(BlockDriverState *bs, BDRVVHDXState *s,
} }
/* opens the specified header block from the VHDX file header section */ /* opens the specified header block from the VHDX file header section */
static void GRAPH_RDLOCK static void vhdx_parse_header(BlockDriverState *bs, BDRVVHDXState *s,
vhdx_parse_header(BlockDriverState *bs, BDRVVHDXState *s, Error **errp) Error **errp)
{ {
int ret; int ret;
VHDXHeader *header1; VHDXHeader *header1;
@@ -518,8 +517,7 @@ exit:
} }
static int GRAPH_RDLOCK static int vhdx_open_region_tables(BlockDriverState *bs, BDRVVHDXState *s)
vhdx_open_region_tables(BlockDriverState *bs, BDRVVHDXState *s)
{ {
int ret = 0; int ret = 0;
uint8_t *buffer; uint8_t *buffer;
@@ -636,8 +634,7 @@ fail:
* Also, if the File Parameters indicate this is a differencing file, * Also, if the File Parameters indicate this is a differencing file,
* we must also look for the Parent Locator metadata item. * we must also look for the Parent Locator metadata item.
*/ */
static int GRAPH_RDLOCK static int vhdx_parse_metadata(BlockDriverState *bs, BDRVVHDXState *s)
vhdx_parse_metadata(BlockDriverState *bs, BDRVVHDXState *s)
{ {
int ret = 0; int ret = 0;
uint8_t *buffer; uint8_t *buffer;
@@ -888,8 +885,7 @@ static void vhdx_calc_bat_entries(BDRVVHDXState *s)
} }
static int coroutine_mixed_fn GRAPH_RDLOCK static int vhdx_check_bat_entries(BlockDriverState *bs, int *errcnt)
vhdx_check_bat_entries(BlockDriverState *bs, int *errcnt)
{ {
BDRVVHDXState *s = bs->opaque; BDRVVHDXState *s = bs->opaque;
int64_t image_file_size = bdrv_getlength(bs->file->bs); int64_t image_file_size = bdrv_getlength(bs->file->bs);
@@ -989,7 +985,8 @@ static void vhdx_close(BlockDriverState *bs)
s->bat = NULL; s->bat = NULL;
qemu_vfree(s->parent_entries); qemu_vfree(s->parent_entries);
s->parent_entries = NULL; s->parent_entries = NULL;
migrate_del_blocker(&s->migration_blocker); migrate_del_blocker(s->migration_blocker);
error_free(s->migration_blocker);
qemu_vfree(s->log.hdr); qemu_vfree(s->log.hdr);
s->log.hdr = NULL; s->log.hdr = NULL;
vhdx_region_unregister_all(s); vhdx_region_unregister_all(s);
@@ -1004,15 +1001,11 @@ static int vhdx_open(BlockDriverState *bs, QDict *options, int flags,
uint64_t signature; uint64_t signature;
Error *local_err = NULL; Error *local_err = NULL;
GLOBAL_STATE_CODE();
ret = bdrv_open_file_child(NULL, options, "file", bs, errp); ret = bdrv_open_file_child(NULL, options, "file", bs, errp);
if (ret < 0) { if (ret < 0) {
return ret; return ret;
} }
GRAPH_RDLOCK_GUARD_MAINLOOP();
s->bat = NULL; s->bat = NULL;
s->first_visible_write = true; s->first_visible_write = true;
@@ -1100,8 +1093,9 @@ static int vhdx_open(BlockDriverState *bs, QDict *options, int flags,
error_setg(&s->migration_blocker, "The vhdx format used by node '%s' " error_setg(&s->migration_blocker, "The vhdx format used by node '%s' "
"does not support live migration", "does not support live migration",
bdrv_get_device_or_node_name(bs)); bdrv_get_device_or_node_name(bs));
ret = migrate_add_blocker_normal(&s->migration_blocker, errp); ret = migrate_add_blocker(s->migration_blocker, errp);
if (ret < 0) { if (ret < 0) {
error_free(s->migration_blocker);
goto fail; goto fail;
} }
@@ -1699,7 +1693,7 @@ exit:
* Fixed images: default state of the BAT is fully populated, with * Fixed images: default state of the BAT is fully populated, with
* file offsets and state PAYLOAD_BLOCK_FULLY_PRESENT. * file offsets and state PAYLOAD_BLOCK_FULLY_PRESENT.
*/ */
static int coroutine_fn GRAPH_UNLOCKED static int coroutine_fn
vhdx_create_bat(BlockBackend *blk, BDRVVHDXState *s, vhdx_create_bat(BlockBackend *blk, BDRVVHDXState *s,
uint64_t image_size, VHDXImageType type, uint64_t image_size, VHDXImageType type,
bool use_zero_blocks, uint64_t file_offset, bool use_zero_blocks, uint64_t file_offset,
@@ -1712,7 +1706,6 @@ vhdx_create_bat(BlockBackend *blk, BDRVVHDXState *s,
uint64_t unused; uint64_t unused;
int block_state; int block_state;
VHDXSectorInfo sinfo; VHDXSectorInfo sinfo;
bool has_zero_init;
assert(s->bat == NULL); assert(s->bat == NULL);
@@ -1742,13 +1735,9 @@ vhdx_create_bat(BlockBackend *blk, BDRVVHDXState *s,
goto exit; goto exit;
} }
bdrv_graph_co_rdlock();
has_zero_init = bdrv_has_zero_init(blk_bs(blk));
bdrv_graph_co_rdunlock();
if (type == VHDX_TYPE_FIXED || if (type == VHDX_TYPE_FIXED ||
use_zero_blocks || use_zero_blocks ||
has_zero_init == 0) { bdrv_has_zero_init(blk_bs(blk)) == 0) {
/* for a fixed file, the default BAT entry is not zero */ /* for a fixed file, the default BAT entry is not zero */
s->bat = g_try_malloc0(length); s->bat = g_try_malloc0(length);
if (length && s->bat == NULL) { if (length && s->bat == NULL) {
@@ -1791,7 +1780,7 @@ exit:
* to create the BAT itself, we will also cause the BAT to be * to create the BAT itself, we will also cause the BAT to be
* created. * created.
*/ */
static int coroutine_fn GRAPH_UNLOCKED static int coroutine_fn
vhdx_create_new_region_table(BlockBackend *blk, uint64_t image_size, vhdx_create_new_region_table(BlockBackend *blk, uint64_t image_size,
uint32_t block_size, uint32_t sector_size, uint32_t block_size, uint32_t sector_size,
uint32_t log_size, bool use_zero_blocks, uint32_t log_size, bool use_zero_blocks,
@@ -2167,8 +2156,8 @@ fail:
* r/w and any log has already been replayed, so there is nothing (currently) * r/w and any log has already been replayed, so there is nothing (currently)
* for us to do here * for us to do here
*/ */
static int coroutine_fn GRAPH_RDLOCK static int coroutine_fn vhdx_co_check(BlockDriverState *bs,
vhdx_co_check(BlockDriverState *bs, BdrvCheckResult *result, BdrvCheckResult *result,
BdrvCheckMode fix) BdrvCheckMode fix)
{ {
BDRVVHDXState *s = bs->opaque; BDRVVHDXState *s = bs->opaque;
@@ -2182,7 +2171,7 @@ vhdx_co_check(BlockDriverState *bs, BdrvCheckResult *result,
return 0; return 0;
} }
static int GRAPH_RDLOCK vhdx_has_zero_init(BlockDriverState *bs) static int vhdx_has_zero_init(BlockDriverState *bs)
{ {
BDRVVHDXState *s = bs->opaque; BDRVVHDXState *s = bs->opaque;
int state; int state;


@@ -401,8 +401,7 @@ typedef struct BDRVVHDXState {
void vhdx_guid_generate(MSGUID *guid); void vhdx_guid_generate(MSGUID *guid);
int GRAPH_RDLOCK int vhdx_update_headers(BlockDriverState *bs, BDRVVHDXState *s, bool rw,
vhdx_update_headers(BlockDriverState *bs, BDRVVHDXState *s, bool rw,
MSGUID *log_guid); MSGUID *log_guid);
uint32_t vhdx_update_checksum(uint8_t *buf, size_t size, int crc_offset); uint32_t vhdx_update_checksum(uint8_t *buf, size_t size, int crc_offset);
@@ -411,8 +410,7 @@ uint32_t vhdx_checksum_calc(uint32_t crc, uint8_t *buf, size_t size,
bool vhdx_checksum_is_valid(uint8_t *buf, size_t size, int crc_offset); bool vhdx_checksum_is_valid(uint8_t *buf, size_t size, int crc_offset);
int GRAPH_RDLOCK int vhdx_parse_log(BlockDriverState *bs, BDRVVHDXState *s, bool *flushed,
vhdx_parse_log(BlockDriverState *bs, BDRVVHDXState *s, bool *flushed,
Error **errp); Error **errp);
int coroutine_fn GRAPH_RDLOCK int coroutine_fn GRAPH_RDLOCK
@@ -449,8 +447,6 @@ void vhdx_metadata_header_le_import(VHDXMetadataTableHeader *hdr);
void vhdx_metadata_header_le_export(VHDXMetadataTableHeader *hdr); void vhdx_metadata_header_le_export(VHDXMetadataTableHeader *hdr);
void vhdx_metadata_entry_le_import(VHDXMetadataTableEntry *e); void vhdx_metadata_entry_le_import(VHDXMetadataTableEntry *e);
void vhdx_metadata_entry_le_export(VHDXMetadataTableEntry *e); void vhdx_metadata_entry_le_export(VHDXMetadataTableEntry *e);
int vhdx_user_visible_write(BlockDriverState *bs, BDRVVHDXState *s);
int GRAPH_RDLOCK
vhdx_user_visible_write(BlockDriverState *bs, BDRVVHDXState *s);
#endif #endif


@@ -300,8 +300,7 @@ static void vmdk_free_last_extent(BlockDriverState *bs)
} }
/* Return -ve errno, or 0 on success and write CID into *pcid. */ /* Return -ve errno, or 0 on success and write CID into *pcid. */
static int GRAPH_RDLOCK static int vmdk_read_cid(BlockDriverState *bs, int parent, uint32_t *pcid)
vmdk_read_cid(BlockDriverState *bs, int parent, uint32_t *pcid)
{ {
char *desc; char *desc;
uint32_t cid; uint32_t cid;
@@ -381,7 +380,7 @@ out:
return ret; return ret;
} }
static int coroutine_fn GRAPH_RDLOCK vmdk_is_cid_valid(BlockDriverState *bs) static int coroutine_fn vmdk_is_cid_valid(BlockDriverState *bs)
{ {
BDRVVmdkState *s = bs->opaque; BDRVVmdkState *s = bs->opaque;
uint32_t cur_pcid; uint32_t cur_pcid;
@@ -416,9 +415,6 @@ static int vmdk_reopen_prepare(BDRVReopenState *state,
BDRVVmdkReopenState *rs; BDRVVmdkReopenState *rs;
int i; int i;
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
assert(state != NULL); assert(state != NULL);
assert(state->bs != NULL); assert(state->bs != NULL);
assert(state->opaque == NULL); assert(state->opaque == NULL);
@@ -455,9 +451,6 @@ static void vmdk_reopen_commit(BDRVReopenState *state)
BDRVVmdkReopenState *rs = state->opaque; BDRVVmdkReopenState *rs = state->opaque;
int i; int i;
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
for (i = 0; i < s->num_extents; i++) { for (i = 0; i < s->num_extents; i++) {
if (rs->extents_using_bs_file[i]) { if (rs->extents_using_bs_file[i]) {
s->extents[i].file = state->bs->file; s->extents[i].file = state->bs->file;
@@ -472,7 +465,7 @@ static void vmdk_reopen_abort(BDRVReopenState *state)
vmdk_reopen_clean(state); vmdk_reopen_clean(state);
} }
static int GRAPH_RDLOCK vmdk_parent_open(BlockDriverState *bs) static int vmdk_parent_open(BlockDriverState *bs)
{ {
char *p_name; char *p_name;
char *desc; char *desc;
@@ -585,8 +578,8 @@ static int vmdk_add_extent(BlockDriverState *bs,
return 0; return 0;
} }
static int GRAPH_RDLOCK static int vmdk_init_tables(BlockDriverState *bs, VmdkExtent *extent,
vmdk_init_tables(BlockDriverState *bs, VmdkExtent *extent, Error **errp) Error **errp)
{ {
int ret; int ret;
size_t l1_size; size_t l1_size;
@@ -648,9 +641,9 @@ vmdk_init_tables(BlockDriverState *bs, VmdkExtent *extent, Error **errp)
return ret; return ret;
} }
static int GRAPH_RDLOCK static int vmdk_open_vmfs_sparse(BlockDriverState *bs,
vmdk_open_vmfs_sparse(BlockDriverState *bs, BdrvChild *file, int flags, BdrvChild *file,
Error **errp) int flags, Error **errp)
{ {
int ret; int ret;
uint32_t magic; uint32_t magic;
@@ -804,9 +797,9 @@ static int check_se_sparse_volatile_header(VMDKSESparseVolatileHeader *header,
return 0; return 0;
} }
static int GRAPH_RDLOCK static int vmdk_open_se_sparse(BlockDriverState *bs,
vmdk_open_se_sparse(BlockDriverState *bs, BdrvChild *file, int flags, BdrvChild *file,
Error **errp) int flags, Error **errp)
{ {
int ret; int ret;
VMDKSESparseConstHeader const_header; VMDKSESparseConstHeader const_header;
@@ -920,9 +913,9 @@ static char *vmdk_read_desc(BdrvChild *file, uint64_t desc_offset, Error **errp)
return buf; return buf;
} }
static int GRAPH_RDLOCK static int vmdk_open_vmdk4(BlockDriverState *bs,
vmdk_open_vmdk4(BlockDriverState *bs, BdrvChild *file, int flags, BdrvChild *file,
QDict *options, Error **errp) int flags, QDict *options, Error **errp)
{ {
int ret; int ret;
uint32_t magic; uint32_t magic;
@@ -1102,8 +1095,7 @@ static int vmdk_parse_description(const char *desc, const char *opt_name,
} }
/* Open an extent file and append to bs array */ /* Open an extent file and append to bs array */
static int GRAPH_RDLOCK static int vmdk_open_sparse(BlockDriverState *bs, BdrvChild *file, int flags,
vmdk_open_sparse(BlockDriverState *bs, BdrvChild *file, int flags,
char *buf, QDict *options, Error **errp) char *buf, QDict *options, Error **errp)
{ {
uint32_t magic; uint32_t magic;
@@ -1131,9 +1123,8 @@ static const char *next_line(const char *s)
return s; return s;
} }
static int GRAPH_RDLOCK static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
vmdk_parse_extents(const char *desc, BlockDriverState *bs, QDict *options, QDict *options, Error **errp)
Error **errp)
{ {
int ret; int ret;
int matches; int matches;
@@ -1152,8 +1143,6 @@ vmdk_parse_extents(const char *desc, BlockDriverState *bs, QDict *options,
char extent_opt_prefix[32]; char extent_opt_prefix[32];
Error *local_err = NULL; Error *local_err = NULL;
GLOBAL_STATE_CODE();
for (p = desc; *p; p = next_line(p)) { for (p = desc; *p; p = next_line(p)) {
/* parse extent line in one of below formats: /* parse extent line in one of below formats:
* *
@@ -1234,11 +1223,9 @@ vmdk_parse_extents(const char *desc, BlockDriverState *bs, QDict *options,
ret = vmdk_add_extent(bs, extent_file, true, sectors, ret = vmdk_add_extent(bs, extent_file, true, sectors,
0, 0, 0, 0, 0, &extent, errp); 0, 0, 0, 0, 0, &extent, errp);
if (ret < 0) { if (ret < 0) {
bdrv_graph_rdunlock_main_loop();
bdrv_graph_wrlock(NULL); bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, extent_file); bdrv_unref_child(bs, extent_file);
bdrv_graph_wrunlock(); bdrv_graph_wrunlock();
bdrv_graph_rdlock_main_loop();
goto out; goto out;
} }
extent->flat_start_offset = flat_offset << 9; extent->flat_start_offset = flat_offset << 9;
@@ -1253,32 +1240,26 @@ vmdk_parse_extents(const char *desc, BlockDriverState *bs, QDict *options,
} }
g_free(buf); g_free(buf);
if (ret) { if (ret) {
bdrv_graph_rdunlock_main_loop();
bdrv_graph_wrlock(NULL); bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, extent_file); bdrv_unref_child(bs, extent_file);
bdrv_graph_wrunlock(); bdrv_graph_wrunlock();
bdrv_graph_rdlock_main_loop();
goto out; goto out;
} }
extent = &s->extents[s->num_extents - 1]; extent = &s->extents[s->num_extents - 1];
} else if (!strcmp(type, "SESPARSE")) { } else if (!strcmp(type, "SESPARSE")) {
ret = vmdk_open_se_sparse(bs, extent_file, bs->open_flags, errp); ret = vmdk_open_se_sparse(bs, extent_file, bs->open_flags, errp);
if (ret) { if (ret) {
bdrv_graph_rdunlock_main_loop();
bdrv_graph_wrlock(NULL); bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, extent_file); bdrv_unref_child(bs, extent_file);
bdrv_graph_wrunlock(); bdrv_graph_wrunlock();
bdrv_graph_rdlock_main_loop();
goto out; goto out;
} }
extent = &s->extents[s->num_extents - 1]; extent = &s->extents[s->num_extents - 1];
} else { } else {
error_setg(errp, "Unsupported extent type '%s'", type); error_setg(errp, "Unsupported extent type '%s'", type);
bdrv_graph_rdunlock_main_loop();
bdrv_graph_wrlock(NULL); bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, extent_file); bdrv_unref_child(bs, extent_file);
bdrv_graph_wrunlock(); bdrv_graph_wrunlock();
bdrv_graph_rdlock_main_loop();
ret = -ENOTSUP; ret = -ENOTSUP;
goto out; goto out;
} }
@@ -1302,9 +1283,8 @@ out:
return ret; return ret;
} }
static int GRAPH_RDLOCK static int vmdk_open_desc_file(BlockDriverState *bs, int flags, char *buf,
vmdk_open_desc_file(BlockDriverState *bs, int flags, char *buf, QDict *options, QDict *options, Error **errp)
Error **errp)
{ {
int ret; int ret;
char ct[128]; char ct[128];
@@ -1393,8 +1373,9 @@ static int vmdk_open(BlockDriverState *bs, QDict *options, int flags,
error_setg(&s->migration_blocker, "The vmdk format used by node '%s' " error_setg(&s->migration_blocker, "The vmdk format used by node '%s' "
"does not support live migration", "does not support live migration",
bdrv_get_device_or_node_name(bs)); bdrv_get_device_or_node_name(bs));
ret = migrate_add_blocker_normal(&s->migration_blocker, errp); ret = migrate_add_blocker(s->migration_blocker, errp);
if (ret < 0) { if (ret < 0) {
error_free(s->migration_blocker);
goto fail; goto fail;
} }
@@ -2554,10 +2535,7 @@ vmdk_co_do_create(int64_t size,
ret = -EINVAL; ret = -EINVAL;
goto exit; goto exit;
} }
bdrv_graph_co_rdlock();
ret = vmdk_read_cid(blk_bs(backing), 0, &parent_cid); ret = vmdk_read_cid(blk_bs(backing), 0, &parent_cid);
bdrv_graph_co_rdunlock();
blk_co_unref(backing); blk_co_unref(backing);
if (ret) { if (ret) {
error_setg(errp, "Failed to read parent CID"); error_setg(errp, "Failed to read parent CID");
@@ -2876,7 +2854,8 @@ static void vmdk_close(BlockDriverState *bs)
vmdk_free_extents(bs); vmdk_free_extents(bs);
g_free(s->create_type); g_free(s->create_type);
migrate_del_blocker(&s->migration_blocker); migrate_del_blocker(s->migration_blocker);
error_free(s->migration_blocker);
} }
static int64_t coroutine_fn GRAPH_RDLOCK static int64_t coroutine_fn GRAPH_RDLOCK
@@ -2904,7 +2883,7 @@ vmdk_co_get_allocated_file_size(BlockDriverState *bs)
return ret; return ret;
} }
static int GRAPH_RDLOCK vmdk_has_zero_init(BlockDriverState *bs) static int vmdk_has_zero_init(BlockDriverState *bs)
{ {
int i; int i;
BDRVVmdkState *s = bs->opaque; BDRVVmdkState *s = bs->opaque;
@@ -2921,7 +2900,7 @@ static int GRAPH_RDLOCK vmdk_has_zero_init(BlockDriverState *bs)
return 1; return 1;
} }
static VmdkExtentInfo * GRAPH_RDLOCK vmdk_get_extent_info(VmdkExtent *extent) static VmdkExtentInfo *vmdk_get_extent_info(VmdkExtent *extent)
{ {
VmdkExtentInfo *info = g_new0(VmdkExtentInfo, 1); VmdkExtentInfo *info = g_new0(VmdkExtentInfo, 1);
@@ -2998,8 +2977,8 @@ vmdk_co_check(BlockDriverState *bs, BdrvCheckResult *result, BdrvCheckMode fix)
return ret; return ret;
} }
static ImageInfoSpecific * GRAPH_RDLOCK static ImageInfoSpecific *vmdk_get_specific_info(BlockDriverState *bs,
vmdk_get_specific_info(BlockDriverState *bs, Error **errp) Error **errp)
{ {
int i; int i;
BDRVVmdkState *s = bs->opaque; BDRVVmdkState *s = bs->opaque;
@@ -3054,8 +3033,7 @@ vmdk_co_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
return 0; return 0;
} }
static void GRAPH_RDLOCK static void vmdk_gather_child_options(BlockDriverState *bs, QDict *target,
vmdk_gather_child_options(BlockDriverState *bs, QDict *target,
bool backing_overridden) bool backing_overridden)
{ {
/* No children but file and backing can be explicitly specified (TODO) */ /* No children but file and backing can be explicitly specified (TODO) */


@@ -238,8 +238,6 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
return ret; return ret;
} }
GRAPH_RDLOCK_GUARD_MAINLOOP();
opts = qemu_opts_create(&vpc_runtime_opts, NULL, 0, &error_abort); opts = qemu_opts_create(&vpc_runtime_opts, NULL, 0, &error_abort);
if (!qemu_opts_absorb_qdict(opts, options, errp)) { if (!qemu_opts_absorb_qdict(opts, options, errp)) {
ret = -EINVAL; ret = -EINVAL;
@@ -451,9 +449,9 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
error_setg(&s->migration_blocker, "The vpc format used by node '%s' " error_setg(&s->migration_blocker, "The vpc format used by node '%s' "
"does not support live migration", "does not support live migration",
bdrv_get_device_or_node_name(bs)); bdrv_get_device_or_node_name(bs));
ret = migrate_add_blocker(s->migration_blocker, errp);
ret = migrate_add_blocker_normal(&s->migration_blocker, errp);
if (ret < 0) { if (ret < 0) {
error_free(s->migration_blocker);
goto fail; goto fail;
} }
@@ -1170,7 +1168,7 @@ fail:
} }
static int GRAPH_RDLOCK vpc_has_zero_init(BlockDriverState *bs) static int vpc_has_zero_init(BlockDriverState *bs)
{ {
BDRVVPCState *s = bs->opaque; BDRVVPCState *s = bs->opaque;
@@ -1189,7 +1187,8 @@ static void vpc_close(BlockDriverState *bs)
g_free(s->pageentry_u8); g_free(s->pageentry_u8);
#endif #endif
migrate_del_blocker(&s->migration_blocker); migrate_del_blocker(s->migration_blocker);
error_free(s->migration_blocker);
} }
static QemuOptsList vpc_create_opts = { static QemuOptsList vpc_create_opts = {


@@ -1144,8 +1144,6 @@ static int vvfat_open(BlockDriverState *bs, QDict *options, int flags,
QemuOpts *opts; QemuOpts *opts;
int ret; int ret;
GRAPH_RDLOCK_GUARD_MAINLOOP();
#ifdef DEBUG #ifdef DEBUG
vvv = s; vvv = s;
#endif #endif
@@ -1268,8 +1266,9 @@ static int vvfat_open(BlockDriverState *bs, QDict *options, int flags,
"The vvfat (rw) format used by node '%s' " "The vvfat (rw) format used by node '%s' "
"does not support live migration", "does not support live migration",
bdrv_get_device_or_node_name(bs)); bdrv_get_device_or_node_name(bs));
ret = migrate_add_blocker_normal(&s->migration_blocker, errp); ret = migrate_add_blocker(s->migration_blocker, errp);
if (ret < 0) { if (ret < 0) {
error_free(s->migration_blocker);
goto fail; goto fail;
} }
} }
@@ -1481,7 +1480,7 @@ vvfat_read(BlockDriverState *bs, int64_t sector_num, uint8_t *buf, int nb_sector
if (s->qcow) { if (s->qcow) {
int64_t n; int64_t n;
int ret; int ret;
ret = bdrv_co_is_allocated(s->qcow->bs, sector_num * BDRV_SECTOR_SIZE, ret = bdrv_is_allocated(s->qcow->bs, sector_num * BDRV_SECTOR_SIZE,
(nb_sectors - i) * BDRV_SECTOR_SIZE, &n); (nb_sectors - i) * BDRV_SECTOR_SIZE, &n);
if (ret < 0) { if (ret < 0) {
return ret; return ret;
@@ -1807,7 +1806,7 @@ cluster_was_modified(BDRVVVFATState *s, uint32_t cluster_num)
} }
for (i = 0; !was_modified && i < s->sectors_per_cluster; i++) { for (i = 0; !was_modified && i < s->sectors_per_cluster; i++) {
was_modified = bdrv_co_is_allocated(s->qcow->bs, was_modified = bdrv_is_allocated(s->qcow->bs,
(cluster2sector(s, cluster_num) + (cluster2sector(s, cluster_num) +
i) * BDRV_SECTOR_SIZE, i) * BDRV_SECTOR_SIZE,
BDRV_SECTOR_SIZE, NULL); BDRV_SECTOR_SIZE, NULL);
@@ -1968,7 +1967,7 @@ get_cluster_count_for_direntry(BDRVVVFATState* s, direntry_t* direntry, const ch
for (i = 0; i < s->sectors_per_cluster; i++) { for (i = 0; i < s->sectors_per_cluster; i++) {
int res; int res;
res = bdrv_co_is_allocated(s->qcow->bs, res = bdrv_is_allocated(s->qcow->bs,
(offs + i) * BDRV_SECTOR_SIZE, (offs + i) * BDRV_SECTOR_SIZE,
BDRV_SECTOR_SIZE, NULL); BDRV_SECTOR_SIZE, NULL);
if (res < 0) { if (res < 0) {
@@ -3238,7 +3237,8 @@ static void vvfat_close(BlockDriverState *bs)
g_free(s->cluster_buffer); g_free(s->cluster_buffer);
if (s->qcow) { if (s->qcow) {
migrate_del_blocker(&s->migration_blocker); migrate_del_blocker(s->migration_blocker);
error_free(s->migration_blocker);
} }
} }


@@ -255,13 +255,13 @@ void drive_check_orphaned(void)
* Ignore default drives, because we create certain default * Ignore default drives, because we create certain default
* drives unconditionally, then leave them unclaimed. Not the * drives unconditionally, then leave them unclaimed. Not the
* users fault. * users fault.
* Ignore IF_VIRTIO or IF_XEN, because it gets desugared into * Ignore IF_VIRTIO, because it gets desugared into -device,
* -device, so we can leave failing to -device. * so we can leave failing to -device.
* Ignore IF_NONE, because leaving unclaimed IF_NONE remains * Ignore IF_NONE, because leaving unclaimed IF_NONE remains
* available for device_add is a feature. * available for device_add is a feature.
*/ */
if (dinfo->is_default || dinfo->type == IF_VIRTIO if (dinfo->is_default || dinfo->type == IF_VIRTIO
|| dinfo->type == IF_XEN || dinfo->type == IF_NONE) { || dinfo->type == IF_NONE) {
continue; continue;
} }
if (!blk_get_attached_dev(blk)) { if (!blk_get_attached_dev(blk)) {
@@ -977,15 +977,6 @@ DriveInfo *drive_new(QemuOpts *all_opts, BlockInterfaceType block_default_type,
qemu_opt_set(devopts, "driver", "virtio-blk", &error_abort); qemu_opt_set(devopts, "driver", "virtio-blk", &error_abort);
qemu_opt_set(devopts, "drive", qdict_get_str(bs_opts, "id"), qemu_opt_set(devopts, "drive", qdict_get_str(bs_opts, "id"),
&error_abort); &error_abort);
} else if (type == IF_XEN) {
QemuOpts *devopts;
devopts = qemu_opts_create(qemu_find_opts("device"), NULL, 0,
&error_abort);
qemu_opt_set(devopts, "driver",
(media == MEDIA_CDROM) ? "xen-cdrom" : "xen-disk",
&error_abort);
qemu_opt_set(devopts, "drive", qdict_get_str(bs_opts, "id"),
&error_abort);
} }
filename = qemu_opt_get(legacy_opts, "file"); filename = qemu_opt_get(legacy_opts, "file");
@@ -1050,8 +1041,6 @@ static BlockDriverState *qmp_get_root_bs(const char *name, Error **errp)
BlockDriverState *bs; BlockDriverState *bs;
AioContext *aio_context; AioContext *aio_context;
GRAPH_RDLOCK_GUARD_MAINLOOP();
bs = bdrv_lookup_bs(name, name, errp); bs = bdrv_lookup_bs(name, name, errp);
if (bs == NULL) { if (bs == NULL) {
return NULL; return NULL;
@@ -1147,9 +1136,6 @@ SnapshotInfo *qmp_blockdev_snapshot_delete_internal_sync(const char *device,
SnapshotInfo *info = NULL; SnapshotInfo *info = NULL;
int ret; int ret;
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
bs = qmp_get_root_bs(device, errp); bs = qmp_get_root_bs(device, errp);
if (!bs) { if (!bs) {
return NULL; return NULL;
@@ -1235,9 +1221,6 @@ static void internal_snapshot_action(BlockdevSnapshotInternal *internal,
AioContext *aio_context; AioContext *aio_context;
int ret1; int ret1;
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
tran_add(tran, &internal_snapshot_drv, state); tran_add(tran, &internal_snapshot_drv, state);
device = internal->device; device = internal->device;
@@ -1326,9 +1309,6 @@ static void internal_snapshot_abort(void *opaque)
AioContext *aio_context; AioContext *aio_context;
Error *local_error = NULL; Error *local_error = NULL;
GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!state->created) { if (!state->created) {
return; return;
} }
@@ -1610,12 +1590,7 @@ static void external_snapshot_abort(void *opaque)
aio_context_acquire(aio_context); aio_context_acquire(aio_context);
} }
bdrv_drained_begin(state->new_bs);
bdrv_graph_wrlock(state->old_bs);
bdrv_replace_node(state->new_bs, state->old_bs, &error_abort); bdrv_replace_node(state->new_bs, state->old_bs, &error_abort);
bdrv_graph_wrunlock();
bdrv_drained_end(state->new_bs);
bdrv_unref(state->old_bs); /* bdrv_replace_node() ref'ed old_bs */ bdrv_unref(state->old_bs); /* bdrv_replace_node() ref'ed old_bs */
aio_context_release(aio_context); aio_context_release(aio_context);
@@ -1679,8 +1654,6 @@ static void drive_backup_action(DriveBackup *backup,
bool set_backing_hd = false; bool set_backing_hd = false;
int ret; int ret;
GLOBAL_STATE_CODE();
tran_add(tran, &drive_backup_drv, state); tran_add(tran, &drive_backup_drv, state);
if (!backup->has_mode) { if (!backup->has_mode) {
@@ -1710,9 +1683,7 @@ static void drive_backup_action(DriveBackup *backup,
} }
/* Early check to avoid creating target */ /* Early check to avoid creating target */
bdrv_graph_rdlock_main_loop();
if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) { if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) {
bdrv_graph_rdunlock_main_loop();
goto out; goto out;
} }
@@ -1739,7 +1710,6 @@ static void drive_backup_action(DriveBackup *backup,
flags |= BDRV_O_NO_BACKING; flags |= BDRV_O_NO_BACKING;
set_backing_hd = true; set_backing_hd = true;
} }
bdrv_graph_rdunlock_main_loop();
size = bdrv_getlength(bs); size = bdrv_getlength(bs);
if (size < 0) { if (size < 0) {
@@ -1751,13 +1721,10 @@ static void drive_backup_action(DriveBackup *backup,
assert(format); assert(format);
if (source) { if (source) {
/* Implicit filters should not appear in the filename */ /* Implicit filters should not appear in the filename */
BlockDriverState *explicit_backing; BlockDriverState *explicit_backing =
bdrv_skip_implicit_filters(source);
bdrv_graph_rdlock_main_loop();
explicit_backing = bdrv_skip_implicit_filters(source);
bdrv_refresh_filename(explicit_backing); bdrv_refresh_filename(explicit_backing);
bdrv_graph_rdunlock_main_loop();
bdrv_img_create(backup->target, format, bdrv_img_create(backup->target, format,
explicit_backing->filename, explicit_backing->filename,
explicit_backing->drv->format_name, NULL, explicit_backing->drv->format_name, NULL,
@@ -2377,13 +2344,10 @@ void coroutine_fn qmp_block_resize(const char *device, const char *node_name,
return; return;
} }
bdrv_graph_co_rdlock();
if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_RESIZE, NULL)) { if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_RESIZE, NULL)) {
error_setg(errp, QERR_DEVICE_IN_USE, device); error_setg(errp, QERR_DEVICE_IN_USE, device);
bdrv_graph_co_rdunlock();
return; return;
} }
bdrv_graph_co_rdunlock();
blk = blk_co_new_with_bs(bs, BLK_PERM_RESIZE, BLK_PERM_ALL, errp); blk = blk_co_new_with_bs(bs, BLK_PERM_RESIZE, BLK_PERM_ALL, errp);
if (!blk) { if (!blk) {
@@ -2423,8 +2387,6 @@ void qmp_block_stream(const char *job_id, const char *device,
Error *local_err = NULL; Error *local_err = NULL;
int job_flags = JOB_DEFAULT; int job_flags = JOB_DEFAULT;
GLOBAL_STATE_CODE();
if (base && base_node) { if (base && base_node) {
error_setg(errp, "'base' and 'base-node' cannot be specified " error_setg(errp, "'base' and 'base-node' cannot be specified "
"at the same time"); "at the same time");
@@ -2455,12 +2417,11 @@ void qmp_block_stream(const char *job_id, const char *device,
aio_context = bdrv_get_aio_context(bs); aio_context = bdrv_get_aio_context(bs);
aio_context_acquire(aio_context); aio_context_acquire(aio_context);
bdrv_graph_rdlock_main_loop();
if (base) { if (base) {
base_bs = bdrv_find_backing_image(bs, base); base_bs = bdrv_find_backing_image(bs, base);
if (base_bs == NULL) { if (base_bs == NULL) {
error_setg(errp, "Can't find '%s' in the backing chain", base); error_setg(errp, "Can't find '%s' in the backing chain", base);
goto out_rdlock; goto out;
} }
assert(bdrv_get_aio_context(base_bs) == aio_context); assert(bdrv_get_aio_context(base_bs) == aio_context);
} }
@@ -2468,36 +2429,35 @@ void qmp_block_stream(const char *job_id, const char *device,
if (base_node) { if (base_node) {
base_bs = bdrv_lookup_bs(NULL, base_node, errp); base_bs = bdrv_lookup_bs(NULL, base_node, errp);
if (!base_bs) { if (!base_bs) {
goto out_rdlock; goto out;
} }
if (bs == base_bs || !bdrv_chain_contains(bs, base_bs)) { if (bs == base_bs || !bdrv_chain_contains(bs, base_bs)) {
error_setg(errp, "Node '%s' is not a backing image of '%s'", error_setg(errp, "Node '%s' is not a backing image of '%s'",
base_node, device); base_node, device);
goto out_rdlock; goto out;
} }
assert(bdrv_get_aio_context(base_bs) == aio_context); assert(bdrv_get_aio_context(base_bs) == aio_context);
bdrv_refresh_filename(base_bs); bdrv_refresh_filename(base_bs);
} }
if (bottom) { if (bottom) {
bottom_bs = bdrv_lookup_bs(NULL, bottom, errp); bottom_bs = bdrv_lookup_bs(NULL, bottom, errp);
if (!bottom_bs) { if (!bottom_bs) {
goto out_rdlock; goto out;
} }
if (!bottom_bs->drv) { if (!bottom_bs->drv) {
error_setg(errp, "Node '%s' is not open", bottom); error_setg(errp, "Node '%s' is not open", bottom);
goto out_rdlock; goto out;
} }
if (bottom_bs->drv->is_filter) { if (bottom_bs->drv->is_filter) {
error_setg(errp, "Node '%s' is a filter, use a non-filter node " error_setg(errp, "Node '%s' is a filter, use a non-filter node "
"as 'bottom'", bottom); "as 'bottom'", bottom);
goto out_rdlock; goto out;
} }
if (!bdrv_chain_contains(bs, bottom_bs)) { if (!bdrv_chain_contains(bs, bottom_bs)) {
error_setg(errp, "Node '%s' is not in a chain starting from '%s'", error_setg(errp, "Node '%s' is not in a chain starting from '%s'",
bottom, device); bottom, device);
goto out_rdlock; goto out;
} }
assert(bdrv_get_aio_context(bottom_bs) == aio_context); assert(bdrv_get_aio_context(bottom_bs) == aio_context);
} }
@@ -2510,10 +2470,9 @@ void qmp_block_stream(const char *job_id, const char *device,
iter = bdrv_filter_or_cow_bs(iter)) iter = bdrv_filter_or_cow_bs(iter))
{ {
if (bdrv_op_is_blocked(iter, BLOCK_OP_TYPE_STREAM, errp)) { if (bdrv_op_is_blocked(iter, BLOCK_OP_TYPE_STREAM, errp)) {
goto out_rdlock; goto out;
} }
} }
bdrv_graph_rdunlock_main_loop();
/* if we are streaming the entire chain, the result will have no backing /* if we are streaming the entire chain, the result will have no backing
* file, and specifying one is therefore an error */ * file, and specifying one is therefore an error */
@@ -2542,11 +2501,6 @@ void qmp_block_stream(const char *job_id, const char *device,
out: out:
aio_context_release(aio_context); aio_context_release(aio_context);
return;
out_rdlock:
bdrv_graph_rdunlock_main_loop();
aio_context_release(aio_context);
} }
void qmp_block_commit(const char *job_id, const char *device, void qmp_block_commit(const char *job_id, const char *device,
@@ -2881,8 +2835,6 @@ BlockDeviceInfoList *qmp_query_named_block_nodes(bool has_flat,
XDbgBlockGraph *qmp_x_debug_query_block_graph(Error **errp) XDbgBlockGraph *qmp_x_debug_query_block_graph(Error **errp)
{ {
GRAPH_RDLOCK_GUARD_MAINLOOP();
return bdrv_get_xdbg_block_graph(errp); return bdrv_get_xdbg_block_graph(errp);
} }
@@ -2984,7 +2936,6 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
if (replaces) { if (replaces) {
BlockDriverState *to_replace_bs; BlockDriverState *to_replace_bs;
AioContext *aio_context;
AioContext *replace_aio_context; AioContext *replace_aio_context;
int64_t bs_size, replace_size; int64_t bs_size, replace_size;
@@ -2999,19 +2950,10 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
return; return;
} }
aio_context = bdrv_get_aio_context(bs);
replace_aio_context = bdrv_get_aio_context(to_replace_bs); replace_aio_context = bdrv_get_aio_context(to_replace_bs);
/*
* bdrv_getlength() is a co-wrapper and uses AIO_WAIT_WHILE. Be sure not
* to acquire the same AioContext twice.
*/
if (replace_aio_context != aio_context) {
aio_context_acquire(replace_aio_context); aio_context_acquire(replace_aio_context);
}
replace_size = bdrv_getlength(to_replace_bs); replace_size = bdrv_getlength(to_replace_bs);
if (replace_aio_context != aio_context) {
aio_context_release(replace_aio_context); aio_context_release(replace_aio_context);
}
if (replace_size < 0) { if (replace_size < 0) {
error_setg_errno(errp, -replace_size, error_setg_errno(errp, -replace_size,
@@ -3056,9 +2998,7 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp)
} }
/* Early check to avoid creating target */ /* Early check to avoid creating target */
bdrv_graph_rdlock_main_loop();
if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_MIRROR_SOURCE, errp)) { if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_MIRROR_SOURCE, errp)) {
bdrv_graph_rdunlock_main_loop();
return; return;
} }
@@ -3082,7 +3022,6 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp)
if (arg->sync == MIRROR_SYNC_MODE_NONE) { if (arg->sync == MIRROR_SYNC_MODE_NONE) {
target_backing_bs = bs; target_backing_bs = bs;
} }
bdrv_graph_rdunlock_main_loop();
size = bdrv_getlength(bs); size = bdrv_getlength(bs);
if (size < 0) { if (size < 0) {
@@ -3115,21 +3054,16 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp)
bdrv_img_create(arg->target, format, bdrv_img_create(arg->target, format,
NULL, NULL, NULL, size, flags, false, &local_err); NULL, NULL, NULL, size, flags, false, &local_err);
} else { } else {
BlockDriverState *explicit_backing; /* Implicit filters should not appear in the filename */
BlockDriverState *explicit_backing =
bdrv_skip_implicit_filters(target_backing_bs);
switch (arg->mode) { switch (arg->mode) {
case NEW_IMAGE_MODE_EXISTING: case NEW_IMAGE_MODE_EXISTING:
break; break;
case NEW_IMAGE_MODE_ABSOLUTE_PATHS: case NEW_IMAGE_MODE_ABSOLUTE_PATHS:
/* /* create new image with backing file */
* Create new image with backing file.
* Implicit filters should not appear in the filename.
*/
bdrv_graph_rdlock_main_loop();
explicit_backing = bdrv_skip_implicit_filters(target_backing_bs);
bdrv_refresh_filename(explicit_backing); bdrv_refresh_filename(explicit_backing);
bdrv_graph_rdunlock_main_loop();
bdrv_img_create(arg->target, format, bdrv_img_create(arg->target, format,
explicit_backing->filename, explicit_backing->filename,
explicit_backing->drv->format_name, explicit_backing->drv->format_name,
@@ -3165,11 +3099,9 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp)
return; return;
} }
bdrv_graph_rdlock_main_loop();
zero_target = (arg->sync == MIRROR_SYNC_MODE_FULL && zero_target = (arg->sync == MIRROR_SYNC_MODE_FULL &&
(arg->mode == NEW_IMAGE_MODE_EXISTING || (arg->mode == NEW_IMAGE_MODE_EXISTING ||
!bdrv_has_zero_init(target_bs))); !bdrv_has_zero_init(target_bs)));
bdrv_graph_rdunlock_main_loop();
/* Honor bdrv_try_change_aio_context() context acquisition requirements. */ /* Honor bdrv_try_change_aio_context() context acquisition requirements. */
@@ -3412,20 +3344,6 @@ void qmp_block_job_dismiss(const char *id, Error **errp)
job_dismiss_locked(&job, errp); job_dismiss_locked(&job, errp);
} }
void qmp_block_job_change(BlockJobChangeOptions *opts, Error **errp)
{
BlockJob *job;
JOB_LOCK_GUARD();
job = find_block_job_locked(opts->id, errp);
if (!job) {
return;
}
block_job_change_locked(job, opts, errp);
}
void qmp_change_backing_file(const char *device, void qmp_change_backing_file(const char *device,
const char *image_node_name, const char *image_node_name,
const char *backing_file, const char *backing_file,
@@ -3446,38 +3364,35 @@ void qmp_change_backing_file(const char *device,
aio_context = bdrv_get_aio_context(bs); aio_context = bdrv_get_aio_context(bs);
aio_context_acquire(aio_context); aio_context_acquire(aio_context);
bdrv_graph_rdlock_main_loop();
image_bs = bdrv_lookup_bs(NULL, image_node_name, &local_err); image_bs = bdrv_lookup_bs(NULL, image_node_name, &local_err);
if (local_err) { if (local_err) {
error_propagate(errp, local_err); error_propagate(errp, local_err);
goto out_rdlock; goto out;
} }
if (!image_bs) { if (!image_bs) {
error_setg(errp, "image file not found"); error_setg(errp, "image file not found");
goto out_rdlock; goto out;
} }
if (bdrv_find_base(image_bs) == image_bs) { if (bdrv_find_base(image_bs) == image_bs) {
error_setg(errp, "not allowing backing file change on an image " error_setg(errp, "not allowing backing file change on an image "
"without a backing file"); "without a backing file");
goto out_rdlock; goto out;
} }
/* even though we are not necessarily operating on bs, we need it to /* even though we are not necessarily operating on bs, we need it to
* determine if block ops are currently prohibited on the chain */ * determine if block ops are currently prohibited on the chain */
if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_CHANGE, errp)) { if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_CHANGE, errp)) {
goto out_rdlock; goto out;
} }
/* final sanity check */ /* final sanity check */
if (!bdrv_chain_contains(bs, image_bs)) { if (!bdrv_chain_contains(bs, image_bs)) {
error_setg(errp, "'%s' and image file are not in the same chain", error_setg(errp, "'%s' and image file are not in the same chain",
device); device);
goto out_rdlock; goto out;
} }
bdrv_graph_rdunlock_main_loop();
/* if not r/w, reopen to make r/w */ /* if not r/w, reopen to make r/w */
ro = bdrv_is_read_only(image_bs); ro = bdrv_is_read_only(image_bs);
@@ -3505,11 +3420,6 @@ void qmp_change_backing_file(const char *device,
out: out:
aio_context_release(aio_context); aio_context_release(aio_context);
return;
out_rdlock:
bdrv_graph_rdunlock_main_loop();
aio_context_release(aio_context);
} }
void qmp_blockdev_add(BlockdevOptions *options, Error **errp) void qmp_blockdev_add(BlockdevOptions *options, Error **errp)
@@ -3599,7 +3509,6 @@ void qmp_blockdev_del(const char *node_name, Error **errp)
BlockDriverState *bs; BlockDriverState *bs;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
GRAPH_RDLOCK_GUARD_MAINLOOP();
bs = bdrv_find_node(node_name); bs = bdrv_find_node(node_name);
if (!bs) { if (!bs) {
@@ -3727,8 +3636,6 @@ void qmp_x_blockdev_set_iothread(const char *node_name, StrOrNull *iothread,
AioContext *new_context; AioContext *new_context;
BlockDriverState *bs; BlockDriverState *bs;
GRAPH_RDLOCK_GUARD_MAINLOOP();
bs = bdrv_find_node(node_name); bs = bdrv_find_node(node_name);
if (!bs) { if (!bs) {
error_setg(errp, "Failed to find node with node-name='%s'", node_name); error_setg(errp, "Failed to find node with node-name='%s'", node_name);


@@ -198,9 +198,7 @@ void block_job_remove_all_bdrv(BlockJob *job)
* one to make sure that such a concurrent access does not attempt * one to make sure that such a concurrent access does not attempt
* to process an already freed BdrvChild. * to process an already freed BdrvChild.
*/ */
aio_context_release(job->job.aio_context);
bdrv_graph_wrlock(NULL); bdrv_graph_wrlock(NULL);
aio_context_acquire(job->job.aio_context);
while (job->nodes) { while (job->nodes) {
GSList *l = job->nodes; GSList *l = job->nodes;
BdrvChild *c = l->data; BdrvChild *c = l->data;
@@ -330,26 +328,6 @@ static bool block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
return block_job_set_speed_locked(job, speed, errp); return block_job_set_speed_locked(job, speed, errp);
} }
void block_job_change_locked(BlockJob *job, BlockJobChangeOptions *opts,
Error **errp)
{
const BlockJobDriver *drv = block_job_driver(job);
GLOBAL_STATE_CODE();
if (job_apply_verb_locked(&job->job, JOB_VERB_CHANGE, errp)) {
return;
}
if (drv->change) {
job_unlock();
drv->change(job, opts, errp);
job_lock();
} else {
error_setg(errp, "Job type does not support change");
}
}
void block_job_ratelimit_processed_bytes(BlockJob *job, uint64_t n) void block_job_ratelimit_processed_bytes(BlockJob *job, uint64_t n)
{ {
IO_CODE(); IO_CODE();
@@ -378,7 +356,6 @@ BlockJobInfo *block_job_query_locked(BlockJob *job, Error **errp)
{ {
BlockJobInfo *info; BlockJobInfo *info;
uint64_t progress_current, progress_total; uint64_t progress_current, progress_total;
const BlockJobDriver *drv = block_job_driver(job);
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
@@ -391,7 +368,7 @@ BlockJobInfo *block_job_query_locked(BlockJob *job, Error **errp)
&progress_total); &progress_total);
info = g_new0(BlockJobInfo, 1); info = g_new0(BlockJobInfo, 1);
info->type = job_type(&job->job); info->type = g_strdup(job_type_str(&job->job));
info->device = g_strdup(job->job.id); info->device = g_strdup(job->job.id);
info->busy = job->job.busy; info->busy = job->job.busy;
info->paused = job->job.pause_count > 0; info->paused = job->job.pause_count > 0;
@@ -408,11 +385,6 @@ BlockJobInfo *block_job_query_locked(BlockJob *job, Error **errp)
g_strdup(error_get_pretty(job->job.err)) : g_strdup(error_get_pretty(job->job.err)) :
g_strdup(strerror(-job->job.ret)); g_strdup(strerror(-job->job.ret));
} }
if (drv->query) {
job_unlock();
drv->query(job, info);
job_lock();
}
return info; return info;
} }
@@ -514,8 +486,6 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
int ret; int ret;
GLOBAL_STATE_CODE(); GLOBAL_STATE_CODE();
bdrv_graph_wrlock(bs);
if (job_id == NULL && !(flags & JOB_INTERNAL)) { if (job_id == NULL && !(flags & JOB_INTERNAL)) {
job_id = bdrv_get_device_name(bs); job_id = bdrv_get_device_name(bs);
} }
@@ -523,7 +493,6 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
job = job_create(job_id, &driver->job_driver, txn, bdrv_get_aio_context(bs), job = job_create(job_id, &driver->job_driver, txn, bdrv_get_aio_context(bs),
flags, cb, opaque, errp); flags, cb, opaque, errp);
if (job == NULL) { if (job == NULL) {
bdrv_graph_wrunlock();
return NULL; return NULL;
} }
@@ -563,11 +532,9 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
goto fail; goto fail;
} }
bdrv_graph_wrunlock();
return job; return job;
fail: fail:
bdrv_graph_wrunlock();
job_early_fail(&job->job); job_early_fail(&job->job);
return NULL; return NULL;
} }


@@ -21,7 +21,6 @@
#define TARGET_ARCH_H #define TARGET_ARCH_H
#include "qemu.h" #include "qemu.h"
#include "target/arm/cpu-features.h"
void target_cpu_set_tls(CPUARMState *env, target_ulong newtls); void target_cpu_set_tls(CPUARMState *env, target_ulong newtls);
target_ulong target_cpu_get_tls(CPUARMState *env); target_ulong target_cpu_get_tls(CPUARMState *env);


@@ -118,7 +118,7 @@ void fork_end(int child)
*/ */
CPU_FOREACH_SAFE(cpu, next_cpu) { CPU_FOREACH_SAFE(cpu, next_cpu) {
if (cpu != thread_cpu) { if (cpu != thread_cpu) {
QTAILQ_REMOVE_RCU(&cpus_queue, cpu, node); QTAILQ_REMOVE_RCU(&cpus, cpu, node);
} }
} }
mmap_fork_end(child); mmap_fork_end(child);


@@ -171,7 +171,7 @@ static int msmouse_chr_write(struct Chardev *s, const uint8_t *buf, int len)
return len; return len;
} }
static const QemuInputHandler msmouse_handler = { static QemuInputHandler msmouse_handler = {
.name = "QEMU Microsoft Mouse", .name = "QEMU Microsoft Mouse",
.mask = INPUT_EVENT_MASK_BTN | INPUT_EVENT_MASK_REL, .mask = INPUT_EVENT_MASK_BTN | INPUT_EVENT_MASK_REL,
.event = msmouse_input_event, .event = msmouse_input_event,


@@ -178,7 +178,7 @@ static void wctablet_input_sync(DeviceState *dev)
} }
} }
static const QemuInputHandler wctablet_handler = { static QemuInputHandler wctablet_handler = {
.name = "QEMU Wacom Pen Tablet", .name = "QEMU Wacom Pen Tablet",
.mask = INPUT_EVENT_MASK_BTN | INPUT_EVENT_MASK_ABS, .mask = INPUT_EVENT_MASK_BTN | INPUT_EVENT_MASK_ABS,
.event = wctablet_input_event, .event = wctablet_input_event,


@@ -14,7 +14,6 @@ CONFIG_SAM460EX=y
CONFIG_MAC_OLDWORLD=y CONFIG_MAC_OLDWORLD=y
CONFIG_MAC_NEWWORLD=y CONFIG_MAC_NEWWORLD=y
CONFIG_AMIGAONE=y
CONFIG_PEGASOS2=y CONFIG_PEGASOS2=y
# For PReP # For PReP


@@ -1,9 +0,0 @@
# target-specific defaults, can still be overridden on
# the command line
[built-in options]
bindir = ''
prefix = '/qemu'
[project options]
qemu_suffix = ''


@@ -1,5 +1,4 @@
TARGET_ARCH=hppa TARGET_ARCH=hppa
TARGET_ABI32=y
TARGET_SYSTBL_ABI=common,32 TARGET_SYSTBL_ABI=common,32
TARGET_SYSTBL=syscall.tbl TARGET_SYSTBL=syscall.tbl
TARGET_BIG_ENDIAN=y TARGET_BIG_ENDIAN=y


@@ -1,4 +1,3 @@
# Default configuration for loongarch64-linux-user # Default configuration for loongarch64-linux-user
TARGET_ARCH=loongarch64 TARGET_ARCH=loongarch64
TARGET_BASE_ARCH=loongarch TARGET_BASE_ARCH=loongarch
TARGET_XML_FILES=gdb-xml/loongarch-base64.xml gdb-xml/loongarch-fpu.xml


@@ -1,3 +1,2 @@
TARGET_ARCH=sparc TARGET_ARCH=sparc
TARGET_BIG_ENDIAN=y TARGET_BIG_ENDIAN=y
TARGET_SUPPORTS_MTTCG=y


@@ -1,4 +1,3 @@
TARGET_ARCH=sparc64 TARGET_ARCH=sparc64
TARGET_BASE_ARCH=sparc TARGET_BASE_ARCH=sparc
TARGET_BIG_ENDIAN=y TARGET_BIG_ENDIAN=y
TARGET_SUPPORTS_MTTCG=y

Some files were not shown because too many files have changed in this diff.