Compare commits


49 Commits

Author SHA1 Message Date
Fabiano Rosas
2371655364 tests/qtest: Add a test for migration with direct-io and multifd
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 17:07:25 -03:00
Fabiano Rosas
56b41a955d migration: Add direct-io parameter
Add the direct-io migration parameter that tells the migration code to
use O_DIRECT when opening the migration stream file whenever possible.

This is currently only used for the secondary channels of fixed-ram
migration, which can guarantee that writes are page aligned.

However the parameter could be made to affect other types of
file-based migrations in the future.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 17:07:25 -03:00
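
A minimal sketch of the mechanism, using plain POSIX calls rather than
QEMU's actual helpers (the function name is made up for illustration):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdbool.h>

/* Illustrative only: open the migration file with O_DIRECT when the
 * caller can guarantee page-aligned writes (as the fixed-ram
 * secondary channels can); otherwise fall back to buffered I/O. */
static int migration_file_open(const char *path, bool direct_io)
{
    int flags = O_WRONLY | O_CREAT;

    if (direct_io) {
        flags |= O_DIRECT;  /* bypass the page cache */
    }
    return open(path, flags, 0660);
}
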
Fabiano Rosas
22aebdc8e0 tests/qtest: Add a multifd + fixed-ram migration test
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 17:07:25 -03:00
Fabiano Rosas
f93adb77bd migration/multifd: Support incoming fixed-ram stream format
For the incoming fixed-ram migration we need to read the ramblock
headers, get the pages bitmap and send the host address of each
non-zero page to the multifd channel thread for writing.

To read from the migration file we need a preadv function that can
read into the iovs in segments of contiguous pages because (as in the
writing case) the file offset applies to the entire iovec.

Usage on HMP is:

(qemu) migrate_set_capability multifd on
(qemu) migrate_set_capability fixed-ram on
(qemu) migrate_set_parameter max-bandwidth 0
(qemu) migrate_set_parameter multifd-channels 8
(qemu) migrate_incoming file:migfile
(qemu) info status
(qemu) c

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 17:07:25 -03:00
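
A hypothetical sketch of the receive-side walk described above, with
made-up names and a fixed 4k page size; the real code operates on
QEMU's ramblock and multifd structures:

#include <limits.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096UL

/* Walk the pages bitmap and hand the host address and fixed file
 * offset of every non-zero page to a channel for reading. */
static void queue_dirty_pages(uint8_t *host_base, uint64_t pages_offset,
                              const unsigned long *bitmap, size_t nr_pages,
                              void (*enqueue)(void *host, uint64_t file_off))
{
    const size_t bits = sizeof(unsigned long) * CHAR_BIT;

    for (size_t i = 0; i < nr_pages; i++) {
        if (bitmap[i / bits] & (1UL << (i % bits))) {
            enqueue(host_base + i * PAGE_SIZE, pages_offset + i * PAGE_SIZE);
        }
    }
}
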
Fabiano Rosas
a110fcb9fb migration/multifd: Support outgoing fixed-ram stream format
The new fixed-ram stream format uses a file transport and puts ram
pages in the migration file at their respective offsets. Writing can
be done in parallel by using the pwritev system call, which takes
iovecs and an offset.

Add support for enabling the new format along with multifd to make
use of the threading and page handling already in place.

This requires multifd to stop sending headers, leaving the stream
format to the fixed-ram code. When it comes time to write the data, we
need to call a version of qio_channel_write that can take an offset.

Usage on HMP is:

(qemu) stop
(qemu) migrate_set_capability multifd on
(qemu) migrate_set_capability fixed-ram on
(qemu) migrate_set_parameter max-bandwidth 0
(qemu) migrate_set_parameter multifd-channels 8
(qemu) migrate file:migfile

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 17:07:24 -03:00
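
The core of the write path can be sketched with plain pwritev(); the
helper below is illustrative, not QEMU's actual code:

#define _GNU_SOURCE
#include <stdint.h>
#include <sys/uio.h>

/* Each page lands in a fixed slot, so a write is just pwritev() at
 * pages_offset + index * page_size. pwritev() does not advance the
 * shared file offset, which is what lets several multifd channels
 * reuse the same fd concurrently. */
static ssize_t write_page_at(int fd, void *page, size_t page_size,
                             uint64_t pages_offset, uint64_t index)
{
    struct iovec iov = { .iov_base = page, .iov_len = page_size };

    return pwritev(fd, &iov, 1, pages_offset + index * page_size);
}
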
Fabiano Rosas
b0a05de4bd migration/ram: Ignore multifd flush when doing fixed-ram migration
Some functionalities of multifd are incompatible with the 'fixed-ram'
migration format.

The MULTIFD_FLUSH flag in particular is not used because in fixed-ram
there is no synchronicity between migration source and destination, so
there is no need for a sync packet. In fact, fixed-ram disables
packets in multifd as a whole.

Make sure RAM_SAVE_FLAG_MULTIFD_FLUSH is never emitted when fixed-ram
is enabled.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 17:07:24 -03:00
Fabiano Rosas
9c3ff163dd migration/ram: Add a wrapper for fixed-ram shadow bitmap
We'll need to set the shadow_bmap bits from outside ram.c soon and
TARGET_PAGE_BITS is poisoned, so add a wrapper for it.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 17:07:24 -03:00
Fabiano Rosas
cda3d8d0c4 io: Add a pwritev/preadv version that takes a discontiguous iovec
For the upcoming support for fixed-ram migration with multifd, we need
to be able to accept an iovec array with non-contiguous data.

Add a pwritev and preadv version that splits the array into contiguous
segments before writing. With that we can have the ram code continue
to add pages in any order and the multifd code continue to send large
arrays for reading and writing.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
Since iovs can be non-contiguous, we'd need a separate array on the
side to carry an extra file offset for each of them, so I'm relying on
the fact that iovs are all within the same host page and passing in an
encoded offset that takes the host page into account.
2023-07-17 17:07:24 -03:00
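
A hedged sketch of the splitting described above, assuming (as
fixed-ram guarantees) that a run's file offset mirrors its distance in
memory from the first iov:

#define _GNU_SOURCE
#include <sys/uio.h>

/* pwritev() applies one file offset to the whole iovec, so a
 * discontiguous array is flushed as runs of memory-contiguous iovs. */
static ssize_t pwritev_split(int fd, struct iovec *iov, int cnt, off_t base)
{
    ssize_t total = 0;
    char *origin = iov[0].iov_base;

    for (int start = 0; start < cnt; ) {
        int end = start + 1;

        /* extend the run while the next iov is contiguous in memory */
        while (end < cnt &&
               (char *)iov[end].iov_base ==
               (char *)iov[end - 1].iov_base + iov[end - 1].iov_len) {
            end++;
        }
        ssize_t n = pwritev(fd, &iov[start], end - start,
                            base + ((char *)iov[start].iov_base - origin));
        if (n < 0) {
            return n;
        }
        total += n;
        start = end;
    }
    return total;
}
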
Fabiano Rosas
8593df5b1d migration/multifd: Add pages to the receiving side
Currently multifd does not need to have knowledge of pages on the
receiving side because all the information needed is within the
packets that come in the stream.

We're about to add support to fixed-ram migration, which cannot use
packets because it expects the ramblock section in the migration file
to contain only the guest pages data.

Add a pointer to MultiFDPages in the multifd_recv_state and use the
pages similarly to what we already do on the sending side. The pages
are used to transfer data between the ram migration code in the main
migration thread and the multifd receiving threads.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 17:07:24 -03:00
Fabiano Rosas
3c0c8166d2 migration/multifd: Add incoming QIOChannelFile support
On the receiving side we don't need to differentiate between main
channel and threads, so whichever channel is defined first gets to be
the main one. And since there are no packets, use the atomic channel
count to index into the params array.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 17:07:24 -03:00
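
A minimal sketch of the atomic indexing, with illustrative names: with
no packets to carry a channel id, each new file channel simply claims
the next slot in the params array.

#include <stdatomic.h>

static atomic_int channel_count;

/* first caller gets slot 0 (the main channel), the rest get 1..n-1 */
static int claim_channel_slot(void)
{
    return atomic_fetch_add(&channel_count, 1);
}
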
Fabiano Rosas
4a2a4cab71 migration/multifd: Add outgoing QIOChannelFile support
Allow multifd to open file-backed channels. This will be used when
enabling the fixed-ram migration stream format which expects a
seekable transport.

The QIOChannel read and write methods will use the preadv/pwritev
versions which don't update the file offset at each call, so we can
reuse the fd without re-opening it for every channel.

Note that this is just setup code and multifd cannot yet make use of
the file channels.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 17:07:24 -03:00
Fabiano Rosas
e69fb4a401 migration/multifd: Allow multifd without packets
For the upcoming support for the new 'fixed-ram' migration stream
format, we cannot use multifd packets because each write into the
ramblock section in the migration file is expected to contain only the
guest pages. They are written at their respective offsets relative to
the ramblock section header.

There is no space for the packet information and the expected gains
from the new approach come partly from being able to write the pages
sequentially without extraneous data in between.

The new format also doesn't need the packets and all necessary
information can be taken from the standard migration headers with some
(future) changes to multifd code.

Use the presence of the fixed-ram capability to decide whether to send
packets. For now this has no effect as fixed-ram cannot yet be enabled
with multifd.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 17:07:24 -03:00
Fabiano Rosas
c2b8b1d000 migration/multifd: Remove direct "socket" references
We're about to enable support for other transports in multifd, so
remove direct references to sockets.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 17:07:24 -03:00
Fabiano Rosas
47e38ba7d0 migration: Add completion tracepoint
Add a completion tracepoint that provides basic stats for
debugging: throughput (MB/s and pages/s) and total time (ms).

Usage:
  $QEMU ... -trace migration_status

Output:
  migration_status 1506 MB/s, 436725 pages/s, 8698 ms

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 17:07:24 -03:00
Nikolay Borisov
19283f7a7a tests/qtest: migration-test: Add tests for fixed-ram file-based migration
Add basic tests for 'fixed-ram' migration.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 16:53:11 -03:00
Nikolay Borisov
857bc1fd14 migration/ram: Add support for 'fixed-ram' migration restore
Add the necessary code to parse the format changes for the 'fixed-ram'
capability.

One of the more notable changes in behavior is that in the 'fixed-ram'
case ram pages are restored in one go rather than constantly looping
through the migration stream.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
(farosas) reused more of the common code by making the fixed-ram
function take only one ramblock and calling it from inside
parse_ramblock.
2023-07-17 16:17:54 -03:00
Nikolay Borisov
7a00057c6c migration/ram: Add support for 'fixed-ram' outgoing migration
Implement the outgoing migration side for the 'fixed-ram' capability.

A bitmap is introduced to track which pages have been written in the
migration file. Pages are written at a fixed location for every
ramblock. Zero pages are ignored as they'd be zero in the destination
migration as well.

The migration stream is altered to put the dirty pages for a ramblock
after its header instead of having a sequential stream of pages that
follow the ramblock headers. Since all pages have a fixed location,
RAM_SAVE_FLAG_EOS is no longer generated on every migration iteration.

Without fixed-ram (current):

ramblock 1 header|ramblock 2 header|...|RAM_SAVE_FLAG_EOS|stream of
 pages (iter 1)|RAM_SAVE_FLAG_EOS|stream of pages (iter 2)|...

With fixed-ram (new):

ramblock 1 header|ramblock 1 fixed-ram header|ramblock 1 pages (fixed
 offsets)|ramblock 2 header|ramblock 2 fixed-ram header|ramblock 2
 pages (fixed offsets)|...|RAM_SAVE_FLAG_EOS

where:
 - ramblock header: the generic information for a ramblock, such as
   idstr, used_len, etc.

 - ramblock fixed-ram header: the new information added by this
   feature: bitmap of pages written, bitmap size and offset of pages
   in the migration file.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 16:17:54 -03:00
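
A hypothetical rendering of the fixed-ram header as a struct; the
field names are made up and the real stream format may differ:

#include <stdint.h>

struct fixed_ram_header {
    uint64_t bitmap_size;   /* bytes of written-pages bitmap following */
    uint64_t pages_offset;  /* file offset where the pages area begins */
    /* uint8_t bitmap[bitmap_size] follows; page N's data then lives
     * at pages_offset + N * page_size */
};
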
Nikolay Borisov
e536de8c78 migration/ram: Refactor precopy ram loading code
To facilitate the implementation of the 'fixed-ram' migration restore,
factor out the code responsible for parsing the ramblock
headers. This also makes ram_load_precopy easier to comprehend.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 16:17:54 -03:00
Fabiano Rosas
771d2a086e migration/ram: Introduce 'fixed-ram' migration capability
Add a new migration capability 'fixed-ram'.

The core of the feature is to ensure that each ram page has a specific
offset in the resulting migration stream. The reasons why we'd want
such behavior are twofold:

 - When doing a 'fixed-ram' migration the resulting file will have a
   bounded size, since pages which are dirtied multiple times will
   always go to a fixed location in the file, rather than constantly
   being added to a sequential stream. This eliminates cases where a VM
   with, say, 1G of ram can result in a migration file that's tens of
   GBs, provided that the workload constantly redirties memory.

 - It paves the way to implement DIRECT_IO-enabled save/restore of the
   migration stream as the pages are ensured to be written at aligned
   offsets.

For now, enabling the capability has no effect. The next couple of
patches implement the core functionality.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 16:17:54 -03:00
Fabiano Rosas
1bb11df307 migration: fixed-ram: Add URI compatibility check
The fixed-ram migration format needs a channel that supports seeking
to be able to write each page to an arbitrary offset in the migration
stream.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 16:17:54 -03:00
Nikolay Borisov
47819fe035 migration/qemu-file: add utility methods for working with seekable channels
Add utility methods that will be needed when implementing 'fixed-ram'
migration capability.

qemu_file_is_seekable
qemu_put_buffer_at
qemu_get_buffer_at
qemu_set_offset
qemu_get_offset

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
fixed total_transferred accounting

restructured to use qio_channel_file_preadv instead of the _full
variant
2023-07-17 16:17:54 -03:00
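
Reduced to plain POSIX calls on a seekable fd, the two buffer helpers
boil down to pread/pwrite; the real code goes through the QIOChannel
layer:

#include <stdint.h>
#include <unistd.h>

static ssize_t put_buffer_at(int fd, const void *buf, size_t len,
                             uint64_t off)
{
    return pwrite(fd, buf, len, off);   /* leaves the fd offset alone */
}

static ssize_t get_buffer_at(int fd, void *buf, size_t len, uint64_t off)
{
    return pread(fd, buf, len, off);
}
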
Nikolay Borisov
44a6493a2b io: implement io_pwritev/preadv for QIOChannelFile
The upcoming 'fixed-ram' feature will require qemu to write data to
(and restore from) specific offsets of the migration file.

Add a minimal implementation of pwritev/preadv and expose them via the
io_pwritev and io_preadv interfaces.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2023-07-17 16:17:54 -03:00
Nikolay Borisov
5655b9e61c io: Add generic pwritev/preadv interface
Introduce basic pwritev/preadv support in the generic channel layer.
A specific implementation will follow for the file channel, as this is
required in order to support migration streams with a fixed location
for each ram page.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 16:17:54 -03:00
Nikolay Borisov
b2f4616fe9 io: add and implement QIO_CHANNEL_FEATURE_SEEKABLE for channel file
Add a generic QIOChannel feature SEEKABLE which will be used by the
qemu_file* APIs. For the time being this will be only implemented for
file channels.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2023-07-17 16:17:54 -03:00
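
One plausible way to detect seekability, shown here with a plain fd
rather than QEMU's channel objects:

#include <stdbool.h>
#include <unistd.h>

/* an fd is seekable if lseek() succeeds; pipes/sockets fail (ESPIPE) */
static bool fd_is_seekable(int fd)
{
    return lseek(fd, 0, SEEK_CUR) != (off_t)-1;
}
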
Fabiano Rosas
77cc914c08 migration/ram: Merge save_zero_page functions
We don't need to do this in two pieces. One single function makes it
easier to grasp, especially since it removes one indirection in the
return value.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 16:17:54 -03:00
Fabiano Rosas
59b605c347 migration/ram: Move xbzrle zero page handling into save_zero_page
It makes a bit more sense to have the zero page handling of xbzrle
right where we save the zero page.

This also makes save_zero_page() follow the same format as
save_compress_page() at the top level of ram_save_target_page_legacy().

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 16:17:54 -03:00
Fabiano Rosas
e6486909e6 migration/ram: Stop passing file around in save_zero_page
We don't need to pass the file around when we're already passing the
PageSearchStatus.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 16:17:54 -03:00
Fabiano Rosas
88c150d3dc migration/ram: Remove RAMState from xbzrle_cache_zero_page
'rs' is not used in that function. Commit 9360447d34 ("ram: Use
MigrationStats for statistics") forgot to remove it.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 16:17:54 -03:00
Fabiano Rosas
3f55d1b72b tests/qtest: File migration auto-pause tests
Adapt the file migration tests to take into account the auto-pause
feature.

The test currently has a flag 'stop_src' that is used to know if the
test itself should stop the VM. Add a new flag 'auto_pause' to enable
QEMU to stop the VM instead. The two in combination allow us to
migrate an already stopped VM and check that it is still stopped on the
destination (auto-pause in effect restoring the original state).

By adding a more precise tracking of migration state changes, we can
also make sure that auto-pause is actually stopping the VM right after
qmp_migrate(), as opposed to the vm_stop() that happens at
migration_complete().

When resuming the destination a similar situation occurs: we use
'stop_src' to have a stopped VM and check that the destination does
not get a "resume" event.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 14:55:58 -03:00
Fabiano Rosas
94eae7214c migration: Run "file:" migration with a stopped VM
The file migration is asynchronous, so it benefits from being done
with a stopped VM. Allow the file migration to take advantage of the
auto-pause capability.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 14:13:28 -03:00
Fabiano Rosas
765c376dec migration: Add auto-pause capability
Add a capability that allows the management layer to delegate to QEMU
the decision of whether to pause a VM and perform a non-live
migration. Depending on the type of migration being performed, this
could bring performance benefits.

Note that the capability is enabled by default but at this moment no
migration scheme is making use of it.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 14:13:28 -03:00
Fabiano Rosas
91dd5ffd0e migration: Introduce global_state_store_once
There are some situations during migration when we want to change the
runstate of the VM, but don't actually want the new runstate to be put
on the wire to be restored on the destination VM. In those cases, the
pattern is to use global_state_store() to save the state for migration
before changing it.

One scenario where this happens is when switching the source VM into
the FINISH_MIGRATE state. This state only makes sense on the source
VM. Another situation is when pausing the source VM prior to migration
completion.

We are about to introduce a third scenario when the whole migration
should be performed with a paused VM. In this case we will want to
save the VM runstate at the very start of the migration and that state
will be the one restored on the destination regardless of all the
runstate changes that happen in between.

To achieve that we need to make sure that the other two calls to
global_state_store() do not overwrite the state that is to be
migrated.

Introduce a version of global_state_store() that only saves the state
if no other state has already been saved.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 14:13:28 -03:00
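
The store-once semantics can be sketched with a simple guard flag
(illustrative names, not the actual implementation):

#include <stdbool.h>

static int saved_runstate;
static bool runstate_saved;

/* the first caller records the state; later calls are no-ops */
static void global_state_store_once_sketch(int current_runstate)
{
    if (!runstate_saved) {
        saved_runstate = current_runstate;
        runstate_saved = true;
    }
}
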
Fabiano Rosas
7c916bfb97 migration: Return the saved state from global_state_store
There is a pattern of calling runstate_get() to store the current
runstate and calling global_state_store() to save the current runstate
for migration. Since global_state_store() also calls runstate_get(),
make it return the runstate instead.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 14:13:28 -03:00
Fabiano Rosas
81132f4860 tests/qtest: Allow waiting for migration events
Add support for waiting for a migration state change event to
happen. This can help disambiguate between runstate changes that
happen during the VM lifecycle.

Specifically, the next couple of patches want to know whether STOP
events happened at the migration start or end. Add the "setup" and
"active" migration states for that purpose.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 14:13:28 -03:00
Fabiano Rosas
c8c612c037 tests/qtest: Move QTestMigrationState to libqtest
Move the QTestMigrationState into QTestState so we don't have to pass
it around to the wait_for_* helpers anymore. Since QTestState is
private to libqtest.c, move the migration state struct to libqtest.h
and add a getter.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 14:13:28 -03:00
Fabiano Rosas
16a9d2df09 fixup! tests/qtest: migration events
I cherry-picked the previous patch from:

(1) [PATCH V2 00/10] fix migration of suspended runstate
https://lore.kernel.org/r/1688132988-314397-1-git-send-email-steven.sistare@oracle.com

and rebased it on top of the file migration series:

(2) [PATCH v5 0/6] migration: Test the new "file:" migration
https://lore.kernel.org/r/20230712190742.22294-1-farosas@suse.de

I expect [2] to be merged first, in which case this patch would be
needed as a fixup to the migration events patch.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-17 14:07:31 -03:00
Steve Sistare
8c8e7528fc tests/qtest: migration events
Define a state object to capture events seen by migration tests, to allow
more events to be captured in a subsequent patch, and simplify event
checking in wait_for_migration_pass.  No functional change.

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
2023-07-14 11:43:43 -03:00
Fabiano Rosas
5f50ddd97f tests/qtest: migration-test: Add tests for file-based migration
Add basic tests for file-based migration.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-12 12:47:15 -03:00
Fabiano Rosas
861faa529f tests/qtest: migration: Add support for negative testing of qmp_migrate
There is currently no way to write a test for errors that happened in
qmp_migrate before the migration has started.

Add a version of qmp_migrate that ensures an error happens. To make
use of it, a test needs to set MigrateCommon.result to
MIG_TEST_QMP_ERROR.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-12 12:47:15 -03:00
Fabiano Rosas
76f115a049 migration: Set migration status early in incoming side
We are sending a migration event of MIGRATION_STATUS_SETUP at
qemu_start_incoming_migration but never actually setting the state.

This creates a window between qmp_migrate_incoming and
process_incoming_migration_co where the migration status is still
MIGRATION_STATUS_NONE. Calling query-migrate during this time will
return an empty response even though the incoming migration command
has already been issued.

Commit 7cf1fe6d68 ("migration: Add migration events on target side")
has added support to the 'events' capability to the incoming part of
migration, but chose to send the SETUP event without setting the
state. I'm assuming this was a mistake.

This introduces a change in behavior: any QMP client waiting for the
SETUP event will hang, unless it has previously enabled the 'events'
capability. Having the capability enabled is sufficient to continue to
receive the event.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-12 12:47:15 -03:00
Fabiano Rosas
c08ae63dcf tests/qtest: migration: Use migrate_incoming_qmp where appropriate
Use the new migrate_incoming_qmp helper in the places that currently
open-code calling migrate-incoming.

Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-12 12:47:15 -03:00
Fabiano Rosas
32ba53c624 tests/qtest: migration: Add migrate_incoming_qmp helper
File-based migration requires the target to initiate its migration after
the source has finished writing out the data in the file. Currently
there's no easy way to initiate 'migrate-incoming', so allow this by
introducing the migrate_incoming_qmp helper, similar to migrate_qmp.

Also make sure migration events are enabled and wait for the incoming
migration to start before returning. This avoids a race when querying
the migration status too soon after issuing the command.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-12 12:47:15 -03:00
Fabiano Rosas
50d3442fd8 tests/qtest: migration: Expose migrate_set_capability
The following patch will make use of this function from within
migrate-helpers.c, so move it there.

Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-12 12:47:15 -03:00
Steve Sistare
26e2dcafe2 migration: file URI offset
Allow an offset option to be specified as part of the file URI, in
the form "file:filename,offset=offset", where offset accepts the common
size suffixes, or the 0x prefix, but not both.  Migration data is written
to and read from the file starting at offset.  If unspecified, it defaults
to 0.

This is needed by libvirt to store its own data at the head of the file.

Suggested-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
2023-07-12 12:47:15 -03:00
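
A hypothetical parser for that rule; QEMU's real code uses its own
size-parsing helpers:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* accept "0x..." hex, or a decimal with one size suffix, never both */
static int parse_file_offset(const char *s, uint64_t *out)
{
    char *end;

    if (strncmp(s, "0x", 2) == 0) {
        *out = strtoull(s, &end, 16);
        return *end ? -1 : 0;       /* no suffix allowed after 0x */
    }
    *out = strtoull(s, &end, 10);
    switch (*end) {
    case 'K': *out <<= 10; end++; break;
    case 'M': *out <<= 20; end++; break;
    case 'G': *out <<= 30; end++; break;
    case 'T': *out <<= 40; end++; break;
    }
    return *end ? -1 : 0;
}
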
Steve Sistare
17ddf9a2c3 migration: file URI
Extend the migration URI to support file:<filename>.  This can be used for
any migration scenario that does not require a reverse path.  It can be
used as an alternative to 'exec:cat > file' in minimized containers that
do not contain /bin/sh, and it is easier to use than the fd:<fdname> URI.
It can be used in HMP commands, and as a qemu command-line parameter.

For best performance, guest ram should be shared and x-ignore-shared
should be true, so guest pages are not written to the file, in which case
the guest may remain running.  If ram is not so configured, then the user
is advised to stop the guest first.  Otherwise, a busy guest may re-dirty
the same page, causing it to be appended to the file multiple times,
and the file may grow unboundedly.  That issue is being addressed in the
"fixed-ram" patch series.

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Peter Xu <peterx@redhat.com>
2023-07-12 12:47:15 -03:00
Fabiano Rosas
28fe324c39 tests/qtest: Re-enable multifd cancel test
We've found the source of flakiness in this test, so re-enable it.

Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-12 12:47:15 -03:00
Fabiano Rosas
adc173733c tests/qtest: Fix typo in multifd cancel test
This wasn't noticed because the test is currently disabled.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Fixes: 02f56e3de ("tests/qtest: massively speed up migration-test")
2023-07-12 12:47:15 -03:00
Fabiano Rosas
d2ade85d10 migration/multifd: Protect accesses to migration_threads
This doubly linked list is common to all the multifd and migration
threads, so we need to avoid concurrent access.

Add a mutex to protect the data from concurrent access. This fixes a
crash when removing two MigrationThread objects from the list at the
same time during cleanup of multifd threads.

Fixes: 671326201d ("migration: Introduce interface query-migrationthreads")
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-07-12 11:01:00 -03:00
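
A reduction of the fix to plain pthreads (the actual code uses QEMU's
mutex and list primitives): every add/remove on the shared list now
happens under one lock, so two threads cleaning up at the same time
can no longer corrupt the links.

#include <pthread.h>

static pthread_mutex_t threads_lock = PTHREAD_MUTEX_INITIALIZER;

struct thread_node { struct thread_node *next; };
static struct thread_node *threads_head;

static void thread_list_remove(struct thread_node *n)
{
    pthread_mutex_lock(&threads_lock);
    for (struct thread_node **p = &threads_head; *p; p = &(*p)->next) {
        if (*p == n) {
            *p = n->next;
            break;
        }
    }
    pthread_mutex_unlock(&threads_lock);
}
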
Fabiano Rosas
89c68f88af migration/multifd: Rename threadinfo.c functions
We're about to add more functions to this file, so make it use the same
coding style as the rest of the code.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-07-12 11:00:59 -03:00
3856 changed files with 95887 additions and 195441 deletions


@@ -41,10 +41,6 @@ variables:
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_TAG'
when: never
# Scheduled runs on mainline don't get pipelines except for the special Coverity job
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_PIPELINE_SOURCE == "schedule"'
when: never
# Cirrus jobs can't run unless the creds / target repo are set
- if: '$QEMU_JOB_CIRRUS && ($CIRRUS_GITHUB_REPO == null || $CIRRUS_API_TOKEN == null)'
when: never
@@ -72,7 +68,7 @@ variables:
#############################################################
# Stage 2: fine tune execution of jobs in specific scenarios
# where the catch all logic is inappropriate
# where the catch all logic is inapprorpaite
#############################################################
# Optional jobs should not be run unless manually triggered


@@ -2,21 +2,11 @@
extends: .base_job_template
stage: build
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
cache:
paths:
- ccache
key: "$CI_JOB_NAME"
when: always
before_script:
- JOBS=$(expr $(nproc) + 1)
script:
- export CCACHE_BASEDIR="$(pwd)"
- export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
- export CCACHE_MAXSIZE="500M"
- export PATH="$CCACHE_WRAPPERSDIR:$PATH"
- mkdir build
- cd build
- ccache --zero-stats
- ../configure --enable-werror --disable-docs --enable-fdt=system
${TARGETS:+--target-list="$TARGETS"}
$CONFIGURE_ARGS ||
@@ -30,7 +20,6 @@
then
make -j"$JOBS" $MAKE_CHECK_ARGS ;
fi
- ccache --show-stats
# We jump some hoops in common_test_job_template to avoid
# rebuilding all the object files we skip in the artifacts


@@ -30,7 +30,6 @@ avocado-system-alpine:
variables:
IMAGE: alpine
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:avr arch:loongarch64 arch:mips64 arch:mipsel
build-system-ubuntu:
extends:
@@ -41,7 +40,8 @@ build-system-ubuntu:
variables:
IMAGE: ubuntu2204
CONFIGURE_ARGS: --enable-docs
TARGETS: alpha-softmmu microblazeel-softmmu mips64el-softmmu
TARGETS: alpha-softmmu cris-softmmu hppa-softmmu
microblazeel-softmmu mips64el-softmmu
MAKE_CHECK_ARGS: check-build
check-system-ubuntu:
@@ -61,7 +61,6 @@ avocado-system-ubuntu:
variables:
IMAGE: ubuntu2204
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:alpha arch:microblazeel arch:mips64el
build-system-debian:
extends:
@@ -70,10 +69,10 @@ build-system-debian:
needs:
job: amd64-debian-container
variables:
IMAGE: debian
IMAGE: debian-amd64
CONFIGURE_ARGS: --with-coroutine=sigaltstack
TARGETS: arm-softmmu i386-softmmu riscv64-softmmu sh4eb-softmmu
sparc-softmmu xtensa-softmmu
sparc-softmmu xtensaeb-softmmu
MAKE_CHECK_ARGS: check-build
check-system-debian:
@@ -82,7 +81,7 @@ check-system-debian:
- job: build-system-debian
artifacts: true
variables:
IMAGE: debian
IMAGE: debian-amd64
MAKE_CHECK_ARGS: check
avocado-system-debian:
@@ -91,9 +90,8 @@ avocado-system-debian:
- job: build-system-debian
artifacts: true
variables:
IMAGE: debian
IMAGE: debian-amd64
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:arm arch:i386 arch:riscv64 arch:sh4 arch:sparc arch:xtensa
crash-test-debian:
extends: .native_test_job_template
@@ -101,11 +99,11 @@ crash-test-debian:
- job: build-system-debian
artifacts: true
variables:
IMAGE: debian
IMAGE: debian-amd64
script:
- cd build
- make NINJA=":" check-venv
- pyvenv/bin/python3 scripts/device-crash-test -q --tcg-only ./qemu-system-i386
- tests/venv/bin/python3 scripts/device-crash-test -q --tcg-only ./qemu-system-i386
build-system-fedora:
extends:
@@ -116,7 +114,7 @@ build-system-fedora:
variables:
IMAGE: fedora
CONFIGURE_ARGS: --disable-gcrypt --enable-nettle --enable-docs
TARGETS: microblaze-softmmu mips-softmmu
TARGETS: tricore-softmmu microblaze-softmmu mips-softmmu
xtensa-softmmu m68k-softmmu riscv32-softmmu ppc-softmmu sparc64-softmmu
MAKE_CHECK_ARGS: check-build
@@ -137,8 +135,6 @@ avocado-system-fedora:
variables:
IMAGE: fedora
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:microblaze arch:mips arch:xtensa arch:m68k
arch:riscv32 arch:ppc arch:sparc64
crash-test-fedora:
extends: .native_test_job_template
@@ -150,8 +146,8 @@ crash-test-fedora:
script:
- cd build
- make NINJA=":" check-venv
- pyvenv/bin/python3 scripts/device-crash-test -q ./qemu-system-ppc
- pyvenv/bin/python3 scripts/device-crash-test -q ./qemu-system-riscv32
- tests/venv/bin/python3 scripts/device-crash-test -q ./qemu-system-ppc
- tests/venv/bin/python3 scripts/device-crash-test -q ./qemu-system-riscv32
build-system-centos:
extends:
@@ -167,73 +163,6 @@ build-system-centos:
x86_64-softmmu rx-softmmu sh4-softmmu nios2-softmmu
MAKE_CHECK_ARGS: check-build
# Previous QEMU release. Used for cross-version migration tests.
build-previous-qemu:
extends: .native_build_job_template
artifacts:
when: on_success
expire_in: 2 days
paths:
- build-previous
exclude:
- build-previous/**/*.p
- build-previous/**/*.a.p
- build-previous/**/*.fa.p
- build-previous/**/*.c.o
- build-previous/**/*.c.o.d
- build-previous/**/*.fa
needs:
job: amd64-opensuse-leap-container
variables:
IMAGE: opensuse-leap
TARGETS: x86_64-softmmu aarch64-softmmu
before_script:
- export QEMU_PREV_VERSION="$(sed 's/\([0-9.]*\)\.[0-9]*/v\1.0/' VERSION)"
- git remote add upstream https://gitlab.com/qemu-project/qemu
- git fetch upstream refs/tags/$QEMU_PREV_VERSION:refs/tags/$QEMU_PREV_VERSION
- git checkout $QEMU_PREV_VERSION
after_script:
- mv build build-previous
.migration-compat-common:
extends: .common_test_job_template
needs:
- job: build-previous-qemu
- job: build-system-opensuse
# The old QEMU could have bugs unrelated to migration that are
# already fixed in the current development branch, so this test
# might fail.
allow_failure: true
variables:
IMAGE: opensuse-leap
MAKE_CHECK_ARGS: check-build
script:
# Use the migration-tests from the older QEMU tree. This avoids
# testing an old QEMU against new features/tests that it is not
# compatible with.
- cd build-previous
# old to new
- QTEST_QEMU_BINARY_SRC=./qemu-system-${TARGET}
QTEST_QEMU_BINARY=../build/qemu-system-${TARGET} ./tests/qtest/migration-test
# new to old
- QTEST_QEMU_BINARY_DST=./qemu-system-${TARGET}
QTEST_QEMU_BINARY=../build/qemu-system-${TARGET} ./tests/qtest/migration-test
# This job needs to be disabled until we can have an aarch64 CPU model that
# will both (1) support both KVM and TCG, and (2) provide a stable ABI.
# Currently only "-cpu max" can provide (1), however it doesn't guarantee
# (2). Mark this test skipped until later.
migration-compat-aarch64:
extends: .migration-compat-common
variables:
TARGET: aarch64
QEMU_JOB_SKIPPED: 1
migration-compat-x86_64:
extends: .migration-compat-common
variables:
TARGET: x86_64
check-system-centos:
extends: .native_test_job_template
needs:
@@ -251,8 +180,6 @@ avocado-system-centos:
variables:
IMAGE: centos8
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:ppc64 arch:or1k arch:s390x arch:x86_64 arch:rx
arch:sh4 arch:nios2
build-system-opensuse:
extends:
@@ -282,38 +209,7 @@ avocado-system-opensuse:
variables:
IMAGE: opensuse-leap
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:s390x arch:x86_64 arch:aarch64
#
# Flaky tests. We don't run these by default and they are allow fail
# but often the CI system is the only way to trigger the failures.
#
build-system-flaky:
extends:
- .native_build_job_template
- .native_build_artifact_template
needs:
job: amd64-debian-container
variables:
IMAGE: debian
QEMU_JOB_OPTIONAL: 1
TARGETS: aarch64-softmmu arm-softmmu mips64el-softmmu
ppc64-softmmu rx-softmmu s390x-softmmu sh4-softmmu x86_64-softmmu
MAKE_CHECK_ARGS: check-build
avocado-system-flaky:
extends: .avocado_test_job_template
needs:
- job: build-system-flaky
artifacts: true
allow_failure: true
variables:
IMAGE: debian
MAKE_CHECK_ARGS: check-avocado
QEMU_JOB_OPTIONAL: 1
QEMU_TEST_FLAKY_TESTS: 1
AVOCADO_TAGS: flaky
# This jobs explicitly disable TCG (--disable-tcg), KVM is detected by
# the configure script. The container doesn't contain Xen headers so
@@ -353,7 +249,6 @@ build-user:
variables:
IMAGE: debian-all-test-cross
CONFIGURE_ARGS: --disable-tools --disable-system
--target-list-exclude=alpha-linux-user,sh4-linux-user
MAKE_CHECK_ARGS: check-tcg
build-user-static:
@@ -363,18 +258,6 @@ build-user-static:
variables:
IMAGE: debian-all-test-cross
CONFIGURE_ARGS: --disable-tools --disable-system --static
--target-list-exclude=alpha-linux-user,sh4-linux-user
MAKE_CHECK_ARGS: check-tcg
# targets stuck on older compilers
build-legacy:
extends: .native_build_job_template
needs:
job: amd64-debian-legacy-cross-container
variables:
IMAGE: debian-legacy-test-cross
TARGETS: alpha-linux-user alpha-softmmu sh4-linux-user
CONFIGURE_ARGS: --disable-tools
MAKE_CHECK_ARGS: check-tcg
build-user-hexagon:
@@ -387,9 +270,7 @@ build-user-hexagon:
CONFIGURE_ARGS: --disable-tools --disable-docs --enable-debug-tcg
MAKE_CHECK_ARGS: check-tcg
# Build the softmmu targets we have check-tcg tests and compilers in
# our omnibus all-test-cross container. Those targets that haven't got
# Debian cross compiler support need to use special containers.
# Only build the softmmu targets we have check-tcg tests for
build-some-softmmu:
extends: .native_build_job_template
needs:
@@ -397,18 +278,7 @@ build-some-softmmu:
variables:
IMAGE: debian-all-test-cross
CONFIGURE_ARGS: --disable-tools --enable-debug
TARGETS: arm-softmmu aarch64-softmmu i386-softmmu riscv64-softmmu
s390x-softmmu x86_64-softmmu
MAKE_CHECK_ARGS: check-tcg
build-loongarch64:
extends: .native_build_job_template
needs:
job: loongarch-debian-cross-container
variables:
IMAGE: debian-loongarch-cross
CONFIGURE_ARGS: --disable-tools --enable-debug
TARGETS: loongarch64-linux-user loongarch64-softmmu
TARGETS: xtensa-softmmu arm-softmmu aarch64-softmmu alpha-softmmu
MAKE_CHECK_ARGS: check-tcg
# We build tricore in a very minimal tricore only container
@@ -441,7 +311,7 @@ clang-user:
variables:
IMAGE: debian-all-test-cross
CONFIGURE_ARGS: --cc=clang --cxx=clang++ --disable-system
--target-list-exclude=alpha-linux-user,microblazeel-linux-user,aarch64_be-linux-user,i386-linux-user,m68k-linux-user,mipsn32el-linux-user,xtensaeb-linux-user
--target-list-exclude=microblazeel-linux-user,aarch64_be-linux-user,i386-linux-user,m68k-linux-user,mipsn32el-linux-user,xtensaeb-linux-user
--extra-cflags=-fsanitize=undefined --extra-cflags=-fno-sanitize-recover=undefined
MAKE_CHECK_ARGS: check-unit check-tcg
@@ -628,7 +498,7 @@ build-tci:
variables:
IMAGE: debian-all-test-cross
script:
- TARGETS="aarch64 arm hppa m68k microblaze ppc64 s390x x86_64"
- TARGETS="aarch64 alpha arm hppa m68k microblaze ppc64 s390x x86_64"
- mkdir build
- cd build
- ../configure --enable-tcg-interpreter --disable-docs --disable-gtk --disable-vnc
@@ -659,7 +529,7 @@ build-without-defaults:
--disable-pie
--disable-qom-cast-debug
--disable-strip
TARGETS: avr-softmmu s390x-softmmu sh4-softmmu
TARGETS: avr-softmmu mips64-softmmu s390x-softmmu sh4-softmmu
sparc64-softmmu hexagon-linux-user i386-linux-user s390x-linux-user
MAKE_CHECK_ARGS: check
@@ -686,7 +556,7 @@ build-tools-and-docs-debian:
# when running on 'master' we use pre-existing container
optional: true
variables:
IMAGE: debian
IMAGE: debian-amd64
MAKE_CHECK_ARGS: check-unit ctags TAGS cscope
CONFIGURE_ARGS: --disable-system --disable-user --enable-docs --enable-tools
QEMU_JOB_PUBLISH: 1
@@ -706,7 +576,7 @@ build-tools-and-docs-debian:
# of what topic branch they're currently using
pages:
extends: .base_job_template
image: $CI_REGISTRY_IMAGE/qemu/debian:$QEMU_CI_CONTAINER_TAG
image: $CI_REGISTRY_IMAGE/qemu/debian-amd64:$QEMU_CI_CONTAINER_TAG
stage: test
needs:
- job: build-tools-and-docs-debian
@@ -714,10 +584,7 @@ pages:
- mkdir -p public
# HTML-ised source tree
- make gtags
# We unset variables to work around a bug in some htags versions
# which causes it to fail when the environment is large
- CI_COMMIT_MESSAGE= CI_COMMIT_TAG_MESSAGE= htags
-anT --tree-view=filetree -m qemu_init
- htags -anT --tree-view=filetree -m qemu_init
-t "Welcome to the QEMU sourcecode"
- mv HTML public/src
# Project documentation
@@ -729,40 +596,3 @@ pages:
- public
variables:
QEMU_JOB_PUBLISH: 1
coverity:
image: $CI_REGISTRY_IMAGE/qemu/fedora:$QEMU_CI_CONTAINER_TAG
stage: build
allow_failure: true
timeout: 3h
needs:
- job: amd64-fedora-container
optional: true
before_script:
- dnf install -y curl wget
script:
# would be nice to cancel the job if over quota (https://gitlab.com/gitlab-org/gitlab/-/issues/256089)
# for example:
# curl --request POST --header "PRIVATE-TOKEN: $CI_JOB_TOKEN" "${CI_SERVER_URL}/api/v4/projects/${CI_PROJECT_ID}/jobs/${CI_JOB_ID}/cancel
- 'scripts/coverity-scan/run-coverity-scan --check-upload-only || { exitcode=$?; if test $exitcode = 1; then
exit 0;
else
exit $exitcode;
fi; };
scripts/coverity-scan/run-coverity-scan --update-tools-only > update-tools.log 2>&1 || { cat update-tools.log; exit 1; };
scripts/coverity-scan/run-coverity-scan --no-update-tools'
rules:
- if: '$COVERITY_TOKEN == null'
when: never
- if: '$COVERITY_EMAIL == null'
when: never
# Never included on upstream pipelines, except for schedules
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_PIPELINE_SOURCE == "schedule"'
when: on_success
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM'
when: never
# Forks don't get any pipeline unless QEMU_CI=1 or QEMU_CI=2 is set
- if: '$QEMU_CI != "1" && $QEMU_CI != "2"'
when: never
# Always manual on forks even if $QEMU_CI == "2"
- when: manual


@@ -15,10 +15,8 @@
stage: build
image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:master
needs: []
# 20 mins larger than "timeout_in" in cirrus/build.yml
# as there's often a 5-10 minute delay before Cirrus CI
# actually starts the task
timeout: 80m
allow_failure: true
script:
- source .gitlab-ci.d/cirrus/$NAME.vars
- sed -e "s|[@]CI_REPOSITORY_URL@|$CI_REPOSITORY_URL|g"
@@ -52,20 +50,20 @@ x64-freebsd-13-build:
NAME: freebsd-13
CIRRUS_VM_INSTANCE_TYPE: freebsd_instance
CIRRUS_VM_IMAGE_SELECTOR: image_family
CIRRUS_VM_IMAGE_NAME: freebsd-13-3
CIRRUS_VM_IMAGE_NAME: freebsd-13-1
CIRRUS_VM_CPUS: 8
CIRRUS_VM_RAM: 8G
UPDATE_COMMAND: pkg update; pkg upgrade -y
INSTALL_COMMAND: pkg install -y
TEST_TARGETS: check
aarch64-macos-13-base-build:
aarch64-macos-12-base-build:
extends: .cirrus_build_job
variables:
NAME: macos-13
NAME: macos-12
CIRRUS_VM_INSTANCE_TYPE: macos_instance
CIRRUS_VM_IMAGE_SELECTOR: image
CIRRUS_VM_IMAGE_NAME: ghcr.io/cirruslabs/macos-ventura-base:latest
CIRRUS_VM_IMAGE_NAME: ghcr.io/cirruslabs/macos-monterey-base:latest
CIRRUS_VM_CPUS: 12
CIRRUS_VM_RAM: 24G
UPDATE_COMMAND: brew update
@@ -74,22 +72,6 @@ aarch64-macos-13-base-build:
PKG_CONFIG_PATH: /opt/homebrew/curl/lib/pkgconfig:/opt/homebrew/ncurses/lib/pkgconfig:/opt/homebrew/readline/lib/pkgconfig
TEST_TARGETS: check-unit check-block check-qapi-schema check-softfloat check-qtest-x86_64
aarch64-macos-14-base-build:
extends: .cirrus_build_job
variables:
NAME: macos-14
CIRRUS_VM_INSTANCE_TYPE: macos_instance
CIRRUS_VM_IMAGE_SELECTOR: image
CIRRUS_VM_IMAGE_NAME: ghcr.io/cirruslabs/macos-sonoma-base:latest
CIRRUS_VM_CPUS: 12
CIRRUS_VM_RAM: 24G
UPDATE_COMMAND: brew update
INSTALL_COMMAND: brew install
PATH_EXTRA: /opt/homebrew/ccache/libexec:/opt/homebrew/gettext/bin
PKG_CONFIG_PATH: /opt/homebrew/curl/lib/pkgconfig:/opt/homebrew/ncurses/lib/pkgconfig:/opt/homebrew/readline/lib/pkgconfig
TEST_TARGETS: check-unit check-block check-qapi-schema check-softfloat check-qtest-x86_64
QEMU_JOB_OPTIONAL: 1
# The following jobs run VM-based tests via KVM on a Linux-based Cirrus-CI job
.cirrus_kvm_job:

View File

@@ -16,12 +16,10 @@ env:
TEST_TARGETS: "@TEST_TARGETS@"
build_task:
# A little shorter than GitLab timeout in ../cirrus.yml
timeout_in: 60m
install_script:
- @UPDATE_COMMAND@
- @INSTALL_COMMAND@ @PKGS@
- if test -n "@PYPI_PKGS@" ; then PYLIB=$(@PYTHON@ -c 'import sysconfig; print(sysconfig.get_path("stdlib"))'); rm -f $PYLIB/EXTERNALLY-MANAGED; @PIP3@ install @PYPI_PKGS@ ; fi
- if test -n "@PYPI_PKGS@" ; then @PIP3@ install @PYPI_PKGS@ ; fi
clone_script:
- git clone --depth 100 "$CI_REPOSITORY_URL" .
- git fetch origin "$CI_COMMIT_REF_NAME"


@@ -11,6 +11,6 @@ MAKE='/usr/local/bin/gmake'
NINJA='/usr/local/bin/ninja'
PACKAGING_COMMAND='pkg'
PIP3='/usr/local/bin/pip-3.8'
PKGS='alsa-lib bash bison bzip2 ca_root_nss capstone4 ccache cmocka ctags curl cyrus-sasl dbus diffutils dtc flex fusefs-libs3 gettext git glib gmake gnutls gsed gtk3 json-c libepoxy libffi libgcrypt libjpeg-turbo libnfs libslirp libspice-server libssh libtasn1 llvm lzo2 meson mtools ncurses nettle ninja opencv pixman pkgconf png py39-numpy py39-pillow py39-pip py39-sphinx py39-sphinx_rtd_theme py39-tomli py39-yaml python3 rpm2cpio sdl2 sdl2_image snappy sndio socat spice-protocol tesseract usbredir virglrenderer vte3 xorriso zstd'
PKGS='alsa-lib bash bison bzip2 ca_root_nss capstone4 ccache cmocka ctags curl cyrus-sasl dbus diffutils dtc flex fusefs-libs3 gettext git glib gmake gnutls gsed gtk3 json-c libepoxy libffi libgcrypt libjpeg-turbo libnfs libslirp libspice-server libssh libtasn1 llvm lzo2 meson mtools ncurses nettle ninja opencv pixman pkgconf png py39-numpy py39-pillow py39-pip py39-sphinx py39-sphinx_rtd_theme py39-yaml python3 rpm2cpio sdl2 sdl2_image snappy sndio socat spice-protocol tesseract usbredir virglrenderer vte3 xorriso zstd'
PYPI_PKGS=''
PYTHON='/usr/local/bin/python3'


@@ -15,7 +15,7 @@ env:
folder: $HOME/.cache/qemu-vm
install_script:
- dnf update -y
- dnf install -y git make openssh-clients qemu-img qemu-system-x86 wget meson
- dnf install -y git make openssh-clients qemu-img qemu-system-x86 wget
clone_script:
- git clone --depth 100 "$CI_REPOSITORY_URL" .
- git fetch origin "$CI_COMMIT_REF_NAME"


@@ -1,6 +1,6 @@
# THIS FILE WAS AUTO-GENERATED
#
# $ lcitool variables macos-13 qemu
# $ lcitool variables macos-12 qemu
#
# https://gitlab.com/libvirt/libvirt-ci
@@ -11,6 +11,6 @@ MAKE='/opt/homebrew/bin/gmake'
NINJA='/opt/homebrew/bin/ninja'
PACKAGING_COMMAND='brew'
PIP3='/opt/homebrew/bin/pip3'
PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy socat sparse spice-protocol swtpm tesseract usbredir vde vte3 xorriso zlib zstd'
PYPI_PKGS='PyYAML numpy pillow sphinx sphinx-rtd-theme tomli'
PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy socat sparse spice-protocol tesseract usbredir vde vte3 xorriso zlib zstd'
PYPI_PKGS='PyYAML numpy pillow sphinx sphinx-rtd-theme'
PYTHON='/opt/homebrew/bin/python3'


@@ -1,16 +0,0 @@
# THIS FILE WAS AUTO-GENERATED
#
# $ lcitool variables macos-14 qemu
#
# https://gitlab.com/libvirt/libvirt-ci
CCACHE='/opt/homebrew/bin/ccache'
CPAN_PKGS=''
CROSS_PKGS=''
MAKE='/opt/homebrew/bin/gmake'
NINJA='/opt/homebrew/bin/ninja'
PACKAGING_COMMAND='brew'
PIP3='/opt/homebrew/bin/pip3'
PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy socat sparse spice-protocol swtpm tesseract usbredir vde vte3 xorriso zlib zstd'
PYPI_PKGS='PyYAML numpy pillow sphinx sphinx-rtd-theme tomli'
PYTHON='/opt/homebrew/bin/python3'


@@ -1,3 +1,9 @@
alpha-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-alpha-cross
amd64-debian-cross-container:
extends: .container_job_template
stage: containers
@@ -10,12 +16,6 @@ amd64-debian-user-cross-container:
variables:
NAME: debian-all-test-cross
amd64-debian-legacy-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-legacy-test-cross
arm64-debian-cross-container:
extends: .container_job_template
stage: containers
@@ -40,17 +40,23 @@ hexagon-cross-container:
variables:
NAME: debian-hexagon-cross
loongarch-debian-cross-container:
hppa-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-loongarch-cross
NAME: debian-hppa-cross
i686-debian-cross-container:
m68k-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-i686-cross
NAME: debian-m68k-cross
mips64-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-mips64-cross
mips64el-debian-cross-container:
extends: .container_job_template
@@ -58,12 +64,24 @@ mips64el-debian-cross-container:
variables:
NAME: debian-mips64el-cross
mips-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-mips-cross
mipsel-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-mipsel-cross
powerpc-test-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-powerpc-test-cross
ppc64el-debian-cross-container:
extends: .container_job_template
stage: containers
@@ -77,7 +95,13 @@ riscv64-debian-cross-container:
allow_failure: true
variables:
NAME: debian-riscv64-cross
QEMU_JOB_OPTIONAL: 1
# we can however build TCG tests using a non-sid base
riscv64-debian-test-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-riscv64-test-cross
s390x-debian-cross-container:
extends: .container_job_template
@@ -85,6 +109,18 @@ s390x-debian-cross-container:
variables:
NAME: debian-s390x-cross
sh4-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-sh4-cross
sparc64-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-sparc64-cross
tricore-debian-cross-container:
extends: .container_job_template
stage: containers
@@ -101,6 +137,16 @@ cris-fedora-cross-container:
variables:
NAME: fedora-cris-cross
i386-fedora-cross-container:
extends: .container_job_template
variables:
NAME: fedora-i386-cross
win32-fedora-cross-container:
extends: .container_job_template
variables:
NAME: fedora-win32-cross
win64-fedora-cross-container:
extends: .container_job_template
variables:


@@ -11,7 +11,7 @@ amd64-debian-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian
NAME: debian-amd64
amd64-ubuntu2204-container:
extends: .container_job_template


@@ -2,20 +2,10 @@
extends: .base_job_template
stage: build
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
cache:
paths:
- ccache
key: "$CI_JOB_NAME"
when: always
timeout: 80m
script:
- export CCACHE_BASEDIR="$(pwd)"
- export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
- export CCACHE_MAXSIZE="500M"
- export PATH="$CCACHE_WRAPPERSDIR:$PATH"
- mkdir build
- cd build
- ccache --zero-stats
- ../configure --enable-werror --disable-docs --enable-fdt=system
--disable-user $QEMU_CONFIGURE_OPTS $EXTRA_CONFIGURE_OPTS
--target-list-exclude="arm-softmmu cris-softmmu
@@ -28,7 +18,6 @@
version="$(git describe --match v[0-9]* 2>/dev/null || git rev-parse --short HEAD)";
mv -v qemu-setup*.exe qemu-setup-${version}.exe;
fi
- ccache --show-stats
# Job to cross-build specific accelerators.
#
@@ -40,15 +29,7 @@
stage: build
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
timeout: 30m
cache:
paths:
- ccache/
key: "$CI_JOB_NAME"
script:
- export CCACHE_BASEDIR="$(pwd)"
- export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
- export CCACHE_MAXSIZE="500M"
- export PATH="$CCACHE_WRAPPERSDIR:$PATH"
- mkdir build
- cd build
- ../configure --enable-werror --disable-docs $QEMU_CONFIGURE_OPTS
@@ -59,14 +40,7 @@
extends: .base_job_template
stage: build
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
cache:
paths:
- ccache/
key: "$CI_JOB_NAME"
script:
- export CCACHE_BASEDIR="$(pwd)"
- export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
- export CCACHE_MAXSIZE="500M"
- mkdir build
- cd build
- ../configure --enable-werror --disable-docs $QEMU_CONFIGURE_OPTS


@@ -37,25 +37,25 @@ cross-arm64-kvm-only:
IMAGE: debian-arm64-cross
EXTRA_CONFIGURE_OPTS: --disable-tcg --without-default-features
cross-i686-user:
cross-i386-user:
extends:
- .cross_user_build_job
- .cross_test_artifacts
needs:
job: i686-debian-cross-container
job: i386-fedora-cross-container
variables:
IMAGE: debian-i686-cross
IMAGE: fedora-i386-cross
MAKE_CHECK_ARGS: check
cross-i686-tci:
cross-i386-tci:
extends:
- .cross_accel_build_job
- .cross_test_artifacts
timeout: 60m
needs:
job: i686-debian-cross-container
job: i386-fedora-cross-container
variables:
IMAGE: debian-i686-cross
IMAGE: fedora-i386-cross
ACCEL: tcg-interpreter
EXTRA_CONFIGURE_OPTS: --target-list=i386-softmmu,i386-linux-user,aarch64-softmmu,aarch64-linux-user,ppc-softmmu,ppc-linux-user --disable-plugins
MAKE_CHECK_ARGS: check check-tcg
@@ -159,13 +159,27 @@ cross-mips64el-kvm-only:
IMAGE: debian-mips64el-cross
EXTRA_CONFIGURE_OPTS: --disable-tcg --target-list=mips64el-softmmu
cross-win32-system:
extends: .cross_system_build_job
needs:
job: win32-fedora-cross-container
variables:
IMAGE: fedora-win32-cross
EXTRA_CONFIGURE_OPTS: --enable-fdt=internal
CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu m68k-softmmu
microblazeel-softmmu mips64el-softmmu nios2-softmmu
artifacts:
when: on_success
paths:
- build/qemu-setup*.exe
cross-win64-system:
extends: .cross_system_build_job
needs:
job: win64-fedora-cross-container
variables:
IMAGE: fedora-win64-cross
EXTRA_CONFIGURE_OPTS: --enable-fdt=internal --disable-plugins
EXTRA_CONFIGURE_OPTS: --enable-fdt=internal
CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu
m68k-softmmu microblazeel-softmmu nios2-softmmu
or1k-softmmu rx-softmmu sh4eb-softmmu sparc64-softmmu


@@ -24,10 +24,6 @@
- if: '$QEMU_CI == "1" && $CI_PROJECT_NAMESPACE != "qemu-project" && $CI_COMMIT_MESSAGE =~ /opensbi/i'
when: manual
# Scheduled runs on mainline don't get pipelines except for the special Coverity job
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_PIPELINE_SOURCE == "schedule"'
when: never
# Run if any files affecting the build output are touched
- changes:
- .gitlab-ci.d/opensbi.yml


@@ -1,33 +1,16 @@
msys2-64bit:
.shared_msys2_builder:
extends: .base_job_template
tags:
- shared-windows
- windows
- windows-1809
cache:
key: "$CI_JOB_NAME"
key: "${CI_JOB_NAME}-cache"
paths:
- msys64/var/cache
- ccache
when: always
- ${CI_PROJECT_DIR}/msys64/var/cache
needs: []
stage: build
timeout: 100m
variables:
# Select the "64 bit, gcc and MSVCRT" MSYS2 environment
MSYSTEM: MINGW64
# This feature doesn't (currently) work with PowerShell, it stops
# the echo'ing of commands being run and doesn't show any timing
FF_SCRIPT_SECTIONS: 0
# do not remove "--without-default-devices"!
# commit 9f8e6cad65a6 ("gitlab-ci: Speed up the msys2-64bit job by using --without-default-devices"
# changed to compile QEMU with the --without-default-devices switch
# for this job, because otherwise the build could not complete within
# the project timeout.
CONFIGURE_ARGS: --target-list=x86_64-softmmu --without-default-devices -Ddebug=false -Doptimization=0
# qTests don't run successfully with "--without-default-devices",
# so let's exclude the qtests from CI for now.
TEST_ARGS: --no-suite qtest
timeout: 80m
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
expire_in: 7 days
@@ -36,40 +19,14 @@ msys2-64bit:
reports:
junit: "build/meson-logs/testlog.junit.xml"
before_script:
- Write-Output "Acquiring msys2.exe installer at $(Get-Date -Format u)"
- If ( !(Test-Path -Path msys64\var\cache ) ) {
mkdir msys64\var\cache
}
- Invoke-WebRequest
"https://repo.msys2.org/distrib/msys2-x86_64-latest.sfx.exe.sig"
-outfile "msys2.exe.sig"
- if ( Test-Path -Path msys64\var\cache\msys2.exe.sig ) {
Write-Output "Cached installer sig" ;
if ( ((Get-FileHash msys2.exe.sig).Hash -ne (Get-FileHash msys64\var\cache\msys2.exe.sig).Hash) ) {
Write-Output "Mis-matched installer sig, new installer download required" ;
Remove-Item -Path msys64\var\cache\msys2.exe.sig ;
if ( Test-Path -Path msys64\var\cache\msys2.exe ) {
Remove-Item -Path msys64\var\cache\msys2.exe
}
} else {
Write-Output "Matched installer sig, cached installer still valid"
}
} else {
Write-Output "No cached installer sig, new installer download required" ;
if ( Test-Path -Path msys64\var\cache\msys2.exe ) {
Remove-Item -Path msys64\var\cache\msys2.exe
}
}
- if ( !(Test-Path -Path msys64\var\cache\msys2.exe ) ) {
Write-Output "Fetching latest installer" ;
- If ( !(Test-Path -Path msys64\var\cache\msys2.exe ) ) {
Invoke-WebRequest
"https://repo.msys2.org/distrib/msys2-x86_64-latest.sfx.exe"
-outfile "msys64\var\cache\msys2.exe" ;
Copy-Item -Path msys2.exe.sig -Destination msys64\var\cache\msys2.exe.sig
} else {
Write-Output "Using cached installer"
"https://github.com/msys2/msys2-installer/releases/download/2022-06-03/msys2-base-x86_64-20220603.sfx.exe"
-outfile "msys64\var\cache\msys2.exe"
}
- Write-Output "Invoking msys2.exe installer at $(Get-Date -Format u)"
- msys64\var\cache\msys2.exe -y
- ((Get-Content -path .\msys64\etc\\post-install\\07-pacman-key.post -Raw)
-replace '--refresh-keys', '--version') |
@@ -78,14 +35,14 @@ msys2-64bit:
- .\msys64\usr\bin\bash -lc 'pacman --noconfirm -Syuu' # Core update
- .\msys64\usr\bin\bash -lc 'pacman --noconfirm -Syuu' # Normal update
- taskkill /F /FI "MODULES eq msys-2.0.dll"
msys2-64bit:
extends: .shared_msys2_builder
script:
- Write-Output "Installing mingw packages at $(Get-Date -Format u)"
- .\msys64\usr\bin\bash -lc "pacman -Sy --noconfirm --needed
bison diffutils flex
git grep make sed
mingw-w64-x86_64-binutils
mingw-w64-x86_64-capstone
mingw-w64-x86_64-ccache
mingw-w64-x86_64-curl
mingw-w64-x86_64-cyrus-sasl
mingw-w64-x86_64-dtc
@@ -111,20 +68,64 @@ msys2-64bit:
mingw-w64-x86_64-snappy
mingw-w64-x86_64-spice
mingw-w64-x86_64-usbredir
mingw-w64-x86_64-zstd"
- Write-Output "Running build at $(Get-Date -Format u)"
mingw-w64-x86_64-zstd "
- $env:CHERE_INVOKING = 'yes' # Preserve the current working directory
- $env:MSYSTEM = 'MINGW64' # Start a 64-bit MinGW environment
- $env:MSYS = 'winsymlinks:native' # Enable native Windows symlink
- $env:CCACHE_BASEDIR = "$env:CI_PROJECT_DIR"
- $env:CCACHE_DIR = "$env:CCACHE_BASEDIR/ccache"
- $env:CCACHE_MAXSIZE = "500M"
- $env:CCACHE_DEPEND = 1 # cache misses are too expensive with preprocessor mode
- $env:CC = "ccache gcc"
- mkdir build
- cd build
- ..\msys64\usr\bin\bash -lc "ccache --zero-stats"
- ..\msys64\usr\bin\bash -lc "../configure --enable-fdt=system $CONFIGURE_ARGS"
- ..\msys64\usr\bin\bash -lc "make"
- ..\msys64\usr\bin\bash -lc "make check MTESTARGS='$TEST_ARGS' || { cat meson-logs/testlog.txt; exit 1; } ;"
- ..\msys64\usr\bin\bash -lc "ccache --show-stats"
- Write-Output "Finished build at $(Get-Date -Format u)"
# Note: do not remove "--without-default-devices"!
# commit 9f8e6cad65a6 ("gitlab-ci: Speed up the msys2-64bit job by using --without-default-devices")
# changed to compile QEMU with the --without-default-devices switch
# for the msys2 64-bit job, because the build could not complete within
# the project timeout.
- ..\msys64\usr\bin\bash -lc '../configure --target-list=x86_64-softmmu
--without-default-devices --enable-fdt=system'
- ..\msys64\usr\bin\bash -lc 'make'
# qtests don't run successfully with "--without-default-devices",
# so let's exclude the qtests from CI for now.
- ..\msys64\usr\bin\bash -lc 'make check MTESTARGS=\"--no-suite qtest\" || { cat meson-logs/testlog.txt; exit 1; } ;'
msys2-32bit:
extends: .shared_msys2_builder
script:
- .\msys64\usr\bin\bash -lc "pacman -Sy --noconfirm --needed
bison diffutils flex
git grep make sed
mingw-w64-i686-capstone
mingw-w64-i686-curl
mingw-w64-i686-cyrus-sasl
mingw-w64-i686-dtc
mingw-w64-i686-gcc
mingw-w64-i686-glib2
mingw-w64-i686-gnutls
mingw-w64-i686-gtk3
mingw-w64-i686-libgcrypt
mingw-w64-i686-libjpeg-turbo
mingw-w64-i686-libnfs
mingw-w64-i686-libpng
mingw-w64-i686-libssh
mingw-w64-i686-libtasn1
mingw-w64-i686-libusb
mingw-w64-i686-lzo2
mingw-w64-i686-nettle
mingw-w64-i686-ninja
mingw-w64-i686-pixman
mingw-w64-i686-pkgconf
mingw-w64-i686-python
mingw-w64-i686-SDL2
mingw-w64-i686-SDL2_image
mingw-w64-i686-snappy
mingw-w64-i686-spice
mingw-w64-i686-usbredir
mingw-w64-i686-zstd "
- $env:CHERE_INVOKING = 'yes' # Preserve the current working directory
- $env:MSYSTEM = 'MINGW32' # Start a 32-bit MinGW environment
- $env:MSYS = 'winsymlinks:native' # Enable native Windows symlink
- mkdir build
- cd build
- ..\msys64\usr\bin\bash -lc '../configure --target-list=ppc64-softmmu
--enable-fdt=system'
- ..\msys64\usr\bin\bash -lc 'make'
- ..\msys64\usr\bin\bash -lc 'make check MTESTARGS=\"--no-suite qtest\" ||
{ cat meson-logs/testlog.txt; exit 1; }'

View File

@@ -30,41 +30,22 @@ malc <av1474@comtv.ru> malc <malc@c046a42c-6fe2-441c-8c8c-71466251a162>
# Corrupted Author fields
Aaron Larson <alarson@ddci.com> alarson@ddci.com
Andreas Färber <andreas.faerber@web.de> Andreas Färber <andreas.faerber>
fanwenjie <fanwj@mail.ustc.edu.cn> fanwj@mail.ustc.edu.cn <fanwj@mail.ustc.edu.cn>
Jason Wang <jasowang@redhat.com> Jason Wang <jasowang>
Marek Dolata <mkdolata@us.ibm.com> mkdolata@us.ibm.com <mkdolata@us.ibm.com>
Michael Ellerman <mpe@ellerman.id.au> michael@ozlabs.org <michael@ozlabs.org>
Nick Hudson <hnick@vmware.com> hnick@vmware.com <hnick@vmware.com>
Timothée Cocault <timothee.cocault@gmail.com> timothee.cocault@gmail.com <timothee.cocault@gmail.com>
Stefan Weil <sw@weilnetz.de> <weil@mail.berlios.de>
Stefan Weil <sw@weilnetz.de> Stefan Weil <stefan@kiwi.(none)>
# There is also a:
# (no author) <(no author)@c046a42c-6fe2-441c-8c8c-71466251a162>
# for the cvs2svn initialization commit e63c3dc74bf.
# Next, translate a few commits where mailman rewrote the From: line due
# to strict SPF and DMARC. Usually, our build process should be flagging
# commits like these before maintainer merges; if you find the need to add
# a line here, please also report a bug against the part of the build
# process that let the mis-attribution slip through in the first place.
#
# If the mailing list munges your emails, use:
# git config sendemail.from '"Your Name" <your.email@example.com>'
# the use of "" in that line will differ from the typically unquoted
# 'git config user.name', which in turn is sufficient for 'git send-email'
# to add an extra From: line in the body of your email that takes
# precedence over any munged From: in the mail's headers.
# See https://lists.openembedded.org/g/openembedded-core/message/166515
# and https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg06784.html
# to strict SPF, although we prefer to avoid adding more entries like that.
Ed Swierk <eswierk@skyportsystems.com> Ed Swierk via Qemu-devel <qemu-devel@nongnu.org>
Ian McKellar <ianloic@google.com> Ian McKellar via Qemu-devel <qemu-devel@nongnu.org>
Julia Suvorova <jusual@mail.ru> Julia Suvorova via Qemu-devel <qemu-devel@nongnu.org>
Justin Terry (VM) <juterry@microsoft.com> Justin Terry (VM) via Qemu-devel <qemu-devel@nongnu.org>
Stefan Weil <sw@weilnetz.de> Stefan Weil via <qemu-devel@nongnu.org>
Stefan Weil <sw@weilnetz.de> Stefan Weil via <qemu-trivial@nongnu.org>
Andrey Drobyshev <andrey.drobyshev@virtuozzo.com> Andrey Drobyshev via <qemu-block@nongnu.org>
BALATON Zoltan <balaton@eik.bme.hu> BALATON Zoltan via <qemu-ppc@nongnu.org>
# Next, replace old addresses by a more recent one.
Aleksandar Markovic <aleksandar.qemu.devel@gmail.com> <aleksandar.markovic@mips.com>
@@ -84,12 +65,8 @@ Greg Kurz <groug@kaod.org> <gkurz@linux.vnet.ibm.com>
Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com>
Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
Juan Quintela <quintela@trasno.org> <quintela@redhat.com>
Leif Lindholm <quic_llindhol@quicinc.com> <leif.lindholm@linaro.org>
Leif Lindholm <quic_llindhol@quicinc.com> <leif@nuviainc.com>
Luc Michel <luc@lmichel.fr> <luc.michel@git.antfield.fr>
Luc Michel <luc@lmichel.fr> <luc.michel@greensocs.com>
Luc Michel <luc@lmichel.fr> <lmichel@kalray.eu>
Radoslaw Biernacki <rad@semihalf.com> <radoslaw.biernacki@linaro.org>
Paul Brook <paul@nowt.org> <paul@codesourcery.com>
Paul Burton <paulburton@kernel.org> <paul.burton@mips.com>
@@ -101,7 +78,6 @@ Philippe Mathieu-Daudé <philmd@linaro.org> <philmd@redhat.com>
Philippe Mathieu-Daudé <philmd@linaro.org> <philmd@fungible.com>
Roman Bolshakov <rbolshakov@ddn.com> <r.bolshakov@yadro.com>
Stefan Brankovic <stefan.brankovic@syrmia.com> <stefan.brankovic@rt-rk.com.com>
Stefan Weil <sw@weilnetz.de> Stefan Weil <stefan@weilnetz.de>
Taylor Simpson <ltaylorsimpson@gmail.com> <tsimpson@quicinc.com>
Yongbok Kim <yongbok.kim@mips.com> <yongbok.kim@imgtec.com>

View File

@@ -5,21 +5,16 @@
# Required
version: 2
# Set the version of Python and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "3.11"
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/conf.py
# We recommend specifying your dependencies to enable reproducible builds:
# https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
python:
install:
- requirements: docs/requirements.txt
# We want all the document formats
formats: all
# For consistency, we require that QEMU's Sphinx extensions
# run with at least the same minimum version of Python that
# we require for other Python in our codebase (our conf.py
# enforces this, and some code needs it.)
python:
version: 3.6

View File

@@ -34,7 +34,7 @@ env:
- BASE_CONFIG="--disable-docs --disable-tools"
- TEST_BUILD_CMD=""
- TEST_CMD="make check V=1"
# This is broadly a list of "mainline" system targets which have support across the major distros
# This is broadly a list of "mainline" softmmu targets which have support across the major distros
- MAIN_SOFTMMU_TARGETS="aarch64-softmmu,mips64-softmmu,ppc64-softmmu,riscv64-softmmu,s390x-softmmu,x86_64-softmmu"
- CCACHE_SLOPPINESS="include_file_ctime,include_file_mtime"
- CCACHE_MAXSIZE=1G
@@ -197,7 +197,7 @@ jobs:
$(exit $BUILD_RC);
fi
- name: "[s390x] GCC (other-system)"
- name: "[s390x] GCC (other-softmmu)"
arch: s390x
dist: focal
addons:

View File

@@ -11,9 +11,6 @@ config OPENGL
config X11
bool
config PIXMAN
bool
config SPICE
bool
@@ -49,6 +46,3 @@ config FUZZ
config VFIO_USER_SERVER_ALLOWED
bool
imply VFIO_USER_SERVER
config HV_BALLOON_POSSIBLE
bool

File diff suppressed because it is too large

View File

@@ -164,6 +164,14 @@ ifneq ($(filter $(ninja-targets), $(ninja-cmd-goals)),)
endif
endif
ifeq ($(CONFIG_PLUGIN),y)
.PHONY: plugins
plugins:
$(call quiet-command,\
$(MAKE) $(SUBDIR_MAKEFLAGS) -C contrib/plugins V="$(V)", \
"BUILD", "example plugins")
endif # $(CONFIG_PLUGIN)
else # config-host.mak does not exist
ifneq ($(filter-out $(UNCHECKED_GOALS),$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
$(error Please call configure before running make)
@@ -176,20 +184,15 @@ include $(SRC_PATH)/tests/Makefile.include
all: recurse-all
SUBDIR_RULES=$(foreach t, all clean distclean, $(addsuffix /$(t), $(SUBDIRS)))
.PHONY: $(SUBDIR_RULES)
$(SUBDIR_RULES):
ROMS_RULES=$(foreach t, all clean distclean, $(addsuffix /$(t), $(ROMS)))
.PHONY: $(ROMS_RULES)
$(ROMS_RULES):
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C $(dir $@) V="$(V)" TARGET_DIR="$(dir $@)" $(notdir $@),)
ifneq ($(filter contrib/plugins, $(SUBDIRS)),)
.PHONY: plugins
plugins: contrib/plugins/all
endif
.PHONY: recurse-all recurse-clean
recurse-all: $(addsuffix /all, $(SUBDIRS))
recurse-clean: $(addsuffix /clean, $(SUBDIRS))
recurse-distclean: $(addsuffix /distclean, $(SUBDIRS))
recurse-all: $(addsuffix /all, $(ROMS))
recurse-clean: $(addsuffix /clean, $(ROMS))
recurse-distclean: $(addsuffix /distclean, $(ROMS))
######################################################################
@@ -202,7 +205,6 @@ clean: recurse-clean
! -path ./roms/edk2/ArmPkg/Library/GccLto/liblto-arm.a \
-exec rm {} +
rm -f TAGS cscope.* *~ */*~
@$(MAKE) -Ctests/qemu-iotests clean
VERSION = $(shell cat $(SRC_PATH)/VERSION)
@@ -284,13 +286,6 @@ include $(SRC_PATH)/tests/vm/Makefile.include
print-help-run = printf " %-30s - %s\\n" "$1" "$2"
print-help = @$(call print-help-run,$1,$2)
.PHONY: update-linux-vdso
update-linux-vdso:
@for m in $(SRC_PATH)/linux-user/*/Makefile.vdso; do \
$(MAKE) $(SUBDIR_MAKEFLAGS) -C $$(dirname $$m) -f Makefile.vdso \
SRC_PATH=$(SRC_PATH) BUILD_DIR=$(BUILD_DIR); \
done
.PHONY: help
help:
@echo 'Generic targets:'
@@ -301,7 +296,7 @@ help:
$(call print-help,cscope,Generate cscope index)
$(call print-help,sparse,Run sparse on the QEMU source)
@echo ''
ifneq ($(filter contrib/plugins, $(SUBDIRS)),)
ifeq ($(CONFIG_PLUGIN),y)
@echo 'Plugin targets:'
$(call print-help,plugins,Build the example TCG plugins)
@echo ''
@@ -311,9 +306,6 @@ endif
$(call print-help,distclean,Remove all generated files)
$(call print-help,dist,Build a distributable tarball)
@echo ''
@echo 'Linux-user targets:'
$(call print-help,update-linux-vdso,Build linux-user vdso images)
@echo ''
@echo 'Test targets:'
$(call print-help,check,Run all tests (check-help for details))
$(call print-help,bench,Run all benchmarks)
@@ -324,7 +316,7 @@ endif
@echo 'Documentation targets:'
$(call print-help,html man,Build documentation in specified format)
@echo ''
ifneq ($(filter msi, $(ninja-targets)),)
ifdef CONFIG_WIN32
@echo 'Windows targets:'
$(call print-help,installer,Build NSIS-based installer for QEMU)
$(call print-help,msi,Build MSI-based installer for qemu-ga)

View File

@@ -1 +1 @@
8.2.50
8.0.50

View File

@@ -4,6 +4,9 @@ config WHPX
config NVMM
bool
config HAX
bool
config HVF
bool
@@ -16,4 +19,3 @@ config KVM
config XEN
bool
select FSDEV_9P if VIRTFS
select XEN_BUS

View File

@@ -41,7 +41,7 @@ void accel_blocker_init(void)
void accel_ioctl_begin(void)
{
if (likely(bql_locked())) {
if (likely(qemu_mutex_iothread_locked())) {
return;
}
@@ -51,7 +51,7 @@ void accel_ioctl_begin(void)
void accel_ioctl_end(void)
{
if (likely(bql_locked())) {
if (likely(qemu_mutex_iothread_locked())) {
return;
}
@@ -62,7 +62,7 @@ void accel_ioctl_end(void)
void accel_cpu_ioctl_begin(CPUState *cpu)
{
if (unlikely(bql_locked())) {
if (unlikely(qemu_mutex_iothread_locked())) {
return;
}
@@ -72,7 +72,7 @@ void accel_cpu_ioctl_begin(CPUState *cpu)
void accel_cpu_ioctl_end(CPUState *cpu)
{
if (unlikely(bql_locked())) {
if (unlikely(qemu_mutex_iothread_locked())) {
return;
}
@@ -105,7 +105,7 @@ void accel_ioctl_inhibit_begin(void)
* We only allow inhibiting while holding the BQL, so we can easily
* identify when an inhibitor wants to issue an ioctl.
*/
g_assert(bql_locked());
g_assert(qemu_mutex_iothread_locked());
/* Block further invocations of the ioctls outside the BQL. */
CPU_FOREACH(cpu) {

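For orientation on the hunk above: the rename swaps qemu_mutex_iothread_locked() for bql_locked() without changing the blocker protocol itself. A minimal sketch of how a caller is expected to bracket an ioctl with this API (not part of the diff; the parameter names and the kvm_fd field usage are illustrative):

/* Sketch only: bracket a vCPU ioctl with the accel blocker API.
 * accel_cpu_ioctl_begin() returns immediately when the BQL is held;
 * otherwise it marks the ioctl in flight so an inhibitor can wait.
 * 'request'/'arg' are illustrative; cpu->kvm_fd assumes a KVM vCPU. */
static int vcpu_ioctl_sketch(CPUState *cpu, int request, void *arg)
{
    int ret;

    accel_cpu_ioctl_begin(cpu);
    ret = ioctl(cpu->kvm_fd, request, arg);
    accel_cpu_ioctl_end(cpu);
    return ret;
}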
View File

@@ -30,7 +30,7 @@
#include "hw/core/accel-cpu.h"
#ifndef CONFIG_USER_ONLY
#include "accel-system.h"
#include "accel-softmmu.h"
#endif /* !CONFIG_USER_ONLY */
static const TypeInfo accel_type = {
@@ -104,7 +104,7 @@ static void accel_init_cpu_interfaces(AccelClass *ac)
void accel_init_interfaces(AccelClass *ac)
{
#ifndef CONFIG_USER_ONLY
accel_system_init_ops_interfaces(ac);
accel_init_ops_interfaces(ac);
#endif /* !CONFIG_USER_ONLY */
accel_init_cpu_interfaces(ac);
@@ -119,37 +119,16 @@ void accel_cpu_instance_init(CPUState *cpu)
}
}
bool accel_cpu_common_realize(CPUState *cpu, Error **errp)
bool accel_cpu_realizefn(CPUState *cpu, Error **errp)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
AccelState *accel = current_accel();
AccelClass *acc = ACCEL_GET_CLASS(accel);
/* target specific realization */
if (cc->accel_cpu && cc->accel_cpu->cpu_target_realize
&& !cc->accel_cpu->cpu_target_realize(cpu, errp)) {
return false;
if (cc->accel_cpu && cc->accel_cpu->cpu_realizefn) {
return cc->accel_cpu->cpu_realizefn(cpu, errp);
}
/* generic realization */
if (acc->cpu_common_realize && !acc->cpu_common_realize(cpu, errp)) {
return false;
}
return true;
}
void accel_cpu_common_unrealize(CPUState *cpu)
{
AccelState *accel = current_accel();
AccelClass *acc = ACCEL_GET_CLASS(accel);
/* generic unrealization */
if (acc->cpu_common_unrealize) {
acc->cpu_common_unrealize(cpu);
}
}
int accel_supported_gdbstub_sstep_flags(void)
{
AccelState *accel = current_accel();

View File

@@ -28,7 +28,7 @@
#include "hw/boards.h"
#include "sysemu/cpus.h"
#include "qemu/error-report.h"
#include "accel-system.h"
#include "accel-softmmu.h"
int accel_init_machine(AccelState *accel, MachineState *ms)
{
@@ -62,7 +62,7 @@ void accel_setup_post(MachineState *ms)
}
/* initialize the arch-independent accel operation interfaces */
void accel_system_init_ops_interfaces(AccelClass *ac)
void accel_init_ops_interfaces(AccelClass *ac)
{
const char *ac_name;
char *ops_name;
@@ -99,8 +99,8 @@ static const TypeInfo accel_ops_type_info = {
.class_size = sizeof(AccelOpsClass),
};
static void accel_system_register_types(void)
static void accel_softmmu_register_types(void)
{
type_register_static(&accel_ops_type_info);
}
type_init(accel_system_register_types);
type_init(accel_softmmu_register_types);

View File

@@ -7,9 +7,9 @@
* See the COPYING file in the top-level directory.
*/
#ifndef ACCEL_SYSTEM_H
#define ACCEL_SYSTEM_H
#ifndef ACCEL_SOFTMMU_H
#define ACCEL_SOFTMMU_H
void accel_system_init_ops_interfaces(AccelClass *ac);
void accel_init_ops_interfaces(AccelClass *ac);
#endif /* ACCEL_SYSTEM_H */
#endif /* ACCEL_SOFTMMU_H */

View File

@@ -24,9 +24,10 @@ static void *dummy_cpu_thread_fn(void *arg)
rcu_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->can_do_io = 1;
current_cpu = cpu;
#ifndef _WIN32
@@ -42,7 +43,7 @@ static void *dummy_cpu_thread_fn(void *arg)
qemu_guest_random_seed_thread_part2(cpu->random_seed);
do {
bql_unlock();
qemu_mutex_unlock_iothread();
#ifndef _WIN32
do {
int sig;
@@ -55,11 +56,11 @@ static void *dummy_cpu_thread_fn(void *arg)
#else
qemu_sem_wait(&cpu->sem);
#endif
bql_lock();
qemu_mutex_lock_iothread();
qemu_wait_io_event(cpu);
} while (!cpu->unplug);
bql_unlock();
qemu_mutex_unlock_iothread();
rcu_unregister_thread();
return NULL;
}

View File

@@ -424,10 +424,11 @@ static void *hvf_cpu_thread_fn(void *arg)
rcu_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->can_do_io = 1;
current_cpu = cpu;
hvf_init_vcpu(cpu);
@@ -448,7 +449,7 @@ static void *hvf_cpu_thread_fn(void *arg)
hvf_vcpu_destroy(cpu);
cpu_thread_signal_destroyed(cpu);
bql_unlock();
qemu_mutex_unlock_iothread();
rcu_unregister_thread();
return NULL;
}
@@ -473,7 +474,7 @@ static void hvf_start_vcpu_thread(CPUState *cpu)
cpu, QEMU_THREAD_JOINABLE);
}
static int hvf_insert_breakpoint(CPUState *cpu, int type, vaddr addr, vaddr len)
static int hvf_insert_breakpoint(CPUState *cpu, int type, hwaddr addr, hwaddr len)
{
struct hvf_sw_breakpoint *bp;
int err;
@@ -511,7 +512,7 @@ static int hvf_insert_breakpoint(CPUState *cpu, int type, vaddr addr, vaddr len)
return 0;
}
static int hvf_remove_breakpoint(CPUState *cpu, int type, vaddr addr, vaddr len)
static int hvf_remove_breakpoint(CPUState *cpu, int type, hwaddr addr, hwaddr len)
{
struct hvf_sw_breakpoint *bp;
int err;

View File

@@ -51,7 +51,7 @@ void assert_hvf_ok(hv_return_t ret)
abort();
}
struct hvf_sw_breakpoint *hvf_find_sw_breakpoint(CPUState *cpu, vaddr pc)
struct hvf_sw_breakpoint *hvf_find_sw_breakpoint(CPUState *cpu, target_ulong pc)
{
struct hvf_sw_breakpoint *bp;

View File

@@ -33,9 +33,10 @@ static void *kvm_vcpu_thread_fn(void *arg)
rcu_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->can_do_io = 1;
current_cpu = cpu;
r = kvm_init_vcpu(cpu, &error_fatal);
@@ -57,7 +58,7 @@ static void *kvm_vcpu_thread_fn(void *arg)
kvm_destroy_vcpu(cpu);
cpu_thread_signal_destroyed(cpu);
bql_unlock();
qemu_mutex_unlock_iothread();
rcu_unregister_thread();
return NULL;
}

View File

@@ -69,6 +69,16 @@
#define KVM_GUESTDBG_BLOCKIRQ 0
#endif
//#define DEBUG_KVM
#ifdef DEBUG_KVM
#define DPRINTF(fmt, ...) \
do { fprintf(stderr, fmt, ## __VA_ARGS__); } while (0)
#else
#define DPRINTF(fmt, ...) \
do { } while (0)
#endif
struct KVMParkedVcpu {
unsigned long vcpu_id;
int kvm_fd;
@@ -80,6 +90,8 @@ bool kvm_kernel_irqchip;
bool kvm_split_irqchip;
bool kvm_async_interrupts_allowed;
bool kvm_halt_in_kernel_allowed;
bool kvm_eventfds_allowed;
bool kvm_irqfds_allowed;
bool kvm_resamplefds_allowed;
bool kvm_msi_via_irqfd_allowed;
bool kvm_gsi_routing_allowed;
@@ -87,8 +99,10 @@ bool kvm_gsi_direct_mapping;
bool kvm_allowed;
bool kvm_readonly_mem_allowed;
bool kvm_vm_attributes_allowed;
bool kvm_direct_msi_allowed;
bool kvm_ioeventfd_any_length_allowed;
bool kvm_msi_use_devid;
static bool kvm_has_guest_debug;
bool kvm_has_guest_debug;
static int kvm_sstep_flags;
static bool kvm_immediate_exit;
static hwaddr kvm_max_slot_size = ~0;
@@ -97,9 +111,6 @@ static const KVMCapabilityInfo kvm_required_capabilites[] = {
KVM_CAP_INFO(USER_MEMORY),
KVM_CAP_INFO(DESTROY_MEMORY_REGION_WORKS),
KVM_CAP_INFO(JOIN_MEMORY_REGIONS_WORKS),
KVM_CAP_INFO(INTERNAL_ERROR_DATA),
KVM_CAP_INFO(IOEVENTFD),
KVM_CAP_INFO(IOEVENTFD_ANY_LENGTH),
KVM_CAP_LAST_INFO
};
@@ -163,31 +174,13 @@ void kvm_resample_fd_notify(int gsi)
}
}
unsigned int kvm_get_max_memslots(void)
int kvm_get_max_memslots(void)
{
KVMState *s = KVM_STATE(current_accel());
return s->nr_slots;
}
unsigned int kvm_get_free_memslots(void)
{
unsigned int used_slots = 0;
KVMState *s = kvm_state;
int i;
kvm_slots_lock();
for (i = 0; i < s->nr_as; i++) {
if (!s->as[i].ml) {
continue;
}
used_slots = MAX(used_slots, s->as[i].ml->nr_used_slots);
}
kvm_slots_unlock();
return s->nr_slots - used_slots;
}
/* Called with KVMMemoryListener.slots_lock held */
static KVMSlot *kvm_get_free_slot(KVMMemoryListener *kml)
{
@@ -203,6 +196,19 @@ static KVMSlot *kvm_get_free_slot(KVMMemoryListener *kml)
return NULL;
}
bool kvm_has_free_slot(MachineState *ms)
{
KVMState *s = KVM_STATE(ms->accelerator);
bool result;
KVMMemoryListener *kml = &s->memory_listener;
kvm_slots_lock();
result = !!kvm_get_free_slot(kml);
kvm_slots_unlock();
return result;
}
/* Called with KVMMemoryListener.slots_lock held */
static KVMSlot *kvm_alloc_slot(KVMMemoryListener *kml)
{
@@ -321,7 +327,7 @@ static int do_kvm_destroy_vcpu(CPUState *cpu)
struct KVMParkedVcpu *vcpu = NULL;
int ret = 0;
trace_kvm_destroy_vcpu();
DPRINTF("kvm_destroy_vcpu\n");
ret = kvm_arch_destroy_vcpu(cpu);
if (ret < 0) {
@@ -331,7 +337,7 @@ static int do_kvm_destroy_vcpu(CPUState *cpu)
mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
if (mmap_size < 0) {
ret = mmap_size;
trace_kvm_failed_get_vcpu_mmap_size();
DPRINTF("KVM_GET_VCPU_MMAP_SIZE failed\n");
goto err;
}
@@ -433,6 +439,7 @@ int kvm_init_vcpu(CPUState *cpu, Error **errp)
PAGE_SIZE * KVM_DIRTY_LOG_PAGE_OFFSET);
if (cpu->kvm_dirty_gfns == MAP_FAILED) {
ret = -errno;
DPRINTF("mmap'ing vcpu dirty gfns failed: %d\n", ret);
goto err;
}
}
@@ -806,7 +813,7 @@ static void kvm_dirty_ring_flush(void)
* should always be with BQL held, serialization is guaranteed.
* However, let's be sure of it.
*/
assert(bql_locked());
assert(qemu_mutex_iothread_locked());
/*
* First make sure to flush the hardware buffers by kicking all
* vcpus out in a synchronous way.
@@ -1094,6 +1101,13 @@ static void kvm_coalesce_pio_del(MemoryListener *listener,
}
}
static MemoryListener kvm_coalesced_pio_listener = {
.name = "kvm-coalesced-pio",
.coalesced_io_add = kvm_coalesce_pio_add,
.coalesced_io_del = kvm_coalesce_pio_del,
.priority = MEMORY_LISTENER_PRIORITY_MIN,
};
int kvm_check_extension(KVMState *s, unsigned int extension)
{
int ret;
@@ -1119,11 +1133,6 @@ int kvm_vm_check_extension(KVMState *s, unsigned int extension)
return ret;
}
/*
* We track the poisoned pages to be able to:
* - replace them on VM reset
* - block a migration for a VM with a poisoned page
*/
typedef struct HWPoisonPage {
ram_addr_t ram_addr;
QLIST_ENTRY(HWPoisonPage) list;
@@ -1157,11 +1166,6 @@ void kvm_hwpoison_page_add(ram_addr_t ram_addr)
QLIST_INSERT_HEAD(&hwpoison_page_list, page, list);
}
bool kvm_hwpoisoned_mem(void)
{
return !QLIST_EMPTY(&hwpoison_page_list);
}
static uint32_t adjust_ioeventfd_endianness(uint32_t val, uint32_t size)
{
#if HOST_BIG_ENDIAN != TARGET_BIG_ENDIAN
@@ -1245,6 +1249,43 @@ static int kvm_set_ioeventfd_pio(int fd, uint16_t addr, uint16_t val,
}
static int kvm_check_many_ioeventfds(void)
{
/* Userspace can use ioeventfd for io notification. This requires a host
* that supports eventfd(2) and an I/O thread; since eventfd does not
* support SIGIO it cannot interrupt the vcpu.
*
* Older kernels have a 6 device limit on the KVM io bus. Find out so we
* can avoid creating too many ioeventfds.
*/
#if defined(CONFIG_EVENTFD)
int ioeventfds[7];
int i, ret = 0;
for (i = 0; i < ARRAY_SIZE(ioeventfds); i++) {
ioeventfds[i] = eventfd(0, EFD_CLOEXEC);
if (ioeventfds[i] < 0) {
break;
}
ret = kvm_set_ioeventfd_pio(ioeventfds[i], 0, i, true, 2, true);
if (ret < 0) {
close(ioeventfds[i]);
break;
}
}
/* Decide whether many devices are supported or not */
ret = i == ARRAY_SIZE(ioeventfds);
while (i-- > 0) {
kvm_set_ioeventfd_pio(ioeventfds[i], 0, i, false, 2, true);
close(ioeventfds[i]);
}
return ret;
#else
return 0;
#endif
}
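For context, the probe above is run once and cached; both consumers appear later in this diff, roughly:

/* Consumers visible elsewhere in this diff (sketch, not new code):
 * cached once during kvm_init(), then read back via the accessor. */
s->many_ioeventfds = kvm_check_many_ioeventfds();   /* in kvm_init() */

int kvm_has_many_ioeventfds(void)
{
    if (!kvm_enabled()) {
        return 0;
    }
    return kvm_state->many_ioeventfds;
}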
static const KVMCapabilityInfo *
kvm_check_extension_list(KVMState *s, const KVMCapabilityInfo *list)
{
@@ -1346,7 +1387,6 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
}
start_addr += slot_size;
size -= slot_size;
kml->nr_used_slots--;
} while (size);
return;
}
@@ -1372,7 +1412,6 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
ram_start_offset += slot_size;
ram += slot_size;
size -= slot_size;
kml->nr_used_slots++;
} while (size);
}
@@ -1401,9 +1440,9 @@ static void *kvm_dirty_ring_reaper_thread(void *data)
trace_kvm_dirty_ring_reaper("wakeup");
r->reaper_state = KVM_DIRTY_RING_REAPER_REAPING;
bql_lock();
qemu_mutex_lock_iothread();
kvm_dirty_ring_reap(s, NULL);
bql_unlock();
qemu_mutex_unlock_iothread();
r->reaper_iteration++;
}
@@ -1415,13 +1454,15 @@ static void *kvm_dirty_ring_reaper_thread(void *data)
return NULL;
}
static void kvm_dirty_ring_reaper_init(KVMState *s)
static int kvm_dirty_ring_reaper_init(KVMState *s)
{
struct KVMDirtyRingReaper *r = &s->reaper;
qemu_thread_create(&r->reaper_thr, "kvm-reaper",
kvm_dirty_ring_reaper_thread,
s, QEMU_THREAD_JOINABLE);
return 0;
}
static int kvm_dirty_ring_init(KVMState *s)
@@ -1760,8 +1801,6 @@ void kvm_memory_listener_register(KVMState *s, KVMMemoryListener *kml,
static MemoryListener kvm_io_listener = {
.name = "kvm-io",
.coalesced_io_add = kvm_coalesce_pio_add,
.coalesced_io_del = kvm_coalesce_pio_del,
.eventfd_add = kvm_io_ioeventfd_add,
.eventfd_del = kvm_io_ioeventfd_del,
.priority = MEMORY_LISTENER_PRIORITY_DEV_BACKEND,
@@ -1803,7 +1842,7 @@ static void clear_gsi(KVMState *s, unsigned int gsi)
void kvm_init_irq_routing(KVMState *s)
{
int gsi_count;
int gsi_count, i;
gsi_count = kvm_check_extension(s, KVM_CAP_IRQ_ROUTING) - 1;
if (gsi_count > 0) {
@@ -1815,6 +1854,12 @@ void kvm_init_irq_routing(KVMState *s)
s->irq_routes = g_malloc0(sizeof(*s->irq_routes));
s->nr_allocated_irq_routes = 0;
if (!kvm_direct_msi_allowed) {
for (i = 0; i < KVM_MSI_HASHTAB_SIZE; i++) {
QTAILQ_INIT(&s->msi_hashtab[i]);
}
}
kvm_arch_init_irq_routing(s);
}
@@ -1934,10 +1979,41 @@ void kvm_irqchip_change_notify(void)
notifier_list_notify(&kvm_irqchip_change_notifiers, NULL);
}
static unsigned int kvm_hash_msi(uint32_t data)
{
/* This is optimized for IA32 MSI layout. However, no other arch shall
* repeat the mistake of not providing a direct MSI injection API. */
return data & 0xff;
}
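A worked instance of that hash, since the comment leans on the IA32 MSI layout (values below are illustrative): the low byte of the MSI data word is the x86 delivery vector, so routes key directly on the vector.

/* Worked example (illustrative values only): two MSIs with data words
 * 0x00004041 and 0x00008041 share vector 0x41 and therefore land in
 * the same bucket of the 256-entry msi_hashtab. */
assert(kvm_hash_msi(0x00004041) == 0x41);
assert(kvm_hash_msi(0x00008041) == 0x41);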
static void kvm_flush_dynamic_msi_routes(KVMState *s)
{
KVMMSIRoute *route, *next;
unsigned int hash;
for (hash = 0; hash < KVM_MSI_HASHTAB_SIZE; hash++) {
QTAILQ_FOREACH_SAFE(route, &s->msi_hashtab[hash], entry, next) {
kvm_irqchip_release_virq(s, route->kroute.gsi);
QTAILQ_REMOVE(&s->msi_hashtab[hash], route, entry);
g_free(route);
}
}
}
static int kvm_irqchip_get_virq(KVMState *s)
{
int next_virq;
/*
* PIC and IOAPIC share the first 16 GSI numbers, thus the available
* GSI numbers are more than the number of IRQ route. Allocating a GSI
* number can succeed even though a new route entry cannot be added.
* When this happens, flush dynamic MSI entries to free IRQ route entries.
*/
if (!kvm_direct_msi_allowed && s->irq_routes->nr == s->gsi_count) {
kvm_flush_dynamic_msi_routes(s);
}
/* Return the lowest unused GSI in the bitmap */
next_virq = find_first_zero_bit(s->used_gsi_bitmap, s->gsi_count);
if (next_virq >= s->gsi_count) {
@@ -1947,17 +2023,63 @@ static int kvm_irqchip_get_virq(KVMState *s)
}
}
static KVMMSIRoute *kvm_lookup_msi_route(KVMState *s, MSIMessage msg)
{
unsigned int hash = kvm_hash_msi(msg.data);
KVMMSIRoute *route;
QTAILQ_FOREACH(route, &s->msi_hashtab[hash], entry) {
if (route->kroute.u.msi.address_lo == (uint32_t)msg.address &&
route->kroute.u.msi.address_hi == (msg.address >> 32) &&
route->kroute.u.msi.data == le32_to_cpu(msg.data)) {
return route;
}
}
return NULL;
}
int kvm_irqchip_send_msi(KVMState *s, MSIMessage msg)
{
struct kvm_msi msi;
KVMMSIRoute *route;
msi.address_lo = (uint32_t)msg.address;
msi.address_hi = msg.address >> 32;
msi.data = le32_to_cpu(msg.data);
msi.flags = 0;
memset(msi.pad, 0, sizeof(msi.pad));
if (kvm_direct_msi_allowed) {
msi.address_lo = (uint32_t)msg.address;
msi.address_hi = msg.address >> 32;
msi.data = le32_to_cpu(msg.data);
msi.flags = 0;
memset(msi.pad, 0, sizeof(msi.pad));
return kvm_vm_ioctl(s, KVM_SIGNAL_MSI, &msi);
return kvm_vm_ioctl(s, KVM_SIGNAL_MSI, &msi);
}
route = kvm_lookup_msi_route(s, msg);
if (!route) {
int virq;
virq = kvm_irqchip_get_virq(s);
if (virq < 0) {
return virq;
}
route = g_new0(KVMMSIRoute, 1);
route->kroute.gsi = virq;
route->kroute.type = KVM_IRQ_ROUTING_MSI;
route->kroute.flags = 0;
route->kroute.u.msi.address_lo = (uint32_t)msg.address;
route->kroute.u.msi.address_hi = msg.address >> 32;
route->kroute.u.msi.data = le32_to_cpu(msg.data);
kvm_add_routing_entry(s, &route->kroute);
kvm_irqchip_commit_routes(s);
QTAILQ_INSERT_TAIL(&s->msi_hashtab[kvm_hash_msi(msg.data)], route,
entry);
}
assert(route->kroute.type == KVM_IRQ_ROUTING_MSI);
return kvm_set_irq(s, route->kroute.gsi, 1);
}
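To summarize the control flow restored above (a comment sketch grounded in this hunk, not new behavior):

/* Flow of kvm_irqchip_send_msi() as restored above (summary sketch):
 *   kvm_direct_msi_allowed  -> stateless KVM_SIGNAL_MSI ioctl
 *   otherwise               -> look up / allocate a cached KVMMSIRoute
 *                              (virq), then kvm_set_irq(s, gsi, 1)
 * When GSIs run out, kvm_irqchip_get_virq() flushes the dynamic MSI
 * cache via kvm_flush_dynamic_msi_routes(), as shown earlier. */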
int kvm_irqchip_add_msi_route(KVMRouteChange *c, int vector, PCIDevice *dev)
@@ -2084,6 +2206,10 @@ static int kvm_irqchip_assign_irqfd(KVMState *s, EventNotifier *event,
}
}
if (!kvm_irqfds_enabled()) {
return -ENOSYS;
}
return kvm_vm_ioctl(s, KVM_IRQFD, &irqfd);
}
@@ -2244,11 +2370,6 @@ static void kvm_irqchip_create(KVMState *s)
return;
}
if (kvm_check_extension(s, KVM_CAP_IRQFD) <= 0) {
fprintf(stderr, "kvm: irqfd not implemented\n");
exit(1);
}
/* First probe and see if there's a arch-specific hook to create the
* in-kernel irqchip for us */
ret = kvm_arch_irqchip_create(s);
@@ -2337,7 +2458,7 @@ static int kvm_init(MachineState *ms)
KVMState *s;
const KVMCapabilityInfo *missing_cap;
int ret;
int type;
int type = 0;
uint64_t dirty_log_manual_caps;
qemu_mutex_init(&kml_slots_lock);
@@ -2359,7 +2480,7 @@ static int kvm_init(MachineState *ms)
QTAILQ_INIT(&s->kvm_sw_breakpoints);
#endif
QLIST_INIT(&s->kvm_parked_vcpus);
s->fd = qemu_open_old(s->device ?: "/dev/kvm", O_RDWR);
s->fd = qemu_open_old("/dev/kvm", O_RDWR);
if (s->fd == -1) {
fprintf(stderr, "Could not access KVM kernel module: %m\n");
ret = -errno;
@@ -2402,13 +2523,6 @@ static int kvm_init(MachineState *ms)
type = mc->kvm_type(ms, kvm_type);
} else if (mc->kvm_type) {
type = mc->kvm_type(ms, NULL);
} else {
type = kvm_arch_get_default_type(ms);
}
if (type < 0) {
ret = -EINVAL;
goto err;
}
do {
@@ -2523,8 +2637,22 @@ static int kvm_init(MachineState *ms)
#ifdef KVM_CAP_VCPU_EVENTS
s->vcpu_events = kvm_check_extension(s, KVM_CAP_VCPU_EVENTS);
#endif
s->robust_singlestep =
kvm_check_extension(s, KVM_CAP_X86_ROBUST_SINGLESTEP);
#ifdef KVM_CAP_DEBUGREGS
s->debugregs = kvm_check_extension(s, KVM_CAP_DEBUGREGS);
#endif
s->max_nested_state_len = kvm_check_extension(s, KVM_CAP_NESTED_STATE);
#ifdef KVM_CAP_IRQ_ROUTING
kvm_direct_msi_allowed = (kvm_check_extension(s, KVM_CAP_SIGNAL_MSI) > 0);
#endif
s->intx_set_mask = kvm_check_extension(s, KVM_CAP_PCI_2_3);
s->irq_set_ioctl = KVM_IRQ_LINE;
if (kvm_check_extension(s, KVM_CAP_IRQ_INJECT_STATUS)) {
s->irq_set_ioctl = KVM_IRQ_LINE_STATUS;
@@ -2533,12 +2661,21 @@ static int kvm_init(MachineState *ms)
kvm_readonly_mem_allowed =
(kvm_check_extension(s, KVM_CAP_READONLY_MEM) > 0);
kvm_eventfds_allowed =
(kvm_check_extension(s, KVM_CAP_IOEVENTFD) > 0);
kvm_irqfds_allowed =
(kvm_check_extension(s, KVM_CAP_IRQFD) > 0);
kvm_resamplefds_allowed =
(kvm_check_extension(s, KVM_CAP_IRQFD_RESAMPLE) > 0);
kvm_vm_attributes_allowed =
(kvm_check_extension(s, KVM_CAP_VM_ATTRIBUTES) > 0);
kvm_ioeventfd_any_length_allowed =
(kvm_check_extension(s, KVM_CAP_IOEVENTFD_ANY_LENGTH) > 0);
#ifdef KVM_CAP_SET_GUEST_DEBUG
kvm_has_guest_debug =
(kvm_check_extension(s, KVM_CAP_SET_GUEST_DEBUG) > 0);
@@ -2575,16 +2712,24 @@ static int kvm_init(MachineState *ms)
kvm_irqchip_create(s);
}
s->memory_listener.listener.eventfd_add = kvm_mem_ioeventfd_add;
s->memory_listener.listener.eventfd_del = kvm_mem_ioeventfd_del;
if (kvm_eventfds_allowed) {
s->memory_listener.listener.eventfd_add = kvm_mem_ioeventfd_add;
s->memory_listener.listener.eventfd_del = kvm_mem_ioeventfd_del;
}
s->memory_listener.listener.coalesced_io_add = kvm_coalesce_mmio_region;
s->memory_listener.listener.coalesced_io_del = kvm_uncoalesce_mmio_region;
kvm_memory_listener_register(s, &s->memory_listener,
&address_space_memory, 0, "kvm-memory");
memory_listener_register(&kvm_io_listener,
if (kvm_eventfds_allowed) {
memory_listener_register(&kvm_io_listener,
&address_space_io);
}
memory_listener_register(&kvm_coalesced_pio_listener,
&address_space_io);
s->many_ioeventfds = kvm_check_many_ioeventfds();
s->sync_mmu = !!kvm_vm_check_extension(kvm_state, KVM_CAP_SYNC_MMU);
if (!s->sync_mmu) {
ret = ram_block_discard_disable(true);
@@ -2592,7 +2737,10 @@ static int kvm_init(MachineState *ms)
}
if (s->kvm_dirty_ring_size) {
kvm_dirty_ring_reaper_init(s);
ret = kvm_dirty_ring_reaper_init(s);
if (ret) {
goto err;
}
}
if (kvm_check_extension(kvm_state, KVM_CAP_BINARY_STATS_FD)) {
@@ -2610,7 +2758,6 @@ err:
if (s->fd != -1) {
close(s->fd);
}
g_free(s->as);
g_free(s->memory_listener.slots);
return ret;
@@ -2637,14 +2784,16 @@ static void kvm_handle_io(uint16_t port, MemTxAttrs attrs, void *data, int direc
static int kvm_handle_internal_error(CPUState *cpu, struct kvm_run *run)
{
int i;
fprintf(stderr, "KVM internal error. Suberror: %d\n",
run->internal.suberror);
for (i = 0; i < run->internal.ndata; ++i) {
fprintf(stderr, "extra data[%d]: 0x%016"PRIx64"\n",
i, (uint64_t)run->internal.data[i]);
if (kvm_check_extension(kvm_state, KVM_CAP_INTERNAL_ERROR_DATA)) {
int i;
for (i = 0; i < run->internal.ndata; ++i) {
fprintf(stderr, "extra data[%d]: 0x%016"PRIx64"\n",
i, (uint64_t)run->internal.data[i]);
}
}
if (run->internal.suberror == KVM_INTERNAL_ERROR_EMULATION) {
fprintf(stderr, "emulation failure\n");
@@ -2663,7 +2812,7 @@ void kvm_flush_coalesced_mmio_buffer(void)
{
KVMState *s = kvm_state;
if (!s || s->coalesced_flush_in_progress) {
if (s->coalesced_flush_in_progress) {
return;
}
@@ -2699,13 +2848,7 @@ bool kvm_cpu_check_are_resettable(void)
static void do_kvm_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
{
if (!cpu->vcpu_dirty) {
int ret = kvm_arch_get_registers(cpu);
if (ret) {
error_report("Failed to get registers: %s", strerror(-ret));
cpu_dump_state(cpu, stderr, CPU_DUMP_CODE);
vm_stop(RUN_STATE_INTERNAL_ERROR);
}
kvm_arch_get_registers(cpu);
cpu->vcpu_dirty = true;
}
}
@@ -2719,13 +2862,7 @@ void kvm_cpu_synchronize_state(CPUState *cpu)
static void do_kvm_cpu_synchronize_post_reset(CPUState *cpu, run_on_cpu_data arg)
{
int ret = kvm_arch_put_registers(cpu, KVM_PUT_RESET_STATE);
if (ret) {
error_report("Failed to put registers after reset: %s", strerror(-ret));
cpu_dump_state(cpu, stderr, CPU_DUMP_CODE);
vm_stop(RUN_STATE_INTERNAL_ERROR);
}
kvm_arch_put_registers(cpu, KVM_PUT_RESET_STATE);
cpu->vcpu_dirty = false;
}
@@ -2736,12 +2873,7 @@ void kvm_cpu_synchronize_post_reset(CPUState *cpu)
static void do_kvm_cpu_synchronize_post_init(CPUState *cpu, run_on_cpu_data arg)
{
int ret = kvm_arch_put_registers(cpu, KVM_PUT_FULL_STATE);
if (ret) {
error_report("Failed to put registers after init: %s", strerror(-ret));
exit(1);
}
kvm_arch_put_registers(cpu, KVM_PUT_FULL_STATE);
cpu->vcpu_dirty = false;
}
@@ -2820,34 +2952,27 @@ int kvm_cpu_exec(CPUState *cpu)
struct kvm_run *run = cpu->kvm_run;
int ret, run_ret;
trace_kvm_cpu_exec();
DPRINTF("kvm_cpu_exec()\n");
if (kvm_arch_process_async_events(cpu)) {
qatomic_set(&cpu->exit_request, 0);
return EXCP_HLT;
}
bql_unlock();
qemu_mutex_unlock_iothread();
cpu_exec_start(cpu);
do {
MemTxAttrs attrs;
if (cpu->vcpu_dirty) {
ret = kvm_arch_put_registers(cpu, KVM_PUT_RUNTIME_STATE);
if (ret) {
error_report("Failed to put registers after init: %s",
strerror(-ret));
ret = -1;
break;
}
kvm_arch_put_registers(cpu, KVM_PUT_RUNTIME_STATE);
cpu->vcpu_dirty = false;
}
kvm_arch_pre_run(cpu, run);
if (qatomic_read(&cpu->exit_request)) {
trace_kvm_interrupt_exit_request();
DPRINTF("interrupt exit requested\n");
/*
* KVM requires us to reenter the kernel after IO exits to complete
* instruction emulation. This self-signal will ensure that we
@@ -2867,17 +2992,17 @@ int kvm_cpu_exec(CPUState *cpu)
#ifdef KVM_HAVE_MCE_INJECTION
if (unlikely(have_sigbus_pending)) {
bql_lock();
qemu_mutex_lock_iothread();
kvm_arch_on_sigbus_vcpu(cpu, pending_sigbus_code,
pending_sigbus_addr);
have_sigbus_pending = false;
bql_unlock();
qemu_mutex_unlock_iothread();
}
#endif
if (run_ret < 0) {
if (run_ret == -EINTR || run_ret == -EAGAIN) {
trace_kvm_io_window_exit();
DPRINTF("io window exit\n");
kvm_eat_signals(cpu);
ret = EXCP_INTERRUPT;
break;
@@ -2899,6 +3024,7 @@ int kvm_cpu_exec(CPUState *cpu)
trace_kvm_run_exit(cpu->cpu_index, run->exit_reason);
switch (run->exit_reason) {
case KVM_EXIT_IO:
DPRINTF("handle_io\n");
/* Called outside BQL */
kvm_handle_io(run->io.port, attrs,
(uint8_t *)run + run->io.data_offset,
@@ -2908,6 +3034,7 @@ int kvm_cpu_exec(CPUState *cpu)
ret = 0;
break;
case KVM_EXIT_MMIO:
DPRINTF("handle_mmio\n");
/* Called outside BQL */
address_space_rw(&address_space_memory,
run->mmio.phys_addr, attrs,
@@ -2917,9 +3044,11 @@ int kvm_cpu_exec(CPUState *cpu)
ret = 0;
break;
case KVM_EXIT_IRQ_WINDOW_OPEN:
DPRINTF("irq_window_open\n");
ret = EXCP_INTERRUPT;
break;
case KVM_EXIT_SHUTDOWN:
DPRINTF("shutdown\n");
qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
ret = EXCP_INTERRUPT;
break;
@@ -2937,7 +3066,7 @@ int kvm_cpu_exec(CPUState *cpu)
* still full. Got kicked by KVM_RESET_DIRTY_RINGS.
*/
trace_kvm_dirty_ring_full(cpu->cpu_index);
bql_lock();
qemu_mutex_lock_iothread();
/*
* We throttle vCPU by making it sleep once it exit from kernel
* due to dirty ring full. In the dirtylimit scenario, reaping
@@ -2949,12 +3078,11 @@ int kvm_cpu_exec(CPUState *cpu)
} else {
kvm_dirty_ring_reap(kvm_state, NULL);
}
bql_unlock();
qemu_mutex_unlock_iothread();
dirtylimit_vcpu_execute(cpu);
ret = 0;
break;
case KVM_EXIT_SYSTEM_EVENT:
trace_kvm_run_exit_system_event(cpu->cpu_index, run->system_event.type);
switch (run->system_event.type) {
case KVM_SYSTEM_EVENT_SHUTDOWN:
qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
@@ -2966,24 +3094,26 @@ int kvm_cpu_exec(CPUState *cpu)
break;
case KVM_SYSTEM_EVENT_CRASH:
kvm_cpu_synchronize_state(cpu);
bql_lock();
qemu_mutex_lock_iothread();
qemu_system_guest_panicked(cpu_get_crash_info(cpu));
bql_unlock();
qemu_mutex_unlock_iothread();
ret = 0;
break;
default:
DPRINTF("kvm_arch_handle_exit\n");
ret = kvm_arch_handle_exit(cpu, run);
break;
}
break;
default:
DPRINTF("kvm_arch_handle_exit\n");
ret = kvm_arch_handle_exit(cpu, run);
break;
}
} while (ret == 0);
cpu_exec_end(cpu);
bql_lock();
qemu_mutex_lock_iothread();
if (ret < 0) {
cpu_dump_state(cpu, stderr, CPU_DUMP_CODE);
@@ -3133,11 +3263,29 @@ int kvm_has_vcpu_events(void)
return kvm_state->vcpu_events;
}
int kvm_has_robust_singlestep(void)
{
return kvm_state->robust_singlestep;
}
int kvm_has_debugregs(void)
{
return kvm_state->debugregs;
}
int kvm_max_nested_state_length(void)
{
return kvm_state->max_nested_state_len;
}
int kvm_has_many_ioeventfds(void)
{
if (!kvm_enabled()) {
return 0;
}
return kvm_state->many_ioeventfds;
}
int kvm_has_gsi_routing(void)
{
#ifdef KVM_CAP_IRQ_ROUTING
@@ -3147,13 +3295,19 @@ int kvm_has_gsi_routing(void)
#endif
}
int kvm_has_intx_set_mask(void)
{
return kvm_state->intx_set_mask;
}
bool kvm_arm_supports_user_irq(void)
{
return kvm_check_extension(kvm_state, KVM_CAP_ARM_USER_IRQ);
}
#ifdef KVM_CAP_SET_GUEST_DEBUG
struct kvm_sw_breakpoint *kvm_find_sw_breakpoint(CPUState *cpu, vaddr pc)
struct kvm_sw_breakpoint *kvm_find_sw_breakpoint(CPUState *cpu,
target_ulong pc)
{
struct kvm_sw_breakpoint *bp;
@@ -3595,24 +3749,6 @@ static void kvm_set_dirty_ring_size(Object *obj, Visitor *v,
s->kvm_dirty_ring_size = value;
}
static char *kvm_get_device(Object *obj,
Error **errp G_GNUC_UNUSED)
{
KVMState *s = KVM_STATE(obj);
return g_strdup(s->device);
}
static void kvm_set_device(Object *obj,
const char *value,
Error **errp G_GNUC_UNUSED)
{
KVMState *s = KVM_STATE(obj);
g_free(s->device);
s->device = g_strdup(value);
}
static void kvm_accel_instance_init(Object *obj)
{
KVMState *s = KVM_STATE(obj);
@@ -3625,13 +3761,11 @@ static void kvm_accel_instance_init(Object *obj)
/* KVM dirty ring is by default off */
s->kvm_dirty_ring_size = 0;
s->kvm_dirty_ring_with_bitmap = false;
s->kvm_eager_split_size = 0;
s->notify_vmexit = NOTIFY_VMEXIT_OPTION_RUN;
s->notify_window = 0;
s->xen_version = 0;
s->xen_gnttab_max_frames = 64;
s->xen_evtchn_max_pirq = 256;
s->device = NULL;
}
/**
@@ -3672,10 +3806,6 @@ static void kvm_accel_class_init(ObjectClass *oc, void *data)
object_class_property_set_description(oc, "dirty-ring-size",
"Size of KVM dirty page ring buffer (default: 0, i.e. use bitmap)");
object_class_property_add_str(oc, "device", kvm_get_device, kvm_set_device);
object_class_property_set_description(oc, "device",
"Path to the device node to use (default: /dev/kvm)");
kvm_arch_accel_class_init(oc);
}

View File

@@ -25,9 +25,4 @@ kvm_dirty_ring_reaper(const char *s) "%s"
kvm_dirty_ring_reap(uint64_t count, int64_t t) "reaped %"PRIu64" pages (took %"PRIi64" us)"
kvm_dirty_ring_reaper_kick(const char *reason) "%s"
kvm_dirty_ring_flush(int finished) "%d"
kvm_destroy_vcpu(void) ""
kvm_failed_get_vcpu_mmap_size(void) ""
kvm_cpu_exec(void) ""
kvm_interrupt_exit_request(void) ""
kvm_io_window_exit(void) ""
kvm_run_exit_system_event(int cpu_index, uint32_t event_type) "cpu_index %d, system_event_type %"PRIu32

View File

@@ -1,5 +1,5 @@
specific_ss.add(files('accel-target.c'))
system_ss.add(files('accel-system.c', 'accel-blocker.c'))
specific_ss.add(files('accel-common.c', 'accel-blocker.c'))
system_ss.add(files('accel-softmmu.c'))
user_ss.add(files('accel-user.c'))
subdir('tcg')

accel/stubs/hax-stub.c Normal file
View File

@@ -0,0 +1,24 @@
/*
* QEMU HAXM support
*
* Copyright (c) 2015, Intel Corporation
*
* Copyright 2016 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "sysemu/hax.h"
bool hax_allowed;
int hax_sync_vcpus(void)
{
return 0;
}

View File

@@ -17,13 +17,17 @@
KVMState *kvm_state;
bool kvm_kernel_irqchip;
bool kvm_async_interrupts_allowed;
bool kvm_eventfds_allowed;
bool kvm_irqfds_allowed;
bool kvm_resamplefds_allowed;
bool kvm_msi_via_irqfd_allowed;
bool kvm_gsi_routing_allowed;
bool kvm_gsi_direct_mapping;
bool kvm_allowed;
bool kvm_readonly_mem_allowed;
bool kvm_ioeventfd_any_length_allowed;
bool kvm_msi_use_devid;
bool kvm_direct_msi_allowed;
void kvm_flush_coalesced_mmio_buffer(void)
{
@@ -38,6 +42,11 @@ bool kvm_has_sync_mmu(void)
return false;
}
int kvm_has_many_ioeventfds(void)
{
return 0;
}
int kvm_on_sigbus_vcpu(CPUState *cpu, int code, void *addr)
{
return 1;
@@ -83,6 +92,11 @@ void kvm_irqchip_change_notify(void)
{
}
int kvm_irqchip_add_adapter_route(KVMState *s, AdapterInfo *adapter)
{
return -ENOSYS;
}
int kvm_irqchip_add_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
EventNotifier *rn, int virq)
{
@@ -95,14 +109,9 @@ int kvm_irqchip_remove_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
return -ENOSYS;
}
unsigned int kvm_get_max_memslots(void)
bool kvm_has_free_slot(MachineState *ms)
{
return 0;
}
unsigned int kvm_get_free_memslots(void)
{
return 0;
return false;
}
void kvm_init_cpu_signals(CPUState *cpu)
@@ -124,8 +133,3 @@ uint32_t kvm_dirty_ring_size(void)
{
return 0;
}
bool kvm_hwpoisoned_mem(void)
{
return false;
}

View File

@@ -1,6 +1,7 @@
system_stubs_ss = ss.source_set()
system_stubs_ss.add(when: 'CONFIG_XEN', if_false: files('xen-stub.c'))
system_stubs_ss.add(when: 'CONFIG_KVM', if_false: files('kvm-stub.c'))
system_stubs_ss.add(when: 'CONFIG_TCG', if_false: files('tcg-stub.c'))
sysemu_stubs_ss = ss.source_set()
sysemu_stubs_ss.add(when: 'CONFIG_HAX', if_false: files('hax-stub.c'))
sysemu_stubs_ss.add(when: 'CONFIG_XEN', if_false: files('xen-stub.c'))
sysemu_stubs_ss.add(when: 'CONFIG_KVM', if_false: files('kvm-stub.c'))
sysemu_stubs_ss.add(when: 'CONFIG_TCG', if_false: files('tcg-stub.c'))
specific_ss.add_all(when: ['CONFIG_SYSTEM_ONLY'], if_true: system_stubs_ss)
specific_ss.add_all(when: ['CONFIG_SYSTEM_ONLY'], if_true: sysemu_stubs_ss)

View File

@@ -22,6 +22,10 @@ void tlb_set_dirty(CPUState *cpu, vaddr vaddr)
{
}
void tcg_flush_jmp_cache(CPUState *cpu)
{
}
int probe_access_flags(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx,
bool nonfault, void **phost, uintptr_t retaddr)

View File

@@ -41,7 +41,7 @@ CMPXCHG_HELPER(cmpxchgq_be, uint64_t)
CMPXCHG_HELPER(cmpxchgq_le, uint64_t)
#endif
#if HAVE_CMPXCHG128
#ifdef CONFIG_CMPXCHG128
CMPXCHG_HELPER(cmpxchgo_be, Int128)
CMPXCHG_HELPER(cmpxchgo_le, Int128)
#endif

View File

@@ -69,12 +69,11 @@
# define END _le
#endif
ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
ABI_TYPE cmpv, ABI_TYPE newv,
MemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi,
DATA_SIZE, retaddr);
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr);
DATA_TYPE ret;
#if DATA_SIZE == 16
@@ -88,11 +87,10 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
}
#if DATA_SIZE < 16
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val,
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr, ABI_TYPE val,
MemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi,
DATA_SIZE, retaddr);
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr);
DATA_TYPE ret;
ret = qatomic_xchg__nocheck(haddr, val);
@@ -102,11 +100,11 @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val,
}
#define GEN_ATOMIC_HELPER(X) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE val, MemOpIdx oi, uintptr_t retaddr) \
{ \
DATA_TYPE *haddr, ret; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \
haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \
ret = qatomic_##X(haddr, val); \
ATOMIC_MMU_CLEANUP; \
atomic_trace_rmw_post(env, addr, oi); \
@@ -133,11 +131,11 @@ GEN_ATOMIC_HELPER(xor_fetch)
* of CF_PARALLEL's value, we'll trace just a read and a write.
*/
#define GEN_ATOMIC_HELPER_FN(X, FN, XDATA_TYPE, RET) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE xval, MemOpIdx oi, uintptr_t retaddr) \
{ \
XDATA_TYPE *haddr, cmp, old, new, val = xval; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \
haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \
smp_mb(); \
cmp = qatomic_read__nocheck(haddr); \
do { \
@@ -174,12 +172,11 @@ GEN_ATOMIC_HELPER_FN(umax_fetch, MAX, DATA_TYPE, new)
# define END _be
#endif
ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
ABI_TYPE cmpv, ABI_TYPE newv,
MemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi,
DATA_SIZE, retaddr);
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr);
DATA_TYPE ret;
#if DATA_SIZE == 16
@@ -193,11 +190,10 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
}
#if DATA_SIZE < 16
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val,
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr, ABI_TYPE val,
MemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi,
DATA_SIZE, retaddr);
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr);
ABI_TYPE ret;
ret = qatomic_xchg__nocheck(haddr, BSWAP(val));
@@ -207,11 +203,11 @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val,
}
#define GEN_ATOMIC_HELPER(X) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE val, MemOpIdx oi, uintptr_t retaddr) \
{ \
DATA_TYPE *haddr, ret; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \
haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \
ret = qatomic_##X(haddr, BSWAP(val)); \
ATOMIC_MMU_CLEANUP; \
atomic_trace_rmw_post(env, addr, oi); \
@@ -235,11 +231,11 @@ GEN_ATOMIC_HELPER(xor_fetch)
* of CF_PARALLEL's value, we'll trace just a read and a write.
*/
#define GEN_ATOMIC_HELPER_FN(X, FN, XDATA_TYPE, RET) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE xval, MemOpIdx oi, uintptr_t retaddr) \
{ \
XDATA_TYPE *haddr, ldo, ldn, old, new, val = xval; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \
haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \
smp_mb(); \
ldn = qatomic_read__nocheck(haddr); \
do { \

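To make the template diff concrete, here is a rough expansion of GEN_ATOMIC_HELPER(fetch_add) for DATA_SIZE 4, little-endian, after the revert from abi_ptr to target_ulong. This is a sketch, not generated output; the real symbol name is produced by the ATOMIC_NAME()/glue() machinery.

/* Rough expansion sketch of GEN_ATOMIC_HELPER(fetch_add), DATA_SIZE 4. */
uint32_t helper_atomic_fetch_addl_le(CPUArchState *env, target_ulong addr,
                                     uint32_t val, MemOpIdx oi,
                                     uintptr_t retaddr)
{
    uint32_t *haddr, ret;

    haddr = atomic_mmu_lookup(env, addr, oi, 4, retaddr);
    ret = qatomic_fetch_add(haddr, val);
    ATOMIC_MMU_CLEANUP;
    atomic_trace_rmw_post(env, addr, oi);
    return ret;
}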
View File

@@ -20,8 +20,9 @@
#include "qemu/osdep.h"
#include "sysemu/cpus.h"
#include "sysemu/tcg.h"
#include "exec/exec-all.h"
#include "qemu/plugin.h"
#include "internal-common.h"
#include "internal.h"
bool tcg_allowed;
@@ -32,10 +33,40 @@ void cpu_loop_exit_noexc(CPUState *cpu)
cpu_loop_exit(cpu);
}
#if defined(CONFIG_SOFTMMU)
void cpu_reloading_memory_map(void)
{
if (qemu_in_vcpu_thread() && current_cpu->running) {
/* The guest can in theory prolong the RCU critical section as long
* as it feels like. The major problem with this is that because it
* can do multiple reconfigurations of the memory map within the
* critical section, we could potentially accumulate an unbounded
* collection of memory data structures awaiting reclamation.
*
* Because the only thing we're currently protecting with RCU is the
* memory data structures, it's sufficient to break the critical section
* in this callback, which we know will get called every time the
* memory map is rearranged.
*
* (If we add anything else in the system that uses RCU to protect
* its data structures, we will need to implement some other mechanism
* to force TCG CPUs to exit the critical section, at which point this
* part of this callback might become unnecessary.)
*
* This pair matches cpu_exec's rcu_read_lock()/rcu_read_unlock(), which
* only protects cpu->as->dispatch. Since we know our caller is about
* to reload it, it's safe to split the critical section.
*/
rcu_read_unlock();
rcu_read_lock();
}
}
#endif
void cpu_loop_exit(CPUState *cpu)
{
/* Undo the setting in cpu_tb_exec. */
cpu->neg.can_do_io = true;
cpu->can_do_io = 1;
/* Undo any setting in generated code. */
qemu_plugin_disable_mem_helpers(cpu);
siglongjmp(cpu->jmp_env, 1);

View File

@@ -30,6 +30,9 @@
#include "qemu/rcu.h"
#include "exec/log.h"
#include "qemu/main-loop.h"
#if defined(TARGET_I386) && !defined(CONFIG_USER_ONLY)
#include "hw/i386/apic.h"
#endif
#include "sysemu/cpus.h"
#include "exec/cpu-all.h"
#include "sysemu/cpu-timers.h"
@@ -39,8 +42,7 @@
#include "tb-jmp-cache.h"
#include "tb-hash.h"
#include "tb-context.h"
#include "internal-common.h"
#include "internal-target.h"
#include "internal.h"
/* -icount align implementation. */
@@ -71,7 +73,7 @@ static void align_clocks(SyncClocks *sc, CPUState *cpu)
return;
}
cpu_icount = cpu->icount_extra + cpu->neg.icount_decr.u16.low;
cpu_icount = cpu->icount_extra + cpu_neg(cpu)->icount_decr.u16.low;
sc->diff_clk += icount_to_ns(sc->last_cpu_icount - cpu_icount);
sc->last_cpu_icount = cpu_icount;
@@ -122,7 +124,7 @@ static void init_delay_params(SyncClocks *sc, CPUState *cpu)
sc->realtime_clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT);
sc->diff_clk = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) - sc->realtime_clock;
sc->last_cpu_icount
= cpu->icount_extra + cpu->neg.icount_decr.u16.low;
= cpu->icount_extra + cpu_neg(cpu)->icount_decr.u16.low;
if (sc->diff_clk < max_delay) {
max_delay = sc->diff_clk;
}
@@ -220,7 +222,7 @@ static TranslationBlock *tb_htable_lookup(CPUState *cpu, vaddr pc,
struct tb_desc desc;
uint32_t h;
desc.env = cpu_env(cpu);
desc.env = cpu->env_ptr;
desc.cs_base = cs_base;
desc.flags = flags;
desc.cflags = cflags;
@@ -250,29 +252,43 @@ static inline TranslationBlock *tb_lookup(CPUState *cpu, vaddr pc,
hash = tb_jmp_cache_hash_func(pc);
jc = cpu->tb_jmp_cache;
tb = qatomic_read(&jc->array[hash].tb);
if (likely(tb &&
jc->array[hash].pc == pc &&
tb->cs_base == cs_base &&
tb->flags == flags &&
tb_cflags(tb) == cflags)) {
goto hit;
if (cflags & CF_PCREL) {
/* Use acquire to ensure current load of pc from jc. */
tb = qatomic_load_acquire(&jc->array[hash].tb);
if (likely(tb &&
jc->array[hash].pc == pc &&
tb->cs_base == cs_base &&
tb->flags == flags &&
tb_cflags(tb) == cflags)) {
return tb;
}
tb = tb_htable_lookup(cpu, pc, cs_base, flags, cflags);
if (tb == NULL) {
return NULL;
}
jc->array[hash].pc = pc;
/* Ensure pc is written first. */
qatomic_store_release(&jc->array[hash].tb, tb);
} else {
/* Use rcu_read to ensure current load of pc from *tb. */
tb = qatomic_rcu_read(&jc->array[hash].tb);
if (likely(tb &&
tb->pc == pc &&
tb->cs_base == cs_base &&
tb->flags == flags &&
tb_cflags(tb) == cflags)) {
return tb;
}
tb = tb_htable_lookup(cpu, pc, cs_base, flags, cflags);
if (tb == NULL) {
return NULL;
}
/* Use the pc value already stored in tb->pc. */
qatomic_set(&jc->array[hash].tb, tb);
}
tb = tb_htable_lookup(cpu, pc, cs_base, flags, cflags);
if (tb == NULL) {
return NULL;
}
jc->array[hash].pc = pc;
qatomic_set(&jc->array[hash].tb, tb);
hit:
/*
* As long as tb is not NULL, the contents are consistent. Therefore,
* the virtual PC has to match for non-CF_PCREL translations.
*/
assert((tb_cflags(tb) & CF_PCREL) || tb->pc == pc);
return tb;
}
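The CF_PCREL branch above relies on a standard publish/consume ordering; distilled to its essentials (field names taken from the hunk, everything else illustrative):

/* Writer: the plain store of pc must be visible before tb is published. */
jc->array[hash].pc = pc;
qatomic_store_release(&jc->array[hash].tb, tb);

/* Reader: the acquire load synchronizes with the release store, so a
 * non-NULL tb guarantees the matching pc value is observed. */
tb = qatomic_load_acquire(&jc->array[hash].tb);
if (tb && jc->array[hash].pc == pc) {
    /* cache hit: tb corresponds to this pc */
}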
@@ -282,7 +298,7 @@ static void log_cpu_exec(vaddr pc, CPUState *cpu,
if (qemu_log_in_addr_range(pc)) {
qemu_log_mask(CPU_LOG_EXEC,
"Trace %d: %p [%08" PRIx64
"/%016" VADDR_PRIx "/%08x/%08x] %s\n",
"/%" VADDR_PRIx "/%08x/%08x] %s\n",
cpu->cpu_index, tb->tc.ptr, tb->cs_base, pc,
tb->flags, tb->cflags, lookup_symbol(pc));
@@ -340,9 +356,9 @@ static bool check_for_breakpoints_slow(CPUState *cpu, vaddr pc,
#ifdef CONFIG_USER_ONLY
g_assert_not_reached();
#else
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
assert(tcg_ops->debug_check_breakpoint);
match_bp = tcg_ops->debug_check_breakpoint(cpu);
CPUClass *cc = CPU_GET_CLASS(cpu);
assert(cc->tcg_ops->debug_check_breakpoint);
match_bp = cc->tcg_ops->debug_check_breakpoint(cpu);
#endif
}
@@ -396,14 +412,6 @@ const void *HELPER(lookup_tb_ptr)(CPUArchState *env)
uint64_t cs_base;
uint32_t flags, cflags;
/*
* By definition we've just finished a TB, so I/O is OK.
* Avoid the possibility of calling cpu_io_recompile() if
* a page table walk triggered by tb_lookup() calling
* probe_access_internal() happens to touch an MMIO device.
* The next TB, if we chain to it, will clear the flag again.
*/
cpu->neg.can_do_io = true;
cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
cflags = curr_cflags(cpu);
@@ -436,7 +444,7 @@ const void *HELPER(lookup_tb_ptr)(CPUArchState *env)
static inline TranslationBlock * QEMU_DISABLE_CFI
cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
{
CPUArchState *env = cpu_env(cpu);
CPUArchState *env = cpu->env_ptr;
uintptr_t ret;
TranslationBlock *last_tb;
const void *tb_ptr = itb->tc.ptr;
@@ -447,7 +455,7 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
qemu_thread_jit_execute();
ret = tcg_qemu_tb_exec(env, tb_ptr);
cpu->neg.can_do_io = true;
cpu->can_do_io = 1;
qemu_plugin_disable_mem_helpers(cpu);
/*
* TODO: Delay swapping back to the read-write region of the TB
@@ -467,11 +475,10 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
* counter hit zero); we must restore the guest PC to the address
* of the start of the TB.
*/
CPUClass *cc = cpu->cc;
const TCGCPUOps *tcg_ops = cc->tcg_ops;
CPUClass *cc = CPU_GET_CLASS(cpu);
if (tcg_ops->synchronize_from_tb) {
tcg_ops->synchronize_from_tb(cpu, last_tb);
if (cc->tcg_ops->synchronize_from_tb) {
cc->tcg_ops->synchronize_from_tb(cpu, last_tb);
} else {
tcg_debug_assert(!(tb_cflags(last_tb) & CF_PCREL));
assert(cc->set_pc);
@@ -480,7 +487,7 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
vaddr pc = log_pc(cpu, last_tb);
if (qemu_log_in_addr_range(pc)) {
qemu_log("Stopped execution of TB chain before %p [%016"
qemu_log("Stopped execution of TB chain before %p [%"
VADDR_PRIx "] %s\n",
last_tb->tc.ptr, pc, lookup_symbol(pc));
}
@@ -503,62 +510,25 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
static void cpu_exec_enter(CPUState *cpu)
{
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
CPUClass *cc = CPU_GET_CLASS(cpu);
if (tcg_ops->cpu_exec_enter) {
tcg_ops->cpu_exec_enter(cpu);
if (cc->tcg_ops->cpu_exec_enter) {
cc->tcg_ops->cpu_exec_enter(cpu);
}
}
static void cpu_exec_exit(CPUState *cpu)
{
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
CPUClass *cc = CPU_GET_CLASS(cpu);
if (tcg_ops->cpu_exec_exit) {
tcg_ops->cpu_exec_exit(cpu);
if (cc->tcg_ops->cpu_exec_exit) {
cc->tcg_ops->cpu_exec_exit(cpu);
}
}
static void cpu_exec_longjmp_cleanup(CPUState *cpu)
{
/* Non-buggy compilers preserve this; assert the correct value. */
g_assert(cpu == current_cpu);
#ifdef CONFIG_USER_ONLY
clear_helper_retaddr();
if (have_mmap_lock()) {
mmap_unlock();
}
#else
/*
* For softmmu, a tlb_fill fault during translation will land here,
* and we need to release any page locks held. In system mode we
* have one tcg_ctx per thread, so we know it was this cpu doing
* the translation.
*
* Alternative 1: Install a cleanup to be called via an exception
* handling safe longjmp. It seems plausible that all our hosts
* support such a thing. We'd have to properly register unwind info
* for the JIT for EH, rather that just for GDB.
*
* Alternative 2: Set and restore cpu->jmp_env in tb_gen_code to
* capture the cpu_loop_exit longjmp, perform the cleanup, and
* jump again to arrive here.
*/
if (tcg_ctx->gen_tb) {
tb_unlock_pages(tcg_ctx->gen_tb);
tcg_ctx->gen_tb = NULL;
}
#endif
if (bql_locked()) {
bql_unlock();
}
assert_no_pages_locked();
}
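The cleanup above runs on the longjmp path out of the JIT. A minimal POSIX sketch of that control flow, with illustrative names rather than QEMU APIs:

#include <setjmp.h>
#include <stdio.h>

static sigjmp_buf jmp_env;

/* Stand-in for cpu_loop_exit(): unwind back to the sigsetjmp point. */
static void loop_exit_demo(void)
{
    siglongjmp(jmp_env, 1);
}

int main(void)
{
    if (sigsetjmp(jmp_env, 0) != 0) {
        /* The longjmp path: this is where locks must be dropped. */
        puts("cleanup after longjmp");
        return 0;
    }
    puts("entering execution loop");
    loop_exit_demo();        /* does not return */
    return 1;
}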
void cpu_exec_step_atomic(CPUState *cpu)
{
CPUArchState *env = cpu_env(cpu);
CPUArchState *env = cpu->env_ptr;
TranslationBlock *tb;
vaddr pc;
uint64_t cs_base;
@@ -598,7 +568,16 @@ void cpu_exec_step_atomic(CPUState *cpu)
cpu_tb_exec(cpu, tb, &tb_exit);
cpu_exec_exit(cpu);
} else {
cpu_exec_longjmp_cleanup(cpu);
#ifdef CONFIG_USER_ONLY
clear_helper_retaddr();
if (have_mmap_lock()) {
mmap_unlock();
}
#endif
if (qemu_mutex_iothread_locked()) {
qemu_mutex_unlock_iothread();
}
assert_no_pages_locked();
}
/*
@@ -669,11 +648,15 @@ static inline bool cpu_handle_halt(CPUState *cpu)
{
#ifndef CONFIG_USER_ONLY
if (cpu->halted) {
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
if (tcg_ops->cpu_exec_halt) {
tcg_ops->cpu_exec_halt(cpu);
#if defined(TARGET_I386)
if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
X86CPU *x86_cpu = X86_CPU(cpu);
qemu_mutex_lock_iothread();
apic_poll_irq(x86_cpu->apic_state);
cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
qemu_mutex_unlock_iothread();
}
#endif /* TARGET_I386 */
if (!cpu_has_work(cpu)) {
return true;
}
@@ -687,7 +670,7 @@ static inline bool cpu_handle_halt(CPUState *cpu)
static inline void cpu_handle_debug_exception(CPUState *cpu)
{
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
CPUClass *cc = CPU_GET_CLASS(cpu);
CPUWatchpoint *wp;
if (!cpu->watchpoint_hit) {
@@ -696,8 +679,8 @@ static inline void cpu_handle_debug_exception(CPUState *cpu)
}
}
if (tcg_ops->debug_excp_handler) {
tcg_ops->debug_excp_handler(cpu);
if (cc->tcg_ops->debug_excp_handler) {
cc->tcg_ops->debug_excp_handler(cpu);
}
}
@@ -706,7 +689,7 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
if (cpu->exception_index < 0) {
#ifndef CONFIG_USER_ONLY
if (replay_has_exception()
&& cpu->neg.icount_decr.u16.low + cpu->icount_extra == 0) {
&& cpu_neg(cpu)->icount_decr.u16.low + cpu->icount_extra == 0) {
/* Execute just one insn to trigger exception pending in the log */
cpu->cflags_next_tb = (curr_cflags(cpu) & ~CF_USE_ICOUNT)
| CF_NOIRQ | 1;
@@ -714,7 +697,6 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
#endif
return false;
}
if (cpu->exception_index >= EXCP_INTERRUPT) {
/* exit request from the cpu execution loop */
*ret = cpu->exception_index;
@@ -723,59 +705,62 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
}
cpu->exception_index = -1;
return true;
}
} else {
#if defined(CONFIG_USER_ONLY)
/*
* If user mode only, we simulate a fake exception which will be
* handled outside the cpu execution loop.
*/
/* if user mode only, we simulate a fake exception
which will be handled outside the cpu execution
loop */
#if defined(TARGET_I386)
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
tcg_ops->fake_user_interrupt(cpu);
CPUClass *cc = CPU_GET_CLASS(cpu);
cc->tcg_ops->fake_user_interrupt(cpu);
#endif /* TARGET_I386 */
*ret = cpu->exception_index;
cpu->exception_index = -1;
return true;
#else
if (replay_exception()) {
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
bql_lock();
tcg_ops->do_interrupt(cpu);
bql_unlock();
*ret = cpu->exception_index;
cpu->exception_index = -1;
return true;
#else
if (replay_exception()) {
CPUClass *cc = CPU_GET_CLASS(cpu);
qemu_mutex_lock_iothread();
cc->tcg_ops->do_interrupt(cpu);
qemu_mutex_unlock_iothread();
cpu->exception_index = -1;
if (unlikely(cpu->singlestep_enabled)) {
/*
* After processing the exception, ensure an EXCP_DEBUG is
* raised when single-stepping so that GDB doesn't miss the
* next instruction.
*/
*ret = EXCP_DEBUG;
cpu_handle_debug_exception(cpu);
if (unlikely(cpu->singlestep_enabled)) {
/*
* After processing the exception, ensure an EXCP_DEBUG is
* raised when single-stepping so that GDB doesn't miss the
* next instruction.
*/
*ret = EXCP_DEBUG;
cpu_handle_debug_exception(cpu);
return true;
}
} else if (!replay_has_interrupt()) {
/* give a chance to iothread in replay mode */
*ret = EXCP_INTERRUPT;
return true;
}
} else if (!replay_has_interrupt()) {
/* give a chance to iothread in replay mode */
*ret = EXCP_INTERRUPT;
return true;
}
#endif
}
return false;
}
static inline bool icount_exit_request(CPUState *cpu)
#ifndef CONFIG_USER_ONLY
/*
* CPU_INTERRUPT_POLL is a virtual event which gets converted into a
* "real" interrupt event later. It does not need to be recorded for
* replay purposes.
*/
static inline bool need_replay_interrupt(int interrupt_request)
{
if (!icount_enabled()) {
return false;
}
if (cpu->cflags_next_tb != -1 && !(cpu->cflags_next_tb & CF_USE_ICOUNT)) {
return false;
}
return cpu->neg.icount_decr.u16.low + cpu->icount_extra == 0;
#if defined(TARGET_I386)
return !(interrupt_request & CPU_INTERRUPT_POLL);
#else
return true;
#endif
}
#endif /* !CONFIG_USER_ONLY */
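For reference, the icount exit test sums the 16-bit in-TB decrementer with icount_extra; the budget is split because the decrementer field is only 16 bits wide. A standalone sketch with illustrative numbers:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Illustrative budget, larger than the 16-bit in-TB decrementer. */
    uint64_t icount_budget = 100000;
    uint16_t low = icount_budget < 0xffff ? (uint16_t)icount_budget : 0xffff;
    uint64_t icount_extra = icount_budget - low;

    /* The exit check fires once low + extra has counted down to zero. */
    printf("decrementer=%" PRIu16 " extra=%" PRIu64 "\n", low, icount_extra);
    return 0;
}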
static inline bool cpu_handle_interrupt(CPUState *cpu,
TranslationBlock **last_tb)
@@ -794,11 +779,11 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
* Ensure zeroing happens before reading cpu->exit_request or
* cpu->interrupt_request (see also smp_wmb in cpu_exit())
*/
qatomic_set_mb(&cpu->neg.icount_decr.u16.high, 0);
qatomic_set_mb(&cpu_neg(cpu)->icount_decr.u16.high, 0);
if (unlikely(qatomic_read(&cpu->interrupt_request))) {
int interrupt_request;
bql_lock();
qemu_mutex_lock_iothread();
interrupt_request = cpu->interrupt_request;
if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
/* Mask out external interrupts for this step. */
@@ -807,7 +792,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
if (interrupt_request & CPU_INTERRUPT_DEBUG) {
cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
cpu->exception_index = EXCP_DEBUG;
bql_unlock();
qemu_mutex_unlock_iothread();
return true;
}
#if !defined(CONFIG_USER_ONLY)
@@ -818,7 +803,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
cpu->interrupt_request &= ~CPU_INTERRUPT_HALT;
cpu->halted = 1;
cpu->exception_index = EXCP_HLT;
bql_unlock();
qemu_mutex_unlock_iothread();
return true;
}
#if defined(TARGET_I386)
@@ -829,14 +814,14 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
cpu_svm_check_intercept_param(env, SVM_EXIT_INIT, 0, 0);
do_cpu_init(x86_cpu);
cpu->exception_index = EXCP_HALTED;
bql_unlock();
qemu_mutex_unlock_iothread();
return true;
}
#else
else if (interrupt_request & CPU_INTERRUPT_RESET) {
replay_interrupt();
cpu_reset(cpu);
bql_unlock();
qemu_mutex_unlock_iothread();
return true;
}
#endif /* !TARGET_I386 */
@@ -845,12 +830,11 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
True when it is, and we should restart on a new TB,
and via longjmp via cpu_loop_exit. */
else {
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
CPUClass *cc = CPU_GET_CLASS(cpu);
if (tcg_ops->cpu_exec_interrupt &&
tcg_ops->cpu_exec_interrupt(cpu, interrupt_request)) {
if (!tcg_ops->need_replay_interrupt ||
tcg_ops->need_replay_interrupt(interrupt_request)) {
if (cc->tcg_ops->cpu_exec_interrupt &&
cc->tcg_ops->cpu_exec_interrupt(cpu, interrupt_request)) {
if (need_replay_interrupt(interrupt_request)) {
replay_interrupt();
}
/*
@@ -860,7 +844,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
*/
if (unlikely(cpu->singlestep_enabled)) {
cpu->exception_index = EXCP_DEBUG;
bql_unlock();
qemu_mutex_unlock_iothread();
return true;
}
cpu->exception_index = -1;
@@ -879,11 +863,14 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
}
/* If we exit via cpu_loop_exit/longjmp it is reset in cpu_exec */
bql_unlock();
qemu_mutex_unlock_iothread();
}
/* Finally, check if we need to exit to the main loop. */
if (unlikely(qatomic_read(&cpu->exit_request)) || icount_exit_request(cpu)) {
if (unlikely(qatomic_read(&cpu->exit_request))
|| (icount_enabled()
&& (cpu->cflags_next_tb == -1 || cpu->cflags_next_tb & CF_USE_ICOUNT)
&& cpu_neg(cpu)->icount_decr.u16.low + cpu->icount_extra == 0)) {
qatomic_set(&cpu->exit_request, 0);
if (cpu->exception_index == -1) {
cpu->exception_index = EXCP_INTERRUPT;
@@ -908,7 +895,7 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
}
*last_tb = NULL;
insns_left = qatomic_read(&cpu->neg.icount_decr.u32);
insns_left = qatomic_read(&cpu_neg(cpu)->icount_decr.u32);
if (insns_left < 0) {
/* Something asked us to stop executing chained TBs; just
* continue round the main loop. Whatever requested the exit
@@ -927,7 +914,7 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
icount_update(cpu);
/* Refill decrementer and continue execution. */
insns_left = MIN(0xffff, cpu->icount_budget);
cpu->neg.icount_decr.u16.low = insns_left;
cpu_neg(cpu)->icount_decr.u16.low = insns_left;
cpu->icount_extra = cpu->icount_budget - insns_left;
/*
@@ -961,7 +948,7 @@ cpu_exec_loop(CPUState *cpu, SyncClocks *sc)
uint64_t cs_base;
uint32_t flags, cflags;
cpu_get_tb_cpu_state(cpu_env(cpu), &pc, &cs_base, &flags);
cpu_get_tb_cpu_state(cpu->env_ptr, &pc, &cs_base, &flags);
/*
* When requested, use an exact setting for cflags for the next
@@ -996,8 +983,14 @@ cpu_exec_loop(CPUState *cpu, SyncClocks *sc)
*/
h = tb_jmp_cache_hash_func(pc);
jc = cpu->tb_jmp_cache;
jc->array[h].pc = pc;
qatomic_set(&jc->array[h].tb, tb);
if (cflags & CF_PCREL) {
jc->array[h].pc = pc;
/* Ensure pc is written first. */
qatomic_store_release(&jc->array[h].tb, tb);
} else {
/* Use the pc value already stored in tb->pc. */
qatomic_set(&jc->array[h].tb, tb);
}
}
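The CF_PCREL branch orders the two stores so that a concurrent reader which load-acquires 'tb' is guaranteed to see the matching 'pc'. A standalone C11 sketch of that pairing (struct and function names are illustrative, not QEMU APIs):

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct jc_entry {
    uintptr_t pc;
    _Atomic(void *) tb;
};

/* Writer: publish pc before tb; the release pairs with the acquire below. */
static void jc_publish(struct jc_entry *e, uintptr_t pc, void *tb)
{
    e->pc = pc;
    atomic_store_explicit(&e->tb, tb, memory_order_release);
}

/* Reader: a non-NULL tb guarantees the matching pc is already visible. */
static void *jc_lookup(struct jc_entry *e, uintptr_t pc)
{
    void *tb = atomic_load_explicit(&e->tb, memory_order_acquire);
    return (tb && e->pc == pc) ? tb : NULL;
}

int main(void)
{
    struct jc_entry e = { 0 };
    int dummy_tb;
    jc_publish(&e, 0x1000, &dummy_tb);
    printf("hit: %d\n", jc_lookup(&e, 0x1000) != NULL);
    return 0;
}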
#ifndef CONFIG_USER_ONLY
@@ -1030,7 +1023,20 @@ static int cpu_exec_setjmp(CPUState *cpu, SyncClocks *sc)
{
/* Prepare setjmp context for exception handling. */
if (unlikely(sigsetjmp(cpu->jmp_env, 0) != 0)) {
cpu_exec_longjmp_cleanup(cpu);
/* Non-buggy compilers preserve this; assert the correct value. */
g_assert(cpu == current_cpu);
#ifdef CONFIG_USER_ONLY
clear_helper_retaddr();
if (have_mmap_lock()) {
mmap_unlock();
}
#endif
if (qemu_mutex_iothread_locked()) {
qemu_mutex_unlock_iothread();
}
assert_no_pages_locked();
}
return cpu_exec_loop(cpu, sc);
@@ -1048,7 +1054,7 @@ int cpu_exec(CPUState *cpu)
return EXCP_HALTED;
}
RCU_READ_LOCK_GUARD();
rcu_read_lock();
cpu_exec_enter(cpu);
/*
@@ -1062,15 +1068,18 @@ int cpu_exec(CPUState *cpu)
ret = cpu_exec_setjmp(cpu, &sc);
cpu_exec_exit(cpu);
rcu_read_unlock();
return ret;
}
bool tcg_exec_realizefn(CPUState *cpu, Error **errp)
void tcg_exec_realizefn(CPUState *cpu, Error **errp)
{
static bool tcg_target_initialized;
CPUClass *cc = CPU_GET_CLASS(cpu);
if (!tcg_target_initialized) {
cpu->cc->tcg_ops->initialize();
cc->tcg_ops->initialize();
tcg_target_initialized = true;
}
@@ -1080,8 +1089,6 @@ bool tcg_exec_realizefn(CPUState *cpu, Error **errp)
tcg_iommu_init_notifier_list(cpu);
#endif /* !CONFIG_USER_ONLY */
/* qemu_plugin_vcpu_init_hook delayed until cpu_index assigned. */
return true;
}
/* undo the initializations in reverse order */

File diff suppressed because it is too large.


@@ -6,10 +6,11 @@
#include "qemu/osdep.h"
#include "qemu/lockable.h"
#include "tcg/debuginfo.h"
#include <elfutils/libdwfl.h>
#include "debuginfo.h"
static QemuMutex lock;
static Dwfl *dwfl;
static const Dwfl_Callbacks dwfl_callbacks = {


@@ -4,8 +4,8 @@
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef TCG_DEBUGINFO_H
#define TCG_DEBUGINFO_H
#ifndef ACCEL_TCG_DEBUGINFO_H
#define ACCEL_TCG_DEBUGINFO_H
#include "qemu/bitops.h"


@@ -1,26 +0,0 @@
/*
* Internal execution defines for qemu (target agnostic)
*
* Copyright (c) 2003 Fabrice Bellard
*
* SPDX-License-Identifier: LGPL-2.1-or-later
*/
#ifndef ACCEL_TCG_INTERNAL_COMMON_H
#define ACCEL_TCG_INTERNAL_COMMON_H
#include "exec/translation-block.h"
extern int64_t max_delay;
extern int64_t max_advance;
/*
* Return true if CS is not running in parallel with other cpus, either
* because there are no other cpus or we are within an exclusive context.
*/
static inline bool cpu_in_serial_context(CPUState *cs)
{
return !(cs->tcg_cflags & CF_PARALLEL) || cpu_in_exclusive_context(cs);
}
#endif


@@ -1,16 +1,15 @@
/*
* Internal execution defines for qemu (target specific)
* Internal execution defines for qemu
*
* Copyright (c) 2003 Fabrice Bellard
*
* SPDX-License-Identifier: LGPL-2.1-or-later
*/
#ifndef ACCEL_TCG_INTERNAL_TARGET_H
#define ACCEL_TCG_INTERNAL_TARGET_H
#ifndef ACCEL_TCG_INTERNAL_H
#define ACCEL_TCG_INTERNAL_H
#include "exec/exec-all.h"
#include "exec/translate-all.h"
/*
* Access to the various translations structures need to be serialised
@@ -36,32 +35,6 @@ static inline void page_table_config_init(void) { }
void page_table_config_init(void);
#endif
#ifdef CONFIG_USER_ONLY
/*
* For user-only, page_protect sets the page read-only.
* Since most execution is already on read-only pages, and we'd need to
* account for other TBs on the same page, defer undoing any page protection
* until we receive the write fault.
*/
static inline void tb_lock_page0(tb_page_addr_t p0)
{
page_protect(p0);
}
static inline void tb_lock_page1(tb_page_addr_t p0, tb_page_addr_t p1)
{
page_protect(p1);
}
static inline void tb_unlock_page1(tb_page_addr_t p0, tb_page_addr_t p1) { }
static inline void tb_unlock_pages(TranslationBlock *tb) { }
#else
void tb_lock_page0(tb_page_addr_t);
void tb_lock_page1(tb_page_addr_t, tb_page_addr_t);
void tb_unlock_page1(tb_page_addr_t, tb_page_addr_t);
void tb_unlock_pages(TranslationBlock *);
#endif
#ifdef CONFIG_SOFTMMU
void tb_invalidate_phys_range_fast(ram_addr_t ram_addr,
unsigned size,
@@ -75,14 +48,12 @@ TranslationBlock *tb_gen_code(CPUState *cpu, vaddr pc,
void page_init(void);
void tb_htable_init(void);
void tb_reset_jump(TranslationBlock *tb, int n);
TranslationBlock *tb_link_page(TranslationBlock *tb);
TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
tb_page_addr_t phys_page2);
bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc);
void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
uintptr_t host_pc);
bool tcg_exec_realizefn(CPUState *cpu, Error **errp);
void tcg_exec_unrealizefn(CPUState *cpu);
/* Return the current PC from CPU, which may be cached in TB. */
static inline vaddr log_pc(CPUState *cpu, const TranslationBlock *tb)
{
@@ -93,6 +64,18 @@ static inline vaddr log_pc(CPUState *cpu, const TranslationBlock *tb)
}
}
/*
* Return true if CS is not running in parallel with other cpus, either
* because there are no other cpus or we are within an exclusive context.
*/
static inline bool cpu_in_serial_context(CPUState *cs)
{
return !(cs->tcg_cflags & CF_PARALLEL) || cpu_in_exclusive_context(cs);
}
extern int64_t max_delay;
extern int64_t max_advance;
extern bool one_insn_per_tb;
/**


@@ -26,7 +26,7 @@
* If the operation must be split into two operations to be
* examined separately for atomicity, return -lg2.
*/
static int required_atomicity(CPUState *cpu, uintptr_t p, MemOp memop)
static int required_atomicity(CPUArchState *env, uintptr_t p, MemOp memop)
{
MemOp atom = memop & MO_ATOM_MASK;
MemOp size = memop & MO_SIZE;
@@ -76,7 +76,7 @@ static int required_atomicity(CPUState *cpu, uintptr_t p, MemOp memop)
/*
* Examine the alignment of p to determine if there are subobjects
* that must be aligned. Note that we only really need ctz4() --
* any more significant bits are discarded by the immediately
* any more sigificant bits are discarded by the immediately
* following comparison.
*/
tmp = ctz32(p);
@@ -93,7 +93,7 @@ static int required_atomicity(CPUState *cpu, uintptr_t p, MemOp memop)
* host atomicity in order to avoid racing. This reduction
* avoids looping with cpu_loop_exit_atomic.
*/
if (cpu_in_serial_context(cpu)) {
if (cpu_in_serial_context(env_cpu(env))) {
return MO_8;
}
return atmax;
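The ctz32(p) step finds the largest naturally aligned subobject that starts at p. A standalone demo of that reasoning, assuming the GCC/Clang __builtin_ctz:

#include <stdint.h>
#include <stdio.h>

/* Count trailing zeros, with 32 for a zero input (mirrors ctz32()). */
static int ctz32_demo(uint32_t v)
{
    return v ? __builtin_ctz(v) : 32;
}

int main(void)
{
    /* For an access at host address 0x1006, the lowest set bit says the
       largest naturally aligned subobject starting there is 2 bytes. */
    uint32_t p = 0x1006;
    printf("largest aligned subobject: %d bytes\n", 1 << ctz32_demo(p));
    return 0;
}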
@@ -139,14 +139,14 @@ static inline uint64_t load_atomic8(void *pv)
/**
* load_atomic8_or_exit:
* @cpu: generic cpu state
* @env: cpu context
* @ra: host unwind address
* @pv: host address
*
* Atomically load 8 aligned bytes from @pv.
* If this is not possible, longjmp out to restart serially.
*/
static uint64_t load_atomic8_or_exit(CPUState *cpu, uintptr_t ra, void *pv)
static uint64_t load_atomic8_or_exit(CPUArchState *env, uintptr_t ra, void *pv)
{
if (HAVE_al8) {
return load_atomic8(pv);
@@ -159,28 +159,26 @@ static uint64_t load_atomic8_or_exit(CPUState *cpu, uintptr_t ra, void *pv)
* another process, because the fallback start_exclusive solution
* provides no protection across processes.
*/
WITH_MMAP_LOCK_GUARD() {
if (!page_check_range(h2g(pv), 8, PAGE_WRITE_ORG)) {
uint64_t *p = __builtin_assume_aligned(pv, 8);
return *p;
}
if (!page_check_range(h2g(pv), 8, PAGE_WRITE_ORG)) {
uint64_t *p = __builtin_assume_aligned(pv, 8);
return *p;
}
#endif
/* Ultimate fallback: re-execute in serial context. */
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
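The cmpxchg fallback works because a compare-and-swap of (0, 0) atomically returns the current value while writing only if it was already zero; it still needs a writable page, which is what the PAGE_WRITE_ORG check guards. An illustrative stand-in using the GCC/Clang __sync builtin:

#include <stdint.h>
#include <stdio.h>

/* cmpxchg(p, 0, 0) performs an atomic read: it returns the current
   value and only stores if that value was already 0. */
static uint64_t load8_via_cmpxchg(uint64_t *p)
{
    return __sync_val_compare_and_swap(p, 0, 0);
}

int main(void)
{
    uint64_t x = 0x1122334455667788ull;
    printf("%#llx\n", (unsigned long long)load8_via_cmpxchg(&x));
    return 0;
}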
/**
* load_atomic16_or_exit:
* @cpu: generic cpu state
* @env: cpu context
* @ra: host unwind address
* @pv: host address
*
* Atomically load 16 aligned bytes from @pv.
* If this is not possible, longjmp out to restart serially.
*/
static Int128 load_atomic16_or_exit(CPUState *cpu, uintptr_t ra, void *pv)
static Int128 load_atomic16_or_exit(CPUArchState *env, uintptr_t ra, void *pv)
{
Int128 *p = __builtin_assume_aligned(pv, 16);
@@ -188,31 +186,29 @@ static Int128 load_atomic16_or_exit(CPUState *cpu, uintptr_t ra, void *pv)
return atomic16_read_ro(p);
}
#ifdef CONFIG_USER_ONLY
/*
* We can only use cmpxchg to emulate a load if the page is writable.
* If the page is not writable, then assume the value is immutable
* and requires no locking. This ignores the case of MAP_SHARED with
* another process, because the fallback start_exclusive solution
* provides no protection across processes.
*
* In system mode all guest pages are writable. For user mode,
* we must take mmap_lock so that the query remains valid until
* the write is complete -- tests/tcg/multiarch/munmap-pthread.c
* is an example that can race.
*/
WITH_MMAP_LOCK_GUARD() {
#ifdef CONFIG_USER_ONLY
if (!page_check_range(h2g(p), 16, PAGE_WRITE_ORG)) {
return *p;
}
if (!page_check_range(h2g(p), 16, PAGE_WRITE_ORG)) {
return *p;
}
#endif
if (HAVE_ATOMIC128_RW) {
return atomic16_read_rw(p);
}
/*
* In system mode all guest pages are writable, and for user-only
* we have just checked writability. Try cmpxchg.
*/
if (HAVE_ATOMIC128_RW) {
return atomic16_read_rw(p);
}
/* Ultimate fallback: re-execute in serial context. */
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
/**
@@ -263,7 +259,7 @@ static uint64_t load_atom_extract_al8x2(void *pv)
/**
* load_atom_extract_al8_or_exit:
* @cpu: generic cpu state
* @env: cpu context
* @ra: host unwind address
* @pv: host address
* @s: object size in bytes, @s <= 4.
@@ -273,7 +269,7 @@ static uint64_t load_atom_extract_al8x2(void *pv)
* 8-byte load and extract.
* The value is returned in the low bits of a uint32_t.
*/
static uint32_t load_atom_extract_al8_or_exit(CPUState *cpu, uintptr_t ra,
static uint32_t load_atom_extract_al8_or_exit(CPUArchState *env, uintptr_t ra,
void *pv, int s)
{
uintptr_t pi = (uintptr_t)pv;
@@ -281,12 +277,12 @@ static uint32_t load_atom_extract_al8_or_exit(CPUState *cpu, uintptr_t ra,
int shr = (HOST_BIG_ENDIAN ? 8 - s - o : o) * 8;
pv = (void *)(pi & ~7);
return load_atomic8_or_exit(cpu, ra, pv) >> shr;
return load_atomic8_or_exit(env, ra, pv) >> shr;
}
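A standalone illustration of the align-and-shift extraction, assuming a little-endian host (on big-endian hosts the shift is (8 - s - o) * 8, as in the code above):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* To load 2 bytes at offset o = 3, load the aligned 8 bytes and
       shift; memcpy stands in for the atomic load above. */
    uint8_t buf[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };
    uint64_t whole;
    int o = 3;

    memcpy(&whole, buf, 8);                 /* the "atomic" 8-byte load */
    int shr = o * 8;                        /* little-endian case */
    uint16_t val = (uint16_t)(whole >> shr);
    printf("bytes at offset %d: %#x\n", o, val);   /* prints 0x403 */
    return 0;
}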
/**
* load_atom_extract_al16_or_exit:
* @cpu: generic cpu state
* @env: cpu context
* @ra: host unwind address
* @p: host address
* @s: object size in bytes, @s <= 8.
@@ -299,7 +295,7 @@ static uint32_t load_atom_extract_al8_or_exit(CPUState *cpu, uintptr_t ra,
*
* If this is not possible, longjmp out to restart serially.
*/
static uint64_t load_atom_extract_al16_or_exit(CPUState *cpu, uintptr_t ra,
static uint64_t load_atom_extract_al16_or_exit(CPUArchState *env, uintptr_t ra,
void *pv, int s)
{
uintptr_t pi = (uintptr_t)pv;
@@ -312,7 +308,7 @@ static uint64_t load_atom_extract_al16_or_exit(CPUState *cpu, uintptr_t ra,
* Provoke SIGBUS if possible otherwise.
*/
pv = (void *)(pi & ~7);
r = load_atomic16_or_exit(cpu, ra, pv);
r = load_atomic16_or_exit(env, ra, pv);
r = int128_urshift(r, shr);
return int128_getlo(r);
@@ -394,7 +390,7 @@ static inline uint64_t load_atom_8_by_8_or_4(void *pv)
*
* Load 2 bytes from @p, honoring the atomicity of @memop.
*/
static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra,
static uint16_t load_atom_2(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop)
{
uintptr_t pi = (uintptr_t)pv;
@@ -404,13 +400,10 @@ static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra,
return load_atomic2(pv);
}
if (HAVE_ATOMIC128_RO) {
intptr_t left_in_page = -(pi | TARGET_PAGE_MASK);
if (likely(left_in_page > 8)) {
return load_atom_extract_al16_or_al8(pv, 2);
}
return load_atom_extract_al16_or_al8(pv, 2);
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
switch (atmax) {
case MO_8:
return lduw_he_p(pv);
@@ -421,9 +414,9 @@ static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra,
return load_atomic4(pv - 1) >> 8;
}
if ((pi & 15) != 7) {
return load_atom_extract_al8_or_exit(cpu, ra, pv, 2);
return load_atom_extract_al8_or_exit(env, ra, pv, 2);
}
return load_atom_extract_al16_or_exit(cpu, ra, pv, 2);
return load_atom_extract_al16_or_exit(env, ra, pv, 2);
default:
g_assert_not_reached();
}
@@ -436,7 +429,7 @@ static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra,
*
* Load 4 bytes from @p, honoring the atomicity of @memop.
*/
static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra,
static uint32_t load_atom_4(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop)
{
uintptr_t pi = (uintptr_t)pv;
@@ -446,13 +439,10 @@ static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra,
return load_atomic4(pv);
}
if (HAVE_ATOMIC128_RO) {
intptr_t left_in_page = -(pi | TARGET_PAGE_MASK);
if (likely(left_in_page > 8)) {
return load_atom_extract_al16_or_al8(pv, 4);
}
return load_atom_extract_al16_or_al8(pv, 4);
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
switch (atmax) {
case MO_8:
case MO_16:
@@ -466,9 +456,9 @@ static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra,
return load_atom_extract_al4x2(pv);
case MO_32:
if (!(pi & 4)) {
return load_atom_extract_al8_or_exit(cpu, ra, pv, 4);
return load_atom_extract_al8_or_exit(env, ra, pv, 4);
}
return load_atom_extract_al16_or_exit(cpu, ra, pv, 4);
return load_atom_extract_al16_or_exit(env, ra, pv, 4);
default:
g_assert_not_reached();
}
@@ -481,7 +471,7 @@ static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra,
*
* Load 8 bytes from @p, honoring the atomicity of @memop.
*/
static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra,
static uint64_t load_atom_8(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop)
{
uintptr_t pi = (uintptr_t)pv;
@@ -498,12 +488,12 @@ static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra,
return load_atom_extract_al16_or_al8(pv, 8);
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
if (atmax == MO_64) {
if (!HAVE_al8 && (pi & 7) == 0) {
load_atomic8_or_exit(cpu, ra, pv);
load_atomic8_or_exit(env, ra, pv);
}
return load_atom_extract_al16_or_exit(cpu, ra, pv, 8);
return load_atom_extract_al16_or_exit(env, ra, pv, 8);
}
if (HAVE_al8_fast) {
return load_atom_extract_al8x2(pv);
@@ -519,7 +509,7 @@ static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra,
if (HAVE_al8) {
return load_atom_extract_al8x2(pv);
}
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
default:
g_assert_not_reached();
}
@@ -532,7 +522,7 @@ static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra,
*
* Load 16 bytes from @p, honoring the atomicity of @memop.
*/
static Int128 load_atom_16(CPUState *cpu, uintptr_t ra,
static Int128 load_atom_16(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop)
{
uintptr_t pi = (uintptr_t)pv;
@@ -548,7 +538,7 @@ static Int128 load_atom_16(CPUState *cpu, uintptr_t ra,
return atomic16_read_ro(pv);
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
switch (atmax) {
case MO_8:
memcpy(&r, pv, 16);
@@ -563,20 +553,20 @@ static Int128 load_atom_16(CPUState *cpu, uintptr_t ra,
break;
case MO_64:
if (!HAVE_al8) {
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
a = load_atomic8(pv);
b = load_atomic8(pv + 8);
break;
case -MO_64:
if (!HAVE_al8) {
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
a = load_atom_extract_al8x2(pv);
b = load_atom_extract_al8x2(pv + 8);
break;
case MO_128:
return load_atomic16_or_exit(cpu, ra, pv);
return load_atomic16_or_exit(env, ra, pv);
default:
g_assert_not_reached();
}
@@ -825,7 +815,7 @@ static uint64_t store_whole_le16(void *pv, int size, Int128 val_le)
int sh = o * 8;
Int128 m, v;
qemu_build_assert(HAVE_CMPXCHG128);
qemu_build_assert(HAVE_ATOMIC128_RW);
/* Like MAKE_64BIT_MASK(0, sz), but larger. */
if (sz <= 64) {
@@ -857,7 +847,7 @@ static uint64_t store_whole_le16(void *pv, int size, Int128 val_le)
*
* Store 2 bytes to @p, honoring the atomicity of @memop.
*/
static void store_atom_2(CPUState *cpu, uintptr_t ra,
static void store_atom_2(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop, uint16_t val)
{
uintptr_t pi = (uintptr_t)pv;
@@ -868,7 +858,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
return;
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
if (atmax == MO_8) {
stw_he_p(pv, val);
return;
@@ -887,7 +877,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
return;
}
} else if ((pi & 15) == 7) {
if (HAVE_CMPXCHG128) {
if (HAVE_ATOMIC128_RW) {
Int128 v = int128_lshift(int128_make64(val), 56);
Int128 m = int128_lshift(int128_make64(0xffff), 56);
store_atom_insert_al16(pv - 7, v, m);
@@ -897,7 +887,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
g_assert_not_reached();
}
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
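store_atom_insert_al16() performs a masked atomic insert: only the bytes selected by the mask change. A 64-bit stand-in using a compare-and-swap loop (GCC/Clang __atomic builtins, illustrative only):

#include <stdint.h>
#include <stdio.h>

/* Atomically replace only the bytes selected by mask. */
static void insert_masked(uint64_t *p, uint64_t val, uint64_t mask)
{
    uint64_t old = __atomic_load_n(p, __ATOMIC_RELAXED);
    uint64_t upd;
    do {
        upd = (old & ~mask) | (val & mask);
    } while (!__atomic_compare_exchange_n(p, &old, upd, false,
                                          __ATOMIC_SEQ_CST,
                                          __ATOMIC_SEQ_CST));
}

int main(void)
{
    uint64_t word = ~(uint64_t)0;
    /* Overwrite two bytes in the middle, leaving the neighbours intact. */
    insert_masked(&word, (uint64_t)0xabcd << 24, (uint64_t)0xffff << 24);
    printf("%#llx\n", (unsigned long long)word);   /* 0xffffffabcdffffff */
    return 0;
}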
/**
@@ -908,7 +898,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
*
* Store 4 bytes to @p, honoring the atomicity of @memop.
*/
static void store_atom_4(CPUState *cpu, uintptr_t ra,
static void store_atom_4(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop, uint32_t val)
{
uintptr_t pi = (uintptr_t)pv;
@@ -919,7 +909,7 @@ static void store_atom_4(CPUState *cpu, uintptr_t ra,
return;
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
switch (atmax) {
case MO_8:
stl_he_p(pv, val);
@@ -956,12 +946,12 @@ static void store_atom_4(CPUState *cpu, uintptr_t ra,
return;
}
} else {
if (HAVE_CMPXCHG128) {
if (HAVE_ATOMIC128_RW) {
store_whole_le16(pv, 4, int128_make64(cpu_to_le32(val)));
return;
}
}
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
default:
g_assert_not_reached();
}
@@ -975,7 +965,7 @@ static void store_atom_4(CPUState *cpu, uintptr_t ra,
*
* Store 8 bytes to @p, honoring the atomicity of @memop.
*/
static void store_atom_8(CPUState *cpu, uintptr_t ra,
static void store_atom_8(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop, uint64_t val)
{
uintptr_t pi = (uintptr_t)pv;
@@ -986,7 +976,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
return;
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
switch (atmax) {
case MO_8:
stq_he_p(pv, val);
@@ -1021,7 +1011,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
}
break;
case MO_64:
if (HAVE_CMPXCHG128) {
if (HAVE_ATOMIC128_RW) {
store_whole_le16(pv, 8, int128_make64(cpu_to_le64(val)));
return;
}
@@ -1029,7 +1019,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
default:
g_assert_not_reached();
}
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
/**
@@ -1040,7 +1030,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
*
* Store 16 bytes to @p, honoring the atomicity of @memop.
*/
static void store_atom_16(CPUState *cpu, uintptr_t ra,
static void store_atom_16(CPUArchState *env, uintptr_t ra,
void *pv, MemOp memop, Int128 val)
{
uintptr_t pi = (uintptr_t)pv;
@@ -1052,7 +1042,7 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
return;
}
atmax = required_atomicity(cpu, pi, memop);
atmax = required_atomicity(env, pi, memop);
a = HOST_BIG_ENDIAN ? int128_gethi(val) : int128_getlo(val);
b = HOST_BIG_ENDIAN ? int128_getlo(val) : int128_gethi(val);
@@ -1076,7 +1066,7 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
}
break;
case -MO_64:
if (HAVE_CMPXCHG128) {
if (HAVE_ATOMIC128_RW) {
uint64_t val_le;
int s2 = pi & 15;
int s1 = 16 - s2;
@@ -1103,9 +1093,13 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
}
break;
case MO_128:
if (HAVE_ATOMIC128_RW) {
atomic16_set(pv, val);
return;
}
break;
default:
g_assert_not_reached();
}
cpu_loop_exit_atomic(cpu, ra);
cpu_loop_exit_atomic(env_cpu(env), ra);
}
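The a/b split above selects which Int128 half lands first in memory, which depends on host endianness. A standalone illustration with made-up values:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 'a' must be the half that lands first in memory; a little-endian
       host is assumed here. */
    uint64_t lo = 0x0123456789abcdefull;   /* low 64 bits of the Int128 */
    uint64_t hi = 0xfedcba9876543210ull;   /* high 64 bits */
    int host_big_endian = 0;

    uint64_t a = host_big_endian ? hi : lo;
    uint64_t b = host_big_endian ? lo : hi;
    printf("first half %#llx, second %#llx\n",
           (unsigned long long)a, (unsigned long long)b);
    return 0;
}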


@@ -8,231 +8,6 @@
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
/*
* Load helpers for tcg-ldst.h
*/
tcg_target_ulong helper_ldub_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
return do_ld1_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
tcg_target_ulong helper_lduw_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
return do_ld2_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
tcg_target_ulong helper_ldul_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
return do_ld4_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
uint64_t helper_ldq_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
return do_ld8_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
/*
* Provide signed versions of the load routines as well. We can of course
* avoid this for 64-bit data, or for 32-bit data on 32-bit host.
*/
tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
return (int8_t)helper_ldub_mmu(env, addr, oi, retaddr);
}
tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
return (int16_t)helper_lduw_mmu(env, addr, oi, retaddr);
}
tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
return (int32_t)helper_ldul_mmu(env, addr, oi, retaddr);
}
Int128 helper_ld16_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
return do_ld16_mmu(env_cpu(env), addr, oi, retaddr);
}
Int128 helper_ld_i128(CPUArchState *env, uint64_t addr, uint32_t oi)
{
return helper_ld16_mmu(env, addr, oi, GETPC());
}
/*
* Store helpers for tcg-ldst.h
*/
void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
do_st1_mmu(env_cpu(env), addr, val, oi, ra);
}
void helper_stw_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
do_st2_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_stl_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
do_st4_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_stq_mmu(CPUArchState *env, uint64_t addr, uint64_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
do_st8_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_st16_mmu(CPUArchState *env, uint64_t addr, Int128 val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
do_st16_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_st_i128(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi)
{
helper_st16_mmu(env, addr, val, oi, GETPC());
}
/*
* Load helpers for cpu_ldst.h
*/
static void plugin_load_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi)
{
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
}
uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra)
{
uint8_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_UB);
ret = do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, oi);
return ret;
}
uint16_t cpu_ldw_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint16_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
ret = do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, oi);
return ret;
}
uint32_t cpu_ldl_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint32_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
ret = do_ld4_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, oi);
return ret;
}
uint64_t cpu_ldq_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint64_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
ret = do_ld8_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, oi);
return ret;
}
Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
Int128 ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
ret = do_ld16_mmu(env_cpu(env), addr, oi, ra);
plugin_load_cb(env, addr, oi);
return ret;
}
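Each of these helpers takes a MemOpIdx, which packs a MemOp and an MMU index into a single word. A standalone sketch of that packing, assuming the (op << 4) | idx layout of make_memop_idx():

#include <stdint.h>
#include <stdio.h>

typedef uint32_t MemOpIdxDemo;

static MemOpIdxDemo make_memop_idx_demo(uint32_t op, unsigned idx)
{
    return (op << 4) | idx;    /* mmu index lives in the low four bits */
}

int main(void)
{
    uint32_t mo_32 = 2;        /* MO_SIZE encodes log2 of the access size */
    MemOpIdxDemo oi = make_memop_idx_demo(mo_32, 1);
    printf("memop=%u mmu_idx=%u\n", oi >> 4, oi & 15);
    return 0;
}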
/*
* Store helpers for cpu_ldst.h
*/
static void plugin_store_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi)
{
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
MemOpIdx oi, uintptr_t retaddr)
{
helper_stb_mmu(env, addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
void cpu_stw_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
do_st2_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
void cpu_stl_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
do_st4_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
void cpu_stq_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
do_st8_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
void cpu_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
do_st16_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, oi);
}
/*
* Wrappers of the above
*/
uint32_t cpu_ldub_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra)
@@ -354,8 +129,7 @@ void cpu_stq_le_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint64_t val,
uint32_t cpu_ldub_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldub_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_ldub_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
int cpu_ldsb_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
@@ -365,8 +139,7 @@ int cpu_ldsb_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
uint32_t cpu_lduw_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_lduw_be_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_lduw_be_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
int cpu_ldsw_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
@@ -376,20 +149,17 @@ int cpu_ldsw_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
uint32_t cpu_ldl_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldl_be_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_ldl_be_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
uint64_t cpu_ldq_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldq_be_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_ldq_be_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
uint32_t cpu_lduw_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_lduw_le_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_lduw_le_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
int cpu_ldsw_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
@@ -399,63 +169,54 @@ int cpu_ldsw_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
uint32_t cpu_ldl_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldl_le_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_ldl_le_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
uint64_t cpu_ldq_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldq_le_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_ldq_le_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
void cpu_stb_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stb_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stb_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
void cpu_stw_be_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stw_be_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stw_be_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
void cpu_stl_be_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stl_be_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stl_be_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
void cpu_stq_be_data_ra(CPUArchState *env, abi_ptr addr,
uint64_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stq_be_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stq_be_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
void cpu_stw_le_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stw_le_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stw_le_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
void cpu_stl_le_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stl_le_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stl_le_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
void cpu_stq_le_data_ra(CPUArchState *env, abi_ptr addr,
uint64_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stq_le_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stq_le_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
/*--------------------------*/


@@ -1,9 +1,7 @@
common_ss.add(when: 'CONFIG_TCG', if_true: files(
'cpu-exec-common.c',
))
tcg_specific_ss = ss.source_set()
tcg_specific_ss.add(files(
tcg_ss = ss.source_set()
tcg_ss.add(files(
'tcg-all.c',
'cpu-exec-common.c',
'cpu-exec.c',
'tb-maint.c',
'tcg-runtime-gvec.c',
@@ -11,20 +9,15 @@ tcg_specific_ss.add(files(
'translate-all.c',
'translator.c',
))
tcg_specific_ss.add(when: 'CONFIG_USER_ONLY', if_true: files('user-exec.c'))
tcg_specific_ss.add(when: 'CONFIG_SYSTEM_ONLY', if_false: files('user-exec-stub.c'))
if get_option('plugins')
tcg_specific_ss.add(files('plugin-gen.c'))
endif
specific_ss.add_all(when: 'CONFIG_TCG', if_true: tcg_specific_ss)
tcg_ss.add(when: 'CONFIG_USER_ONLY', if_true: files('user-exec.c'))
tcg_ss.add(when: 'CONFIG_SYSTEM_ONLY', if_false: files('user-exec-stub.c'))
tcg_ss.add(when: 'CONFIG_PLUGIN', if_true: [files('plugin-gen.c')])
tcg_ss.add(when: libdw, if_true: files('debuginfo.c'))
tcg_ss.add(when: 'CONFIG_LINUX', if_true: files('perf.c'))
specific_ss.add_all(when: 'CONFIG_TCG', if_true: tcg_ss)
specific_ss.add(when: ['CONFIG_SYSTEM_ONLY', 'CONFIG_TCG'], if_true: files(
'cputlb.c',
'watchpoint.c',
))
system_ss.add(when: ['CONFIG_TCG'], if_true: files(
'icount-common.c',
'monitor.c',
))


@@ -8,7 +8,6 @@
#include "qemu/osdep.h"
#include "qemu/accel.h"
#include "qemu/qht.h"
#include "qapi/error.h"
#include "qapi/type-helpers.h"
#include "qapi/qapi-commands-machine.h"
@@ -17,8 +16,7 @@
#include "sysemu/cpu-timers.h"
#include "sysemu/tcg.h"
#include "tcg/tcg.h"
#include "internal-common.h"
#include "tb-context.h"
#include "internal.h"
static void dump_drift_info(GString *buf)
@@ -52,153 +50,6 @@ static void dump_accel_info(GString *buf)
one_insn_per_tb ? "on" : "off");
}
static void print_qht_statistics(struct qht_stats hst, GString *buf)
{
uint32_t hgram_opts;
size_t hgram_bins;
char *hgram;
if (!hst.head_buckets) {
return;
}
g_string_append_printf(buf, "TB hash buckets %zu/%zu "
"(%0.2f%% head buckets used)\n",
hst.used_head_buckets, hst.head_buckets,
(double)hst.used_head_buckets /
hst.head_buckets * 100);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_opts |= QDIST_PR_100X | QDIST_PR_PERCENT;
if (qdist_xmax(&hst.occupancy) - qdist_xmin(&hst.occupancy) == 1) {
hgram_opts |= QDIST_PR_NODECIMAL;
}
hgram = qdist_pr(&hst.occupancy, 10, hgram_opts);
g_string_append_printf(buf, "TB hash occupancy %0.2f%% avg chain occ. "
"Histogram: %s\n",
qdist_avg(&hst.occupancy) * 100, hgram);
g_free(hgram);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_bins = qdist_xmax(&hst.chain) - qdist_xmin(&hst.chain);
if (hgram_bins > 10) {
hgram_bins = 10;
} else {
hgram_bins = 0;
hgram_opts |= QDIST_PR_NODECIMAL | QDIST_PR_NOBINRANGE;
}
hgram = qdist_pr(&hst.chain, hgram_bins, hgram_opts);
g_string_append_printf(buf, "TB hash avg chain %0.3f buckets. "
"Histogram: %s\n",
qdist_avg(&hst.chain), hgram);
g_free(hgram);
}
struct tb_tree_stats {
size_t nb_tbs;
size_t host_size;
size_t target_size;
size_t max_target_size;
size_t direct_jmp_count;
size_t direct_jmp2_count;
size_t cross_page;
};
static gboolean tb_tree_stats_iter(gpointer key, gpointer value, gpointer data)
{
const TranslationBlock *tb = value;
struct tb_tree_stats *tst = data;
tst->nb_tbs++;
tst->host_size += tb->tc.size;
tst->target_size += tb->size;
if (tb->size > tst->max_target_size) {
tst->max_target_size = tb->size;
}
if (tb->page_addr[1] != -1) {
tst->cross_page++;
}
if (tb->jmp_reset_offset[0] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp_count++;
if (tb->jmp_reset_offset[1] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp2_count++;
}
}
return false;
}
static void tlb_flush_counts(size_t *pfull, size_t *ppart, size_t *pelide)
{
CPUState *cpu;
size_t full = 0, part = 0, elide = 0;
CPU_FOREACH(cpu) {
full += qatomic_read(&cpu->neg.tlb.c.full_flush_count);
part += qatomic_read(&cpu->neg.tlb.c.part_flush_count);
elide += qatomic_read(&cpu->neg.tlb.c.elide_flush_count);
}
*pfull = full;
*ppart = part;
*pelide = elide;
}
static void tcg_dump_info(GString *buf)
{
g_string_append_printf(buf, "[TCG profiler not compiled]\n");
}
static void dump_exec_info(GString *buf)
{
struct tb_tree_stats tst = {};
struct qht_stats hst;
size_t nb_tbs, flush_full, flush_part, flush_elide;
tcg_tb_foreach(tb_tree_stats_iter, &tst);
nb_tbs = tst.nb_tbs;
/* XXX: avoid using doubles ? */
g_string_append_printf(buf, "Translation buffer state:\n");
/*
* Report total code size including the padding and TB structs;
* otherwise users might think "-accel tcg,tb-size" is not honoured.
* For avg host size we use the precise numbers from tb_tree_stats though.
*/
g_string_append_printf(buf, "gen code size %zu/%zu\n",
tcg_code_size(), tcg_code_capacity());
g_string_append_printf(buf, "TB count %zu\n", nb_tbs);
g_string_append_printf(buf, "TB avg target size %zu max=%zu bytes\n",
nb_tbs ? tst.target_size / nb_tbs : 0,
tst.max_target_size);
g_string_append_printf(buf, "TB avg host size %zu bytes "
"(expansion ratio: %0.1f)\n",
nb_tbs ? tst.host_size / nb_tbs : 0,
tst.target_size ?
(double)tst.host_size / tst.target_size : 0);
g_string_append_printf(buf, "cross page TB count %zu (%zu%%)\n",
tst.cross_page,
nb_tbs ? (tst.cross_page * 100) / nb_tbs : 0);
g_string_append_printf(buf, "direct jump count %zu (%zu%%) "
"(2 jumps=%zu %zu%%)\n",
tst.direct_jmp_count,
nb_tbs ? (tst.direct_jmp_count * 100) / nb_tbs : 0,
tst.direct_jmp2_count,
nb_tbs ? (tst.direct_jmp2_count * 100) / nb_tbs : 0);
qht_statistics_init(&tb_ctx.htable, &hst);
print_qht_statistics(hst, buf);
qht_statistics_destroy(&hst);
g_string_append_printf(buf, "\nStatistics:\n");
g_string_append_printf(buf, "TB flush count %u\n",
qatomic_read(&tb_ctx.tb_flush_count));
g_string_append_printf(buf, "TB invalidate count %u\n",
qatomic_read(&tb_ctx.tb_phys_invalidate_count));
tlb_flush_counts(&flush_full, &flush_part, &flush_elide);
g_string_append_printf(buf, "TLB full flushes %zu\n", flush_full);
g_string_append_printf(buf, "TLB partial flushes %zu\n", flush_part);
g_string_append_printf(buf, "TLB elided flushes %zu\n", flush_elide);
tcg_dump_info(buf);
}
HumanReadableText *qmp_x_query_jit(Error **errp)
{
g_autoptr(GString) buf = g_string_new("");
@@ -215,11 +66,6 @@ HumanReadableText *qmp_x_query_jit(Error **errp)
return human_readable_text_from_str(buf);
}
static void tcg_dump_op_count(GString *buf)
{
g_string_append_printf(buf, "[TCG profiler not compiled]\n");
}
HumanReadableText *qmp_x_query_opcount(Error **errp)
{
g_autoptr(GString) buf = g_string_new("");


@@ -10,13 +10,13 @@
#include "qemu/osdep.h"
#include "elf.h"
#include "exec/target_page.h"
#include "exec/translation-block.h"
#include "exec/exec-all.h"
#include "qemu/timer.h"
#include "tcg/debuginfo.h"
#include "tcg/perf.h"
#include "tcg/tcg.h"
#include "debuginfo.h"
#include "perf.h"
static FILE *safe_fopen_w(const char *path)
{
int saved_errno;
@@ -335,7 +335,11 @@ void perf_report_code(uint64_t guest_pc, TranslationBlock *tb,
/* FIXME: This replicates the restore_state_to_opc() logic. */
q[insn].address = gen_insn_data[insn * start_words + 0];
if (tb_cflags(tb) & CF_PCREL) {
q[insn].address |= (guest_pc & qemu_target_page_mask());
q[insn].address |= (guest_pc & TARGET_PAGE_MASK);
} else {
#if defined(TARGET_I386)
q[insn].address -= tb->cs_base;
#endif
}
q[insn].flags = DEBUGINFO_SYMBOL | (jitdump ? DEBUGINFO_LINE : 0);
}
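With CF_PCREL the recorded insn address holds only the in-page offset, so the page bits are taken from guest_pc. A standalone demo assuming 4 KiB pages and illustrative addresses:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t page_mask = ~(uint64_t)0xfff;   /* assume 4 KiB pages */
    uint64_t guest_pc = 0x40002000;
    uint64_t stored = 0x123;                 /* in-page offset from the TB */

    uint64_t address = stored | (guest_pc & page_mask);
    printf("reconstructed pc = %#" PRIx64 "\n", address);  /* 0x40002123 */
    return 0;
}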


@@ -4,8 +4,8 @@
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef TCG_PERF_H
#define TCG_PERF_H
#ifndef ACCEL_TCG_PERF_H
#define ACCEL_TCG_PERF_H
#if defined(CONFIG_TCG) && defined(CONFIG_LINUX)
/* Start writing perf-<pid>.map. */


@@ -43,7 +43,6 @@
* CPU's index into a TCG temp, since the first callback did it already.
*/
#include "qemu/osdep.h"
#include "qemu/plugin.h"
#include "cpu.h"
#include "tcg/tcg.h"
#include "tcg/tcg-temp-internal.h"
@@ -80,7 +79,6 @@ enum plugin_gen_from {
enum plugin_gen_cb {
PLUGIN_GEN_CB_UDATA,
PLUGIN_GEN_CB_UDATA_R,
PLUGIN_GEN_CB_INLINE,
PLUGIN_GEN_CB_MEM,
PLUGIN_GEN_ENABLE_MEM_HELPER,
@@ -92,10 +90,7 @@ enum plugin_gen_cb {
* These helpers are stubs that get dynamically switched out for calls
* direct to the plugin if they are subscribed to.
*/
void HELPER(plugin_vcpu_udata_cb_no_wg)(uint32_t cpu_index, void *udata)
{ }
void HELPER(plugin_vcpu_udata_cb_no_rwg)(uint32_t cpu_index, void *udata)
void HELPER(plugin_vcpu_udata_cb)(uint32_t cpu_index, void *udata)
{ }
void HELPER(plugin_vcpu_mem_cb)(unsigned int vcpu_index,
@@ -103,58 +98,36 @@ void HELPER(plugin_vcpu_mem_cb)(unsigned int vcpu_index,
void *userdata)
{ }
static void gen_empty_udata_cb(void (*gen_helper)(TCGv_i32, TCGv_ptr))
static void gen_empty_udata_cb(void)
{
TCGv_i32 cpu_index = tcg_temp_ebb_new_i32();
TCGv_ptr udata = tcg_temp_ebb_new_ptr();
tcg_gen_movi_ptr(udata, 0);
tcg_gen_ld_i32(cpu_index, tcg_env,
tcg_gen_ld_i32(cpu_index, cpu_env,
-offsetof(ArchCPU, env) + offsetof(CPUState, cpu_index));
gen_helper(cpu_index, udata);
gen_helper_plugin_vcpu_udata_cb(cpu_index, udata);
tcg_temp_free_ptr(udata);
tcg_temp_free_i32(cpu_index);
}
static void gen_empty_udata_cb_no_wg(void)
{
gen_empty_udata_cb(gen_helper_plugin_vcpu_udata_cb_no_wg);
}
static void gen_empty_udata_cb_no_rwg(void)
{
gen_empty_udata_cb(gen_helper_plugin_vcpu_udata_cb_no_rwg);
}
/*
* For now we only support addi_i64.
* When we support more ops, we can generate one empty inline cb for each.
*/
static void gen_empty_inline_cb(void)
{
TCGv_i32 cpu_index = tcg_temp_ebb_new_i32();
TCGv_ptr cpu_index_as_ptr = tcg_temp_ebb_new_ptr();
TCGv_i64 val = tcg_temp_ebb_new_i64();
TCGv_ptr ptr = tcg_temp_ebb_new_ptr();
tcg_gen_ld_i32(cpu_index, tcg_env,
-offsetof(ArchCPU, env) + offsetof(CPUState, cpu_index));
/* second operand will be replaced by immediate value */
tcg_gen_mul_i32(cpu_index, cpu_index, cpu_index);
tcg_gen_ext_i32_ptr(cpu_index_as_ptr, cpu_index);
tcg_gen_movi_ptr(ptr, 0);
tcg_gen_add_ptr(ptr, ptr, cpu_index_as_ptr);
tcg_gen_ld_i64(val, ptr, 0);
/* second operand will be replaced by immediate value */
tcg_gen_add_i64(val, val, val);
/* pass an immediate != 0 so that it doesn't get optimized away */
tcg_gen_addi_i64(val, val, 0xdeadface);
tcg_gen_st_i64(val, ptr, 0);
tcg_temp_free_ptr(ptr);
tcg_temp_free_i64(val);
tcg_temp_free_ptr(cpu_index_as_ptr);
tcg_temp_free_i32(cpu_index);
}
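Written as plain C, the TCG sequence emitted above amounts to a per-vCPU scoreboard increment. A standalone sketch with illustrative names:

#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Plain-C equivalent of the generated ops: bump this vCPU's slot in a
   per-CPU scoreboard. */
static void inline_cb_demo(char *score_base, size_t elem_size,
                           unsigned cpu_index, uint64_t imm)
{
    uint64_t *slot = (uint64_t *)(score_base + cpu_index * elem_size);
    *slot += imm;
}

int main(void)
{
    uint64_t scoreboard[4] = { 0 };
    inline_cb_demo((char *)scoreboard, sizeof(uint64_t), 2, 1);
    printf("cpu 2 counter = %" PRIu64 "\n", scoreboard[2]);
    return 0;
}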
static void gen_empty_mem_cb(TCGv_i64 addr, uint32_t info)
@@ -165,7 +138,7 @@ static void gen_empty_mem_cb(TCGv_i64 addr, uint32_t info)
tcg_gen_movi_i32(meminfo, info);
tcg_gen_movi_ptr(udata, 0);
tcg_gen_ld_i32(cpu_index, tcg_env,
tcg_gen_ld_i32(cpu_index, cpu_env,
-offsetof(ArchCPU, env) + offsetof(CPUState, cpu_index));
gen_helper_plugin_vcpu_mem_cb(cpu_index, meminfo, addr, udata);
@@ -184,7 +157,7 @@ static void gen_empty_mem_helper(void)
TCGv_ptr ptr = tcg_temp_ebb_new_ptr();
tcg_gen_movi_ptr(ptr, 0);
tcg_gen_st_ptr(ptr, tcg_env, offsetof(CPUState, plugin_mem_cbs) -
tcg_gen_st_ptr(ptr, cpu_env, offsetof(CPUState, plugin_mem_cbs) -
offsetof(ArchCPU, env));
tcg_temp_free_ptr(ptr);
}
@@ -219,8 +192,7 @@ static void plugin_gen_empty_callback(enum plugin_gen_from from)
gen_empty_mem_helper);
/* fall through */
case PLUGIN_GEN_FROM_TB:
gen_wrapped(from, PLUGIN_GEN_CB_UDATA, gen_empty_udata_cb_no_rwg);
gen_wrapped(from, PLUGIN_GEN_CB_UDATA_R, gen_empty_udata_cb_no_wg);
gen_wrapped(from, PLUGIN_GEN_CB_UDATA, gen_empty_udata_cb);
gen_wrapped(from, PLUGIN_GEN_CB_INLINE, gen_empty_inline_cb);
break;
default:
@@ -302,37 +274,12 @@ static TCGOp *copy_const_ptr(TCGOp **begin_op, TCGOp *op, void *ptr)
return op;
}
static TCGOp *copy_ld_i32(TCGOp **begin_op, TCGOp *op)
{
return copy_op(begin_op, op, INDEX_op_ld_i32);
}
static TCGOp *copy_ext_i32_ptr(TCGOp **begin_op, TCGOp *op)
{
if (UINTPTR_MAX == UINT32_MAX) {
op = copy_op(begin_op, op, INDEX_op_mov_i32);
} else {
op = copy_op(begin_op, op, INDEX_op_ext_i32_i64);
}
return op;
}
static TCGOp *copy_add_ptr(TCGOp **begin_op, TCGOp *op)
{
if (UINTPTR_MAX == UINT32_MAX) {
op = copy_op(begin_op, op, INDEX_op_add_i32);
} else {
op = copy_op(begin_op, op, INDEX_op_add_i64);
}
return op;
}
static TCGOp *copy_ld_i64(TCGOp **begin_op, TCGOp *op)
{
if (TCG_TARGET_REG_BITS == 32) {
/* 2x ld_i32 */
op = copy_ld_i32(begin_op, op);
op = copy_ld_i32(begin_op, op);
op = copy_op(begin_op, op, INDEX_op_ld_i32);
op = copy_op(begin_op, op, INDEX_op_ld_i32);
} else {
/* ld_i64 */
op = copy_op(begin_op, op, INDEX_op_ld_i64);
@@ -368,13 +315,6 @@ static TCGOp *copy_add_i64(TCGOp **begin_op, TCGOp *op, uint64_t v)
return op;
}
static TCGOp *copy_mul_i32(TCGOp **begin_op, TCGOp *op, uint32_t v)
{
op = copy_op(begin_op, op, INDEX_op_mul_i32);
op->args[2] = tcgv_i32_arg(tcg_constant_i32(v));
return op;
}
static TCGOp *copy_st_ptr(TCGOp **begin_op, TCGOp *op)
{
if (UINTPTR_MAX == UINT32_MAX) {
@@ -387,7 +327,8 @@ static TCGOp *copy_st_ptr(TCGOp **begin_op, TCGOp *op)
return op;
}
static TCGOp *copy_call(TCGOp **begin_op, TCGOp *op, void *func, int *cb_idx)
static TCGOp *copy_call(TCGOp **begin_op, TCGOp *op, void *empty_func,
void *func, int *cb_idx)
{
TCGOp *old_op;
int func_idx;
@@ -431,7 +372,8 @@ static TCGOp *append_udata_cb(const struct qemu_plugin_dyn_cb *cb,
}
/* call */
op = copy_call(&begin_op, op, cb->f.vcpu_udata, cb_idx);
op = copy_call(&begin_op, op, HELPER(plugin_vcpu_udata_cb),
cb->f.vcpu_udata, cb_idx);
return op;
}
@@ -440,19 +382,18 @@ static TCGOp *append_inline_cb(const struct qemu_plugin_dyn_cb *cb,
TCGOp *begin_op, TCGOp *op,
int *unused)
{
char *ptr = cb->inline_insn.entry.score->data->data;
size_t elem_size = g_array_get_element_size(
cb->inline_insn.entry.score->data);
size_t offset = cb->inline_insn.entry.offset;
/* const_ptr */
op = copy_const_ptr(&begin_op, op, cb->userp);
op = copy_ld_i32(&begin_op, op);
op = copy_mul_i32(&begin_op, op, elem_size);
op = copy_ext_i32_ptr(&begin_op, op);
op = copy_const_ptr(&begin_op, op, ptr + offset);
op = copy_add_ptr(&begin_op, op);
/* ld_i64 */
op = copy_ld_i64(&begin_op, op);
/* add_i64 */
op = copy_add_i64(&begin_op, op, cb->inline_insn.imm);
/* st_i64 */
op = copy_st_i64(&begin_op, op);
return op;
}
@@ -479,7 +420,8 @@ static TCGOp *append_mem_cb(const struct qemu_plugin_dyn_cb *cb,
if (type == PLUGIN_GEN_CB_MEM) {
/* call */
op = copy_call(&begin_op, op, cb->f.vcpu_udata, cb_idx);
op = copy_call(&begin_op, op, HELPER(plugin_vcpu_mem_cb),
cb->f.vcpu_udata, cb_idx);
}
return op;
@@ -639,7 +581,7 @@ void plugin_gen_disable_mem_helpers(void)
if (!tcg_ctx->plugin_tb->mem_helper) {
return;
}
tcg_gen_st_ptr(tcg_constant_ptr(NULL), tcg_env,
tcg_gen_st_ptr(tcg_constant_ptr(NULL), cpu_env,
offsetof(CPUState, plugin_mem_cbs) - offsetof(ArchCPU, env));
}
@@ -649,12 +591,6 @@ static void plugin_gen_tb_udata(const struct qemu_plugin_tb *ptb,
inject_udata_cb(ptb->cbs[PLUGIN_CB_REGULAR], begin_op);
}
static void plugin_gen_tb_udata_r(const struct qemu_plugin_tb *ptb,
TCGOp *begin_op)
{
inject_udata_cb(ptb->cbs[PLUGIN_CB_REGULAR_R], begin_op);
}
static void plugin_gen_tb_inline(const struct qemu_plugin_tb *ptb,
TCGOp *begin_op)
{
@@ -669,14 +605,6 @@ static void plugin_gen_insn_udata(const struct qemu_plugin_tb *ptb,
inject_udata_cb(insn->cbs[PLUGIN_CB_INSN][PLUGIN_CB_REGULAR], begin_op);
}
static void plugin_gen_insn_udata_r(const struct qemu_plugin_tb *ptb,
TCGOp *begin_op, int insn_idx)
{
struct qemu_plugin_insn *insn = g_ptr_array_index(ptb->insns, insn_idx);
inject_udata_cb(insn->cbs[PLUGIN_CB_INSN][PLUGIN_CB_REGULAR_R], begin_op);
}
static void plugin_gen_insn_inline(const struct qemu_plugin_tb *ptb,
TCGOp *begin_op, int insn_idx)
{
@@ -796,9 +724,6 @@ static void plugin_gen_inject(struct qemu_plugin_tb *plugin_tb)
case PLUGIN_GEN_CB_UDATA:
plugin_gen_tb_udata(plugin_tb, op);
break;
case PLUGIN_GEN_CB_UDATA_R:
plugin_gen_tb_udata_r(plugin_tb, op);
break;
case PLUGIN_GEN_CB_INLINE:
plugin_gen_tb_inline(plugin_tb, op);
break;
@@ -815,9 +740,6 @@ static void plugin_gen_inject(struct qemu_plugin_tb *plugin_tb)
case PLUGIN_GEN_CB_UDATA:
plugin_gen_insn_udata(plugin_tb, op, insn_idx);
break;
case PLUGIN_GEN_CB_UDATA_R:
plugin_gen_insn_udata_r(plugin_tb, op, insn_idx);
break;
case PLUGIN_GEN_CB_INLINE:
plugin_gen_insn_inline(plugin_tb, op, insn_idx);
break;
@@ -877,7 +799,7 @@ bool plugin_gen_tb_start(CPUState *cpu, const DisasContextBase *db,
{
bool ret = false;
if (test_bit(QEMU_PLUGIN_EV_VCPU_TB_TRANS, cpu->plugin_state->event_mask)) {
if (test_bit(QEMU_PLUGIN_EV_VCPU_TB_TRANS, cpu->plugin_mask)) {
struct qemu_plugin_tb *ptb = tcg_ctx->plugin_tb;
int i;
@@ -927,7 +849,7 @@ void plugin_gen_insn_start(CPUState *cpu, const DisasContextBase *db)
} else {
if (ptb->vaddr2 == -1) {
ptb->vaddr2 = TARGET_PAGE_ALIGN(db->pc_first);
get_page_addr_code_hostp(cpu_env(cpu), ptb->vaddr2, &ptb->haddr2);
get_page_addr_code_hostp(cpu->env_ptr, ptb->vaddr2, &ptb->haddr2);
}
pinsn->haddr = ptb->haddr2 + pinsn->vaddr - ptb->vaddr2;
}
@@ -944,14 +866,10 @@ void plugin_gen_insn_end(void)
* do any clean-up here and make sure things are reset in
* plugin_gen_tb_start.
*/
void plugin_gen_tb_end(CPUState *cpu, size_t num_insns)
void plugin_gen_tb_end(CPUState *cpu)
{
struct qemu_plugin_tb *ptb = tcg_ctx->plugin_tb;
/* translator may have removed instructions, update final count */
g_assert(num_insns <= ptb->n);
ptb->n = num_insns;
/* collect instrumentation requests */
qemu_plugin_tb_trans_cb(cpu, ptb);


@@ -1,5 +1,4 @@
#ifdef CONFIG_PLUGIN
DEF_HELPER_FLAGS_2(plugin_vcpu_udata_cb_no_wg, TCG_CALL_NO_WG | TCG_CALL_PLUGIN, void, i32, ptr)
DEF_HELPER_FLAGS_2(plugin_vcpu_udata_cb_no_rwg, TCG_CALL_NO_RWG | TCG_CALL_PLUGIN, void, i32, ptr)
DEF_HELPER_FLAGS_2(plugin_vcpu_udata_cb, TCG_CALL_NO_RWG | TCG_CALL_PLUGIN, void, i32, ptr)
DEF_HELPER_FLAGS_4(plugin_vcpu_mem_cb, TCG_CALL_NO_RWG | TCG_CALL_PLUGIN, void, i32, i32, i64, ptr)
#endif


@@ -13,11 +13,9 @@
#define TB_JMP_CACHE_SIZE (1 << TB_JMP_CACHE_BITS)
/*
* Invalidated in parallel; all accesses to 'tb' must be atomic.
* A valid entry is read/written by a single CPU, therefore there is
* no need for qatomic_rcu_read() and pc is always consistent with a
* non-NULL value of 'tb'. Strictly speaking pc is only needed for
* CF_PCREL, but it's used always for simplicity.
* Accessed in parallel; all accesses to 'tb' must be atomic.
* For CF_PCREL, accesses to 'pc' must be protected by a
* load_acquire/store_release to 'tb'.
*/
struct CPUJumpCache {
struct rcu_head rcu;

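/*
 * Illustrative sketch (not QEMU code) of the load_acquire/store_release
 * pairing referenced in the comment above, using C11 atomics in place
 * of QEMU's qatomic_* wrappers. All names here are hypothetical.
 */
#include <stdatomic.h>
#include <stdint.h>

struct jc_entry {
    _Atomic(void *) tb; /* published last, with release semantics */
    uint64_t pc;        /* plain field, ordered by the release below */
};

static void jc_publish(struct jc_entry *e, void *tb, uint64_t pc)
{
    e->pc = pc;                                              /* data first */
    atomic_store_explicit(&e->tb, tb, memory_order_release); /* then pointer */
}

static void *jc_lookup(struct jc_entry *e, uint64_t *pc)
{
    void *tb = atomic_load_explicit(&e->tb, memory_order_acquire);
    if (tb != NULL) {
        *pc = e->pc; /* safe: this acquire pairs with the release above */
    }
    return tb;
}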
View File

@@ -1,5 +1,5 @@
/*
* Translation Block Maintenance
* Translation Block Maintaince
*
* Copyright (c) 2003 Fabrice Bellard
*
@@ -29,8 +29,7 @@
#include "tcg/tcg.h"
#include "tb-hash.h"
#include "tb-context.h"
#include "internal-common.h"
#include "internal-target.h"
#include "internal.h"
/* List iterators for lists of tagged pointers in TranslationBlock. */
@@ -71,7 +70,17 @@ typedef struct PageDesc PageDesc;
*/
#define assert_page_locked(pd) tcg_debug_assert(have_mmap_lock())
static inline void tb_lock_pages(const TranslationBlock *tb) { }
static inline void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
PageDesc **ret_p2, tb_page_addr_t phys2,
bool alloc)
{
*ret_p1 = NULL;
*ret_p2 = NULL;
}
static inline void page_unlock(PageDesc *pd) { }
static inline void page_lock_tb(const TranslationBlock *tb) { }
static inline void page_unlock_tb(const TranslationBlock *tb) { }
/*
* For user-only, since we are protecting all of memory with a single lock,
@@ -87,7 +96,7 @@ static void tb_remove_all(void)
}
/* Call with mmap_lock held. */
static void tb_record(TranslationBlock *tb)
static void tb_record(TranslationBlock *tb, PageDesc *p1, PageDesc *p2)
{
vaddr addr;
int flags;
@@ -208,12 +217,13 @@ static PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc)
{
PageDesc *pd;
void **lp;
int i;
/* Level 1. Always allocated. */
lp = l1_map + ((index >> v_l1_shift) & (v_l1_size - 1));
/* Level 2..N-1. */
for (int i = v_l2_levels; i > 0; i--) {
for (i = v_l2_levels; i > 0; i--) {
void **p = qatomic_rcu_read(lp);
if (p == NULL) {
@@ -381,108 +391,12 @@ static void page_lock(PageDesc *pd)
qemu_spin_lock(&pd->lock);
}
/* Like qemu_spin_trylock, returns false on success */
static bool page_trylock(PageDesc *pd)
{
bool busy = qemu_spin_trylock(&pd->lock);
if (!busy) {
page_lock__debug(pd);
}
return busy;
}
static void page_unlock(PageDesc *pd)
{
qemu_spin_unlock(&pd->lock);
page_unlock__debug(pd);
}
void tb_lock_page0(tb_page_addr_t paddr)
{
page_lock(page_find_alloc(paddr >> TARGET_PAGE_BITS, true));
}
void tb_lock_page1(tb_page_addr_t paddr0, tb_page_addr_t paddr1)
{
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS;
PageDesc *pd0, *pd1;
if (pindex0 == pindex1) {
/* Identical pages, and the first page is already locked. */
return;
}
pd1 = page_find_alloc(pindex1, true);
if (pindex0 < pindex1) {
/* Correct locking order, we may block. */
page_lock(pd1);
return;
}
/* Incorrect locking order, we cannot block lest we deadlock. */
if (!page_trylock(pd1)) {
return;
}
/*
* Drop the lock on page0 and get both page locks in the right order.
* Restart translation via longjmp.
*/
pd0 = page_find_alloc(pindex0, false);
page_unlock(pd0);
page_lock(pd1);
page_lock(pd0);
siglongjmp(tcg_ctx->jmp_trans, -3);
}
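/*
 * Sketch of the pattern used above (hypothetical names, pthread mutexes
 * instead of QEMU's page spinlocks): taking a lower-ordered lock while
 * holding a higher-ordered one must not block, so trylock first; on
 * failure, drop the held lock, retake both in sorted order, and signal
 * the caller to redo its work (tb_lock_page1 restarts translation via
 * siglongjmp).
 */
#include <pthread.h>
#include <stdbool.h>

/* 'held' is already locked; 'wanted' sorts before it. Returns true if
 * 'wanted' was acquired without blocking, false if both locks were
 * re-acquired in order and the caller's prior work is now stale. */
static bool lock_out_of_order(pthread_mutex_t *held, pthread_mutex_t *wanted)
{
    if (pthread_mutex_trylock(wanted) == 0) {
        return true;
    }
    pthread_mutex_unlock(held);
    pthread_mutex_lock(wanted); /* sorted order: wanted before held */
    pthread_mutex_lock(held);
    return false;
}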
void tb_unlock_page1(tb_page_addr_t paddr0, tb_page_addr_t paddr1)
{
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS;
if (pindex0 != pindex1) {
page_unlock(page_find_alloc(pindex1, false));
}
}
static void tb_lock_pages(TranslationBlock *tb)
{
tb_page_addr_t paddr0 = tb_page_addr0(tb);
tb_page_addr_t paddr1 = tb_page_addr1(tb);
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS;
if (unlikely(paddr0 == -1)) {
return;
}
if (unlikely(paddr1 != -1) && pindex0 != pindex1) {
if (pindex0 < pindex1) {
page_lock(page_find_alloc(pindex0, true));
page_lock(page_find_alloc(pindex1, true));
return;
}
page_lock(page_find_alloc(pindex1, true));
}
page_lock(page_find_alloc(pindex0, true));
}
void tb_unlock_pages(TranslationBlock *tb)
{
tb_page_addr_t paddr0 = tb_page_addr0(tb);
tb_page_addr_t paddr1 = tb_page_addr1(tb);
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS;
if (unlikely(paddr0 == -1)) {
return;
}
if (unlikely(paddr1 != -1) && pindex0 != pindex1) {
page_unlock(page_find_alloc(pindex1, false));
}
page_unlock(page_find_alloc(pindex0, false));
}
static inline struct page_entry *
page_entry_new(PageDesc *pd, tb_page_addr_t index)
{
@@ -506,10 +420,13 @@ static void page_entry_destroy(gpointer p)
/* returns false on success */
static bool page_entry_trylock(struct page_entry *pe)
{
bool busy = page_trylock(pe->pd);
bool busy;
busy = qemu_spin_trylock(&pe->pd->lock);
if (!busy) {
g_assert(!pe->locked);
pe->locked = true;
page_lock__debug(pe->pd);
}
return busy;
}
@@ -687,7 +604,8 @@ static void tb_remove_all(void)
* Add the tb in the target page and protect it if necessary.
* Called with @p->lock held.
*/
static void tb_page_add(PageDesc *p, TranslationBlock *tb, unsigned int n)
static inline void tb_page_add(PageDesc *p, TranslationBlock *tb,
unsigned int n)
{
bool page_already_protected;
@@ -707,21 +625,15 @@ static void tb_page_add(PageDesc *p, TranslationBlock *tb, unsigned int n)
}
}
static void tb_record(TranslationBlock *tb)
static void tb_record(TranslationBlock *tb, PageDesc *p1, PageDesc *p2)
{
tb_page_addr_t paddr0 = tb_page_addr0(tb);
tb_page_addr_t paddr1 = tb_page_addr1(tb);
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS;
assert(paddr0 != -1);
if (unlikely(paddr1 != -1) && pindex0 != pindex1) {
tb_page_add(page_find_alloc(pindex1, false), tb, 1);
tb_page_add(p1, tb, 0);
if (unlikely(p2)) {
tb_page_add(p2, tb, 1);
}
tb_page_add(page_find_alloc(pindex0, false), tb, 0);
}
static void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
static inline void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
{
TranslationBlock *tb1;
uintptr_t *pprev;
@@ -741,16 +653,74 @@ static void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
static void tb_remove(TranslationBlock *tb)
{
tb_page_addr_t paddr0 = tb_page_addr0(tb);
tb_page_addr_t paddr1 = tb_page_addr1(tb);
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS;
PageDesc *pd;
assert(paddr0 != -1);
if (unlikely(paddr1 != -1) && pindex0 != pindex1) {
tb_page_remove(page_find_alloc(pindex1, false), tb);
pd = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS);
tb_page_remove(pd, tb);
if (unlikely(tb->page_addr[1] != -1)) {
pd = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS);
tb_page_remove(pd, tb);
}
}
static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
PageDesc **ret_p2, tb_page_addr_t phys2, bool alloc)
{
PageDesc *p1, *p2;
tb_page_addr_t page1;
tb_page_addr_t page2;
assert_memory_lock();
g_assert(phys1 != -1);
page1 = phys1 >> TARGET_PAGE_BITS;
page2 = phys2 >> TARGET_PAGE_BITS;
p1 = page_find_alloc(page1, alloc);
if (ret_p1) {
*ret_p1 = p1;
}
if (likely(phys2 == -1)) {
page_lock(p1);
return;
} else if (page1 == page2) {
page_lock(p1);
if (ret_p2) {
*ret_p2 = p1;
}
return;
}
p2 = page_find_alloc(page2, alloc);
if (ret_p2) {
*ret_p2 = p2;
}
if (page1 < page2) {
page_lock(p1);
page_lock(p2);
} else {
page_lock(p2);
page_lock(p1);
}
}
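/*
 * Both locking helpers above enforce one global rule: page locks are
 * always acquired in ascending page-index order, so no two threads can
 * ever wait on each other. A minimal sketch with hypothetical names:
 */
#include <pthread.h>
#include <stddef.h>

static void lock_two_ordered(pthread_mutex_t *locks, size_t i, size_t j)
{
    if (i == j) {
        pthread_mutex_lock(&locks[i]); /* same page: single lock */
    } else if (i < j) {
        pthread_mutex_lock(&locks[i]);
        pthread_mutex_lock(&locks[j]);
    } else {
        pthread_mutex_lock(&locks[j]);
        pthread_mutex_lock(&locks[i]);
    }
}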
/* lock the page(s) of a TB in the correct acquisition order */
static void page_lock_tb(const TranslationBlock *tb)
{
page_lock_pair(NULL, tb_page_addr0(tb), NULL, tb_page_addr1(tb), false);
}
static void page_unlock_tb(const TranslationBlock *tb)
{
PageDesc *p1 = page_find(tb_page_addr0(tb) >> TARGET_PAGE_BITS);
page_unlock(p1);
if (unlikely(tb_page_addr1(tb) != -1)) {
PageDesc *p2 = page_find(tb_page_addr1(tb) >> TARGET_PAGE_BITS);
if (p2 != p1) {
page_unlock(p2);
}
}
tb_page_remove(page_find_alloc(pindex0, false), tb);
}
#endif /* CONFIG_USER_ONLY */
@@ -955,16 +925,18 @@ static void tb_phys_invalidate__locked(TranslationBlock *tb)
void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
{
if (page_addr == -1 && tb_page_addr0(tb) != -1) {
tb_lock_pages(tb);
page_lock_tb(tb);
do_tb_phys_invalidate(tb, true);
tb_unlock_pages(tb);
page_unlock_tb(tb);
} else {
do_tb_phys_invalidate(tb, false);
}
}
/*
* Add a new TB and link it to the physical page tables.
* Add a new TB and link it to the physical page tables. phys_page2 is
* (-1) to indicate that only one page contains the TB.
*
* Called with mmap_lock held for user-mode emulation.
*
* Returns a pointer @tb, or a pointer to an existing TB that matches @tb.
@@ -972,29 +944,43 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
* for the same block of guest code that @tb corresponds to. In that case,
* the caller should discard the original @tb, and use instead the returned TB.
*/
TranslationBlock *tb_link_page(TranslationBlock *tb)
TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
tb_page_addr_t phys_page2)
{
PageDesc *p;
PageDesc *p2 = NULL;
void *existing_tb = NULL;
uint32_t h;
assert_memory_lock();
tcg_debug_assert(!(tb->cflags & CF_INVALID));
tb_record(tb);
/*
* Add the TB to the page list, acquiring first the pages's locks.
* We keep the locks held until after inserting the TB in the hash table,
* so that if the insertion fails we know for sure that the TBs are still
* in the page descriptors.
* Note that inserting into the hash table first isn't an option, since
* we can only insert TBs that are fully initialized.
*/
page_lock_pair(&p, phys_pc, &p2, phys_page2, true);
tb_record(tb, p, p2);
/* add in the hash table */
h = tb_hash_func(tb_page_addr0(tb), (tb->cflags & CF_PCREL ? 0 : tb->pc),
h = tb_hash_func(phys_pc, (tb->cflags & CF_PCREL ? 0 : tb->pc),
tb->flags, tb->cs_base, tb->cflags);
qht_insert(&tb_ctx.htable, tb, h, &existing_tb);
/* remove TB from the page(s) if we couldn't insert it */
if (unlikely(existing_tb)) {
tb_remove(tb);
tb_unlock_pages(tb);
return existing_tb;
tb = existing_tb;
}
tb_unlock_pages(tb);
if (p2 && p2 != p) {
page_unlock(p2);
}
page_unlock(p);
return tb;
}
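/*
 * Caller contract, sketched for the single-argument form of
 * tb_link_page (cf. its use in tb_gen_code further down):
 *
 *     existing_tb = tb_link_page(tb);
 *     if (existing_tb != tb) {
 *         // another thread won the qht_insert race: discard the code
 *         // we just generated and execute existing_tb instead
 *     }
 */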
@@ -1021,7 +1007,7 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t last)
* Called with mmap_lock held for user-mode emulation
* NOTE: this function must not be called while a TB is running.
*/
static void tb_invalidate_phys_page(tb_page_addr_t addr)
void tb_invalidate_phys_page(tb_page_addr_t addr)
{
tb_page_addr_t start, last;
@@ -1160,6 +1146,28 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
#endif
}
/*
* Invalidate all TBs which intersect with the target physical
* address page @addr.
*/
void tb_invalidate_phys_page(tb_page_addr_t addr)
{
struct page_collection *pages;
tb_page_addr_t start, last;
PageDesc *p;
p = page_find(addr >> TARGET_PAGE_BITS);
if (p == NULL) {
return;
}
start = addr & TARGET_PAGE_MASK;
last = addr | ~TARGET_PAGE_MASK;
pages = page_collection_lock(start, last);
tb_invalidate_phys_page_range__locked(pages, p, start, last, 0);
page_collection_unlock(pages);
}
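/*
 * The start/last computation above selects exactly one target page.
 * For example, with TARGET_PAGE_MASK == ~0xfff and addr == 0x1234:
 *     start = 0x1234 & ~0xfff = 0x1000
 *     last  = 0x1234 |  0xfff = 0x1fff
 */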
/*
* Invalidate all TBs which intersect with the target physical address range
* [start;last]. NOTE: start and end may refer to *different* physical pages.

View File

@@ -111,24 +111,24 @@ void icount_prepare_for_run(CPUState *cpu, int64_t cpu_budget)
* each vCPU execution. However u16.high can be raised
* asynchronously by cpu_exit/cpu_interrupt/tcg_handle_interrupt
*/
g_assert(cpu->neg.icount_decr.u16.low == 0);
g_assert(cpu_neg(cpu)->icount_decr.u16.low == 0);
g_assert(cpu->icount_extra == 0);
replay_mutex_lock();
cpu->icount_budget = MIN(icount_get_limit(), cpu_budget);
insns_left = MIN(0xffff, cpu->icount_budget);
cpu->neg.icount_decr.u16.low = insns_left;
cpu_neg(cpu)->icount_decr.u16.low = insns_left;
cpu->icount_extra = cpu->icount_budget - insns_left;
if (cpu->icount_budget == 0) {
/*
* We're called without the BQL, so must take it while
* We're called without the iothread lock, so must take it while
* we're calling timer handlers.
*/
bql_lock();
qemu_mutex_lock_iothread();
icount_notify_aio_contexts();
bql_unlock();
qemu_mutex_unlock_iothread();
}
}
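/*
 * Worked example of the budget split above: if icount_get_limit()
 * returns 100000 and cpu_budget is 70000, then
 *     icount_budget = MIN(100000, 70000) = 70000
 *     insns_left    = MIN(0xffff, 70000) = 65535  -> icount_decr.u16.low
 *     icount_extra  = 70000 - 65535      = 4465
 * The 16-bit counter decrements as instructions execute; icount_extra
 * refills it once it reaches zero.
 */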
@@ -138,7 +138,7 @@ void icount_process_data(CPUState *cpu)
icount_update(cpu);
/* Reset the counters */
cpu->neg.icount_decr.u16.low = 0;
cpu_neg(cpu)->icount_decr.u16.low = 0;
cpu->icount_extra = 0;
cpu->icount_budget = 0;
@@ -153,7 +153,7 @@ void icount_handle_interrupt(CPUState *cpu, int mask)
tcg_handle_interrupt(cpu, mask);
if (qemu_cpu_is_self(cpu) &&
!cpu->neg.can_do_io
!cpu->can_do_io
&& (mask & ~old_mask) != 0) {
cpu_abort(cpu, "Raised interrupt while not in I/O function");
}

View File

@@ -32,7 +32,7 @@
#include "qemu/guest-random.h"
#include "exec/exec-all.h"
#include "hw/boards.h"
#include "tcg/startup.h"
#include "tcg/tcg.h"
#include "tcg-accel-ops.h"
#include "tcg-accel-ops-mttcg.h"
@@ -76,11 +76,11 @@ static void *mttcg_cpu_thread_fn(void *arg)
rcu_add_force_rcu_notifier(&force_rcu.notifier);
tcg_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->neg.can_do_io = true;
cpu->can_do_io = 1;
current_cpu = cpu;
cpu_thread_signal_created(cpu);
qemu_guest_random_seed_thread_part2(cpu->random_seed);
@@ -91,23 +91,28 @@ static void *mttcg_cpu_thread_fn(void *arg)
do {
if (cpu_can_run(cpu)) {
int r;
bql_unlock();
r = tcg_cpu_exec(cpu);
bql_lock();
qemu_mutex_unlock_iothread();
r = tcg_cpus_exec(cpu);
qemu_mutex_lock_iothread();
switch (r) {
case EXCP_DEBUG:
cpu_handle_guest_debug(cpu);
break;
case EXCP_HALTED:
/*
* Usually cpu->halted is set, but may have already been
* reset by another thread by the time we arrive here.
* during start-up the vCPU is reset and the thread is
* kicked several times. If we don't ensure we go back
* to sleep in the halted state we won't cleanly
* start-up when the vCPU is enabled.
*
* cpu->halted should ensure we sleep in wait_io_event
*/
g_assert(cpu->halted);
break;
case EXCP_ATOMIC:
bql_unlock();
qemu_mutex_unlock_iothread();
cpu_exec_step_atomic(cpu);
bql_lock();
qemu_mutex_lock_iothread();
default:
/* Ignore everything else? */
break;
@@ -118,8 +123,8 @@ static void *mttcg_cpu_thread_fn(void *arg)
qemu_wait_io_event(cpu);
} while (!cpu->unplug || cpu_can_run(cpu));
tcg_cpu_destroy(cpu);
bql_unlock();
tcg_cpus_destroy(cpu);
qemu_mutex_unlock_iothread();
rcu_remove_force_rcu_notifier(&force_rcu.notifier);
rcu_unregister_thread();
return NULL;

View File

@@ -32,7 +32,7 @@
#include "qemu/notify.h"
#include "qemu/guest-random.h"
#include "exec/exec-all.h"
#include "tcg/startup.h"
#include "tcg/tcg.h"
#include "tcg-accel-ops.h"
#include "tcg-accel-ops-rr.h"
#include "tcg-accel-ops-icount.h"
@@ -109,9 +109,9 @@ static void rr_wait_io_event(void)
{
CPUState *cpu;
while (all_cpu_threads_idle() && replay_can_wait()) {
while (all_cpu_threads_idle()) {
rr_stop_kick_timer();
qemu_cond_wait_bql(first_cpu->halt_cond);
qemu_cond_wait_iothread(first_cpu->halt_cond);
}
rr_start_kick_timer();
@@ -131,7 +131,7 @@ static void rr_deal_with_unplugged_cpus(void)
CPU_FOREACH(cpu) {
if (cpu->unplug && !cpu_can_run(cpu)) {
tcg_cpu_destroy(cpu);
tcg_cpus_destroy(cpu);
break;
}
}
@@ -188,17 +188,17 @@ static void *rr_cpu_thread_fn(void *arg)
rcu_add_force_rcu_notifier(&force_rcu);
tcg_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->neg.can_do_io = true;
cpu->can_do_io = 1;
cpu_thread_signal_created(cpu);
qemu_guest_random_seed_thread_part2(cpu->random_seed);
/* wait for initial kick-off after machine start */
while (first_cpu->stopped) {
qemu_cond_wait_bql(first_cpu->halt_cond);
qemu_cond_wait_iothread(first_cpu->halt_cond);
/* process any pending work */
CPU_FOREACH(cpu) {
@@ -218,9 +218,9 @@ static void *rr_cpu_thread_fn(void *arg)
/* Only used for icount_enabled() */
int64_t cpu_budget = 0;
bql_unlock();
qemu_mutex_unlock_iothread();
replay_mutex_lock();
bql_lock();
qemu_mutex_lock_iothread();
if (icount_enabled()) {
int cpu_count = rr_cpu_count();
@@ -254,23 +254,23 @@ static void *rr_cpu_thread_fn(void *arg)
if (cpu_can_run(cpu)) {
int r;
bql_unlock();
qemu_mutex_unlock_iothread();
if (icount_enabled()) {
icount_prepare_for_run(cpu, cpu_budget);
}
r = tcg_cpu_exec(cpu);
r = tcg_cpus_exec(cpu);
if (icount_enabled()) {
icount_process_data(cpu);
}
bql_lock();
qemu_mutex_lock_iothread();
if (r == EXCP_DEBUG) {
cpu_handle_guest_debug(cpu);
break;
} else if (r == EXCP_ATOMIC) {
bql_unlock();
qemu_mutex_unlock_iothread();
cpu_exec_step_atomic(cpu);
bql_lock();
qemu_mutex_lock_iothread();
break;
}
} else if (cpu->stop) {
@@ -334,7 +334,7 @@ void rr_start_vcpu_thread(CPUState *cpu)
cpu->thread = single_tcg_cpu_thread;
cpu->halt_cond = single_tcg_halt_cond;
cpu->thread_id = first_cpu->thread_id;
cpu->neg.can_do_io = 1;
cpu->can_do_io = 1;
cpu->created = true;
}
}

View File

@@ -34,7 +34,6 @@
#include "qemu/timer.h"
#include "exec/exec-all.h"
#include "exec/hwaddr.h"
#include "exec/tb-flush.h"
#include "exec/gdbstub.h"
#include "tcg-accel-ops.h"
@@ -63,12 +62,12 @@ void tcg_cpu_init_cflags(CPUState *cpu, bool parallel)
cpu->tcg_cflags |= cflags;
}
void tcg_cpu_destroy(CPUState *cpu)
void tcg_cpus_destroy(CPUState *cpu)
{
cpu_thread_signal_destroyed(cpu);
}
int tcg_cpu_exec(CPUState *cpu)
int tcg_cpus_exec(CPUState *cpu)
{
int ret;
assert(tcg_enabled());
@@ -78,17 +77,10 @@ int tcg_cpu_exec(CPUState *cpu)
return ret;
}
static void tcg_cpu_reset_hold(CPUState *cpu)
{
tcg_flush_jmp_cache(cpu);
tlb_flush(cpu);
}
/* mask must never be zero, except for A20 change call */
void tcg_handle_interrupt(CPUState *cpu, int mask)
{
g_assert(bql_locked());
g_assert(qemu_mutex_iothread_locked());
cpu->interrupt_request |= mask;
@@ -99,7 +91,7 @@ void tcg_handle_interrupt(CPUState *cpu, int mask)
if (!qemu_cpu_is_self(cpu)) {
qemu_cpu_kick(cpu);
} else {
qatomic_set(&cpu->neg.icount_decr.u16.high, -1);
qatomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1);
}
}
@@ -213,7 +205,6 @@ static void tcg_accel_ops_init(AccelOpsClass *ops)
}
}
ops->cpu_reset_hold = tcg_cpu_reset_hold;
ops->supports_guest_debug = tcg_supports_guest_debug;
ops->insert_breakpoint = tcg_insert_breakpoint;
ops->remove_breakpoint = tcg_remove_breakpoint;

View File

@@ -14,8 +14,8 @@
#include "sysemu/cpus.h"
void tcg_cpu_destroy(CPUState *cpu);
int tcg_cpu_exec(CPUState *cpu);
void tcg_cpus_destroy(CPUState *cpu);
int tcg_cpus_exec(CPUState *cpu);
void tcg_handle_interrupt(CPUState *cpu, int mask);
void tcg_cpu_init_cflags(CPUState *cpu, bool parallel);

View File

@@ -27,7 +27,7 @@
#include "sysemu/tcg.h"
#include "exec/replay-core.h"
#include "sysemu/cpu-timers.h"
#include "tcg/startup.h"
#include "tcg/tcg.h"
#include "tcg/oversized-guest.h"
#include "qapi/error.h"
#include "qemu/error-report.h"
@@ -38,7 +38,7 @@
#if !defined(CONFIG_USER_ONLY)
#include "hw/boards.h"
#endif
#include "internal-target.h"
#include "internal.h"
struct TCGState {
AccelState parent_obj;
@@ -121,7 +121,7 @@ static int tcg_init_machine(MachineState *ms)
* There's no guest base to take into account, so go ahead and
* initialize the prologue now.
*/
tcg_prologue_init();
tcg_prologue_init(tcg_ctx);
#endif
return 0;
@@ -227,8 +227,6 @@ static void tcg_accel_class_init(ObjectClass *oc, void *data)
AccelClass *ac = ACCEL_CLASS(oc);
ac->name = "tcg";
ac->init_machine = tcg_init_machine;
ac->cpu_common_realize = tcg_exec_realizefn;
ac->cpu_common_unrealize = tcg_exec_unrealizefn;
ac->allowed = &tcg_allowed;
ac->gdbstub_supported_sstep_flags = tcg_gdbstub_supported_sstep_flags;

View File

@@ -1042,32 +1042,6 @@ DO_CMP2(64)
#undef DO_CMP1
#undef DO_CMP2
#define DO_CMP1(NAME, TYPE, OP) \
void HELPER(NAME)(void *d, void *a, uint64_t b64, uint32_t desc) \
{ \
intptr_t oprsz = simd_oprsz(desc); \
TYPE inv = simd_data(desc), b = b64; \
for (intptr_t i = 0; i < oprsz; i += sizeof(TYPE)) { \
*(TYPE *)(d + i) = -((*(TYPE *)(a + i) OP b) ^ inv); \
} \
clear_high(d, oprsz, desc); \
}
#define DO_CMP2(SZ) \
DO_CMP1(gvec_eqs##SZ, uint##SZ##_t, ==) \
DO_CMP1(gvec_lts##SZ, int##SZ##_t, <) \
DO_CMP1(gvec_les##SZ, int##SZ##_t, <=) \
DO_CMP1(gvec_ltus##SZ, uint##SZ##_t, <) \
DO_CMP1(gvec_leus##SZ, uint##SZ##_t, <=)
DO_CMP2(8)
DO_CMP2(16)
DO_CMP2(32)
DO_CMP2(64)
#undef DO_CMP1
#undef DO_CMP2
void HELPER(gvec_ssadd8)(void *d, void *a, void *b, uint32_t desc)
{
intptr_t oprsz = simd_oprsz(desc);

View File

@@ -58,7 +58,7 @@ DEF_HELPER_FLAGS_5(atomic_cmpxchgq_be, TCG_CALL_NO_WG,
DEF_HELPER_FLAGS_5(atomic_cmpxchgq_le, TCG_CALL_NO_WG,
i64, env, i64, i64, i64, i32)
#endif
#if HAVE_CMPXCHG128
#ifdef CONFIG_CMPXCHG128
DEF_HELPER_FLAGS_5(atomic_cmpxchgo_be, TCG_CALL_NO_WG,
i128, env, i64, i128, i128, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgo_le, TCG_CALL_NO_WG,
@@ -297,29 +297,4 @@ DEF_HELPER_FLAGS_4(gvec_leu16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_leu32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_leu64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_eqs8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_eqs16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_eqs32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_eqs64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_lts8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_lts16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_lts32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_lts64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_les8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_les16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_les32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_les64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_ltus8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_ltus16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_ltus32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_ltus64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_leus8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_leus16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_leus32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_leus64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_5(gvec_bitsel, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)

View File

@@ -61,9 +61,8 @@
#include "tb-jmp-cache.h"
#include "tb-hash.h"
#include "tb-context.h"
#include "internal-common.h"
#include "internal-target.h"
#include "tcg/perf.h"
#include "internal.h"
#include "perf.h"
#include "tcg/insn-start-words.h"
TBContext tb_ctx;
@@ -215,7 +214,7 @@ void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
* Reset the cycle counter to the start of the block and
* shift it to the number of actually executed instructions.
*/
cpu->neg.icount_decr.u16.low += insns_left;
cpu_neg(cpu)->icount_decr.u16.low += insns_left;
}
cpu->cc->tcg_ops->restore_state_to_opc(cpu, tb, data);
@@ -256,6 +255,7 @@ bool cpu_unwind_state_data(CPUState *cpu, uintptr_t host_pc, uint64_t *data)
void page_init(void)
{
page_size_init();
page_table_config_init();
}
@@ -288,9 +288,9 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
vaddr pc, uint64_t cs_base,
uint32_t flags, int cflags)
{
CPUArchState *env = cpu_env(cpu);
CPUArchState *env = cpu->env_ptr;
TranslationBlock *tb, *existing_tb;
tb_page_addr_t phys_pc, phys_p2;
tb_page_addr_t phys_pc;
tcg_insn_unit *gen_code_buf;
int gen_code_size, search_size, max_insns;
int64_t ti;
@@ -303,7 +303,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
if (phys_pc == -1) {
/* Generate a one-shot TB with 1 insn in it */
cflags = (cflags & ~CF_COUNT_MASK) | 1;
cflags = (cflags & ~CF_COUNT_MASK) | CF_LAST_IO | 1;
}
max_insns = cflags & CF_COUNT_MASK;
@@ -313,7 +313,6 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
QEMU_BUILD_BUG_ON(CF_COUNT_MASK + 1 != TCG_MAX_INSNS);
buffer_overflow:
assert_no_pages_locked();
tb = tcg_tb_alloc(tcg_ctx);
if (unlikely(!tb)) {
/* flush must be done */
@@ -334,16 +333,14 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
tb->cflags = cflags;
tb_set_page_addr0(tb, phys_pc);
tb_set_page_addr1(tb, -1);
if (phys_pc != -1) {
tb_lock_page0(phys_pc);
}
tcg_ctx->gen_tb = tb;
tcg_ctx->addr_type = TARGET_LONG_BITS == 32 ? TCG_TYPE_I32 : TCG_TYPE_I64;
#ifdef CONFIG_SOFTMMU
tcg_ctx->page_bits = TARGET_PAGE_BITS;
tcg_ctx->page_mask = TARGET_PAGE_MASK;
tcg_ctx->tlb_dyn_max_bits = CPU_TLB_DYN_MAX_BITS;
tcg_ctx->tlb_fast_offset =
(int)offsetof(ArchCPU, neg.tlb.f) - (int)offsetof(ArchCPU, env);
#endif
tcg_ctx->insn_start_words = TARGET_INSN_START_WORDS;
#ifdef TCG_GUEST_DEFAULT_MO
@@ -352,7 +349,8 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
tcg_ctx->guest_mo = TCG_MO_ALL;
#endif
restart_translate:
tb_overflow:
trace_translate_block(tb, pc, tb->tc.ptr);
gen_code_size = setjmp_gen_code(env, tb, pc, host_pc, &max_insns, &ti);
@@ -371,8 +369,6 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
qemu_log_mask(CPU_LOG_TB_OP | CPU_LOG_TB_OP_OPT,
"Restarting code generation for "
"code_gen_buffer overflow\n");
tb_unlock_pages(tb);
tcg_ctx->gen_tb = NULL;
goto buffer_overflow;
case -2:
@@ -391,39 +387,14 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
"Restarting code generation with "
"smaller translation block (max %d insns)\n",
max_insns);
/*
* The half-sized TB may not cross pages.
* TODO: Fix all targets that cross pages except with
* the first insn, at which point this can't be reached.
*/
phys_p2 = tb_page_addr1(tb);
if (unlikely(phys_p2 != -1)) {
tb_unlock_page1(phys_pc, phys_p2);
tb_set_page_addr1(tb, -1);
}
goto restart_translate;
case -3:
/*
* We had a page lock ordering problem. In order to avoid
* deadlock we had to drop the lock on page0, which means
* that everything we translated so far is compromised.
* Restart with locks held on both pages.
*/
qemu_log_mask(CPU_LOG_TB_OP | CPU_LOG_TB_OP_OPT,
"Restarting code generation with re-locked pages");
goto restart_translate;
goto tb_overflow;
default:
g_assert_not_reached();
}
}
tcg_ctx->gen_tb = NULL;
search_size = encode_search(tb, (void *)gen_code_buf + gen_code_size);
if (unlikely(search_size < 0)) {
tb_unlock_pages(tb);
goto buffer_overflow;
}
tb->tc.size = gen_code_size;
@@ -533,7 +504,6 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
* before attempting to link to other TBs or add to the lookup table.
*/
if (tb_page_addr0(tb) == -1) {
assert_no_pages_locked();
return tb;
}
@@ -548,9 +518,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
* No explicit memory barrier is required -- tb_link_page() makes the
* TB visible in a consistent state.
*/
existing_tb = tb_link_page(tb);
assert_no_pages_locked();
existing_tb = tb_link_page(tb, tb_page_addr0(tb), tb_page_addr1(tb));
/* if the TB already exists, discard what we just translated */
if (unlikely(existing_tb != tb)) {
uintptr_t orig_aligned = (uintptr_t)gen_code_buf;
@@ -578,7 +546,7 @@ void tb_check_watchpoint(CPUState *cpu, uintptr_t retaddr)
} else {
/* The exception probably happened in a helper. The CPU state should
have been saved before calling it. Fetch the PC from there. */
CPUArchState *env = cpu_env(cpu);
CPUArchState *env = cpu->env_ptr;
vaddr pc;
uint64_t cs_base;
tb_page_addr_t addr;
@@ -621,7 +589,7 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
cc = CPU_GET_CLASS(cpu);
if (cc->tcg_ops->io_recompile_replay_branch &&
cc->tcg_ops->io_recompile_replay_branch(cpu, tb)) {
cpu->neg.icount_decr.u16.low++;
cpu_neg(cpu)->icount_decr.u16.low++;
n = 2;
}
@@ -631,12 +599,12 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
* operations only (which execute after completion) so we don't
* double instrument the instruction.
*/
cpu->cflags_next_tb = curr_cflags(cpu) | CF_MEMI_ONLY | n;
cpu->cflags_next_tb = curr_cflags(cpu) | CF_MEMI_ONLY | CF_LAST_IO | n;
if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
vaddr pc = log_pc(cpu, tb);
if (qemu_log_in_addr_range(pc)) {
qemu_log("cpu_io_recompile: rewound execution of TB to %016"
qemu_log("cpu_io_recompile: rewound execution of TB to %"
VADDR_PRIx "\n", pc);
}
}
@@ -644,13 +612,140 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
cpu_loop_exit_noexc(cpu);
}
static void print_qht_statistics(struct qht_stats hst, GString *buf)
{
uint32_t hgram_opts;
size_t hgram_bins;
char *hgram;
if (!hst.head_buckets) {
return;
}
g_string_append_printf(buf, "TB hash buckets %zu/%zu "
"(%0.2f%% head buckets used)\n",
hst.used_head_buckets, hst.head_buckets,
(double)hst.used_head_buckets /
hst.head_buckets * 100);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_opts |= QDIST_PR_100X | QDIST_PR_PERCENT;
if (qdist_xmax(&hst.occupancy) - qdist_xmin(&hst.occupancy) == 1) {
hgram_opts |= QDIST_PR_NODECIMAL;
}
hgram = qdist_pr(&hst.occupancy, 10, hgram_opts);
g_string_append_printf(buf, "TB hash occupancy %0.2f%% avg chain occ. "
"Histogram: %s\n",
qdist_avg(&hst.occupancy) * 100, hgram);
g_free(hgram);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_bins = qdist_xmax(&hst.chain) - qdist_xmin(&hst.chain);
if (hgram_bins > 10) {
hgram_bins = 10;
} else {
hgram_bins = 0;
hgram_opts |= QDIST_PR_NODECIMAL | QDIST_PR_NOBINRANGE;
}
hgram = qdist_pr(&hst.chain, hgram_bins, hgram_opts);
g_string_append_printf(buf, "TB hash avg chain %0.3f buckets. "
"Histogram: %s\n",
qdist_avg(&hst.chain), hgram);
g_free(hgram);
}
struct tb_tree_stats {
size_t nb_tbs;
size_t host_size;
size_t target_size;
size_t max_target_size;
size_t direct_jmp_count;
size_t direct_jmp2_count;
size_t cross_page;
};
static gboolean tb_tree_stats_iter(gpointer key, gpointer value, gpointer data)
{
const TranslationBlock *tb = value;
struct tb_tree_stats *tst = data;
tst->nb_tbs++;
tst->host_size += tb->tc.size;
tst->target_size += tb->size;
if (tb->size > tst->max_target_size) {
tst->max_target_size = tb->size;
}
if (tb_page_addr1(tb) != -1) {
tst->cross_page++;
}
if (tb->jmp_reset_offset[0] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp_count++;
if (tb->jmp_reset_offset[1] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp2_count++;
}
}
return false;
}
void dump_exec_info(GString *buf)
{
struct tb_tree_stats tst = {};
struct qht_stats hst;
size_t nb_tbs, flush_full, flush_part, flush_elide;
tcg_tb_foreach(tb_tree_stats_iter, &tst);
nb_tbs = tst.nb_tbs;
/* XXX: avoid using doubles ? */
g_string_append_printf(buf, "Translation buffer state:\n");
/*
* Report total code size including the padding and TB structs;
* otherwise users might think "-accel tcg,tb-size" is not honoured.
* For avg host size we use the precise numbers from tb_tree_stats though.
*/
g_string_append_printf(buf, "gen code size %zu/%zu\n",
tcg_code_size(), tcg_code_capacity());
g_string_append_printf(buf, "TB count %zu\n", nb_tbs);
g_string_append_printf(buf, "TB avg target size %zu max=%zu bytes\n",
nb_tbs ? tst.target_size / nb_tbs : 0,
tst.max_target_size);
g_string_append_printf(buf, "TB avg host size %zu bytes "
"(expansion ratio: %0.1f)\n",
nb_tbs ? tst.host_size / nb_tbs : 0,
tst.target_size ?
(double)tst.host_size / tst.target_size : 0);
g_string_append_printf(buf, "cross page TB count %zu (%zu%%)\n",
tst.cross_page,
nb_tbs ? (tst.cross_page * 100) / nb_tbs : 0);
g_string_append_printf(buf, "direct jump count %zu (%zu%%) "
"(2 jumps=%zu %zu%%)\n",
tst.direct_jmp_count,
nb_tbs ? (tst.direct_jmp_count * 100) / nb_tbs : 0,
tst.direct_jmp2_count,
nb_tbs ? (tst.direct_jmp2_count * 100) / nb_tbs : 0);
qht_statistics_init(&tb_ctx.htable, &hst);
print_qht_statistics(hst, buf);
qht_statistics_destroy(&hst);
g_string_append_printf(buf, "\nStatistics:\n");
g_string_append_printf(buf, "TB flush count %u\n",
qatomic_read(&tb_ctx.tb_flush_count));
g_string_append_printf(buf, "TB invalidate count %u\n",
qatomic_read(&tb_ctx.tb_phys_invalidate_count));
tlb_flush_counts(&flush_full, &flush_part, &flush_elide);
g_string_append_printf(buf, "TLB full flushes %zu\n", flush_full);
g_string_append_printf(buf, "TLB partial flushes %zu\n", flush_part);
g_string_append_printf(buf, "TLB elided flushes %zu\n", flush_elide);
tcg_dump_info(buf);
}
#else /* CONFIG_USER_ONLY */
void cpu_interrupt(CPUState *cpu, int mask)
{
g_assert(bql_locked());
g_assert(qemu_mutex_iothread_locked());
cpu->interrupt_request |= mask;
qatomic_set(&cpu->neg.icount_decr.u16.high, -1);
qatomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1);
}
#endif /* CONFIG_USER_ONLY */
@@ -672,3 +767,11 @@ void tcg_flush_jmp_cache(CPUState *cpu)
qatomic_set(&jc->array[i].tb, NULL);
}
}
/* This is a wrapper for common code that can not use CONFIG_SOFTMMU */
void tcg_flush_softmmu_tlb(CPUState *cs)
{
#ifdef CONFIG_SOFTMMU
tlb_flush(cs);
#endif
}

View File

@@ -12,25 +12,30 @@
#include "qemu/error-report.h"
#include "exec/exec-all.h"
#include "exec/translator.h"
#include "exec/translate-all.h"
#include "exec/plugin-gen.h"
#include "tcg/tcg-op-common.h"
#include "internal-target.h"
static void set_can_do_io(DisasContextBase *db, bool val)
static void gen_io_start(void)
{
if (db->saved_can_do_io != val) {
db->saved_can_do_io = val;
QEMU_BUILD_BUG_ON(sizeof_field(CPUState, neg.can_do_io) != 1);
tcg_gen_st8_i32(tcg_constant_i32(val), tcg_env,
offsetof(ArchCPU, parent_obj.neg.can_do_io) -
offsetof(ArchCPU, env));
}
tcg_gen_st_i32(tcg_constant_i32(1), cpu_env,
offsetof(ArchCPU, parent_obj.can_do_io) -
offsetof(ArchCPU, env));
}
bool translator_io_start(DisasContextBase *db)
{
set_can_do_io(db, true);
uint32_t cflags = tb_cflags(db->tb);
if (!(cflags & CF_USE_ICOUNT)) {
return false;
}
if (db->num_insns == db->max_insns && (cflags & CF_LAST_IO)) {
/* Already started in translator_loop. */
return true;
}
gen_io_start();
/*
* Ensure that this instruction will be the last in the TB.
@@ -42,17 +47,14 @@ bool translator_io_start(DisasContextBase *db)
return true;
}
static TCGOp *gen_tb_start(DisasContextBase *db, uint32_t cflags)
static TCGOp *gen_tb_start(uint32_t cflags)
{
TCGv_i32 count = NULL;
TCGv_i32 count = tcg_temp_new_i32();
TCGOp *icount_start_insn = NULL;
if ((cflags & CF_USE_ICOUNT) || !(cflags & CF_NOIRQ)) {
count = tcg_temp_new_i32();
tcg_gen_ld_i32(count, tcg_env,
offsetof(ArchCPU, parent_obj.neg.icount_decr.u32)
- offsetof(ArchCPU, env));
}
tcg_gen_ld_i32(count, cpu_env,
offsetof(ArchCPU, neg.icount_decr.u32) -
offsetof(ArchCPU, env));
if (cflags & CF_USE_ICOUNT) {
/*
@@ -79,18 +81,21 @@ static TCGOp *gen_tb_start(DisasContextBase *db, uint32_t cflags)
}
if (cflags & CF_USE_ICOUNT) {
tcg_gen_st16_i32(count, tcg_env,
offsetof(ArchCPU, parent_obj.neg.icount_decr.u16.low)
- offsetof(ArchCPU, env));
tcg_gen_st16_i32(count, cpu_env,
offsetof(ArchCPU, neg.icount_decr.u16.low) -
offsetof(ArchCPU, env));
/*
* cpu->can_do_io is cleared automatically here at the beginning of
* each translation block. The cost is minimal and only paid for
* -icount, plus it would be very easy to forget doing it in the
* translator. Doing it here means we don't need a gen_io_end() to
* go with gen_io_start().
*/
tcg_gen_st_i32(tcg_constant_i32(0), cpu_env,
offsetof(ArchCPU, parent_obj.can_do_io) -
offsetof(ArchCPU, env));
}
/*
* cpu->neg.can_do_io is set automatically here at the beginning of
* each translation block. The cost is minimal, plus it would be
* very easy to forget doing it in the translator.
*/
set_can_do_io(db, db->max_insns == 1);
return icount_start_insn;
}
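/*
 * Schematically, the icount prologue emitted above is:
 *
 *     count = ld_i32  icount_decr.u32      // low half holds the budget
 *     count = sub_i32 count, <num_insns>   // dummy operand, see below
 *     brcond_i32 count < 0 -> exit_tb      // budget exhausted: leave TB
 *     st16_i32 count -> icount_decr.u16.low
 *
 * The subtraction starts with a placeholder operand (the TCGOp returned
 * as icount_start_insn) and is back-patched in gen_tb_end once the
 * final instruction count of the TB is known.
 */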
@@ -139,20 +144,22 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
db->num_insns = 0;
db->max_insns = *max_insns;
db->singlestep_enabled = cflags & CF_SINGLE_STEP;
db->saved_can_do_io = -1;
db->host_addr[0] = host_pc;
db->host_addr[1] = NULL;
#ifdef CONFIG_USER_ONLY
page_protect(pc);
#endif
ops->init_disas_context(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
/* Start translating. */
icount_start_insn = gen_tb_start(db, cflags);
icount_start_insn = gen_tb_start(cflags);
ops->tb_start(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
plugin_enabled = plugin_gen_tb_start(cpu, db, cflags & CF_MEMI_ONLY);
db->plugin_enabled = plugin_enabled;
while (true) {
*max_insns = ++db->num_insns;
@@ -163,17 +170,19 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
plugin_gen_insn_start(cpu, db);
}
/*
* Disassemble one instruction. The translate_insn hook should
* update db->pc_next and db->is_jmp to indicate what should be
* done next -- either exiting this loop or locating the start of
* the next instruction.
*/
if (db->num_insns == db->max_insns) {
/* Disassemble one instruction. The translate_insn hook should
update db->pc_next and db->is_jmp to indicate what should be
done next -- either exiting this loop or locating the start of
the next instruction. */
if (db->num_insns == db->max_insns && (cflags & CF_LAST_IO)) {
/* Accept I/O on the last instruction. */
set_can_do_io(db, true);
gen_io_start();
ops->translate_insn(db, cpu);
} else {
/* we should only see CF_MEMI_ONLY for io_recompile */
tcg_debug_assert(!(cflags & CF_MEMI_ONLY));
ops->translate_insn(db, cpu);
}
ops->translate_insn(db, cpu);
/*
* We can't instrument after instructions that change control
@@ -206,7 +215,7 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
gen_tb_end(tb, cflags, icount_start_insn, db->num_insns);
if (plugin_enabled) {
plugin_gen_tb_end(cpu, db->num_insns);
plugin_gen_tb_end(cpu);
}
/* The disas_log hook may use these values rather than recompute. */
@@ -247,36 +256,22 @@ static void *translator_access(CPUArchState *env, DisasContextBase *db,
host = db->host_addr[1];
base = TARGET_PAGE_ALIGN(db->pc_first);
if (host == NULL) {
tb_page_addr_t page0, old_page1, new_page1;
new_page1 = get_page_addr_code_hostp(env, base, &db->host_addr[1]);
tb_page_addr_t phys_page =
get_page_addr_code_hostp(env, base, &db->host_addr[1]);
/*
* If the second page is MMIO, treat as if the first page
* was MMIO as well, so that we do not cache the TB.
*/
if (unlikely(new_page1 == -1)) {
tb_unlock_pages(tb);
if (unlikely(phys_page == -1)) {
tb_set_page_addr0(tb, -1);
return NULL;
}
/*
* If this is not the first time around, and page1 matches,
* then we already have the page locked. Alternately, we're
* not doing anything to prevent the PTE from changing, so
* we might wind up with a different page, requiring us to
* re-do the locking.
*/
old_page1 = tb_page_addr1(tb);
if (likely(new_page1 != old_page1)) {
page0 = tb_page_addr0(tb);
if (unlikely(old_page1 != -1)) {
tb_unlock_page1(page0, old_page1);
}
tb_set_page_addr1(tb, new_page1);
tb_lock_page1(page0, new_page1);
}
tb_set_page_addr1(tb, phys_page);
#ifdef CONFIG_USER_ONLY
page_protect(end);
#endif
host = db->host_addr[1];
}

View File

@@ -2,6 +2,8 @@
#include "hw/core/cpu.h"
#include "exec/replay-core.h"
bool enable_cpu_pm = false;
void cpu_resume(CPUState *cpu)
{
}
@@ -14,10 +16,6 @@ void qemu_init_vcpu(CPUState *cpu)
{
}
void cpu_exec_reset_hold(CPUState *cpu)
{
}
/* User mode emulation does not support record/replay yet. */
bool replay_exception(void)

View File

@@ -29,8 +29,7 @@
#include "qemu/atomic128.h"
#include "trace/trace-root.h"
#include "tcg/tcg-ldst.h"
#include "internal-common.h"
#include "internal-target.h"
#include "internal.h"
__thread uintptr_t helper_retaddr;
@@ -145,7 +144,7 @@ typedef struct PageFlagsNode {
static IntervalTreeRoot pageflags_root;
static PageFlagsNode *pageflags_find(target_ulong start, target_ulong last)
static PageFlagsNode *pageflags_find(target_ulong start, target_long last)
{
IntervalTreeNode *n;
@@ -154,7 +153,7 @@ static PageFlagsNode *pageflags_find(target_ulong start, target_ulong last)
}
static PageFlagsNode *pageflags_next(PageFlagsNode *p, target_ulong start,
target_ulong last)
target_long last)
{
IntervalTreeNode *n;
@@ -521,19 +520,19 @@ void page_set_flags(target_ulong start, target_ulong last, int flags)
}
}
bool page_check_range(target_ulong start, target_ulong len, int flags)
int page_check_range(target_ulong start, target_ulong len, int flags)
{
target_ulong last;
int locked; /* tri-state: =0: unlocked, +1: global, -1: local */
bool ret;
int ret;
if (len == 0) {
return true; /* trivial length */
return 0; /* trivial length */
}
last = start + len - 1;
if (last < start) {
return false; /* wrap around */
return -1; /* wrap around */
}
locked = have_mmap_lock();
@@ -552,33 +551,33 @@ bool page_check_range(target_ulong start, target_ulong len, int flags)
p = pageflags_find(start, last);
}
if (!p) {
ret = false; /* entire region invalid */
ret = -1; /* entire region invalid */
break;
}
}
if (start < p->itree.start) {
ret = false; /* initial bytes invalid */
ret = -1; /* initial bytes invalid */
break;
}
missing = flags & ~p->flags;
if (missing & ~PAGE_WRITE) {
ret = false; /* page doesn't match */
if (missing & PAGE_READ) {
ret = -1; /* page not readable */
break;
}
if (missing & PAGE_WRITE) {
if (!(p->flags & PAGE_WRITE_ORG)) {
ret = false; /* page not writable */
ret = -1; /* page not writable */
break;
}
/* Asking about writable, but has been protected: undo. */
if (!page_unprotect(start, 0)) {
ret = false;
ret = -1;
break;
}
/* TODO: page_unprotect should take a range, not a single page. */
if (last - start < TARGET_PAGE_SIZE) {
ret = true; /* ok */
ret = 0; /* ok */
break;
}
start += TARGET_PAGE_SIZE;
@@ -586,7 +585,7 @@ bool page_check_range(target_ulong start, target_ulong len, int flags)
}
if (last <= p->itree.last) {
ret = true; /* ok */
ret = 0; /* ok */
break;
}
start = p->itree.last + 1;
@@ -599,69 +598,20 @@ bool page_check_range(target_ulong start, target_ulong len, int flags)
return ret;
}
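/*
 * Illustrative caller of the newer bool-returning form above (the older
 * variant returns 0 on success and -1 on failure instead):
 *
 *     if (!page_check_range(guest_addr, len, PAGE_READ | PAGE_WRITE)) {
 *         return -TARGET_EFAULT;  // guest buffer not fully accessible
 *     }
 */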
bool page_check_range_empty(target_ulong start, target_ulong last)
{
assert(last >= start);
assert_memory_lock();
return pageflags_find(start, last) == NULL;
}
target_ulong page_find_range_empty(target_ulong min, target_ulong max,
target_ulong len, target_ulong align)
{
target_ulong len_m1, align_m1;
assert(min <= max);
assert(max <= GUEST_ADDR_MAX);
assert(len != 0);
assert(is_power_of_2(align));
assert_memory_lock();
len_m1 = len - 1;
align_m1 = align - 1;
/* Iteratively narrow the search region. */
while (1) {
PageFlagsNode *p;
/* Align min and double-check there's enough space remaining. */
min = (min + align_m1) & ~align_m1;
if (min > max) {
return -1;
}
if (len_m1 > max - min) {
return -1;
}
p = pageflags_find(min, min + len_m1);
if (p == NULL) {
/* Found! */
return min;
}
if (max <= p->itree.last) {
/* Existing allocation fills the remainder of the search region. */
return -1;
}
/* Skip across existing allocation. */
min = p->itree.last + 1;
}
}
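/*
 * Worked example of the narrowing search above. Suppose an existing
 * mapping covers [0x1000, 0x2fff] and we call with min = 0,
 * max = 0xffff, len = 0x2000, align = 0x1000:
 *     pass 1: min = 0x0000; pageflags_find(0x0000, 0x1fff) hits the
 *             mapping, so skip to min = p->itree.last + 1 = 0x3000
 *     pass 2: min = 0x3000; pageflags_find(0x3000, 0x4fff) finds
 *             nothing -> return 0x3000
 */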
void page_protect(tb_page_addr_t address)
{
PageFlagsNode *p;
target_ulong start, last;
int host_page_size = qemu_real_host_page_size();
int prot;
assert_memory_lock();
if (host_page_size <= TARGET_PAGE_SIZE) {
if (qemu_host_page_size <= TARGET_PAGE_SIZE) {
start = address & TARGET_PAGE_MASK;
last = start + TARGET_PAGE_SIZE - 1;
} else {
start = address & -host_page_size;
last = start + host_page_size - 1;
start = address & qemu_host_page_mask;
last = start + qemu_host_page_size - 1;
}
p = pageflags_find(start, last);
@@ -672,7 +622,7 @@ void page_protect(tb_page_addr_t address)
if (unlikely(p->itree.last < last)) {
/* More than one protection region covers the one host page. */
assert(TARGET_PAGE_SIZE < host_page_size);
assert(TARGET_PAGE_SIZE < qemu_host_page_size);
while ((p = pageflags_next(p, start, last)) != NULL) {
prot |= p->flags;
}
@@ -680,7 +630,7 @@ void page_protect(tb_page_addr_t address)
if (prot & PAGE_WRITE) {
pageflags_set_clear(start, last, 0, PAGE_WRITE);
mprotect(g2h_untagged(start), last - start + 1,
mprotect(g2h_untagged(start), qemu_host_page_size,
prot & (PAGE_READ | PAGE_EXEC) ? PROT_READ : PROT_NONE);
}
}
@@ -726,19 +676,18 @@ int page_unprotect(target_ulong address, uintptr_t pc)
}
#endif
} else {
int host_page_size = qemu_real_host_page_size();
target_ulong start, len, i;
int prot;
if (host_page_size <= TARGET_PAGE_SIZE) {
if (qemu_host_page_size <= TARGET_PAGE_SIZE) {
start = address & TARGET_PAGE_MASK;
len = TARGET_PAGE_SIZE;
prot = p->flags | PAGE_WRITE;
pageflags_set_clear(start, start + len - 1, PAGE_WRITE, 0);
current_tb_invalidated = tb_invalidate_phys_page_unwind(start, pc);
} else {
start = address & -host_page_size;
len = host_page_size;
start = address & qemu_host_page_mask;
len = qemu_host_page_size;
prot = 0;
for (i = 0; i < len; i += TARGET_PAGE_SIZE) {
@@ -864,7 +813,7 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, vaddr addr,
typedef struct TargetPageDataNode {
struct rcu_head rcu;
IntervalTreeNode itree;
char data[] __attribute__((aligned));
char data[TPD_PAGES][TARGET_PAGE_DATA_SIZE] __attribute__((aligned));
} TargetPageDataNode;
static IntervalTreeRoot targetdata_root;
@@ -902,8 +851,7 @@ void page_reset_target_data(target_ulong start, target_ulong last)
n_last = MIN(last, n->last);
p_len = (n_last + 1 - n_start) >> TARGET_PAGE_BITS;
memset(t->data + p_ofs * TARGET_PAGE_DATA_SIZE, 0,
p_len * TARGET_PAGE_DATA_SIZE);
memset(t->data[p_ofs], 0, p_len * TARGET_PAGE_DATA_SIZE);
}
}
@@ -911,7 +859,7 @@ void *page_get_target_data(target_ulong address)
{
IntervalTreeNode *n;
TargetPageDataNode *t;
target_ulong page, region, p_ofs;
target_ulong page, region;
page = address & TARGET_PAGE_MASK;
region = address & TBD_MASK;
@@ -927,8 +875,7 @@ void *page_get_target_data(target_ulong address)
mmap_lock();
n = interval_tree_iter_first(&targetdata_root, page, page);
if (!n) {
t = g_malloc0(sizeof(TargetPageDataNode)
+ TPD_PAGES * TARGET_PAGE_DATA_SIZE);
t = g_new0(TargetPageDataNode, 1);
n = &t->itree;
n->start = region;
n->last = region | ~TBD_MASK;
@@ -938,16 +885,15 @@ void *page_get_target_data(target_ulong address)
}
t = container_of(n, TargetPageDataNode, itree);
p_ofs = (page - region) >> TARGET_PAGE_BITS;
return t->data + p_ofs * TARGET_PAGE_DATA_SIZE;
return t->data[(page - region) >> TARGET_PAGE_BITS];
}
#else
void page_reset_target_data(target_ulong start, target_ulong last) { }
#endif /* TARGET_PAGE_DATA_SIZE */
/* The system-mode versions of these helpers are in cputlb.c. */
/* The softmmu versions of these helpers are in cputlb.c. */
static void *cpu_mmu_lookup(CPUState *cpu, vaddr addr,
static void *cpu_mmu_lookup(CPUArchState *env, vaddr addr,
MemOp mop, uintptr_t ra, MMUAccessType type)
{
int a_bits = get_alignment_bits(mop);
@@ -955,39 +901,60 @@ static void *cpu_mmu_lookup(CPUState *cpu, vaddr addr,
/* Enforce guest required alignment. */
if (unlikely(addr & ((1 << a_bits) - 1))) {
cpu_loop_exit_sigbus(cpu, addr, type, ra);
cpu_loop_exit_sigbus(env_cpu(env), addr, type, ra);
}
ret = g2h(cpu, addr);
ret = g2h(env_cpu(env), addr);
set_helper_retaddr(ra);
return ret;
}
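/*
 * The alignment test above masks the low a_bits of the address. For a
 * 4-byte access (a_bits == 2) at addr == 0x1002:
 *     0x1002 & ((1 << 2) - 1) == 0x2, which is nonzero -> SIGBUS
 */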
#include "ldst_atomicity.c.inc"
static uint8_t do_ld1_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType access_type)
static uint8_t do_ld1_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{
void *haddr;
uint8_t ret;
tcg_debug_assert((mop & MO_SIZE) == MO_8);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, get_memop(oi), ra, access_type);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = ldub_p(haddr);
clear_helper_retaddr();
return ret;
}
static uint16_t do_ld2_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType access_type)
tcg_target_ulong helper_ldub_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return do_ld1_mmu(env, addr, get_memop(oi), ra);
}
tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return (int8_t)do_ld1_mmu(env, addr, get_memop(oi), ra);
}
uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint8_t ret = do_ld1_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static uint16_t do_ld2_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{
void *haddr;
uint16_t ret;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_16);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type);
ret = load_atom_2(cpu, ra, haddr, mop);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_2(env, ra, haddr, mop);
clear_helper_retaddr();
if (mop & MO_BSWAP) {
@@ -996,16 +963,36 @@ static uint16_t do_ld2_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
return ret;
}
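/*
 * MO_BSWAP marks an access whose endianness differs from the host's.
 * For the 16-bit load above on a little-endian host reading guest
 * big-endian data: memory bytes {0x12, 0x34} load as 0x3412, and
 * bswap16() recovers the intended 0x1234.
 */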
static uint32_t do_ld4_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType access_type)
tcg_target_ulong helper_lduw_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return do_ld2_mmu(env, addr, get_memop(oi), ra);
}
tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return (int16_t)do_ld2_mmu(env, addr, get_memop(oi), ra);
}
uint16_t cpu_ldw_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint16_t ret = do_ld2_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static uint32_t do_ld4_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{
void *haddr;
uint32_t ret;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_32);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type);
ret = load_atom_4(cpu, ra, haddr, mop);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_4(env, ra, haddr, mop);
clear_helper_retaddr();
if (mop & MO_BSWAP) {
@@ -1014,16 +1001,36 @@ static uint32_t do_ld4_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
return ret;
}
static uint64_t do_ld8_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType access_type)
tcg_target_ulong helper_ldul_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return do_ld4_mmu(env, addr, get_memop(oi), ra);
}
tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return (int32_t)do_ld4_mmu(env, addr, get_memop(oi), ra);
}
uint32_t cpu_ldl_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint32_t ret = do_ld4_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static uint64_t do_ld8_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{
void *haddr;
uint64_t ret;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_64);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type);
ret = load_atom_8(cpu, ra, haddr, mop);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_8(env, ra, haddr, mop);
clear_helper_retaddr();
if (mop & MO_BSWAP) {
@@ -1032,17 +1039,30 @@ static uint64_t do_ld8_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
return ret;
}
static Int128 do_ld16_mmu(CPUState *cpu, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
uint64_t helper_ldq_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return do_ld8_mmu(env, addr, get_memop(oi), ra);
}
uint64_t cpu_ldq_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint64_t ret = do_ld8_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static Int128 do_ld16_mmu(CPUArchState *env, abi_ptr addr,
MemOp mop, uintptr_t ra)
{
void *haddr;
Int128 ret;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_128);
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_16(cpu, ra, haddr, mop);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_16(env, ra, haddr, mop);
clear_helper_retaddr();
if (mop & MO_BSWAP) {
@@ -1051,81 +1071,171 @@ static Int128 do_ld16_mmu(CPUState *cpu, abi_ptr addr,
return ret;
}
static void do_st1_mmu(CPUState *cpu, vaddr addr, uint8_t val,
Int128 helper_ld16_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t ra)
{
return do_ld16_mmu(env, addr, get_memop(oi), ra);
}
Int128 helper_ld_i128(CPUArchState *env, uint64_t addr, MemOpIdx oi)
{
return helper_ld16_mmu(env, addr, oi, GETPC());
}
Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
Int128 ret = do_ld16_mmu(env, addr, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
return ret;
}
static void do_st1_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
MemOp mop, uintptr_t ra)
{
void *haddr;
tcg_debug_assert((mop & MO_SIZE) == MO_8);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, get_memop(oi), ra, MMU_DATA_STORE);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
stb_p(haddr, val);
clear_helper_retaddr();
}
static void do_st2_mmu(CPUState *cpu, vaddr addr, uint16_t val,
MemOpIdx oi, uintptr_t ra)
void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st1_mmu(env, addr, val, get_memop(oi), ra);
}
void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st1_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
static void do_st2_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
MemOp mop, uintptr_t ra)
{
void *haddr;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_16);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) {
val = bswap16(val);
}
store_atom_2(cpu, ra, haddr, mop, val);
store_atom_2(env, ra, haddr, mop, val);
clear_helper_retaddr();
}
static void do_st4_mmu(CPUState *cpu, vaddr addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
void helper_stw_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st2_mmu(env, addr, val, get_memop(oi), ra);
}
void cpu_stw_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st2_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
static void do_st4_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
MemOp mop, uintptr_t ra)
{
void *haddr;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_32);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) {
val = bswap32(val);
}
store_atom_4(cpu, ra, haddr, mop, val);
store_atom_4(env, ra, haddr, mop, val);
clear_helper_retaddr();
}
static void do_st8_mmu(CPUState *cpu, vaddr addr, uint64_t val,
MemOpIdx oi, uintptr_t ra)
void helper_stl_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st4_mmu(env, addr, val, get_memop(oi), ra);
}
void cpu_stl_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st4_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
static void do_st8_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
MemOp mop, uintptr_t ra)
{
void *haddr;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_64);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) {
val = bswap64(val);
}
store_atom_8(cpu, ra, haddr, mop, val);
store_atom_8(env, ra, haddr, mop, val);
clear_helper_retaddr();
}
static void do_st16_mmu(CPUState *cpu, vaddr addr, Int128 val,
MemOpIdx oi, uintptr_t ra)
void helper_stq_mmu(CPUArchState *env, uint64_t addr, uint64_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st8_mmu(env, addr, val, get_memop(oi), ra);
}
void cpu_stq_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
MemOpIdx oi, uintptr_t ra)
{
do_st8_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
static void do_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val,
MemOp mop, uintptr_t ra)
{
void *haddr;
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_128);
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) {
val = bswap128(val);
}
store_atom_16(cpu, ra, haddr, mop, val);
store_atom_16(env, ra, haddr, mop, val);
clear_helper_retaddr();
}
void helper_st16_mmu(CPUArchState *env, uint64_t addr, Int128 val,
MemOpIdx oi, uintptr_t ra)
{
do_st16_mmu(env, addr, val, get_memop(oi), ra);
}
void helper_st_i128(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi)
{
helper_st16_mmu(env, addr, val, oi, GETPC());
}
void cpu_st16_mmu(CPUArchState *env, abi_ptr addr,
Int128 val, MemOpIdx oi, uintptr_t ra)
{
do_st16_mmu(env, addr, val, get_memop(oi), ra);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr ptr)
{
uint32_t ret;
@@ -1172,7 +1282,7 @@ uint8_t cpu_ldb_code_mmu(CPUArchState *env, abi_ptr addr,
void *haddr;
uint8_t ret;
haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_INST_FETCH);
haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_INST_FETCH);
ret = ldub_p(haddr);
clear_helper_retaddr();
return ret;
@@ -1184,7 +1294,7 @@ uint16_t cpu_ldw_code_mmu(CPUArchState *env, abi_ptr addr,
void *haddr;
uint16_t ret;
haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_INST_FETCH);
haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_INST_FETCH);
ret = lduw_p(haddr);
clear_helper_retaddr();
if (get_memop(oi) & MO_BSWAP) {
@@ -1199,7 +1309,7 @@ uint32_t cpu_ldl_code_mmu(CPUArchState *env, abi_ptr addr,
void *haddr;
uint32_t ret;
haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_INST_FETCH);
haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_INST_FETCH);
ret = ldl_p(haddr);
clear_helper_retaddr();
if (get_memop(oi) & MO_BSWAP) {
@@ -1214,7 +1324,7 @@ uint64_t cpu_ldq_code_mmu(CPUArchState *env, abi_ptr addr,
void *haddr;
uint64_t ret;
haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD);
ret = ldq_p(haddr);
clear_helper_retaddr();
if (get_memop(oi) & MO_BSWAP) {
@@ -1228,7 +1338,7 @@ uint64_t cpu_ldq_code_mmu(CPUArchState *env, abi_ptr addr,
/*
* Do not allow unaligned operations to proceed. Return the host address.
*/
static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi,
int size, uintptr_t retaddr)
{
MemOp mop = get_memop(oi);
@@ -1237,15 +1347,15 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
/* Enforce guest required alignment. */
if (unlikely(addr & ((1 << a_bits) - 1))) {
cpu_loop_exit_sigbus(cpu, addr, MMU_DATA_STORE, retaddr);
cpu_loop_exit_sigbus(env_cpu(env), addr, MMU_DATA_STORE, retaddr);
}
/* Enforce qemu required alignment. */
if (unlikely(addr & (size - 1))) {
cpu_loop_exit_atomic(cpu, retaddr);
cpu_loop_exit_atomic(env_cpu(env), retaddr);
}
ret = g2h(cpu, addr);
ret = g2h(env_cpu(env), addr);
set_helper_retaddr(retaddr);
return ret;
}
@@ -1275,7 +1385,7 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
#include "atomic_template.h"
#endif
#if defined(CONFIG_ATOMIC128) || HAVE_CMPXCHG128
#if defined(CONFIG_ATOMIC128) || defined(CONFIG_CMPXCHG128)
#define DATA_SIZE 16
#include "atomic_template.h"
#endif
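
The alignment tests in atomic_mmu_lookup() above use the standard power-of-two mask trick. A minimal standalone sketch of just that check (illustrative, not QEMU code):

    #include <stdbool.h>
    #include <stdint.h>

    /* An access of 2^a_bits bytes is misaligned exactly when any of the
     * low a_bits of the address are set. */
    static bool is_misaligned(uint64_t addr, unsigned a_bits)
    {
        return (addr & (((uint64_t)1 << a_bits) - 1)) != 0;
    }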


@@ -15,7 +15,6 @@
#include "hw/xen/xen_native.h"
#include "hw/xen/xen-legacy-backend.h"
#include "hw/xen/xen_pt.h"
#include "hw/xen/xen_igd.h"
#include "chardev/char.h"
#include "qemu/accel.h"
#include "sysemu/cpus.h"

audio/alsaaudio.c

@@ -904,7 +904,7 @@ static void alsa_init_per_direction(AudiodevAlsaPerDirectionOptions *apdo)
}
}
static void *alsa_audio_init(Audiodev *dev, Error **errp)
static void *alsa_audio_init(Audiodev *dev)
{
AudiodevAlsaOptions *aopts;
assert(dev->driver == AUDIODEV_DRIVER_ALSA);
@@ -960,6 +960,7 @@ static struct audio_driver alsa_audio_driver = {
.init = alsa_audio_init,
.fini = alsa_audio_fini,
.pcm_ops = &alsa_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (ALSAVoiceOut),

audio/audio-hmp-cmds.c

@@ -26,7 +26,6 @@
#include "audio/audio.h"
#include "monitor/hmp.h"
#include "monitor/monitor.h"
#include "qapi/error.h"
#include "qapi/qmp/qdict.h"
static QLIST_HEAD (capture_list_head, CaptureState) capture_head;
@@ -66,11 +65,10 @@ void hmp_wavcapture(Monitor *mon, const QDict *qdict)
int nchannels = qdict_get_try_int(qdict, "nchannels", 2);
const char *audiodev = qdict_get_str(qdict, "audiodev");
CaptureState *s;
Error *local_err = NULL;
AudioState *as = audio_state_by_name(audiodev, &local_err);
AudioState *as = audio_state_by_name(audiodev);
if (!as) {
error_report_err(local_err);
monitor_printf(mon, "Audiodev '%s' not found\n", audiodev);
return;
}
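
For reference, the ownership rule behind the upstream lines removed above, as a hedged sketch (assumes the QAPI error headers; the wrapper itself is illustrative): the callee fills an Error object, and error_report_err() both prints and frees it.

    #include "qapi/error.h"

    static AudioState *lookup_and_report(const char *audiodev)
    {
        Error *local_err = NULL;
        AudioState *as = audio_state_by_name(audiodev, &local_err);
        if (!as) {
            error_report_err(local_err); /* reports the message, then frees it */
        }
        return as;
    }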

audio/audio.c

@@ -32,9 +32,7 @@
#include "qapi/qobject-input-visitor.h"
#include "qapi/qapi-visit-audio.h"
#include "qapi/qapi-commands-audio.h"
#include "qapi/qmp/qdict.h"
#include "qemu/cutils.h"
#include "qemu/error-report.h"
#include "qemu/log.h"
#include "qemu/module.h"
#include "qemu/help_option.h"
@@ -63,22 +61,19 @@ const char *audio_prio_list[] = {
"spice",
CONFIG_AUDIO_DRIVERS
"none",
"wav",
NULL
};
static QLIST_HEAD(, audio_driver) audio_drivers;
static AudiodevListHead audiodevs =
QSIMPLEQ_HEAD_INITIALIZER(audiodevs);
static AudiodevListHead default_audiodevs =
QSIMPLEQ_HEAD_INITIALIZER(default_audiodevs);
static AudiodevListHead audiodevs = QSIMPLEQ_HEAD_INITIALIZER(audiodevs);
void audio_driver_register(audio_driver *drv)
{
QLIST_INSERT_HEAD(&audio_drivers, drv, next);
}
static audio_driver *audio_driver_lookup(const char *name)
audio_driver *audio_driver_lookup(const char *name)
{
struct audio_driver *d;
Error *local_err = NULL;
@@ -104,7 +99,6 @@ static audio_driver *audio_driver_lookup(const char *name)
static QTAILQ_HEAD(AudioStateHead, AudioState) audio_states =
QTAILQ_HEAD_INITIALIZER(audio_states);
static AudioState *default_audio_state;
const struct mixeng_volume nominal_volume = {
.mute = 0,
@@ -117,6 +111,8 @@ const struct mixeng_volume nominal_volume = {
#endif
};
static bool legacy_config = true;
int audio_bug (const char *funcname, int cond)
{
if (cond) {
@@ -1557,11 +1553,9 @@ size_t audio_generic_read(HWVoiceIn *hw, void *buf, size_t size)
}
static int audio_driver_init(AudioState *s, struct audio_driver *drv,
Audiodev *dev, Error **errp)
bool msg, Audiodev *dev)
{
Error *local_err = NULL;
s->drv_opaque = drv->init(dev, &local_err);
s->drv_opaque = drv->init(dev);
if (s->drv_opaque) {
if (!drv->pcm_ops->get_buffer_in) {
@@ -1573,15 +1567,13 @@ static int audio_driver_init(AudioState *s, struct audio_driver *drv,
drv->pcm_ops->put_buffer_out = audio_generic_put_buffer_out;
}
audio_init_nb_voices_out(s, drv, 1);
audio_init_nb_voices_in(s, drv, 0);
audio_init_nb_voices_out(s, drv);
audio_init_nb_voices_in(s, drv);
s->drv = drv;
return 0;
} else {
if (local_err) {
error_propagate(errp, local_err);
} else {
error_setg(errp, "Could not init `%s' audio driver", drv->name);
if (msg) {
dolog("Could not init `%s' audio driver\n", drv->name);
}
return -1;
}
@@ -1661,7 +1653,6 @@ static void free_audio_state(AudioState *s)
void audio_cleanup(void)
{
default_audio_state = NULL;
while (!QTAILQ_EMPTY(&audio_states)) {
AudioState *s = QTAILQ_FIRST(&audio_states);
QTAILQ_REMOVE(&audio_states, s, list);
@@ -1683,30 +1674,24 @@ static const VMStateDescription vmstate_audio = {
.version_id = 1,
.minimum_version_id = 1,
.needed = vmstate_audio_needed,
.fields = (const VMStateField[]) {
.fields = (VMStateField[]) {
VMSTATE_END_OF_LIST()
}
};
void audio_create_default_audiodevs(void)
static void audio_validate_opts(Audiodev *dev, Error **errp);
static AudiodevListEntry *audiodev_find(
AudiodevListHead *head, const char *drvname)
{
for (int i = 0; audio_prio_list[i]; i++) {
if (audio_driver_lookup(audio_prio_list[i])) {
QDict *dict = qdict_new();
Audiodev *dev = NULL;
Visitor *v;
qdict_put_str(dict, "driver", audio_prio_list[i]);
qdict_put_str(dict, "id", "#default");
v = qobject_input_visitor_new_keyval(QOBJECT(dict));
qobject_unref(dict);
visit_type_Audiodev(v, NULL, &dev, &error_fatal);
visit_free(v);
audio_define_default(dev, &error_abort);
AudiodevListEntry *e;
QSIMPLEQ_FOREACH(e, head, next) {
if (strcmp(AudiodevDriver_str(e->dev->driver), drvname) == 0) {
return e;
}
}
return NULL;
}
/*
@@ -1715,16 +1700,62 @@ void audio_create_default_audiodevs(void)
* if dev == NULL => legacy implicit initialization, return the already created
* state or create a new one
*/
static AudioState *audio_init(Audiodev *dev, Error **errp)
static AudioState *audio_init(Audiodev *dev, const char *name)
{
static bool atexit_registered;
size_t i;
int done = 0;
const char *drvname;
VMChangeStateEntry *vmse;
const char *drvname = NULL;
VMChangeStateEntry *e;
AudioState *s;
struct audio_driver *driver;
/* silence gcc warning about uninitialized variable */
AudiodevListHead head = QSIMPLEQ_HEAD_INITIALIZER(head);
if (using_spice) {
/*
* When using spice allow the spice audio driver being picked
* as default.
*
* Temporary hack. Using audio devices without explicit
* audiodev= property is already deprecated. Same goes for
* the -soundhw switch. Once this support gets finally
* removed we can also drop the concept of a default audio
* backend and this can go away.
*/
driver = audio_driver_lookup("spice");
if (driver) {
driver->can_be_default = 1;
}
}
if (dev) {
/* -audiodev option */
legacy_config = false;
drvname = AudiodevDriver_str(dev->driver);
} else if (!QTAILQ_EMPTY(&audio_states)) {
if (!legacy_config) {
dolog("Device %s: audiodev default parameter is deprecated, please "
"specify audiodev=%s\n", name,
QTAILQ_FIRST(&audio_states)->dev->id);
}
return QTAILQ_FIRST(&audio_states);
} else {
/* legacy implicit initialization */
head = audio_handle_legacy_opts();
/*
* In case of legacy initialization, all Audiodevs in the list will have
* the same configuration (except the driver), so it doesn't matter which
* one we chose. We need an Audiodev to set up AudioState before we can
* init a driver. Also note that dev at this point is still in the
* list.
*/
dev = QSIMPLEQ_FIRST(&head)->dev;
audio_validate_opts(dev, &error_abort);
}
s = g_new0(AudioState, 1);
s->dev = dev;
QLIST_INIT (&s->hw_head_out);
QLIST_INIT (&s->hw_head_in);
@@ -1736,39 +1767,56 @@ static AudioState *audio_init(Audiodev *dev, Error **errp)
s->ts = timer_new_ns(QEMU_CLOCK_VIRTUAL, audio_timer, s);
if (dev) {
/* -audiodev option */
s->dev = dev;
drvname = AudiodevDriver_str(dev->driver);
s->nb_hw_voices_out = audio_get_pdo_out(dev)->voices;
s->nb_hw_voices_in = audio_get_pdo_in(dev)->voices;
if (s->nb_hw_voices_out < 1) {
dolog ("Bogus number of playback voices %d, setting to 1\n",
s->nb_hw_voices_out);
s->nb_hw_voices_out = 1;
}
if (s->nb_hw_voices_in < 0) {
dolog ("Bogus number of capture voices %d, setting to 0\n",
s->nb_hw_voices_in);
s->nb_hw_voices_in = 0;
}
if (drvname) {
driver = audio_driver_lookup(drvname);
if (driver) {
done = !audio_driver_init(s, driver, dev, errp);
done = !audio_driver_init(s, driver, true, dev);
} else {
error_setg(errp, "Unknown audio driver `%s'", drvname);
dolog ("Unknown audio driver `%s'\n", drvname);
}
if (!done) {
goto out;
free_audio_state(s);
return NULL;
}
} else {
assert(!default_audio_state);
for (;;) {
AudiodevListEntry *e = QSIMPLEQ_FIRST(&default_audiodevs);
if (!e) {
error_setg(errp, "no default audio driver available");
goto out;
for (i = 0; audio_prio_list[i]; i++) {
AudiodevListEntry *e = audiodev_find(&head, audio_prio_list[i]);
driver = audio_driver_lookup(audio_prio_list[i]);
if (e && driver) {
s->dev = dev = e->dev;
audio_validate_opts(dev, &error_abort);
done = !audio_driver_init(s, driver, false, dev);
if (done) {
e->dev = NULL;
break;
}
}
s->dev = dev = e->dev;
QSIMPLEQ_REMOVE_HEAD(&default_audiodevs, next);
g_free(e);
drvname = AudiodevDriver_str(dev->driver);
driver = audio_driver_lookup(drvname);
if (!audio_driver_init(s, driver, dev, NULL)) {
break;
}
qapi_free_Audiodev(dev);
s->dev = NULL;
}
}
audio_free_audiodev_list(&head);
if (!done) {
driver = audio_driver_lookup("none");
done = !audio_driver_init(s, driver, false, dev);
assert(done);
dolog("warning: Using timer based audio emulation\n");
}
if (dev->timer_period <= 0) {
s->period_ticks = 1;
@@ -1776,51 +1824,37 @@ static AudioState *audio_init(Audiodev *dev, Error **errp)
s->period_ticks = dev->timer_period * (int64_t)SCALE_US;
}
vmse = qemu_add_vm_change_state_handler (audio_vm_change_state_handler, s);
if (!vmse) {
e = qemu_add_vm_change_state_handler (audio_vm_change_state_handler, s);
if (!e) {
dolog ("warning: Could not register change state handler\n"
"(Audio can continue looping even after stopping the VM)\n");
}
QTAILQ_INSERT_TAIL(&audio_states, s, list);
QLIST_INIT (&s->card_head);
vmstate_register_any(NULL, &vmstate_audio, s);
vmstate_register (NULL, 0, &vmstate_audio, s);
return s;
out:
free_audio_state(s);
return NULL;
}
AudioState *audio_get_default_audio_state(Error **errp)
void audio_free_audiodev_list(AudiodevListHead *head)
{
if (!default_audio_state) {
default_audio_state = audio_init(NULL, errp);
if (!default_audio_state) {
if (!QSIMPLEQ_EMPTY(&audiodevs)) {
error_append_hint(errp, "Perhaps you wanted to use -audio or set audiodev=%s?\n",
QSIMPLEQ_FIRST(&audiodevs)->dev->id);
}
}
AudiodevListEntry *e;
while ((e = QSIMPLEQ_FIRST(head))) {
QSIMPLEQ_REMOVE_HEAD(head, next);
qapi_free_Audiodev(e->dev);
g_free(e);
}
return default_audio_state;
}
bool AUD_register_card (const char *name, QEMUSoundCard *card, Error **errp)
void AUD_register_card (const char *name, QEMUSoundCard *card)
{
if (!card->state) {
card->state = audio_get_default_audio_state(errp);
if (!card->state) {
return false;
}
card->state = audio_init(NULL, name);
}
card->name = g_strdup (name);
memset (&card->entries, 0, sizeof (card->entries));
QLIST_INSERT_HEAD(&card->state->card_head, card, entries);
return true;
}
void AUD_remove_card (QEMUSoundCard *card)
@@ -1842,8 +1876,10 @@ CaptureVoiceOut *AUD_add_capture(
struct capture_callback *cb;
if (!s) {
error_report("Capturing without setting an audiodev is not supported");
abort();
if (!legacy_config) {
dolog("Capturing without setting an audiodev is deprecated\n");
}
s = audio_init(NULL, NULL);
}
if (!audio_get_pdo_out(s->dev)->mixing_engine) {
@@ -1864,8 +1900,10 @@ CaptureVoiceOut *AUD_add_capture(
cap = audio_pcm_capture_find_specific(s, as);
if (cap) {
QLIST_INSERT_HEAD (&cap->cb_head, cb, entries);
return cap;
} else {
HWVoiceOut *hw;
CaptureVoiceOut *cap;
cap = g_malloc0(sizeof(*cap));
@@ -1899,9 +1937,8 @@ CaptureVoiceOut *AUD_add_capture(
QLIST_FOREACH(hw, &s->hw_head_out, entries) {
audio_attach_capture (hw);
}
return cap;
}
return cap;
}
void AUD_del_capture (CaptureVoiceOut *cap, void *cb_opaque)
@@ -2147,24 +2184,17 @@ void audio_define(Audiodev *dev)
QSIMPLEQ_INSERT_TAIL(&audiodevs, e, next);
}
void audio_define_default(Audiodev *dev, Error **errp)
{
AudiodevListEntry *e;
audio_validate_opts(dev, errp);
e = g_new0(AudiodevListEntry, 1);
e->dev = dev;
QSIMPLEQ_INSERT_TAIL(&default_audiodevs, e, next);
}
void audio_init_audiodevs(void)
bool audio_init_audiodevs(void)
{
AudiodevListEntry *e;
QSIMPLEQ_FOREACH(e, &audiodevs, next) {
audio_init(e->dev, &error_fatal);
if (!audio_init(e->dev, NULL)) {
return false;
}
}
return true;
}
audsettings audiodev_to_audsettings(AudiodevPerDirectionOptions *pdo)
@@ -2226,7 +2256,7 @@ int audio_buffer_bytes(AudiodevPerDirectionOptions *pdo,
audioformat_bytes_per_sample(as->fmt);
}
AudioState *audio_state_by_name(const char *name, Error **errp)
AudioState *audio_state_by_name(const char *name)
{
AudioState *s;
QTAILQ_FOREACH(s, &audio_states, list) {
@@ -2235,7 +2265,6 @@ AudioState *audio_state_by_name(const char *name, Error **errp)
return s;
}
}
error_setg(errp, "audiodev '%s' not found", name);
return NULL;
}
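
The branch-side selection above walks audio_prio_list in order and finally falls back to the "none" driver, which must always initialize. The control flow in isolation (hypothetical helper names, not the QEMU API):

    /* Try each driver name in priority order; return the first that inits. */
    static const char *pick_default_driver(const char *prio[],
                                           bool (*try_init)(const char *name))
    {
        for (int i = 0; prio[i]; i++) {
            if (try_init(prio[i])) {
                return prio[i];
            }
        }
        return "none"; /* the dummy backend is assumed to never fail */
    }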

audio/audio.h

@@ -94,7 +94,7 @@ typedef struct QEMUAudioTimeStamp {
void AUD_vlog (const char *cap, const char *fmt, va_list ap) G_GNUC_PRINTF(2, 0);
void AUD_log (const char *cap, const char *fmt, ...) G_GNUC_PRINTF(2, 3);
bool AUD_register_card (const char *name, QEMUSoundCard *card, Error **errp);
void AUD_register_card (const char *name, QEMUSoundCard *card);
void AUD_remove_card (QEMUSoundCard *card);
CaptureVoiceOut *AUD_add_capture(
AudioState *s,
@@ -169,14 +169,12 @@ void audio_sample_from_uint64(void *samples, int pos,
uint64_t left, uint64_t right);
void audio_define(Audiodev *audio);
void audio_define_default(Audiodev *dev, Error **errp);
void audio_parse_option(const char *opt);
void audio_create_default_audiodevs(void);
void audio_init_audiodevs(void);
bool audio_init_audiodevs(void);
void audio_help(void);
void audio_legacy_help(void);
AudioState *audio_state_by_name(const char *name, Error **errp);
AudioState *audio_get_default_audio_state(Error **errp);
AudioState *audio_state_by_name(const char *name);
const char *audio_get_id(QEMUSoundCard *card);
#define DEFINE_AUDIO_PROPERTIES(_s, _f) \

audio/audio_int.h

@@ -140,12 +140,13 @@ typedef struct audio_driver audio_driver;
struct audio_driver {
const char *name;
const char *descr;
void *(*init) (Audiodev *, Error **);
void *(*init) (Audiodev *);
void (*fini) (void *);
#ifdef CONFIG_GIO
void (*set_dbus_server) (AudioState *s, GDBusObjectManagerServer *manager, bool p2p);
#endif
struct audio_pcm_ops *pcm_ops;
int can_be_default;
int max_voices_out;
int max_voices_in;
size_t voice_size_out;
@@ -242,6 +243,7 @@ extern const struct mixeng_volume nominal_volume;
extern const char *audio_prio_list[];
void audio_driver_register(audio_driver *drv);
audio_driver *audio_driver_lookup(const char *name);
void audio_pcm_init_info (struct audio_pcm_info *info, struct audsettings *as);
void audio_pcm_info_clear_buf (struct audio_pcm_info *info, void *buf, int len);
@@ -295,6 +297,9 @@ typedef struct AudiodevListEntry {
} AudiodevListEntry;
typedef QSIMPLEQ_HEAD(, AudiodevListEntry) AudiodevListHead;
AudiodevListHead audio_handle_legacy_opts(void);
void audio_free_audiodev_list(AudiodevListHead *head);
void audio_create_pdos(Audiodev *dev);
AudiodevPerDirectionOptions *audio_get_pdo_in(Audiodev *dev);

audio/audio_legacy.c (new file, 591 lines)

@@ -0,0 +1,591 @@
/*
* QEMU Audio subsystem: legacy configuration handling
*
* Copyright (c) 2015-2019 Zoltán Kővágó <DirtY.iCE.hu@gmail.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "audio.h"
#include "audio_int.h"
#include "qemu/cutils.h"
#include "qemu/timer.h"
#include "qapi/error.h"
#include "qapi/qapi-visit-audio.h"
#include "qapi/visitor-impl.h"
#define AUDIO_CAP "audio-legacy"
#include "audio_int.h"
static uint32_t toui32(const char *str)
{
uint64_t ret;
if (parse_uint_full(str, 10, &ret) || ret > UINT32_MAX) {
dolog("Invalid integer value `%s'\n", str);
exit(1);
}
return ret;
}
/* helper functions to convert env variables */
static void get_bool(const char *env, bool *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
*dst = toui32(val) != 0;
*has_dst = true;
}
}
static void get_int(const char *env, uint32_t *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
*dst = toui32(val);
*has_dst = true;
}
}
static void get_str(const char *env, char **dst)
{
const char *val = getenv(env);
if (val) {
g_free(*dst);
*dst = g_strdup(val);
}
}
static void get_fmt(const char *env, AudioFormat *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
size_t i;
for (i = 0; i < AudioFormat_lookup.size; ++i) {
if (strcasecmp(val, AudioFormat_lookup.array[i]) == 0) {
*dst = i;
*has_dst = true;
return;
}
}
dolog("Invalid audio format `%s'\n", val);
exit(1);
}
}
#if defined(CONFIG_AUDIO_ALSA) || defined(CONFIG_AUDIO_DSOUND)
static void get_millis_to_usecs(const char *env, uint32_t *dst, bool *has_dst)
{
const char *val = getenv(env);
if (val) {
*dst = toui32(val) * 1000;
*has_dst = true;
}
}
#endif
#if defined(CONFIG_AUDIO_ALSA) || defined(CONFIG_AUDIO_COREAUDIO) || \
defined(CONFIG_AUDIO_PA) || defined(CONFIG_AUDIO_SDL) || \
defined(CONFIG_AUDIO_DSOUND) || defined(CONFIG_AUDIO_OSS)
static uint32_t frames_to_usecs(uint32_t frames,
AudiodevPerDirectionOptions *pdo)
{
uint32_t freq = pdo->has_frequency ? pdo->frequency : 44100;
return (frames * 1000000 + freq / 2) / freq;
}
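/*
 * Worked example (illustrative): 512 frames at the 44100 Hz default give
 * (512 * 1000000 + 22050) / 44100 = 11610 usec; the "+ freq / 2" term
 * rounds to the nearest microsecond instead of truncating.
 */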
#endif
#ifdef CONFIG_AUDIO_COREAUDIO
static void get_frames_to_usecs(const char *env, uint32_t *dst, bool *has_dst,
AudiodevPerDirectionOptions *pdo)
{
const char *val = getenv(env);
if (val) {
*dst = frames_to_usecs(toui32(val), pdo);
*has_dst = true;
}
}
#endif
#if defined(CONFIG_AUDIO_PA) || defined(CONFIG_AUDIO_SDL) || \
defined(CONFIG_AUDIO_DSOUND) || defined(CONFIG_AUDIO_OSS)
static uint32_t samples_to_usecs(uint32_t samples,
AudiodevPerDirectionOptions *pdo)
{
uint32_t channels = pdo->has_channels ? pdo->channels : 2;
return frames_to_usecs(samples / channels, pdo);
}
#endif
#if defined(CONFIG_AUDIO_PA) || defined(CONFIG_AUDIO_SDL)
static void get_samples_to_usecs(const char *env, uint32_t *dst, bool *has_dst,
AudiodevPerDirectionOptions *pdo)
{
const char *val = getenv(env);
if (val) {
*dst = samples_to_usecs(toui32(val), pdo);
*has_dst = true;
}
}
#endif
#if defined(CONFIG_AUDIO_DSOUND) || defined(CONFIG_AUDIO_OSS)
static uint32_t bytes_to_usecs(uint32_t bytes, AudiodevPerDirectionOptions *pdo)
{
AudioFormat fmt = pdo->has_format ? pdo->format : AUDIO_FORMAT_S16;
uint32_t bytes_per_sample = audioformat_bytes_per_sample(fmt);
return samples_to_usecs(bytes / bytes_per_sample, pdo);
}
static void get_bytes_to_usecs(const char *env, uint32_t *dst, bool *has_dst,
AudiodevPerDirectionOptions *pdo)
{
const char *val = getenv(env);
if (val) {
*dst = bytes_to_usecs(toui32(val), pdo);
*has_dst = true;
}
}
#endif
/* backend specific functions */
#ifdef CONFIG_AUDIO_ALSA
/* ALSA */
static void handle_alsa_per_direction(
AudiodevAlsaPerDirectionOptions *apdo, const char *prefix)
{
char buf[64];
size_t len = strlen(prefix);
bool size_in_usecs = false;
bool dummy;
memcpy(buf, prefix, len);
strcpy(buf + len, "TRY_POLL");
get_bool(buf, &apdo->try_poll, &apdo->has_try_poll);
strcpy(buf + len, "DEV");
get_str(buf, &apdo->dev);
strcpy(buf + len, "SIZE_IN_USEC");
get_bool(buf, &size_in_usecs, &dummy);
strcpy(buf + len, "PERIOD_SIZE");
get_int(buf, &apdo->period_length, &apdo->has_period_length);
if (apdo->has_period_length && !size_in_usecs) {
apdo->period_length = frames_to_usecs(
apdo->period_length,
qapi_AudiodevAlsaPerDirectionOptions_base(apdo));
}
strcpy(buf + len, "BUFFER_SIZE");
get_int(buf, &apdo->buffer_length, &apdo->has_buffer_length);
if (apdo->has_buffer_length && !size_in_usecs) {
apdo->buffer_length = frames_to_usecs(
apdo->buffer_length,
qapi_AudiodevAlsaPerDirectionOptions_base(apdo));
}
}
static void handle_alsa(Audiodev *dev)
{
AudiodevAlsaOptions *aopt = &dev->u.alsa;
handle_alsa_per_direction(aopt->in, "QEMU_ALSA_ADC_");
handle_alsa_per_direction(aopt->out, "QEMU_ALSA_DAC_");
get_millis_to_usecs("QEMU_ALSA_THRESHOLD",
&aopt->threshold, &aopt->has_threshold);
}
#endif
#ifdef CONFIG_AUDIO_COREAUDIO
/* coreaudio */
static void handle_coreaudio(Audiodev *dev)
{
get_frames_to_usecs(
"QEMU_COREAUDIO_BUFFER_SIZE",
&dev->u.coreaudio.out->buffer_length,
&dev->u.coreaudio.out->has_buffer_length,
qapi_AudiodevCoreaudioPerDirectionOptions_base(dev->u.coreaudio.out));
get_int("QEMU_COREAUDIO_BUFFER_COUNT",
&dev->u.coreaudio.out->buffer_count,
&dev->u.coreaudio.out->has_buffer_count);
}
#endif
#ifdef CONFIG_AUDIO_DSOUND
/* dsound */
static void handle_dsound(Audiodev *dev)
{
get_millis_to_usecs("QEMU_DSOUND_LATENCY_MILLIS",
&dev->u.dsound.latency, &dev->u.dsound.has_latency);
get_bytes_to_usecs("QEMU_DSOUND_BUFSIZE_OUT",
&dev->u.dsound.out->buffer_length,
&dev->u.dsound.out->has_buffer_length,
dev->u.dsound.out);
get_bytes_to_usecs("QEMU_DSOUND_BUFSIZE_IN",
&dev->u.dsound.in->buffer_length,
&dev->u.dsound.in->has_buffer_length,
dev->u.dsound.in);
}
#endif
#ifdef CONFIG_AUDIO_OSS
/* OSS */
static void handle_oss_per_direction(
AudiodevOssPerDirectionOptions *opdo, const char *try_poll_env,
const char *dev_env)
{
get_bool(try_poll_env, &opdo->try_poll, &opdo->has_try_poll);
get_str(dev_env, &opdo->dev);
get_bytes_to_usecs("QEMU_OSS_FRAGSIZE",
&opdo->buffer_length, &opdo->has_buffer_length,
qapi_AudiodevOssPerDirectionOptions_base(opdo));
get_int("QEMU_OSS_NFRAGS", &opdo->buffer_count,
&opdo->has_buffer_count);
}
static void handle_oss(Audiodev *dev)
{
AudiodevOssOptions *oopt = &dev->u.oss;
handle_oss_per_direction(oopt->in, "QEMU_AUDIO_ADC_TRY_POLL",
"QEMU_OSS_ADC_DEV");
handle_oss_per_direction(oopt->out, "QEMU_AUDIO_DAC_TRY_POLL",
"QEMU_OSS_DAC_DEV");
get_bool("QEMU_OSS_MMAP", &oopt->try_mmap, &oopt->has_try_mmap);
get_bool("QEMU_OSS_EXCLUSIVE", &oopt->exclusive, &oopt->has_exclusive);
get_int("QEMU_OSS_POLICY", &oopt->dsp_policy, &oopt->has_dsp_policy);
}
#endif
#ifdef CONFIG_AUDIO_PA
/* pulseaudio */
static void handle_pa_per_direction(
AudiodevPaPerDirectionOptions *ppdo, const char *env)
{
get_str(env, &ppdo->name);
}
static void handle_pa(Audiodev *dev)
{
handle_pa_per_direction(dev->u.pa.in, "QEMU_PA_SOURCE");
handle_pa_per_direction(dev->u.pa.out, "QEMU_PA_SINK");
get_samples_to_usecs(
"QEMU_PA_SAMPLES", &dev->u.pa.in->buffer_length,
&dev->u.pa.in->has_buffer_length,
qapi_AudiodevPaPerDirectionOptions_base(dev->u.pa.in));
get_samples_to_usecs(
"QEMU_PA_SAMPLES", &dev->u.pa.out->buffer_length,
&dev->u.pa.out->has_buffer_length,
qapi_AudiodevPaPerDirectionOptions_base(dev->u.pa.out));
get_str("QEMU_PA_SERVER", &dev->u.pa.server);
}
#endif
#ifdef CONFIG_AUDIO_SDL
/* SDL */
static void handle_sdl(Audiodev *dev)
{
/* SDL is output only */
get_samples_to_usecs("QEMU_SDL_SAMPLES", &dev->u.sdl.out->buffer_length,
&dev->u.sdl.out->has_buffer_length,
qapi_AudiodevSdlPerDirectionOptions_base(dev->u.sdl.out));
}
#endif
/* wav */
static void handle_wav(Audiodev *dev)
{
get_int("QEMU_WAV_FREQUENCY",
&dev->u.wav.out->frequency, &dev->u.wav.out->has_frequency);
get_fmt("QEMU_WAV_FORMAT", &dev->u.wav.out->format,
&dev->u.wav.out->has_format);
get_int("QEMU_WAV_DAC_FIXED_CHANNELS",
&dev->u.wav.out->channels, &dev->u.wav.out->has_channels);
get_str("QEMU_WAV_PATH", &dev->u.wav.path);
}
/* general */
static void handle_per_direction(
AudiodevPerDirectionOptions *pdo, const char *prefix)
{
char buf[64];
size_t len = strlen(prefix);
memcpy(buf, prefix, len);
strcpy(buf + len, "FIXED_SETTINGS");
get_bool(buf, &pdo->fixed_settings, &pdo->has_fixed_settings);
strcpy(buf + len, "FIXED_FREQ");
get_int(buf, &pdo->frequency, &pdo->has_frequency);
strcpy(buf + len, "FIXED_FMT");
get_fmt(buf, &pdo->format, &pdo->has_format);
strcpy(buf + len, "FIXED_CHANNELS");
get_int(buf, &pdo->channels, &pdo->has_channels);
strcpy(buf + len, "VOICES");
get_int(buf, &pdo->voices, &pdo->has_voices);
}
static AudiodevListEntry *legacy_opt(const char *drvname)
{
AudiodevListEntry *e = g_new0(AudiodevListEntry, 1);
e->dev = g_new0(Audiodev, 1);
e->dev->id = g_strdup(drvname);
e->dev->driver = qapi_enum_parse(
&AudiodevDriver_lookup, drvname, -1, &error_abort);
audio_create_pdos(e->dev);
handle_per_direction(audio_get_pdo_in(e->dev), "QEMU_AUDIO_ADC_");
handle_per_direction(audio_get_pdo_out(e->dev), "QEMU_AUDIO_DAC_");
/* Original description: Timer period in HZ (0 - use lowest possible) */
get_int("QEMU_AUDIO_TIMER_PERIOD",
&e->dev->timer_period, &e->dev->has_timer_period);
if (e->dev->has_timer_period && e->dev->timer_period) {
e->dev->timer_period = NANOSECONDS_PER_SECOND / 1000 /
e->dev->timer_period;
}
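/*
 * Example (illustrative): QEMU_AUDIO_TIMER_PERIOD=250 (a rate in Hz)
 * becomes 1000000000 / 1000 / 250 = 4000, i.e. a period of 4000 usec.
 */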
switch (e->dev->driver) {
#ifdef CONFIG_AUDIO_ALSA
case AUDIODEV_DRIVER_ALSA:
handle_alsa(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_COREAUDIO
case AUDIODEV_DRIVER_COREAUDIO:
handle_coreaudio(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_DSOUND
case AUDIODEV_DRIVER_DSOUND:
handle_dsound(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_OSS
case AUDIODEV_DRIVER_OSS:
handle_oss(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_PA
case AUDIODEV_DRIVER_PA:
handle_pa(e->dev);
break;
#endif
#ifdef CONFIG_AUDIO_SDL
case AUDIODEV_DRIVER_SDL:
handle_sdl(e->dev);
break;
#endif
case AUDIODEV_DRIVER_WAV:
handle_wav(e->dev);
break;
default:
break;
}
return e;
}
AudiodevListHead audio_handle_legacy_opts(void)
{
const char *drvname = getenv("QEMU_AUDIO_DRV");
AudiodevListHead head = QSIMPLEQ_HEAD_INITIALIZER(head);
if (drvname) {
AudiodevListEntry *e;
audio_driver *driver = audio_driver_lookup(drvname);
if (!driver) {
dolog("Unknown audio driver `%s'\n", drvname);
exit(1);
}
e = legacy_opt(drvname);
QSIMPLEQ_INSERT_TAIL(&head, e, next);
} else {
for (int i = 0; audio_prio_list[i]; i++) {
audio_driver *driver = audio_driver_lookup(audio_prio_list[i]);
if (driver && driver->can_be_default) {
AudiodevListEntry *e = legacy_opt(driver->name);
QSIMPLEQ_INSERT_TAIL(&head, e, next);
}
}
if (QSIMPLEQ_EMPTY(&head)) {
dolog("Internal error: no default audio driver available\n");
exit(1);
}
}
return head;
}
/* visitor to print -audiodev option */
typedef struct {
Visitor visitor;
bool comma;
GList *path;
} LegacyPrintVisitor;
static bool lv_start_struct(Visitor *v, const char *name, void **obj,
size_t size, Error **errp)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
lv->path = g_list_append(lv->path, g_strdup(name));
return true;
}
static void lv_end_struct(Visitor *v, void **obj)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
lv->path = g_list_delete_link(lv->path, g_list_last(lv->path));
}
static void lv_print_key(Visitor *v, const char *name)
{
GList *e;
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
if (lv->comma) {
putchar(',');
} else {
lv->comma = true;
}
for (e = lv->path; e; e = e->next) {
if (e->data) {
printf("%s.", (const char *) e->data);
}
}
printf("%s=", name);
}
static bool lv_type_int64(Visitor *v, const char *name, int64_t *obj,
Error **errp)
{
lv_print_key(v, name);
printf("%" PRIi64, *obj);
return true;
}
static bool lv_type_uint64(Visitor *v, const char *name, uint64_t *obj,
Error **errp)
{
lv_print_key(v, name);
printf("%" PRIu64, *obj);
return true;
}
static bool lv_type_bool(Visitor *v, const char *name, bool *obj, Error **errp)
{
lv_print_key(v, name);
printf("%s", *obj ? "on" : "off");
return true;
}
static bool lv_type_str(Visitor *v, const char *name, char **obj, Error **errp)
{
const char *str = *obj;
lv_print_key(v, name);
while (*str) {
if (*str == ',') {
putchar(',');
}
putchar(*str++);
}
return true;
}
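/*
 * Example (illustrative): the value "foo,bar" prints as "foo,,bar",
 * since QemuOpts-style option strings escape a literal comma by
 * doubling it.
 */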
static void lv_complete(Visitor *v, void *opaque)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
assert(lv->path == NULL);
}
static void lv_free(Visitor *v)
{
LegacyPrintVisitor *lv = (LegacyPrintVisitor *) v;
g_list_free_full(lv->path, g_free);
g_free(lv);
}
static Visitor *legacy_visitor_new(void)
{
LegacyPrintVisitor *lv = g_new0(LegacyPrintVisitor, 1);
lv->visitor.start_struct = lv_start_struct;
lv->visitor.end_struct = lv_end_struct;
/* lists not supported */
lv->visitor.type_int64 = lv_type_int64;
lv->visitor.type_uint64 = lv_type_uint64;
lv->visitor.type_bool = lv_type_bool;
lv->visitor.type_str = lv_type_str;
lv->visitor.type = VISITOR_OUTPUT;
lv->visitor.complete = lv_complete;
lv->visitor.free = lv_free;
return &lv->visitor;
}
void audio_legacy_help(void)
{
AudiodevListHead head;
AudiodevListEntry *e;
printf("Environment variable based configuration deprecated.\n");
printf("Please use the new -audiodev option.\n");
head = audio_handle_legacy_opts();
printf("\nEquivalent -audiodev to your current environment variables:\n");
if (!getenv("QEMU_AUDIO_DRV")) {
printf("(Since you didn't specify QEMU_AUDIO_DRV, I'll list all "
"possibilities)\n");
}
QSIMPLEQ_FOREACH(e, &head, next) {
Visitor *v;
Audiodev *dev = e->dev;
printf("-audiodev ");
v = legacy_visitor_new();
visit_type_Audiodev(v, NULL, &dev, &error_abort);
visit_free(v);
printf("\n");
}
audio_free_audiodev_list(&head);
}

audio/audio_template.h

@@ -37,12 +37,11 @@
#endif
static void glue(audio_init_nb_voices_, TYPE)(AudioState *s,
struct audio_driver *drv, int min_voices)
struct audio_driver *drv)
{
int max_voices = glue (drv->max_voices_, TYPE);
size_t voice_size = glue(drv->voice_size_, TYPE);
glue (s->nb_hw_voices_, TYPE) = glue(audio_get_pdo_, TYPE)(s->dev)->voices;
if (glue (s->nb_hw_voices_, TYPE) > max_voices) {
if (!max_voices) {
#ifdef DAC
@@ -57,12 +56,6 @@ static void glue(audio_init_nb_voices_, TYPE)(AudioState *s,
glue (s->nb_hw_voices_, TYPE) = max_voices;
}
if (glue (s->nb_hw_voices_, TYPE) < min_voices) {
dolog ("Bogus number of " NAME " voices %d, setting to %d\n",
glue (s->nb_hw_voices_, TYPE),
min_voices);
}
if (audio_bug(__func__, !voice_size && max_voices)) {
dolog ("drv=`%s' voice_size=0 max_voices=%d\n",
drv->name, max_voices);

audio/coreaudio.c

@@ -299,7 +299,7 @@ COREAUDIO_WRAPPER_FUNC(write, size_t, (HWVoiceOut *hw, void *buf, size_t size),
#undef COREAUDIO_WRAPPER_FUNC
/*
* callback to feed audiooutput buffer. called without BQL.
* callback to feed audiooutput buffer. called without iothread lock.
* allowed to lock "buf_mutex", but disallowed to have any other locks.
*/
static OSStatus audioDeviceIOProc(
@@ -538,7 +538,7 @@ static void update_device_playback_state(coreaudioVoiceOut *core)
}
}
/* called without BQL. */
/* called without iothread lock. */
static OSStatus handle_voice_change(
AudioObjectID in_object_id,
UInt32 in_number_addresses,
@@ -547,7 +547,7 @@ static OSStatus handle_voice_change(
{
coreaudioVoiceOut *core = in_client_data;
bql_lock();
qemu_mutex_lock_iothread();
if (core->outputDeviceID) {
fini_out_device(core);
@@ -557,7 +557,7 @@ static OSStatus handle_voice_change(
update_device_playback_state(core);
}
bql_unlock();
qemu_mutex_unlock_iothread();
return 0;
}
@@ -644,7 +644,7 @@ static void coreaudio_enable_out(HWVoiceOut *hw, bool enable)
update_device_playback_state(core);
}
static void *coreaudio_audio_init(Audiodev *dev, Error **errp)
static void *coreaudio_audio_init(Audiodev *dev)
{
return dev;
}
@@ -673,6 +673,7 @@ static struct audio_driver coreaudio_audio_driver = {
.init = coreaudio_audio_init,
.fini = coreaudio_audio_fini,
.pcm_ops = &coreaudio_pcm_ops,
.can_be_default = 1,
.max_voices_out = 1,
.max_voices_in = 0,
.voice_size_out = sizeof (coreaudioVoiceOut),

audio/dbusaudio.c

@@ -395,7 +395,7 @@ dbus_enable_in(HWVoiceIn *hw, bool enable)
}
static void *
dbus_audio_init(Audiodev *dev, Error **errp)
dbus_audio_init(Audiodev *dev)
{
DBusAudio *da = g_new0(DBusAudio, 1);
@@ -676,6 +676,7 @@ static struct audio_driver dbus_audio_driver = {
.fini = dbus_audio_fini,
.set_dbus_server = dbus_audio_set_server,
.pcm_ops = &dbus_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof(DBusVoiceOut),

audio/dsoundaudio.c

@@ -619,7 +619,7 @@ static void dsound_audio_fini (void *opaque)
g_free(s);
}
static void *dsound_audio_init(Audiodev *dev, Error **errp)
static void *dsound_audio_init(Audiodev *dev)
{
int err;
HRESULT hr;
@@ -721,6 +721,7 @@ static struct audio_driver dsound_audio_driver = {
.init = dsound_audio_init,
.fini = dsound_audio_fini,
.pcm_ops = &dsound_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = 1,
.voice_size_out = sizeof (DSoundVoiceOut),

audio/jackaudio.c

@@ -70,9 +70,6 @@ typedef struct QJackClient {
int buffersize;
jack_port_t **port;
QJackBuffer fifo;
/* Used as workspace by qjack_process() */
float **process_buffers;
}
QJackClient;
@@ -270,21 +267,22 @@ static int qjack_process(jack_nframes_t nframes, void *arg)
}
/* get the buffers for the ports */
float *buffers[c->nchannels];
for (int i = 0; i < c->nchannels; ++i) {
c->process_buffers[i] = jack_port_get_buffer(c->port[i], nframes);
buffers[i] = jack_port_get_buffer(c->port[i], nframes);
}
if (c->out) {
if (likely(c->enabled)) {
qjack_buffer_read_l(&c->fifo, c->process_buffers, nframes);
qjack_buffer_read_l(&c->fifo, buffers, nframes);
} else {
for (int i = 0; i < c->nchannels; ++i) {
memset(c->process_buffers[i], 0, nframes * sizeof(float));
memset(buffers[i], 0, nframes * sizeof(float));
}
}
} else {
if (likely(c->enabled)) {
qjack_buffer_write_l(&c->fifo, c->process_buffers, nframes);
qjack_buffer_write_l(&c->fifo, buffers, nframes);
}
}
@@ -402,8 +400,7 @@ static void qjack_client_connect_ports(QJackClient *c)
static int qjack_client_init(QJackClient *c)
{
jack_status_t status;
int client_name_len = jack_client_name_size(); /* includes NUL */
g_autofree char *client_name = g_new(char, client_name_len);
char client_name[jack_client_name_size()];
jack_options_t options = JackNullOption;
if (c->state == QJACK_STATE_RUNNING) {
@@ -412,7 +409,7 @@ static int qjack_client_init(QJackClient *c)
c->connect_ports = true;
snprintf(client_name, client_name_len, "%s-%s",
snprintf(client_name, sizeof(client_name), "%s-%s",
c->out ? "out" : "in",
c->opt->client_name ? c->opt->client_name : audio_application_name());
@@ -450,9 +447,6 @@ static int qjack_client_init(QJackClient *c)
jack_get_client_name(c->client));
}
/* Allocate working buffer for process callback */
c->process_buffers = g_new(float *, c->nchannels);
jack_set_process_callback(c->client, qjack_process , c);
jack_set_port_registration_callback(c->client, qjack_port_registration, c);
jack_set_xrun_callback(c->client, qjack_xrun, c);
@@ -584,7 +578,6 @@ static void qjack_client_fini_locked(QJackClient *c)
qjack_buffer_free(&c->fifo);
g_free(c->port);
g_free(c->process_buffers);
c->state = QJACK_STATE_DISCONNECTED;
/* fallthrough */
@@ -645,7 +638,7 @@ static int qjack_thread_creator(jack_native_thread_t *thread,
}
#endif
static void *qjack_init(Audiodev *dev, Error **errp)
static void *qjack_init(Audiodev *dev)
{
assert(dev->driver == AUDIODEV_DRIVER_JACK);
return dev;
@@ -676,6 +669,7 @@ static struct audio_driver jack_driver = {
.init = qjack_init,
.fini = qjack_fini,
.pcm_ops = &jack_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof(QJackOut),
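
The client_name hunk above trades a run-time-sized VLA for a heap buffer. A sketch of the upstream pattern (jack_client_name_size() is the real JACK call and its result includes the NUL; the wrapper itself is illustrative):

    #include <glib.h>
    #include <stdio.h>
    #include <jack/jack.h>

    static char *build_client_name(const char *dir, const char *app)
    {
        int len = jack_client_name_size();       /* run-time limit, incl. NUL */
        g_autofree char *name = g_new(char, len);
        snprintf(name, len, "%s-%s", dir, app);  /* truncated to the limit */
        return g_steal_pointer(&name);           /* caller owns the result */
    }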

audio/meson.build

@@ -1,6 +1,7 @@
system_ss.add([spice_headers, files('audio.c')])
system_ss.add(files(
'audio-hmp-cmds.c',
'audio_legacy.c',
'mixeng.c',
'noaudio.c',
'wavaudio.c',
@@ -30,8 +31,7 @@ endforeach
if dbus_display
module_ss = ss.source_set()
module_ss.add(when: [gio, dbus_display1_dep, pixman],
if_true: files('dbusaudio.c'))
module_ss.add(when: [gio, pixman], if_true: files('dbusaudio.c'))
audio_modules += {'dbus': module_ss}
endif

audio/mixeng.h

@@ -38,7 +38,7 @@ typedef struct st_sample st_sample;
typedef void (t_sample) (struct st_sample *dst, const void *src, int samples);
typedef void (f_sample) (void *dst, const struct st_sample *src, int samples);
/* indices: [stereo][signed][swap endianness][8, 16 or 32-bits] */
/* indices: [stereo][signed][swap endiannes][8, 16 or 32-bits] */
extern t_sample *mixeng_conv[2][2][2][3];
extern f_sample *mixeng_clip[2][2][2][3];

audio/noaudio.c

@@ -104,7 +104,7 @@ static void no_enable_in(HWVoiceIn *hw, bool enable)
}
}
static void *no_audio_init(Audiodev *dev, Error **errp)
static void *no_audio_init(Audiodev *dev)
{
return &no_audio_init;
}
@@ -135,6 +135,7 @@ static struct audio_driver no_audio_driver = {
.init = no_audio_init,
.fini = no_audio_fini,
.pcm_ops = &no_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (NoVoiceOut),

audio/ossaudio.c

@@ -28,7 +28,6 @@
#include "qemu/main-loop.h"
#include "qemu/module.h"
#include "qemu/host-utils.h"
#include "qapi/error.h"
#include "audio.h"
#include "trace.h"
@@ -549,6 +548,7 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
hw->size_emul);
hw->buf_emul = NULL;
} else {
int err;
int trig = 0;
if (ioctl (fd, SNDCTL_DSP_SETTRIGGER, &trig) < 0) {
oss_logerr (errno, "SNDCTL_DSP_SETTRIGGER 0 failed\n");
@@ -736,7 +736,7 @@ static void oss_init_per_direction(AudiodevOssPerDirectionOptions *opdo)
}
}
static void *oss_audio_init(Audiodev *dev, Error **errp)
static void *oss_audio_init(Audiodev *dev)
{
AudiodevOssOptions *oopts;
assert(dev->driver == AUDIODEV_DRIVER_OSS);
@@ -745,12 +745,8 @@ static void *oss_audio_init(Audiodev *dev, Error **errp)
oss_init_per_direction(oopts->in);
oss_init_per_direction(oopts->out);
if (access(oopts->in->dev ?: "/dev/dsp", R_OK | W_OK) < 0) {
error_setg_errno(errp, errno, "%s not accessible", oopts->in->dev ?: "/dev/dsp");
return NULL;
}
if (access(oopts->out->dev ?: "/dev/dsp", R_OK | W_OK) < 0) {
error_setg_errno(errp, errno, "%s not accessible", oopts->out->dev ?: "/dev/dsp");
if (access(oopts->in->dev ?: "/dev/dsp", R_OK | W_OK) < 0 ||
access(oopts->out->dev ?: "/dev/dsp", R_OK | W_OK) < 0) {
return NULL;
}
return dev;
@@ -783,6 +779,7 @@ static struct audio_driver oss_audio_driver = {
.init = oss_audio_init,
.fini = oss_audio_fini,
.pcm_ops = &oss_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (OSSVoiceOut),
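
Both versions above gate initialization on an access(2) probe of the capture and playback nodes; the branch merely drops the per-device error message. The probe in isolation (illustrative wrapper):

    #include <unistd.h>

    /* A NULL dev falls back to the conventional default node. */
    static int oss_dev_usable(const char *dev)
    {
        return access(dev ? dev : "/dev/dsp", R_OK | W_OK) == 0;
    }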

audio/paaudio.c

@@ -3,7 +3,7 @@
#include "qemu/osdep.h"
#include "qemu/module.h"
#include "audio.h"
#include "qapi/error.h"
#include "qapi/opts-visitor.h"
#include <pulse/pulseaudio.h>
@@ -818,7 +818,7 @@ fail:
return NULL;
}
static void *qpa_audio_init(Audiodev *dev, Error **errp)
static void *qpa_audio_init(Audiodev *dev)
{
paaudio *g;
AudiodevPaOptions *popts = &dev->u.pa;
@@ -834,12 +834,10 @@ static void *qpa_audio_init(Audiodev *dev, Error **errp)
runtime = getenv("XDG_RUNTIME_DIR");
if (!runtime) {
error_setg(errp, "XDG_RUNTIME_DIR not set");
return NULL;
}
snprintf(pidfile, sizeof(pidfile), "%s/pulse/pid", runtime);
if (stat(pidfile, &st) != 0) {
error_setg_errno(errp, errno, "could not stat pidfile %s", pidfile);
return NULL;
}
}
@@ -869,7 +867,6 @@ static void *qpa_audio_init(Audiodev *dev, Error **errp)
}
if (!g->conn) {
g_free(g);
error_setg(errp, "could not connect to PulseAudio server");
return NULL;
}
@@ -931,6 +928,7 @@ static struct audio_driver pa_audio_driver = {
.init = qpa_audio_init,
.fini = qpa_audio_fini,
.pcm_ops = &qpa_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (PAVoiceOut),

audio/pwaudio.c

@@ -1,5 +1,5 @@
/*
* QEMU PipeWire audio driver
* QEMU Pipewire audio driver
*
* Copyright (c) 2023 Red Hat Inc.
*
@@ -11,8 +11,8 @@
#include "qemu/osdep.h"
#include "qemu/module.h"
#include "audio.h"
#include <errno.h>
#include "qemu/error-report.h"
#include "qapi/error.h"
#include <spa/param/audio/format-utils.h>
#include <spa/utils/ringbuffer.h>
#include <spa/utils/result.h>
@@ -66,9 +66,6 @@ typedef struct PWVoiceIn {
PWVoice v;
} PWVoiceIn;
#define PW_VOICE_IN(v) ((PWVoiceIn *)v)
#define PW_VOICE_OUT(v) ((PWVoiceOut *)v)
static void
stream_destroy(void *data)
{
@@ -200,6 +197,16 @@ on_stream_state_changed(void *data, enum pw_stream_state old,
trace_pw_state_changed(pw_stream_get_node_id(v->stream),
pw_stream_state_as_string(state));
switch (state) {
case PW_STREAM_STATE_ERROR:
case PW_STREAM_STATE_UNCONNECTED:
break;
case PW_STREAM_STATE_PAUSED:
case PW_STREAM_STATE_CONNECTING:
case PW_STREAM_STATE_STREAMING:
break;
}
}
static const struct pw_stream_events capture_stream_events = {
@@ -417,8 +424,8 @@ pw_to_audfmt(enum spa_audio_format fmt, int *endianness,
}
static int
qpw_stream_new(pwaudio *c, PWVoice *v, const char *stream_name,
const char *name, enum spa_direction dir)
create_stream(pwaudio *c, PWVoice *v, const char *stream_name,
const char *name, enum spa_direction dir)
{
int res;
uint32_t n_params;
@@ -429,10 +436,6 @@ qpw_stream_new(pwaudio *c, PWVoice *v, const char *stream_name,
struct pw_properties *props;
props = pw_properties_new(NULL, NULL);
if (!props) {
error_report("Failed to create PW properties: %s", g_strerror(errno));
return -1;
}
/* 75% of the timer period for faster updates */
buf_samples = (uint64_t)v->g->dev->timer_period * v->info.rate
@@ -445,8 +448,8 @@ qpw_stream_new(pwaudio *c, PWVoice *v, const char *stream_name,
pw_properties_set(props, PW_KEY_TARGET_OBJECT, name);
}
v->stream = pw_stream_new(c->core, stream_name, props);
if (v->stream == NULL) {
error_report("Failed to create PW stream: %s", g_strerror(errno));
return -1;
}
@@ -474,7 +477,6 @@ qpw_stream_new(pwaudio *c, PWVoice *v, const char *stream_name,
PW_STREAM_FLAG_MAP_BUFFERS |
PW_STREAM_FLAG_RT_PROCESS, params, n_params);
if (res < 0) {
error_report("Failed to connect PW stream: %s", g_strerror(errno));
pw_stream_destroy(v->stream);
return -1;
}
@@ -482,37 +484,71 @@ qpw_stream_new(pwaudio *c, PWVoice *v, const char *stream_name,
return 0;
}
static void
qpw_set_position(uint32_t channels, uint32_t position[SPA_AUDIO_MAX_CHANNELS])
static int
qpw_stream_new(pwaudio *c, PWVoice *v, const char *stream_name,
const char *name, enum spa_direction dir)
{
memcpy(position, (uint32_t[SPA_AUDIO_MAX_CHANNELS]) { SPA_AUDIO_CHANNEL_UNKNOWN, },
sizeof(uint32_t) * SPA_AUDIO_MAX_CHANNELS);
/*
* TODO: This currently expects the only frontend supporting more than 2
* channels is the usb-audio. We will need some means to set channel
* order when a new frontend gains multi-channel support.
*/
switch (channels) {
int r;
switch (v->info.channels) {
case 8:
position[6] = SPA_AUDIO_CHANNEL_SL;
position[7] = SPA_AUDIO_CHANNEL_SR;
/* fallthrough */
v->info.position[0] = SPA_AUDIO_CHANNEL_FL;
v->info.position[1] = SPA_AUDIO_CHANNEL_FR;
v->info.position[2] = SPA_AUDIO_CHANNEL_FC;
v->info.position[3] = SPA_AUDIO_CHANNEL_LFE;
v->info.position[4] = SPA_AUDIO_CHANNEL_RL;
v->info.position[5] = SPA_AUDIO_CHANNEL_RR;
v->info.position[6] = SPA_AUDIO_CHANNEL_SL;
v->info.position[7] = SPA_AUDIO_CHANNEL_SR;
break;
case 6:
position[2] = SPA_AUDIO_CHANNEL_FC;
position[3] = SPA_AUDIO_CHANNEL_LFE;
position[4] = SPA_AUDIO_CHANNEL_RL;
position[5] = SPA_AUDIO_CHANNEL_RR;
/* fallthrough */
v->info.position[0] = SPA_AUDIO_CHANNEL_FL;
v->info.position[1] = SPA_AUDIO_CHANNEL_FR;
v->info.position[2] = SPA_AUDIO_CHANNEL_FC;
v->info.position[3] = SPA_AUDIO_CHANNEL_LFE;
v->info.position[4] = SPA_AUDIO_CHANNEL_RL;
v->info.position[5] = SPA_AUDIO_CHANNEL_RR;
break;
case 5:
v->info.position[0] = SPA_AUDIO_CHANNEL_FL;
v->info.position[1] = SPA_AUDIO_CHANNEL_FR;
v->info.position[2] = SPA_AUDIO_CHANNEL_FC;
v->info.position[3] = SPA_AUDIO_CHANNEL_LFE;
v->info.position[4] = SPA_AUDIO_CHANNEL_RC;
break;
case 4:
v->info.position[0] = SPA_AUDIO_CHANNEL_FL;
v->info.position[1] = SPA_AUDIO_CHANNEL_FR;
v->info.position[2] = SPA_AUDIO_CHANNEL_FC;
v->info.position[3] = SPA_AUDIO_CHANNEL_RC;
break;
case 3:
v->info.position[0] = SPA_AUDIO_CHANNEL_FL;
v->info.position[1] = SPA_AUDIO_CHANNEL_FR;
v->info.position[2] = SPA_AUDIO_CHANNEL_LFE;
break;
case 2:
position[0] = SPA_AUDIO_CHANNEL_FL;
position[1] = SPA_AUDIO_CHANNEL_FR;
v->info.position[0] = SPA_AUDIO_CHANNEL_FL;
v->info.position[1] = SPA_AUDIO_CHANNEL_FR;
break;
case 1:
position[0] = SPA_AUDIO_CHANNEL_MONO;
v->info.position[0] = SPA_AUDIO_CHANNEL_MONO;
break;
default:
dolog("Internal error: unsupported channel count %d\n", channels);
for (size_t i = 0; i < v->info.channels; i++) {
v->info.position[i] = SPA_AUDIO_CHANNEL_UNKNOWN;
}
break;
}
/* create a new unconnected pwstream */
r = create_stream(c, v, stream_name, name, dir);
if (r < 0) {
AUD_log(AUDIO_CAP, "Failed to create stream.");
return -1;
}
return r;
}
static int
@@ -530,7 +566,6 @@ qpw_init_out(HWVoiceOut *hw, struct audsettings *as, void *drv_opaque)
v->info.format = audfmt_to_pw(as->fmt, as->endianness);
v->info.channels = as->nchannels;
qpw_set_position(as->nchannels, v->info.position);
v->info.rate = as->freq;
obt_as.fmt =
@@ -544,6 +579,7 @@ qpw_init_out(HWVoiceOut *hw, struct audsettings *as, void *drv_opaque)
r = qpw_stream_new(c, v, ppdo->stream_name ? : c->dev->id,
ppdo->name, SPA_DIRECTION_OUTPUT);
if (r < 0) {
error_report("qpw_stream_new for playback failed");
pw_thread_loop_unlock(c->thread_loop);
return -1;
}
@@ -577,7 +613,6 @@ qpw_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
v->info.format = audfmt_to_pw(as->fmt, as->endianness);
v->info.channels = as->nchannels;
qpw_set_position(as->nchannels, v->info.position);
v->info.rate = as->freq;
obt_as.fmt =
@@ -588,6 +623,7 @@ qpw_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
r = qpw_stream_new(c, v, ppdo->stream_name ? : c->dev->id,
ppdo->name, SPA_DIRECTION_INPUT);
if (r < 0) {
error_report("qpw_stream_new for recording failed");
pw_thread_loop_unlock(c->thread_loop);
return -1;
}
@@ -603,56 +639,63 @@ qpw_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
return 0;
}
static void
qpw_voice_fini(PWVoice *v)
{
pwaudio *c = v->g;
if (!v->stream) {
return;
}
pw_thread_loop_lock(c->thread_loop);
pw_stream_destroy(v->stream);
v->stream = NULL;
pw_thread_loop_unlock(c->thread_loop);
}
static void
qpw_fini_out(HWVoiceOut *hw)
{
qpw_voice_fini(&PW_VOICE_OUT(hw)->v);
PWVoiceOut *pw = (PWVoiceOut *) hw;
PWVoice *v = &pw->v;
if (v->stream) {
pwaudio *c = v->g;
pw_thread_loop_lock(c->thread_loop);
pw_stream_destroy(v->stream);
v->stream = NULL;
pw_thread_loop_unlock(c->thread_loop);
}
}
static void
qpw_fini_in(HWVoiceIn *hw)
{
qpw_voice_fini(&PW_VOICE_IN(hw)->v);
PWVoiceIn *pw = (PWVoiceIn *) hw;
PWVoice *v = &pw->v;
if (v->stream) {
pwaudio *c = v->g;
pw_thread_loop_lock(c->thread_loop);
pw_stream_destroy(v->stream);
v->stream = NULL;
pw_thread_loop_unlock(c->thread_loop);
}
}
static void
qpw_voice_set_enabled(PWVoice *v, bool enable)
qpw_enable_out(HWVoiceOut *hw, bool enable)
{
PWVoiceOut *po = (PWVoiceOut *) hw;
PWVoice *v = &po->v;
pwaudio *c = v->g;
pw_thread_loop_lock(c->thread_loop);
pw_stream_set_active(v->stream, enable);
pw_thread_loop_unlock(c->thread_loop);
}
static void
qpw_enable_out(HWVoiceOut *hw, bool enable)
{
qpw_voice_set_enabled(&PW_VOICE_OUT(hw)->v, enable);
}
static void
qpw_enable_in(HWVoiceIn *hw, bool enable)
{
qpw_voice_set_enabled(&PW_VOICE_IN(hw)->v, enable);
PWVoiceIn *pi = (PWVoiceIn *) hw;
PWVoice *v = &pi->v;
pwaudio *c = v->g;
pw_thread_loop_lock(c->thread_loop);
pw_stream_set_active(v->stream, enable);
pw_thread_loop_unlock(c->thread_loop);
}
static void
qpw_voice_set_volume(PWVoice *v, Volume *vol)
qpw_volume_out(HWVoiceOut *hw, Volume *vol)
{
PWVoiceOut *pw = (PWVoiceOut *) hw;
PWVoice *v = &pw->v;
pwaudio *c = v->g;
int i, ret;
@@ -673,16 +716,29 @@ qpw_voice_set_volume(PWVoice *v, Volume *vol)
pw_thread_loop_unlock(c->thread_loop);
}
static void
qpw_volume_out(HWVoiceOut *hw, Volume *vol)
{
qpw_voice_set_volume(&PW_VOICE_OUT(hw)->v, vol);
}
static void
qpw_volume_in(HWVoiceIn *hw, Volume *vol)
{
qpw_voice_set_volume(&PW_VOICE_IN(hw)->v, vol);
PWVoiceIn *pw = (PWVoiceIn *) hw;
PWVoice *v = &pw->v;
pwaudio *c = v->g;
int i, ret;
pw_thread_loop_lock(c->thread_loop);
v->volume.channels = vol->channels;
for (i = 0; i < vol->channels; ++i) {
v->volume.values[i] = (float)vol->vol[i] / 255;
}
ret = pw_stream_set_control(v->stream,
SPA_PROP_channelVolumes, v->volume.channels, v->volume.values, 0);
trace_pw_vol(ret == 0 ? "success" : "failed");
v->muted = vol->mute;
float val = v->muted ? 1.f : 0.f;
ret = pw_stream_set_control(v->stream, SPA_PROP_mute, 1, &val, 0);
pw_thread_loop_unlock(c->thread_loop);
}
static int wait_resync(pwaudio *pw)
@@ -704,7 +760,6 @@ static int wait_resync(pwaudio *pw)
}
return 0;
}
static void
on_core_error(void *data, uint32_t id, int seq, int res, const char *message)
{
@@ -736,31 +791,30 @@ static const struct pw_core_events core_events = {
};
static void *
qpw_audio_init(Audiodev *dev, Error **errp)
qpw_audio_init(Audiodev *dev)
{
g_autofree pwaudio *pw = g_new0(pwaudio, 1);
assert(dev->driver == AUDIODEV_DRIVER_PIPEWIRE);
trace_pw_audio_init();
pw_init(NULL, NULL);
trace_pw_audio_init();
assert(dev->driver == AUDIODEV_DRIVER_PIPEWIRE);
pw->dev = dev;
pw->thread_loop = pw_thread_loop_new("PipeWire thread loop", NULL);
pw->thread_loop = pw_thread_loop_new("Pipewire thread loop", NULL);
if (pw->thread_loop == NULL) {
error_setg_errno(errp, errno, "Could not create PipeWire loop");
error_report("Could not create Pipewire loop");
goto fail;
}
pw->context =
pw_context_new(pw_thread_loop_get_loop(pw->thread_loop), NULL, 0);
if (pw->context == NULL) {
error_setg_errno(errp, errno, "Could not create PipeWire context");
error_report("Could not create Pipewire context");
goto fail;
}
if (pw_thread_loop_start(pw->thread_loop) < 0) {
error_setg_errno(errp, errno, "Could not start PipeWire loop");
error_report("Could not start Pipewire loop");
goto fail;
}
@@ -769,13 +823,13 @@ qpw_audio_init(Audiodev *dev, Error **errp)
pw->core = pw_context_connect(pw->context, NULL, 0);
if (pw->core == NULL) {
pw_thread_loop_unlock(pw->thread_loop);
goto fail_error;
goto fail;
}
if (pw_core_add_listener(pw->core, &pw->core_listener,
&core_events, pw) < 0) {
pw_thread_loop_unlock(pw->thread_loop);
goto fail_error;
goto fail;
}
if (wait_resync(pw) < 0) {
pw_thread_loop_unlock(pw->thread_loop);
@@ -785,14 +839,17 @@ qpw_audio_init(Audiodev *dev, Error **errp)
return g_steal_pointer(&pw);
fail_error:
error_setg(errp, "Failed to initialize PW context");
fail:
AUD_log(AUDIO_CAP, "Failed to initialize PW context");
if (pw->thread_loop) {
pw_thread_loop_stop(pw->thread_loop);
}
g_clear_pointer(&pw->context, pw_context_destroy);
g_clear_pointer(&pw->thread_loop, pw_thread_loop_destroy);
if (pw->context) {
g_clear_pointer(&pw->context, pw_context_destroy);
}
if (pw->thread_loop) {
g_clear_pointer(&pw->thread_loop, pw_thread_loop_destroy);
}
return NULL;
}
@@ -842,6 +899,7 @@ static struct audio_driver pw_audio_driver = {
.init = qpw_audio_init,
.fini = qpw_audio_fini,
.pcm_ops = &qpw_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof(PWVoiceOut),
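
The volume paths above map QEMU's per-channel integer volume onto PipeWire's float channelVolumes, with mute sent separately through SPA_PROP_mute rather than by zeroing the volumes. The scaling step in isolation (illustrative):

    /* QEMU volumes are 0..255 per channel; PipeWire expects 0.0..1.0. */
    static float qemu_vol_to_pw(unsigned vol)
    {
        return (float)vol / 255.0f;
    }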

audio/sdlaudio.c

@@ -26,7 +26,6 @@
#include <SDL.h>
#include <SDL_thread.h>
#include "qemu/module.h"
#include "qapi/error.h"
#include "audio.h"
#ifndef _WIN32
@@ -450,10 +449,10 @@ static void sdl_enable_in(HWVoiceIn *hw, bool enable)
SDL_PauseAudioDevice(sdl->devid, !enable);
}
static void *sdl_audio_init(Audiodev *dev, Error **errp)
static void *sdl_audio_init(Audiodev *dev)
{
if (SDL_InitSubSystem (SDL_INIT_AUDIO)) {
error_setg(errp, "SDL failed to initialize audio subsystem");
sdl_logerr ("SDL failed to initialize audio subsystem\n");
return NULL;
}
@@ -494,6 +493,7 @@ static struct audio_driver sdl_audio_driver = {
.init = sdl_audio_init,
.fini = sdl_audio_fini,
.pcm_ops = &sdl_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof(SDLVoiceOut),

audio/sndioaudio.c

@@ -518,7 +518,7 @@ static void sndio_fini_in(HWVoiceIn *hw)
sndio_fini(self);
}
static void *sndio_audio_init(Audiodev *dev, Error **errp)
static void *sndio_audio_init(Audiodev *dev)
{
assert(dev->driver == AUDIODEV_DRIVER_SNDIO);
return dev;
@@ -550,6 +550,7 @@ static struct audio_driver sndio_audio_driver = {
.init = sndio_audio_init,
.fini = sndio_audio_fini,
.pcm_ops = &sndio_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof(SndioVoice),

audio/spiceaudio.c

@@ -22,7 +22,6 @@
#include "qemu/module.h"
#include "qemu/error-report.h"
#include "qemu/timer.h"
#include "qapi/error.h"
#include "ui/qemu-spice.h"
#define AUDIO_CAP "spice"
@@ -72,13 +71,11 @@ static const SpiceRecordInterface record_sif = {
.base.minor_version = SPICE_INTERFACE_RECORD_MINOR,
};
static void *spice_audio_init(Audiodev *dev, Error **errp)
static void *spice_audio_init(Audiodev *dev)
{
if (!using_spice) {
error_setg(errp, "Cannot use spice audio without -spice");
return NULL;
}
return &spice_audio_init;
}

audio/trace-events

@@ -24,7 +24,7 @@ pw_read(int32_t avail, uint32_t index, size_t len) "avail=%d index=%u len=%zu"
pw_write(int32_t filled, int32_t avail, uint32_t index, size_t len) "filled=%d avail=%d index=%u len=%zu"
pw_vol(const char *ret) "set volume: %s"
pw_period(uint64_t quantum, uint32_t rate) "period =%" PRIu64 "/%u"
pw_audio_init(void) "Initialize PipeWire context"
pw_audio_init(void) "Initialize Pipewire context"
# audio.c
audio_timer_start(int interval) "interval %d ms"

audio/wavaudio.c

@@ -97,10 +97,6 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
dolog ("WAVE files can not handle 32bit formats\n");
return -1;
case AUDIO_FORMAT_F32:
dolog("WAVE files can not handle float formats\n");
return -1;
default:
abort();
}
@@ -186,7 +182,7 @@ static void wav_enable_out(HWVoiceOut *hw, bool enable)
}
}
static void *wav_audio_init(Audiodev *dev, Error **errp)
static void *wav_audio_init(Audiodev *dev)
{
assert(dev->driver == AUDIODEV_DRIVER_WAV);
return dev;
@@ -212,6 +208,7 @@ static struct audio_driver wav_audio_driver = {
.init = wav_audio_init,
.fini = wav_audio_fini,
.pcm_ops = &wav_pcm_ops,
.can_be_default = 0,
.max_voices_out = 1,
.max_voices_in = 0,
.voice_size_out = sizeof (WAVVoiceOut),

backends/Kconfig

@@ -1,5 +1 @@
source tpm/Kconfig
config IOMMUFD
bool
depends on VFIO

backends/cryptodev.c

@@ -191,11 +191,6 @@ static int cryptodev_backend_account(CryptoDevBackend *backend,
     if (algtype == QCRYPTODEV_BACKEND_ALG_ASYM) {
         CryptoDevBackendAsymOpInfo *asym_op_info = op_info->u.asym_op_info;
         len = asym_op_info->src_len;
-
-        if (unlikely(!backend->asym_stat)) {
-            error_report("cryptodev: Unexpected asym operation");
-            return -VIRTIO_CRYPTO_NOTSUPP;
-        }
         switch (op_info->op_code) {
         case VIRTIO_CRYPTO_AKCIPHER_ENCRYPT:
             CryptodevAsymStatIncEncrypt(backend, len);
@@ -215,11 +210,6 @@ static int cryptodev_backend_account(CryptoDevBackend *backend,
     } else if (algtype == QCRYPTODEV_BACKEND_ALG_SYM) {
         CryptoDevBackendSymOpInfo *sym_op_info = op_info->u.sym_op_info;
         len = sym_op_info->src_len;
-
-        if (unlikely(!backend->sym_stat)) {
-            error_report("cryptodev: Unexpected sym operation");
-            return -VIRTIO_CRYPTO_NOTSUPP;
-        }
         switch (op_info->op_code) {
         case VIRTIO_CRYPTO_CIPHER_ENCRYPT:
             CryptodevSymStatIncEncrypt(backend, len);
@@ -252,11 +242,10 @@ static void cryptodev_backend_throttle_timer_cb(void *opaque)
             continue;
         }

-        throttle_account(&backend->ts, THROTTLE_WRITE, ret);
+        throttle_account(&backend->ts, true, ret);
         cryptodev_backend_operation(backend, op_info);
         if (throttle_enabled(&backend->tc) &&
-            throttle_schedule_timer(&backend->ts, &backend->tt,
-                                    THROTTLE_WRITE)) {
+            throttle_schedule_timer(&backend->ts, &backend->tt, true)) {
             break;
         }
     }
@@ -272,7 +261,7 @@ int cryptodev_backend_crypto_operation(
         goto do_account;
     }

-    if (throttle_schedule_timer(&backend->ts, &backend->tt, THROTTLE_WRITE) ||
+    if (throttle_schedule_timer(&backend->ts, &backend->tt, true) ||
         !QTAILQ_EMPTY(&backend->opinfos)) {
         QTAILQ_INSERT_TAIL(&backend->opinfos, op_info, next);
         return 0;
@@ -284,7 +273,7 @@ do_account:
         return ret;
     }

-    throttle_account(&backend->ts, THROTTLE_WRITE, ret);
+    throttle_account(&backend->ts, true, ret);

     return cryptodev_backend_operation(backend, op_info);
 }
@@ -342,7 +331,8 @@ static void cryptodev_backend_set_throttle(CryptoDevBackend *backend, int field,
     if (!enabled) {
         throttle_init(&backend->ts);
         throttle_timers_init(&backend->tt, qemu_get_aio_context(),
-                             QEMU_CLOCK_REALTIME, NULL,
+                             QEMU_CLOCK_REALTIME,
+                             cryptodev_backend_throttle_timer_cb, /* FIXME */
                              cryptodev_backend_throttle_timer_cb, backend);
     }
@@ -398,7 +388,6 @@ static void cryptodev_backend_set_ops(Object *obj, Visitor *v,
 static void
 cryptodev_backend_complete(UserCreatable *uc, Error **errp)
 {
-    ERRP_GUARD();
     CryptoDevBackend *backend = CRYPTODEV_BACKEND(uc);
     CryptoDevBackendClass *bc = CRYPTODEV_BACKEND_GET_CLASS(uc);
     uint32_t services;
@@ -407,20 +396,11 @@ cryptodev_backend_complete(UserCreatable *uc, Error **errp)
     QTAILQ_INIT(&backend->opinfos);
     value = backend->tc.buckets[THROTTLE_OPS_TOTAL].avg;
     cryptodev_backend_set_throttle(backend, THROTTLE_OPS_TOTAL, value, errp);
-    if (*errp) {
-        return;
-    }
     value = backend->tc.buckets[THROTTLE_BPS_TOTAL].avg;
     cryptodev_backend_set_throttle(backend, THROTTLE_BPS_TOTAL, value, errp);
-    if (*errp) {
-        return;
-    }
     if (bc->init) {
         bc->init(backend, errp);
-        if (*errp) {
-            return;
-        }
     }

     services = backend->conf.crypto_services;
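Most of the cryptodev churn above is one API difference: upstream expresses throttle direction with the ThrottleDirection enum, while this branch still passes a bool where true means write, and upstream's throttle_timers_init() takes a separate (here NULL) read-timer callback where the branch duplicates the write callback under a FIXME. A hedged sketch of the accounting call in both generations, assuming a ThrottleState *ts and a byte count ret as in the hunks:

/* Upstream: direction is an explicit enum value. */
throttle_account(ts, THROTTLE_WRITE, ret);

/* This branch: direction is a bool, true == write. */
throttle_account(ts, true, ret);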

View File

@@ -393,7 +393,7 @@ static const VMStateDescription dbus_vmstate = {
     .version_id = 0,
     .pre_save = dbus_vmstate_pre_save,
     .post_load = dbus_vmstate_post_load,
-    .fields = (const VMStateField[]) {
+    .fields = (VMStateField[]) {
         VMSTATE_UINT32(data_size, DBusVMState),
         VMSTATE_VBUFFER_ALLOC_UINT32(data, DBusVMState, 0, 0, data_size),
         VMSTATE_END_OF_LIST()
@@ -426,7 +426,8 @@ dbus_vmstate_complete(UserCreatable *uc, Error **errp)
         return;
     }

-    if (vmstate_register_any(VMSTATE_IF(self), &dbus_vmstate, self) < 0) {
+    if (vmstate_register(VMSTATE_IF(self), VMSTATE_INSTANCE_ID_ANY,
+                         &dbus_vmstate, self) < 0) {
         error_setg(errp, "Failed to register vmstate");
     }
 }
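vmstate_register_any() is a newer convenience wrapper; the branch spells the same registration out with vmstate_register() and an explicit VMSTATE_INSTANCE_ID_ANY. The two forms should be equivalent:

/* Upstream wrapper... */
if (vmstate_register_any(VMSTATE_IF(self), &dbus_vmstate, self) < 0) {
    error_setg(errp, "Failed to register vmstate");
}

/* ...and the expanded call used on this branch. */
if (vmstate_register(VMSTATE_IF(self), VMSTATE_INSTANCE_ID_ANY,
                     &dbus_vmstate, self) < 0) {
    error_setg(errp, "Failed to register vmstate");
}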

View File

@@ -17,29 +17,31 @@
 #include "sysemu/hostmem.h"
 #include "hw/i386/hostmem-epc.h"

-static bool
+static void
 sgx_epc_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
 {
-    g_autofree char *name = NULL;
     uint32_t ram_flags;
+    char *name;
     int fd;

     if (!backend->size) {
         error_setg(errp, "can't create backend with size 0");
-        return false;
+        return;
     }

     fd = qemu_open_old("/dev/sgx_vepc", O_RDWR);
     if (fd < 0) {
         error_setg_errno(errp, errno,
                          "failed to open /dev/sgx_vepc to alloc SGX EPC");
-        return false;
+        return;
     }

     name = object_get_canonical_path(OBJECT(backend));
     ram_flags = (backend->share ? RAM_SHARED : 0) | RAM_PROTECTED;
-    return memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend), name,
-                                          backend->size, ram_flags, fd, 0, errp);
+    memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend),
+                                   name, backend->size, ram_flags,
+                                   fd, 0, errp);
+    g_free(name);
 }

 static void sgx_epc_backend_instance_init(Object *obj)
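This and the following hostmem backends all track one interface change: upstream's HostMemoryBackendClass::alloc returns bool (as do the memory_region_init_ram_* helpers it tail-calls) and manages the region name with g_autofree, while on this branch alloc returns void, failure is only visible through errp, and the name is freed by hand. A rough sketch of the upstream contract, with my_backend_memory_alloc as a hypothetical backend:

static bool
my_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
{
    g_autofree char *name = NULL;

    if (!backend->size) {
        error_setg(errp, "can't create backend with size 0");
        return false;             /* callers check the return value */
    }

    name = host_memory_backend_get_name(backend);
    /* bool-returning init helper: propagates errp and success together */
    return memory_region_init_ram_flags_nomigrate(&backend->mr,
                                                  OBJECT(backend), name,
                                                  backend->size, 0, errp);
}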

View File

@@ -18,8 +18,6 @@
 #include "sysemu/hostmem.h"
 #include "qom/object_interfaces.h"
 #include "qom/object.h"
-#include "qapi/visitor.h"
-#include "qapi/qapi-visit-common.h"

 OBJECT_DECLARE_SIMPLE_TYPE(HostMemoryBackendFile, MEMORY_BACKEND_FILE)
@@ -33,63 +31,38 @@ struct HostMemoryBackendFile {
     bool discard_data;
     bool is_pmem;
     bool readonly;
-    OnOffAuto rom;
 };

-static bool
+static void
 file_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
 {
 #ifndef CONFIG_POSIX
     error_setg(errp, "backend '%s' not supported on this host",
                object_get_typename(OBJECT(backend)));
-    return false;
 #else
     HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(backend);
-    g_autofree gchar *name = NULL;
     uint32_t ram_flags;
+    gchar *name;

     if (!backend->size) {
         error_setg(errp, "can't create backend with size 0");
-        return false;
+        return;
     }

     if (!fb->mem_path) {
         error_setg(errp, "mem-path property not set");
-        return false;
-    }
-
-    switch (fb->rom) {
-    case ON_OFF_AUTO_AUTO:
-        /* Traditionally, opening the file readonly always resulted in ROM. */
-        fb->rom = fb->readonly ? ON_OFF_AUTO_ON : ON_OFF_AUTO_OFF;
-        break;
-    case ON_OFF_AUTO_ON:
-        if (!fb->readonly) {
-            error_setg(errp, "property 'rom' = 'on' is not supported with"
-                       " 'readonly' = 'off'");
-            return false;
-        }
-        break;
-    case ON_OFF_AUTO_OFF:
-        if (fb->readonly && backend->share) {
-            error_setg(errp, "property 'rom' = 'off' is incompatible with"
-                       " 'readonly' = 'on' and 'share' = 'on'");
-            return false;
-        }
-        break;
-    default:
-        g_assert_not_reached();
+        return;
     }

     name = host_memory_backend_get_name(backend);
     ram_flags = backend->share ? RAM_SHARED : 0;
-    ram_flags |= fb->readonly ? RAM_READONLY_FD : 0;
-    ram_flags |= fb->rom == ON_OFF_AUTO_ON ? RAM_READONLY : 0;
     ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
     ram_flags |= fb->is_pmem ? RAM_PMEM : 0;
     ram_flags |= RAM_NAMED_FILE;
-    return memory_region_init_ram_from_file(&backend->mr, OBJECT(backend), name,
-                                            backend->size, fb->align, ram_flags,
-                                            fb->mem_path, fb->offset, errp);
+    memory_region_init_ram_from_file(&backend->mr, OBJECT(backend), name,
+                                     backend->size, fb->align, ram_flags,
+                                     fb->mem_path, fb->offset, fb->readonly,
+                                     errp);
+    g_free(name);
 #endif
 }
@@ -228,32 +201,6 @@ static void file_memory_backend_set_readonly(Object *obj, bool value,
     fb->readonly = value;
 }

-static void file_memory_backend_get_rom(Object *obj, Visitor *v,
-                                        const char *name, void *opaque,
-                                        Error **errp)
-{
-    HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(obj);
-    OnOffAuto rom = fb->rom;
-
-    visit_type_OnOffAuto(v, name, &rom, errp);
-}
-
-static void file_memory_backend_set_rom(Object *obj, Visitor *v,
-                                        const char *name, void *opaque,
-                                        Error **errp)
-{
-    HostMemoryBackend *backend = MEMORY_BACKEND(obj);
-    HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(obj);
-
-    if (host_memory_backend_mr_inited(backend)) {
-        error_setg(errp, "cannot change property '%s' of %s.", name,
-                   object_get_typename(obj));
-        return;
-    }
-
-    visit_type_OnOffAuto(v, name, &fb->rom, errp);
-}
-
 static void file_backend_unparent(Object *obj)
 {
     HostMemoryBackend *backend = MEMORY_BACKEND(obj);
@@ -296,10 +243,6 @@ file_backend_class_init(ObjectClass *oc, void *data)
     object_class_property_add_bool(oc, "readonly",
                                    file_memory_backend_get_readonly,
                                    file_memory_backend_set_readonly);
-    object_class_property_add(oc, "rom", "OnOffAuto",
-        file_memory_backend_get_rom, file_memory_backend_set_rom, NULL, NULL);
-    object_class_property_set_description(oc, "rom",
-        "Whether to create Read Only Memory (ROM)");
 }

 static void file_backend_instance_finalize(Object *o)
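The removed get_rom/set_rom pair is the standard QOM idiom for a visitor-typed property (the 'rom' OnOffAuto property does not exist yet on this branch): the getter visits a copy of the field, the setter refuses changes once the memory region is initialized. A bare-bones sketch of the getter half; MyBackend, MY_BACKEND and the 'mode' field are hypothetical names:

static void my_get_mode(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
{
    MyBackend *s = MY_BACKEND(obj);   /* hypothetical QOM cast */
    OnOffAuto mode = s->mode;

    /* visit a copy so reading the property can never mutate the object */
    visit_type_OnOffAuto(v, name, &mode, errp);
}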

View File

@@ -31,17 +31,17 @@ struct HostMemoryBackendMemfd {
     bool seal;
 };

-static bool
+static void
 memfd_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
 {
     HostMemoryBackendMemfd *m = MEMORY_BACKEND_MEMFD(backend);
-    g_autofree char *name = NULL;
     uint32_t ram_flags;
+    char *name;
     int fd;

     if (!backend->size) {
         error_setg(errp, "can't create backend with size 0");
-        return false;
+        return;
     }

     fd = qemu_memfd_create(TYPE_MEMORY_BACKEND_MEMFD, backend->size,
@@ -49,14 +49,15 @@ memfd_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
                            F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_SEAL : 0,
                            errp);
     if (fd == -1) {
-        return false;
+        return;
     }

     name = host_memory_backend_get_name(backend);
     ram_flags = backend->share ? RAM_SHARED : 0;
     ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
-    return memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend), name,
-                                          backend->size, ram_flags, fd, 0, errp);
+    memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend), name,
+                                   backend->size, ram_flags, fd, 0, errp);
+    g_free(name);
 }

 static bool

View File

@@ -16,23 +16,23 @@
 #include "qemu/module.h"
 #include "qom/object_interfaces.h"

-static bool
+static void
 ram_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
 {
-    g_autofree char *name = NULL;
     uint32_t ram_flags;
+    char *name;

     if (!backend->size) {
         error_setg(errp, "can't create backend with size 0");
-        return false;
+        return;
     }

     name = host_memory_backend_get_name(backend);
     ram_flags = backend->share ? RAM_SHARED : 0;
     ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
-    return memory_region_init_ram_flags_nomigrate(&backend->mr, OBJECT(backend),
-                                                  name, backend->size,
-                                                  ram_flags, errp);
+    memory_region_init_ram_flags_nomigrate(&backend->mr, OBJECT(backend), name,
+                                           backend->size, ram_flags, errp);
+    g_free(name);
 }

 static void

View File

@@ -20,7 +20,6 @@
 #include "qom/object_interfaces.h"
 #include "qemu/mmap-alloc.h"
 #include "qemu/madvise.h"
-#include "hw/qdev-core.h"

 #ifdef CONFIG_NUMA
 #include <numaif.h>
@@ -220,6 +219,7 @@ static bool host_memory_backend_get_prealloc(Object *obj, Error **errp)
 static void host_memory_backend_set_prealloc(Object *obj, bool value,
                                              Error **errp)
 {
+    Error *local_err = NULL;
     HostMemoryBackend *backend = MEMORY_BACKEND(obj);

     if (!backend->reserve && value) {
@@ -237,8 +237,10 @@ static void host_memory_backend_set_prealloc(Object *obj, bool value,
         void *ptr = memory_region_get_ram_ptr(&backend->mr);
         uint64_t sz = memory_region_size(&backend->mr);

-        if (!qemu_prealloc_mem(fd, ptr, sz, backend->prealloc_threads,
-                               backend->prealloc_context, false, errp)) {
+        qemu_prealloc_mem(fd, ptr, sz, backend->prealloc_threads,
+                          backend->prealloc_context, &local_err);
+        if (local_err) {
+            error_propagate(errp, local_err);
             return;
         }
         backend->prealloc = true;
@@ -322,92 +324,91 @@ host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
 {
     HostMemoryBackend *backend = MEMORY_BACKEND(uc);
     HostMemoryBackendClass *bc = MEMORY_BACKEND_GET_CLASS(uc);
+    Error *local_err = NULL;
     void *ptr;
     uint64_t sz;
-    bool async = !phase_check(PHASE_LATE_BACKENDS_CREATED);

-    if (!bc->alloc) {
-        return;
-    }
-    if (!bc->alloc(backend, errp)) {
-        return;
-    }
+    if (bc->alloc) {
+        bc->alloc(backend, &local_err);
+        if (local_err) {
+            goto out;
+        }

-    ptr = memory_region_get_ram_ptr(&backend->mr);
-    sz = memory_region_size(&backend->mr);
+        ptr = memory_region_get_ram_ptr(&backend->mr);
+        sz = memory_region_size(&backend->mr);

-    if (backend->merge) {
-        qemu_madvise(ptr, sz, QEMU_MADV_MERGEABLE);
-    }
-    if (!backend->dump) {
-        qemu_madvise(ptr, sz, QEMU_MADV_DONTDUMP);
-    }
+        if (backend->merge) {
+            qemu_madvise(ptr, sz, QEMU_MADV_MERGEABLE);
+        }
+        if (!backend->dump) {
+            qemu_madvise(ptr, sz, QEMU_MADV_DONTDUMP);
+        }
 #ifdef CONFIG_NUMA
-    unsigned long lastbit = find_last_bit(backend->host_nodes, MAX_NODES);
-    /* lastbit == MAX_NODES means maxnode = 0 */
-    unsigned long maxnode = (lastbit + 1) % (MAX_NODES + 1);
-    /*
-     * Ensure policy won't be ignored in case memory is preallocated
-     * before mbind(). note: MPOL_MF_STRICT is ignored on hugepages so
-     * this doesn't catch hugepage case.
-     */
-    unsigned flags = MPOL_MF_STRICT | MPOL_MF_MOVE;
-    int mode = backend->policy;
+        unsigned long lastbit = find_last_bit(backend->host_nodes, MAX_NODES);
+        /* lastbit == MAX_NODES means maxnode = 0 */
+        unsigned long maxnode = (lastbit + 1) % (MAX_NODES + 1);
+        /* ensure policy won't be ignored in case memory is preallocated
+         * before mbind(). note: MPOL_MF_STRICT is ignored on hugepages so
+         * this doesn't catch hugepage case. */
+        unsigned flags = MPOL_MF_STRICT | MPOL_MF_MOVE;
+        int mode = backend->policy;

-    /* check for invalid host-nodes and policies and give more verbose
-     * error messages than mbind(). */
-    if (maxnode && backend->policy == MPOL_DEFAULT) {
-        error_setg(errp, "host-nodes must be empty for policy default,"
-                   " or you should explicitly specify a policy other"
-                   " than default");
-        return;
-    } else if (maxnode == 0 && backend->policy != MPOL_DEFAULT) {
-        error_setg(errp, "host-nodes must be set for policy %s",
-                   HostMemPolicy_str(backend->policy));
-        return;
-    }
+        /* check for invalid host-nodes and policies and give more verbose
+         * error messages than mbind(). */
+        if (maxnode && backend->policy == MPOL_DEFAULT) {
+            error_setg(errp, "host-nodes must be empty for policy default,"
+                       " or you should explicitly specify a policy other"
+                       " than default");
+            return;
+        } else if (maxnode == 0 && backend->policy != MPOL_DEFAULT) {
+            error_setg(errp, "host-nodes must be set for policy %s",
+                       HostMemPolicy_str(backend->policy));
+            return;
+        }

-    /*
-     * We can have up to MAX_NODES nodes, but we need to pass maxnode+1
-     * as argument to mbind() due to an old Linux bug (feature?) which
-     * cuts off the last specified node. This means backend->host_nodes
-     * must have MAX_NODES+1 bits available.
-     */
-    assert(sizeof(backend->host_nodes) >=
-           BITS_TO_LONGS(MAX_NODES + 1) * sizeof(unsigned long));
-    assert(maxnode <= MAX_NODES);
+        /* We can have up to MAX_NODES nodes, but we need to pass maxnode+1
+         * as argument to mbind() due to an old Linux bug (feature?) which
+         * cuts off the last specified node. This means backend->host_nodes
+         * must have MAX_NODES+1 bits available.
+         */
+        assert(sizeof(backend->host_nodes) >=
+               BITS_TO_LONGS(MAX_NODES + 1) * sizeof(unsigned long));
+        assert(maxnode <= MAX_NODES);

 #ifdef HAVE_NUMA_HAS_PREFERRED_MANY
-    if (mode == MPOL_PREFERRED && numa_has_preferred_many() > 0) {
-        /*
-         * Replace with MPOL_PREFERRED_MANY otherwise the mbind() below
-         * silently picks the first node.
-         */
-        mode = MPOL_PREFERRED_MANY;
-    }
+        if (mode == MPOL_PREFERRED && numa_has_preferred_many() > 0) {
+            /*
+             * Replace with MPOL_PREFERRED_MANY otherwise the mbind() below
+             * silently picks the first node.
+             */
+            mode = MPOL_PREFERRED_MANY;
+        }
 #endif

-    if (maxnode &&
-        mbind(ptr, sz, mode, backend->host_nodes, maxnode + 1, flags)) {
-        if (backend->policy != MPOL_DEFAULT || errno != ENOSYS) {
-            error_setg_errno(errp, errno,
-                             "cannot bind memory to host NUMA nodes");
-            return;
-        }
-    }
+        if (maxnode &&
+            mbind(ptr, sz, mode, backend->host_nodes, maxnode + 1, flags)) {
+            if (backend->policy != MPOL_DEFAULT || errno != ENOSYS) {
+                error_setg_errno(errp, errno,
+                                 "cannot bind memory to host NUMA nodes");
+                return;
+            }
+        }
 #endif
-    /*
-     * Preallocate memory after the NUMA policy has been instantiated.
-     * This is necessary to guarantee memory is allocated with
-     * specified NUMA policy in place.
-     */
-    if (backend->prealloc && !qemu_prealloc_mem(memory_region_get_fd(&backend->mr),
-                                                ptr, sz,
-                                                backend->prealloc_threads,
-                                                backend->prealloc_context,
-                                                async, errp)) {
-        return;
-    }
+        /* Preallocate memory after the NUMA policy has been instantiated.
+         * This is necessary to guarantee memory is allocated with
+         * specified NUMA policy in place.
+         */
+        if (backend->prealloc) {
+            qemu_prealloc_mem(memory_region_get_fd(&backend->mr), ptr, sz,
+                              backend->prealloc_threads,
+                              backend->prealloc_context, &local_err);
+            if (local_err) {
+                goto out;
+            }
+        }
+    }
+out:
+    error_propagate(errp, local_err);
 }
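The rewritten completion function above is a compact tour of the older Error idiom: accumulate into an Error *local_err, jump to an out: label on the first failure, and error_propagate() into the caller's errp; upstream instead passes errp straight down and branches on bool returns. Condensed, the pattern this branch relies on looks roughly like this (my_step_one/my_step_two are hypothetical helpers):

static void my_complete(UserCreatable *uc, Error **errp)
{
    Error *local_err = NULL;

    my_step_one(&local_err);
    if (local_err) {
        goto out;
    }
    my_step_two(&local_err);

out:
    /* error_propagate() is a no-op when local_err is still NULL */
    error_propagate(errp, local_err);
}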
static bool

Some files were not shown because too many files have changed in this diff.