Compare commits

..

39 Commits

Author SHA1 Message Date
Fabiano Rosas
9f1a8f4e85 tests/qtest/migration: Use the new migration_test_add
Replace the test registrations with the new function that prints test
names.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:16 -03:00
Fabiano Rosas
47b2c3d4f6 tests/qtest/migration: Add a wrapper to print test names
Our usage of gtest results in us losing the very basic functionality
of "knowing which test failed". The issue is that gtest only prints
test names ("paths" in gtest parlance) once the test has finished, but
we use asserts in the tests and crash gtest itself before it can print
anything. We also use a final abort when the result of g_test_run is
not 0.

Depending on how the test failed/broke, we can see the function that
triggered the abort, which may be representative of the test, but it
could also just be some generic function.

We have been relying on the primitive method of looking at the name of
the previous successful test and then looking at the code to figure
out which test should have come next.

Add a wrapper to the test registration that does the job of printing
the test name before running.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:16 -03:00
Fabiano Rosas
086d8dc142 tests/qtest: Add a test for fixed-ram with passing of fds
Add a multifd test for fixed-ram with passing of fds into QEMU. This
is how libvirt will consume the feature.

There are a couple of details to the fdset mechanism:

- multifd needs two distinct file descriptors (not duplicated with
  dup()) on the outgoing side so it can enable O_DIRECT only on the
  channels that write with alignment. The dup() system call creates
  file descriptors that share status flags, of which O_DIRECT is one.

  The incoming side doesn't set O_DIRECT, so it can dup() fds and
  therefore can receive only one in the fdset.

- the open() access mode flags used for the fds passed into QEMU need
  to match the flags QEMU uses to open the file. Currently O_WRONLY
  for src and O_RDONLY for dst.

O_DIRECT is not supported on all systems/filesystems, so run the fdset
test without O_DIRECT if that's the case. The migration code should
still work in that scenario.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:16 -03:00
Fabiano Rosas
4b6eeb2335 migration: Add support for fdset with multifd + file
Allow multifd to use an fdset when migrating to a file. This is useful
for the scenario where the management layer wants to have control over
the migration file.

By receiving the file descriptors directly, QEMU can delegate some
high level operating system operations to the management layer (such
as mandatory access control).

The management layer might also want to add its own headers before the
migration stream.

Enable the "file:/dev/fdset/#" syntax for the multifd migration with
fixed-ram. The fdset should contain two fds on the source side of
migration and one fd on the destination side. The two fds should not be
duplicates of each other.

Multifd enables O_DIRECT on the source side using one of the fds and
keeps the other without the flag. None of the fds should have the
O_DIRECT flag already set.

The fdset mechanism also requires that the open() access mode flags be
the same as what QEMU uses internally: WRONLY for the source fds and
RDONLY for the destination fds.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:16 -03:00
Fabiano Rosas
2cb56cb34e monitor: fdset: Match against O_DIRECT
We're about to enable the use of O_DIRECT in the migration code and
due to the alignment restrictions imposed by filesystems we need to
make sure the flag is only used when doing aligned IO.

The migration will do parallel IO to different regions of a file, so
we need to use more than one file descriptor. Those cannot be obtained
by duplicating (dup()) since duplicated file descriptors share the
file status flags, including O_DIRECT. If one migration channel does
unaligned IO while another sets O_DIRECT to do aligned IO, the
filesystem would fail the unaligned operation.

The add-fd QMP command along with the fdset code are specifically
designed to allow the user to pass a set of file descriptors with
different access flags into QEMU to be later fetched by code that
needs to alternate between those flags when doing IO.

Extend the fdset matching function to behave the same with the
O_DIRECT flag.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:16 -03:00
Fabiano Rosas
5deb383d1a monitor: Extract fdset fd flags comparison into a function
We're about to add one more condition to the flags comparison that
requires an ifdef. Move the code into a separate function now to make
it cleaner after the next patch.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:16 -03:00
Fabiano Rosas
4478aef543 monitor: Honor QMP request for fd removal immediately
We're currently only removing an fd from the fdset if the VM is
running. This causes a QMP call to "remove-fd" to not actually remove
the fd if the VM happens to be stopped.

While the fd would eventually be removed when monitor_fdset_cleanup()
is called again, the user request should be honored and the fd
actually removed. Calling remove-fd + query-fdset shows a recently
removed fd still present.

The runstate_is_running() check was introduced by commit ebe52b592d
("monitor: Prevent removing fd from set during init"), whose shortlog
indicates that it was trying to avoid removing a yet-unduplicated fd
too early.

I don't see why an fd explicitly removed with qmp_remove_fd() should
be gated on runstate_is_running(). I'm assuming this was a mistake when
adding the parentheses around the expression.

Move the runstate_is_running() check to apply only to the
QLIST_EMPTY(dup_fds) side of the expression and ignore it when
mon_fdset_fd->removed has been explicitly set.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:15 -03:00
Fabiano Rosas
a8adda79e7 tests/qtest: Add a test for migration with direct-io and multifd
The test is only allowed to run on systems and filesystems that
support O_DIRECT.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:15 -03:00
Fabiano Rosas
cf07faa04a migration: Add direct-io parameter
Add the direct-io migration parameter that tells the migration code to
use O_DIRECT when opening the migration stream file whenever possible.

This is currently only used for the secondary channels of fixed-ram
migration, which can guarantee that writes are page aligned.

However the parameter could be made to affect other types of
file-based migrations in the future.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:15 -03:00
Fabiano Rosas
b0839f1600 tests/qtest: Add a multifd + fixed-ram migration test
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:15 -03:00
Fabiano Rosas
db401f5302 migration/multifd: Support incoming fixed-ram stream format
For the incoming fixed-ram migration we need to read the ramblock
headers, get the pages bitmap and send the host address of each
non-zero page to the multifd channel thread for writing.

To read from the migration file we need a preadv function that can
read into the iovs in segments of contiguous pages because (as in the
writing case) the file offset applies to the entire iovec.

Usage on HMP is:

(qemu) migrate_set_capability multifd on
(qemu) migrate_set_capability fixed-ram on
(qemu) migrate_set_parameter max-bandwidth 0
(qemu) migrate_set_parameter multifd-channels 8
(qemu) migrate_incoming file:migfile
(qemu) info status
(qemu) c

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:15 -03:00
Fabiano Rosas
ed95cd0446 migration/multifd: Support outgoing fixed-ram stream format
The new fixed-ram stream format uses a file transport and puts ram
pages in the migration file at their respective offsets. This can be
done in parallel by using the pwritev system call, which takes iovecs
and an offset.

Add support for enabling the new format along with multifd to make use
of the threading and page handling already in place.

This requires multifd to stop sending headers and leave the stream
format to the fixed-ram code. When it comes time to write the data, we
need to call a version of qio_channel_write that can take an offset.

Usage on HMP is:

(qemu) stop
(qemu) migrate_set_capability multifd on
(qemu) migrate_set_capability fixed-ram on
(qemu) migrate_set_parameter max-bandwidth 0
(qemu) migrate_set_parameter multifd-channels 8
(qemu) migrate file:migfile

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:15 -03:00
Fabiano Rosas
d5bce67e17 migration/ram: Ignore multifd flush when doing fixed-ram migration
Some functionalities of multifd are incompatible with the 'fixed-ram'
migration format.

The MULTIFD_FLUSH flag in particular is not used because in fixed-ram
there is no synchronicity between migration source and destination, so
there is no need for a sync packet. In fact, fixed-ram disables
packets in multifd as a whole.

Make sure RAM_SAVE_FLAG_MULTIFD_FLUSH is never emitted when fixed-ram
is enabled.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:14 -03:00
Fabiano Rosas
b00e0415ed migration/ram: Add a wrapper for fixed-ram shadow bitmap
We'll need to set the shadow_bmap bits from outside ram.c soon and
TARGET_PAGE_BITS is poisoned, so add a wrapper to it.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:14 -03:00
Fabiano Rosas
6822813ae9 migration/multifd: Allow receiving pages without packets
Currently multifd does not need to have knowledge of pages on the
receiving side because all the information needed is within the
packets that come in the stream.

We're about to add support for fixed-ram migration, which cannot use
packets because it expects the ramblock section in the migration file
to contain only the guest pages data.

Add a data structure to transfer pages between the ram migration code
and the multifd receiving threads.

We don't want to reuse MultiFDPages_t for two reasons:

a) multifd threads don't really need to know about the data they're
   receiving.

b) the receiving side has to be stopped to load the pages, which means
   we can experiment with larger granularities than page size when
   transferring data.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:14 -03:00
Fabiano Rosas
23e7e3fc41 migration/multifd: Decouple recv method from pages
Next patch will abstract the type of data being received by the
channels, so do some cleanup now to remove references to pages and
dependency on 'normal_num'.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:14 -03:00
Fabiano Rosas
a071d2f34e multifd: Rename MultiFDSendParams::data to compress_data
Use a more specific name for the compression data so we can use the
generic for the multifd core code.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:14 -03:00
Fabiano Rosas
ab194ba308 io: Add a pwritev/preadv version that takes a discontiguous iovec
To support the upcoming fixed-ram migration with multifd, we need to
be able to accept an iovec array with non-contiguous data.

Add a pwritev and preadv version that splits the array into contiguous
segments before writing. With that we can have the ram code continue
to add pages in any order and the multifd code continue to send large
arrays for reading and writing.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
Since iovs can be non-contiguous, we'd need a separate array on the
side to carry an extra file offset for each of them, so I'm relying on
the fact that iovs are all within the same host page and passing in an
encoded offset that takes the host page into account.
2023-11-14 13:30:14 -03:00
Fabiano Rosas
9954a41782 migration/multifd: Add incoming QIOChannelFile support
On the receiving side we don't need to differentiate between main
channel and threads, so whichever channel is defined first gets to be
the main one. And since there are no packets, use the atomic channel
count to index into the params array.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:13 -03:00
Fabiano Rosas
76798336e4 migration/multifd: Add outgoing QIOChannelFile support
Allow multifd to open file-backed channels. This will be used when
enabling the fixed-ram migration stream format which expects a
seekable transport.

The QIOChannel read and write methods will use the preadv/pwritev
versions which don't update the file offset at each call so we can
reuse the fd without re-opening for every channel.

Note that this is just setup code and multifd cannot yet make use of
the file channels.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:13 -03:00
Fabiano Rosas
ffd3e56398 migration/multifd: Allow multifd without packets
For the upcoming support of the new 'fixed-ram' migration stream
format, we cannot use multifd packets because each write into the
ramblock section in the migration file is expected to contain only the
guest pages. They are written at their respective offsets relative to
the ramblock section header.

There is no space for the packet information and the expected gains
from the new approach come partly from being able to write the pages
sequentially without extraneous data in between.

The new format also doesn't need the packets and all necessary
information can be taken from the standard migration headers with some
(future) changes to multifd code.

Use the presence of the fixed-ram capability to decide whether to send
packets. For now this has no effect as fixed-ram cannot yet be enabled
with multifd.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 13:30:13 -03:00
Nikolay Borisov
1c036fa56a tests/qtest: migration-test: Add tests for fixed-ram file-based migration
Add basic tests for 'fixed-ram' migration.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 09:19:24 -03:00
Nikolay Borisov
3eb7f2ab75 migration/ram: Add support for 'fixed-ram' migration restore
Add the necessary code to parse the format changes for the 'fixed-ram'
capability.

One of the more notable changes in behavior is that in the 'fixed-ram'
case ram pages are restored in one go rather than constantly looping
through the migration stream.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 09:19:24 -03:00
Nikolay Borisov
43ad5422c9 migration/ram: Add support for 'fixed-ram' outgoing migration
Implement the outgoing migration side for the 'fixed-ram' capability.

A bitmap is introduced to track which pages have been written in the
migration file. Pages are written at a fixed location for every
ramblock. Zero pages are ignored as they'd be zero in the destination
migration as well.

The migration stream is altered to put the dirty pages for a ramblock
after its header instead of having a sequential stream of pages that
follow the ramblock headers. Since all pages have a fixed location,
RAM_SAVE_FLAG_EOS is no longer generated on every migration iteration.

Without fixed-ram (current):        With fixed-ram (new):

 ---------------------               --------------------------------
 | ramblock 1 header |               | ramblock 1 header            |
 ---------------------               --------------------------------
 | ramblock 2 header |               | ramblock 1 fixed-ram header  |
 ---------------------               --------------------------------
 | ...               |               | padding to next 1MB boundary |
 ---------------------               | ...                          |
 | ramblock n header |               --------------------------------
 ---------------------               | ramblock 1 pages             |
 | RAM_SAVE_FLAG_EOS |               | ...                          |
 ---------------------               --------------------------------
 | stream of pages   |               | ramblock 2 header            |
 | (iter 1)          |               --------------------------------
 | ...               |               | ramblock 2 fixed-ram header  |
 ---------------------               --------------------------------
 | RAM_SAVE_FLAG_EOS |               | padding to next 1MB boundary |
 ---------------------               | ...                          |
 | stream of pages   |               --------------------------------
 | (iter 2)          |               | ramblock 2 pages             |
 | ...               |               | ...                          |
 ---------------------               --------------------------------
 | ...               |               | ...                          |
 ---------------------               --------------------------------
                                     | RAM_SAVE_FLAG_EOS            |
                                     --------------------------------
                                     | ...                          |
                                     --------------------------------

where:
 - ramblock header: the generic information for a ramblock, such as
   idstr, used_len, etc.

 - ramblock fixed-ram header: the new information added by this
   feature: bitmap of pages written, bitmap size and offset of pages
   in the migration file.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 09:19:23 -03:00
Fabiano Rosas
909f4a40f6 migration: Add fixed-ram URI compatibility check
The fixed-ram migration format needs a channel that supports seeking
to be able to write each page to an arbitrary offset in the migration
stream.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2023-11-14 09:19:23 -03:00
Fabiano Rosas
07acf019b1 migration/ram: Introduce 'fixed-ram' migration capability
Add a new migration capability 'fixed-ram'.

The core of the feature is to ensure that each RAM page has a specific
offset in the resulting migration stream. The reasons why we'd want
such behavior are:

 - The resulting file will have a bounded size, since pages which are
   dirtied multiple times will always go to a fixed location in the
   file, rather than constantly being added to a sequential
   stream. This eliminates cases where a VM with, say, 1G of RAM can
   result in a migration file that's 10s of GBs, provided that the
   workload constantly redirties memory.

 - It paves the way to implement O_DIRECT-enabled save/restore of the
   migration stream as the pages are ensured to be written at aligned
   offsets.

 - It allows the usage of multifd so we can write RAM pages to the
   migration file in parallel.

For now, enabling the capability has no effect. The next couple of
patches implement the core functionality.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 09:19:23 -03:00
Fabiano Rosas
b018f49a5b migration/ram: Initialize bitmap with used_length
We don't allow changing the size of the ramblock during migration. Use
used_length instead of max_length when initializing the bitmap.

Suggested-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 09:19:23 -03:00
Nikolay Borisov
f9c6197b58 migration/qemu-file: add utility methods for working with seekable channels
Add utility methods that will be needed when implementing 'fixed-ram'
migration capability.

qemu_file_is_seekable
qemu_put_buffer_at
qemu_get_buffer_at
qemu_set_offset
qemu_get_offset

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2023-11-14 09:19:23 -03:00
Nikolay Borisov
1b1da54c69 io: implement io_pwritev/preadv for QIOChannelFile
The upcoming 'fixed-ram' feature will require qemu to write data to
(and restore from) specific offsets of the migration file.

Add a minimal implementation of pwritev/preadv and expose them via the
io_pwritev and io_preadv interfaces.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2023-11-14 09:19:23 -03:00
Nikolay Borisov
d636220e69 io: Add generic pwritev/preadv interface
Introduce basic pwritev/preadv support in the generic channel layer.
Specific implementation will follow for the file channel as this is
required in order to support migration streams with fixed location of
each ram page.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 09:19:23 -03:00
Nikolay Borisov
5e25355c4b io: add and implement QIO_CHANNEL_FEATURE_SEEKABLE for channel file
Add a generic QIOChannel feature SEEKABLE which would be used by the
qemu_file* APIs. For the time being this will only be implemented for
file channels.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2023-11-14 09:19:22 -03:00
Fabiano Rosas
f978a45734 tests/qtest: Re-enable multifd cancel test
We've found the source of flakiness in this test, so re-enable it.

Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 09:19:22 -03:00
Fabiano Rosas
d969e2d0ff migration: Report error in incoming migration
We're not currently reporting the errors set with migrate_set_error()
when incoming migration fails.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-14 09:19:22 -03:00
Fabiano Rosas
87dcefce00 migration/multifd: Allow QIOTask error reporting without an object
The only way for the channel backend to report an error to the multifd
core during creation is by setting the QIOTask error. We must allow
the channel backend to set the error even if the QIOChannel has failed
to be created, which means the QIOTask source object would be NULL.

At multifd_new_send_channel_async(), move the QOM casting of the
channel to after we have checked for the QIOTask error.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
context: When doing multifd + file, it's possible that we fail to open
the file. I'll use the empty QIOTask to report the error back to
multifd.
2023-11-14 09:19:22 -03:00
Fabiano Rosas
2b12bbcfed migration/multifd: Stop setting p->ioc before connecting
This is being shadowed by the assignments at
multifd_channel_connect() and multifd_tls_channel_connect().

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-13 10:59:23 -03:00
Fabiano Rosas
fd92544b1a migration/multifd: Fix multifd_pages_init argument
The 'size' argument is the number of pages that fit in a multifd
packet. Change it to uint32_t and rename.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-13 10:54:11 -03:00
Fabiano Rosas
19b0f579aa migration/multifd: Remove QEMUFile from where it is not needed
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-13 10:54:11 -03:00
Fabiano Rosas
ae1ea5b13e migration/multifd: Remove MultiFDPages_t::packet_num
This was introduced by commit 34c55a94b1 ("migration: Create multipage
support") and never used.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-13 10:54:11 -03:00
Fabiano Rosas
f6a85fa7a4 tests/qtest/migration: Print migration incoming errors
We're currently just asserting when incoming migration fails. Let's
print the error message from QMP as well.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-11-13 10:54:10 -03:00
3737 changed files with 112205 additions and 165777 deletions

@@ -24,10 +24,6 @@ variables:
   # Each script line from will be in a collapsible section in the job output
   # and show the duration of each line.
   FF_SCRIPT_SECTIONS: 1
-  # The project has a fairly fat GIT repo so we try and avoid bringing in things
-  # we don't need. The --filter options avoid blobs and tree references we aren't going to use
-  # and we also avoid fetching tags.
-  GIT_FETCH_EXTRA_FLAGS: --filter=blob:none --filter=tree:0 --no-tags --prune --quiet

 interruptible: true
@@ -45,10 +41,6 @@
     - if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_TAG'
       when: never

-    # Scheduled runs on mainline don't get pipelines except for the special Coverity job
-    - if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_PIPELINE_SOURCE == "schedule"'
-      when: never
-
     # Cirrus jobs can't run unless the creds / target repo are set
     - if: '$QEMU_JOB_CIRRUS && ($CIRRUS_GITHUB_REPO == null || $CIRRUS_API_TOKEN == null)'
       when: never

@@ -9,13 +9,11 @@
     when: always
   before_script:
     - JOBS=$(expr $(nproc) + 1)
-    - cat /packages.txt
   script:
     - export CCACHE_BASEDIR="$(pwd)"
     - export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
     - export CCACHE_MAXSIZE="500M"
     - export PATH="$CCACHE_WRAPPERSDIR:$PATH"
-    - du -sh .git
     - mkdir build
     - cd build
     - ccache --zero-stats
@@ -27,10 +25,10 @@
       then
         pyvenv/bin/meson configure . -Dbackend_max_links="$LD_JOBS" ;
       fi || exit 1;
-    - $MAKE -j"$JOBS"
+    - make -j"$JOBS"
     - if test -n "$MAKE_CHECK_ARGS";
       then
-        $MAKE -j"$JOBS" $MAKE_CHECK_ARGS ;
+        make -j"$JOBS" $MAKE_CHECK_ARGS ;
       fi
     - ccache --show-stats
@@ -46,8 +44,10 @@
exclude: exclude:
- build/**/*.p - build/**/*.p
- build/**/*.a.p - build/**/*.a.p
- build/**/*.fa.p
- build/**/*.c.o - build/**/*.c.o
- build/**/*.c.o.d - build/**/*.c.o.d
- build/**/*.fa
.common_test_job_template: .common_test_job_template:
extends: .base_job_template extends: .base_job_template
@@ -59,7 +59,7 @@
     - cd build
     - find . -type f -exec touch {} +
     # Avoid recompiling by hiding ninja with NINJA=":"
-    - $MAKE NINJA=":" $MAKE_CHECK_ARGS
+    - make NINJA=":" $MAKE_CHECK_ARGS

 .native_test_job_template:
   extends: .common_test_job_template

@@ -61,7 +61,7 @@ avocado-system-ubuntu:
   variables:
     IMAGE: ubuntu2204
     MAKE_CHECK_ARGS: check-avocado
-    AVOCADO_TAGS: arch:alpha arch:microblazeel arch:mips64el
+    AVOCADO_TAGS: arch:alpha arch:microblaze arch:mips64el

 build-system-debian:
   extends:
@@ -70,7 +70,7 @@ build-system-debian:
   needs:
     job: amd64-debian-container
   variables:
-    IMAGE: debian
+    IMAGE: debian-amd64
     CONFIGURE_ARGS: --with-coroutine=sigaltstack
     TARGETS: arm-softmmu i386-softmmu riscv64-softmmu sh4eb-softmmu
       sparc-softmmu xtensa-softmmu
@@ -82,7 +82,7 @@ check-system-debian:
     - job: build-system-debian
       artifacts: true
   variables:
-    IMAGE: debian
+    IMAGE: debian-amd64
     MAKE_CHECK_ARGS: check

 avocado-system-debian:
@@ -91,7 +91,7 @@ avocado-system-debian:
     - job: build-system-debian
       artifacts: true
   variables:
-    IMAGE: debian
+    IMAGE: debian-amd64
     MAKE_CHECK_ARGS: check-avocado
     AVOCADO_TAGS: arch:arm arch:i386 arch:riscv64 arch:sh4 arch:sparc arch:xtensa
@@ -101,7 +101,7 @@ crash-test-debian:
     - job: build-system-debian
       artifacts: true
   variables:
-    IMAGE: debian
+    IMAGE: debian-amd64
   script:
     - cd build
     - make NINJA=":" check-venv
@@ -158,89 +158,22 @@ build-system-centos:
     - .native_build_job_template
     - .native_build_artifact_template
   needs:
-    job: amd64-centos9-container
+    job: amd64-centos8-container
   variables:
-    IMAGE: centos9
+    IMAGE: centos8
     CONFIGURE_ARGS: --disable-nettle --enable-gcrypt --enable-vfio-user-server
       --enable-modules --enable-trace-backends=dtrace --enable-docs
     TARGETS: ppc64-softmmu or1k-softmmu s390x-softmmu
-      x86_64-softmmu rx-softmmu sh4-softmmu
+      x86_64-softmmu rx-softmmu sh4-softmmu nios2-softmmu
     MAKE_CHECK_ARGS: check-build

-# Previous QEMU release. Used for cross-version migration tests.
-build-previous-qemu:
-  extends: .native_build_job_template
-  artifacts:
-    when: on_success
-    expire_in: 2 days
-    paths:
-      - build-previous
-    exclude:
-      - build-previous/**/*.p
-      - build-previous/**/*.a.p
-      - build-previous/**/*.c.o
-      - build-previous/**/*.c.o.d
-  needs:
-    job: amd64-opensuse-leap-container
-  variables:
-    IMAGE: opensuse-leap
-    TARGETS: x86_64-softmmu aarch64-softmmu
-    # Override the default flags as we need more to grab the old version
-    GIT_FETCH_EXTRA_FLAGS: --prune --quiet
-  before_script:
-    - export QEMU_PREV_VERSION="$(sed 's/\([0-9.]*\)\.[0-9]*/v\1.0/' VERSION)"
-    - git remote add upstream https://gitlab.com/qemu-project/qemu
-    - git fetch upstream refs/tags/$QEMU_PREV_VERSION:refs/tags/$QEMU_PREV_VERSION
-    - git checkout $QEMU_PREV_VERSION
-  after_script:
-    - mv build build-previous
-
-.migration-compat-common:
-  extends: .common_test_job_template
-  needs:
-    - job: build-previous-qemu
-    - job: build-system-opensuse
-  # The old QEMU could have bugs unrelated to migration that are
-  # already fixed in the current development branch, so this test
-  # might fail.
-  allow_failure: true
-  variables:
-    IMAGE: opensuse-leap
-    MAKE_CHECK_ARGS: check-build
-  script:
-    # Use the migration-tests from the older QEMU tree. This avoids
-    # testing an old QEMU against new features/tests that it is not
-    # compatible with.
-    - cd build-previous
-    # old to new
-    - QTEST_QEMU_BINARY_SRC=./qemu-system-${TARGET}
-      QTEST_QEMU_BINARY=../build/qemu-system-${TARGET} ./tests/qtest/migration-test
-    # new to old
-    - QTEST_QEMU_BINARY_DST=./qemu-system-${TARGET}
-      QTEST_QEMU_BINARY=../build/qemu-system-${TARGET} ./tests/qtest/migration-test
-
-# This job needs to be disabled until we can have an aarch64 CPU model that
-# will both (1) support both KVM and TCG, and (2) provide a stable ABI.
-# Currently only "-cpu max" can provide (1), however it doesn't guarantee
-# (2). Mark this test skipped until later.
-migration-compat-aarch64:
-  extends: .migration-compat-common
-  variables:
-    TARGET: aarch64
-    QEMU_JOB_SKIPPED: 1
-
-migration-compat-x86_64:
-  extends: .migration-compat-common
-  variables:
-    TARGET: x86_64
-
 check-system-centos:
   extends: .native_test_job_template
   needs:
     - job: build-system-centos
       artifacts: true
   variables:
-    IMAGE: centos9
+    IMAGE: centos8
     MAKE_CHECK_ARGS: check

 avocado-system-centos:
@@ -249,10 +182,10 @@ avocado-system-centos:
     - job: build-system-centos
       artifacts: true
   variables:
-    IMAGE: centos9
+    IMAGE: centos8
     MAKE_CHECK_ARGS: check-avocado
-    AVOCADO_TAGS: arch:ppc64 arch:or1k arch:s390x arch:x86_64 arch:rx
-      arch:sh4
+    AVOCADO_TAGS: arch:ppc64 arch:or1k arch:390x arch:x86_64 arch:rx
+      arch:sh4 arch:nios2

 build-system-opensuse:
   extends:
@@ -284,36 +217,6 @@ avocado-system-opensuse:
     MAKE_CHECK_ARGS: check-avocado
     AVOCADO_TAGS: arch:s390x arch:x86_64 arch:aarch64
-#
-# Flaky tests. We don't run these by default and they are allow fail
-# but often the CI system is the only way to trigger the failures.
-#
-build-system-flaky:
-  extends:
-    - .native_build_job_template
-    - .native_build_artifact_template
-  needs:
-    job: amd64-debian-container
-  variables:
-    IMAGE: debian
-    QEMU_JOB_OPTIONAL: 1
-    TARGETS: aarch64-softmmu arm-softmmu mips64el-softmmu
-      ppc64-softmmu rx-softmmu s390x-softmmu sh4-softmmu x86_64-softmmu
-    MAKE_CHECK_ARGS: check-build
-
-avocado-system-flaky:
-  extends: .avocado_test_job_template
-  needs:
-    - job: build-system-flaky
-      artifacts: true
-  allow_failure: true
-  variables:
-    IMAGE: debian
-    MAKE_CHECK_ARGS: check-avocado
-    QEMU_JOB_OPTIONAL: 1
-    QEMU_TEST_FLAKY_TESTS: 1
-    AVOCADO_TAGS: flaky
 # This jobs explicitly disable TCG (--disable-tcg), KVM is detected by
 # the configure script. The container doesn't contain Xen headers so
@@ -325,9 +228,9 @@ avocado-system-flaky:
 build-tcg-disabled:
   extends: .native_build_job_template
   needs:
-    job: amd64-centos9-container
+    job: amd64-centos8-container
   variables:
-    IMAGE: centos9
+    IMAGE: centos8
   script:
     - mkdir build
     - cd build
@@ -340,7 +243,7 @@ build-tcg-disabled:
     - cd tests/qemu-iotests/
     - ./check -raw 001 002 003 004 005 008 009 010 011 012 021 025 032 033 048
             052 063 077 086 101 104 106 113 148 150 151 152 157 159 160 163
-            170 171 184 192 194 208 221 226 227 236 253 277 image-fleecing
+            170 171 183 184 192 194 208 221 226 227 236 253 277 image-fleecing
     - ./check -qcow2 028 051 056 057 058 065 068 082 085 091 095 096 102 122
             124 132 139 142 144 145 151 152 155 157 165 194 196 200 202
             208 209 216 218 227 234 246 247 248 250 254 255 257 258
@@ -430,7 +333,6 @@ clang-system:
     IMAGE: fedora
     CONFIGURE_ARGS: --cc=clang --cxx=clang++
       --extra-cflags=-fsanitize=undefined --extra-cflags=-fno-sanitize-recover=undefined
-      --extra-cflags=-fno-sanitize=function
     TARGETS: alpha-softmmu arm-softmmu m68k-softmmu mips64-softmmu s390x-softmmu
     MAKE_CHECK_ARGS: check-qtest check-tcg
@@ -444,7 +346,6 @@ clang-user:
     CONFIGURE_ARGS: --cc=clang --cxx=clang++ --disable-system
       --target-list-exclude=alpha-linux-user,microblazeel-linux-user,aarch64_be-linux-user,i386-linux-user,m68k-linux-user,mipsn32el-linux-user,xtensaeb-linux-user
       --extra-cflags=-fsanitize=undefined --extra-cflags=-fno-sanitize-recover=undefined
-      --extra-cflags=-fno-sanitize=function
     MAKE_CHECK_ARGS: check-unit check-tcg

 # Set LD_JOBS=1 because this requires LTO and ld consumes a large amount of memory.
@@ -575,9 +476,6 @@ tsan-build:
     CONFIGURE_ARGS: --enable-tsan --cc=clang --cxx=clang++
       --enable-trace-backends=ust --disable-slirp
     TARGETS: x86_64-softmmu ppc64-softmmu riscv64-softmmu x86_64-linux-user
-    # Remove when we switch to a distro with clang >= 18
-    # https://github.com/google/sanitizers/issues/1716
-    MAKE: setarch -R make

 # gcov is a GCC features
 gcov:
@@ -636,7 +534,7 @@ build-tci:
     - TARGETS="aarch64 arm hppa m68k microblaze ppc64 s390x x86_64"
     - mkdir build
     - cd build
-    - ../configure --enable-tcg-interpreter --disable-kvm --disable-docs --disable-gtk --disable-vnc
+    - ../configure --enable-tcg-interpreter --disable-docs --disable-gtk --disable-vnc
        --target-list="$(for tg in $TARGETS; do echo -n ${tg}'-softmmu '; done)"
        || { cat config.log meson-logs/meson-log.txt && exit 1; }
     - make -j"$JOBS"
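The `configure || { cat config.log meson-logs/meson-log.txt && exit 1; }` idiom used by this and several later jobs exists so a failing configure still shows its logs in the CI output before the job dies. The same idiom in isolation (the `run_or_dump` function name is invented for this sketch):

```shell
#!/bin/sh
# run_or_dump: run a command with its output captured; on failure print
# the captured log and propagate a non-zero status -- the same idiom as
# "configure || { cat config.log ...; exit 1; }" in the jobs above.
run_or_dump() {
    log=$(mktemp)
    "$@" >"$log" 2>&1 || { cat "$log"; rm -f "$log"; return 1; }
    rm -f "$log"
}

run_or_dump true                                              # succeeds quietly
run_or_dump sh -c 'echo oops; false' || echo "failed as expected"
```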
@@ -651,15 +549,12 @@ build-tci:
     - make check-tcg

 # Check our reduced build configurations
-# requires libfdt: aarch64, arm, loongarch64, microblaze, microblazeel,
-#                  or1k, ppc64, riscv32, riscv64, rx
-# fails qtest without boards: i386, x86_64
 build-without-defaults:
   extends: .native_build_job_template
   needs:
-    job: amd64-centos9-container
+    job: amd64-centos8-container
   variables:
-    IMAGE: centos9
+    IMAGE: centos8
     CONFIGURE_ARGS:
       --without-default-devices
       --without-default-features
@@ -667,11 +562,8 @@ build-without-defaults:
       --disable-pie
       --disable-qom-cast-debug
       --disable-strip
-    TARGETS: alpha-softmmu avr-softmmu cris-softmmu hppa-softmmu m68k-softmmu
-      mips-softmmu mips64-softmmu mipsel-softmmu mips64el-softmmu
-      ppc-softmmu s390x-softmmu sh4-softmmu sh4eb-softmmu sparc-softmmu
-      sparc64-softmmu tricore-softmmu xtensa-softmmu xtensaeb-softmmu
-      hexagon-linux-user i386-linux-user s390x-linux-user
+    TARGETS: avr-softmmu mips64-softmmu s390x-softmmu sh4-softmmu
+      sparc64-softmmu hexagon-linux-user i386-linux-user s390x-linux-user
     MAKE_CHECK_ARGS: check

 build-libvhost-user:
@@ -697,7 +589,7 @@ build-tools-and-docs-debian:
     # when running on 'master' we use pre-existing container
     optional: true
   variables:
-    IMAGE: debian
+    IMAGE: debian-amd64
     MAKE_CHECK_ARGS: check-unit ctags TAGS cscope
     CONFIGURE_ARGS: --disable-system --disable-user --enable-docs --enable-tools
     QEMU_JOB_PUBLISH: 1
@@ -717,7 +609,7 @@ build-tools-and-docs-debian:
 # of what topic branch they're currently using
 pages:
   extends: .base_job_template
-  image: $CI_REGISTRY_IMAGE/qemu/debian:$QEMU_CI_CONTAINER_TAG
+  image: $CI_REGISTRY_IMAGE/qemu/debian-amd64:$QEMU_CI_CONTAINER_TAG
   stage: test
   needs:
     - job: build-tools-and-docs-debian
@@ -725,10 +617,7 @@ pages:
     - mkdir -p public
     # HTML-ised source tree
     - make gtags
-    # We unset variables to work around a bug in some htags versions
-    # which causes it to fail when the environment is large
-    - CI_COMMIT_MESSAGE= CI_COMMIT_TAG_MESSAGE= htags
-      -anT --tree-view=filetree -m qemu_init
+    - htags -anT --tree-view=filetree -m qemu_init
       -t "Welcome to the QEMU sourcecode"
     - mv HTML public/src
     # Project documentation
@@ -740,40 +629,3 @@ pages:
     - public
   variables:
     QEMU_JOB_PUBLISH: 1
-
-coverity:
-  image: $CI_REGISTRY_IMAGE/qemu/fedora:$QEMU_CI_CONTAINER_TAG
-  stage: build
-  allow_failure: true
-  timeout: 3h
-  needs:
-    - job: amd64-fedora-container
-      optional: true
-  before_script:
-    - dnf install -y curl wget
-  script:
-    # would be nice to cancel the job if over quota (https://gitlab.com/gitlab-org/gitlab/-/issues/256089)
-    # for example:
-    # curl --request POST --header "PRIVATE-TOKEN: $CI_JOB_TOKEN" "${CI_SERVER_URL}/api/v4/projects/${CI_PROJECT_ID}/jobs/${CI_JOB_ID}/cancel
-    - 'scripts/coverity-scan/run-coverity-scan --check-upload-only || { exitcode=$?; if test $exitcode = 1; then
-          exit 0;
-        else
-          exit $exitcode;
-        fi; };
-      scripts/coverity-scan/run-coverity-scan --update-tools-only > update-tools.log 2>&1 || { cat update-tools.log; exit 1; };
-      scripts/coverity-scan/run-coverity-scan --no-update-tools'
-  rules:
-    - if: '$COVERITY_TOKEN == null'
-      when: never
-    - if: '$COVERITY_EMAIL == null'
-      when: never
-    # Never included on upstream pipelines, except for schedules
-    - if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_PIPELINE_SOURCE == "schedule"'
-      when: on_success
-    - if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM'
-      when: never
-    # Forks don't get any pipeline unless QEMU_CI=1 or QEMU_CI=2 is set
-    - if: '$QEMU_CI != "1" && $QEMU_CI != "2"'
-      when: never
-    # Always manual on forks even if $QEMU_CI == "2"
-    - when: manual
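The coverity job's `--check-upload-only` handling treats exit status 1 ("nothing to upload") as success while propagating any other non-zero status as a real failure. The same exit-code convention in isolation (`check_quota` and `handle` are stand-ins invented for this sketch, not part of the coverity scripts):

```shell
#!/bin/sh
# Sketch of the exit-status convention used by the coverity job above:
# status 1 means "skip, not an error"; any other non-zero status is a
# real failure. check_quota is a stand-in for the real scan script.
check_quota() { return "$1"; }

handle() {
    check_quota "$1" || {
        exitcode=$?
        if test "$exitcode" = 1; then
            echo "skip"; return 0        # quota exceeded: succeed quietly
        else
            echo "fail:$exitcode"; return "$exitcode"   # genuine error
        fi
    }
    echo "proceed"                       # quota ok: run the real scan
}

handle 0   # proceed
handle 1   # skip
```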


@@ -13,7 +13,7 @@
 .cirrus_build_job:
   extends: .base_job_template
   stage: build
-  image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:latest
+  image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:master
   needs: []
   # 20 mins larger than "timeout_in" in cirrus/build.yml
   # as there's often a 5-10 minute delay before Cirrus CI
@@ -52,42 +52,61 @@ x64-freebsd-13-build:
     NAME: freebsd-13
     CIRRUS_VM_INSTANCE_TYPE: freebsd_instance
     CIRRUS_VM_IMAGE_SELECTOR: image_family
-    CIRRUS_VM_IMAGE_NAME: freebsd-13-3
+    CIRRUS_VM_IMAGE_NAME: freebsd-13-2
     CIRRUS_VM_CPUS: 8
     CIRRUS_VM_RAM: 8G
     UPDATE_COMMAND: pkg update; pkg upgrade -y
     INSTALL_COMMAND: pkg install -y
+    CONFIGURE_ARGS: --target-list-exclude=arm-softmmu,i386-softmmu,microblaze-softmmu,mips64el-softmmu,mipsel-softmmu,mips-softmmu,ppc-softmmu,sh4eb-softmmu,xtensa-softmmu
     TEST_TARGETS: check

-aarch64-macos-13-base-build:
+aarch64-macos-12-base-build:
   extends: .cirrus_build_job
   variables:
-    NAME: macos-13
+    NAME: macos-12
     CIRRUS_VM_INSTANCE_TYPE: macos_instance
     CIRRUS_VM_IMAGE_SELECTOR: image
-    CIRRUS_VM_IMAGE_NAME: ghcr.io/cirruslabs/macos-ventura-base:latest
+    CIRRUS_VM_IMAGE_NAME: ghcr.io/cirruslabs/macos-monterey-base:latest
     CIRRUS_VM_CPUS: 12
     CIRRUS_VM_RAM: 24G
     UPDATE_COMMAND: brew update
     INSTALL_COMMAND: brew install
     PATH_EXTRA: /opt/homebrew/ccache/libexec:/opt/homebrew/gettext/bin
     PKG_CONFIG_PATH: /opt/homebrew/curl/lib/pkgconfig:/opt/homebrew/ncurses/lib/pkgconfig:/opt/homebrew/readline/lib/pkgconfig
+    CONFIGURE_ARGS: --target-list-exclude=arm-softmmu,i386-softmmu,microblazeel-softmmu,mips64-softmmu,mipsel-softmmu,mips-softmmu,ppc-softmmu,sh4-softmmu,xtensaeb-softmmu
     TEST_TARGETS: check-unit check-block check-qapi-schema check-softfloat check-qtest-x86_64

-aarch64-macos-14-base-build:
-  extends: .cirrus_build_job
-  variables:
-    NAME: macos-14
-    CIRRUS_VM_INSTANCE_TYPE: macos_instance
-    CIRRUS_VM_IMAGE_SELECTOR: image
-    CIRRUS_VM_IMAGE_NAME: ghcr.io/cirruslabs/macos-sonoma-base:latest
-    CIRRUS_VM_CPUS: 12
-    CIRRUS_VM_RAM: 24G
-    UPDATE_COMMAND: brew update
-    INSTALL_COMMAND: brew install
-    PATH_EXTRA: /opt/homebrew/ccache/libexec:/opt/homebrew/gettext/bin
-    PKG_CONFIG_PATH: /opt/homebrew/curl/lib/pkgconfig:/opt/homebrew/ncurses/lib/pkgconfig:/opt/homebrew/readline/lib/pkgconfig
-    TEST_TARGETS: check-unit check-block check-qapi-schema check-softfloat check-qtest-x86_64
-    QEMU_JOB_OPTIONAL: 1
+# The following jobs run VM-based tests via KVM on a Linux-based Cirrus-CI job
+.cirrus_kvm_job:
+  extends: .base_job_template
+  stage: build
+  image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:master
+  needs: []
+  timeout: 80m
+  script:
+    - sed -e "s|[@]CI_REPOSITORY_URL@|$CI_REPOSITORY_URL|g"
+          -e "s|[@]CI_COMMIT_REF_NAME@|$CI_COMMIT_REF_NAME|g"
+          -e "s|[@]CI_COMMIT_SHA@|$CI_COMMIT_SHA|g"
+          -e "s|[@]NAME@|$NAME|g"
+          -e "s|[@]CONFIGURE_ARGS@|$CONFIGURE_ARGS|g"
+          -e "s|[@]TEST_TARGETS@|$TEST_TARGETS|g"
+          <.gitlab-ci.d/cirrus/kvm-build.yml >.gitlab-ci.d/cirrus/$NAME.yml
+    - cat .gitlab-ci.d/cirrus/$NAME.yml
+    - cirrus-run -v --show-build-log always .gitlab-ci.d/cirrus/$NAME.yml
+  variables:
+    QEMU_JOB_CIRRUS: 1
+    QEMU_JOB_OPTIONAL: 1
+
+x86-netbsd:
+  extends: .cirrus_kvm_job
+  variables:
+    NAME: netbsd
+    CONFIGURE_ARGS: --target-list=x86_64-softmmu,ppc64-softmmu,aarch64-softmmu
+    TEST_TARGETS: check
+
+x86-openbsd:
+  extends: .cirrus_kvm_job
+  variables:
+    NAME: openbsd
+    CONFIGURE_ARGS: --target-list=i386-softmmu,riscv64-softmmu,mips64-softmmu
+    TEST_TARGETS: check
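The `.cirrus_kvm_job` script renders a Cirrus-CI YAML template by substituting `@PLACEHOLDER@` markers with CI variables via `sed`. A self-contained sketch of that templating step (the file names and values here are illustrative):

```shell
#!/bin/sh
# Sketch of the @PLACEHOLDER@ templating used by the cirrus jobs above.
# Writing the sed pattern as [@]NAME@ keeps the expression itself from
# containing the literal marker it substitutes.
NAME=netbsd
CI_COMMIT_SHA=abc123

# A tiny stand-in for .gitlab-ci.d/cirrus/kvm-build.yml
printf 'name: @NAME@\nsha: "@CI_COMMIT_SHA@"\n' > template.yml

sed -e "s|[@]NAME@|$NAME|g" \
    -e "s|[@]CI_COMMIT_SHA@|$CI_COMMIT_SHA|g" \
    < template.yml > rendered.yml

cat rendered.yml
```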


@@ -21,7 +21,7 @@ build_task:
   install_script:
     - @UPDATE_COMMAND@
     - @INSTALL_COMMAND@ @PKGS@
-    - if test -n "@PYPI_PKGS@" ; then PYLIB=$(@PYTHON@ -c 'import sysconfig; print(sysconfig.get_path("stdlib"))'); rm -f $PYLIB/EXTERNALLY-MANAGED; @PIP3@ install @PYPI_PKGS@ ; fi
+    - if test -n "@PYPI_PKGS@" ; then @PIP3@ install @PYPI_PKGS@ ; fi
   clone_script:
     - git clone --depth 100 "$CI_REPOSITORY_URL" .
     - git fetch origin "$CI_COMMIT_REF_NAME"


@@ -11,6 +11,6 @@ MAKE='/usr/local/bin/gmake'
 NINJA='/usr/local/bin/ninja'
 PACKAGING_COMMAND='pkg'
 PIP3='/usr/local/bin/pip-3.8'
-PKGS='alsa-lib bash bison bzip2 ca_root_nss capstone4 ccache cmocka ctags curl cyrus-sasl dbus diffutils dtc flex fusefs-libs3 gettext git glib gmake gnutls gsed gtk-vnc gtk3 json-c libepoxy libffi libgcrypt libjpeg-turbo libnfs libslirp libspice-server libssh libtasn1 llvm lzo2 meson mtools ncurses nettle ninja opencv pixman pkgconf png py311-numpy py311-pillow py311-pip py311-sphinx py311-sphinx_rtd_theme py311-tomli py311-yaml python3 rpm2cpio sdl2 sdl2_image snappy sndio socat spice-protocol tesseract usbredir virglrenderer vte3 xorriso zstd'
+PKGS='alsa-lib bash bison bzip2 ca_root_nss capstone4 ccache cmocka ctags curl cyrus-sasl dbus diffutils dtc flex fusefs-libs3 gettext git glib gmake gnutls gsed gtk3 json-c libepoxy libffi libgcrypt libjpeg-turbo libnfs libslirp libspice-server libssh libtasn1 llvm lzo2 meson mtools ncurses nettle ninja opencv pixman pkgconf png py39-numpy py39-pillow py39-pip py39-sphinx py39-sphinx_rtd_theme py39-tomli py39-yaml python3 rpm2cpio sdl2 sdl2_image snappy sndio socat spice-protocol tesseract usbredir virglrenderer vte3 xorriso zstd'
 PYPI_PKGS=''
 PYTHON='/usr/local/bin/python3'


@@ -0,0 +1,31 @@
+container:
+  image: fedora:35
+  cpu: 4
+  memory: 8Gb
+  kvm: true
+
+env:
+  CIRRUS_CLONE_DEPTH: 1
+  CI_REPOSITORY_URL: "@CI_REPOSITORY_URL@"
+  CI_COMMIT_REF_NAME: "@CI_COMMIT_REF_NAME@"
+  CI_COMMIT_SHA: "@CI_COMMIT_SHA@"
+
+@NAME@_task:
+  @NAME@_vm_cache:
+    folder: $HOME/.cache/qemu-vm
+  install_script:
+    - dnf update -y
+    - dnf install -y git make openssh-clients qemu-img qemu-system-x86 wget meson
+  clone_script:
+    - git clone --depth 100 "$CI_REPOSITORY_URL" .
+    - git fetch origin "$CI_COMMIT_REF_NAME"
+    - git reset --hard "$CI_COMMIT_SHA"
+  build_script:
+    - if [ -f $HOME/.cache/qemu-vm/images/@NAME@.img ]; then
+        make vm-build-@NAME@ J=$(getconf _NPROCESSORS_ONLN)
+             EXTRA_CONFIGURE_OPTS="@CONFIGURE_ARGS@"
+             BUILD_TARGET="@TEST_TARGETS@" ;
+      else
+        make vm-build-@NAME@ J=$(getconf _NPROCESSORS_ONLN) BUILD_TARGET=help
+             EXTRA_CONFIGURE_OPTS="--disable-system --disable-user --disable-tools" ;
+      fi
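The build_script above branches on whether a cached guest image already exists: the first run only bootstraps the image with a minimal configure, and later runs reuse the cached image for the full build. The branch condition in isolation (the `plan_build` function and paths are invented for this sketch):

```shell
#!/bin/sh
# plan_build: decide between a full build and a bootstrap-only run,
# mirroring the "if [ -f $HOME/.cache/qemu-vm/images/... ]" check above.
plan_build() {
    if [ -f "$1" ]; then
        echo "full-build"   # cached guest image exists: run the real targets
    else
        echo "bootstrap"    # no image yet: create it with a minimal configure
    fi
}

plan_build "$HOME/.cache/qemu-vm/images/netbsd.img"
```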


@@ -1,6 +1,6 @@
 # THIS FILE WAS AUTO-GENERATED
 #
-#  $ lcitool variables macos-13 qemu
+#  $ lcitool variables macos-12 qemu
 #
 # https://gitlab.com/libvirt/libvirt-ci
@@ -11,6 +11,6 @@ MAKE='/opt/homebrew/bin/gmake'
 NINJA='/opt/homebrew/bin/ninja'
 PACKAGING_COMMAND='brew'
 PIP3='/opt/homebrew/bin/pip3'
-PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 gtk-vnc jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy socat sparse spice-protocol swtpm tesseract usbredir vde vte3 xorriso zlib zstd'
+PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy socat sparse spice-protocol swtpm tesseract usbredir vde vte3 xorriso zlib zstd'
 PYPI_PKGS='PyYAML numpy pillow sphinx sphinx-rtd-theme tomli'
 PYTHON='/opt/homebrew/bin/python3'


@@ -1,16 +0,0 @@
-# THIS FILE WAS AUTO-GENERATED
-#
-#  $ lcitool variables macos-14 qemu
-#
-# https://gitlab.com/libvirt/libvirt-ci
-
-CCACHE='/opt/homebrew/bin/ccache'
-CPAN_PKGS=''
-CROSS_PKGS=''
-MAKE='/opt/homebrew/bin/gmake'
-NINJA='/opt/homebrew/bin/ninja'
-PACKAGING_COMMAND='brew'
-PIP3='/opt/homebrew/bin/pip3'
-PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 gtk-vnc jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy socat sparse spice-protocol swtpm tesseract usbredir vde vte3 xorriso zlib zstd'
-PYPI_PKGS='PyYAML numpy pillow sphinx sphinx-rtd-theme tomli'
-PYTHON='/opt/homebrew/bin/python3'


@@ -1,10 +1,10 @@
 include:
   - local: '/.gitlab-ci.d/container-template.yml'

-amd64-centos9-container:
+amd64-centos8-container:
   extends: .container_job_template
   variables:
-    NAME: centos9
+    NAME: centos8

 amd64-fedora-container:
   extends: .container_job_template


@@ -46,12 +46,6 @@ loongarch-debian-cross-container:
   variables:
     NAME: debian-loongarch-cross

-i686-debian-cross-container:
-  extends: .container_job_template
-  stage: containers
-  variables:
-    NAME: debian-i686-cross
-
 mips64el-debian-cross-container:
   extends: .container_job_template
   stage: containers
@@ -101,6 +95,16 @@ cris-fedora-cross-container:
   variables:
     NAME: fedora-cris-cross

+i386-fedora-cross-container:
+  extends: .container_job_template
+  variables:
+    NAME: fedora-i386-cross
+
+win32-fedora-cross-container:
+  extends: .container_job_template
+  variables:
+    NAME: fedora-win32-cross
+
 win64-fedora-cross-container:
   extends: .container_job_template
   variables:


@@ -11,7 +11,7 @@ amd64-debian-container:
   extends: .container_job_template
   stage: containers
   variables:
-    NAME: debian
+    NAME: debian-amd64

 amd64-ubuntu2204-container:
   extends: .container_job_template


@@ -8,8 +8,6 @@
     key: "$CI_JOB_NAME"
     when: always
   timeout: 80m
-  before_script:
-    - cat /packages.txt
   script:
     - export CCACHE_BASEDIR="$(pwd)"
     - export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
@@ -74,7 +72,7 @@
     - ../configure --enable-werror --disable-docs $QEMU_CONFIGURE_OPTS
         --disable-system --target-list-exclude="aarch64_be-linux-user
         alpha-linux-user cris-linux-user m68k-linux-user microblazeel-linux-user
-        or1k-linux-user ppc-linux-user sparc-linux-user
+        nios2-linux-user or1k-linux-user ppc-linux-user sparc-linux-user
         xtensa-linux-user $CROSS_SKIP_TARGETS"
     - make -j$(expr $(nproc) + 1) all check-build $MAKE_CHECK_ARGS


@@ -37,38 +37,27 @@ cross-arm64-kvm-only:
     IMAGE: debian-arm64-cross
     EXTRA_CONFIGURE_OPTS: --disable-tcg --without-default-features

-cross-i686-system:
-  extends:
-    - .cross_system_build_job
-    - .cross_test_artifacts
-  needs:
-    job: i686-debian-cross-container
-  variables:
-    IMAGE: debian-i686-cross
-    EXTRA_CONFIGURE_OPTS: --disable-kvm
-    MAKE_CHECK_ARGS: check-qtest
-
-cross-i686-user:
+cross-i386-user:
   extends:
     - .cross_user_build_job
     - .cross_test_artifacts
   needs:
-    job: i686-debian-cross-container
+    job: i386-fedora-cross-container
   variables:
-    IMAGE: debian-i686-cross
+    IMAGE: fedora-i386-cross
     MAKE_CHECK_ARGS: check

-cross-i686-tci:
+cross-i386-tci:
   extends:
     - .cross_accel_build_job
     - .cross_test_artifacts
   timeout: 60m
   needs:
-    job: i686-debian-cross-container
+    job: i386-fedora-cross-container
   variables:
-    IMAGE: debian-i686-cross
+    IMAGE: fedora-i386-cross
     ACCEL: tcg-interpreter
-    EXTRA_CONFIGURE_OPTS: --target-list=i386-softmmu,i386-linux-user,aarch64-softmmu,aarch64-linux-user,ppc-softmmu,ppc-linux-user --disable-plugins --disable-kvm
+    EXTRA_CONFIGURE_OPTS: --target-list=i386-softmmu,i386-linux-user,aarch64-softmmu,aarch64-linux-user,ppc-softmmu,ppc-linux-user --disable-plugins
     MAKE_CHECK_ARGS: check check-tcg

 cross-mipsel-system:
@@ -170,6 +159,20 @@ cross-mips64el-kvm-only:
     IMAGE: debian-mips64el-cross
     EXTRA_CONFIGURE_OPTS: --disable-tcg --target-list=mips64el-softmmu

+cross-win32-system:
+  extends: .cross_system_build_job
+  needs:
+    job: win32-fedora-cross-container
+  variables:
+    IMAGE: fedora-win32-cross
+    EXTRA_CONFIGURE_OPTS: --enable-fdt=internal --disable-plugins
+    CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu m68k-softmmu
+                        microblazeel-softmmu mips64el-softmmu nios2-softmmu
+  artifacts:
+    when: on_success
+    paths:
+      - build/qemu-setup*.exe
+
 cross-win64-system:
   extends: .cross_system_build_job
   needs:
@@ -178,7 +181,7 @@ cross-win64-system:
     IMAGE: fedora-win64-cross
     EXTRA_CONFIGURE_OPTS: --enable-fdt=internal --disable-plugins
     CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu
-                        m68k-softmmu microblazeel-softmmu
+                        m68k-softmmu microblazeel-softmmu nios2-softmmu
                         or1k-softmmu rx-softmmu sh4eb-softmmu sparc64-softmmu
                         tricore-softmmu xtensaeb-softmmu
   artifacts:


@@ -10,14 +10,13 @@
 # gitlab-runner. To avoid problems that gitlab-runner can cause while
 # reusing the GIT repository, let's enable the clone strategy, which
 # guarantees a fresh repository on each job run.
+variables:
+  GIT_STRATEGY: clone

 # All custom runners can extend this template to upload the testlog
 # data as an artifact and also feed the junit report
 .custom_runner_template:
   extends: .base_job_template
-  variables:
-    GIT_STRATEGY: clone
-    GIT_FETCH_EXTRA_FLAGS: --no-tags --prune --quiet
   artifacts:
     name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
     expire_in: 7 days
@@ -29,6 +28,7 @@
       junit: build/meson-logs/testlog.junit.xml

 include:
-  - local: '/.gitlab-ci.d/custom-runners/ubuntu-22.04-s390x.yml'
+  - local: '/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml'
   - local: '/.gitlab-ci.d/custom-runners/ubuntu-22.04-aarch64.yml'
   - local: '/.gitlab-ci.d/custom-runners/ubuntu-22.04-aarch32.yml'
+  - local: '/.gitlab-ci.d/custom-runners/centos-stream-8-x86_64.yml'


@@ -0,0 +1,24 @@
+# All centos-stream-8 jobs should run successfully in an environment
+# setup by the scripts/ci/setup/stream/8/build-environment.yml task
+# "Installation of extra packages to build QEMU"
+
+centos-stream-8-x86_64:
+  extends: .custom_runner_template
+  allow_failure: true
+  needs: []
+  stage: build
+  tags:
+    - centos_stream_8
+    - x86_64
+  rules:
+    - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
+    - if: "$CENTOS_STREAM_8_x86_64_RUNNER_AVAILABLE"
+  before_script:
+    - JOBS=$(expr $(nproc) + 1)
+  script:
+    - mkdir build
+    - cd build
+    - ../scripts/ci/org.centos/stream/8/x86_64/configure
+        || { cat config.log meson-logs/meson-log.txt; exit 1; }
+    - make -j"$JOBS"
+    - make NINJA=":" check check-avocado


@@ -1,32 +1,34 @@
-# All ubuntu-22.04 jobs should run successfully in an environment
-# setup by the scripts/ci/setup/ubuntu/build-environment.yml task
-# "Install basic packages to build QEMU on Ubuntu 22.04"
-ubuntu-22.04-s390x-all-linux:
+# All ubuntu-20.04 jobs should run successfully in an environment
+# setup by the scripts/ci/setup/build-environment.yml task
+# "Install basic packages to build QEMU on Ubuntu 20.04/20.04"
+ubuntu-20.04-s390x-all-linux-static:
   extends: .custom_runner_template
   needs: []
   stage: build
   tags:
-    - ubuntu_22.04
+    - ubuntu_20.04
     - s390x
   rules:
     - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
     - if: "$S390X_RUNNER_AVAILABLE"
   script:
+    # --disable-libssh is needed because of https://bugs.launchpad.net/qemu/+bug/1838763
+    # --disable-glusterfs is needed because there's no static version of those libs in distro supplied packages
     - mkdir build
     - cd build
-    - ../configure --enable-debug --disable-system --disable-tools --disable-docs
+    - ../configure --enable-debug --static --disable-system --disable-glusterfs --disable-libssh
         || { cat config.log meson-logs/meson-log.txt; exit 1; }
     - make --output-sync -j`nproc`
     - make --output-sync check-tcg
     - make --output-sync -j`nproc` check

-ubuntu-22.04-s390x-all-system:
+ubuntu-20.04-s390x-all:
   extends: .custom_runner_template
   needs: []
   stage: build
   tags:
-    - ubuntu_22.04
+    - ubuntu_20.04
     - s390x
   timeout: 75m
   rules:
@@ -35,17 +37,17 @@ ubuntu-22.04-s390x-all-system:
   script:
     - mkdir build
     - cd build
-    - ../configure --disable-user
+    - ../configure --disable-libssh
        || { cat config.log meson-logs/meson-log.txt; exit 1; }
     - make --output-sync -j`nproc`
     - make --output-sync -j`nproc` check

-ubuntu-22.04-s390x-alldbg:
+ubuntu-20.04-s390x-alldbg:
   extends: .custom_runner_template
   needs: []
   stage: build
   tags:
-    - ubuntu_22.04
+    - ubuntu_20.04
     - s390x
   rules:
     - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
@@ -57,18 +59,18 @@ ubuntu-22.04-s390x-alldbg:
   script:
     - mkdir build
     - cd build
-    - ../configure --enable-debug
+    - ../configure --enable-debug --disable-libssh
        || { cat config.log meson-logs/meson-log.txt; exit 1; }
     - make clean
     - make --output-sync -j`nproc`
     - make --output-sync -j`nproc` check

-ubuntu-22.04-s390x-clang:
+ubuntu-20.04-s390x-clang:
   extends: .custom_runner_template
   needs: []
   stage: build
   tags:
-    - ubuntu_22.04
+    - ubuntu_20.04
     - s390x
   rules:
     - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
@@ -80,16 +82,16 @@ ubuntu-22.04-s390x-clang:
   script:
     - mkdir build
     - cd build
-    - ../configure --cc=clang --cxx=clang++ --enable-sanitizers
+    - ../configure --disable-libssh --cc=clang --cxx=clang++ --enable-sanitizers
        || { cat config.log meson-logs/meson-log.txt; exit 1; }
     - make --output-sync -j`nproc`
     - make --output-sync -j`nproc` check

-ubuntu-22.04-s390x-tci:
+ubuntu-20.04-s390x-tci:
   needs: []
   stage: build
   tags:
-    - ubuntu_22.04
+    - ubuntu_20.04
     - s390x
   rules:
     - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
@@ -101,16 +103,16 @@ ubuntu-22.04-s390x-tci:
   script:
     - mkdir build
     - cd build
-    - ../configure --enable-tcg-interpreter
+    - ../configure --disable-libssh --enable-tcg-interpreter
        || { cat config.log meson-logs/meson-log.txt; exit 1; }
     - make --output-sync -j`nproc`

-ubuntu-22.04-s390x-notcg:
+ubuntu-20.04-s390x-notcg:
   extends: .custom_runner_template
   needs: []
   stage: build
   tags:
-    - ubuntu_22.04
+    - ubuntu_20.04
     - s390x
   rules:
     - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
@@ -122,7 +124,7 @@ ubuntu-22.04-s390x-notcg:
script: script:
- mkdir build - mkdir build
- cd build - cd build
- ../configure --disable-tcg - ../configure --disable-libssh --disable-tcg
|| { cat config.log meson-logs/meson-log.txt; exit 1; } || { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc` - make --output-sync -j`nproc`
- make --output-sync -j`nproc` check - make --output-sync -j`nproc` check

@@ -1,5 +1,5 @@
 # All ubuntu-22.04 jobs should run successfully in an environment
-# setup by the scripts/ci/setup/ubuntu/build-environment.yml task
+# setup by the scripts/ci/setup/qemu/build-environment.yml task
 # "Install basic packages to build QEMU on Ubuntu 22.04"
 ubuntu-22.04-aarch32-all:

@@ -1,5 +1,5 @@
 # All ubuntu-22.04 jobs should run successfully in an environment
-# setup by the scripts/ci/setup/ubuntu/build-environment.yml task
+# setup by the scripts/ci/setup/qemu/build-environment.yml task
 # "Install basic packages to build QEMU on Ubuntu 22.04"
 ubuntu-22.04-aarch64-all-linux-static:

@@ -24,10 +24,6 @@
   - if: '$QEMU_CI == "1" && $CI_PROJECT_NAMESPACE != "qemu-project" && $CI_COMMIT_MESSAGE =~ /opensbi/i'
     when: manual
-  # Scheduled runs on mainline don't get pipelines except for the special Coverity job
-  - if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_PIPELINE_SOURCE == "schedule"'
-    when: never
   # Run if any files affecting the build output are touched
   - changes:
     - .gitlab-ci.d/opensbi.yml

@@ -1,7 +1,9 @@
-msys2-64bit:
+.shared_msys2_builder:
   extends: .base_job_template
   tags:
-  - saas-windows-medium-amd64
+  - shared-windows
+  - windows
+  - windows-1809
   cache:
     key: "$CI_JOB_NAME"
     paths:
@@ -12,19 +14,9 @@ msys2-64bit:
   stage: build
   timeout: 100m
   variables:
-    # Select the "64 bit, gcc and MSVCRT" MSYS2 environment
-    MSYSTEM: MINGW64
     # This feature doesn't (currently) work with PowerShell, it stops
     # the echo'ing of commands being run and doesn't show any timing
     FF_SCRIPT_SECTIONS: 0
-    # do not remove "--without-default-devices"!
-    # commit 9f8e6cad65a6 ("gitlab-ci: Speed up the msys2-64bit job by using --without-default-devices"
-    # changed to compile QEMU with the --without-default-devices switch
-    # for this job, because otherwise the build could not complete within
-    # the project timeout.
-    CONFIGURE_ARGS: --target-list=sparc-softmmu --without-default-devices -Ddebug=false -Doptimization=0
-    # The Windows git is a bit older so override the default
-    GIT_FETCH_EXTRA_FLAGS: --no-tags --prune --quiet
   artifacts:
     name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
     expire_in: 7 days
@@ -80,35 +72,35 @@ msys2-64bit:
   - .\msys64\usr\bin\bash -lc "pacman -Sy --noconfirm --needed
       bison diffutils flex
       git grep make sed
-      mingw-w64-x86_64-binutils
-      mingw-w64-x86_64-capstone
-      mingw-w64-x86_64-ccache
-      mingw-w64-x86_64-curl
-      mingw-w64-x86_64-cyrus-sasl
-      mingw-w64-x86_64-dtc
-      mingw-w64-x86_64-gcc
-      mingw-w64-x86_64-glib2
-      mingw-w64-x86_64-gnutls
-      mingw-w64-x86_64-gtk3
-      mingw-w64-x86_64-libgcrypt
-      mingw-w64-x86_64-libjpeg-turbo
-      mingw-w64-x86_64-libnfs
-      mingw-w64-x86_64-libpng
-      mingw-w64-x86_64-libssh
-      mingw-w64-x86_64-libtasn1
-      mingw-w64-x86_64-libusb
-      mingw-w64-x86_64-lzo2
-      mingw-w64-x86_64-nettle
-      mingw-w64-x86_64-ninja
-      mingw-w64-x86_64-pixman
-      mingw-w64-x86_64-pkgconf
-      mingw-w64-x86_64-python
-      mingw-w64-x86_64-SDL2
-      mingw-w64-x86_64-SDL2_image
-      mingw-w64-x86_64-snappy
-      mingw-w64-x86_64-spice
-      mingw-w64-x86_64-usbredir
-      mingw-w64-x86_64-zstd"
+      $MINGW_TARGET-binutils
+      $MINGW_TARGET-capstone
+      $MINGW_TARGET-ccache
+      $MINGW_TARGET-curl
+      $MINGW_TARGET-cyrus-sasl
+      $MINGW_TARGET-dtc
+      $MINGW_TARGET-gcc
+      $MINGW_TARGET-glib2
+      $MINGW_TARGET-gnutls
+      $MINGW_TARGET-gtk3
+      $MINGW_TARGET-libgcrypt
+      $MINGW_TARGET-libjpeg-turbo
+      $MINGW_TARGET-libnfs
+      $MINGW_TARGET-libpng
+      $MINGW_TARGET-libssh
+      $MINGW_TARGET-libtasn1
+      $MINGW_TARGET-libusb
+      $MINGW_TARGET-lzo2
+      $MINGW_TARGET-nettle
+      $MINGW_TARGET-ninja
+      $MINGW_TARGET-pixman
+      $MINGW_TARGET-pkgconf
+      $MINGW_TARGET-python
+      $MINGW_TARGET-SDL2
+      $MINGW_TARGET-SDL2_image
+      $MINGW_TARGET-snappy
+      $MINGW_TARGET-spice
+      $MINGW_TARGET-usbredir
+      $MINGW_TARGET-zstd "
   - Write-Output "Running build at $(Get-Date -Format u)"
   - $env:CHERE_INVOKING = 'yes'  # Preserve the current working directory
   - $env:MSYS = 'winsymlinks:native'  # Enable native Windows symlink
@@ -125,3 +117,25 @@ msys2-64bit:
   - ..\msys64\usr\bin\bash -lc "make check MTESTARGS='$TEST_ARGS' || { cat meson-logs/testlog.txt; exit 1; } ;"
   - ..\msys64\usr\bin\bash -lc "ccache --show-stats"
   - Write-Output "Finished build at $(Get-Date -Format u)"
+
+msys2-64bit:
+  extends: .shared_msys2_builder
+  variables:
+    MINGW_TARGET: mingw-w64-x86_64
+    MSYSTEM: MINGW64
+    # do not remove "--without-default-devices"!
+    # commit 9f8e6cad65a6 ("gitlab-ci: Speed up the msys2-64bit job by using --without-default-devices"
+    # changed to compile QEMU with the --without-default-devices switch
+    # for the msys2 64-bit job, due to the build could not complete within
+    # the project timeout.
+    CONFIGURE_ARGS: --target-list=x86_64-softmmu --without-default-devices -Ddebug=false -Doptimization=0
+    # qTests don't run successfully with "--without-default-devices",
+    # so let's exclude the qtests from CI for now.
+    TEST_ARGS: --no-suite qtest
+
+msys2-32bit:
+  extends: .shared_msys2_builder
+  variables:
+    MINGW_TARGET: mingw-w64-i686
+    MSYSTEM: MINGW32
+    CONFIGURE_ARGS: --target-list=ppc64-softmmu -Ddebug=false -Doptimization=0
+    TEST_ARGS: --no-suite qtest

@@ -36,8 +36,6 @@ Marek Dolata <mkdolata@us.ibm.com> mkdolata@us.ibm.com <mkdolata@us.ibm.com>
 Michael Ellerman <mpe@ellerman.id.au> michael@ozlabs.org <michael@ozlabs.org>
 Nick Hudson <hnick@vmware.com> hnick@vmware.com <hnick@vmware.com>
 Timothée Cocault <timothee.cocault@gmail.com> timothee.cocault@gmail.com <timothee.cocault@gmail.com>
-Stefan Weil <sw@weilnetz.de> <weil@mail.berlios.de>
-Stefan Weil <sw@weilnetz.de> Stefan Weil <stefan@kiwi.(none)>
 # There is also a:
 # (no author) <(no author)@c046a42c-6fe2-441c-8c8c-71466251a162>
@@ -62,7 +60,6 @@ Ian McKellar <ianloic@google.com> Ian McKellar via Qemu-devel <qemu-devel@nongnu.org>
 Julia Suvorova <jusual@mail.ru> Julia Suvorova via Qemu-devel <qemu-devel@nongnu.org>
 Justin Terry (VM) <juterry@microsoft.com> Justin Terry (VM) via Qemu-devel <qemu-devel@nongnu.org>
 Stefan Weil <sw@weilnetz.de> Stefan Weil via <qemu-devel@nongnu.org>
-Stefan Weil <sw@weilnetz.de> Stefan Weil via <qemu-trivial@nongnu.org>
 Andrey Drobyshev <andrey.drobyshev@virtuozzo.com> Andrey Drobyshev via <qemu-block@nongnu.org>
 BALATON Zoltan <balaton@eik.bme.hu> BALATON Zoltan via <qemu-ppc@nongnu.org>
@@ -84,7 +81,6 @@ Greg Kurz <groug@kaod.org> <gkurz@linux.vnet.ibm.com>
 Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com>
 Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
 James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
-Juan Quintela <quintela@trasno.org> <quintela@redhat.com>
 Leif Lindholm <quic_llindhol@quicinc.com> <leif.lindholm@linaro.org>
 Leif Lindholm <quic_llindhol@quicinc.com> <leif@nuviainc.com>
 Luc Michel <luc@lmichel.fr> <luc.michel@git.antfield.fr>
@@ -100,9 +96,7 @@ Philippe Mathieu-Daudé <philmd@linaro.org> <f4bug@amsat.org>
 Philippe Mathieu-Daudé <philmd@linaro.org> <philmd@redhat.com>
 Philippe Mathieu-Daudé <philmd@linaro.org> <philmd@fungible.com>
 Roman Bolshakov <rbolshakov@ddn.com> <r.bolshakov@yadro.com>
-Sriram Yagnaraman <sriram.yagnaraman@ericsson.com> <sriram.yagnaraman@est.tech>
 Stefan Brankovic <stefan.brankovic@syrmia.com> <stefan.brankovic@rt-rk.com.com>
-Stefan Weil <sw@weilnetz.de> Stefan Weil <stefan@weilnetz.de>
 Taylor Simpson <ltaylorsimpson@gmail.com> <tsimpson@quicinc.com>
 Yongbok Kim <yongbok.kim@mips.com> <yongbok.kim@imgtec.com>

@@ -5,21 +5,16 @@
 # Required
 version: 2

-# Set the version of Python and other tools you might need
-build:
-  os: ubuntu-22.04
-  tools:
-    python: "3.11"

 # Build documentation in the docs/ directory with Sphinx
 sphinx:
   configuration: docs/conf.py

-# We recommend specifying your dependencies to enable reproducible builds:
-# https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
-python:
-  install:
-    - requirements: docs/requirements.txt

 # We want all the document formats
 formats: all

+# For consistency, we require that QEMU's Sphinx extensions
+# run with at least the same minimum version of Python that
+# we require for other Python in our codebase (our conf.py
+# enforces this, and some code needs it.)
+python:
+  version: 3.6

@@ -1,5 +1,5 @@
 os: linux
-dist: jammy
+dist: focal
 language: c
 compiler:
   - gcc
@@ -7,11 +7,13 @@ cache:
   # There is one cache per branch and compiler version.
   # characteristics of each job are used to identify the cache:
   # - OS name (currently only linux)
-  # - OS distribution (e.g. "jammy" for Linux)
+  # - OS distribution (for Linux, bionic or focal)
   # - Names and values of visible environment variables set in .travis.yml or Settings panel
   timeout: 1200
   ccache: true
   pip: true
+  directories:
+    - $HOME/avocado/data/cache
 # The channel name "irc.oftc.net#qemu" is encrypted against qemu/qemu
@@ -33,7 +35,7 @@ env:
   - TEST_BUILD_CMD=""
   - TEST_CMD="make check V=1"
   # This is broadly a list of "mainline" system targets which have support across the major distros
-  - MAIN_SYSTEM_TARGETS="aarch64-softmmu,mips64-softmmu,ppc64-softmmu,riscv64-softmmu,s390x-softmmu,x86_64-softmmu"
+  - MAIN_SOFTMMU_TARGETS="aarch64-softmmu,mips64-softmmu,ppc64-softmmu,riscv64-softmmu,s390x-softmmu,x86_64-softmmu"
   - CCACHE_SLOPPINESS="include_file_ctime,include_file_mtime"
   - CCACHE_MAXSIZE=1G
   - G_MESSAGES_DEBUG=error
@@ -81,6 +83,7 @@ jobs:
     - name: "[aarch64] GCC check-tcg"
       arch: arm64
+      dist: focal
       addons:
         apt_packages:
           - libaio-dev
@@ -106,17 +109,17 @@
           - libvdeplug-dev
           - libvte-2.91-dev
           - ninja-build
-          - python3-tomli
           # Tests dependencies
           - genisoimage
       env:
         - TEST_CMD="make check check-tcg V=1"
         - CONFIG="--disable-containers --enable-fdt=system
-            --target-list=${MAIN_SYSTEM_TARGETS} --cxx=/bin/false"
+            --target-list=${MAIN_SOFTMMU_TARGETS} --cxx=/bin/false"
+        - UNRELIABLE=true
-    - name: "[ppc64] Clang check-tcg"
+    - name: "[ppc64] GCC check-tcg"
       arch: ppc64le
-      compiler: clang
+      dist: focal
       addons:
         apt_packages:
           - libaio-dev
@@ -142,7 +145,6 @@
           - libvdeplug-dev
           - libvte-2.91-dev
           - ninja-build
-          - python3-tomli
           # Tests dependencies
           - genisoimage
       env:
@@ -152,6 +154,7 @@
     - name: "[s390x] GCC check-tcg"
       arch: s390x
+      dist: focal
       addons:
         apt_packages:
           - libaio-dev
@@ -177,13 +180,13 @@
           - libvdeplug-dev
           - libvte-2.91-dev
           - ninja-build
-          - python3-tomli
           # Tests dependencies
           - genisoimage
       env:
         - TEST_CMD="make check check-tcg V=1"
-        - CONFIG="--disable-containers
-            --target-list=hppa-softmmu,mips64-softmmu,ppc64-softmmu,riscv64-softmmu,s390x-softmmu,x86_64-softmmu"
+        - CONFIG="--disable-containers --enable-fdt=system
+            --target-list=${MAIN_SOFTMMU_TARGETS},s390x-linux-user"
+        - UNRELIABLE=true
       script:
         - BUILD_RC=0 && make -j${JOBS} || BUILD_RC=$?
         - |
@@ -194,9 +197,9 @@
           $(exit $BUILD_RC);
         fi
-    - name: "[s390x] Clang (other-system)"
+    - name: "[s390x] GCC (other-system)"
       arch: s390x
-      compiler: clang
+      dist: focal
       addons:
         apt_packages:
           - libaio-dev
@@ -217,16 +220,17 @@
           - libsnappy-dev
           - libzstd-dev
           - nettle-dev
+          - xfslibs-dev
           - ninja-build
-          - python3-tomli
           # Tests dependencies
           - genisoimage
       env:
-        - CONFIG="--disable-containers --audio-drv-list=sdl --disable-user
-            --target-list=arm-softmmu,avr-softmmu,microblaze-softmmu,sh4eb-softmmu,sparc64-softmmu,xtensaeb-softmmu"
+        - CONFIG="--disable-containers --enable-fdt=system --audio-drv-list=sdl
+            --disable-user --target-list-exclude=${MAIN_SOFTMMU_TARGETS}"
     - name: "[s390x] GCC (user)"
       arch: s390x
+      dist: focal
       addons:
         apt_packages:
           - libgcrypt20-dev
@@ -235,14 +239,13 @@
           - ninja-build
           - flex
           - bison
-          - python3-tomli
       env:
-        - TEST_CMD="make check check-tcg V=1"
         - CONFIG="--disable-containers --disable-system"
     - name: "[s390x] Clang (disable-tcg)"
       arch: s390x
-      compiler: clang
+      dist: focal
+      compiler: clang-10
       addons:
         apt_packages:
           - libaio-dev
@@ -268,8 +271,9 @@
           - libvdeplug-dev
           - libvte-2.91-dev
           - ninja-build
-          - python3-tomli
+          - clang-10
       env:
         - TEST_CMD="make check-unit"
         - CONFIG="--disable-containers --disable-tcg --enable-kvm --disable-tools
             --enable-fdt=system --host-cc=clang --cxx=clang++"
+        - UNRELIABLE=true

@@ -23,9 +23,6 @@ config IVSHMEM
 config TPM
     bool

-config FDT
-    bool
-
 config VHOST_USER
     bool
@@ -38,6 +35,9 @@ config VHOST_KERNEL
 config VIRTFS
     bool

+config PVRDMA
+    bool
+
 config MULTIPROCESS_ALLOWED
     bool
     imply MULTIPROCESS

@@ -70,6 +70,7 @@ R: Daniel P. Berrangé <berrange@redhat.com>
 R: Thomas Huth <thuth@redhat.com>
 R: Markus Armbruster <armbru@redhat.com>
 R: Philippe Mathieu-Daudé <philmd@linaro.org>
+R: Juan Quintela <quintela@redhat.com>
 W: https://www.qemu.org/docs/master/devel/index.html
 S: Odd Fixes
 F: docs/devel/style.rst
@@ -130,18 +131,6 @@ K: ^Subject:.*(?i)mips
 F: docs/system/target-mips.rst
 F: configs/targets/mips*

-X86 general architecture support
-M: Paolo Bonzini <pbonzini@redhat.com>
-S: Maintained
-F: configs/devices/i386-softmmu/default.mak
-F: configs/targets/i386-softmmu.mak
-F: configs/targets/x86_64-softmmu.mak
-F: docs/system/target-i386*
-F: target/i386/*.[ch]
-F: target/i386/Kconfig
-F: target/i386/meson.build
-F: tools/i386/
-
 Guest CPU cores (TCG)
 ---------------------
 Overall TCG CPUs
@@ -168,14 +157,12 @@ F: include/exec/target_long.h
 F: include/exec/helper*.h
 F: include/exec/helper*.h.inc
 F: include/exec/helper-info.c.inc
-F: include/exec/page-protection.h
 F: include/sysemu/cpus.h
 F: include/sysemu/tcg.h
 F: include/hw/core/tcg-cpu-ops.h
 F: host/include/*/host/cpuinfo.h
 F: util/cpuinfo-*.c
 F: include/tcg/
-F: tests/decode/

 FPU emulation
 M: Aurelien Jarno <aurelien@aurel32.net>
@@ -245,7 +232,6 @@ F: disas/hexagon.c
 F: configs/targets/hexagon-linux-user/default.mak
 F: docker/dockerfiles/debian-hexagon-cross.docker
 F: gdb-xml/hexagon*.xml
-T: git https://github.com/quic/qemu.git hex-next

 Hexagon idef-parser
 M: Alessandro Di Federico <ale@rev.ng>
@@ -287,13 +273,26 @@ MIPS TCG CPUs
 M: Philippe Mathieu-Daudé <philmd@linaro.org>
 R: Aurelien Jarno <aurelien@aurel32.net>
 R: Jiaxun Yang <jiaxun.yang@flygoat.com>
-R: Aleksandar Rikalo <arikalo@gmail.com>
+R: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
 S: Odd Fixes
 F: target/mips/
 F: disas/*mips.c
 F: docs/system/cpu-models-mips.rst.inc
 F: tests/tcg/mips/
+
+NiosII TCG CPUs
+R: Chris Wulff <crwulff@gmail.com>
+R: Marek Vasut <marex@denx.de>
+S: Orphan
+F: target/nios2/
+F: hw/nios2/
+F: hw/intc/nios2_vic.c
+F: disas/nios2.c
+F: include/hw/intc/nios2_vic.h
+F: configs/devices/nios2-softmmu/default.mak
+F: tests/docker/dockerfiles/debian-nios2-cross.d/build-toolchain.sh
+F: tests/tcg/nios2/

 OpenRISC TCG CPUs
 M: Stafford Horne <shorne@gmail.com>
 S: Odd Fixes
@@ -306,6 +305,7 @@ F: tests/tcg/openrisc/
 PowerPC TCG CPUs
 M: Nicholas Piggin <npiggin@gmail.com>
 M: Daniel Henrique Barboza <danielhb413@gmail.com>
+R: Cédric Le Goater <clg@kaod.org>
 L: qemu-ppc@nongnu.org
 S: Odd Fixes
 F: target/ppc/
@@ -322,7 +322,7 @@ F: tests/tcg/ppc*/*
 RISC-V TCG CPUs
 M: Palmer Dabbelt <palmer@dabbelt.com>
 M: Alistair Francis <alistair.francis@wdc.com>
-M: Bin Meng <bmeng.cn@gmail.com>
+M: Bin Meng <bin.meng@windriver.com>
 R: Weiwei Li <liwei1518@gmail.com>
 R: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
 R: Liu Zhiwei <zhiwei_liu@linux.alibaba.com>
@@ -345,7 +345,6 @@ L: qemu-riscv@nongnu.org
 S: Supported
 F: target/riscv/insn_trans/trans_xthead.c.inc
 F: target/riscv/xthead*.decode
-F: target/riscv/th_*
 F: disas/riscv-xthead*

 RISC-V XVentanaCondOps extension
@@ -458,6 +457,7 @@ F: target/mips/sysemu/
 PPC KVM CPUs
 M: Nicholas Piggin <npiggin@gmail.com>
 R: Daniel Henrique Barboza <danielhb413@gmail.com>
+R: Cédric Le Goater <clg@kaod.org>
 S: Odd Fixes
 F: target/ppc/kvm.c
@@ -536,9 +536,8 @@ Guest CPU Cores (Xen)
 ---------------------
 X86 Xen CPUs
 M: Stefano Stabellini <sstabellini@kernel.org>
-M: Anthony PERARD <anthony@xenproject.org>
+M: Anthony Perard <anthony.perard@citrix.com>
 M: Paul Durrant <paul@xen.org>
-M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
 L: xen-devel@lists.xenproject.org
 S: Supported
 F: */xen*
@@ -632,7 +631,6 @@ R: Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
 L: qemu-arm@nongnu.org
 S: Odd Fixes
 F: hw/*/allwinner*
-F: hw/ide/ahci-allwinner.c
 F: include/hw/*/allwinner*
 F: hw/arm/cubieboard.c
 F: docs/system/arm/cubieboard.rst
@@ -659,7 +657,6 @@ F: include/hw/dma/pl080.h
 F: hw/dma/pl330.c
 F: hw/gpio/pl061.c
 F: hw/input/pl050.c
-F: include/hw/input/pl050.h
 F: hw/intc/pl190.c
 F: hw/sd/pl181.c
 F: hw/ssi/pl022.c
@@ -810,13 +807,12 @@ F: include/hw/misc/imx7_*.h
 F: hw/pci-host/designware.c
 F: include/hw/pci-host/designware.h

-MPS2 / MPS3
+MPS2
 M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/arm/mps2.c
 F: hw/arm/mps2-tz.c
-F: hw/arm/mps3r.c
 F: hw/misc/mps2-*.c
 F: include/hw/misc/mps2-*.h
 F: hw/arm/armsse.c
@@ -931,7 +927,6 @@ F: hw/*/pxa2xx*
 F: hw/display/tc6393xb.c
 F: hw/gpio/max7310.c
 F: hw/gpio/zaurus.c
-F: hw/input/ads7846.c
 F: hw/misc/mst_fpga.c
 F: hw/adc/max111x.c
 F: include/hw/adc/max111x.h
@@ -984,9 +979,7 @@ M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/*/stellaris*
-F: hw/display/ssd03*
 F: include/hw/input/gamepad.h
-F: include/hw/timer/stellaris-gptm.h
 F: docs/system/arm/stellaris.rst

 STM32VLDISCOVERY
@@ -1001,7 +994,6 @@ M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/arm/vexpress.c
-F: hw/display/sii9022.c
 F: docs/system/arm/vexpress.rst

 Versatile PB
@@ -1036,7 +1028,6 @@ F: hw/adc/zynq-xadc.c
 F: include/hw/misc/zynq_slcr.h
 F: include/hw/adc/zynq-xadc.h
 X: hw/ssi/xilinx_*
-F: docs/system/arm/xlnx-zynq.rst

 Xilinx ZynqMP and Versal
 M: Alistair Francis <alistair@alistair23.me>
@@ -1115,26 +1106,6 @@ L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/arm/olimex-stm32-h405.c

-STM32L4x5 SoC Family
-M: Arnaud Minier <arnaud.minier@telecom-paris.fr>
-M: Inès Varhol <ines.varhol@telecom-paris.fr>
-L: qemu-arm@nongnu.org
-S: Maintained
-F: hw/arm/stm32l4x5_soc.c
-F: hw/char/stm32l4x5_usart.c
-F: hw/misc/stm32l4x5_exti.c
-F: hw/misc/stm32l4x5_syscfg.c
-F: hw/misc/stm32l4x5_rcc.c
-F: hw/gpio/stm32l4x5_gpio.c
-F: include/hw/*/stm32l4x5_*.h
-
-B-L475E-IOT01A IoT Node
-M: Arnaud Minier <arnaud.minier@telecom-paris.fr>
-M: Inès Varhol <ines.varhol@telecom-paris.fr>
-L: qemu-arm@nongnu.org
-S: Maintained
-F: hw/arm/b-l475e-iot01a.c
-
 SmartFusion2
 M: Subbaraya Sundeep <sundeep.lkml@gmail.com>
 M: Peter Maydell <peter.maydell@linaro.org>
@@ -1162,15 +1133,14 @@ F: docs/system/arm/emcraft-sf2.rst
 ASPEED BMCs
 M: Cédric Le Goater <clg@kaod.org>
 M: Peter Maydell <peter.maydell@linaro.org>
-R: Steven Lee <steven_lee@aspeedtech.com>
-R: Troy Lee <leetroy@gmail.com>
-R: Jamin Lin <jamin_lin@aspeedtech.com>
 R: Andrew Jeffery <andrew@codeconstruct.com.au>
 R: Joel Stanley <joel@jms.id.au>
 L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/*/*aspeed*
+F: hw/misc/pca9552.c
 F: include/hw/*/*aspeed*
+F: include/hw/misc/pca9552*.h
 F: hw/net/ftgmac100.c
 F: include/hw/net/ftgmac100.h
 F: docs/system/arm/aspeed.rst
@@ -1243,7 +1213,6 @@ LoongArch Machines
 ------------------
 Virt
 M: Song Gao <gaosong@loongson.cn>
-R: Jiaxun Yang <jiaxun.yang@flygoat.com>
 S: Maintained
 F: docs/system/loongarch/virt.rst
 F: configs/targets/loongarch64-softmmu.mak
@@ -1251,9 +1220,7 @@ F: configs/devices/loongarch64-softmmu/default.mak
 F: hw/loongarch/
 F: include/hw/loongarch/virt.h
 F: include/hw/intc/loongarch_*.h
-F: include/hw/intc/loongson_ipi_common.h
 F: hw/intc/loongarch_*.c
-F: hw/intc/loongson_ipi_common.c
 F: include/hw/pci-host/ls7a.h
 F: hw/rtc/ls7a_rtc.c
 F: gdb-xml/loongarch*.xml
@@ -1346,7 +1313,7 @@ F: include/hw/mips/
 Jazz
 M: Hervé Poussineau <hpoussin@reactos.org>
-R: Aleksandar Rikalo <arikalo@gmail.com>
+R: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
 S: Maintained
 F: hw/mips/jazz.c
 F: hw/display/g364fb.c
@@ -1359,7 +1326,6 @@ M: Philippe Mathieu-Daudé <philmd@linaro.org>
 R: Aurelien Jarno <aurelien@aurel32.net>
 S: Odd Fixes
 F: hw/isa/piix.c
-F: hw/isa/fdc37m81x-superio.c
 F: hw/acpi/piix4.c
 F: hw/mips/malta.c
 F: hw/pci-host/gt64120.c
@@ -1368,7 +1334,7 @@ F: tests/avocado/linux_ssh_mips_malta.py
 F: tests/avocado/machine_mips_malta.py

 Mipssim
-R: Aleksandar Rikalo <arikalo@gmail.com>
+R: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
 S: Orphan
 F: hw/mips/mipssim.c
 F: hw/net/mipsnet.c
@@ -1387,20 +1353,16 @@ Loongson-3 virtual platforms
 M: Huacai Chen <chenhuacai@kernel.org>
 R: Jiaxun Yang <jiaxun.yang@flygoat.com>
 S: Maintained
-F: hw/intc/loongson_ipi_common.c
-F: hw/intc/loongson_ipi.c
 F: hw/intc/loongson_liointc.c
 F: hw/mips/loongson3_bootp.c
 F: hw/mips/loongson3_bootp.h
 F: hw/mips/loongson3_virt.c
-F: include/hw/intc/loongson_ipi_common.h
-F: include/hw/intc/loongson_ipi.h
 F: include/hw/intc/loongson_liointc.h
 F: tests/avocado/machine_mips_loongson3v.py

 Boston
 M: Paul Burton <paulburton@kernel.org>
-R: Aleksandar Rikalo <arikalo@gmail.com>
+R: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
 S: Odd Fixes
 F: hw/core/loader-fit.c
 F: hw/mips/boston.c
@@ -1428,7 +1390,6 @@ Bamboo
 L: qemu-ppc@nongnu.org
 S: Orphan
 F: hw/ppc/ppc440_bamboo.c
-F: hw/pci-host/ppc4xx_pci.c
 F: tests/avocado/ppc_bamboo.py

 e500
@@ -1510,6 +1471,7 @@ F: tests/avocado/ppc_prep_40p.py
 sPAPR (pseries)
 M: Nicholas Piggin <npiggin@gmail.com>
 R: Daniel Henrique Barboza <danielhb413@gmail.com>
+R: Cédric Le Goater <clg@kaod.org>
 R: David Gibson <david@gibson.dropbear.id.au>
 R: Harsh Prateek Bora <harshpb@linux.ibm.com>
 L: qemu-ppc@nongnu.org
@@ -1530,7 +1492,6 @@ F: tests/qtest/libqos/*spapr*
 F: tests/qtest/rtas*
 F: tests/qtest/libqos/rtas*
 F: tests/avocado/ppc_pseries.py
-F: tests/avocado/ppc_hv_tests.py

 PowerNV (Non-Virtualized)
 M: Cédric Le Goater <clg@kaod.org>
@@ -1548,14 +1509,6 @@ F: include/hw/pci-host/pnv*
 F: pc-bios/skiboot.lid
 F: tests/qtest/pnv*

-pca955x
-M: Glenn Miles <milesg@linux.ibm.com>
-L: qemu-ppc@nongnu.org
-L: qemu-arm@nongnu.org
-S: Odd Fixes
-F: hw/gpio/pca955*.c
-F: include/hw/gpio/pca955*.h
-
 virtex_ml507
 M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
 L: qemu-ppc@nongnu.org
@@ -1569,14 +1522,13 @@ L: qemu-ppc@nongnu.org
 S: Maintained
 F: hw/ppc/sam460ex.c
 F: hw/ppc/ppc440_uc.c
-F: hw/pci-host/ppc440_pcix.c
+F: hw/ppc/ppc440_pcix.c
 F: hw/display/sm501*
 F: hw/ide/sii3112.c
 F: hw/rtc/m41t80.c
 F: pc-bios/canyonlands.dt[sb]
 F: pc-bios/u-boot-sam460ex-20100605.bin
 F: roms/u-boot-sam460ex
-F: docs/system/ppc/amigang.rst

 pegasos2
 M: BALATON Zoltan <balaton@eik.bme.hu>
@@ -1618,7 +1570,7 @@ F: include/hw/riscv/opentitan.h
F: include/hw/*/ibex_*.h F: include/hw/*/ibex_*.h
Microchip PolarFire SoC Icicle Kit Microchip PolarFire SoC Icicle Kit
M: Bin Meng <bmeng.cn@gmail.com> M: Bin Meng <bin.meng@windriver.com>
L: qemu-riscv@nongnu.org L: qemu-riscv@nongnu.org
S: Supported S: Supported
F: docs/system/riscv/microchip-icicle-kit.rst F: docs/system/riscv/microchip-icicle-kit.rst
@@ -1645,7 +1597,7 @@ F: include/hw/char/shakti_uart.h
SiFive Machines SiFive Machines
M: Alistair Francis <Alistair.Francis@wdc.com> M: Alistair Francis <Alistair.Francis@wdc.com>
M: Bin Meng <bmeng.cn@gmail.com> M: Bin Meng <bin.meng@windriver.com>
M: Palmer Dabbelt <palmer@dabbelt.com> M: Palmer Dabbelt <palmer@dabbelt.com>
L: qemu-riscv@nongnu.org L: qemu-riscv@nongnu.org
S: Supported S: Supported
@@ -1725,12 +1677,13 @@ F: hw/rtc/sun4v-rtc.c
F: include/hw/rtc/sun4v-rtc.h F: include/hw/rtc/sun4v-rtc.h
Leon3 Leon3
M: Clément Chigot <chigot@adacore.com> M: Fabien Chouteau <chouteau@adacore.com>
M: Frederic Konrad <konrad.frederic@yahoo.fr> M: Frederic Konrad <konrad.frederic@yahoo.fr>
S: Maintained S: Maintained
F: hw/sparc/leon3.c F: hw/sparc/leon3.c
F: hw/*/grlib* F: hw/*/grlib*
F: include/hw/*/grlib* F: include/hw/*/grlib*
F: tests/avocado/machine_sparc_leon3.py
S390 Machines S390 Machines
------------- -------------
@@ -1882,10 +1835,8 @@ M: Eduardo Habkost <eduardo@habkost.net>
M: Marcel Apfelbaum <marcel.apfelbaum@gmail.com> M: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
R: Philippe Mathieu-Daudé <philmd@linaro.org> R: Philippe Mathieu-Daudé <philmd@linaro.org>
R: Yanan Wang <wangyanan55@huawei.com> R: Yanan Wang <wangyanan55@huawei.com>
R: Zhao Liu <zhao1.liu@intel.com>
S: Supported S: Supported
F: hw/core/cpu-common.c F: hw/core/cpu.c
F: hw/core/cpu-sysemu.c
F: hw/core/machine-qmp-cmds.c F: hw/core/machine-qmp-cmds.c
F: hw/core/machine.c F: hw/core/machine.c
F: hw/core/machine-smp.c F: hw/core/machine-smp.c
@@ -1951,6 +1902,7 @@ IDE
M: John Snow <jsnow@redhat.com> M: John Snow <jsnow@redhat.com>
L: qemu-block@nongnu.org L: qemu-block@nongnu.org
S: Odd Fixes S: Odd Fixes
F: include/hw/ide.h
F: include/hw/ide/ F: include/hw/ide/
F: hw/ide/ F: hw/ide/
F: hw/block/block.c F: hw/block/block.c
@@ -2084,7 +2036,6 @@ F: hw/ppc/ppc4xx*.c
F: hw/ppc/ppc440_uc.c F: hw/ppc/ppc440_uc.c
F: hw/ppc/ppc440.h F: hw/ppc/ppc440.h
F: hw/i2c/ppc4xx_i2c.c F: hw/i2c/ppc4xx_i2c.c
F: include/hw/pci-host/ppc4xx.h
F: include/hw/ppc/ppc4xx.h F: include/hw/ppc/ppc4xx.h
F: include/hw/i2c/ppc4xx_i2c.h F: include/hw/i2c/ppc4xx_i2c.h
F: hw/intc/ppc-uic.c F: hw/intc/ppc-uic.c
@@ -2141,7 +2092,7 @@ F: hw/ssi/xilinx_*
SD (Secure Card) SD (Secure Card)
M: Philippe Mathieu-Daudé <philmd@linaro.org> M: Philippe Mathieu-Daudé <philmd@linaro.org>
M: Bin Meng <bmeng.cn@gmail.com> M: Bin Meng <bin.meng@windriver.com>
L: qemu-block@nongnu.org L: qemu-block@nongnu.org
S: Odd Fixes S: Odd Fixes
F: include/hw/sd/sd* F: include/hw/sd/sd*
@@ -2152,7 +2103,8 @@ F: tests/qtest/fuzz-sdcard-test.c
F: tests/qtest/sdhci-test.c F: tests/qtest/sdhci-test.c
USB USB
S: Orphan M: Gerd Hoffmann <kraxel@redhat.com>
S: Odd Fixes
F: hw/usb/* F: hw/usb/*
F: stubs/usb-dev-stub.c F: stubs/usb-dev-stub.c
F: tests/qtest/usb-*-test.c F: tests/qtest/usb-*-test.c
@@ -2161,6 +2113,7 @@ F: include/hw/usb.h
F: include/hw/usb/ F: include/hw/usb/
USB (serial adapter) USB (serial adapter)
R: Gerd Hoffmann <kraxel@redhat.com>
M: Samuel Thibault <samuel.thibault@ens-lyon.org> M: Samuel Thibault <samuel.thibault@ens-lyon.org>
S: Maintained S: Maintained
F: hw/usb/dev-serial.c F: hw/usb/dev-serial.c
@@ -2172,8 +2125,7 @@ S: Supported
F: hw/vfio/* F: hw/vfio/*
F: include/hw/vfio/ F: include/hw/vfio/
F: docs/igd-assign.txt F: docs/igd-assign.txt
F: docs/devel/migration/vfio.rst F: docs/devel/vfio-migration.rst
F: qapi/vfio.json
vfio-ccw vfio-ccw
M: Eric Farman <farman@linux.ibm.com> M: Eric Farman <farman@linux.ibm.com>
@@ -2198,22 +2150,8 @@ F: hw/vfio/ap.c
F: docs/system/s390x/vfio-ap.rst F: docs/system/s390x/vfio-ap.rst
L: qemu-s390x@nongnu.org L: qemu-s390x@nongnu.org
iommufd
M: Yi Liu <yi.l.liu@intel.com>
M: Eric Auger <eric.auger@redhat.com>
M: Zhenzhong Duan <zhenzhong.duan@intel.com>
S: Supported
F: backends/iommufd.c
F: include/sysemu/iommufd.h
F: backends/host_iommu_device.c
F: include/sysemu/host_iommu_device.h
F: include/qemu/chardev_open.h
F: util/chardev_open.c
F: docs/devel/vfio-iommufd.rst
vhost vhost
M: Michael S. Tsirkin <mst@redhat.com> M: Michael S. Tsirkin <mst@redhat.com>
R: Stefano Garzarella <sgarzare@redhat.com>
S: Supported S: Supported
F: hw/*/*vhost* F: hw/*/*vhost*
F: docs/interop/vhost-user.json F: docs/interop/vhost-user.json
@@ -2237,7 +2175,6 @@ F: qapi/virtio.json
F: net/vhost-user.c F: net/vhost-user.c
F: include/hw/virtio/ F: include/hw/virtio/
F: docs/devel/virtio* F: docs/devel/virtio*
F: docs/devel/migration/virtio.rst
virtio-balloon virtio-balloon
M: Michael S. Tsirkin <mst@redhat.com> M: Michael S. Tsirkin <mst@redhat.com>
@@ -2304,14 +2241,13 @@ M: Stefan Hajnoczi <stefanha@redhat.com>
S: Supported S: Supported
F: hw/virtio/vhost-user-fs* F: hw/virtio/vhost-user-fs*
F: include/hw/virtio/vhost-user-fs.h F: include/hw/virtio/vhost-user-fs.h
L: virtio-fs@lists.linux.dev L: virtio-fs@redhat.com
virtio-input virtio-input
M: Gerd Hoffmann <kraxel@redhat.com> M: Gerd Hoffmann <kraxel@redhat.com>
S: Odd Fixes S: Odd Fixes
F: docs/system/devices/vhost-user-input.rst F: hw/input/vhost-user-input.c
F: hw/input/virtio-input*.c F: hw/input/virtio-input*.c
F: hw/virtio/vhost-user-input.c
F: include/hw/virtio/virtio-input.h F: include/hw/virtio/virtio-input.h
F: contrib/vhost-user-input/* F: contrib/vhost-user-input/*
@@ -2340,12 +2276,6 @@ F: include/sysemu/rng*.h
F: backends/rng*.c F: backends/rng*.c
F: tests/qtest/virtio-rng-test.c F: tests/qtest/virtio-rng-test.c
vhost-user-stubs
M: Alex Bennée <alex.bennee@linaro.org>
S: Maintained
F: hw/virtio/vhost-user-base.c
F: hw/virtio/vhost-user-device*
vhost-user-rng vhost-user-rng
M: Mathieu Poirier <mathieu.poirier@linaro.org> M: Mathieu Poirier <mathieu.poirier@linaro.org>
S: Supported S: Supported
@@ -2363,13 +2293,6 @@ F: hw/virtio/vhost-user-gpio*
F: include/hw/virtio/vhost-user-gpio.h F: include/hw/virtio/vhost-user-gpio.h
F: tests/qtest/libqos/virtio-gpio.* F: tests/qtest/libqos/virtio-gpio.*
vhost-user-snd
M: Alex Bennée <alex.bennee@linaro.org>
R: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
S: Maintained
F: hw/virtio/vhost-user-snd*
F: include/hw/virtio/vhost-user-snd.h
vhost-user-scmi vhost-user-scmi
R: mzamazal@redhat.com R: mzamazal@redhat.com
S: Supported S: Supported
@@ -2412,7 +2335,6 @@ F: docs/system/devices/virtio-snd.rst
nvme nvme
M: Keith Busch <kbusch@kernel.org> M: Keith Busch <kbusch@kernel.org>
M: Klaus Jensen <its@irrelevant.dk> M: Klaus Jensen <its@irrelevant.dk>
R: Jesper Devantier <foss@defmacro.it>
L: qemu-block@nongnu.org L: qemu-block@nongnu.org
S: Supported S: Supported
F: hw/nvme/* F: hw/nvme/*
@@ -2449,13 +2371,8 @@ F: hw/net/net_tx_pkt*
Vmware Vmware
M: Dmitry Fleytman <dmitry.fleytman@gmail.com> M: Dmitry Fleytman <dmitry.fleytman@gmail.com>
S: Maintained S: Maintained
F: docs/specs/vmw_pvscsi-spec.txt
F: hw/display/vmware_vga.c
F: hw/net/vmxnet* F: hw/net/vmxnet*
F: hw/scsi/vmw_pvscsi* F: hw/scsi/vmw_pvscsi*
F: pc-bios/efi-vmxnet3.rom
F: pc-bios/vgabios-vmware.bin
F: roms/config.vga-vmware
F: tests/qtest/vmxnet3-test.c F: tests/qtest/vmxnet3-test.c
F: docs/specs/vwm_pvscsi-spec.rst F: docs/specs/vwm_pvscsi-spec.rst
@@ -2465,7 +2382,7 @@ S: Maintained
F: hw/net/rocker/ F: hw/net/rocker/
F: qapi/rocker.json F: qapi/rocker.json
F: tests/rocker/ F: tests/rocker/
F: docs/specs/rocker.rst F: docs/specs/rocker.txt
e1000x e1000x
M: Dmitry Fleytman <dmitry.fleytman@gmail.com> M: Dmitry Fleytman <dmitry.fleytman@gmail.com>
@@ -2484,7 +2401,7 @@ F: tests/qtest/libqos/e1000e.*
igb igb
M: Akihiko Odaki <akihiko.odaki@daynix.com> M: Akihiko Odaki <akihiko.odaki@daynix.com>
R: Sriram Yagnaraman <sriram.yagnaraman@ericsson.com> R: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
S: Maintained S: Maintained
F: docs/system/devices/igb.rst F: docs/system/devices/igb.rst
F: hw/net/igb* F: hw/net/igb*
@@ -2504,17 +2421,11 @@ F: hw/net/tulip.c
F: hw/net/tulip.h F: hw/net/tulip.h
pca954x pca954x
M: Patrick Leis <venture@google.com> M: Patrick Venture <venture@google.com>
S: Maintained S: Maintained
F: hw/i2c/i2c_mux_pca954x.c F: hw/i2c/i2c_mux_pca954x.c
F: include/hw/i2c/i2c_mux_pca954x.h F: include/hw/i2c/i2c_mux_pca954x.h
pcf8574
M: Dmitrii Sharikhin <d.sharikhin@yadro.com>
S: Maintained
F: hw/gpio/pcf8574.c
F: include/gpio/pcf8574.h
Generic Loader Generic Loader
M: Alistair Francis <alistair@alistair23.me> M: Alistair Francis <alistair@alistair23.me>
S: Maintained S: Maintained
@@ -2589,14 +2500,15 @@ F: hw/display/ramfb*.c
F: include/hw/display/ramfb.h F: include/hw/display/ramfb.h
virtio-gpu virtio-gpu
S: Orphan M: Gerd Hoffmann <kraxel@redhat.com>
S: Odd Fixes
F: hw/display/virtio-gpu* F: hw/display/virtio-gpu*
F: hw/display/virtio-vga.* F: hw/display/virtio-vga.*
F: include/hw/virtio/virtio-gpu.h F: include/hw/virtio/virtio-gpu.h
F: docs/system/devices/virtio-gpu.rst F: docs/system/devices/virtio-gpu.rst
vhost-user-blk vhost-user-blk
M: Raphael Norwitz <raphael@enfabrica.net> M: Raphael Norwitz <raphael.norwitz@nutanix.com>
S: Maintained S: Maintained
F: contrib/vhost-user-blk/ F: contrib/vhost-user-blk/
F: contrib/vhost-user-scsi/ F: contrib/vhost-user-scsi/
@@ -2611,6 +2523,7 @@ F: include/hw/virtio/virtio-blk-common.h
vhost-user-gpu vhost-user-gpu
M: Marc-André Lureau <marcandre.lureau@redhat.com> M: Marc-André Lureau <marcandre.lureau@redhat.com>
R: Gerd Hoffmann <kraxel@redhat.com>
S: Maintained S: Maintained
F: docs/interop/vhost-user-gpu.rst F: docs/interop/vhost-user-gpu.rst
F: contrib/vhost-user-gpu F: contrib/vhost-user-gpu
@@ -2881,6 +2794,7 @@ F: util/aio-*.h
F: util/defer-call.c F: util/defer-call.c
F: util/fdmon-*.c F: util/fdmon-*.c
F: block/io.c F: block/io.c
F: migration/block*
F: include/block/aio.h F: include/block/aio.h
F: include/block/aio-wait.h F: include/block/aio-wait.h
F: include/qemu/defer-call.h F: include/qemu/defer-call.h
@@ -2932,7 +2846,6 @@ S: Supported
F: hw/cxl/ F: hw/cxl/
F: hw/mem/cxl_type3.c F: hw/mem/cxl_type3.c
F: include/hw/cxl/ F: include/hw/cxl/
F: qapi/cxl.json
Dirty Bitmaps Dirty Bitmaps
M: Eric Blake <eblake@redhat.com> M: Eric Blake <eblake@redhat.com>
@@ -3012,7 +2925,7 @@ F: include/qapi/error.h
F: include/qemu/error-report.h F: include/qemu/error-report.h
F: qapi/error.json F: qapi/error.json
F: util/error.c F: util/error.c
F: util/error-report.c F: util/qemu-error.c
F: scripts/coccinelle/err-bad-newline.cocci F: scripts/coccinelle/err-bad-newline.cocci
F: scripts/coccinelle/error-use-after-free.cocci F: scripts/coccinelle/error-use-after-free.cocci
F: scripts/coccinelle/error_propagate_null.cocci F: scripts/coccinelle/error_propagate_null.cocci
@@ -3068,7 +2981,8 @@ F: stubs/memory_device.c
F: docs/nvdimm.txt F: docs/nvdimm.txt
SPICE SPICE
S: Orphan M: Gerd Hoffmann <kraxel@redhat.com>
S: Odd Fixes
F: include/ui/qemu-spice.h F: include/ui/qemu-spice.h
F: include/ui/spice-display.h F: include/ui/spice-display.h
F: ui/spice-*.c F: ui/spice-*.c
@@ -3078,6 +2992,7 @@ F: qapi/ui.json
F: docs/spice-port-fqdn.txt F: docs/spice-port-fqdn.txt
Graphics Graphics
M: Gerd Hoffmann <kraxel@redhat.com>
M: Marc-André Lureau <marcandre.lureau@redhat.com> M: Marc-André Lureau <marcandre.lureau@redhat.com>
S: Odd Fixes S: Odd Fixes
F: ui/ F: ui/
@@ -3224,7 +3139,6 @@ M: Eric Blake <eblake@redhat.com>
M: Markus Armbruster <armbru@redhat.com> M: Markus Armbruster <armbru@redhat.com>
S: Supported S: Supported
F: qapi/*.json F: qapi/*.json
F: qga/qapi-schema.json
T: git https://repo.or.cz/qemu/armbru.git qapi-next T: git https://repo.or.cz/qemu/armbru.git qapi-next
QObject QObject
@@ -3323,7 +3237,6 @@ F: tests/qtest/
F: docs/devel/qgraph.rst F: docs/devel/qgraph.rst
F: docs/devel/qtest.rst F: docs/devel/qtest.rst
X: tests/qtest/bios-tables-test* X: tests/qtest/bios-tables-test*
X: tests/qtest/migration-*
Device Fuzzing Device Fuzzing
M: Alexander Bulekov <alxndr@bu.edu> M: Alexander Bulekov <alxndr@bu.edu>
@@ -3359,7 +3272,6 @@ Stats
S: Orphan S: Orphan
F: include/sysemu/stats.h F: include/sysemu/stats.h
F: stats/ F: stats/
F: qapi/stats.json
Streams Streams
M: Edgar E. Iglesias <edgar.iglesias@gmail.com> M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
@@ -3405,19 +3317,15 @@ F: tests/qtest/*tpm*
F: docs/specs/tpm.rst F: docs/specs/tpm.rst
T: git https://github.com/stefanberger/qemu-tpm.git tpm-next T: git https://github.com/stefanberger/qemu-tpm.git tpm-next
SPDM
M: Alistair Francis <alistair.francis@wdc.com>
S: Maintained
F: backends/spdm-socket.c
F: include/sysemu/spdm-socket.h
Checkpatch Checkpatch
S: Odd Fixes S: Odd Fixes
F: scripts/checkpatch.pl F: scripts/checkpatch.pl
Migration Migration
M: Juan Quintela <quintela@redhat.com>
M: Peter Xu <peterx@redhat.com> M: Peter Xu <peterx@redhat.com>
M: Fabiano Rosas <farosas@suse.de> M: Fabiano Rosas <farosas@suse.de>
R: Leonardo Bras <leobras@redhat.com>
S: Maintained S: Maintained
F: hw/core/vmstate-if.c F: hw/core/vmstate-if.c
F: include/hw/vmstate-if.h F: include/hw/vmstate-if.h
@@ -3426,16 +3334,18 @@ F: include/qemu/userfaultfd.h
F: migration/ F: migration/
F: scripts/vmstate-static-checker.py F: scripts/vmstate-static-checker.py
F: tests/vmstate-static-checker-data/ F: tests/vmstate-static-checker-data/
F: tests/qtest/migration-* F: tests/qtest/migration-test.c
F: docs/devel/migration/ F: docs/devel/migration.rst
F: qapi/migration.json F: qapi/migration.json
F: tests/migration/ F: tests/migration/
F: util/userfaultfd.c F: util/userfaultfd.c
X: migration/rdma* X: migration/rdma*
RDMA Migration RDMA Migration
M: Juan Quintela <quintela@redhat.com>
R: Li Zhijian <lizhijian@fujitsu.com> R: Li Zhijian <lizhijian@fujitsu.com>
R: Peter Xu <peterx@redhat.com> R: Peter Xu <peterx@redhat.com>
R: Leonardo Bras <leobras@redhat.com>
S: Odd Fixes S: Odd Fixes
F: migration/rdma* F: migration/rdma*
@@ -3447,13 +3357,6 @@ F: include/sysemu/dirtylimit.h
F: migration/dirtyrate.c F: migration/dirtyrate.c
F: migration/dirtyrate.h F: migration/dirtyrate.h
F: include/sysemu/dirtyrate.h F: include/sysemu/dirtyrate.h
F: docs/devel/migration/dirty-limit.rst
Detached LUKS header
M: Hyman Huang <yong.huang@smartx.com>
S: Maintained
F: tests/qemu-iotests/tests/luks-detached-header
F: docs/devel/luks-detached-header.rst
D-Bus D-Bus
M: Marc-André Lureau <marcandre.lureau@redhat.com> M: Marc-André Lureau <marcandre.lureau@redhat.com>
@@ -3487,7 +3390,7 @@ F: qapi/crypto.json
F: tests/unit/test-crypto-* F: tests/unit/test-crypto-*
F: tests/bench/benchmark-crypto-* F: tests/bench/benchmark-crypto-*
F: tests/unit/crypto-tls-* F: tests/unit/crypto-tls-*
F: tests/unit/pkix_asn1_tab.c.inc F: tests/unit/pkix_asn1_tab.c
F: qemu.sasl F: qemu.sasl
Coroutines Coroutines
@@ -3604,7 +3507,6 @@ F: util/iova-tree.c
elf2dmp elf2dmp
M: Viktor Prutyanov <viktor.prutyanov@phystech.edu> M: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
R: Akihiko Odaki <akihiko.odaki@daynix.com>
S: Maintained S: Maintained
F: contrib/elf2dmp/ F: contrib/elf2dmp/
@@ -3639,15 +3541,6 @@ F: tests/qtest/adm1272-test.c
F: tests/qtest/max34451-test.c F: tests/qtest/max34451-test.c
F: tests/qtest/isl_pmbus_vr-test.c F: tests/qtest/isl_pmbus_vr-test.c
FSI
M: Ninad Palsule <ninad@linux.ibm.com>
R: Cédric Le Goater <clg@kaod.org>
S: Maintained
F: hw/fsi/*
F: include/hw/fsi/*
F: docs/specs/fsi.rst
F: tests/qtest/aspeed_fsi-test.c
Firmware schema specifications Firmware schema specifications
M: Philippe Mathieu-Daudé <philmd@linaro.org> M: Philippe Mathieu-Daudé <philmd@linaro.org>
R: Daniel P. Berrange <berrange@redhat.com> R: Daniel P. Berrange <berrange@redhat.com>
@@ -3670,8 +3563,8 @@ F: tests/uefi-test-tools/
VT-d Emulation VT-d Emulation
M: Michael S. Tsirkin <mst@redhat.com> M: Michael S. Tsirkin <mst@redhat.com>
M: Peter Xu <peterx@redhat.com>
R: Jason Wang <jasowang@redhat.com> R: Jason Wang <jasowang@redhat.com>
R: Yi Liu <yi.l.liu@intel.com>
S: Supported S: Supported
F: hw/i386/intel_iommu.c F: hw/i386/intel_iommu.c
F: hw/i386/intel_iommu_internal.h F: hw/i386/intel_iommu_internal.h
@@ -3699,16 +3592,6 @@ F: hw/core/clock-vmstate.c
F: hw/core/qdev-clock.c F: hw/core/qdev-clock.c
F: docs/devel/clocks.rst F: docs/devel/clocks.rst
Reset framework
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: include/hw/resettable.h
F: include/hw/core/resetcontainer.h
F: include/sysemu/reset.h
F: hw/core/reset.c
F: hw/core/resettable.c
F: hw/core/resetcontainer.c
Usermode Emulation Usermode Emulation
------------------ ------------------
Overall usermode emulation Overall usermode emulation
@@ -3749,11 +3632,10 @@ TCG Plugins
M: Alex Bennée <alex.bennee@linaro.org> M: Alex Bennée <alex.bennee@linaro.org>
R: Alexandre Iooss <erdnaxe@crans.org> R: Alexandre Iooss <erdnaxe@crans.org>
R: Mahmoud Mandour <ma.mandourr@gmail.com> R: Mahmoud Mandour <ma.mandourr@gmail.com>
R: Pierrick Bouvier <pierrick.bouvier@linaro.org>
S: Maintained S: Maintained
F: docs/devel/tcg-plugins.rst F: docs/devel/tcg-plugins.rst
F: plugins/ F: plugins/
F: tests/tcg/plugins/ F: tests/plugin/
F: tests/avocado/tcg_plugins.py F: tests/avocado/tcg_plugins.py
F: contrib/plugins/ F: contrib/plugins/
@@ -3784,7 +3666,7 @@ M: Philippe Mathieu-Daudé <philmd@linaro.org>
R: Aurelien Jarno <aurelien@aurel32.net> R: Aurelien Jarno <aurelien@aurel32.net>
R: Huacai Chen <chenhuacai@kernel.org> R: Huacai Chen <chenhuacai@kernel.org>
R: Jiaxun Yang <jiaxun.yang@flygoat.com> R: Jiaxun Yang <jiaxun.yang@flygoat.com>
R: Aleksandar Rikalo <arikalo@gmail.com> R: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
S: Odd Fixes S: Odd Fixes
F: tcg/mips/ F: tcg/mips/
@@ -3829,7 +3711,7 @@ F: block/vmdk.c
RBD RBD
M: Ilya Dryomov <idryomov@gmail.com> M: Ilya Dryomov <idryomov@gmail.com>
R: Peter Lieven <pl@dlhnet.de> R: Peter Lieven <pl@kamp.de>
L: qemu-block@nongnu.org L: qemu-block@nongnu.org
S: Supported S: Supported
F: block/rbd.c F: block/rbd.c
@@ -3855,7 +3737,7 @@ F: block/blkio.c
iSCSI iSCSI
M: Ronnie Sahlberg <ronniesahlberg@gmail.com> M: Ronnie Sahlberg <ronniesahlberg@gmail.com>
M: Paolo Bonzini <pbonzini@redhat.com> M: Paolo Bonzini <pbonzini@redhat.com>
M: Peter Lieven <pl@dlhnet.de> M: Peter Lieven <pl@kamp.de>
L: qemu-block@nongnu.org L: qemu-block@nongnu.org
S: Odd Fixes S: Odd Fixes
F: block/iscsi.c F: block/iscsi.c
@@ -3871,14 +3753,14 @@ F: nbd/
F: include/block/nbd* F: include/block/nbd*
F: qemu-nbd.* F: qemu-nbd.*
F: blockdev-nbd.c F: blockdev-nbd.c
F: docs/interop/nbd.rst F: docs/interop/nbd.txt
F: docs/tools/qemu-nbd.rst F: docs/tools/qemu-nbd.rst
F: tests/qemu-iotests/tests/*nbd* F: tests/qemu-iotests/tests/*nbd*
T: git https://repo.or.cz/qemu/ericb.git nbd T: git https://repo.or.cz/qemu/ericb.git nbd
T: git https://gitlab.com/vsementsov/qemu.git block T: git https://gitlab.com/vsementsov/qemu.git block
NFS NFS
M: Peter Lieven <pl@dlhnet.de> M: Peter Lieven <pl@kamp.de>
L: qemu-block@nongnu.org L: qemu-block@nongnu.org
S: Maintained S: Maintained
F: block/nfs.c F: block/nfs.c
@@ -3964,8 +3846,7 @@ L: qemu-block@nongnu.org
S: Supported S: Supported
F: block/parallels.c F: block/parallels.c
F: block/parallels-ext.c F: block/parallels-ext.c
F: docs/interop/parallels.rst F: docs/interop/parallels.txt
F: docs/interop/prl-xml.rst
T: git https://src.openvz.org/scm/~den/qemu.git parallels T: git https://src.openvz.org/scm/~den/qemu.git parallels
qed qed
@@ -4069,6 +3950,16 @@ F: block/replication.c
F: tests/unit/test-replication.c F: tests/unit/test-replication.c
F: docs/block-replication.txt F: docs/block-replication.txt
PVRDMA
M: Yuval Shaia <yuval.shaia.ml@gmail.com>
M: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
S: Odd Fixes
F: hw/rdma/*
F: hw/rdma/vmw/*
F: docs/pvrdma.txt
F: contrib/rdmacm-mux/*
F: qapi/rdma.json
Semihosting Semihosting
M: Alex Bennée <alex.bennee@linaro.org> M: Alex Bennée <alex.bennee@linaro.org>
S: Maintained S: Maintained
@@ -4241,7 +4132,6 @@ F: docs/conf.py
F: docs/*/conf.py F: docs/*/conf.py
F: docs/sphinx/ F: docs/sphinx/
F: docs/_templates/ F: docs/_templates/
F: docs/devel/docs.rst
Miscellaneous Miscellaneous
------------- -------------
@@ -4254,8 +4144,3 @@ Code Coverage Tools
M: Alex Bennée <alex.bennee@linaro.org> M: Alex Bennée <alex.bennee@linaro.org>
S: Odd Fixes S: Odd Fixes
F: scripts/coverage/ F: scripts/coverage/
Machine development tool
M: Maksim Davydov <davydov-max@yandex-team.ru>
S: Supported
F: scripts/compare-machine-types.py

View File

@@ -78,8 +78,7 @@ x := $(shell rm -rf meson-private meson-info meson-logs)
 endif

 # 1. ensure config-host.mak is up-to-date
-config-host.mak: $(SRC_PATH)/configure $(SRC_PATH)/scripts/meson-buildoptions.sh \
-        $(SRC_PATH)/pythondeps.toml $(SRC_PATH)/VERSION
+config-host.mak: $(SRC_PATH)/configure $(SRC_PATH)/scripts/meson-buildoptions.sh $(SRC_PATH)/VERSION
        @echo config-host.mak is out-of-date, running configure
        @if test -f meson-private/coredata.dat; then \
          ./config.status --skip-meson; \
@@ -142,13 +141,8 @@ MAKE.n = $(findstring n,$(firstword $(filter-out --%,$(MAKEFLAGS))))
 MAKE.k = $(findstring k,$(firstword $(filter-out --%,$(MAKEFLAGS))))
 MAKE.q = $(findstring q,$(firstword $(filter-out --%,$(MAKEFLAGS))))
 MAKE.nq = $(if $(word 2, $(MAKE.n) $(MAKE.q)),nq)
-NINJAFLAGS = \
-        $(if $V,-v) \
-        $(if $(MAKE.n), -n) \
-        $(if $(MAKE.k), -k0) \
-        $(filter-out -j, \
-          $(or $(filter -l% -j%, $(MAKEFLAGS)), \
-               $(if $(filter --jobserver-auth=%, $(MAKEFLAGS)),, -j1))) \
+NINJAFLAGS = $(if $V,-v) $(if $(MAKE.n), -n) $(if $(MAKE.k), -k0) \
+        $(filter-out -j, $(lastword -j1 $(filter -l% -j%, $(MAKEFLAGS)))) \
         -d keepdepfile
 ninja-cmd-goals = $(or $(MAKECMDGOALS), all)
 ninja-cmd-goals += $(foreach g, $(MAKECMDGOALS), $(.ninja-goals.$g))
@@ -208,7 +202,6 @@ clean: recurse-clean
        ! -path ./roms/edk2/ArmPkg/Library/GccLto/liblto-arm.a \
        -exec rm {} +
        rm -f TAGS cscope.* *~ */*~
-       @$(MAKE) -Ctests/qemu-iotests clean

 VERSION = $(shell cat $(SRC_PATH)/VERSION)

View File

@@ -82,7 +82,7 @@ guidelines set out in the `style section
 the Developers Guide.

 Additional information on submitting patches can be found online via
-the QEMU website:
+the QEMU website

 * `<https://wiki.qemu.org/Contribute/SubmitAPatch>`_
 * `<https://wiki.qemu.org/Contribute/TrivialPatches>`_
@@ -102,7 +102,7 @@ requires a working 'git send-email' setup, and by default doesn't
 automate everything, so you may want to go through the above steps
 manually for once.

-For installation instructions, please go to:
+For installation instructions, please go to

 * `<https://github.com/stefanha/git-publish>`_
@@ -159,7 +159,7 @@ Contact
 =======

 The QEMU community can be contacted in a number of ways, with the two
-main methods being email and IRC:
+main methods being email and IRC

 * `<mailto:qemu-devel@nongnu.org>`_
 * `<https://lists.nongnu.org/mailman/listinfo/qemu-devel>`_

View File

@@ -1 +1 @@
-9.0.93
+8.1.50

View File

@@ -16,4 +16,3 @@ config KVM
 config XEN
     bool
     select FSDEV_9P if VIRTFS
-    select XEN_BUS

View File

@@ -41,7 +41,7 @@ void accel_blocker_init(void)

 void accel_ioctl_begin(void)
 {
-    if (likely(bql_locked())) {
+    if (likely(qemu_mutex_iothread_locked())) {
         return;
     }
@@ -51,7 +51,7 @@ void accel_ioctl_begin(void)

 void accel_ioctl_end(void)
 {
-    if (likely(bql_locked())) {
+    if (likely(qemu_mutex_iothread_locked())) {
         return;
     }
@@ -62,7 +62,7 @@ void accel_ioctl_end(void)

 void accel_cpu_ioctl_begin(CPUState *cpu)
 {
-    if (unlikely(bql_locked())) {
+    if (unlikely(qemu_mutex_iothread_locked())) {
         return;
     }
@@ -72,7 +72,7 @@ void accel_cpu_ioctl_begin(CPUState *cpu)

 void accel_cpu_ioctl_end(CPUState *cpu)
 {
-    if (unlikely(bql_locked())) {
+    if (unlikely(qemu_mutex_iothread_locked())) {
         return;
     }
@@ -105,7 +105,7 @@ void accel_ioctl_inhibit_begin(void)
      * We allow to inhibit only when holding the BQL, so we can identify
      * when an inhibitor wants to issue an ioctl easily.
      */
-    g_assert(bql_locked());
+    g_assert(qemu_mutex_iothread_locked());

     /* Block further invocations of the ioctls outside the BQL. */
     CPU_FOREACH(cpu) {

View File

@@ -62,7 +62,7 @@ void accel_setup_post(MachineState *ms)
 }

 /* initialize the arch-independent accel operation interfaces */
-void accel_system_init_ops_interfaces(AccelClass *ac)
+void accel_init_ops_interfaces(AccelClass *ac)
 {
     const char *ac_name;
     char *ops_name;

View File

@@ -10,6 +10,6 @@
 #ifndef ACCEL_SYSTEM_H
 #define ACCEL_SYSTEM_H

-void accel_system_init_ops_interfaces(AccelClass *ac);
+void accel_init_ops_interfaces(AccelClass *ac);

 #endif /* ACCEL_SYSTEM_H */

View File

@@ -104,7 +104,7 @@ static void accel_init_cpu_interfaces(AccelClass *ac)
 void accel_init_interfaces(AccelClass *ac)
 {
 #ifndef CONFIG_USER_ONLY
-    accel_system_init_ops_interfaces(ac);
+    accel_init_ops_interfaces(ac);
 #endif /* !CONFIG_USER_ONLY */

     accel_init_cpu_interfaces(ac);

View File

@@ -24,9 +24,10 @@ static void *dummy_cpu_thread_fn(void *arg)

     rcu_register_thread();

-    bql_lock();
+    qemu_mutex_lock_iothread();
     qemu_thread_get_self(cpu->thread);
     cpu->thread_id = qemu_get_thread_id();
+    cpu->neg.can_do_io = true;
     current_cpu = cpu;

 #ifndef _WIN32
@@ -42,7 +43,7 @@ static void *dummy_cpu_thread_fn(void *arg)
     qemu_guest_random_seed_thread_part2(cpu->random_seed);

     do {
-        bql_unlock();
+        qemu_mutex_unlock_iothread();
 #ifndef _WIN32
         do {
             int sig;
@@ -55,11 +56,11 @@ static void *dummy_cpu_thread_fn(void *arg)
 #else
         qemu_sem_wait(&cpu->sem);
 #endif
-        bql_lock();
+        qemu_mutex_lock_iothread();
         qemu_wait_io_event(cpu);
     } while (!cpu->unplug);

-    bql_unlock();
+    qemu_mutex_unlock_iothread();
     rcu_unregister_thread();
     return NULL;
 }
@@ -68,6 +69,9 @@ void dummy_start_vcpu_thread(CPUState *cpu)
 {
     char thread_name[VCPU_THREAD_NAME_SIZE];

+    cpu->thread = g_malloc0(sizeof(QemuThread));
+    cpu->halt_cond = g_malloc0(sizeof(QemuCond));
+    qemu_cond_init(cpu->halt_cond);
     snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/DUMMY",
              cpu->cpu_index);
     qemu_thread_create(cpu->thread, thread_name, dummy_cpu_thread_fn, cpu,

View File

@@ -52,7 +52,7 @@
 #include "qemu/main-loop.h"
 #include "exec/address-spaces.h"
 #include "exec/exec-all.h"
-#include "gdbstub/enums.h"
+#include "exec/gdbstub.h"
 #include "sysemu/cpus.h"
 #include "sysemu/hvf.h"
 #include "sysemu/hvf_int.h"
@@ -204,15 +204,15 @@ static void hvf_set_phys_mem(MemoryRegionSection *section, bool add)

 static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
 {
-    if (!cpu->accel->dirty) {
+    if (!cpu->vcpu_dirty) {
         hvf_get_registers(cpu);
-        cpu->accel->dirty = true;
+        cpu->vcpu_dirty = true;
     }
 }

 static void hvf_cpu_synchronize_state(CPUState *cpu)
 {
-    if (!cpu->accel->dirty) {
+    if (!cpu->vcpu_dirty) {
         run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL);
     }
 }
@@ -221,7 +221,7 @@ static void do_hvf_cpu_synchronize_set_dirty(CPUState *cpu,
                                              run_on_cpu_data arg)
 {
     /* QEMU state is the reference, push it to HVF now and on next entry */
-    cpu->accel->dirty = true;
+    cpu->vcpu_dirty = true;
 }

 static void hvf_cpu_synchronize_post_reset(CPUState *cpu)
@@ -400,9 +400,9 @@ static int hvf_init_vcpu(CPUState *cpu)
     r = hv_vcpu_create(&cpu->accel->fd,
                        (hv_vcpu_exit_t **)&cpu->accel->exit, NULL);
 #else
-    r = hv_vcpu_create(&cpu->accel->fd, HV_VCPU_DEFAULT);
+    r = hv_vcpu_create((hv_vcpuid_t *)&cpu->accel->fd, HV_VCPU_DEFAULT);
 #endif
-    cpu->accel->dirty = true;
+    cpu->vcpu_dirty = 1;
     assert_hvf_ok(r);

     cpu->accel->guest_debug_enabled = false;
@@ -424,10 +424,11 @@ static void *hvf_cpu_thread_fn(void *arg)

     rcu_register_thread();

-    bql_lock();
+    qemu_mutex_lock_iothread();
     qemu_thread_get_self(cpu->thread);
     cpu->thread_id = qemu_get_thread_id();
+    cpu->neg.can_do_io = true;
     current_cpu = cpu;

     hvf_init_vcpu(cpu);
@@ -448,7 +449,7 @@ static void *hvf_cpu_thread_fn(void *arg)

     hvf_vcpu_destroy(cpu);
     cpu_thread_signal_destroyed(cpu);
-    bql_unlock();
+    qemu_mutex_unlock_iothread();
     rcu_unregister_thread();
     return NULL;
 }
@@ -463,6 +464,10 @@ static void hvf_start_vcpu_thread(CPUState *cpu)
      */
     assert(hvf_enabled());

+    cpu->thread = g_malloc0(sizeof(QemuThread));
+    cpu->halt_cond = g_malloc0(sizeof(QemuCond));
+    qemu_cond_init(cpu->halt_cond);
+
     snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/HVF",
              cpu->cpu_index);
     qemu_thread_create(cpu->thread, thread_name, hvf_cpu_thread_fn,


@@ -13,30 +13,40 @@
 #include "sysemu/hvf.h"
 #include "sysemu/hvf_int.h"
 
-const char *hvf_return_string(hv_return_t ret)
-{
-    switch (ret) {
-    case HV_SUCCESS:      return "HV_SUCCESS";
-    case HV_ERROR:        return "HV_ERROR";
-    case HV_BUSY:         return "HV_BUSY";
-    case HV_BAD_ARGUMENT: return "HV_BAD_ARGUMENT";
-    case HV_NO_RESOURCES: return "HV_NO_RESOURCES";
-    case HV_NO_DEVICE:    return "HV_NO_DEVICE";
-    case HV_UNSUPPORTED:  return "HV_UNSUPPORTED";
-    case HV_DENIED:       return "HV_DENIED";
-    default:              return "[unknown hv_return value]";
-    }
-}
-
-void assert_hvf_ok_impl(hv_return_t ret, const char *file, unsigned int line,
-                        const char *exp)
+void assert_hvf_ok(hv_return_t ret)
 {
     if (ret == HV_SUCCESS) {
         return;
     }
 
-    error_report("Error: %s = %s (0x%x, at %s:%u)",
-                 exp, hvf_return_string(ret), ret, file, line);
+    switch (ret) {
+    case HV_ERROR:
+        error_report("Error: HV_ERROR");
+        break;
+    case HV_BUSY:
+        error_report("Error: HV_BUSY");
+        break;
+    case HV_BAD_ARGUMENT:
+        error_report("Error: HV_BAD_ARGUMENT");
+        break;
+    case HV_NO_RESOURCES:
+        error_report("Error: HV_NO_RESOURCES");
+        break;
+    case HV_NO_DEVICE:
+        error_report("Error: HV_NO_DEVICE");
+        break;
+    case HV_UNSUPPORTED:
+        error_report("Error: HV_UNSUPPORTED");
+        break;
+#if defined(MAC_OS_VERSION_11_0) && \
+    MAC_OS_X_VERSION_MIN_REQUIRED >= MAC_OS_VERSION_11_0
+    case HV_DENIED:
+        error_report("Error: HV_DENIED");
+        break;
+#endif
+    default:
+        error_report("Unknown Error");
+    }
 
     abort();
 }


@@ -33,9 +33,10 @@ static void *kvm_vcpu_thread_fn(void *arg)
     rcu_register_thread();
 
-    bql_lock();
+    qemu_mutex_lock_iothread();
     qemu_thread_get_self(cpu->thread);
     cpu->thread_id = qemu_get_thread_id();
+    cpu->neg.can_do_io = true;
     current_cpu = cpu;
 
     r = kvm_init_vcpu(cpu, &error_fatal);
@@ -57,7 +58,7 @@ static void *kvm_vcpu_thread_fn(void *arg)
     kvm_destroy_vcpu(cpu);
     cpu_thread_signal_destroyed(cpu);
-    bql_unlock();
+    qemu_mutex_unlock_iothread();
     rcu_unregister_thread();
     return NULL;
 }
@@ -66,6 +67,9 @@ static void kvm_start_vcpu_thread(CPUState *cpu)
 {
     char thread_name[VCPU_THREAD_NAME_SIZE];
 
+    cpu->thread = g_malloc0(sizeof(QemuThread));
+    cpu->halt_cond = g_malloc0(sizeof(QemuCond));
+    qemu_cond_init(cpu->halt_cond);
     snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/KVM",
              cpu->cpu_index);
     qemu_thread_create(cpu->thread, thread_name, kvm_vcpu_thread_fn,
@@ -79,10 +83,10 @@ static bool kvm_vcpu_thread_is_idle(CPUState *cpu)
 
 static bool kvm_cpus_are_resettable(void)
 {
-    return !kvm_enabled() || !kvm_state->guest_state_protected;
+    return !kvm_enabled() || kvm_cpu_check_are_resettable();
 }
 
-#ifdef TARGET_KVM_HAVE_GUEST_DEBUG
+#ifdef KVM_CAP_SET_GUEST_DEBUG
 static int kvm_update_guest_debug_ops(CPUState *cpu)
 {
     return kvm_update_guest_debug(cpu, 0);
@@ -101,7 +105,7 @@ static void kvm_accel_ops_class_init(ObjectClass *oc, void *data)
     ops->synchronize_state = kvm_cpu_synchronize_state;
     ops->synchronize_pre_loadvm = kvm_cpu_synchronize_pre_loadvm;
 
-#ifdef TARGET_KVM_HAVE_GUEST_DEBUG
+#ifdef KVM_CAP_SET_GUEST_DEBUG
     ops->update_guest_debug = kvm_update_guest_debug_ops;
     ops->supports_guest_debug = kvm_supports_guest_debug;
     ops->insert_breakpoint = kvm_insert_breakpoint;


@@ -27,7 +27,7 @@
 #include "hw/pci/msi.h"
 #include "hw/pci/msix.h"
 #include "hw/s390x/adapter.h"
-#include "gdbstub/enums.h"
+#include "exec/gdbstub.h"
 #include "sysemu/kvm_int.h"
 #include "sysemu/runstate.h"
 #include "sysemu/cpus.h"
@@ -69,6 +69,16 @@
 #define KVM_GUESTDBG_BLOCKIRQ 0
 #endif
 
+//#define DEBUG_KVM
+
+#ifdef DEBUG_KVM
+#define DPRINTF(fmt, ...) \
+    do { fprintf(stderr, fmt, ## __VA_ARGS__); } while (0)
+#else
+#define DPRINTF(fmt, ...) \
+    do { } while (0)
+#endif
+
 struct KVMParkedVcpu {
     unsigned long vcpu_id;
     int kvm_fd;
@@ -88,11 +98,9 @@ bool kvm_allowed;
 bool kvm_readonly_mem_allowed;
 bool kvm_vm_attributes_allowed;
 bool kvm_msi_use_devid;
-static bool kvm_has_guest_debug;
+bool kvm_has_guest_debug;
 static int kvm_sstep_flags;
 static bool kvm_immediate_exit;
-static uint64_t kvm_supported_memory_attributes;
-static bool kvm_guest_memfd_supported;
 static hwaddr kvm_max_slot_size = ~0;
 
 static const KVMCapabilityInfo kvm_required_capabilites[] = {
@@ -284,140 +292,46 @@ int kvm_physical_memory_addr_from_host(KVMState *s, void *ram,
 static int kvm_set_user_memory_region(KVMMemoryListener *kml, KVMSlot *slot, bool new)
 {
     KVMState *s = kvm_state;
-    struct kvm_userspace_memory_region2 mem;
+    struct kvm_userspace_memory_region mem;
     int ret;
 
     mem.slot = slot->slot | (kml->as_id << 16);
     mem.guest_phys_addr = slot->start_addr;
     mem.userspace_addr = (unsigned long)slot->ram;
     mem.flags = slot->flags;
-    mem.guest_memfd = slot->guest_memfd;
-    mem.guest_memfd_offset = slot->guest_memfd_offset;
 
     if (slot->memory_size && !new && (mem.flags ^ slot->old_flags) & KVM_MEM_READONLY) {
         /* Set the slot size to 0 before setting the slot to the desired
          * value. This is needed based on KVM commit 75d61fbc. */
         mem.memory_size = 0;
-        if (kvm_guest_memfd_supported) {
-            ret = kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION2, &mem);
-        } else {
-            ret = kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
-        }
+        ret = kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
         if (ret < 0) {
             goto err;
         }
     }
     mem.memory_size = slot->memory_size;
-    if (kvm_guest_memfd_supported) {
-        ret = kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION2, &mem);
-    } else {
-        ret = kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
-    }
+    ret = kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
     slot->old_flags = mem.flags;
 err:
-    trace_kvm_set_user_memory(mem.slot >> 16, (uint16_t)mem.slot, mem.flags,
-                              mem.guest_phys_addr, mem.memory_size,
-                              mem.userspace_addr, mem.guest_memfd,
-                              mem.guest_memfd_offset, ret);
+    trace_kvm_set_user_memory(mem.slot, mem.flags, mem.guest_phys_addr,
+                              mem.memory_size, mem.userspace_addr, ret);
     if (ret < 0) {
-        if (kvm_guest_memfd_supported) {
-            error_report("%s: KVM_SET_USER_MEMORY_REGION2 failed, slot=%d,"
-                         " start=0x%" PRIx64 ", size=0x%" PRIx64 ","
-                         " flags=0x%" PRIx32 ", guest_memfd=%" PRId32 ","
-                         " guest_memfd_offset=0x%" PRIx64 ": %s",
-                         __func__, mem.slot, slot->start_addr,
-                         (uint64_t)mem.memory_size, mem.flags,
-                         mem.guest_memfd, (uint64_t)mem.guest_memfd_offset,
-                         strerror(errno));
-        } else {
-            error_report("%s: KVM_SET_USER_MEMORY_REGION failed, slot=%d,"
-                         " start=0x%" PRIx64 ", size=0x%" PRIx64 ": %s",
-                         __func__, mem.slot, slot->start_addr,
-                         (uint64_t)mem.memory_size, strerror(errno));
-        }
+        error_report("%s: KVM_SET_USER_MEMORY_REGION failed, slot=%d,"
+                     " start=0x%" PRIx64 ", size=0x%" PRIx64 ": %s",
+                     __func__, mem.slot, slot->start_addr,
+                     (uint64_t)mem.memory_size, strerror(errno));
     }
     return ret;
 }
-void kvm_park_vcpu(CPUState *cpu)
-{
-    struct KVMParkedVcpu *vcpu;
-
-    trace_kvm_park_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
-
-    vcpu = g_malloc0(sizeof(*vcpu));
-    vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
-    vcpu->kvm_fd = cpu->kvm_fd;
-    QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
-}
-
-int kvm_unpark_vcpu(KVMState *s, unsigned long vcpu_id)
-{
-    struct KVMParkedVcpu *cpu;
-    int kvm_fd = -ENOENT;
-
-    QLIST_FOREACH(cpu, &s->kvm_parked_vcpus, node) {
-        if (cpu->vcpu_id == vcpu_id) {
-            QLIST_REMOVE(cpu, node);
-            kvm_fd = cpu->kvm_fd;
-            g_free(cpu);
-            break;
-        }
-    }
-
-    trace_kvm_unpark_vcpu(vcpu_id, kvm_fd > 0 ? "unparked" : "!found parked");
-
-    return kvm_fd;
-}
-
-int kvm_create_vcpu(CPUState *cpu)
-{
-    unsigned long vcpu_id = kvm_arch_vcpu_id(cpu);
-    KVMState *s = kvm_state;
-    int kvm_fd;
-
-    /* check if the KVM vCPU already exist but is parked */
-    kvm_fd = kvm_unpark_vcpu(s, vcpu_id);
-    if (kvm_fd < 0) {
-        /* vCPU not parked: create a new KVM vCPU */
-        kvm_fd = kvm_vm_ioctl(s, KVM_CREATE_VCPU, vcpu_id);
-        if (kvm_fd < 0) {
-            error_report("KVM_CREATE_VCPU IOCTL failed for vCPU %lu", vcpu_id);
-            return kvm_fd;
-        }
-    }
-
-    cpu->kvm_fd = kvm_fd;
-    cpu->kvm_state = s;
-    cpu->vcpu_dirty = true;
-    cpu->dirty_pages = 0;
-    cpu->throttle_us_per_full = 0;
-
-    trace_kvm_create_vcpu(cpu->cpu_index, vcpu_id, kvm_fd);
-
-    return 0;
-}
-
-int kvm_create_and_park_vcpu(CPUState *cpu)
-{
-    int ret = 0;
-
-    ret = kvm_create_vcpu(cpu);
-    if (!ret) {
-        kvm_park_vcpu(cpu);
-    }
-
-    return ret;
-}
-
 static int do_kvm_destroy_vcpu(CPUState *cpu)
 {
     KVMState *s = kvm_state;
     long mmap_size;
+    struct KVMParkedVcpu *vcpu = NULL;
     int ret = 0;
 
-    trace_kvm_destroy_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
+    DPRINTF("kvm_destroy_vcpu\n");
 
     ret = kvm_arch_destroy_vcpu(cpu);
     if (ret < 0) {
@@ -427,7 +341,7 @@ static int do_kvm_destroy_vcpu(CPUState *cpu)
     mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
     if (mmap_size < 0) {
         ret = mmap_size;
-        trace_kvm_failed_get_vcpu_mmap_size();
+        DPRINTF("KVM_GET_VCPU_MMAP_SIZE failed\n");
         goto err;
     }
@@ -443,7 +357,10 @@ static int do_kvm_destroy_vcpu(CPUState *cpu)
         }
     }
 
-    kvm_park_vcpu(cpu);
+    vcpu = g_malloc0(sizeof(*vcpu));
+    vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
+    vcpu->kvm_fd = cpu->kvm_fd;
+    QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
 
 err:
     return ret;
 }
@@ -456,6 +373,24 @@ void kvm_destroy_vcpu(CPUState *cpu)
     }
 }
 
+static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id)
+{
+    struct KVMParkedVcpu *cpu;
+
+    QLIST_FOREACH(cpu, &s->kvm_parked_vcpus, node) {
+        if (cpu->vcpu_id == vcpu_id) {
+            int kvm_fd;
+
+            QLIST_REMOVE(cpu, node);
+            kvm_fd = cpu->kvm_fd;
+            g_free(cpu);
+            return kvm_fd;
+        }
+    }
+
+    return kvm_vm_ioctl(s, KVM_CREATE_VCPU, (void *)vcpu_id);
+}
+
 int kvm_init_vcpu(CPUState *cpu, Error **errp)
 {
     KVMState *s = kvm_state;
@@ -464,14 +399,19 @@ int kvm_init_vcpu(CPUState *cpu, Error **errp)
     trace_kvm_init_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
 
-    ret = kvm_create_vcpu(cpu);
+    ret = kvm_get_vcpu(s, kvm_arch_vcpu_id(cpu));
     if (ret < 0) {
-        error_setg_errno(errp, -ret,
-                         "kvm_init_vcpu: kvm_create_vcpu failed (%lu)",
+        error_setg_errno(errp, -ret, "kvm_init_vcpu: kvm_get_vcpu failed (%lu)",
                          kvm_arch_vcpu_id(cpu));
         goto err;
     }
 
+    cpu->kvm_fd = ret;
+    cpu->kvm_state = s;
+    cpu->vcpu_dirty = true;
+    cpu->dirty_pages = 0;
+    cpu->throttle_us_per_full = 0;
+
     mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
     if (mmap_size < 0) {
         ret = mmap_size;
@@ -503,6 +443,7 @@ int kvm_init_vcpu(CPUState *cpu, Error **errp)
                    PAGE_SIZE * KVM_DIRTY_LOG_PAGE_OFFSET);
         if (cpu->kvm_dirty_gfns == MAP_FAILED) {
             ret = -errno;
+            DPRINTF("mmap'ing vcpu dirty gfns failed: %d\n", ret);
             goto err;
         }
     }
@@ -534,10 +475,6 @@ static int kvm_mem_flags(MemoryRegion *mr)
     if (readonly && kvm_readonly_mem_allowed) {
         flags |= KVM_MEM_READONLY;
     }
-    if (memory_region_has_guest_memfd(mr)) {
-        assert(kvm_guest_memfd_supported);
-        flags |= KVM_MEM_GUEST_MEMFD;
-    }
 
     return flags;
 }
@@ -880,7 +817,7 @@ static void kvm_dirty_ring_flush(void)
      * should always be with BQL held, serialization is guaranteed.
      * However, let's be sure of it.
      */
-    assert(bql_locked());
+    assert(qemu_mutex_iothread_locked());
     /*
      * First make sure to flush the hardware buffers by kicking all
      * vcpus out in a synchronous way.
@@ -1193,11 +1130,6 @@ int kvm_vm_check_extension(KVMState *s, unsigned int extension)
     return ret;
 }
 
-/*
- * We track the poisoned pages to be able to:
- * - replace them on VM reset
- * - block a migration for a VM with a poisoned page
- */
 typedef struct HWPoisonPage {
     ram_addr_t ram_addr;
     QLIST_ENTRY(HWPoisonPage) list;
@@ -1231,11 +1163,6 @@ void kvm_hwpoison_page_add(ram_addr_t ram_addr)
     QLIST_INSERT_HEAD(&hwpoison_page_list, page, list);
 }
 
-bool kvm_hwpoisoned_mem(void)
-{
-    return !QLIST_EMPTY(&hwpoison_page_list);
-}
-
 static uint32_t adjust_ioeventfd_endianness(uint32_t val, uint32_t size)
 {
 #if HOST_BIG_ENDIAN != TARGET_BIG_ENDIAN
@@ -1339,36 +1266,6 @@ void kvm_set_max_memslot_size(hwaddr max_slot_size)
     kvm_max_slot_size = max_slot_size;
 }
 
-static int kvm_set_memory_attributes(hwaddr start, uint64_t size, uint64_t attr)
-{
-    struct kvm_memory_attributes attrs;
-    int r;
-
-    assert((attr & kvm_supported_memory_attributes) == attr);
-    attrs.attributes = attr;
-    attrs.address = start;
-    attrs.size = size;
-    attrs.flags = 0;
-
-    r = kvm_vm_ioctl(kvm_state, KVM_SET_MEMORY_ATTRIBUTES, &attrs);
-    if (r) {
-        error_report("failed to set memory (0x%" HWADDR_PRIx "+0x%" PRIx64 ") "
-                     "with attr 0x%" PRIx64 " error '%s'",
-                     start, size, attr, strerror(errno));
-    }
-    return r;
-}
-
-int kvm_set_memory_attributes_private(hwaddr start, uint64_t size)
-{
-    return kvm_set_memory_attributes(start, size, KVM_MEMORY_ATTRIBUTE_PRIVATE);
-}
-
-int kvm_set_memory_attributes_shared(hwaddr start, uint64_t size)
-{
-    return kvm_set_memory_attributes(start, size, 0);
-}
-
 /* Called with KVMMemoryListener.slots_lock held */
 static void kvm_set_phys_mem(KVMMemoryListener *kml,
                              MemoryRegionSection *section, bool add)
@@ -1465,9 +1362,6 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
         mem->ram_start_offset = ram_start_offset;
         mem->ram = ram;
         mem->flags = kvm_mem_flags(mr);
-        mem->guest_memfd = mr->ram_block->guest_memfd;
-        mem->guest_memfd_offset = (uint8_t*)ram - mr->ram_block->host;
-
         kvm_slot_init_dirty_bitmap(mem);
         err = kvm_set_user_memory_region(kml, mem, true);
         if (err) {
@@ -1475,16 +1369,6 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
                          strerror(-err));
             abort();
         }
-
-        if (memory_region_has_guest_memfd(mr)) {
-            err = kvm_set_memory_attributes_private(start_addr, slot_size);
-            if (err) {
-                error_report("%s: failed to set memory attribute private: %s",
-                             __func__, strerror(-err));
-                exit(1);
-            }
-        }
-
         start_addr += slot_size;
         ram_start_offset += slot_size;
         ram += slot_size;
@@ -1518,9 +1402,9 @@ static void *kvm_dirty_ring_reaper_thread(void *data)
         trace_kvm_dirty_ring_reaper("wakeup");
         r->reaper_state = KVM_DIRTY_RING_REAPER_REAPING;
 
-        bql_lock();
+        qemu_mutex_lock_iothread();
         kvm_dirty_ring_reap(s, NULL);
-        bql_unlock();
+        qemu_mutex_unlock_iothread();
 
         r->reaper_iteration++;
     }
@@ -1953,8 +1837,8 @@ void kvm_irqchip_commit_routes(KVMState *s)
     assert(ret == 0);
 }
 
-void kvm_add_routing_entry(KVMState *s,
-                           struct kvm_irq_routing_entry *entry)
+static void kvm_add_routing_entry(KVMState *s,
+                                  struct kvm_irq_routing_entry *entry)
 {
     struct kvm_irq_routing_entry *new;
     int n, size;
@@ -2051,7 +1935,7 @@ void kvm_irqchip_change_notify(void)
     notifier_list_notify(&kvm_irqchip_change_notifiers, NULL);
 }
 
-int kvm_irqchip_get_virq(KVMState *s)
+static int kvm_irqchip_get_virq(KVMState *s)
 {
     int next_virq;
 
@@ -2116,17 +2000,12 @@ int kvm_irqchip_add_msi_route(KVMRouteChange *c, int vector, PCIDevice *dev)
         return -EINVAL;
     }
 
-    if (s->irq_routes->nr < s->gsi_count) {
-        trace_kvm_irqchip_add_msi_route(dev ? dev->name : (char *)"N/A",
-                                        vector, virq);
-        kvm_add_routing_entry(s, &kroute);
-        kvm_arch_add_msi_route_post(&kroute, vector, dev);
-        c->changes++;
-    } else {
-        kvm_irqchip_release_virq(s, virq);
-        return -ENOSPC;
-    }
+    trace_kvm_irqchip_add_msi_route(dev ? dev->name : (char *)"N/A",
+                                    vector, virq);
+    kvm_add_routing_entry(s, &kroute);
+    kvm_arch_add_msi_route_post(&kroute, vector, dev);
+    c->changes++;
 
     return virq;
 }
@@ -2209,6 +2088,62 @@ static int kvm_irqchip_assign_irqfd(KVMState *s, EventNotifier *event,
     return kvm_vm_ioctl(s, KVM_IRQFD, &irqfd);
 }
 
+int kvm_irqchip_add_adapter_route(KVMState *s, AdapterInfo *adapter)
+{
+    struct kvm_irq_routing_entry kroute = {};
+    int virq;
+
+    if (!kvm_gsi_routing_enabled()) {
+        return -ENOSYS;
+    }
+
+    virq = kvm_irqchip_get_virq(s);
+    if (virq < 0) {
+        return virq;
+    }
+
+    kroute.gsi = virq;
+    kroute.type = KVM_IRQ_ROUTING_S390_ADAPTER;
+    kroute.flags = 0;
+    kroute.u.adapter.summary_addr = adapter->summary_addr;
+    kroute.u.adapter.ind_addr = adapter->ind_addr;
+    kroute.u.adapter.summary_offset = adapter->summary_offset;
+    kroute.u.adapter.ind_offset = adapter->ind_offset;
+    kroute.u.adapter.adapter_id = adapter->adapter_id;
+
+    kvm_add_routing_entry(s, &kroute);
+
+    return virq;
+}
+
+int kvm_irqchip_add_hv_sint_route(KVMState *s, uint32_t vcpu, uint32_t sint)
+{
+    struct kvm_irq_routing_entry kroute = {};
+    int virq;
+
+    if (!kvm_gsi_routing_enabled()) {
+        return -ENOSYS;
+    }
+    if (!kvm_check_extension(s, KVM_CAP_HYPERV_SYNIC)) {
+        return -ENOSYS;
+    }
+    virq = kvm_irqchip_get_virq(s);
+    if (virq < 0) {
+        return virq;
+    }
+
+    kroute.gsi = virq;
+    kroute.type = KVM_IRQ_ROUTING_HV_SINT;
+    kroute.flags = 0;
+    kroute.u.hv_sint.vcpu = vcpu;
+    kroute.u.hv_sint.sint = sint;
+
+    kvm_add_routing_entry(s, &kroute);
+    kvm_irqchip_commit_routes(s);
+
+    return virq;
+}
+
 #else /* !KVM_CAP_IRQ_ROUTING */
 
 void kvm_init_irq_routing(KVMState *s)
@@ -2373,7 +2308,7 @@ bool kvm_vcpu_id_is_valid(int vcpu_id)
 
 bool kvm_dirty_ring_enabled(void)
 {
-    return kvm_state && kvm_state->kvm_dirty_ring_size;
+    return kvm_state->kvm_dirty_ring_size ? true : false;
 }
 
 static void query_stats_cb(StatsResultList **result, StatsTarget target,
@@ -2421,11 +2356,11 @@ static int kvm_init(MachineState *ms)
     s->sigmask_len = 8;
     accel_blocker_init();
 
-#ifdef TARGET_KVM_HAVE_GUEST_DEBUG
+#ifdef KVM_CAP_SET_GUEST_DEBUG
     QTAILQ_INIT(&s->kvm_sw_breakpoints);
 #endif
     QLIST_INIT(&s->kvm_parked_vcpus);
-    s->fd = qemu_open_old(s->device ?: "/dev/kvm", O_RDWR);
+    s->fd = qemu_open_old("/dev/kvm", O_RDWR);
     if (s->fd == -1) {
         fprintf(stderr, "Could not access KVM kernel module: %m\n");
         ret = -errno;
@@ -2447,12 +2382,6 @@ static int kvm_init(MachineState *ms)
         goto err;
     }
 
-    kvm_supported_memory_attributes = kvm_check_extension(s, KVM_CAP_MEMORY_ATTRIBUTES);
-    kvm_guest_memfd_supported =
-        kvm_check_extension(s, KVM_CAP_GUEST_MEMFD) &&
-        kvm_check_extension(s, KVM_CAP_USER_MEMORY2) &&
-        (kvm_supported_memory_attributes & KVM_MEMORY_ATTRIBUTE_PRIVATE);
-
     kvm_immediate_exit = kvm_check_extension(s, KVM_CAP_IMMEDIATE_EXIT);
     s->nr_slots = kvm_check_extension(s, KVM_CAP_NR_MEMSLOTS);
 
@@ -2611,7 +2540,7 @@ static int kvm_init(MachineState *ms)
     kvm_vm_attributes_allowed =
         (kvm_check_extension(s, KVM_CAP_VM_ATTRIBUTES) > 0);
 
-#ifdef TARGET_KVM_HAVE_GUEST_DEBUG
+#ifdef KVM_CAP_SET_GUEST_DEBUG
     kvm_has_guest_debug =
         (kvm_check_extension(s, KVM_CAP_SET_GUEST_DEBUG) > 0);
 #endif
@@ -2620,7 +2549,7 @@ static int kvm_init(MachineState *ms)
     if (kvm_has_guest_debug) {
         kvm_sstep_flags = SSTEP_ENABLE;
 
-#if defined TARGET_KVM_HAVE_GUEST_DEBUG
+#if defined KVM_CAP_SET_GUEST_DEBUG2
         int guest_debug_flags =
             kvm_check_extension(s, KVM_CAP_SET_GUEST_DEBUG2);
@@ -2763,9 +2692,14 @@ void kvm_flush_coalesced_mmio_buffer(void)
     s->coalesced_flush_in_progress = false;
 }
 
+bool kvm_cpu_check_are_resettable(void)
+{
+    return kvm_arch_cpu_check_are_resettable();
+}
+
 static void do_kvm_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
 {
-    if (!cpu->vcpu_dirty && !kvm_state->guest_state_protected) {
+    if (!cpu->vcpu_dirty) {
         int ret = kvm_arch_get_registers(cpu);
         if (ret) {
             error_report("Failed to get registers: %s", strerror(-ret));
@@ -2779,7 +2713,7 @@ static void do_kvm_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
 
 void kvm_cpu_synchronize_state(CPUState *cpu)
 {
-    if (!cpu->vcpu_dirty && !kvm_state->guest_state_protected) {
+    if (!cpu->vcpu_dirty) {
         run_on_cpu(cpu, do_kvm_cpu_synchronize_state, RUN_ON_CPU_NULL);
     }
 }
@@ -2814,13 +2748,7 @@ static void do_kvm_cpu_synchronize_post_init(CPUState *cpu, run_on_cpu_data arg)
 
 void kvm_cpu_synchronize_post_init(CPUState *cpu)
 {
-    if (!kvm_state->guest_state_protected) {
-        /*
-         * This runs before the machine_init_done notifiers, and is the last
-         * opportunity to synchronize the state of confidential guests.
-         */
-        run_on_cpu(cpu, do_kvm_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
-    }
+    run_on_cpu(cpu, do_kvm_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
 }
 
 static void do_kvm_cpu_synchronize_pre_loadvm(CPUState *cpu, run_on_cpu_data arg)
@@ -2888,107 +2816,19 @@ static void kvm_eat_signals(CPUState *cpu)
     } while (sigismember(&chkset, SIG_IPI));
 }
 
-int kvm_convert_memory(hwaddr start, hwaddr size, bool to_private)
-{
-    MemoryRegionSection section;
-    ram_addr_t offset;
-    MemoryRegion *mr;
-    RAMBlock *rb;
-    void *addr;
-    int ret = -1;
-
-    trace_kvm_convert_memory(start, size, to_private ? "shared_to_private" : "private_to_shared");
-
-    if (!QEMU_PTR_IS_ALIGNED(start, qemu_real_host_page_size()) ||
-        !QEMU_PTR_IS_ALIGNED(size, qemu_real_host_page_size())) {
-        return -1;
-    }
-
-    if (!size) {
-        return -1;
-    }
-
-    section = memory_region_find(get_system_memory(), start, size);
-    mr = section.mr;
-    if (!mr) {
-        /*
-         * Ignore converting non-assigned region to shared.
-         *
-         * TDX requires vMMIO region to be shared to inject #VE to guest.
-         * OVMF issues conservatively MapGPA(shared) on 32bit PCI MMIO region,
-         * and vIO-APIC 0xFEC00000 4K page.
-         * OVMF assigns 32bit PCI MMIO region to
-         * [top of low memory: typically 2GB=0xC000000, 0xFC00000)
-         */
-        if (!to_private) {
-            return 0;
-        }
-        return -1;
-    }
-
-    if (!memory_region_has_guest_memfd(mr)) {
-        /*
-         * Because vMMIO region must be shared, guest TD may convert vMMIO
-         * region to shared explicitly. Don't complain such case. See
-         * memory_region_type() for checking if the region is MMIO region.
-         */
-        if (!to_private &&
-            !memory_region_is_ram(mr) &&
-            !memory_region_is_ram_device(mr) &&
-            !memory_region_is_rom(mr) &&
-            !memory_region_is_romd(mr)) {
-            ret = 0;
-        } else {
-            error_report("Convert non guest_memfd backed memory region "
-                         "(0x%"HWADDR_PRIx" ,+ 0x%"HWADDR_PRIx") to %s",
-                         start, size, to_private ? "private" : "shared");
-        }
-        goto out_unref;
-    }
-
-    if (to_private) {
-        ret = kvm_set_memory_attributes_private(start, size);
-    } else {
-        ret = kvm_set_memory_attributes_shared(start, size);
-    }
-    if (ret) {
-        goto out_unref;
-    }
-
-    addr = memory_region_get_ram_ptr(mr) + section.offset_within_region;
-    rb = qemu_ram_block_from_host(addr, false, &offset);
-
-    if (to_private) {
-        if (rb->page_size != qemu_real_host_page_size()) {
-            /*
-             * shared memory is backed by hugetlb, which is supposed to be
-             * pre-allocated and doesn't need to be discarded
-             */
-            goto out_unref;
-        }
-        ret = ram_block_discard_range(rb, offset, size);
-    } else {
-        ret = ram_block_discard_guest_memfd_range(rb, offset, size);
-    }
-
-out_unref:
-    memory_region_unref(mr);
-    return ret;
-}
-
 int kvm_cpu_exec(CPUState *cpu)
 {
     struct kvm_run *run = cpu->kvm_run;
     int ret, run_ret;
 
-    trace_kvm_cpu_exec();
+    DPRINTF("kvm_cpu_exec()\n");
 
     if (kvm_arch_process_async_events(cpu)) {
         qatomic_set(&cpu->exit_request, 0);
         return EXCP_HLT;
     }
 
-    bql_unlock();
+    qemu_mutex_unlock_iothread();
     cpu_exec_start(cpu);
 
     do {
@@ -3008,7 +2848,7 @@ int kvm_cpu_exec(CPUState *cpu)
         kvm_arch_pre_run(cpu, run);
         if (qatomic_read(&cpu->exit_request)) {
-            trace_kvm_interrupt_exit_request();
+            DPRINTF("interrupt exit requested\n");
             /*
              * KVM requires us to reenter the kernel after IO exits to complete
              * instruction emulation. This self-signal will ensure that we
@@ -3028,40 +2868,39 @@ int kvm_cpu_exec(CPUState *cpu)
 #ifdef KVM_HAVE_MCE_INJECTION
         if (unlikely(have_sigbus_pending)) {
-            bql_lock();
+            qemu_mutex_lock_iothread();
             kvm_arch_on_sigbus_vcpu(cpu, pending_sigbus_code,
                                     pending_sigbus_addr);
             have_sigbus_pending = false;
-            bql_unlock();
+            qemu_mutex_unlock_iothread();
         }
 #endif
 
         if (run_ret < 0) {
             if (run_ret == -EINTR || run_ret == -EAGAIN) {
-                trace_kvm_io_window_exit();
+                DPRINTF("io window exit\n");
                 kvm_eat_signals(cpu);
                 ret = EXCP_INTERRUPT;
                 break;
             }
-            if (!(run_ret == -EFAULT && run->exit_reason == KVM_EXIT_MEMORY_FAULT)) {
-                fprintf(stderr, "error: kvm run failed %s\n",
-                        strerror(-run_ret));
+            fprintf(stderr, "error: kvm run failed %s\n",
+                    strerror(-run_ret));
 #ifdef TARGET_PPC
             if (run_ret == -EBUSY) {
                 fprintf(stderr,
                         "This is probably because your SMT is enabled.\n"
                         "VCPU can only run on primary threads with all "
                         "secondary threads offline.\n");
             }
-            }
-#endif
-            ret = -1;
-            break;
+#endif
+            ret = -1;
+            break;
         }
trace_kvm_run_exit(cpu->cpu_index, run->exit_reason); trace_kvm_run_exit(cpu->cpu_index, run->exit_reason);
switch (run->exit_reason) { switch (run->exit_reason) {
case KVM_EXIT_IO: case KVM_EXIT_IO:
DPRINTF("handle_io\n");
/* Called outside BQL */ /* Called outside BQL */
kvm_handle_io(run->io.port, attrs, kvm_handle_io(run->io.port, attrs,
(uint8_t *)run + run->io.data_offset, (uint8_t *)run + run->io.data_offset,
@@ -3071,6 +2910,7 @@ int kvm_cpu_exec(CPUState *cpu)
ret = 0; ret = 0;
break; break;
case KVM_EXIT_MMIO: case KVM_EXIT_MMIO:
DPRINTF("handle_mmio\n");
/* Called outside BQL */ /* Called outside BQL */
address_space_rw(&address_space_memory, address_space_rw(&address_space_memory,
run->mmio.phys_addr, attrs, run->mmio.phys_addr, attrs,
@@ -3080,9 +2920,11 @@ int kvm_cpu_exec(CPUState *cpu)
ret = 0; ret = 0;
break; break;
case KVM_EXIT_IRQ_WINDOW_OPEN: case KVM_EXIT_IRQ_WINDOW_OPEN:
DPRINTF("irq_window_open\n");
ret = EXCP_INTERRUPT; ret = EXCP_INTERRUPT;
break; break;
case KVM_EXIT_SHUTDOWN: case KVM_EXIT_SHUTDOWN:
DPRINTF("shutdown\n");
qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET); qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
ret = EXCP_INTERRUPT; ret = EXCP_INTERRUPT;
break; break;
@@ -3100,7 +2942,7 @@ int kvm_cpu_exec(CPUState *cpu)
             * still full. Got kicked by KVM_RESET_DIRTY_RINGS.
             */
            trace_kvm_dirty_ring_full(cpu->cpu_index);
-            bql_lock();
+            qemu_mutex_lock_iothread();
            /*
             * We throttle vCPU by making it sleep once it exit from kernel
             * due to dirty ring full. In the dirtylimit scenario, reaping
@@ -3112,12 +2954,11 @@ int kvm_cpu_exec(CPUState *cpu)
            } else {
                kvm_dirty_ring_reap(kvm_state, NULL);
            }
-            bql_unlock();
+            qemu_mutex_unlock_iothread();
            dirtylimit_vcpu_execute(cpu);
            ret = 0;
            break;
        case KVM_EXIT_SYSTEM_EVENT:
-            trace_kvm_run_exit_system_event(cpu->cpu_index, run->system_event.type);
            switch (run->system_event.type) {
            case KVM_SYSTEM_EVENT_SHUTDOWN:
                qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
@@ -3129,37 +2970,26 @@ int kvm_cpu_exec(CPUState *cpu)
                break;
            case KVM_SYSTEM_EVENT_CRASH:
                kvm_cpu_synchronize_state(cpu);
-                bql_lock();
+                qemu_mutex_lock_iothread();
                qemu_system_guest_panicked(cpu_get_crash_info(cpu));
-                bql_unlock();
+                qemu_mutex_unlock_iothread();
                ret = 0;
                break;
            default:
+                DPRINTF("kvm_arch_handle_exit\n");
                ret = kvm_arch_handle_exit(cpu, run);
                break;
            }
            break;
-        case KVM_EXIT_MEMORY_FAULT:
-            trace_kvm_memory_fault(run->memory_fault.gpa,
-                                   run->memory_fault.size,
-                                   run->memory_fault.flags);
-            if (run->memory_fault.flags & ~KVM_MEMORY_EXIT_FLAG_PRIVATE) {
-                error_report("KVM_EXIT_MEMORY_FAULT: Unknown flag 0x%" PRIx64,
-                             (uint64_t)run->memory_fault.flags);
-                ret = -1;
-                break;
-            }
-            ret = kvm_convert_memory(run->memory_fault.gpa, run->memory_fault.size,
-                                     run->memory_fault.flags & KVM_MEMORY_EXIT_FLAG_PRIVATE);
-            break;
        default:
+            DPRINTF("kvm_arch_handle_exit\n");
            ret = kvm_arch_handle_exit(cpu, run);
            break;
        }
    } while (ret == 0);

    cpu_exec_end(cpu);
-    bql_lock();
+    qemu_mutex_lock_iothread();

    if (ret < 0) {
        cpu_dump_state(cpu, stderr, CPU_DUMP_CODE);
@@ -3328,7 +3158,7 @@ bool kvm_arm_supports_user_irq(void)
    return kvm_check_extension(kvm_state, KVM_CAP_ARM_USER_IRQ);
}

-#ifdef TARGET_KVM_HAVE_GUEST_DEBUG
+#ifdef KVM_CAP_SET_GUEST_DEBUG
struct kvm_sw_breakpoint *kvm_find_sw_breakpoint(CPUState *cpu, vaddr pc)
{
    struct kvm_sw_breakpoint *bp;
@@ -3488,7 +3318,7 @@ void kvm_remove_all_breakpoints(CPUState *cpu)
        }
    }
}
-#endif /* !TARGET_KVM_HAVE_GUEST_DEBUG */
+#endif /* !KVM_CAP_SET_GUEST_DEBUG */

static int kvm_set_signal_mask(CPUState *cpu, const sigset_t *sigset)
{
@@ -3771,39 +3601,6 @@ static void kvm_set_dirty_ring_size(Object *obj, Visitor *v,
    s->kvm_dirty_ring_size = value;
}

-static char *kvm_get_device(Object *obj,
-                            Error **errp G_GNUC_UNUSED)
-{
-    KVMState *s = KVM_STATE(obj);
-    return g_strdup(s->device);
-}
-
-static void kvm_set_device(Object *obj,
-                           const char *value,
-                           Error **errp G_GNUC_UNUSED)
-{
-    KVMState *s = KVM_STATE(obj);
-    g_free(s->device);
-    s->device = g_strdup(value);
-}
-
-static void kvm_set_kvm_rapl(Object *obj, bool value, Error **errp)
-{
-    KVMState *s = KVM_STATE(obj);
-    s->msr_energy.enable = value;
-}
-
-static void kvm_set_kvm_rapl_socket_path(Object *obj,
-                                         const char *str,
-                                         Error **errp)
-{
-    KVMState *s = KVM_STATE(obj);
-    g_free(s->msr_energy.socket_path);
-    s->msr_energy.socket_path = g_strdup(str);
-}
-
static void kvm_accel_instance_init(Object *obj)
{
    KVMState *s = KVM_STATE(obj);
@@ -3822,8 +3619,6 @@ static void kvm_accel_instance_init(Object *obj)
    s->xen_version = 0;
    s->xen_gnttab_max_frames = 64;
    s->xen_evtchn_max_pirq = 256;
-    s->device = NULL;
-    s->msr_energy.enable = false;
}

/**
@@ -3864,21 +3659,6 @@ static void kvm_accel_class_init(ObjectClass *oc, void *data)
    object_class_property_set_description(oc, "dirty-ring-size",
        "Size of KVM dirty page ring buffer (default: 0, i.e. use bitmap)");

-    object_class_property_add_str(oc, "device", kvm_get_device, kvm_set_device);
-    object_class_property_set_description(oc, "device",
-        "Path to the device node to use (default: /dev/kvm)");
-
-    object_class_property_add_bool(oc, "rapl",
-                                   NULL,
-                                   kvm_set_kvm_rapl);
-    object_class_property_set_description(oc, "rapl",
-        "Allow energy related MSRs for RAPL interface in Guest");
-
-    object_class_property_add_str(oc, "rapl-helper-socket", NULL,
-                                  kvm_set_kvm_rapl_socket_path);
-    object_class_property_set_description(oc, "rapl-helper-socket",
-        "Socket Path for comminucating with the Virtual MSR helper daemon");
-
    kvm_arch_accel_class_init(oc);
}
@@ -3949,7 +3729,7 @@ static StatsList *add_kvmstat_entry(struct kvm_stats_desc *pdesc,
    /* Alloc and populate data list */
    stats = g_new0(Stats, 1);
    stats->name = g_strdup(pdesc->name);
-    stats->value = g_new0(StatsValue, 1);
+    stats->value = g_new0(StatsValue, 1);;

    if ((pdesc->flags & KVM_STATS_UNIT_MASK) == KVM_STATS_UNIT_BOOLEAN) {
        stats->value->u.boolean = *stats_data;
@@ -4297,30 +4077,3 @@ void query_stats_schemas_cb(StatsSchemaList **result, Error **errp)
        query_stats_schema_vcpu(first_cpu, &stats_args);
    }
}

-void kvm_mark_guest_state_protected(void)
-{
-    kvm_state->guest_state_protected = true;
-}
-
-int kvm_create_guest_memfd(uint64_t size, uint64_t flags, Error **errp)
-{
-    int fd;
-    struct kvm_create_guest_memfd guest_memfd = {
-        .size = size,
-        .flags = flags,
-    };
-
-    if (!kvm_guest_memfd_supported) {
-        error_setg(errp, "KVM does not support guest_memfd");
-        return -1;
-    }
-
-    fd = kvm_vm_ioctl(kvm_state, KVM_CREATE_GUEST_MEMFD, &guest_memfd);
-    if (fd < 0) {
-        error_setg_errno(errp, errno, "Error creating KVM guest_memfd");
-        return -1;
-    }
-
-    return fd;
-}


@@ -22,4 +22,5 @@ bool kvm_supports_guest_debug(void);
int kvm_insert_breakpoint(CPUState *cpu, int type, vaddr addr, vaddr len);
int kvm_remove_breakpoint(CPUState *cpu, int type, vaddr addr, vaddr len);
void kvm_remove_all_breakpoints(CPUState *cpu);
#endif /* KVM_CPUS_H */


@@ -9,17 +9,13 @@ kvm_device_ioctl(int fd, int type, void *arg) "dev fd %d, type 0x%x, arg %p"
kvm_failed_reg_get(uint64_t id, const char *msg) "Warning: Unable to retrieve ONEREG %" PRIu64 " from KVM: %s"
kvm_failed_reg_set(uint64_t id, const char *msg) "Warning: Unable to set ONEREG %" PRIu64 " to KVM: %s"
kvm_init_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
-kvm_create_vcpu(int cpu_index, unsigned long arch_cpu_id, int kvm_fd) "index: %d, id: %lu, kvm fd: %d"
-kvm_destroy_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
-kvm_park_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
-kvm_unpark_vcpu(unsigned long arch_cpu_id, const char *msg) "id: %lu %s"
kvm_irqchip_commit_routes(void) ""
kvm_irqchip_add_msi_route(char *name, int vector, int virq) "dev %s vector %d virq %d"
kvm_irqchip_update_msi_route(int virq) "Updating MSI route virq=%d"
kvm_irqchip_release_virq(int virq) "virq %d"
kvm_set_ioeventfd_mmio(int fd, uint64_t addr, uint32_t val, bool assign, uint32_t size, bool datamatch) "fd: %d @0x%" PRIx64 " val=0x%x assign: %d size: %d match: %d"
kvm_set_ioeventfd_pio(int fd, uint16_t addr, uint32_t val, bool assign, uint32_t size, bool datamatch) "fd: %d @0x%x val=0x%x assign: %d size: %d match: %d"
-kvm_set_user_memory(uint16_t as, uint16_t slot, uint32_t flags, uint64_t guest_phys_addr, uint64_t memory_size, uint64_t userspace_addr, uint32_t fd, uint64_t fd_offset, int ret) "AddrSpace#%d Slot#%d flags=0x%x gpa=0x%"PRIx64 " size=0x%"PRIx64 " ua=0x%"PRIx64 " guest_memfd=%d" " guest_memfd_offset=0x%" PRIx64 " ret=%d"
+kvm_set_user_memory(uint32_t slot, uint32_t flags, uint64_t guest_phys_addr, uint64_t memory_size, uint64_t userspace_addr, int ret) "Slot#%d flags=0x%x gpa=0x%"PRIx64 " size=0x%"PRIx64 " ua=0x%"PRIx64 " ret=%d"
kvm_clear_dirty_log(uint32_t slot, uint64_t start, uint32_t size) "slot#%"PRId32" start 0x%"PRIx64" size 0x%"PRIx32
kvm_resample_fd_notify(int gsi) "gsi %d"
kvm_dirty_ring_full(int id) "vcpu %d"
@@ -29,10 +25,4 @@ kvm_dirty_ring_reaper(const char *s) "%s"
kvm_dirty_ring_reap(uint64_t count, int64_t t) "reaped %"PRIu64" pages (took %"PRIi64" us)"
kvm_dirty_ring_reaper_kick(const char *reason) "%s"
kvm_dirty_ring_flush(int finished) "%d"
-kvm_failed_get_vcpu_mmap_size(void) ""
-kvm_cpu_exec(void) ""
-kvm_interrupt_exit_request(void) ""
-kvm_io_window_exit(void) ""
-kvm_run_exit_system_event(int cpu_index, uint32_t event_type) "cpu_index %d, system_even_type %"PRIu32
-kvm_convert_memory(uint64_t start, uint64_t size, const char *msg) "start 0x%" PRIx64 " size 0x%" PRIx64 " %s"
-kvm_memory_fault(uint64_t start, uint64_t size, uint64_t flags) "start 0x%" PRIx64 " size 0x%" PRIx64 " flags 0x%" PRIx64


@@ -24,18 +24,6 @@
#include "qemu/main-loop.h"
#include "hw/core/cpu.h"

-static int64_t qtest_clock_counter;
-
-static int64_t qtest_get_virtual_clock(void)
-{
-    return qatomic_read_i64(&qtest_clock_counter);
-}
-
-static void qtest_set_virtual_clock(int64_t count)
-{
-    qatomic_set_i64(&qtest_clock_counter, count);
-}
-
static int qtest_init_accel(MachineState *ms)
{
    return 0;
@@ -64,7 +52,6 @@ static void qtest_accel_ops_class_init(ObjectClass *oc, void *data)
    ops->create_vcpu_thread = dummy_start_vcpu_thread;
    ops->get_virtual_clock = qtest_get_virtual_clock;
-    ops->set_virtual_clock = qtest_set_virtual_clock;
};

static const TypeInfo qtest_accel_ops_type = {


@@ -124,13 +124,3 @@ uint32_t kvm_dirty_ring_size(void)
{
    return 0;
}

-bool kvm_hwpoisoned_mem(void)
-{
-    return false;
-}
-
-int kvm_create_guest_memfd(uint64_t size, uint64_t flags, Error **errp)
-{
-    return -ENOSYS;
-}


@@ -18,6 +18,24 @@ void tb_flush(CPUState *cpu)
{
}

+void tlb_set_dirty(CPUState *cpu, vaddr vaddr)
+{
+}
+
+int probe_access_flags(CPUArchState *env, vaddr addr, int size,
+                       MMUAccessType access_type, int mmu_idx,
+                       bool nonfault, void **phost, uintptr_t retaddr)
+{
+    g_assert_not_reached();
+}
+
+void *probe_access(CPUArchState *env, vaddr addr, int size,
+                   MMUAccessType access_type, int mmu_idx, uintptr_t retaddr)
+{
+    /* Handled by hardware accelerator. */
+    g_assert_not_reached();
+}
+
G_NORETURN void cpu_loop_exit(CPUState *cpu)
{
    g_assert_not_reached();


@@ -30,6 +30,9 @@
#include "qemu/rcu.h"
#include "exec/log.h"
#include "qemu/main-loop.h"
+#if defined(TARGET_I386) && !defined(CONFIG_USER_ONLY)
+#include "hw/i386/apic.h"
+#endif
#include "sysemu/cpus.h"
#include "exec/cpu-all.h"
#include "sysemu/cpu-timers.h"
@@ -144,16 +147,6 @@ static void init_delay_params(SyncClocks *sc, const CPUState *cpu)
}
#endif /* CONFIG USER ONLY */

-bool tcg_cflags_has(CPUState *cpu, uint32_t flags)
-{
-    return cpu->tcg_cflags & flags;
-}
-
-void tcg_cflags_set(CPUState *cpu, uint32_t flags)
-{
-    cpu->tcg_cflags |= flags;
-}
-
uint32_t curr_cflags(CPUState *cpu)
{
    uint32_t cflags = cpu->tcg_cflags;
@@ -260,29 +253,43 @@ static inline TranslationBlock *tb_lookup(CPUState *cpu, vaddr pc,
    hash = tb_jmp_cache_hash_func(pc);
    jc = cpu->tb_jmp_cache;

-    tb = qatomic_read(&jc->array[hash].tb);
-    if (likely(tb &&
-               jc->array[hash].pc == pc &&
-               tb->cs_base == cs_base &&
-               tb->flags == flags &&
-               tb_cflags(tb) == cflags)) {
-        goto hit;
+    if (cflags & CF_PCREL) {
+        /* Use acquire to ensure current load of pc from jc. */
+        tb = qatomic_load_acquire(&jc->array[hash].tb);
+
+        if (likely(tb &&
+                   jc->array[hash].pc == pc &&
+                   tb->cs_base == cs_base &&
+                   tb->flags == flags &&
+                   tb_cflags(tb) == cflags)) {
+            return tb;
+        }
+        tb = tb_htable_lookup(cpu, pc, cs_base, flags, cflags);
+        if (tb == NULL) {
+            return NULL;
+        }
+        jc->array[hash].pc = pc;
+        /* Ensure pc is written first. */
+        qatomic_store_release(&jc->array[hash].tb, tb);
+    } else {
+        /* Use rcu_read to ensure current load of pc from *tb. */
+        tb = qatomic_rcu_read(&jc->array[hash].tb);
+        if (likely(tb &&
+                   tb->pc == pc &&
+                   tb->cs_base == cs_base &&
+                   tb->flags == flags &&
+                   tb_cflags(tb) == cflags)) {
+            return tb;
+        }
+        tb = tb_htable_lookup(cpu, pc, cs_base, flags, cflags);
+        if (tb == NULL) {
+            return NULL;
+        }
+        /* Use the pc value already stored in tb->pc. */
+        qatomic_set(&jc->array[hash].tb, tb);
    }
-    tb = tb_htable_lookup(cpu, pc, cs_base, flags, cflags);
-    if (tb == NULL) {
-        return NULL;
-    }
-    jc->array[hash].pc = pc;
-    qatomic_set(&jc->array[hash].tb, tb);
-
-hit:
-    /*
-     * As long as tb is not NULL, the contents are consistent. Therefore,
-     * the virtual PC has to match for non-CF_PCREL translations.
-     */
-    assert((tb_cflags(tb) & CF_PCREL) || tb->pc == pc);
    return tb;
}
@@ -350,9 +357,9 @@ static bool check_for_breakpoints_slow(CPUState *cpu, vaddr pc,
#ifdef CONFIG_USER_ONLY
        g_assert_not_reached();
#else
-        const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
-        assert(tcg_ops->debug_check_breakpoint);
-        match_bp = tcg_ops->debug_check_breakpoint(cpu);
+        CPUClass *cc = CPU_GET_CLASS(cpu);
+        assert(cc->tcg_ops->debug_check_breakpoint);
+        match_bp = cc->tcg_ops->debug_check_breakpoint(cpu);
#endif
    }
@@ -378,7 +385,7 @@ static bool check_for_breakpoints_slow(CPUState *cpu, vaddr pc,
     * breakpoints are removed.
     */
    if (match_page) {
-        *cflags = (*cflags & ~CF_COUNT_MASK) | CF_NO_GOTO_TB | CF_BP_PAGE | 1;
+        *cflags = (*cflags & ~CF_COUNT_MASK) | CF_NO_GOTO_TB | 1;
    }
    return false;
}
@@ -406,14 +413,6 @@ const void *HELPER(lookup_tb_ptr)(CPUArchState *env)
    uint64_t cs_base;
    uint32_t flags, cflags;

-    /*
-     * By definition we've just finished a TB, so I/O is OK.
-     * Avoid the possibility of calling cpu_io_recompile() if
-     * a page table walk triggered by tb_lookup() calling
-     * probe_access_internal() happens to touch an MMIO device.
-     * The next TB, if we chain to it, will clear the flag again.
-     */
-    cpu->neg.can_do_io = true;
    cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
    cflags = curr_cflags(cpu);
@@ -446,6 +445,7 @@ const void *HELPER(lookup_tb_ptr)(CPUArchState *env)
static inline TranslationBlock * QEMU_DISABLE_CFI
cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
{
+    CPUArchState *env = cpu_env(cpu);
    uintptr_t ret;
    TranslationBlock *last_tb;
    const void *tb_ptr = itb->tc.ptr;
@@ -455,7 +455,7 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
    }

    qemu_thread_jit_execute();
-    ret = tcg_qemu_tb_exec(cpu_env(cpu), tb_ptr);
+    ret = tcg_qemu_tb_exec(env, tb_ptr);
    cpu->neg.can_do_io = true;
    qemu_plugin_disable_mem_helpers(cpu);
    /*
@@ -476,11 +476,10 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
         * counter hit zero); we must restore the guest PC to the address
         * of the start of the TB.
         */
-        CPUClass *cc = cpu->cc;
-        const TCGCPUOps *tcg_ops = cc->tcg_ops;
-
-        if (tcg_ops->synchronize_from_tb) {
-            tcg_ops->synchronize_from_tb(cpu, last_tb);
+        CPUClass *cc = CPU_GET_CLASS(cpu);
+
+        if (cc->tcg_ops->synchronize_from_tb) {
+            cc->tcg_ops->synchronize_from_tb(cpu, last_tb);
        } else {
            tcg_debug_assert(!(tb_cflags(last_tb) & CF_PCREL));
            assert(cc->set_pc);
@@ -512,19 +511,19 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
static void cpu_exec_enter(CPUState *cpu)
{
-    const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
-
-    if (tcg_ops->cpu_exec_enter) {
-        tcg_ops->cpu_exec_enter(cpu);
+    CPUClass *cc = CPU_GET_CLASS(cpu);
+
+    if (cc->tcg_ops->cpu_exec_enter) {
+        cc->tcg_ops->cpu_exec_enter(cpu);
    }
}

static void cpu_exec_exit(CPUState *cpu)
{
-    const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
-
-    if (tcg_ops->cpu_exec_exit) {
-        tcg_ops->cpu_exec_exit(cpu);
+    CPUClass *cc = CPU_GET_CLASS(cpu);
+
+    if (cc->tcg_ops->cpu_exec_exit) {
+        cc->tcg_ops->cpu_exec_exit(cpu);
    }
}
@@ -559,8 +558,8 @@ static void cpu_exec_longjmp_cleanup(CPUState *cpu)
    tcg_ctx->gen_tb = NULL;
    }
#endif
-    if (bql_locked()) {
-        bql_unlock();
+    if (qemu_mutex_iothread_locked()) {
+        qemu_mutex_unlock_iothread();
    }
    assert_no_pages_locked();
}
@@ -678,10 +677,16 @@ static inline bool cpu_handle_halt(CPUState *cpu)
{
#ifndef CONFIG_USER_ONLY
    if (cpu->halted) {
-        const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
-        bool leave_halt = tcg_ops->cpu_exec_halt(cpu);
-
-        if (!leave_halt) {
+#if defined(TARGET_I386)
+        if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
+            X86CPU *x86_cpu = X86_CPU(cpu);
+            qemu_mutex_lock_iothread();
+            apic_poll_irq(x86_cpu->apic_state);
+            cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
+            qemu_mutex_unlock_iothread();
+        }
+#endif /* TARGET_I386 */
+        if (!cpu_has_work(cpu)) {
            return true;
        }
@@ -694,7 +699,7 @@ static inline bool cpu_handle_halt(CPUState *cpu)
static inline void cpu_handle_debug_exception(CPUState *cpu)
{
-    const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
+    CPUClass *cc = CPU_GET_CLASS(cpu);
    CPUWatchpoint *wp;

    if (!cpu->watchpoint_hit) {
@@ -703,8 +708,8 @@ static inline void cpu_handle_debug_exception(CPUState *cpu)
        }
    }

-    if (tcg_ops->debug_excp_handler) {
-        tcg_ops->debug_excp_handler(cpu);
+    if (cc->tcg_ops->debug_excp_handler) {
+        cc->tcg_ops->debug_excp_handler(cpu);
    }
}
@@ -716,12 +721,11 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
            && cpu->neg.icount_decr.u16.low + cpu->icount_extra == 0) {
            /* Execute just one insn to trigger exception pending in the log */
            cpu->cflags_next_tb = (curr_cflags(cpu) & ~CF_USE_ICOUNT)
-                | CF_NOIRQ | 1;
+                | CF_LAST_IO | CF_NOIRQ | 1;
        }
#endif
        return false;
    }
    if (cpu->exception_index >= EXCP_INTERRUPT) {
        /* exit request from the cpu execution loop */
        *ret = cpu->exception_index;
@@ -730,59 +734,62 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
        }
        cpu->exception_index = -1;
        return true;
-    }
+    } else {
#if defined(CONFIG_USER_ONLY)
-        /*
-         * If user mode only, we simulate a fake exception which will be
-         * handled outside the cpu execution loop.
-         */
+        /* if user mode only, we simulate a fake exception
+           which will be handled outside the cpu execution
+           loop */
#if defined(TARGET_I386)
-        const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
-        tcg_ops->fake_user_interrupt(cpu);
+        CPUClass *cc = CPU_GET_CLASS(cpu);
+        cc->tcg_ops->fake_user_interrupt(cpu);
#endif /* TARGET_I386 */
        *ret = cpu->exception_index;
        cpu->exception_index = -1;
        return true;
#else
        if (replay_exception()) {
-            const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
-            bql_lock();
-            tcg_ops->do_interrupt(cpu);
-            bql_unlock();
+            CPUClass *cc = CPU_GET_CLASS(cpu);
+            qemu_mutex_lock_iothread();
+            cc->tcg_ops->do_interrupt(cpu);
+            qemu_mutex_unlock_iothread();
            cpu->exception_index = -1;

            if (unlikely(cpu->singlestep_enabled)) {
                /*
                 * After processing the exception, ensure an EXCP_DEBUG is
                 * raised when single-stepping so that GDB doesn't miss the
                 * next instruction.
                 */
                *ret = EXCP_DEBUG;
                cpu_handle_debug_exception(cpu);
                return true;
            }
        } else if (!replay_has_interrupt()) {
            /* give a chance to iothread in replay mode */
            *ret = EXCP_INTERRUPT;
            return true;
        }
#endif
+    }

    return false;
}

-static inline bool icount_exit_request(CPUState *cpu)
-{
-    if (!icount_enabled()) {
-        return false;
-    }
-    if (cpu->cflags_next_tb != -1 && !(cpu->cflags_next_tb & CF_USE_ICOUNT)) {
-        return false;
-    }
-    return cpu->neg.icount_decr.u16.low + cpu->icount_extra == 0;
-}
+#ifndef CONFIG_USER_ONLY
+/*
+ * CPU_INTERRUPT_POLL is a virtual event which gets converted into a
+ * "real" interrupt event later. It does not need to be recorded for
+ * replay purposes.
+ */
+static inline bool need_replay_interrupt(int interrupt_request)
+{
+#if defined(TARGET_I386)
+    return !(interrupt_request & CPU_INTERRUPT_POLL);
+#else
+    return true;
+#endif
+}
+#endif /* !CONFIG_USER_ONLY */

static inline bool cpu_handle_interrupt(CPUState *cpu,
                                        TranslationBlock **last_tb)
@@ -805,7 +812,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
    if (unlikely(qatomic_read(&cpu->interrupt_request))) {
        int interrupt_request;
-        bql_lock();
+        qemu_mutex_lock_iothread();
        interrupt_request = cpu->interrupt_request;
        if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
            /* Mask out external interrupts for this step. */
@@ -814,7 +821,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
        if (interrupt_request & CPU_INTERRUPT_DEBUG) {
            cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
            cpu->exception_index = EXCP_DEBUG;
-            bql_unlock();
+            qemu_mutex_unlock_iothread();
            return true;
        }
#if !defined(CONFIG_USER_ONLY)
@@ -825,7 +832,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
            cpu->interrupt_request &= ~CPU_INTERRUPT_HALT;
            cpu->halted = 1;
            cpu->exception_index = EXCP_HLT;
-            bql_unlock();
+            qemu_mutex_unlock_iothread();
            return true;
        }
#if defined(TARGET_I386)
@@ -836,14 +843,14 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
            cpu_svm_check_intercept_param(env, SVM_EXIT_INIT, 0, 0);
            do_cpu_init(x86_cpu);
            cpu->exception_index = EXCP_HALTED;
-            bql_unlock();
+            qemu_mutex_unlock_iothread();
            return true;
        }
#else
        else if (interrupt_request & CPU_INTERRUPT_RESET) {
            replay_interrupt();
            cpu_reset(cpu);
-            bql_unlock();
+            qemu_mutex_unlock_iothread();
            return true;
        }
#endif /* !TARGET_I386 */
@@ -852,11 +859,11 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
           True when it is, and we should restart on a new TB,
           and via longjmp via cpu_loop_exit.  */
        else {
-            const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
-
-            if (tcg_ops->cpu_exec_interrupt(cpu, interrupt_request)) {
-                if (!tcg_ops->need_replay_interrupt ||
-                    tcg_ops->need_replay_interrupt(interrupt_request)) {
+            CPUClass *cc = CPU_GET_CLASS(cpu);
+
+            if (cc->tcg_ops->cpu_exec_interrupt &&
+                cc->tcg_ops->cpu_exec_interrupt(cpu, interrupt_request)) {
+                if (need_replay_interrupt(interrupt_request)) {
                    replay_interrupt();
                }
                /*
@@ -866,7 +873,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
                 */
                if (unlikely(cpu->singlestep_enabled)) {
                    cpu->exception_index = EXCP_DEBUG;
-                    bql_unlock();
+                    qemu_mutex_unlock_iothread();
                    return true;
                }
                cpu->exception_index = -1;
@@ -885,11 +892,14 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
        }
        /* If we exit via cpu_loop_exit/longjmp it is reset in cpu_exec */
-        bql_unlock();
+        qemu_mutex_unlock_iothread();
    }

    /* Finally, check if we need to exit to the main loop.  */
-    if (unlikely(qatomic_read(&cpu->exit_request)) || icount_exit_request(cpu)) {
+    if (unlikely(qatomic_read(&cpu->exit_request))
+        || (icount_enabled()
+            && (cpu->cflags_next_tb == -1 || cpu->cflags_next_tb & CF_USE_ICOUNT)
+            && cpu->neg.icount_decr.u16.low + cpu->icount_extra == 0)) {
        qatomic_set(&cpu->exit_request, 0);
        if (cpu->exception_index == -1) {
            cpu->exception_index = EXCP_INTERRUPT;
@@ -904,6 +914,8 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
                                    vaddr pc, TranslationBlock **last_tb,
                                    int *tb_exit)
{
+    int32_t insns_left;
+
    trace_exec_tb(tb, pc);
    tb = cpu_tb_exec(cpu, tb, tb_exit);
    if (*tb_exit != TB_EXIT_REQUESTED) {
@@ -912,7 +924,8 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
    }

    *last_tb = NULL;
-    if (cpu_loop_exit_requested(cpu)) {
+    insns_left = qatomic_read(&cpu->neg.icount_decr.u32);
+    if (insns_left < 0) {
        /* Something asked us to stop executing chained TBs; just
         * continue round the main loop. Whatever requested the exit
         * will also have set something else (eg exit_request or
@@ -929,7 +942,7 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
        /* Ensure global icount has gone forward */
        icount_update(cpu);
        /* Refill decrementer and continue execution. */
-        int32_t insns_left = MIN(0xffff, cpu->icount_budget);
+        insns_left = MIN(0xffff, cpu->icount_budget);
        cpu->neg.icount_decr.u16.low = insns_left;
        cpu->icount_extra = cpu->icount_budget - insns_left;
@@ -999,8 +1012,14 @@ cpu_exec_loop(CPUState *cpu, SyncClocks *sc)
         */
        h = tb_jmp_cache_hash_func(pc);
        jc = cpu->tb_jmp_cache;
-        jc->array[h].pc = pc;
-        qatomic_set(&jc->array[h].tb, tb);
+        if (cflags & CF_PCREL) {
+            jc->array[h].pc = pc;
+            /* Ensure pc is written first. */
+            qatomic_store_release(&jc->array[h].tb, tb);
+        } else {
+            /* Use the pc value already stored in tb->pc. */
+            qatomic_set(&jc->array[h].tb, tb);
+        }
    }

#ifndef CONFIG_USER_ONLY
@@ -1051,7 +1070,7 @@ int cpu_exec(CPUState *cpu)
        return EXCP_HALTED;
    }

-    RCU_READ_LOCK_GUARD();
+    rcu_read_lock();
    cpu_exec_enter(cpu);

    /*
@@ -1065,20 +1084,18 @@ int cpu_exec(CPUState *cpu)
    ret = cpu_exec_setjmp(cpu, &sc);

    cpu_exec_exit(cpu);
+    rcu_read_unlock();
    return ret;
}

bool tcg_exec_realizefn(CPUState *cpu, Error **errp)
{
    static bool tcg_target_initialized;
+    CPUClass *cc = CPU_GET_CLASS(cpu);

    if (!tcg_target_initialized) {
-        /* Check mandatory TCGCPUOps handlers */
-#ifndef CONFIG_USER_ONLY
-        assert(cpu->cc->tcg_ops->cpu_exec_halt);
-        assert(cpu->cc->tcg_ops->cpu_exec_interrupt);
-#endif /* !CONFIG_USER_ONLY */
-        cpu->cc->tcg_ops->initialize();
+        cc->tcg_ops->initialize();
        tcg_target_initialized = true;
    }


@@ -21,16 +21,12 @@
#include "qemu/main-loop.h"
#include "hw/core/tcg-cpu-ops.h"
#include "exec/exec-all.h"
-#include "exec/page-protection.h"
#include "exec/memory.h"
#include "exec/cpu_ldst.h"
#include "exec/cputlb.h"
#include "exec/tb-flush.h"
#include "exec/memory-internal.h"
#include "exec/ram_addr.h"
-#include "exec/mmu-access-type.h"
-#include "exec/tlb-common.h"
-#include "exec/vaddr.h"
#include "tcg/tcg.h"
#include "qemu/error-report.h"
#include "exec/log.h"
@@ -99,54 +95,6 @@ static inline size_t sizeof_tlb(CPUTLBDescFast *fast)
    return fast->mask + (1 << CPU_TLB_ENTRY_BITS);
}

-static inline uint64_t tlb_read_idx(const CPUTLBEntry *entry,
-                                    MMUAccessType access_type)
-{
-    /* Do not rearrange the CPUTLBEntry structure members. */
-    QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_read) !=
-                      MMU_DATA_LOAD * sizeof(uint64_t));
-    QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_write) !=
-                      MMU_DATA_STORE * sizeof(uint64_t));
-    QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_code) !=
-                      MMU_INST_FETCH * sizeof(uint64_t));
-
-#if TARGET_LONG_BITS == 32
-    /* Use qatomic_read, in case of addr_write; only care about low bits. */
-    const uint32_t *ptr = (uint32_t *)&entry->addr_idx[access_type];
-    ptr += HOST_BIG_ENDIAN;
-    return qatomic_read(ptr);
-#else
-    const uint64_t *ptr = &entry->addr_idx[access_type];
-# if TCG_OVERSIZED_GUEST
-    return *ptr;
-# else
-    /* ofs might correspond to .addr_write, so use qatomic_read */
-    return qatomic_read(ptr);
-# endif
-#endif
-}
-
-static inline uint64_t tlb_addr_write(const CPUTLBEntry *entry)
-{
-    return tlb_read_idx(entry, MMU_DATA_STORE);
-}
-
-/* Find the TLB index corresponding to the mmu_idx + address pair. */
-static inline uintptr_t tlb_index(CPUState *cpu, uintptr_t mmu_idx,
-                                  vaddr addr)
-{
-    uintptr_t size_mask = cpu->neg.tlb.f[mmu_idx].mask >> CPU_TLB_ENTRY_BITS;
-    return (addr >> TARGET_PAGE_BITS) & size_mask;
-}
-
-/* Find the TLB entry corresponding to the mmu_idx + address pair. */
-static inline CPUTLBEntry *tlb_entry(CPUState *cpu, uintptr_t mmu_idx,
-                                     vaddr addr)
-{
-    return &cpu->neg.tlb.f[mmu_idx].table[tlb_index(cpu, mmu_idx, addr)];
-}
static void tlb_window_reset(CPUTLBDesc *desc, int64_t ns, static void tlb_window_reset(CPUTLBDesc *desc, int64_t ns,
size_t max_entries) size_t max_entries)
{ {
@@ -418,9 +366,12 @@ void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
 {
     tlb_debug("mmu_idx: 0x%" PRIx16 "\n", idxmap);
 
-    assert_cpu_is_self(cpu);
-
-    tlb_flush_by_mmuidx_async_work(cpu, RUN_ON_CPU_HOST_INT(idxmap));
+    if (cpu->created && !qemu_cpu_is_self(cpu)) {
+        async_run_on_cpu(cpu, tlb_flush_by_mmuidx_async_work,
+                         RUN_ON_CPU_HOST_INT(idxmap));
+    } else {
+        tlb_flush_by_mmuidx_async_work(cpu, RUN_ON_CPU_HOST_INT(idxmap));
+    }
 }
 
 void tlb_flush(CPUState *cpu)
@@ -428,6 +379,21 @@ void tlb_flush(CPUState *cpu)
     tlb_flush_by_mmuidx(cpu, ALL_MMUIDX_BITS);
 }
 
+void tlb_flush_by_mmuidx_all_cpus(CPUState *src_cpu, uint16_t idxmap)
+{
+    const run_on_cpu_func fn = tlb_flush_by_mmuidx_async_work;
+
+    tlb_debug("mmu_idx: 0x%"PRIx16"\n", idxmap);
+
+    flush_all_helper(src_cpu, fn, RUN_ON_CPU_HOST_INT(idxmap));
+    fn(src_cpu, RUN_ON_CPU_HOST_INT(idxmap));
+}
+
+void tlb_flush_all_cpus(CPUState *src_cpu)
+{
+    tlb_flush_by_mmuidx_all_cpus(src_cpu, ALL_MMUIDX_BITS);
+}
+
 void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src_cpu, uint16_t idxmap)
 {
     const run_on_cpu_func fn = tlb_flush_by_mmuidx_async_work;
@@ -609,12 +575,28 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, vaddr addr, uint16_t idxmap)
 {
     tlb_debug("addr: %016" VADDR_PRIx " mmu_idx:%" PRIx16 "\n", addr, idxmap);
 
-    assert_cpu_is_self(cpu);
-
     /* This should already be page aligned */
     addr &= TARGET_PAGE_MASK;
 
-    tlb_flush_page_by_mmuidx_async_0(cpu, addr, idxmap);
+    if (qemu_cpu_is_self(cpu)) {
+        tlb_flush_page_by_mmuidx_async_0(cpu, addr, idxmap);
+    } else if (idxmap < TARGET_PAGE_SIZE) {
+        /*
+         * Most targets have only a few mmu_idx.  In the case where
+         * we can stuff idxmap into the low TARGET_PAGE_BITS, avoid
+         * allocating memory for this operation.
+         */
+        async_run_on_cpu(cpu, tlb_flush_page_by_mmuidx_async_1,
+                         RUN_ON_CPU_TARGET_PTR(addr | idxmap));
+    } else {
+        TLBFlushPageByMMUIdxData *d = g_new(TLBFlushPageByMMUIdxData, 1);
+
+        /* Otherwise allocate a structure, freed by the worker. */
+        d->addr = addr;
+        d->idxmap = idxmap;
+        async_run_on_cpu(cpu, tlb_flush_page_by_mmuidx_async_2,
+                         RUN_ON_CPU_HOST_PTR(d));
+    }
 }
 
 void tlb_flush_page_by_mmuidx(CPUState *cpu, vaddr addr, uint16_t idxmap)
 void tlb_flush_page(CPUState *cpu, vaddr addr)
@@ -622,6 +604,46 @@ void tlb_flush_page(CPUState *cpu, vaddr addr)
     tlb_flush_page_by_mmuidx(cpu, addr, ALL_MMUIDX_BITS);
 }
 
+void tlb_flush_page_by_mmuidx_all_cpus(CPUState *src_cpu, vaddr addr,
+                                       uint16_t idxmap)
+{
+    tlb_debug("addr: %016" VADDR_PRIx " mmu_idx:%"PRIx16"\n", addr, idxmap);
+
+    /* This should already be page aligned */
+    addr &= TARGET_PAGE_MASK;
+
+    /*
+     * Allocate memory to hold addr+idxmap only when needed.
+     * See tlb_flush_page_by_mmuidx for details.
+     */
+    if (idxmap < TARGET_PAGE_SIZE) {
+        flush_all_helper(src_cpu, tlb_flush_page_by_mmuidx_async_1,
+                         RUN_ON_CPU_TARGET_PTR(addr | idxmap));
+    } else {
+        CPUState *dst_cpu;
+
+        /* Allocate a separate data block for each destination cpu. */
+        CPU_FOREACH(dst_cpu) {
+            if (dst_cpu != src_cpu) {
+                TLBFlushPageByMMUIdxData *d
+                    = g_new(TLBFlushPageByMMUIdxData, 1);
+
+                d->addr = addr;
+                d->idxmap = idxmap;
+                async_run_on_cpu(dst_cpu, tlb_flush_page_by_mmuidx_async_2,
+                                 RUN_ON_CPU_HOST_PTR(d));
+            }
+        }
+    }
+
+    tlb_flush_page_by_mmuidx_async_0(src_cpu, addr, idxmap);
+}
+
+void tlb_flush_page_all_cpus(CPUState *src, vaddr addr)
+{
+    tlb_flush_page_by_mmuidx_all_cpus(src, addr, ALL_MMUIDX_BITS);
+}
+
 void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
                                               vaddr addr,
                                               uint16_t idxmap)
@@ -777,8 +799,6 @@ void tlb_flush_range_by_mmuidx(CPUState *cpu, vaddr addr,
 {
     TLBFlushRangeData d;
 
-    assert_cpu_is_self(cpu);
-
     /*
      * If all bits are significant, and len is small,
      * this devolves to tlb_flush_page.
@@ -799,7 +819,14 @@ void tlb_flush_range_by_mmuidx(CPUState *cpu, vaddr addr,
     d.idxmap = idxmap;
     d.bits = bits;
 
-    tlb_flush_range_by_mmuidx_async_0(cpu, d);
+    if (qemu_cpu_is_self(cpu)) {
+        tlb_flush_range_by_mmuidx_async_0(cpu, d);
+    } else {
+        /* Otherwise allocate a structure, freed by the worker. */
+        TLBFlushRangeData *p = g_memdup(&d, sizeof(d));
+        async_run_on_cpu(cpu, tlb_flush_range_by_mmuidx_async_1,
+                         RUN_ON_CPU_HOST_PTR(p));
+    }
 }
 
 void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, vaddr addr,
@@ -808,6 +835,54 @@ void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, vaddr addr,
     tlb_flush_range_by_mmuidx(cpu, addr, TARGET_PAGE_SIZE, idxmap, bits);
 }
 
+void tlb_flush_range_by_mmuidx_all_cpus(CPUState *src_cpu,
+                                        vaddr addr, vaddr len,
+                                        uint16_t idxmap, unsigned bits)
+{
+    TLBFlushRangeData d;
+    CPUState *dst_cpu;
+
+    /*
+     * If all bits are significant, and len is small,
+     * this devolves to tlb_flush_page.
+     */
+    if (bits >= TARGET_LONG_BITS && len <= TARGET_PAGE_SIZE) {
+        tlb_flush_page_by_mmuidx_all_cpus(src_cpu, addr, idxmap);
+        return;
+    }
+    /* If no page bits are significant, this devolves to tlb_flush. */
+    if (bits < TARGET_PAGE_BITS) {
+        tlb_flush_by_mmuidx_all_cpus(src_cpu, idxmap);
+        return;
+    }
+
+    /* This should already be page aligned */
+    d.addr = addr & TARGET_PAGE_MASK;
+    d.len = len;
+    d.idxmap = idxmap;
+    d.bits = bits;
+
+    /* Allocate a separate data block for each destination cpu. */
+    CPU_FOREACH(dst_cpu) {
+        if (dst_cpu != src_cpu) {
+            TLBFlushRangeData *p = g_memdup(&d, sizeof(d));
+            async_run_on_cpu(dst_cpu,
+                             tlb_flush_range_by_mmuidx_async_1,
+                             RUN_ON_CPU_HOST_PTR(p));
+        }
+    }
+
+    tlb_flush_range_by_mmuidx_async_0(src_cpu, d);
+}
+
+void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
+                                            vaddr addr, uint16_t idxmap,
+                                            unsigned bits)
+{
+    tlb_flush_range_by_mmuidx_all_cpus(src_cpu, addr, TARGET_PAGE_SIZE,
+                                       idxmap, bits);
+}
+
 void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
                                                vaddr addr,
                                                vaddr len,
@@ -964,7 +1039,7 @@ static inline void tlb_set_dirty1_locked(CPUTLBEntry *tlb_entry,
 
 /* update the TLB corresponding to virtual page vaddr
    so that it is no longer dirty */
-static void tlb_set_dirty(CPUState *cpu, vaddr addr)
+void tlb_set_dirty(CPUState *cpu, vaddr addr)
 {
     int mmu_idx;
@@ -1070,11 +1145,14 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
               " prot=%x idx=%d\n",
               addr, full->phys_addr, prot, mmu_idx);
 
-    read_flags = full->tlb_fill_flags;
+    read_flags = 0;
     if (full->lg_page_size < TARGET_PAGE_BITS) {
         /* Repeat the MMU check and TLB fill on every access.  */
         read_flags |= TLB_INVALID_MASK;
     }
+    if (full->attrs.byte_swap) {
+        read_flags |= TLB_BSWAP;
+    }
 
     is_ram = memory_region_is_ram(section->mr);
     is_romd = memory_region_is_romd(section->mr);
@@ -1378,8 +1456,9 @@ static int probe_access_internal(CPUState *cpu, vaddr addr,
     flags |= full->slow_flags[access_type];
 
     /* Fold all "mmio-like" bits into TLB_MMIO.  This is not RAM. */
-    if (unlikely(flags & ~(TLB_WATCHPOINT | TLB_NOTDIRTY | TLB_CHECK_ALIGNED))
-        || (access_type != MMU_INST_FETCH && force_mmio)) {
+    if (unlikely(flags & ~(TLB_WATCHPOINT | TLB_NOTDIRTY))
+        ||
+        (access_type != MMU_INST_FETCH && force_mmio)) {
         *phost = NULL;
         return TLB_MMIO;
     }
@@ -1400,8 +1479,7 @@ int probe_access_full(CPUArchState *env, vaddr addr, int size,
 
     /* Handle clean RAM pages. */
     if (unlikely(flags & TLB_NOTDIRTY)) {
-        int dirtysize = size == 0 ? 1 : size;
-        notdirty_write(env_cpu(env), addr, dirtysize, *pfull, retaddr);
+        notdirty_write(env_cpu(env), addr, 1, *pfull, retaddr);
         flags &= ~TLB_NOTDIRTY;
     }
@@ -1424,8 +1502,7 @@ int probe_access_full_mmu(CPUArchState *env, vaddr addr, int size,
 
     /* Handle clean RAM pages. */
     if (unlikely(flags & TLB_NOTDIRTY)) {
-        int dirtysize = size == 0 ? 1 : size;
-        notdirty_write(env_cpu(env), addr, dirtysize, *pfull, 0);
+        notdirty_write(env_cpu(env), addr, 1, *pfull, 0);
         flags &= ~TLB_NOTDIRTY;
     }
@@ -1447,8 +1524,7 @@ int probe_access_flags(CPUArchState *env, vaddr addr, int size,
 
     /* Handle clean RAM pages. */
    if (unlikely(flags & TLB_NOTDIRTY)) {
-        int dirtysize = size == 0 ? 1 : size;
-        notdirty_write(env_cpu(env), addr, dirtysize, full, retaddr);
+        notdirty_write(env_cpu(env), addr, 1, full, retaddr);
         flags &= ~TLB_NOTDIRTY;
     }
@@ -1484,7 +1560,7 @@ void *probe_access(CPUArchState *env, vaddr addr, int size,
 
         /* Handle clean RAM pages. */
         if (flags & TLB_NOTDIRTY) {
-            notdirty_write(env_cpu(env), addr, size, full, retaddr);
+            notdirty_write(env_cpu(env), addr, 1, full, retaddr);
         }
     }
@@ -1522,7 +1598,7 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, vaddr addr,
     void *p;
 
     (void)probe_access_internal(env_cpu(env), addr, 1, MMU_INST_FETCH,
-                                cpu_mmu_index(env_cpu(env), true), false,
+                                cpu_mmu_index(env, true), false,
                                 &p, &full, 0, false);
     if (p == NULL) {
         return -1;
@@ -1760,31 +1836,6 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
         tcg_debug_assert((flags & TLB_BSWAP) == 0);
     }
 
-    /*
-     * This alignment check differs from the one above, in that this is
-     * based on the atomicity of the operation.  The intended use case is
-     * the ARM memory type field of each PTE, where access to pages with
-     * Device memory type require alignment.
-     */
-    if (unlikely(flags & TLB_CHECK_ALIGNED)) {
-        MemOp size = l->memop & MO_SIZE;
-
-        switch (l->memop & MO_ATOM_MASK) {
-        case MO_ATOM_NONE:
-            size = MO_8;
-            break;
-        case MO_ATOM_IFALIGN_PAIR:
-        case MO_ATOM_WITHIN16_PAIR:
-            size = size ? size - 1 : 0;
-            break;
-        default:
-            break;
-        }
-        if (addr & ((1 << size) - 1)) {
-            cpu_unaligned_access(cpu, addr, type, l->mmu_idx, ra);
-        }
-    }
-
     return crosspage;
 }
@@ -1921,7 +1972,7 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
  * @size: number of bytes
  * @mmu_idx: virtual address context
  * @ra: return address into tcg generated code, or 0
- * Context: BQL held
+ * Context: iothread lock held
  *
  * Load @size bytes from @addr, which is memory-mapped i/o.
  * The bytes are concatenated in big-endian order with @ret_be.
@@ -1968,6 +2019,7 @@ static uint64_t do_ld_mmio_beN(CPUState *cpu, CPUTLBEntryFull *full,
     MemoryRegion *mr;
     hwaddr mr_offset;
     MemTxAttrs attrs;
+    uint64_t ret;
 
     tcg_debug_assert(size > 0 && size <= 8);
 
@@ -1975,9 +2027,12 @@ static uint64_t do_ld_mmio_beN(CPUState *cpu, CPUTLBEntryFull *full,
     section = io_prepare(&mr_offset, cpu, full->xlat_section, attrs, addr, ra);
     mr = section->mr;
 
-    BQL_LOCK_GUARD();
-    return int_ld_mmio_beN(cpu, full, ret_be, addr, size, mmu_idx,
-                           type, ra, mr, mr_offset);
+    qemu_mutex_lock_iothread();
+    ret = int_ld_mmio_beN(cpu, full, ret_be, addr, size, mmu_idx,
+                          type, ra, mr, mr_offset);
+    qemu_mutex_unlock_iothread();
+    return ret;
 }
 
 static Int128 do_ld16_mmio_beN(CPUState *cpu, CPUTLBEntryFull *full,
@@ -1996,11 +2051,13 @@ static Int128 do_ld16_mmio_beN(CPUState *cpu, CPUTLBEntryFull *full,
     section = io_prepare(&mr_offset, cpu, full->xlat_section, attrs, addr, ra);
     mr = section->mr;
 
-    BQL_LOCK_GUARD();
+    qemu_mutex_lock_iothread();
     a = int_ld_mmio_beN(cpu, full, ret_be, addr, size - 8, mmu_idx,
                         MMU_DATA_LOAD, ra, mr, mr_offset);
     b = int_ld_mmio_beN(cpu, full, ret_be, addr + size - 8, 8, mmu_idx,
                         MMU_DATA_LOAD, ra, mr, mr_offset + size - 8);
+    qemu_mutex_unlock_iothread();
 
     return int128_make128(b, a);
 }
@@ -2461,7 +2518,7 @@ static Int128 do_ld16_mmu(CPUState *cpu, vaddr addr,
  * @size: number of bytes
  * @mmu_idx: virtual address context
  * @ra: return address into tcg generated code, or 0
- * Context: BQL held
+ * Context: iothread lock held
 *
 * Store @size bytes at @addr, which is memory-mapped i/o.
 * The bytes to store are extracted in little-endian order from @val_le;
@@ -2509,6 +2566,7 @@ static uint64_t do_st_mmio_leN(CPUState *cpu, CPUTLBEntryFull *full,
     hwaddr mr_offset;
     MemoryRegion *mr;
     MemTxAttrs attrs;
+    uint64_t ret;
 
     tcg_debug_assert(size > 0 && size <= 8);
 
@@ -2516,9 +2574,12 @@ static uint64_t do_st_mmio_leN(CPUState *cpu, CPUTLBEntryFull *full,
     section = io_prepare(&mr_offset, cpu, full->xlat_section, attrs, addr, ra);
     mr = section->mr;
 
-    BQL_LOCK_GUARD();
-    return int_st_mmio_leN(cpu, full, val_le, addr, size, mmu_idx,
-                           ra, mr, mr_offset);
+    qemu_mutex_lock_iothread();
+    ret = int_st_mmio_leN(cpu, full, val_le, addr, size, mmu_idx,
+                          ra, mr, mr_offset);
+    qemu_mutex_unlock_iothread();
+    return ret;
 }
 
 static uint64_t do_st16_mmio_leN(CPUState *cpu, CPUTLBEntryFull *full,
@@ -2529,6 +2590,7 @@ static uint64_t do_st16_mmio_leN(CPUState *cpu, CPUTLBEntryFull *full,
     MemoryRegion *mr;
     hwaddr mr_offset;
     MemTxAttrs attrs;
+    uint64_t ret;
 
     tcg_debug_assert(size > 8 && size <= 16);
 
@@ -2536,11 +2598,14 @@ static uint64_t do_st16_mmio_leN(CPUState *cpu, CPUTLBEntryFull *full,
     section = io_prepare(&mr_offset, cpu, full->xlat_section, attrs, addr, ra);
     mr = section->mr;
 
-    BQL_LOCK_GUARD();
+    qemu_mutex_lock_iothread();
     int_st_mmio_leN(cpu, full, int128_getlo(val_le), addr, 8,
                     mmu_idx, ra, mr, mr_offset);
-    return int_st_mmio_leN(cpu, full, int128_gethi(val_le), addr + 8,
-                           size - 8, mmu_idx, ra, mr, mr_offset + 8);
+    ret = int_st_mmio_leN(cpu, full, int128_gethi(val_le), addr + 8,
+                          size - 8, mmu_idx, ra, mr, mr_offset + 8);
+    qemu_mutex_unlock_iothread();
+    return ret;
 }
 
 /*
@@ -2891,30 +2956,26 @@ static void do_st16_mmu(CPUState *cpu, vaddr addr, Int128 val,
 
 uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr addr)
 {
-    CPUState *cs = env_cpu(env);
-    MemOpIdx oi = make_memop_idx(MO_UB, cpu_mmu_index(cs, true));
-    return do_ld1_mmu(cs, addr, oi, 0, MMU_INST_FETCH);
+    MemOpIdx oi = make_memop_idx(MO_UB, cpu_mmu_index(env, true));
+    return do_ld1_mmu(env_cpu(env), addr, oi, 0, MMU_INST_FETCH);
 }
 
 uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr addr)
 {
-    CPUState *cs = env_cpu(env);
-    MemOpIdx oi = make_memop_idx(MO_TEUW, cpu_mmu_index(cs, true));
-    return do_ld2_mmu(cs, addr, oi, 0, MMU_INST_FETCH);
+    MemOpIdx oi = make_memop_idx(MO_TEUW, cpu_mmu_index(env, true));
+    return do_ld2_mmu(env_cpu(env), addr, oi, 0, MMU_INST_FETCH);
 }
 
 uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr addr)
 {
-    CPUState *cs = env_cpu(env);
-    MemOpIdx oi = make_memop_idx(MO_TEUL, cpu_mmu_index(cs, true));
-    return do_ld4_mmu(cs, addr, oi, 0, MMU_INST_FETCH);
+    MemOpIdx oi = make_memop_idx(MO_TEUL, cpu_mmu_index(env, true));
+    return do_ld4_mmu(env_cpu(env), addr, oi, 0, MMU_INST_FETCH);
 }
 
 uint64_t cpu_ldq_code(CPUArchState *env, abi_ptr addr)
 {
-    CPUState *cs = env_cpu(env);
-    MemOpIdx oi = make_memop_idx(MO_TEUQ, cpu_mmu_index(cs, true));
-    return do_ld8_mmu(cs, addr, oi, 0, MMU_INST_FETCH);
+    MemOpIdx oi = make_memop_idx(MO_TEUQ, cpu_mmu_index(env, true));
+    return do_ld8_mmu(env_cpu(env), addr, oi, 0, MMU_INST_FETCH);
 }
 
 uint8_t cpu_ldb_code_mmu(CPUArchState *env, abi_ptr addr,

View File

@@ -6,10 +6,11 @@
 #include "qemu/osdep.h"
 #include "qemu/lockable.h"
-#include "tcg/debuginfo.h"
 
 #include <elfutils/libdwfl.h>
 
+#include "debuginfo.h"
+
 static QemuMutex lock;
 static Dwfl *dwfl;
 static const Dwfl_Callbacks dwfl_callbacks = {

View File

@@ -4,8 +4,8 @@
  * SPDX-License-Identifier: GPL-2.0-or-later
  */
 
-#ifndef TCG_DEBUGINFO_H
-#define TCG_DEBUGINFO_H
+#ifndef ACCEL_TCG_DEBUGINFO_H
+#define ACCEL_TCG_DEBUGINFO_H
 
 #include "qemu/bitops.h"

View File

@@ -49,19 +49,21 @@ static bool icount_sleep = true;
 /* Arbitrarily pick 1MIPS as the minimum allowable speed. */
 #define MAX_ICOUNT_SHIFT 10
 
-/* Do not count executed instructions */
-ICountMode use_icount = ICOUNT_DISABLED;
+/*
+ * 0 = Do not count executed instructions.
+ * 1 = Fixed conversion of insn to ns via "shift" option
+ * 2 = Runtime adaptive algorithm to compute shift
+ */
+int use_icount;
 
 static void icount_enable_precise(void)
 {
-    /* Fixed conversion of insn to ns via "shift" option */
-    use_icount = ICOUNT_PRECISE;
+    use_icount = 1;
 }
 
 static void icount_enable_adaptive(void)
 {
-    /* Runtime adaptive algorithm to compute shift */
-    use_icount = ICOUNT_ADAPTATIVE;
+    use_icount = 2;
 }
 
 /*
@@ -254,7 +256,7 @@ static void icount_warp_rt(void)
         int64_t warp_delta;
 
         warp_delta = clock - timers_state.vm_clock_warp_start;
-        if (icount_enabled() == ICOUNT_ADAPTATIVE) {
+        if (icount_enabled() == 2) {
             /*
              * In adaptive mode, do not let QEMU_CLOCK_VIRTUAL run too far
              * ahead of real time (it might already be ahead so careful not
@@ -336,8 +338,10 @@ void icount_start_warp_timer(void)
         deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL,
                                               ~QEMU_TIMER_ATTR_EXTERNAL);
         if (deadline < 0) {
-            if (!icount_sleep) {
-                warn_report_once("icount sleep disabled and no active timers");
+            static bool notified;
+            if (!icount_sleep && !notified) {
+                warn_report("icount sleep disabled and no active timers");
+                notified = true;
             }
             return;
         }
@@ -415,7 +419,7 @@ void icount_account_warp_timer(void)
     icount_warp_rt();
 }
 
-bool icount_configure(QemuOpts *opts, Error **errp)
+void icount_configure(QemuOpts *opts, Error **errp)
 {
     const char *option = qemu_opt_get(opts, "shift");
     bool sleep = qemu_opt_get_bool(opts, "sleep", true);
@@ -425,28 +429,27 @@ bool icount_configure(QemuOpts *opts, Error **errp)
     if (!option) {
         if (qemu_opt_get(opts, "align") != NULL) {
             error_setg(errp, "Please specify shift option when using align");
-            return false;
         }
-        return true;
+        return;
     }
 
     if (align && !sleep) {
         error_setg(errp, "align=on and sleep=off are incompatible");
-        return false;
+        return;
     }
 
     if (strcmp(option, "auto") != 0) {
         if (qemu_strtol(option, NULL, 0, &time_shift) < 0
             || time_shift < 0 || time_shift > MAX_ICOUNT_SHIFT) {
             error_setg(errp, "icount: Invalid shift value");
-            return false;
+            return;
         }
     } else if (icount_align_option) {
         error_setg(errp, "shift=auto and align=on are incompatible");
-        return false;
+        return;
     } else if (!icount_sleep) {
         error_setg(errp, "shift=auto and sleep=off are incompatible");
-        return false;
+        return;
     }
 
     icount_sleep = sleep;
@@ -460,7 +463,7 @@ bool icount_configure(QemuOpts *opts, Error **errp)
     if (time_shift >= 0) {
         timers_state.icount_time_shift = time_shift;
         icount_enable_precise();
-        return true;
+        return;
     }
 
     icount_enable_adaptive();
@@ -488,14 +491,11 @@ bool icount_configure(QemuOpts *opts, Error **errp)
     timer_mod(timers_state.icount_vm_timer,
               qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
               NANOSECONDS_PER_SECOND / 10);
-    return true;
 }
 
 void icount_notify_exit(void)
 {
-    assert(icount_enabled());
-    if (current_cpu) {
+    if (icount_enabled() && current_cpu) {
         qemu_cpu_kick(current_cpu);
         qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
     }
View File

@@ -9,51 +9,18 @@
 #ifndef ACCEL_TCG_INTERNAL_COMMON_H
 #define ACCEL_TCG_INTERNAL_COMMON_H
 
-#include "exec/cpu-common.h"
 #include "exec/translation-block.h"
 
 extern int64_t max_delay;
 extern int64_t max_advance;
 
-extern bool one_insn_per_tb;
-
 /*
  * Return true if CS is not running in parallel with other cpus, either
  * because there are no other cpus or we are within an exclusive context.
 */
 static inline bool cpu_in_serial_context(CPUState *cs)
 {
-    return !tcg_cflags_has(cs, CF_PARALLEL) || cpu_in_exclusive_context(cs);
+    return !(cs->tcg_cflags & CF_PARALLEL) || cpu_in_exclusive_context(cs);
 }
 
-/**
- * cpu_plugin_mem_cbs_enabled() - are plugin memory callbacks enabled?
- * @cs: CPUState pointer
- *
- * The memory callbacks are installed if a plugin has instrumented an
- * instruction for memory. This can be useful to know if you want to
- * force a slow path for a series of memory accesses.
- */
-static inline bool cpu_plugin_mem_cbs_enabled(const CPUState *cpu)
-{
-#ifdef CONFIG_PLUGIN
-    return !!cpu->neg.plugin_mem_cbs;
-#else
-    return false;
-#endif
-}
-
-TranslationBlock *tb_gen_code(CPUState *cpu, vaddr pc,
-                              uint64_t cs_base, uint32_t flags,
-                              int cflags);
-void page_init(void);
-void tb_htable_init(void);
-void tb_reset_jump(TranslationBlock *tb, int n);
-TranslationBlock *tb_link_page(TranslationBlock *tb);
-void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
-                               uintptr_t host_pc);
-bool tcg_exec_realizefn(CPUState *cpu, Error **errp);
-void tcg_exec_unrealizefn(CPUState *cpu);
-
 #endif

View File

@@ -69,7 +69,19 @@ void tb_invalidate_phys_range_fast(ram_addr_t ram_addr,
 G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
 #endif /* CONFIG_SOFTMMU */
 
+TranslationBlock *tb_gen_code(CPUState *cpu, vaddr pc,
+                              uint64_t cs_base, uint32_t flags,
+                              int cflags);
+void page_init(void);
+void tb_htable_init(void);
+void tb_reset_jump(TranslationBlock *tb, int n);
+TranslationBlock *tb_link_page(TranslationBlock *tb);
 bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc);
+void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
+                               uintptr_t host_pc);
+bool tcg_exec_realizefn(CPUState *cpu, Error **errp);
+void tcg_exec_unrealizefn(CPUState *cpu);
 
 /* Return the current PC from CPU, which may be cached in TB. */
 static inline vaddr log_pc(CPUState *cpu, const TranslationBlock *tb)
@@ -81,6 +93,8 @@ static inline vaddr log_pc(CPUState *cpu, const TranslationBlock *tb)
     }
 }
 
+extern bool one_insn_per_tb;
+
 /**
  * tcg_req_mo:
  * @type: TCGBar

View File

@@ -9,8 +9,8 @@
  * See the COPYING file in the top-level directory.
  */
 
-#include "host/load-extract-al16-al8.h.inc"
-#include "host/store-insert-al16.h.inc"
+#include "host/load-extract-al16-al8.h"
+#include "host/store-insert-al16.h"
 
 #ifdef CONFIG_ATOMIC64
 # define HAVE_al8          true
@@ -76,7 +76,7 @@ static int required_atomicity(CPUState *cpu, uintptr_t p, MemOp memop)
     /*
      * Examine the alignment of p to determine if there are subobjects
      * that must be aligned.  Note that we only really need ctz4() --
-     * any more significant bits are discarded by the immediately
+     * any more sigificant bits are discarded by the immediately
      * following comparison.
      */
    tmp = ctz32(p);

View File

@@ -125,9 +125,7 @@ void helper_st_i128(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi)
 
 static void plugin_load_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi)
 {
-    if (cpu_plugin_mem_cbs_enabled(env_cpu(env))) {
-        qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
-    }
+    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
 }
 
 uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra)
@@ -190,9 +188,7 @@ Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
 
 static void plugin_store_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi)
 {
-    if (cpu_plugin_mem_cbs_enabled(env_cpu(env))) {
-        qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
-    }
+    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
 }
 
 void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
@@ -358,8 +354,7 @@ void cpu_stq_le_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint64_t val,
 
 uint32_t cpu_ldub_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
 {
-    int mmu_index = cpu_mmu_index(env_cpu(env), false);
-    return cpu_ldub_mmuidx_ra(env, addr, mmu_index, ra);
+    return cpu_ldub_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
 }
 
 int cpu_ldsb_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
@@ -369,8 +364,7 @@ int cpu_ldsb_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
 
 uint32_t cpu_lduw_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
 {
-    int mmu_index = cpu_mmu_index(env_cpu(env), false);
-    return cpu_lduw_be_mmuidx_ra(env, addr, mmu_index, ra);
+    return cpu_lduw_be_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
 }
 
 int cpu_ldsw_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
@@ -380,20 +374,17 @@ int cpu_ldsw_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
 
 uint32_t cpu_ldl_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
 {
-    int mmu_index = cpu_mmu_index(env_cpu(env), false);
-    return cpu_ldl_be_mmuidx_ra(env, addr, mmu_index, ra);
+    return cpu_ldl_be_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
 }
 
 uint64_t cpu_ldq_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
 {
-    int mmu_index = cpu_mmu_index(env_cpu(env), false);
-    return cpu_ldq_be_mmuidx_ra(env, addr, mmu_index, ra);
+    return cpu_ldq_be_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
 }
 
 uint32_t cpu_lduw_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
 {
-    int mmu_index = cpu_mmu_index(env_cpu(env), false);
-    return cpu_lduw_le_mmuidx_ra(env, addr, mmu_index, ra);
+    return cpu_lduw_le_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
 }
 
 int cpu_ldsw_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
@@ -403,63 +394,54 @@ int cpu_ldsw_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
 
 uint32_t cpu_ldl_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
 {
-    int mmu_index = cpu_mmu_index(env_cpu(env), false);
-    return cpu_ldl_le_mmuidx_ra(env, addr, mmu_index, ra);
+    return cpu_ldl_le_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
 }
 
 uint64_t cpu_ldq_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
 {
-    int mmu_index = cpu_mmu_index(env_cpu(env), false);
-    return cpu_ldq_le_mmuidx_ra(env, addr, mmu_index, ra);
+    return cpu_ldq_le_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
 }
 
 void cpu_stb_data_ra(CPUArchState *env, abi_ptr addr,
                      uint32_t val, uintptr_t ra)
 {
-    int mmu_index = cpu_mmu_index(env_cpu(env), false);
-    cpu_stb_mmuidx_ra(env, addr, val, mmu_index, ra);
+    cpu_stb_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
 }
 
 void cpu_stw_be_data_ra(CPUArchState *env, abi_ptr addr,
                         uint32_t val, uintptr_t ra)
 {
-    int mmu_index = cpu_mmu_index(env_cpu(env), false);
-    cpu_stw_be_mmuidx_ra(env, addr, val, mmu_index, ra);
+    cpu_stw_be_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
 }
 
 void cpu_stl_be_data_ra(CPUArchState *env, abi_ptr addr,
                         uint32_t val, uintptr_t ra)
 {
-    int mmu_index = cpu_mmu_index(env_cpu(env), false);
-    cpu_stl_be_mmuidx_ra(env, addr, val, mmu_index, ra);
+    cpu_stl_be_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
 }
 
 void cpu_stq_be_data_ra(CPUArchState *env, abi_ptr addr,
                         uint64_t val, uintptr_t ra)
 {
-    int mmu_index = cpu_mmu_index(env_cpu(env), false);
-    cpu_stq_be_mmuidx_ra(env, addr, val, mmu_index, ra);
+    cpu_stq_be_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
 }
 
 void cpu_stw_le_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra) uint32_t val, uintptr_t ra)
{ {
int mmu_index = cpu_mmu_index(env_cpu(env), false); cpu_stw_le_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
cpu_stw_le_mmuidx_ra(env, addr, val, mmu_index, ra);
} }
void cpu_stl_le_data_ra(CPUArchState *env, abi_ptr addr, void cpu_stl_le_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra) uint32_t val, uintptr_t ra)
{ {
int mmu_index = cpu_mmu_index(env_cpu(env), false); cpu_stl_le_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
cpu_stl_le_mmuidx_ra(env, addr, val, mmu_index, ra);
} }
void cpu_stq_le_data_ra(CPUArchState *env, abi_ptr addr, void cpu_stq_le_data_ra(CPUArchState *env, abi_ptr addr,
uint64_t val, uintptr_t ra) uint64_t val, uintptr_t ra)
{ {
int mmu_index = cpu_mmu_index(env_cpu(env), false); cpu_stq_le_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
cpu_stq_le_mmuidx_ra(env, addr, val, mmu_index, ra);
} }
/*--------------------------*/ /*--------------------------*/


@@ -1,8 +1,8 @@
tcg_ss = ss.source_set()
common_ss.add(when: 'CONFIG_TCG', if_true: files( common_ss.add(when: 'CONFIG_TCG', if_true: files(
'cpu-exec-common.c', 'cpu-exec-common.c',
)) ))
tcg_specific_ss = ss.source_set() tcg_ss.add(files(
tcg_specific_ss.add(files(
'tcg-all.c', 'tcg-all.c',
'cpu-exec.c', 'cpu-exec.c',
'tb-maint.c', 'tb-maint.c',
@@ -11,16 +11,17 @@ tcg_specific_ss.add(files(
'translate-all.c', 'translate-all.c',
'translator.c', 'translator.c',
)) ))
tcg_specific_ss.add(when: 'CONFIG_USER_ONLY', if_true: files('user-exec.c')) tcg_ss.add(when: 'CONFIG_USER_ONLY', if_true: files('user-exec.c'))
tcg_specific_ss.add(when: 'CONFIG_SYSTEM_ONLY', if_false: files('user-exec-stub.c')) tcg_ss.add(when: 'CONFIG_SYSTEM_ONLY', if_false: files('user-exec-stub.c'))
if get_option('plugins') if get_option('plugins')
tcg_specific_ss.add(files('plugin-gen.c')) tcg_ss.add(files('plugin-gen.c'))
endif endif
specific_ss.add_all(when: 'CONFIG_TCG', if_true: tcg_specific_ss) tcg_ss.add(when: libdw, if_true: files('debuginfo.c'))
tcg_ss.add(when: 'CONFIG_LINUX', if_true: files('perf.c'))
specific_ss.add_all(when: 'CONFIG_TCG', if_true: tcg_ss)
specific_ss.add(when: ['CONFIG_SYSTEM_ONLY', 'CONFIG_TCG'], if_true: files( specific_ss.add(when: ['CONFIG_SYSTEM_ONLY', 'CONFIG_TCG'], if_true: files(
'cputlb.c', 'cputlb.c',
'watchpoint.c',
)) ))
system_ss.add(when: ['CONFIG_TCG'], if_true: files( system_ss.add(when: ['CONFIG_TCG'], if_true: files(


@@ -10,13 +10,13 @@
#include "qemu/osdep.h" #include "qemu/osdep.h"
#include "elf.h" #include "elf.h"
#include "exec/target_page.h" #include "exec/exec-all.h"
#include "exec/translation-block.h"
#include "qemu/timer.h" #include "qemu/timer.h"
#include "tcg/debuginfo.h"
#include "tcg/perf.h"
#include "tcg/tcg.h" #include "tcg/tcg.h"
#include "debuginfo.h"
#include "perf.h"
static FILE *safe_fopen_w(const char *path) static FILE *safe_fopen_w(const char *path)
{ {
int saved_errno; int saved_errno;
@@ -335,7 +335,11 @@ void perf_report_code(uint64_t guest_pc, TranslationBlock *tb,
/* FIXME: This replicates the restore_state_to_opc() logic. */ /* FIXME: This replicates the restore_state_to_opc() logic. */
q[insn].address = gen_insn_data[insn * start_words + 0]; q[insn].address = gen_insn_data[insn * start_words + 0];
if (tb_cflags(tb) & CF_PCREL) { if (tb_cflags(tb) & CF_PCREL) {
q[insn].address |= (guest_pc & qemu_target_page_mask()); q[insn].address |= (guest_pc & TARGET_PAGE_MASK);
} else {
#if defined(TARGET_I386)
q[insn].address -= tb->cs_base;
#endif
} }
q[insn].flags = DEBUGINFO_SYMBOL | (jitdump ? DEBUGINFO_LINE : 0); q[insn].flags = DEBUGINFO_SYMBOL | (jitdump ? DEBUGINFO_LINE : 0);
} }


@@ -4,8 +4,8 @@
* SPDX-License-Identifier: GPL-2.0-or-later * SPDX-License-Identifier: GPL-2.0-or-later
*/ */
#ifndef TCG_PERF_H #ifndef ACCEL_TCG_PERF_H
#define TCG_PERF_H #define ACCEL_TCG_PERF_H
#if defined(CONFIG_TCG) && defined(CONFIG_LINUX) #if defined(CONFIG_TCG) && defined(CONFIG_LINUX)
/* Start writing perf-<pid>.map. */ /* Start writing perf-<pid>.map. */

File diff suppressed because it is too large


@@ -0,0 +1,4 @@
#ifdef CONFIG_PLUGIN
DEF_HELPER_FLAGS_2(plugin_vcpu_udata_cb, TCG_CALL_NO_RWG | TCG_CALL_PLUGIN, void, i32, ptr)
DEF_HELPER_FLAGS_4(plugin_vcpu_mem_cb, TCG_CALL_NO_RWG | TCG_CALL_PLUGIN, void, i32, i32, i64, ptr)
#endif


@@ -9,25 +9,20 @@
#ifndef ACCEL_TCG_TB_JMP_CACHE_H #ifndef ACCEL_TCG_TB_JMP_CACHE_H
#define ACCEL_TCG_TB_JMP_CACHE_H #define ACCEL_TCG_TB_JMP_CACHE_H
#include "qemu/rcu.h"
#include "exec/cpu-common.h"
#define TB_JMP_CACHE_BITS 12 #define TB_JMP_CACHE_BITS 12
#define TB_JMP_CACHE_SIZE (1 << TB_JMP_CACHE_BITS) #define TB_JMP_CACHE_SIZE (1 << TB_JMP_CACHE_BITS)
/* /*
* Invalidated in parallel; all accesses to 'tb' must be atomic. * Accessed in parallel; all accesses to 'tb' must be atomic.
* A valid entry is read/written by a single CPU, therefore there is * For CF_PCREL, accesses to 'pc' must be protected by a
* no need for qatomic_rcu_read() and pc is always consistent with a * load_acquire/store_release to 'tb'.
* non-NULL value of 'tb'. Strictly speaking pc is only needed for
* CF_PCREL, but it's used always for simplicity.
*/ */
typedef struct CPUJumpCache { struct CPUJumpCache {
struct rcu_head rcu; struct rcu_head rcu;
struct { struct {
TranslationBlock *tb; TranslationBlock *tb;
vaddr pc; vaddr pc;
} array[TB_JMP_CACHE_SIZE]; } array[TB_JMP_CACHE_SIZE];
} CPUJumpCache; };
#endif /* ACCEL_TCG_TB_JMP_CACHE_H */ #endif /* ACCEL_TCG_TB_JMP_CACHE_H */


@@ -23,7 +23,6 @@
#include "exec/cputlb.h" #include "exec/cputlb.h"
#include "exec/log.h" #include "exec/log.h"
#include "exec/exec-all.h" #include "exec/exec-all.h"
#include "exec/page-protection.h"
#include "exec/tb-flush.h" #include "exec/tb-flush.h"
#include "exec/translate-all.h" #include "exec/translate-all.h"
#include "sysemu/tcg.h" #include "sysemu/tcg.h"
@@ -713,7 +712,7 @@ static void tb_record(TranslationBlock *tb)
tb_page_addr_t paddr0 = tb_page_addr0(tb); tb_page_addr_t paddr0 = tb_page_addr0(tb);
tb_page_addr_t paddr1 = tb_page_addr1(tb); tb_page_addr_t paddr1 = tb_page_addr1(tb);
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS; tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS; tb_page_addr_t pindex1 = paddr0 >> TARGET_PAGE_BITS;
assert(paddr0 != -1); assert(paddr0 != -1);
if (unlikely(paddr1 != -1) && pindex0 != pindex1) { if (unlikely(paddr1 != -1) && pindex0 != pindex1) {
@@ -745,7 +744,7 @@ static void tb_remove(TranslationBlock *tb)
tb_page_addr_t paddr0 = tb_page_addr0(tb); tb_page_addr_t paddr0 = tb_page_addr0(tb);
tb_page_addr_t paddr1 = tb_page_addr1(tb); tb_page_addr_t paddr1 = tb_page_addr1(tb);
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS; tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS; tb_page_addr_t pindex1 = paddr0 >> TARGET_PAGE_BITS;
assert(paddr0 != -1); assert(paddr0 != -1);
if (unlikely(paddr1 != -1) && pindex0 != pindex1) { if (unlikely(paddr1 != -1) && pindex0 != pindex1) {
@@ -1022,7 +1021,7 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t last)
* Called with mmap_lock held for user-mode emulation * Called with mmap_lock held for user-mode emulation
* NOTE: this function must not be called while a TB is running. * NOTE: this function must not be called while a TB is running.
*/ */
static void tb_invalidate_phys_page(tb_page_addr_t addr) void tb_invalidate_phys_page(tb_page_addr_t addr)
{ {
tb_page_addr_t start, last; tb_page_addr_t start, last;
@@ -1084,7 +1083,8 @@ bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
if (current_tb_modified) { if (current_tb_modified) {
/* Force execution of one insn next time. */ /* Force execution of one insn next time. */
CPUState *cpu = current_cpu; CPUState *cpu = current_cpu;
cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu); cpu->cflags_next_tb =
1 | CF_LAST_IO | CF_NOIRQ | curr_cflags(current_cpu);
return true; return true;
} }
return false; return false;
@@ -1154,13 +1154,36 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
if (current_tb_modified) { if (current_tb_modified) {
page_collection_unlock(pages); page_collection_unlock(pages);
/* Force execution of one insn next time. */ /* Force execution of one insn next time. */
current_cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu); current_cpu->cflags_next_tb =
1 | CF_LAST_IO | CF_NOIRQ | curr_cflags(current_cpu);
mmap_unlock(); mmap_unlock();
cpu_loop_exit_noexc(current_cpu); cpu_loop_exit_noexc(current_cpu);
} }
#endif #endif
} }
/*
* Invalidate all TBs which intersect with the target physical
* address page @addr.
*/
void tb_invalidate_phys_page(tb_page_addr_t addr)
{
struct page_collection *pages;
tb_page_addr_t start, last;
PageDesc *p;
p = page_find(addr >> TARGET_PAGE_BITS);
if (p == NULL) {
return;
}
start = addr & TARGET_PAGE_MASK;
last = addr | ~TARGET_PAGE_MASK;
pages = page_collection_lock(start, last);
tb_invalidate_phys_page_range__locked(pages, p, start, last, 0);
page_collection_unlock(pages);
}
/* /*
* Invalidate all TBs which intersect with the target physical address range * Invalidate all TBs which intersect with the target physical address range
* [start;last]. NOTE: start and end may refer to *different* physical pages. * [start;last]. NOTE: start and end may refer to *different* physical pages.


@@ -123,12 +123,12 @@ void icount_prepare_for_run(CPUState *cpu, int64_t cpu_budget)
if (cpu->icount_budget == 0) { if (cpu->icount_budget == 0) {
/* /*
* We're called without the BQL, so must take it while * We're called without the iothread lock, so must take it while
* we're calling timer handlers. * we're calling timer handlers.
*/ */
bql_lock(); qemu_mutex_lock_iothread();
icount_notify_aio_contexts(); icount_notify_aio_contexts();
bql_unlock(); qemu_mutex_unlock_iothread();
} }
} }


@@ -76,7 +76,7 @@ static void *mttcg_cpu_thread_fn(void *arg)
rcu_add_force_rcu_notifier(&force_rcu.notifier); rcu_add_force_rcu_notifier(&force_rcu.notifier);
tcg_register_thread(); tcg_register_thread();
bql_lock(); qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread); qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id(); cpu->thread_id = qemu_get_thread_id();
@@ -91,9 +91,9 @@ static void *mttcg_cpu_thread_fn(void *arg)
do { do {
if (cpu_can_run(cpu)) { if (cpu_can_run(cpu)) {
int r; int r;
bql_unlock(); qemu_mutex_unlock_iothread();
r = tcg_cpu_exec(cpu); r = tcg_cpus_exec(cpu);
bql_lock(); qemu_mutex_lock_iothread();
switch (r) { switch (r) {
case EXCP_DEBUG: case EXCP_DEBUG:
cpu_handle_guest_debug(cpu); cpu_handle_guest_debug(cpu);
@@ -105,9 +105,9 @@ static void *mttcg_cpu_thread_fn(void *arg)
*/ */
break; break;
case EXCP_ATOMIC: case EXCP_ATOMIC:
bql_unlock(); qemu_mutex_unlock_iothread();
cpu_exec_step_atomic(cpu); cpu_exec_step_atomic(cpu);
bql_lock(); qemu_mutex_lock_iothread();
default: default:
/* Ignore everything else? */ /* Ignore everything else? */
break; break;
@@ -118,8 +118,8 @@ static void *mttcg_cpu_thread_fn(void *arg)
qemu_wait_io_event(cpu); qemu_wait_io_event(cpu);
} while (!cpu->unplug || cpu_can_run(cpu)); } while (!cpu->unplug || cpu_can_run(cpu));
tcg_cpu_destroy(cpu); tcg_cpus_destroy(cpu);
bql_unlock(); qemu_mutex_unlock_iothread();
rcu_remove_force_rcu_notifier(&force_rcu.notifier); rcu_remove_force_rcu_notifier(&force_rcu.notifier);
rcu_unregister_thread(); rcu_unregister_thread();
return NULL; return NULL;
@@ -137,6 +137,10 @@ void mttcg_start_vcpu_thread(CPUState *cpu)
g_assert(tcg_enabled()); g_assert(tcg_enabled());
tcg_cpu_init_cflags(cpu, current_machine->smp.max_cpus > 1); tcg_cpu_init_cflags(cpu, current_machine->smp.max_cpus > 1);
cpu->thread = g_new0(QemuThread, 1);
cpu->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(cpu->halt_cond);
/* create a thread per vCPU with TCG (MTTCG) */ /* create a thread per vCPU with TCG (MTTCG) */
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/TCG", snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/TCG",
cpu->cpu_index); cpu->cpu_index);


@@ -111,7 +111,7 @@ static void rr_wait_io_event(void)
while (all_cpu_threads_idle()) { while (all_cpu_threads_idle()) {
rr_stop_kick_timer(); rr_stop_kick_timer();
qemu_cond_wait_bql(first_cpu->halt_cond); qemu_cond_wait_iothread(first_cpu->halt_cond);
} }
rr_start_kick_timer(); rr_start_kick_timer();
@@ -131,7 +131,7 @@ static void rr_deal_with_unplugged_cpus(void)
CPU_FOREACH(cpu) { CPU_FOREACH(cpu) {
if (cpu->unplug && !cpu_can_run(cpu)) { if (cpu->unplug && !cpu_can_run(cpu)) {
tcg_cpu_destroy(cpu); tcg_cpus_destroy(cpu);
break; break;
} }
} }
@@ -188,7 +188,7 @@ static void *rr_cpu_thread_fn(void *arg)
rcu_add_force_rcu_notifier(&force_rcu); rcu_add_force_rcu_notifier(&force_rcu);
tcg_register_thread(); tcg_register_thread();
bql_lock(); qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread); qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id(); cpu->thread_id = qemu_get_thread_id();
@@ -198,7 +198,7 @@ static void *rr_cpu_thread_fn(void *arg)
/* wait for initial kick-off after machine start */ /* wait for initial kick-off after machine start */
while (first_cpu->stopped) { while (first_cpu->stopped) {
qemu_cond_wait_bql(first_cpu->halt_cond); qemu_cond_wait_iothread(first_cpu->halt_cond);
/* process any pending work */ /* process any pending work */
CPU_FOREACH(cpu) { CPU_FOREACH(cpu) {
@@ -218,9 +218,9 @@ static void *rr_cpu_thread_fn(void *arg)
/* Only used for icount_enabled() */ /* Only used for icount_enabled() */
int64_t cpu_budget = 0; int64_t cpu_budget = 0;
bql_unlock(); qemu_mutex_unlock_iothread();
replay_mutex_lock(); replay_mutex_lock();
bql_lock(); qemu_mutex_lock_iothread();
if (icount_enabled()) { if (icount_enabled()) {
int cpu_count = rr_cpu_count(); int cpu_count = rr_cpu_count();
@@ -254,23 +254,23 @@ static void *rr_cpu_thread_fn(void *arg)
if (cpu_can_run(cpu)) { if (cpu_can_run(cpu)) {
int r; int r;
bql_unlock(); qemu_mutex_unlock_iothread();
if (icount_enabled()) { if (icount_enabled()) {
icount_prepare_for_run(cpu, cpu_budget); icount_prepare_for_run(cpu, cpu_budget);
} }
r = tcg_cpu_exec(cpu); r = tcg_cpus_exec(cpu);
if (icount_enabled()) { if (icount_enabled()) {
icount_process_data(cpu); icount_process_data(cpu);
} }
bql_lock(); qemu_mutex_lock_iothread();
if (r == EXCP_DEBUG) { if (r == EXCP_DEBUG) {
cpu_handle_guest_debug(cpu); cpu_handle_guest_debug(cpu);
break; break;
} else if (r == EXCP_ATOMIC) { } else if (r == EXCP_ATOMIC) {
bql_unlock(); qemu_mutex_unlock_iothread();
cpu_exec_step_atomic(cpu); cpu_exec_step_atomic(cpu);
bql_lock(); qemu_mutex_lock_iothread();
break; break;
} }
} else if (cpu->stop) { } else if (cpu->stop) {
@@ -317,23 +317,22 @@ void rr_start_vcpu_thread(CPUState *cpu)
tcg_cpu_init_cflags(cpu, false); tcg_cpu_init_cflags(cpu, false);
if (!single_tcg_cpu_thread) { if (!single_tcg_cpu_thread) {
single_tcg_halt_cond = cpu->halt_cond; cpu->thread = g_new0(QemuThread, 1);
single_tcg_cpu_thread = cpu->thread; cpu->halt_cond = g_new0(QemuCond, 1);
qemu_cond_init(cpu->halt_cond);
/* share a single thread for all cpus with TCG */ /* share a single thread for all cpus with TCG */
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "ALL CPUs/TCG"); snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "ALL CPUs/TCG");
qemu_thread_create(cpu->thread, thread_name, qemu_thread_create(cpu->thread, thread_name,
rr_cpu_thread_fn, rr_cpu_thread_fn,
cpu, QEMU_THREAD_JOINABLE); cpu, QEMU_THREAD_JOINABLE);
single_tcg_halt_cond = cpu->halt_cond;
single_tcg_cpu_thread = cpu->thread;
} else { } else {
/* we share the thread, dump spare data */ /* we share the thread */
g_free(cpu->thread);
qemu_cond_destroy(cpu->halt_cond);
g_free(cpu->halt_cond);
cpu->thread = single_tcg_cpu_thread; cpu->thread = single_tcg_cpu_thread;
cpu->halt_cond = single_tcg_halt_cond; cpu->halt_cond = single_tcg_halt_cond;
/* copy the stuff done at start of rr_cpu_thread_fn */
cpu->thread_id = first_cpu->thread_id; cpu->thread_id = first_cpu->thread_id;
cpu->neg.can_do_io = 1; cpu->neg.can_do_io = 1;
cpu->created = true; cpu->created = true;


@@ -35,9 +35,7 @@
#include "exec/exec-all.h" #include "exec/exec-all.h"
#include "exec/hwaddr.h" #include "exec/hwaddr.h"
#include "exec/tb-flush.h" #include "exec/tb-flush.h"
#include "gdbstub/enums.h" #include "exec/gdbstub.h"
#include "hw/core/cpu.h"
#include "tcg-accel-ops.h" #include "tcg-accel-ops.h"
#include "tcg-accel-ops-mttcg.h" #include "tcg-accel-ops-mttcg.h"
@@ -62,15 +60,15 @@ void tcg_cpu_init_cflags(CPUState *cpu, bool parallel)
cflags |= parallel ? CF_PARALLEL : 0; cflags |= parallel ? CF_PARALLEL : 0;
cflags |= icount_enabled() ? CF_USE_ICOUNT : 0; cflags |= icount_enabled() ? CF_USE_ICOUNT : 0;
tcg_cflags_set(cpu, cflags); cpu->tcg_cflags |= cflags;
} }
void tcg_cpu_destroy(CPUState *cpu) void tcg_cpus_destroy(CPUState *cpu)
{ {
cpu_thread_signal_destroyed(cpu); cpu_thread_signal_destroyed(cpu);
} }
int tcg_cpu_exec(CPUState *cpu) int tcg_cpus_exec(CPUState *cpu)
{ {
int ret; int ret;
assert(tcg_enabled()); assert(tcg_enabled());
@@ -90,7 +88,7 @@ static void tcg_cpu_reset_hold(CPUState *cpu)
/* mask must never be zero, except for A20 change call */ /* mask must never be zero, except for A20 change call */
void tcg_handle_interrupt(CPUState *cpu, int mask) void tcg_handle_interrupt(CPUState *cpu, int mask)
{ {
g_assert(bql_locked()); g_assert(qemu_mutex_iothread_locked());
cpu->interrupt_request |= mask; cpu->interrupt_request |= mask;


@@ -14,8 +14,8 @@
#include "sysemu/cpus.h" #include "sysemu/cpus.h"
void tcg_cpu_destroy(CPUState *cpu); void tcg_cpus_destroy(CPUState *cpu);
int tcg_cpu_exec(CPUState *cpu); int tcg_cpus_exec(CPUState *cpu);
void tcg_handle_interrupt(CPUState *cpu, int mask); void tcg_handle_interrupt(CPUState *cpu, int mask);
void tcg_cpu_init_cflags(CPUState *cpu, bool parallel); void tcg_cpu_init_cflags(CPUState *cpu, bool parallel);


@@ -38,7 +38,7 @@
#if !defined(CONFIG_USER_ONLY) #if !defined(CONFIG_USER_ONLY)
#include "hw/boards.h" #include "hw/boards.h"
#endif #endif
#include "internal-common.h" #include "internal-target.h"
struct TCGState { struct TCGState {
AccelState parent_obj; AccelState parent_obj;


@@ -63,7 +63,7 @@
#include "tb-context.h" #include "tb-context.h"
#include "internal-common.h" #include "internal-common.h"
#include "internal-target.h" #include "internal-target.h"
#include "tcg/perf.h" #include "perf.h"
#include "tcg/insn-start-words.h" #include "tcg/insn-start-words.h"
TBContext tb_ctx; TBContext tb_ctx;
@@ -256,6 +256,7 @@ bool cpu_unwind_state_data(CPUState *cpu, uintptr_t host_pc, uint64_t *data)
void page_init(void) void page_init(void)
{ {
page_size_init();
page_table_config_init(); page_table_config_init();
} }
@@ -303,7 +304,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
if (phys_pc == -1) { if (phys_pc == -1) {
/* Generate a one-shot TB with 1 insn in it */ /* Generate a one-shot TB with 1 insn in it */
cflags = (cflags & ~CF_COUNT_MASK) | 1; cflags = (cflags & ~CF_COUNT_MASK) | CF_LAST_IO | 1;
} }
max_insns = cflags & CF_COUNT_MASK; max_insns = cflags & CF_COUNT_MASK;
@@ -631,10 +632,10 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
* operations only (which execute after completion) so we don't * operations only (which execute after completion) so we don't
* double instrument the instruction. * double instrument the instruction.
*/ */
cpu->cflags_next_tb = curr_cflags(cpu) | CF_MEMI_ONLY | n; cpu->cflags_next_tb = curr_cflags(cpu) | CF_MEMI_ONLY | CF_LAST_IO | n;
if (qemu_loglevel_mask(CPU_LOG_EXEC)) { if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
vaddr pc = cpu->cc->get_pc(cpu); vaddr pc = log_pc(cpu, tb);
if (qemu_log_in_addr_range(pc)) { if (qemu_log_in_addr_range(pc)) {
qemu_log("cpu_io_recompile: rewound execution of TB to %016" qemu_log("cpu_io_recompile: rewound execution of TB to %016"
VADDR_PRIx "\n", pc); VADDR_PRIx "\n", pc);
@@ -644,6 +645,15 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
cpu_loop_exit_noexc(cpu); cpu_loop_exit_noexc(cpu);
} }
#else /* CONFIG_USER_ONLY */
void cpu_interrupt(CPUState *cpu, int mask)
{
g_assert(qemu_mutex_iothread_locked());
cpu->interrupt_request |= mask;
qatomic_set(&cpu->neg.icount_decr.u16.high, -1);
}
#endif /* CONFIG_USER_ONLY */ #endif /* CONFIG_USER_ONLY */
/* /*


@@ -12,23 +12,26 @@
#include "qemu/error-report.h" #include "qemu/error-report.h"
#include "exec/exec-all.h" #include "exec/exec-all.h"
#include "exec/translator.h" #include "exec/translator.h"
#include "exec/cpu_ldst.h"
#include "exec/plugin-gen.h" #include "exec/plugin-gen.h"
#include "exec/cpu_ldst.h"
#include "tcg/tcg-op-common.h" #include "tcg/tcg-op-common.h"
#include "internal-target.h" #include "internal-target.h"
#include "disas/disas.h"
static void set_can_do_io(DisasContextBase *db, bool val) static void set_can_do_io(DisasContextBase *db, bool val)
{ {
QEMU_BUILD_BUG_ON(sizeof_field(CPUState, neg.can_do_io) != 1); if (db->saved_can_do_io != val) {
tcg_gen_st8_i32(tcg_constant_i32(val), tcg_env, db->saved_can_do_io = val;
offsetof(ArchCPU, parent_obj.neg.can_do_io) -
offsetof(ArchCPU, env)); QEMU_BUILD_BUG_ON(sizeof_field(CPUState, neg.can_do_io) != 1);
tcg_gen_st8_i32(tcg_constant_i32(val), tcg_env,
offsetof(ArchCPU, parent_obj.neg.can_do_io) -
offsetof(ArchCPU, env));
}
} }
bool translator_io_start(DisasContextBase *db) bool translator_io_start(DisasContextBase *db)
{ {
set_can_do_io(db, true);
/* /*
* Ensure that this instruction will be the last in the TB. * Ensure that this instruction will be the last in the TB.
* The target may override this to something more forceful. * The target may override this to something more forceful.
@@ -81,6 +84,13 @@ static TCGOp *gen_tb_start(DisasContextBase *db, uint32_t cflags)
- offsetof(ArchCPU, env)); - offsetof(ArchCPU, env));
} }
/*
* cpu->neg.can_do_io is set automatically here at the beginning of
* each translation block. The cost is minimal, plus it would be
* very easy to forget doing it in the translator.
*/
set_can_do_io(db, db->max_insns == 1 && (cflags & CF_LAST_IO));
return icount_start_insn; return icount_start_insn;
} }
@@ -119,7 +129,6 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
{ {
uint32_t cflags = tb_cflags(tb); uint32_t cflags = tb_cflags(tb);
TCGOp *icount_start_insn; TCGOp *icount_start_insn;
TCGOp *first_insn_start = NULL;
bool plugin_enabled; bool plugin_enabled;
/* Initialize DisasContext */ /* Initialize DisasContext */
@@ -130,12 +139,9 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
db->num_insns = 0; db->num_insns = 0;
db->max_insns = *max_insns; db->max_insns = *max_insns;
db->singlestep_enabled = cflags & CF_SINGLE_STEP; db->singlestep_enabled = cflags & CF_SINGLE_STEP;
db->insn_start = NULL; db->saved_can_do_io = -1;
db->fake_insn = false;
db->host_addr[0] = host_pc; db->host_addr[0] = host_pc;
db->host_addr[1] = NULL; db->host_addr[1] = NULL;
db->record_start = 0;
db->record_len = 0;
ops->init_disas_context(db, cpu); ops->init_disas_context(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */ tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
@@ -145,28 +151,32 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
ops->tb_start(db, cpu); ops->tb_start(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */ tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
plugin_enabled = plugin_gen_tb_start(cpu, db); if (cflags & CF_MEMI_ONLY) {
/* We should only see CF_MEMI_ONLY for io_recompile. */
assert(cflags & CF_LAST_IO);
plugin_enabled = plugin_gen_tb_start(cpu, db, true);
} else {
plugin_enabled = plugin_gen_tb_start(cpu, db, false);
}
db->plugin_enabled = plugin_enabled; db->plugin_enabled = plugin_enabled;
while (true) { while (true) {
*max_insns = ++db->num_insns; *max_insns = ++db->num_insns;
ops->insn_start(db, cpu); ops->insn_start(db, cpu);
db->insn_start = tcg_last_op();
if (first_insn_start == NULL) {
first_insn_start = db->insn_start;
}
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */ tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
if (plugin_enabled) { if (plugin_enabled) {
plugin_gen_insn_start(cpu, db); plugin_gen_insn_start(cpu, db);
} }
/* /* Disassemble one instruction. The translate_insn hook should
* Disassemble one instruction. The translate_insn hook should update db->pc_next and db->is_jmp to indicate what should be
* update db->pc_next and db->is_jmp to indicate what should be done next -- either exiting this loop or locate the start of
* done next -- either exiting this loop or locate the start of the next instruction. */
* the next instruction. if (db->num_insns == db->max_insns && (cflags & CF_LAST_IO)) {
*/ /* Accept I/O on the last instruction. */
set_can_do_io(db, true);
}
ops->translate_insn(db, cpu); ops->translate_insn(db, cpu);
/* /*
@@ -199,277 +209,172 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
ops->tb_stop(db, cpu); ops->tb_stop(db, cpu);
gen_tb_end(tb, cflags, icount_start_insn, db->num_insns); gen_tb_end(tb, cflags, icount_start_insn, db->num_insns);
/*
* Manage can_do_io for the translation block: set to false before
* the first insn and set to true before the last insn.
*/
if (db->num_insns == 1) {
tcg_debug_assert(first_insn_start == db->insn_start);
} else {
tcg_debug_assert(first_insn_start != db->insn_start);
tcg_ctx->emit_before_op = first_insn_start;
set_can_do_io(db, false);
}
tcg_ctx->emit_before_op = db->insn_start;
set_can_do_io(db, true);
tcg_ctx->emit_before_op = NULL;
/* May be used by disas_log or plugin callbacks. */
tb->size = db->pc_next - db->pc_first;
tb->icount = db->num_insns;
if (plugin_enabled) { if (plugin_enabled) {
plugin_gen_tb_end(cpu, db->num_insns); plugin_gen_tb_end(cpu, db->num_insns);
} }
/* The disas_log hook may use these values rather than recompute. */
tb->size = db->pc_next - db->pc_first;
tb->icount = db->num_insns;
if (qemu_loglevel_mask(CPU_LOG_TB_IN_ASM) if (qemu_loglevel_mask(CPU_LOG_TB_IN_ASM)
&& qemu_log_in_addr_range(db->pc_first)) { && qemu_log_in_addr_range(db->pc_first)) {
FILE *logfile = qemu_log_trylock(); FILE *logfile = qemu_log_trylock();
if (logfile) { if (logfile) {
fprintf(logfile, "----------------\n"); fprintf(logfile, "----------------\n");
ops->disas_log(db, cpu, logfile);
if (!ops->disas_log ||
!ops->disas_log(db, cpu, logfile)) {
fprintf(logfile, "IN: %s\n", lookup_symbol(db->pc_first));
target_disas(logfile, cpu, db);
}
fprintf(logfile, "\n"); fprintf(logfile, "\n");
qemu_log_unlock(logfile); qemu_log_unlock(logfile);
} }
} }
} }
static bool translator_ld(CPUArchState *env, DisasContextBase *db, static void *translator_access(CPUArchState *env, DisasContextBase *db,
void *dest, vaddr pc, size_t len) vaddr pc, size_t len)
{ {
TranslationBlock *tb = db->tb;
vaddr last = pc + len - 1;
void *host; void *host;
vaddr base; vaddr base, end;
TranslationBlock *tb;
tb = db->tb;
/* Use slow path if first page is MMIO. */ /* Use slow path if first page is MMIO. */
if (unlikely(tb_page_addr0(tb) == -1)) { if (unlikely(tb_page_addr0(tb) == -1)) {
/* We capped translation with first page MMIO in tb_gen_code. */ return NULL;
tcg_debug_assert(db->max_insns == 1);
return false;
} }
host = db->host_addr[0]; end = pc + len - 1;
base = db->pc_first; if (likely(is_same_page(db, end))) {
host = db->host_addr[0];
if (likely(((base ^ last) & TARGET_PAGE_MASK) == 0)) { base = db->pc_first;
/* Entire read is from the first page. */ } else {
memcpy(dest, host + (pc - base), len);
return true;
}
if (unlikely(((base ^ pc) & TARGET_PAGE_MASK) == 0)) {
/* Read begins on the first page and extends to the second. */
size_t len0 = -(pc | TARGET_PAGE_MASK);
-        memcpy(dest, host + (pc - base), len0);
-        pc += len0;
-        dest += len0;
-        len -= len0;
-    }
-
-    /*
-     * The read must conclude on the second page and not extend to a third.
-     *
-     * TODO: We could allow the two pages to be virtually discontiguous,
-     * since we already allow the two pages to be physically discontiguous.
-     * The only reasonable use case would be executing an insn at the end
-     * of the address space wrapping around to the beginning.  For that,
-     * we would need to know the current width of the address space.
-     * In the meantime, assert.
-     */
-    base = (base & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
-    assert(((base ^ pc) & TARGET_PAGE_MASK) == 0);
-    assert(((base ^ last) & TARGET_PAGE_MASK) == 0);
-
-    host = db->host_addr[1];
-    if (host == NULL) {
-        tb_page_addr_t page0, old_page1, new_page1;
-
-        new_page1 = get_page_addr_code_hostp(env, base, &db->host_addr[1]);
-
-        /*
-         * If the second page is MMIO, treat as if the first page
-         * was MMIO as well, so that we do not cache the TB.
-         */
-        if (unlikely(new_page1 == -1)) {
-            tb_unlock_pages(tb);
-            tb_set_page_addr0(tb, -1);
-            /* Require that this be the final insn. */
-            db->max_insns = db->num_insns;
-            return false;
-        }
-
-        /*
-         * If this is not the first time around, and page1 matches,
-         * then we already have the page locked.  Alternately, we're
-         * not doing anything to prevent the PTE from changing, so
-         * we might wind up with a different page, requiring us to
-         * re-do the locking.
-         */
-        old_page1 = tb_page_addr1(tb);
-        if (likely(new_page1 != old_page1)) {
-            page0 = tb_page_addr0(tb);
-            if (unlikely(old_page1 != -1)) {
-                tb_unlock_page1(page0, old_page1);
-            }
-            tb_set_page_addr1(tb, new_page1);
-            tb_lock_page1(page0, new_page1);
-        }
-        host = db->host_addr[1];
-    }
-
-    memcpy(dest, host + (pc - base), len);
-    return true;
-}
+        host = db->host_addr[1];
+        base = TARGET_PAGE_ALIGN(db->pc_first);
+        if (host == NULL) {
+            tb_page_addr_t page0, old_page1, new_page1;
+
+            new_page1 = get_page_addr_code_hostp(env, base, &db->host_addr[1]);
+
+            /*
+             * If the second page is MMIO, treat as if the first page
+             * was MMIO as well, so that we do not cache the TB.
+             */
+            if (unlikely(new_page1 == -1)) {
+                tb_unlock_pages(tb);
+                tb_set_page_addr0(tb, -1);
+                return NULL;
+            }
+
+            /*
+             * If this is not the first time around, and page1 matches,
+             * then we already have the page locked.  Alternately, we're
+             * not doing anything to prevent the PTE from changing, so
+             * we might wind up with a different page, requiring us to
+             * re-do the locking.
+             */
+            old_page1 = tb_page_addr1(tb);
+            if (likely(new_page1 != old_page1)) {
+                page0 = tb_page_addr0(tb);
+                if (unlikely(old_page1 != -1)) {
+                    tb_unlock_page1(page0, old_page1);
+                }
+                tb_set_page_addr1(tb, new_page1);
+                tb_lock_page1(page0, new_page1);
+            }
+            host = db->host_addr[1];
+        }
+
+        /* Use slow path when crossing pages.  */
+        if (is_same_page(db, pc)) {
+            return NULL;
+        }
+    }
+
+    tcg_debug_assert(pc >= base);
+    return host + (pc - base);
+}
 
-static void record_save(DisasContextBase *db, vaddr pc,
-                        const void *from, int size)
-{
-    int offset;
-
-    /* Do not record probes before the start of TB. */
-    if (pc < db->pc_first) {
-        return;
-    }
-
-    /*
-     * In translator_access, we verified that pc is within 2 pages
-     * of pc_first, thus this will never overflow.
-     */
-    offset = pc - db->pc_first;
-
-    /*
-     * Either the first or second page may be I/O.  If it is the second,
-     * then the first byte we need to record will be at a non-zero offset.
-     * In either case, we should not need to record but a single insn.
-     */
-    if (db->record_len == 0) {
-        db->record_start = offset;
-        db->record_len = size;
-    } else {
-        assert(offset == db->record_start + db->record_len);
-        assert(db->record_len + size <= sizeof(db->record));
-        db->record_len += size;
-    }
-
-    memcpy(db->record + (offset - db->record_start), from, size);
-}
+static void plugin_insn_append(abi_ptr pc, const void *from, size_t size)
+{
+#ifdef CONFIG_PLUGIN
+    struct qemu_plugin_insn *insn = tcg_ctx->plugin_insn;
+    abi_ptr off;
+
+    if (insn == NULL) {
+        return;
+    }
+    off = pc - insn->vaddr;
+    if (off < insn->data->len) {
+        g_byte_array_set_size(insn->data, off);
+    } else if (off > insn->data->len) {
+        /* we have an unexpected gap */
+        g_assert_not_reached();
+    }
+
+    insn->data = g_byte_array_append(insn->data, from, size);
+#endif
+}
 
-size_t translator_st_len(const DisasContextBase *db)
-{
-    return db->fake_insn ? db->record_len : db->tb->size;
-}
-
-bool translator_st(const DisasContextBase *db, void *dest,
-                   vaddr addr, size_t len)
-{
-    size_t offset, offset_end;
-
-    if (addr < db->pc_first) {
-        return false;
-    }
-    offset = addr - db->pc_first;
-    offset_end = offset + len;
-    if (offset_end > translator_st_len(db)) {
-        return false;
-    }
-
-    if (!db->fake_insn) {
-        size_t offset_page1 = -(db->pc_first | TARGET_PAGE_MASK);
-
-        /* Get all the bytes from the first page. */
-        if (db->host_addr[0]) {
-            if (offset_end <= offset_page1) {
-                memcpy(dest, db->host_addr[0] + offset, len);
-                return true;
-            }
-            if (offset < offset_page1) {
-                size_t len0 = offset_page1 - offset;
-                memcpy(dest, db->host_addr[0] + offset, len0);
-                offset += len0;
-                dest += len0;
-            }
-        }
-
-        /* Get any bytes from the second page. */
-        if (db->host_addr[1] && offset >= offset_page1) {
-            memcpy(dest, db->host_addr[1] + (offset - offset_page1),
-                   offset_end - offset);
-            return true;
-        }
-    }
-
-    /* Else get recorded bytes. */
-    if (db->record_len != 0 &&
-        offset >= db->record_start &&
-        offset_end <= db->record_start + db->record_len) {
-        memcpy(dest, db->record + (offset - db->record_start),
-               offset_end - offset);
-        return true;
-    }
-    return false;
-}
-
-uint8_t translator_ldub(CPUArchState *env, DisasContextBase *db, vaddr pc)
-{
-    uint8_t raw;
-
-    if (!translator_ld(env, db, &raw, pc, sizeof(raw))) {
-        raw = cpu_ldub_code(env, pc);
-        record_save(db, pc, &raw, sizeof(raw));
-    }
-    return raw;
-}
+uint8_t translator_ldub(CPUArchState *env, DisasContextBase *db, abi_ptr pc)
+{
+    uint8_t ret;
+    void *p = translator_access(env, db, pc, sizeof(ret));
+
+    if (p) {
+        plugin_insn_append(pc, p, sizeof(ret));
+        return ldub_p(p);
+    }
+    ret = cpu_ldub_code(env, pc);
+    plugin_insn_append(pc, &ret, sizeof(ret));
+    return ret;
+}
 
-uint16_t translator_lduw(CPUArchState *env, DisasContextBase *db, vaddr pc)
-{
-    uint16_t raw, tgt;
-
-    if (translator_ld(env, db, &raw, pc, sizeof(raw))) {
-        tgt = tswap16(raw);
-    } else {
-        tgt = cpu_lduw_code(env, pc);
-        raw = tswap16(tgt);
-        record_save(db, pc, &raw, sizeof(raw));
-    }
-    return tgt;
-}
+uint16_t translator_lduw(CPUArchState *env, DisasContextBase *db, abi_ptr pc)
+{
+    uint16_t ret, plug;
+    void *p = translator_access(env, db, pc, sizeof(ret));
+
+    if (p) {
+        plugin_insn_append(pc, p, sizeof(ret));
+        return lduw_p(p);
+    }
+    ret = cpu_lduw_code(env, pc);
+    plug = tswap16(ret);
+    plugin_insn_append(pc, &plug, sizeof(ret));
+    return ret;
+}
 
-uint32_t translator_ldl(CPUArchState *env, DisasContextBase *db, vaddr pc)
-{
-    uint32_t raw, tgt;
-
-    if (translator_ld(env, db, &raw, pc, sizeof(raw))) {
-        tgt = tswap32(raw);
-    } else {
-        tgt = cpu_ldl_code(env, pc);
-        raw = tswap32(tgt);
-        record_save(db, pc, &raw, sizeof(raw));
-    }
-    return tgt;
-}
+uint32_t translator_ldl(CPUArchState *env, DisasContextBase *db, abi_ptr pc)
+{
+    uint32_t ret, plug;
+    void *p = translator_access(env, db, pc, sizeof(ret));
+
+    if (p) {
+        plugin_insn_append(pc, p, sizeof(ret));
+        return ldl_p(p);
+    }
+    ret = cpu_ldl_code(env, pc);
+    plug = tswap32(ret);
+    plugin_insn_append(pc, &plug, sizeof(ret));
+    return ret;
+}
 
-uint64_t translator_ldq(CPUArchState *env, DisasContextBase *db, vaddr pc)
-{
-    uint64_t raw, tgt;
-
-    if (translator_ld(env, db, &raw, pc, sizeof(raw))) {
-        tgt = tswap64(raw);
-    } else {
-        tgt = cpu_ldq_code(env, pc);
-        raw = tswap64(tgt);
-        record_save(db, pc, &raw, sizeof(raw));
-    }
-    return tgt;
-}
+uint64_t translator_ldq(CPUArchState *env, DisasContextBase *db, abi_ptr pc)
+{
+    uint64_t ret, plug;
+    void *p = translator_access(env, db, pc, sizeof(ret));
+
+    if (p) {
+        plugin_insn_append(pc, p, sizeof(ret));
+        return ldq_p(p);
+    }
+    ret = cpu_ldq_code(env, pc);
+    plug = tswap64(ret);
+    plugin_insn_append(pc, &plug, sizeof(ret));
+    return ret;
+}
 
-void translator_fake_ld(DisasContextBase *db, const void *data, size_t len)
-{
-    db->fake_insn = true;
-    record_save(db, db->pc_first, data, len);
-}
+void translator_fake_ldb(uint8_t insn8, abi_ptr pc)
+{
+    plugin_insn_append(pc, &insn8, sizeof(insn8));
+}
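The load helpers above keep two byte orders in play: `raw`/`plug` holds the bytes as they sit in guest memory, while the returned `tgt`/`ret` is in host order, with `tswap16/32/64` converting between them. A minimal stand-alone sketch of that conversion, assuming a guest whose endianness differs from the host's (the model function name is ours, not QEMU's):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Model of tswap16 for a guest whose endianness differs from the
 * host's: a plain byte swap.  (When guest and host endianness match,
 * tswap16 is the identity.)  "tswap16_model" is a stand-in name.
 */
static uint16_t tswap16_model(uint16_t x)
{
    return (uint16_t)((x << 8) | (x >> 8));
}
```

Because the swap is an involution, recording the swapped ("raw") bytes and swapping again on read recovers the original value, which is why the helpers can store guest-order bytes yet return host-order results.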

View File

@@ -24,9 +24,7 @@
 #include "qemu/bitops.h"
 #include "qemu/rcu.h"
 #include "exec/cpu_ldst.h"
-#include "qemu/main-loop.h"
 #include "exec/translate-all.h"
-#include "exec/page-protection.h"
 #include "exec/helper-proto.h"
 #include "qemu/atomic128.h"
 #include "trace/trace-root.h"
@@ -38,13 +36,6 @@ __thread uintptr_t helper_retaddr;
 //#define DEBUG_SIGNAL
 
-void cpu_interrupt(CPUState *cpu, int mask)
-{
-    g_assert(bql_locked());
-    cpu->interrupt_request |= mask;
-    qatomic_set(&cpu->neg.icount_decr.u16.high, -1);
-}
-
 /*
  * Adjust the pc to pass to cpu_restore_state; return the memop type.
  */
@@ -660,17 +651,16 @@ void page_protect(tb_page_addr_t address)
 {
     PageFlagsNode *p;
     target_ulong start, last;
-    int host_page_size = qemu_real_host_page_size();
     int prot;
 
     assert_memory_lock();
 
-    if (host_page_size <= TARGET_PAGE_SIZE) {
+    if (qemu_host_page_size <= TARGET_PAGE_SIZE) {
         start = address & TARGET_PAGE_MASK;
         last = start + TARGET_PAGE_SIZE - 1;
     } else {
-        start = address & -host_page_size;
-        last = start + host_page_size - 1;
+        start = address & qemu_host_page_mask;
+        last = start + qemu_host_page_size - 1;
     }
 
     p = pageflags_find(start, last);
@@ -681,7 +671,7 @@ void page_protect(tb_page_addr_t address)
     if (unlikely(p->itree.last < last)) {
         /* More than one protection region covers the one host page. */
-        assert(TARGET_PAGE_SIZE < host_page_size);
+        assert(TARGET_PAGE_SIZE < qemu_host_page_size);
         while ((p = pageflags_next(p, start, last)) != NULL) {
             prot |= p->flags;
         }
@@ -689,7 +679,7 @@ void page_protect(tb_page_addr_t address)
     if (prot & PAGE_WRITE) {
         pageflags_set_clear(start, last, 0, PAGE_WRITE);
-        mprotect(g2h_untagged(start), last - start + 1,
+        mprotect(g2h_untagged(start), qemu_host_page_size,
                  prot & (PAGE_READ | PAGE_EXEC) ? PROT_READ : PROT_NONE);
     }
 }
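The range selection in `page_protect` above hinges on whether the host page is larger than the target page: protection must always cover a whole host page. A stand-alone sketch of just that rounding, with an assumed 4k target page size and hypothetical names (not taken from any real build):

```c
#include <stdint.h>

#define TARGET_PAGE_SIZE 0x1000u
#define TARGET_PAGE_MASK (~(uint64_t)(TARGET_PAGE_SIZE - 1))

/*
 * Sketch of the start/last computation: when the host page is larger
 * than the target page, round down to the containing host page and
 * cover it entirely; otherwise a single target page suffices.
 */
static void protect_range(uint64_t address, uint64_t host_page_size,
                          uint64_t *start, uint64_t *last)
{
    if (host_page_size <= TARGET_PAGE_SIZE) {
        *start = address & TARGET_PAGE_MASK;
        *last = *start + TARGET_PAGE_SIZE - 1;
    } else {
        /* -host_page_size is the mask for a power-of-two size. */
        *start = address & -host_page_size;
        *last = *start + host_page_size - 1;
    }
}
```

For example, with a 16k host page, address 0x12345 maps to the inclusive range [0x10000, 0x13fff] rather than a single 4k page.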
@@ -735,19 +725,18 @@ int page_unprotect(target_ulong address, uintptr_t pc)
     }
 #endif
     } else {
-        int host_page_size = qemu_real_host_page_size();
         target_ulong start, len, i;
         int prot;
 
-        if (host_page_size <= TARGET_PAGE_SIZE) {
+        if (qemu_host_page_size <= TARGET_PAGE_SIZE) {
             start = address & TARGET_PAGE_MASK;
             len = TARGET_PAGE_SIZE;
             prot = p->flags | PAGE_WRITE;
             pageflags_set_clear(start, start + len - 1, PAGE_WRITE, 0);
             current_tb_invalidated = tb_invalidate_phys_page_unwind(start, pc);
         } else {
-            start = address & -host_page_size;
-            len = host_page_size;
+            start = address & qemu_host_page_mask;
+            len = qemu_host_page_size;
             prot = 0;
 
             for (i = 0; i < len; i += TARGET_PAGE_SIZE) {
@@ -773,7 +762,7 @@ int page_unprotect(target_ulong address, uintptr_t pc)
             if (prot & PAGE_EXEC) {
                 prot = (prot & ~PAGE_EXEC) | PAGE_READ;
             }
-            mprotect((void *)g2h_untagged(start), len, prot & PAGE_RWX);
+            mprotect((void *)g2h_untagged(start), len, prot & PAGE_BITS);
         }
         mmap_unlock();
@@ -873,7 +862,7 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, vaddr addr,
 typedef struct TargetPageDataNode {
     struct rcu_head rcu;
     IntervalTreeNode itree;
-    char data[] __attribute__((aligned));
+    char data[TPD_PAGES][TARGET_PAGE_DATA_SIZE] __attribute__((aligned));
 } TargetPageDataNode;
 
 static IntervalTreeRoot targetdata_root;
@@ -911,8 +900,7 @@ void page_reset_target_data(target_ulong start, target_ulong last)
         n_last = MIN(last, n->last);
         p_len = (n_last + 1 - n_start) >> TARGET_PAGE_BITS;
 
-        memset(t->data + p_ofs * TARGET_PAGE_DATA_SIZE, 0,
-               p_len * TARGET_PAGE_DATA_SIZE);
+        memset(t->data[p_ofs], 0, p_len * TARGET_PAGE_DATA_SIZE);
     }
 }
@@ -920,7 +908,7 @@ void *page_get_target_data(target_ulong address)
 {
     IntervalTreeNode *n;
     TargetPageDataNode *t;
-    target_ulong page, region, p_ofs;
+    target_ulong page, region;
 
     page = address & TARGET_PAGE_MASK;
     region = address & TBD_MASK;
@@ -936,8 +924,7 @@ void *page_get_target_data(target_ulong address)
         mmap_lock();
         n = interval_tree_iter_first(&targetdata_root, page, page);
         if (!n) {
-            t = g_malloc0(sizeof(TargetPageDataNode)
-                          + TPD_PAGES * TARGET_PAGE_DATA_SIZE);
+            t = g_new0(TargetPageDataNode, 1);
             n = &t->itree;
             n->start = region;
             n->last = region | ~TBD_MASK;
@@ -947,8 +934,7 @@ void *page_get_target_data(target_ulong address)
     }
 
     t = container_of(n, TargetPageDataNode, itree);
-    p_ofs = (page - region) >> TARGET_PAGE_BITS;
-    return t->data + p_ofs * TARGET_PAGE_DATA_SIZE;
+    return t->data[(page - region) >> TARGET_PAGE_BITS];
 }
 #else
 void page_reset_target_data(target_ulong start, target_ulong last) { }
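The hunks above trade a fixed 2-D array `t->data[TPD_PAGES][TARGET_PAGE_DATA_SIZE]` for a flexible array member indexed by byte offset; both address the same bytes in one allocation. A stand-alone sketch of the flexible-array layout (`PageDataNode`, `PAGES`, and `PAGE_DATA_SIZE` are stand-ins for the QEMU names):

```c
#include <stdlib.h>
#include <string.h>

#define PAGE_DATA_SIZE 16
#define PAGES 4

/* Header followed by PAGES slots of PAGE_DATA_SIZE bytes each. */
typedef struct {
    long header;                        /* stand-in for the rcu/itree fields */
    char data[] __attribute__((aligned));
} PageDataNode;

/* t->data + p_ofs * PAGE_DATA_SIZE is the slot for page p_ofs,
 * exactly equivalent to t->data[p_ofs] in the 2-D form. */
static char *page_slot(PageDataNode *t, size_t p_ofs)
{
    return t->data + p_ofs * PAGE_DATA_SIZE;
}

static PageDataNode *page_node_new(void)
{
    /* One allocation covering the header plus all per-page slots. */
    return calloc(1, sizeof(PageDataNode) + PAGES * PAGE_DATA_SIZE);
}
```

The flexible-array form lets `TARGET_PAGE_DATA_SIZE` become a runtime quantity, since the slot stride is no longer baked into the struct type.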

View File

@@ -1,18 +0,0 @@
/*
* SPDX-FileContributor: Philippe Mathieu-Daudé <philmd@linaro.org>
* SPDX-FileCopyrightText: 2023 Linaro Ltd.
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef ACCEL_TCG_VCPU_STATE_H
#define ACCEL_TCG_VCPU_STATE_H
#include "hw/core/cpu.h"
#ifdef CONFIG_USER_ONLY
static inline TaskState *get_task_state(const CPUState *cs)
{
return cs->opaque;
}
#endif
#endif

View File

@@ -1,143 +0,0 @@
/*
* CPU watchpoints
*
* Copyright (c) 2003 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "qemu/main-loop.h"
#include "qemu/error-report.h"
#include "exec/exec-all.h"
#include "exec/translate-all.h"
#include "sysemu/tcg.h"
#include "sysemu/replay.h"
#include "hw/core/tcg-cpu-ops.h"
#include "hw/core/cpu.h"
/*
* Return true if this watchpoint address matches the specified
* access (ie the address range covered by the watchpoint overlaps
* partially or completely with the address range covered by the
* access).
*/
static inline bool watchpoint_address_matches(CPUWatchpoint *wp,
vaddr addr, vaddr len)
{
/*
* We know the lengths are non-zero, but a little caution is
* required to avoid errors in the case where the range ends
* exactly at the top of the address space and so addr + len
* wraps round to zero.
*/
vaddr wpend = wp->vaddr + wp->len - 1;
vaddr addrend = addr + len - 1;
return !(addr > wpend || wp->vaddr > addrend);
}
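The overlap test above can be reproduced outside QEMU with plain integer types to see why it compares inclusive range ends (`ranges_overlap` is a hypothetical stand-in name, not a QEMU function):

```c
#include <stdint.h>

typedef uint64_t vaddr_t;   /* stand-in for QEMU's vaddr */

/*
 * Ranges are [start, start + len - 1].  Comparing inclusive ends
 * avoids the overflow that a naive "addr + len" bound would hit when
 * a range touches the very top of the address space.
 */
static int ranges_overlap(vaddr_t wp_addr, vaddr_t wp_len,
                          vaddr_t addr, vaddr_t len)
{
    vaddr_t wpend = wp_addr + wp_len - 1;
    vaddr_t addrend = addr + len - 1;

    return !(addr > wpend || wp_addr > addrend);
}
```

With this form, a watchpoint covering the last four bytes of a 64-bit space still matches an access to the final byte, even though adding the lengths would wrap to zero.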
/* Return flags for watchpoints that match addr + prot. */
int cpu_watchpoint_address_matches(CPUState *cpu, vaddr addr, vaddr len)
{
CPUWatchpoint *wp;
int ret = 0;
QTAILQ_FOREACH(wp, &cpu->watchpoints, entry) {
if (watchpoint_address_matches(wp, addr, len)) {
ret |= wp->flags;
}
}
return ret;
}
/* Generate a debug exception if a watchpoint has been hit. */
void cpu_check_watchpoint(CPUState *cpu, vaddr addr, vaddr len,
MemTxAttrs attrs, int flags, uintptr_t ra)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
CPUWatchpoint *wp;
assert(tcg_enabled());
if (cpu->watchpoint_hit) {
/*
* We re-entered the check after replacing the TB.
* Now raise the debug interrupt so that it will
* trigger after the current instruction.
*/
bql_lock();
cpu_interrupt(cpu, CPU_INTERRUPT_DEBUG);
bql_unlock();
return;
}
if (cc->tcg_ops->adjust_watchpoint_address) {
/* this is currently used only by ARM BE32 */
addr = cc->tcg_ops->adjust_watchpoint_address(cpu, addr, len);
}
assert((flags & ~BP_MEM_ACCESS) == 0);
QTAILQ_FOREACH(wp, &cpu->watchpoints, entry) {
int hit_flags = wp->flags & flags;
if (hit_flags && watchpoint_address_matches(wp, addr, len)) {
if (replay_running_debug()) {
/*
* replay_breakpoint reads icount.
* Force recompile to succeed, because icount may
* be read only at the end of the block.
*/
if (!cpu->neg.can_do_io) {
/* Force execution of one insn next time. */
cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
cpu_loop_exit_restore(cpu, ra);
}
/*
* Don't process the watchpoints when we are
* in a reverse debugging operation.
*/
replay_breakpoint();
return;
}
wp->flags |= hit_flags << BP_HIT_SHIFT;
wp->hitaddr = MAX(addr, wp->vaddr);
wp->hitattrs = attrs;
if (wp->flags & BP_CPU
&& cc->tcg_ops->debug_check_watchpoint
&& !cc->tcg_ops->debug_check_watchpoint(cpu, wp)) {
wp->flags &= ~BP_WATCHPOINT_HIT;
continue;
}
cpu->watchpoint_hit = wp;
mmap_lock();
/* This call also restores vCPU state */
tb_check_watchpoint(cpu, ra);
if (wp->flags & BP_STOP_BEFORE_ACCESS) {
cpu->exception_index = EXCP_DEBUG;
mmap_unlock();
cpu_loop_exit(cpu);
} else {
/* Force execution of one insn next time. */
cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
mmap_unlock();
cpu_loop_exit_noexc(cpu);
}
} else {
wp->flags &= ~BP_WATCHPOINT_HIT;
}
}
}

View File

@@ -15,7 +15,6 @@
 #include "hw/xen/xen_native.h"
 #include "hw/xen/xen-legacy-backend.h"
 #include "hw/xen/xen_pt.h"
-#include "hw/xen/xen_igd.h"
 #include "chardev/char.h"
 #include "qemu/accel.h"
 #include "sysemu/cpus.h"

View File

@@ -1683,7 +1683,7 @@ static const VMStateDescription vmstate_audio = {
     .version_id = 1,
     .minimum_version_id = 1,
     .needed = vmstate_audio_needed,
-    .fields = (const VMStateField[]) {
+    .fields = (VMStateField[]) {
         VMSTATE_END_OF_LIST()
     }
 };
@@ -1744,7 +1744,7 @@ static AudioState *audio_init(Audiodev *dev, Error **errp)
     if (driver) {
         done = !audio_driver_init(s, driver, dev, errp);
     } else {
-        error_setg(errp, "Unknown audio driver `%s'", drvname);
+        error_setg(errp, "Unknown audio driver `%s'\n", drvname);
     }
     if (!done) {
         goto out;
@@ -1758,15 +1758,12 @@ static AudioState *audio_init(Audiodev *dev, Error **errp)
             goto out;
         }
         s->dev = dev = e->dev;
-        QSIMPLEQ_REMOVE_HEAD(&default_audiodevs, next);
-        g_free(e);
         drvname = AudiodevDriver_str(dev->driver);
         driver = audio_driver_lookup(drvname);
         if (!audio_driver_init(s, driver, dev, NULL)) {
             break;
         }
-        qapi_free_Audiodev(dev);
-        s->dev = NULL;
+        QSIMPLEQ_REMOVE_HEAD(&default_audiodevs, next);
     }
 }

View File

@@ -44,6 +44,11 @@ typedef struct coreaudioVoiceOut {
     bool enabled;
 } coreaudioVoiceOut;
 
+#if !defined(MAC_OS_VERSION_12_0) \
+    || (MAC_OS_X_VERSION_MIN_REQUIRED < MAC_OS_VERSION_12_0)
+#define kAudioObjectPropertyElementMain kAudioObjectPropertyElementMaster
+#endif
+
 static const AudioObjectPropertyAddress voice_addr = {
     kAudioHardwarePropertyDefaultOutputDevice,
     kAudioObjectPropertyScopeGlobal,
@@ -294,7 +299,7 @@ COREAUDIO_WRAPPER_FUNC(write, size_t, (HWVoiceOut *hw, void *buf, size_t size),
 #undef COREAUDIO_WRAPPER_FUNC
 
 /*
- * callback to feed audiooutput buffer. called without BQL.
+ * callback to feed audiooutput buffer. called without iothread lock.
  * allowed to lock "buf_mutex", but disallowed to have any other locks.
  */
 static OSStatus audioDeviceIOProc(
@@ -533,7 +538,7 @@ static void update_device_playback_state(coreaudioVoiceOut *core)
     }
 }
 
-/* called without BQL. */
+/* called without iothread lock. */
 static OSStatus handle_voice_change(
     AudioObjectID in_object_id,
     UInt32 in_number_addresses,
@@ -542,7 +547,7 @@ static OSStatus handle_voice_change(
 {
     coreaudioVoiceOut *core = in_client_data;
 
-    bql_lock();
+    qemu_mutex_lock_iothread();
 
     if (core->outputDeviceID) {
         fini_out_device(core);
@@ -552,7 +557,7 @@ static OSStatus handle_voice_change(
         update_device_playback_state(core);
     }
 
-    bql_unlock();
+    qemu_mutex_unlock_iothread();
 
     return 0;
 }

View File

@@ -105,7 +105,7 @@ static size_t dbus_put_buffer_out(HWVoiceOut *hw, void *buf, size_t size)
     assert(buf == vo->buf + vo->buf_pos && vo->buf_pos + size <= vo->buf_size);
     vo->buf_pos += size;
 
-    trace_dbus_audio_put_buffer_out(vo->buf_pos, vo->buf_size);
+    trace_dbus_audio_put_buffer_out(size);
 
     if (vo->buf_pos < vo->buf_size) {
         return size;

View File

@@ -30,8 +30,7 @@ endforeach
 if dbus_display
   module_ss = ss.source_set()
-  module_ss.add(when: [gio, pixman],
-                if_true: [dbus_display1, files('dbusaudio.c')])
+  module_ss.add(when: [gio, pixman], if_true: files('dbusaudio.c'))
   audio_modules += {'dbus': module_ss}
 endif

View File

@@ -11,6 +11,7 @@
 #include "qemu/osdep.h"
 #include "qemu/module.h"
 #include "audio.h"
+#include <errno.h>
 #include "qemu/error-report.h"
 #include "qapi/error.h"
 #include <spa/param/audio/format-utils.h>

View File

@@ -15,7 +15,7 @@ oss_version(int version) "OSS version = 0x%x"
 # dbusaudio.c
 dbus_audio_register(const char *s, const char *dir) "sender = %s, dir = %s"
-dbus_audio_put_buffer_out(size_t pos, size_t size) "buf_pos = %zu, buf_size = %zu"
+dbus_audio_put_buffer_out(size_t len) "len = %zu"
 dbus_audio_read(size_t len) "len = %zu"
 
 # pwaudio.c

View File

@@ -1,9 +1 @@
 source tpm/Kconfig
-
-config IOMMUFD
-    bool
-    depends on VFIO
-
-config SPDM_SOCKET
-    bool
-    default y

View File

@@ -23,7 +23,6 @@
 #include "qemu/osdep.h"
 #include "sysemu/cryptodev.h"
-#include "qemu/error-report.h"
 #include "qapi/error.h"
 #include "standard-headers/linux/virtio_crypto.h"
 #include "crypto/cipher.h"
@@ -397,8 +396,8 @@ static int cryptodev_builtin_create_session(
     case VIRTIO_CRYPTO_HASH_CREATE_SESSION:
     case VIRTIO_CRYPTO_MAC_CREATE_SESSION:
     default:
-        error_report("Unsupported opcode :%" PRIu32 "",
-                     sess_info->op_code);
+        error_setg(&local_error, "Unsupported opcode :%" PRIu32 "",
+                   sess_info->op_code);
         return -VIRTIO_CRYPTO_NOTSUPP;
     }
@@ -428,9 +427,7 @@ static int cryptodev_builtin_close_session(
     CRYPTODEV_BACKEND_BUILTIN(backend);
     CryptoDevBackendBuiltinSession *session;
 
-    if (session_id >= MAX_NUM_SESSIONS || !builtin->sessions[session_id]) {
-        return -VIRTIO_CRYPTO_INVSESS;
-    }
+    assert(session_id < MAX_NUM_SESSIONS && builtin->sessions[session_id]);
 
     session = builtin->sessions[session_id];
     if (session->cipher) {
@@ -555,8 +552,8 @@ static int cryptodev_builtin_operation(
     if (op_info->session_id >= MAX_NUM_SESSIONS ||
         builtin->sessions[op_info->session_id] == NULL) {
-        error_report("Cannot find a valid session id: %" PRIu64 "",
-                     op_info->session_id);
+        error_setg(&local_error, "Cannot find a valid session id: %" PRIu64 "",
+                   op_info->session_id);
         return -VIRTIO_CRYPTO_INVSESS;
     }

View File

@@ -398,7 +398,6 @@ static void cryptodev_backend_set_ops(Object *obj, Visitor *v,
 static void
 cryptodev_backend_complete(UserCreatable *uc, Error **errp)
 {
-    ERRP_GUARD();
     CryptoDevBackend *backend = CRYPTODEV_BACKEND(uc);
     CryptoDevBackendClass *bc = CRYPTODEV_BACKEND_GET_CLASS(uc);
     uint32_t services;
@@ -407,20 +406,11 @@ cryptodev_backend_complete(UserCreatable *uc, Error **errp)
     QTAILQ_INIT(&backend->opinfos);
     value = backend->tc.buckets[THROTTLE_OPS_TOTAL].avg;
     cryptodev_backend_set_throttle(backend, THROTTLE_OPS_TOTAL, value, errp);
-    if (*errp) {
-        return;
-    }
     value = backend->tc.buckets[THROTTLE_BPS_TOTAL].avg;
     cryptodev_backend_set_throttle(backend, THROTTLE_BPS_TOTAL, value, errp);
-    if (*errp) {
-        return;
-    }
     if (bc->init) {
         bc->init(backend, errp);
-        if (*errp) {
-            return;
-        }
     }
 
     services = backend->conf.crypto_services;

View File

@@ -393,7 +393,7 @@ static const VMStateDescription dbus_vmstate = {
     .version_id = 0,
     .pre_save = dbus_vmstate_pre_save,
     .post_load = dbus_vmstate_post_load,
-    .fields = (const VMStateField[]) {
+    .fields = (VMStateField[]) {
         VMSTATE_UINT32(data_size, DBusVMState),
         VMSTATE_VBUFFER_ALLOC_UINT32(data, DBusVMState, 0, 0, data_size),
         VMSTATE_END_OF_LIST()

View File

@@ -1,33 +0,0 @@
/*
* Host IOMMU device abstract
*
* Copyright (C) 2024 Intel Corporation.
*
* Authors: Zhenzhong Duan <zhenzhong.duan@intel.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/host_iommu_device.h"
OBJECT_DEFINE_ABSTRACT_TYPE(HostIOMMUDevice,
host_iommu_device,
HOST_IOMMU_DEVICE,
OBJECT)
static void host_iommu_device_class_init(ObjectClass *oc, void *data)
{
}
static void host_iommu_device_init(Object *obj)
{
}
static void host_iommu_device_finalize(Object *obj)
{
HostIOMMUDevice *hiod = HOST_IOMMU_DEVICE(obj);
g_free(hiod->name);
}

View File

@@ -17,28 +17,31 @@
 #include "sysemu/hostmem.h"
 #include "hw/i386/hostmem-epc.h"
 
-static bool
+static void
 sgx_epc_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
 {
-    g_autofree char *name = NULL;
     uint32_t ram_flags;
+    char *name;
     int fd;
 
     if (!backend->size) {
         error_setg(errp, "can't create backend with size 0");
-        return false;
+        return;
     }
 
-    fd = qemu_open("/dev/sgx_vepc", O_RDWR, errp);
+    fd = qemu_open_old("/dev/sgx_vepc", O_RDWR);
     if (fd < 0) {
-        return false;
+        error_setg_errno(errp, errno,
+                         "failed to open /dev/sgx_vepc to alloc SGX EPC");
+        return;
     }
 
-    backend->aligned = true;
     name = object_get_canonical_path(OBJECT(backend));
     ram_flags = (backend->share ? RAM_SHARED : 0) | RAM_PROTECTED;
-    return memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend), name,
-                                          backend->size, ram_flags, fd, 0, errp);
+    memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend),
+                                   name, backend->size, ram_flags,
+                                   fd, 0, errp);
+    g_free(name);
 }
 
 static void sgx_epc_backend_instance_init(Object *obj)
View File

@@ -36,25 +36,24 @@ struct HostMemoryBackendFile {
     OnOffAuto rom;
 };
 
-static bool
+static void
 file_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
 {
 #ifndef CONFIG_POSIX
     error_setg(errp, "backend '%s' not supported on this host",
                object_get_typename(OBJECT(backend)));
-    return false;
 #else
     HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(backend);
-    g_autofree gchar *name = NULL;
     uint32_t ram_flags;
+    gchar *name;
 
     if (!backend->size) {
         error_setg(errp, "can't create backend with size 0");
-        return false;
+        return;
     }
     if (!fb->mem_path) {
         error_setg(errp, "mem-path property not set");
-        return false;
+        return;
     }
 
     switch (fb->rom) {
@@ -66,32 +65,31 @@ file_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
         if (!fb->readonly) {
             error_setg(errp, "property 'rom' = 'on' is not supported with"
                        " 'readonly' = 'off'");
-            return false;
+            return;
         }
         break;
     case ON_OFF_AUTO_OFF:
         if (fb->readonly && backend->share) {
             error_setg(errp, "property 'rom' = 'off' is incompatible with"
                        " 'readonly' = 'on' and 'share' = 'on'");
-            return false;
+            return;
         }
         break;
     default:
-        g_assert_not_reached();
+        assert(false);
     }
 
-    backend->aligned = true;
     name = host_memory_backend_get_name(backend);
     ram_flags = backend->share ? RAM_SHARED : 0;
     ram_flags |= fb->readonly ? RAM_READONLY_FD : 0;
     ram_flags |= fb->rom == ON_OFF_AUTO_ON ? RAM_READONLY : 0;
     ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
-    ram_flags |= backend->guest_memfd ? RAM_GUEST_MEMFD : 0;
     ram_flags |= fb->is_pmem ? RAM_PMEM : 0;
     ram_flags |= RAM_NAMED_FILE;
-    return memory_region_init_ram_from_file(&backend->mr, OBJECT(backend), name,
-                                            backend->size, fb->align, ram_flags,
-                                            fb->mem_path, fb->offset, errp);
+    memory_region_init_ram_from_file(&backend->mr, OBJECT(backend), name,
+                                     backend->size, fb->align, ram_flags,
+                                     fb->mem_path, fb->offset, errp);
+    g_free(name);
 #endif
 }

View File

@@ -31,17 +31,17 @@ struct HostMemoryBackendMemfd {
     bool seal;
 };
 
-static bool
+static void
 memfd_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
 {
     HostMemoryBackendMemfd *m = MEMORY_BACKEND_MEMFD(backend);
-    g_autofree char *name = NULL;
     uint32_t ram_flags;
+    char *name;
     int fd;
 
     if (!backend->size) {
         error_setg(errp, "can't create backend with size 0");
-        return false;
+        return;
     }
 
     fd = qemu_memfd_create(TYPE_MEMORY_BACKEND_MEMFD, backend->size,
@@ -49,16 +49,15 @@ memfd_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
                            F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_SEAL : 0,
                            errp);
     if (fd == -1) {
-        return false;
+        return;
     }
 
-    backend->aligned = true;
     name = host_memory_backend_get_name(backend);
     ram_flags = backend->share ? RAM_SHARED : 0;
     ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
-    ram_flags |= backend->guest_memfd ? RAM_GUEST_MEMFD : 0;
-    return memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend), name,
-                                          backend->size, ram_flags, fd, 0, errp);
+    memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend), name,
+                                   backend->size, ram_flags, fd, 0, errp);
+    g_free(name);
 }
 
 static bool


@@ -16,24 +16,23 @@
 #include "qemu/module.h"
 #include "qom/object_interfaces.h"
 
-static bool
+static void
 ram_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
 {
-    g_autofree char *name = NULL;
     uint32_t ram_flags;
+    char *name;
 
     if (!backend->size) {
         error_setg(errp, "can't create backend with size 0");
-        return false;
+        return;
     }
 
     name = host_memory_backend_get_name(backend);
     ram_flags = backend->share ? RAM_SHARED : 0;
     ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
-    ram_flags |= backend->guest_memfd ? RAM_GUEST_MEMFD : 0;
-    return memory_region_init_ram_flags_nomigrate(&backend->mr, OBJECT(backend),
-                                                  name, backend->size,
-                                                  ram_flags, errp);
+    memory_region_init_ram_flags_nomigrate(&backend->mr, OBJECT(backend), name,
+                                           backend->size, ram_flags, errp);
+    g_free(name);
 }
 
 static void


@@ -1,123 +0,0 @@
/*
* QEMU host POSIX shared memory object backend
*
* Copyright (C) 2024 Red Hat Inc
*
* Authors:
* Stefano Garzarella <sgarzare@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/hostmem.h"
#include "qapi/error.h"
#define TYPE_MEMORY_BACKEND_SHM "memory-backend-shm"
OBJECT_DECLARE_SIMPLE_TYPE(HostMemoryBackendShm, MEMORY_BACKEND_SHM)
struct HostMemoryBackendShm {
HostMemoryBackend parent_obj;
};
static bool
shm_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
{
g_autoptr(GString) shm_name = g_string_new(NULL);
g_autofree char *backend_name = NULL;
uint32_t ram_flags;
int fd, oflag;
mode_t mode;
if (!backend->size) {
error_setg(errp, "can't create shm backend with size 0");
return false;
}
if (!backend->share) {
error_setg(errp, "can't create shm backend with `share=off`");
return false;
}
/*
* Let's use `mode = 0` because we don't want other processes to open our
* memory unless we share the file descriptor with them.
*/
mode = 0;
oflag = O_RDWR | O_CREAT | O_EXCL;
backend_name = host_memory_backend_get_name(backend);
/*
* Some operating systems allow creating anonymous POSIX shared memory
* objects (e.g. FreeBSD provides the SHM_ANON constant), but this is not
* defined by POSIX, so let's create a unique name.
*
* From Linux's shm_open(3) man-page:
* For portable use, a shared memory object should be identified
* by a name of the form /somename;"
*/
g_string_printf(shm_name, "/qemu-" FMT_pid "-shm-%s", getpid(),
backend_name);
fd = shm_open(shm_name->str, oflag, mode);
if (fd < 0) {
error_setg_errno(errp, errno,
"failed to create POSIX shared memory");
return false;
}
/*
* We have the file descriptor, so we no longer need to expose the
* POSIX shared memory object. However it will remain allocated as long as
* there are file descriptors pointing to it.
*/
shm_unlink(shm_name->str);
if (ftruncate(fd, backend->size) == -1) {
error_setg_errno(errp, errno,
"failed to resize POSIX shared memory to %" PRIu64,
backend->size);
close(fd);
return false;
}
ram_flags = RAM_SHARED;
ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
return memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend),
backend_name, backend->size,
ram_flags, fd, 0, errp);
}
static void
shm_backend_instance_init(Object *obj)
{
HostMemoryBackendShm *m = MEMORY_BACKEND_SHM(obj);
MEMORY_BACKEND(m)->share = true;
}
static void
shm_backend_class_init(ObjectClass *oc, void *data)
{
HostMemoryBackendClass *bc = MEMORY_BACKEND_CLASS(oc);
bc->alloc = shm_backend_memory_alloc;
}
static const TypeInfo shm_backend_info = {
.name = TYPE_MEMORY_BACKEND_SHM,
.parent = TYPE_MEMORY_BACKEND,
.instance_init = shm_backend_instance_init,
.class_init = shm_backend_class_init,
.instance_size = sizeof(HostMemoryBackendShm),
};
static void register_types(void)
{
type_register_static(&shm_backend_info);
}
type_init(register_types);


@@ -20,8 +20,6 @@
 #include "qom/object_interfaces.h"
 #include "qemu/mmap-alloc.h"
 #include "qemu/madvise.h"
-#include "qemu/cutils.h"
-#include "hw/qdev-core.h"
 
 #ifdef CONFIG_NUMA
 #include <numaif.h>
@@ -170,24 +168,19 @@ static void host_memory_backend_set_merge(Object *obj, bool value, Error **errp)
 {
     HostMemoryBackend *backend = MEMORY_BACKEND(obj);
 
-    if (QEMU_MADV_MERGEABLE == QEMU_MADV_INVALID) {
-        if (value) {
-            error_setg(errp, "Memory merging is not supported on this host");
-        }
-        assert(!backend->merge);
+    if (!host_memory_backend_mr_inited(backend)) {
+        backend->merge = value;
         return;
     }
 
-    if (!host_memory_backend_mr_inited(backend) &&
-        value != backend->merge) {
+    if (value != backend->merge) {
         void *ptr = memory_region_get_ram_ptr(&backend->mr);
         uint64_t sz = memory_region_size(&backend->mr);
 
         qemu_madvise(ptr, sz,
                      value ? QEMU_MADV_MERGEABLE : QEMU_MADV_UNMERGEABLE);
+        backend->merge = value;
     }
-
-    backend->merge = value;
 }
 
 static bool host_memory_backend_get_dump(Object *obj, Error **errp)
@@ -201,24 +194,19 @@ static void host_memory_backend_set_dump(Object *obj, bool value, Error **errp)
 {
     HostMemoryBackend *backend = MEMORY_BACKEND(obj);
 
-    if (QEMU_MADV_DONTDUMP == QEMU_MADV_INVALID) {
-        if (!value) {
-            error_setg(errp, "Dumping guest memory cannot be disabled on this host");
-        }
-        assert(backend->dump);
+    if (!host_memory_backend_mr_inited(backend)) {
+        backend->dump = value;
         return;
     }
 
-    if (host_memory_backend_mr_inited(backend) &&
-        value != backend->dump) {
+    if (value != backend->dump) {
         void *ptr = memory_region_get_ram_ptr(&backend->mr);
         uint64_t sz = memory_region_size(&backend->mr);
 
         qemu_madvise(ptr, sz,
                      value ? QEMU_MADV_DODUMP : QEMU_MADV_DONTDUMP);
+        backend->dump = value;
     }
-
-    backend->dump = value;
 }
 
 static bool host_memory_backend_get_prealloc(Object *obj, Error **errp)
@@ -231,6 +219,7 @@ static bool host_memory_backend_get_prealloc(Object *obj, Error **errp)
 static void host_memory_backend_set_prealloc(Object *obj, bool value,
                                              Error **errp)
 {
+    Error *local_err = NULL;
     HostMemoryBackend *backend = MEMORY_BACKEND(obj);
 
     if (!backend->reserve && value) {
@@ -248,8 +237,10 @@ static void host_memory_backend_set_prealloc(Object *obj, bool value,
         void *ptr = memory_region_get_ram_ptr(&backend->mr);
         uint64_t sz = memory_region_size(&backend->mr);
 
-        if (!qemu_prealloc_mem(fd, ptr, sz, backend->prealloc_threads,
-                               backend->prealloc_context, false, errp)) {
+        qemu_prealloc_mem(fd, ptr, sz, backend->prealloc_threads,
+                          backend->prealloc_context, &local_err);
+        if (local_err) {
+            error_propagate(errp, local_err);
             return;
         }
         backend->prealloc = true;
@@ -288,7 +279,6 @@ static void host_memory_backend_init(Object *obj)
     /* TODO: convert access to globals to compat properties */
     backend->merge = machine_mem_merge(machine);
     backend->dump = machine_dump_guest_core(machine);
-    backend->guest_memfd = machine_require_guest_memfd(machine);
     backend->reserve = true;
     backend->prealloc_threads = machine->smp.cpus;
 }
@@ -334,101 +324,91 @@ host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
 {
     HostMemoryBackend *backend = MEMORY_BACKEND(uc);
     HostMemoryBackendClass *bc = MEMORY_BACKEND_GET_CLASS(uc);
+    Error *local_err = NULL;
     void *ptr;
     uint64_t sz;
-    size_t pagesize;
-    bool async = !phase_check(PHASE_LATE_BACKENDS_CREATED);
 
-    if (!bc->alloc) {
-        return;
-    }
-    if (!bc->alloc(backend, errp)) {
-        return;
-    }
-
-    ptr = memory_region_get_ram_ptr(&backend->mr);
-    sz = memory_region_size(&backend->mr);
-    pagesize = qemu_ram_pagesize(backend->mr.ram_block);
-
-    if (backend->aligned && !QEMU_IS_ALIGNED(sz, pagesize)) {
-        g_autofree char *pagesize_str = size_to_str(pagesize);
-        error_setg(errp, "backend '%s' memory size must be multiple of %s",
-                   object_get_typename(OBJECT(uc)), pagesize_str);
-        return;
-    }
-
-    if (backend->merge) {
-        qemu_madvise(ptr, sz, QEMU_MADV_MERGEABLE);
-    }
-    if (!backend->dump) {
-        qemu_madvise(ptr, sz, QEMU_MADV_DONTDUMP);
-    }
-#ifdef CONFIG_NUMA
-    unsigned long lastbit = find_last_bit(backend->host_nodes, MAX_NODES);
-    /* lastbit == MAX_NODES means maxnode = 0 */
-    unsigned long maxnode = (lastbit + 1) % (MAX_NODES + 1);
-    /*
-     * Ensure policy won't be ignored in case memory is preallocated
-     * before mbind(). note: MPOL_MF_STRICT is ignored on hugepages so
-     * this doesn't catch hugepage case.
-     */
-    unsigned flags = MPOL_MF_STRICT | MPOL_MF_MOVE;
-    int mode = backend->policy;
-
-    /* check for invalid host-nodes and policies and give more verbose
-     * error messages than mbind(). */
-    if (maxnode && backend->policy == MPOL_DEFAULT) {
-        error_setg(errp, "host-nodes must be empty for policy default,"
-                   " or you should explicitly specify a policy other"
-                   " than default");
-        return;
-    } else if (maxnode == 0 && backend->policy != MPOL_DEFAULT) {
-        error_setg(errp, "host-nodes must be set for policy %s",
-                   HostMemPolicy_str(backend->policy));
-        return;
-    }
-
-    /*
-     * We can have up to MAX_NODES nodes, but we need to pass maxnode+1
-     * as argument to mbind() due to an old Linux bug (feature?) which
-     * cuts off the last specified node. This means backend->host_nodes
-     * must have MAX_NODES+1 bits available.
-     */
-    assert(sizeof(backend->host_nodes) >=
-           BITS_TO_LONGS(MAX_NODES + 1) * sizeof(unsigned long));
-    assert(maxnode <= MAX_NODES);
-
-#ifdef HAVE_NUMA_HAS_PREFERRED_MANY
-    if (mode == MPOL_PREFERRED && numa_has_preferred_many() > 0) {
-        /*
-         * Replace with MPOL_PREFERRED_MANY otherwise the mbind() below
-         * silently picks the first node.
-         */
-        mode = MPOL_PREFERRED_MANY;
-    }
-#endif
-
-    if (maxnode &&
-        mbind(ptr, sz, mode, backend->host_nodes, maxnode + 1, flags)) {
-        if (backend->policy != MPOL_DEFAULT || errno != ENOSYS) {
-            error_setg_errno(errp, errno,
-                             "cannot bind memory to host NUMA nodes");
-            return;
-        }
-    }
-#endif
-    /*
-     * Preallocate memory after the NUMA policy has been instantiated.
-     * This is necessary to guarantee memory is allocated with
-     * specified NUMA policy in place.
-     */
-    if (backend->prealloc && !qemu_prealloc_mem(memory_region_get_fd(&backend->mr),
-                                                ptr, sz,
-                                                backend->prealloc_threads,
-                                                backend->prealloc_context,
-                                                async, errp)) {
-        return;
-    }
+    if (bc->alloc) {
+        bc->alloc(backend, &local_err);
+        if (local_err) {
+            goto out;
+        }
+
+        ptr = memory_region_get_ram_ptr(&backend->mr);
+        sz = memory_region_size(&backend->mr);
+
+        if (backend->merge) {
+            qemu_madvise(ptr, sz, QEMU_MADV_MERGEABLE);
+        }
+        if (!backend->dump) {
+            qemu_madvise(ptr, sz, QEMU_MADV_DONTDUMP);
+        }
+#ifdef CONFIG_NUMA
+        unsigned long lastbit = find_last_bit(backend->host_nodes, MAX_NODES);
+        /* lastbit == MAX_NODES means maxnode = 0 */
+        unsigned long maxnode = (lastbit + 1) % (MAX_NODES + 1);
+        /* ensure policy won't be ignored in case memory is preallocated
+         * before mbind(). note: MPOL_MF_STRICT is ignored on hugepages so
+         * this doesn't catch hugepage case. */
+        unsigned flags = MPOL_MF_STRICT | MPOL_MF_MOVE;
+        int mode = backend->policy;
+
+        /* check for invalid host-nodes and policies and give more verbose
+         * error messages than mbind(). */
+        if (maxnode && backend->policy == MPOL_DEFAULT) {
+            error_setg(errp, "host-nodes must be empty for policy default,"
+                       " or you should explicitly specify a policy other"
+                       " than default");
+            return;
+        } else if (maxnode == 0 && backend->policy != MPOL_DEFAULT) {
+            error_setg(errp, "host-nodes must be set for policy %s",
+                       HostMemPolicy_str(backend->policy));
+            return;
+        }
+
+        /* We can have up to MAX_NODES nodes, but we need to pass maxnode+1
+         * as argument to mbind() due to an old Linux bug (feature?) which
+         * cuts off the last specified node. This means backend->host_nodes
+         * must have MAX_NODES+1 bits available.
+         */
+        assert(sizeof(backend->host_nodes) >=
+               BITS_TO_LONGS(MAX_NODES + 1) * sizeof(unsigned long));
+        assert(maxnode <= MAX_NODES);
+
+#ifdef HAVE_NUMA_HAS_PREFERRED_MANY
+        if (mode == MPOL_PREFERRED && numa_has_preferred_many() > 0) {
+            /*
+             * Replace with MPOL_PREFERRED_MANY otherwise the mbind() below
+             * silently picks the first node.
+             */
+            mode = MPOL_PREFERRED_MANY;
+        }
+#endif
+
+        if (maxnode &&
+            mbind(ptr, sz, mode, backend->host_nodes, maxnode + 1, flags)) {
+            if (backend->policy != MPOL_DEFAULT || errno != ENOSYS) {
+                error_setg_errno(errp, errno,
+                                 "cannot bind memory to host NUMA nodes");
+                return;
+            }
+        }
+#endif
+        /* Preallocate memory after the NUMA policy has been instantiated.
+         * This is necessary to guarantee memory is allocated with
+         * specified NUMA policy in place.
+         */
+        if (backend->prealloc) {
+            qemu_prealloc_mem(memory_region_get_fd(&backend->mr), ptr, sz,
+                              backend->prealloc_threads,
+                              backend->prealloc_context, &local_err);
+            if (local_err) {
+                goto out;
+            }
+        }
+    }
+out:
+    error_propagate(errp, local_err);
 }
 
 static bool


@@ -1,360 +0,0 @@
/*
* iommufd container backend
*
* Copyright (C) 2023 Intel Corporation.
* Copyright Red Hat, Inc. 2023
*
* Authors: Yi Liu <yi.l.liu@intel.com>
* Eric Auger <eric.auger@redhat.com>
*
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#include "qemu/osdep.h"
#include "sysemu/iommufd.h"
#include "qapi/error.h"
#include "qemu/module.h"
#include "qom/object_interfaces.h"
#include "qemu/error-report.h"
#include "monitor/monitor.h"
#include "trace.h"
#include "hw/vfio/vfio-common.h"
#include <sys/ioctl.h>
#include <linux/iommufd.h>
static void iommufd_backend_init(Object *obj)
{
IOMMUFDBackend *be = IOMMUFD_BACKEND(obj);
be->fd = -1;
be->users = 0;
be->owned = true;
}
static void iommufd_backend_finalize(Object *obj)
{
IOMMUFDBackend *be = IOMMUFD_BACKEND(obj);
if (be->owned) {
close(be->fd);
be->fd = -1;
}
}
static void iommufd_backend_set_fd(Object *obj, const char *str, Error **errp)
{
ERRP_GUARD();
IOMMUFDBackend *be = IOMMUFD_BACKEND(obj);
int fd = -1;
fd = monitor_fd_param(monitor_cur(), str, errp);
if (fd == -1) {
error_prepend(errp, "Could not parse remote object fd %s:", str);
return;
}
be->fd = fd;
be->owned = false;
trace_iommu_backend_set_fd(be->fd);
}
static bool iommufd_backend_can_be_deleted(UserCreatable *uc)
{
IOMMUFDBackend *be = IOMMUFD_BACKEND(uc);
return !be->users;
}
static void iommufd_backend_class_init(ObjectClass *oc, void *data)
{
UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
ucc->can_be_deleted = iommufd_backend_can_be_deleted;
object_class_property_add_str(oc, "fd", NULL, iommufd_backend_set_fd);
}
bool iommufd_backend_connect(IOMMUFDBackend *be, Error **errp)
{
int fd;
if (be->owned && !be->users) {
fd = qemu_open("/dev/iommu", O_RDWR, errp);
if (fd < 0) {
return false;
}
be->fd = fd;
}
be->users++;
trace_iommufd_backend_connect(be->fd, be->owned, be->users);
return true;
}
void iommufd_backend_disconnect(IOMMUFDBackend *be)
{
if (!be->users) {
goto out;
}
be->users--;
if (!be->users && be->owned) {
close(be->fd);
be->fd = -1;
}
out:
trace_iommufd_backend_disconnect(be->fd, be->users);
}
bool iommufd_backend_alloc_ioas(IOMMUFDBackend *be, uint32_t *ioas_id,
Error **errp)
{
int fd = be->fd;
struct iommu_ioas_alloc alloc_data = {
.size = sizeof(alloc_data),
.flags = 0,
};
if (ioctl(fd, IOMMU_IOAS_ALLOC, &alloc_data)) {
error_setg_errno(errp, errno, "Failed to allocate ioas");
return false;
}
*ioas_id = alloc_data.out_ioas_id;
trace_iommufd_backend_alloc_ioas(fd, *ioas_id);
return true;
}
void iommufd_backend_free_id(IOMMUFDBackend *be, uint32_t id)
{
int ret, fd = be->fd;
struct iommu_destroy des = {
.size = sizeof(des),
.id = id,
};
ret = ioctl(fd, IOMMU_DESTROY, &des);
trace_iommufd_backend_free_id(fd, id, ret);
if (ret) {
error_report("Failed to free id: %u %m", id);
}
}
int iommufd_backend_map_dma(IOMMUFDBackend *be, uint32_t ioas_id, hwaddr iova,
ram_addr_t size, void *vaddr, bool readonly)
{
int ret, fd = be->fd;
struct iommu_ioas_map map = {
.size = sizeof(map),
.flags = IOMMU_IOAS_MAP_READABLE |
IOMMU_IOAS_MAP_FIXED_IOVA,
.ioas_id = ioas_id,
.__reserved = 0,
.user_va = (uintptr_t)vaddr,
.iova = iova,
.length = size,
};
if (!readonly) {
map.flags |= IOMMU_IOAS_MAP_WRITEABLE;
}
ret = ioctl(fd, IOMMU_IOAS_MAP, &map);
trace_iommufd_backend_map_dma(fd, ioas_id, iova, size,
vaddr, readonly, ret);
if (ret) {
ret = -errno;
/* TODO: Not support mapping hardware PCI BAR region for now. */
if (errno == EFAULT) {
warn_report("IOMMU_IOAS_MAP failed: %m, PCI BAR?");
} else {
error_report("IOMMU_IOAS_MAP failed: %m");
}
}
return ret;
}
int iommufd_backend_unmap_dma(IOMMUFDBackend *be, uint32_t ioas_id,
hwaddr iova, ram_addr_t size)
{
int ret, fd = be->fd;
struct iommu_ioas_unmap unmap = {
.size = sizeof(unmap),
.ioas_id = ioas_id,
.iova = iova,
.length = size,
};
ret = ioctl(fd, IOMMU_IOAS_UNMAP, &unmap);
/*
* IOMMUFD takes mapping as some kind of object, unmapping
* nonexistent mapping is treated as deleting a nonexistent
* object and return ENOENT. This is different from legacy
* backend which allows it. vIOMMU may trigger a lot of
* redundant unmapping, to avoid flush the log, treat them
* as success for IOMMUFD just like legacy backend.
*/
if (ret && errno == ENOENT) {
trace_iommufd_backend_unmap_dma_non_exist(fd, ioas_id, iova, size, ret);
ret = 0;
} else {
trace_iommufd_backend_unmap_dma(fd, ioas_id, iova, size, ret);
}
if (ret) {
ret = -errno;
error_report("IOMMU_IOAS_UNMAP failed: %m");
}
return ret;
}
bool iommufd_backend_alloc_hwpt(IOMMUFDBackend *be, uint32_t dev_id,
uint32_t pt_id, uint32_t flags,
uint32_t data_type, uint32_t data_len,
void *data_ptr, uint32_t *out_hwpt,
Error **errp)
{
int ret, fd = be->fd;
struct iommu_hwpt_alloc alloc_hwpt = {
.size = sizeof(struct iommu_hwpt_alloc),
.flags = flags,
.dev_id = dev_id,
.pt_id = pt_id,
.data_type = data_type,
.data_len = data_len,
.data_uptr = (uintptr_t)data_ptr,
};
ret = ioctl(fd, IOMMU_HWPT_ALLOC, &alloc_hwpt);
trace_iommufd_backend_alloc_hwpt(fd, dev_id, pt_id, flags, data_type,
data_len, (uintptr_t)data_ptr,
alloc_hwpt.out_hwpt_id, ret);
if (ret) {
error_setg_errno(errp, errno, "Failed to allocate hwpt");
return false;
}
*out_hwpt = alloc_hwpt.out_hwpt_id;
return true;
}
bool iommufd_backend_set_dirty_tracking(IOMMUFDBackend *be,
uint32_t hwpt_id, bool start,
Error **errp)
{
int ret;
struct iommu_hwpt_set_dirty_tracking set_dirty = {
.size = sizeof(set_dirty),
.hwpt_id = hwpt_id,
.flags = start ? IOMMU_HWPT_DIRTY_TRACKING_ENABLE : 0,
};
ret = ioctl(be->fd, IOMMU_HWPT_SET_DIRTY_TRACKING, &set_dirty);
trace_iommufd_backend_set_dirty(be->fd, hwpt_id, start, ret ? errno : 0);
if (ret) {
error_setg_errno(errp, errno,
"IOMMU_HWPT_SET_DIRTY_TRACKING(hwpt_id %u) failed",
hwpt_id);
return false;
}
return true;
}
bool iommufd_backend_get_dirty_bitmap(IOMMUFDBackend *be,
uint32_t hwpt_id,
uint64_t iova, ram_addr_t size,
uint64_t page_size, uint64_t *data,
Error **errp)
{
int ret;
struct iommu_hwpt_get_dirty_bitmap get_dirty_bitmap = {
.size = sizeof(get_dirty_bitmap),
.hwpt_id = hwpt_id,
.iova = iova,
.length = size,
.page_size = page_size,
.data = (uintptr_t)data,
};
ret = ioctl(be->fd, IOMMU_HWPT_GET_DIRTY_BITMAP, &get_dirty_bitmap);
trace_iommufd_backend_get_dirty_bitmap(be->fd, hwpt_id, iova, size,
page_size, ret ? errno : 0);
if (ret) {
error_setg_errno(errp, errno,
"IOMMU_HWPT_GET_DIRTY_BITMAP (iova: 0x%"HWADDR_PRIx
" size: 0x"RAM_ADDR_FMT") failed", iova, size);
return false;
}
return true;
}
bool iommufd_backend_get_device_info(IOMMUFDBackend *be, uint32_t devid,
uint32_t *type, void *data, uint32_t len,
uint64_t *caps, Error **errp)
{
struct iommu_hw_info info = {
.size = sizeof(info),
.dev_id = devid,
.data_len = len,
.data_uptr = (uintptr_t)data,
};
if (ioctl(be->fd, IOMMU_GET_HW_INFO, &info)) {
error_setg_errno(errp, errno, "Failed to get hardware info");
return false;
}
g_assert(type);
*type = info.out_data_type;
g_assert(caps);
*caps = info.out_capabilities;
return true;
}
static int hiod_iommufd_get_cap(HostIOMMUDevice *hiod, int cap, Error **errp)
{
HostIOMMUDeviceCaps *caps = &hiod->caps;
switch (cap) {
case HOST_IOMMU_DEVICE_CAP_IOMMU_TYPE:
return caps->type;
case HOST_IOMMU_DEVICE_CAP_AW_BITS:
return vfio_device_get_aw_bits(hiod->agent);
default:
error_setg(errp, "%s: unsupported capability %x", hiod->name, cap);
return -EINVAL;
}
}
static void hiod_iommufd_class_init(ObjectClass *oc, void *data)
{
HostIOMMUDeviceClass *hioc = HOST_IOMMU_DEVICE_CLASS(oc);
hioc->get_cap = hiod_iommufd_get_cap;
};
static const TypeInfo types[] = {
{
.name = TYPE_IOMMUFD_BACKEND,
.parent = TYPE_OBJECT,
.instance_size = sizeof(IOMMUFDBackend),
.instance_init = iommufd_backend_init,
.instance_finalize = iommufd_backend_finalize,
.class_size = sizeof(IOMMUFDBackendClass),
.class_init = iommufd_backend_class_init,
.interfaces = (InterfaceInfo[]) {
{ TYPE_USER_CREATABLE },
{ }
}
}, {
.name = TYPE_HOST_IOMMU_DEVICE_IOMMUFD,
.parent = TYPE_HOST_IOMMU_DEVICE,
.class_init = hiod_iommufd_class_init,
.abstract = true,
}
};
DEFINE_TYPES(types)


@@ -10,15 +10,9 @@ system_ss.add([files(
   'confidential-guest-support.c',
 ), numa])
 
-if host_os != 'windows'
-  system_ss.add(files('rng-random.c'))
-  system_ss.add(files('hostmem-file.c'))
-  system_ss.add([files('hostmem-shm.c'), rt])
-endif
-if host_os == 'linux'
-  system_ss.add(files('hostmem-memfd.c'))
-  system_ss.add(files('host_iommu_device.c'))
-endif
+system_ss.add(when: 'CONFIG_POSIX', if_true: files('rng-random.c'))
+system_ss.add(when: 'CONFIG_POSIX', if_true: files('hostmem-file.c'))
+system_ss.add(when: 'CONFIG_LINUX', if_true: files('hostmem-memfd.c'))
 
 if keyutils.found()
   system_ss.add(keyutils, files('cryptodev-lkcf.c'))
 endif
@@ -26,13 +20,10 @@ if have_vhost_user
   system_ss.add(when: 'CONFIG_VIRTIO', if_true: files('vhost-user.c'))
 endif
 system_ss.add(when: 'CONFIG_VIRTIO_CRYPTO', if_true: files('cryptodev-vhost.c'))
-system_ss.add(when: 'CONFIG_IOMMUFD', if_true: files('iommufd.c'))
 if have_vhost_user_crypto
   system_ss.add(when: 'CONFIG_VIRTIO_CRYPTO', if_true: files('cryptodev-vhost-user.c'))
 endif
 system_ss.add(when: gio, if_true: files('dbus-vmstate.c'))
 system_ss.add(when: 'CONFIG_SGX', if_true: files('hostmem-epc.c'))
-system_ss.add(when: 'CONFIG_SPDM_SOCKET', if_true: files('spdm-socket.c'))
 
 subdir('tpm')


@@ -75,7 +75,10 @@ static void rng_random_opened(RngBackend *b, Error **errp)
         error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
                    "filename", "a valid filename");
     } else {
-        s->fd = qemu_open(s->filename, O_RDONLY | O_NONBLOCK, errp);
+        s->fd = qemu_open_old(s->filename, O_RDONLY | O_NONBLOCK);
+        if (s->fd == -1) {
+            error_setg_file_open(errp, errno, s->filename);
+        }
     }
 }


@@ -1,216 +0,0 @@
/* SPDX-License-Identifier: BSD-3-Clause */
/*
* QEMU SPDM socket support
*
* This is based on:
* https://github.com/DMTF/spdm-emu/blob/07c0a838bcc1c6207c656ac75885c0603e344b6f/spdm_emu/spdm_emu_common/command.c
* but has been re-written to match QEMU style
*
* Copyright (c) 2021, DMTF. All rights reserved.
* Copyright (c) 2023. Western Digital Corporation or its affiliates.
*/
#include "qemu/osdep.h"
#include "sysemu/spdm-socket.h"
#include "qapi/error.h"
static bool read_bytes(const int socket, uint8_t *buffer,
size_t number_of_bytes)
{
ssize_t number_received = 0;
ssize_t result;
while (number_received < number_of_bytes) {
result = recv(socket, buffer + number_received,
number_of_bytes - number_received, 0);
if (result <= 0) {
return false;
}
number_received += result;
}
return true;
}
static bool read_data32(const int socket, uint32_t *data)
{
bool result;
result = read_bytes(socket, (uint8_t *)data, sizeof(uint32_t));
if (!result) {
return result;
}
*data = ntohl(*data);
return true;
}
static bool read_multiple_bytes(const int socket, uint8_t *buffer,
uint32_t *bytes_received,
uint32_t max_buffer_length)
{
uint32_t length;
bool result;
result = read_data32(socket, &length);
if (!result) {
return result;
}
if (length > max_buffer_length) {
return false;
}
if (bytes_received) {
*bytes_received = length;
}
if (length == 0) {
return true;
}
return read_bytes(socket, buffer, length);
}
static bool receive_platform_data(const int socket,
uint32_t transport_type,
uint32_t *command,
uint8_t *receive_buffer,
uint32_t *bytes_to_receive)
{
bool result;
uint32_t response;
uint32_t bytes_received;
result = read_data32(socket, &response);
if (!result) {
return result;
}
*command = response;
result = read_data32(socket, &transport_type);
if (!result) {
return result;
}
bytes_received = 0;
result = read_multiple_bytes(socket, receive_buffer, &bytes_received,
*bytes_to_receive);
if (!result) {
return result;
}
*bytes_to_receive = bytes_received;
return result;
}
static bool write_bytes(const int socket, const uint8_t *buffer,
uint32_t number_of_bytes)
{
ssize_t number_sent = 0;
ssize_t result;
while (number_sent < number_of_bytes) {
result = send(socket, buffer + number_sent,
number_of_bytes - number_sent, 0);
if (result == -1) {
return false;
}
number_sent += result;
}
return true;
}
static bool write_data32(const int socket, uint32_t data)
{
data = htonl(data);
return write_bytes(socket, (uint8_t *)&data, sizeof(uint32_t));
}
static bool write_multiple_bytes(const int socket, const uint8_t *buffer,
uint32_t bytes_to_send)
{
bool result;
result = write_data32(socket, bytes_to_send);
if (!result) {
return result;
}
return write_bytes(socket, buffer, bytes_to_send);
}
static bool send_platform_data(const int socket,
uint32_t transport_type, uint32_t command,
const uint8_t *send_buffer, size_t bytes_to_send)
{
bool result;
result = write_data32(socket, command);
if (!result) {
return result;
}
result = write_data32(socket, transport_type);
if (!result) {
return result;
}
return write_multiple_bytes(socket, send_buffer, bytes_to_send);
}
int spdm_socket_connect(uint16_t port, Error **errp)
{
int client_socket;
struct sockaddr_in server_addr;
client_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
if (client_socket < 0) {
error_setg(errp, "cannot create socket: %s", strerror(errno));
return -1;
}
memset((char *)&server_addr, 0, sizeof(server_addr));
server_addr.sin_family = AF_INET;
server_addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
server_addr.sin_port = htons(port);
if (connect(client_socket, (struct sockaddr *)&server_addr,
sizeof(server_addr)) < 0) {
error_setg(errp, "cannot connect: %s", strerror(errno));
close(client_socket);
return -1;
}
return client_socket;
}
uint32_t spdm_socket_rsp(const int socket, uint32_t transport_type,
void *req, uint32_t req_len,
void *rsp, uint32_t rsp_len)
{
uint32_t command;
bool result;
result = send_platform_data(socket, transport_type,
SPDM_SOCKET_COMMAND_NORMAL,
req, req_len);
if (!result) {
return 0;
}
result = receive_platform_data(socket, transport_type, &command,
(uint8_t *)rsp, &rsp_len);
if (!result) {
return 0;
}
assert(command != 0);
return rsp_len;
}
void spdm_socket_close(const int socket, uint32_t transport_type)
{
send_platform_data(socket, transport_type,
SPDM_SOCKET_COMMAND_SHUTDOWN, NULL, 0);
}


@@ -904,7 +904,7 @@ static void tpm_emulator_vm_state_change(void *opaque, bool running,
     trace_tpm_emulator_vm_state_change(running, state);
 
-    if (!running || !tpm_emu->relock_storage) {
+    if (!running || state != RUN_STATE_RUNNING || !tpm_emu->relock_storage) {
         return;
     }
@@ -939,7 +939,7 @@
     .version_id = 0,
     .pre_save = tpm_emulator_pre_save,
     .post_load = tpm_emulator_post_load,
-    .fields = (const VMStateField[]) {
+    .fields = (VMStateField[]) {
         VMSTATE_UINT32(state_blobs.permanent_flags, TPMEmulator),
         VMSTATE_UINT32(state_blobs.permanent.size, TPMEmulator),
         VMSTATE_VBUFFER_ALLOC_UINT32(state_blobs.permanent.buffer,


@@ -339,11 +339,10 @@ void tpm_util_show_buffer(const unsigned char *buffer,
     size_t len, i;
     char *line_buffer, *p;
 
-    if (!trace_event_get_state_backends(TRACE_TPM_UTIL_SHOW_BUFFER_CONTENT)) {
+    if (!trace_event_get_state_backends(TRACE_TPM_UTIL_SHOW_BUFFER)) {
         return;
     }
     len = MIN(tpm_cmd_get_size(buffer), buffer_size);
-    trace_tpm_util_show_buffer_header(string, len);
 
     /*
      * allocate enough room for 3 chars per buffer entry plus a
@@ -357,7 +356,7 @@ void tpm_util_show_buffer(const unsigned char *buffer,
         }
         p += sprintf(p, "%.2X ", buffer[i]);
     }
-    trace_tpm_util_show_buffer_content(line_buffer);
+    trace_tpm_util_show_buffer(string, len, line_buffer);
 
     g_free(line_buffer);
 }


@@ -10,8 +10,7 @@ tpm_util_get_buffer_size_len(uint32_t len, size_t expected) "tpm_resp->len = %u,
 tpm_util_get_buffer_size_hdr_len2(uint32_t len, size_t expected) "tpm2_resp->hdr.len = %u, expected = %zu"
 tpm_util_get_buffer_size_len2(uint32_t len, size_t expected) "tpm2_resp->len = %u, expected = %zu"
 tpm_util_get_buffer_size(size_t len) "buffersize of device: %zu"
-tpm_util_show_buffer_header(const char *direction, size_t len) "direction: %s len: %zu"
-tpm_util_show_buffer_content(const char *buf) "%s"
+tpm_util_show_buffer(const char *direction, size_t len, const char *buf) "direction: %s len: %zu\n%s"
 
 # tpm_emulator.c
 tpm_emulator_set_locality(uint8_t locty) "setting locality to %d"


@@ -5,16 +5,3 @@ dbus_vmstate_pre_save(void)
 dbus_vmstate_post_load(int version_id) "version_id: %d"
 dbus_vmstate_loading(const char *id) "id: %s"
 dbus_vmstate_saving(const char *id) "id: %s"
-
-# iommufd.c
-iommufd_backend_connect(int fd, bool owned, uint32_t users) "fd=%d owned=%d users=%d"
-iommufd_backend_disconnect(int fd, uint32_t users) "fd=%d users=%d"
-iommu_backend_set_fd(int fd) "pre-opened /dev/iommu fd=%d"
-iommufd_backend_map_dma(int iommufd, uint32_t ioas, uint64_t iova, uint64_t size, void *vaddr, bool readonly, int ret) " iommufd=%d ioas=%d iova=0x%"PRIx64" size=0x%"PRIx64" addr=%p readonly=%d (%d)"
-iommufd_backend_unmap_dma_non_exist(int iommufd, uint32_t ioas, uint64_t iova, uint64_t size, int ret) " Unmap nonexistent mapping: iommufd=%d ioas=%d iova=0x%"PRIx64" size=0x%"PRIx64" (%d)"
-iommufd_backend_unmap_dma(int iommufd, uint32_t ioas, uint64_t iova, uint64_t size, int ret) " iommufd=%d ioas=%d iova=0x%"PRIx64" size=0x%"PRIx64" (%d)"
-iommufd_backend_alloc_ioas(int iommufd, uint32_t ioas) " iommufd=%d ioas=%d"
-iommufd_backend_alloc_hwpt(int iommufd, uint32_t dev_id, uint32_t pt_id, uint32_t flags, uint32_t hwpt_type, uint32_t len, uint64_t data_ptr, uint32_t out_hwpt_id, int ret) " iommufd=%d dev_id=%u pt_id=%u flags=0x%x hwpt_type=%u len=%u data_ptr=0x%"PRIx64" out_hwpt=%u (%d)"
-iommufd_backend_free_id(int iommufd, uint32_t id, int ret) " iommufd=%d id=%d (%d)"
-iommufd_backend_set_dirty(int iommufd, uint32_t hwpt_id, bool start, int ret) " iommufd=%d hwpt=%u enable=%d (%d)"
-iommufd_backend_get_dirty_bitmap(int iommufd, uint32_t hwpt_id, uint64_t iova, uint64_t size, uint64_t page_size, int ret) " iommufd=%d hwpt=%u iova=0x%"PRIx64" size=0x%"PRIx64" page_size=0x%"PRIx64" (%d)"

block.c (499 changes): diff suppressed because it is too large.


@@ -356,7 +356,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
                             BlockDriverState *target, int64_t speed,
                             MirrorSyncMode sync_mode, BdrvDirtyBitmap *sync_bitmap,
                             BitmapSyncMode bitmap_mode,
-                            bool compress, bool discard_source,
+                            bool compress,
                             const char *filter_node_name,
                             BackupPerf *perf,
                             BlockdevOnError on_source_error,
@@ -457,8 +457,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
         goto error;
     }

-    cbw = bdrv_cbw_append(bs, target, filter_node_name, discard_source,
-                          &bcs, errp);
+    cbw = bdrv_cbw_append(bs, target, filter_node_name, &bcs, errp);
     if (!cbw) {
         goto error;
     }
@@ -497,7 +496,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     block_copy_set_speed(bcs, speed);

     /* Required permissions are taken by copy-before-write filter target */
-    bdrv_graph_wrlock();
+    bdrv_graph_wrlock(target);
     block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
                        &error_abort);
     bdrv_graph_wrunlock();


@@ -1073,7 +1073,7 @@ static BlockDriver bdrv_blkdebug = {
     .is_filter              = true,

     .bdrv_parse_filename    = blkdebug_parse_filename,
-    .bdrv_open              = blkdebug_open,
+    .bdrv_file_open         = blkdebug_open,
     .bdrv_close             = blkdebug_close,
     .bdrv_reopen_prepare    = blkdebug_reopen_prepare,
     .bdrv_child_perm        = blkdebug_child_perm,

Some files were not shown because too many files have changed in this diff.