Compare commits


52 Commits

Author SHA1 Message Date
Michael Roth
54e1f5be86 Update version for v6.1.1 release
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-23 09:52:09 -06:00
Cole Robinson
fddd169de5 tests: tcg: Fix PVH test with binutils 2.36+
binutils started adding a .note.gnu.property ELF section which
makes the PVH test fail:

  TEST    hello on x86_64
qemu-system-x86_64: Error loading uncompressed kernel without PVH ELF Note

Discard .note.gnu* while keeping the PVH .note bits intact.

This also strips the build-id note, so drop the related comment.

Signed-off-by: Cole Robinson <crobinso@redhat.com>
Message-Id: <5ab2a54c262c61f64c22dbb49ade3e2db8a740bb.1633708346.git.crobinso@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 8e751e9c38)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-15 07:13:37 -06:00
Richard Henderson
711bd602cc tcg/arm: Reduce vector alignment requirement for NEON
With arm32, the ABI gives us 8-byte alignment for the stack.
While it's possible to realign the stack to provide 16-byte alignment,
it's far easier to simply not encode 16-byte alignment in the
VLD1 and VST1 instructions that we emit.

Remove the assertion in temp_allocate_frame, limit natural alignment
to the provided stack alignment, and add a comment.
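
In code terms the change is just a clamp; a minimal sketch, with hypothetical
names standing in for the TCG internals:

    /* Illustrative only: never request more alignment for a temp's stack
     * slot than the ABI guarantees for the stack itself (8 bytes on arm32),
     * instead of asserting that 16-byte alignment is available. */
    static int frame_alignment(int natural_align, int stack_align)
    {
        return natural_align < stack_align ? natural_align : stack_align;
    }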

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1999878
Reported-by: Richard W.M. Jones <rjones@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210912174925.200132-1-richard.henderson@linaro.org>
Message-Id: <20211206191335.230683-2-richard.henderson@linaro.org>
(cherry picked from commit b9537d5904)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-15 07:13:27 -06:00
Daniel P. Berrangé
e88636b4d4 target/i386: add missing bits to CR4_RESERVED_MASK
Booting Fedora kernels with -cpu max hangs very early in boot. Disabling
the la57 CPUID bit fixes the problem. git bisect traced the regression to

  commit 213ff024a2 (HEAD, refs/bisect/bad)
  Author: Lara Lazier <laramglazier@gmail.com>
  Date:   Wed Jul 21 17:26:50 2021 +0200

    target/i386: Added consistency checks for CR4

    All MBZ bits in CR4 must be zero. (APM2 15.5)
    Added reserved bitmask and added checks in both
    helper_vmrun and helper_write_crN.

    Signed-off-by: Lara Lazier <laramglazier@gmail.com>
    Message-Id: <20210721152651.14683-2-laramglazier@gmail.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

In this commit CR4_RESERVED_MASK is missing CR4_LA57_MASK and
two others. Adding these lets Fedora kernels boot once again.
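
The shape of the fix: every architecturally valid CR4 bit must be excluded
from the reserved (must-be-zero) mask, otherwise legitimate writes fault. A
sketch showing only two bits (the bit positions are architectural, but the
macro names and the set of bits are illustrative, not the exact QEMU
definitions):

    #include <stdbool.h>
    #include <stdint.h>

    #define CR4_PAE_MASK      (1ULL << 5)
    #define CR4_LA57_MASK     (1ULL << 12)   /* 5-level paging */
    /* Reserved = everything that is not a known bit; forgetting a valid
     * bit here (LA57 in the bug above) makes legitimate writes fault. */
    #define CR4_VALID_MASK    (CR4_PAE_MASK | CR4_LA57_MASK /* | ... */)
    #define CR4_RESERVED_MASK (~CR4_VALID_MASK)

    static bool cr4_write_is_valid(uint64_t new_cr4)
    {
        return (new_cr4 & CR4_RESERVED_MASK) == 0;
    }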

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Message-Id: <20210831175033.175584-1-berrange@redhat.com>
[Removed VMXE/SMXE, matching the commit message. - Paolo]
Fixes: 213ff024a2 ("target/i386: Added consistency checks for CR4", 2021-07-22)
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 69e3895f9d)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-15 07:13:18 -06:00
Gerd Hoffmann
34833f361b qxl: fix pre-save logic
Oops.  Logic is backwards.

Fixes: 39b8a183e2 ("qxl: remove assert in qxl_pre_save.")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/610
Resolves: https://bugzilla.redhat.com//show_bug.cgi?id=2002907
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210910094203.3582378-1-kraxel@redhat.com>
(cherry picked from commit eb94846280)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-15 07:13:12 -06:00
Jon Maloy
43583f0c07 e1000: fix tx re-entrancy problem
The fact that the MMIO handler is not re-entrant causes an infinite
loop under certain conditions:

Guest write to TDT ->  Loopback -> RX (DMA to TDT) -> TX

We now eliminate the effect of this problem locally in e1000, by adding
a boolean in struct E1000State indicating when the TX side is busy. Any
new call that enters while this flag is set returns early instead of
interfering with the ongoing work, which eliminates any risk of looping.

This is intended to address CVE-2021-20257.
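
The guard is a plain busy flag around the transmit path; a sketch of the
pattern, with a hypothetical field name (the actual struct member may differ):

    #include <stdbool.h>

    struct E1000StateSketch {
        bool tx_busy;                /* hypothetical: set while TX runs */
        /* ... rest of the device state ... */
    };

    static void start_xmit(struct E1000StateSketch *s)
    {
        if (s->tx_busy) {
            /* Re-entered via loopback RX DMA writing TDT: bail out instead
             * of recursing; the outer call is still draining the ring. */
            return;
        }
        s->tx_busy = true;
        /* ... process the TX ring, which may trigger loopback RX ... */
        s->tx_busy = false;
    }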

Signed-off-by: Jon Maloy <jmaloy@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 25ddb946e6)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 17:40:06 -06:00
Prasad J Pandit
1ce084af08 net: vmxnet3: validate configuration values during activate (CVE-2021-20203)
While activating the device in vmxnet3_activate_device(), it does not
validate guest-supplied configuration values against predefined
minimum/maximum limits. This may lead to integer overflow or
OOB access issues. Add checks to avoid it.
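
The fix boils down to range-checking each guest-supplied value before it is
used to size or index anything; a generic sketch with hypothetical names and
limits (not the actual vmxnet3 fields):

    #include <stdbool.h>
    #include <stdint.h>

    #define RING_SIZE_MIN 32             /* illustrative limits only */
    #define RING_SIZE_MAX 4096

    /* Reject out-of-range guest configuration instead of using it blindly,
     * which could overflow size computations or index past buffers. */
    static bool validate_ring_size(uint32_t guest_value)
    {
        return guest_value >= RING_SIZE_MIN && guest_value <= RING_SIZE_MAX;
    }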

Fixes: CVE-2021-20203
Buglink: https://bugs.launchpad.net/qemu/+bug/1913873
Reported-by: Gaoning Pan <pgn@zju.edu.cn>
Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit d05dcd94ae)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 17:39:20 -06:00
Mark Mielke
fec12fc888 virtio-blk: Fix clean up of host notifiers for single MR transaction.
The code that introduced "virtio-blk: Configure all host notifiers in
a single MR transaction" added a second loop variable to perform
cleanup in the second loop, but mistakenly still refers to the first
loop variable within the second loop body.
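
Distilled, the cleanup loop must index with its own counter; a sketch of the
corrected pattern (hypothetical helper name, not the actual virtio-blk code):

    static void cleanup_notifier(int idx);   /* hypothetical helper */

    /* Tear down everything the first loop managed to set up.  The original
     * bug used the first loop's counter inside this body instead of 'j'. */
    static void rollback_notifiers(int configured)
    {
        for (int j = 0; j < configured; j++) {
            cleanup_notifier(j);             /* j, not the outer counter */
        }
    }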

Fixes: d0267da614 ("virtio-blk: Configure all host notifiers in a single MR transaction")
Signed-off-by: Mark Mielke <mark.mielke@gmail.com>
Message-id: CALm7yL08qarOu0dnQkTN+pa=BSRC92g31YpQQNDeAiT4yLZWQQ@mail.gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 5b807181c2)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 15:10:56 -06:00
Philippe Mathieu-Daudé
ef0cf1887e tests/qtest/fdc-test: Add a regression test for CVE-2021-20196
Without the previous commit, when running 'make check-qtest-i386'
with QEMU configured with '--enable-sanitizers' we get:

  AddressSanitizer:DEADLYSIGNAL
  =================================================================
  ==287878==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000344
  ==287878==The signal is caused by a WRITE memory access.
  ==287878==Hint: address points to the zero page.
      #0 0x564b2e5bac27 in blk_inc_in_flight block/block-backend.c:1346:5
      #1 0x564b2e5bb228 in blk_pwritev_part block/block-backend.c:1317:5
      #2 0x564b2e5bcd57 in blk_pwrite block/block-backend.c:1498:11
      #3 0x564b2ca1cdd3 in fdctrl_write_data hw/block/fdc.c:2221:17
      #4 0x564b2ca1b2f7 in fdctrl_write hw/block/fdc.c:829:9
      #5 0x564b2dc49503 in portio_write softmmu/ioport.c:201:9

Add the reproducer for CVE-2021-20196.

Suggested-by: Alexander Bulekov <alxndr@bu.edu>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20211124161536.631563-4-philmd@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit cc20926e9b)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 15:05:11 -06:00
Philippe Mathieu-Daudé
71ba2adfeb hw/block/fdc: Kludge missing floppy drive to fix CVE-2021-20196
Guest might select another drive on the bus by setting the
DRIVE_SEL bit of the DIGITAL OUTPUT REGISTER (DOR).
The current controller model doesn't expect a BlockBackend
to be NULL. A simple way to fix CVE-2021-20196 is to create
an empty BlockBackend when it is missing. All further
accesses will be safely handled, and the controller state
machines keep behaving correctly.

Cc: qemu-stable@nongnu.org
Fixes: CVE-2021-20196
Reported-by: Gaoning Pan (Ant Security Light-Year Lab) <pgn@zju.edu.cn>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20211124161536.631563-3-philmd@redhat.com
BugLink: https://bugs.launchpad.net/qemu/+bug/1912780
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/338
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit 1ab95af033)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 15:05:05 -06:00
Philippe Mathieu-Daudé
7629818574 hw/block/fdc: Extract blk_create_empty_drive()
We are going to re-use this code in the next commit,
so extract it as a new blk_create_empty_drive() function.

Inspired-by: Hanna Reitz <hreitz@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20211124161536.631563-2-philmd@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit b154791e7b)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 15:04:59 -06:00
Daniil Tatianin
4658dfcbc0 chardev/wctable: don't free the instance in wctablet_chr_finalize
The object is supposed to be freed by invoking obj->free, not
obj->instance_finalize. Freeing it in instance_finalize leads to a
use-after-free followed by a double free in object_unref/object_finalize.

Signed-off-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20211117142349.836279-1-d-tatianin@yandex-team.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit fdc6e16818)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:54:14 -06:00
Klaus Jensen
2b2eb343a0 hw/nvme: fix buffer overrun in nvme_changed_nslist (CVE-2021-3947)
Fix missing offset verification.
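
The usual shape of such a fix is validating the requested offset against the
log buffer before copying; a generic, hedged sketch (not the actual
nvme_changed_nslist() code):

    #include <stdint.h>
    #include <string.h>

    /* Copy at most what remains of the log page past 'off'; without the
     * offset check, a large guest-supplied offset reads past the buffer. */
    static int copy_log_page(uint8_t *dst, size_t len, uint64_t off,
                             const uint8_t *log, size_t log_size)
    {
        if (off > log_size) {
            return -1;                       /* invalid field in command */
        }
        size_t trans = len < log_size - off ? len : log_size - off;
        memcpy(dst, log + off, trans);
        return 0;
    }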

Cc: qemu-stable@nongnu.org
Cc: Philippe Mathieu-Daudé <philmd@redhat.com>
Reported-by: Qiuhao Li <Qiuhao.Li@outlook.com>
Fixes: f432fdfa12 ("support changed namespace asynchronous event")
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
(cherry picked from commit e2c57529c9)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:52:50 -06:00
Eric Blake
932333c5f0 nbd/server: Don't complain on certain client disconnects
When a client disconnects abruptly, but did not have any pending
requests (for example, when using nbdsh without calling h.shutdown),
we used to output the following message:

$ qemu-nbd -f raw file
$ nbdsh -u 'nbd://localhost:10809' -c 'h.trim(1,0)'
qemu-nbd: Disconnect client, due to: Failed to read request: Unexpected end-of-file before all bytes were read

Then in commit f148ae7, we refactored nbd_receive_request() to use
nbd_read_eof(); when this returns 0, we regressed into tracing
uninitialized memory (if tracing is enabled) and reporting a
less-specific:

qemu-nbd: Disconnect client, due to: Request handling failed in intermediate state

Note that with Unix sockets, we have yet another error message,
unchanged by the 6.0 regression:

$ qemu-nbd -k /tmp/sock -f raw file
$ nbdsh -u 'nbd+unix:///?socket=/tmp/sock' -c 'h.trim(1,0)'
qemu-nbd: Disconnect client, due to: Failed to send reply: Unable to write to socket: Broken pipe

But in all cases, the error message goes away if the client performs a
soft shutdown by using NBD_CMD_DISC, rather than a hard shutdown by
abrupt disconnect:

$ nbdsh -u 'nbd://localhost:10809' -c 'h.trim(1,0)' -c 'h.shutdown()'

This patch fixes things to avoid uninitialized memory, and in general
avoids warning about a client that does a hard shutdown when not in
the middle of a packet.  A client that aborts mid-request, or which
does not read the full server's reply, can still result in warnings,
but those are indeed much more unusual situations.

CC: qemu-stable@nongnu.org
Fixes: f148ae7d36 ("nbd/server: Quiesce coroutines on context switch", v6.0.0)
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[eblake: defer unrelated typo fixes to later patch]
Message-Id: <20211117170230.1128262-2-eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
(cherry picked from commit 1644cccea5)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:51:17 -06:00
Peng Liang
8c2d5911de vfio: Fix memory leak of hostwin
hostwin is allocated and added to hostwin_list in vfio_host_win_add, but
it is only deleted from hostwin_list in vfio_host_win_del, never freed,
which causes a memory leak.  Also, freeing all elements of hostwin_list
is missing from vfio_disconnect_container.
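
Both halves of the fix follow the same rule: whoever unlinks an entry also
frees it, and container teardown walks the whole list. A plain-C sketch of
the teardown half (hypothetical types; the real code uses QEMU's QLIST
macros and g_free):

    #include <stdlib.h>

    struct hostwin {
        struct hostwin *next;
        /* ... window range ... */
    };

    /* Free every element, not just unlink it, when the container goes away. */
    static void hostwin_list_destroy(struct hostwin **head)
    {
        struct hostwin *w = *head;
        while (w) {
            struct hostwin *next = w->next;
            free(w);             /* the original leak: unlinked, never freed */
            w = next;
        }
        *head = NULL;
    }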

Fix: 2e4109de8e ("vfio/spapr: Create DMA window dynamically (SPAPR IOMMU v2)")
CC: qemu-stable@nongnu.org
Signed-off-by: Peng Liang <liangpeng10@huawei.com>
Link: https://lore.kernel.org/r/20211117014739.1839263-1-liangpeng10@huawei.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
(cherry picked from commit f3bc3a73c9)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:49:55 -06:00
Jason Wang
08e46e6d92 virtio: use virtio accessor to access packed event
We used to access the packed descriptor event and off_wrap fields via
address_space_{write|read}_cached(). When we hit the cache, memcpy()
is used, which is not atomic and may lead to a wrong value being read
or written.

This patch fixes this by switching to
virtio_{stw|lduw}_phys_cached() to make sure the access is atomic.
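
The underlying problem is torn 16-bit accesses; a standalone sketch of the
difference, using C11 atomics as a stand-in for the
virtio_lduw/stw_phys_cached() helpers the patch switches to:

    #include <stdatomic.h>
    #include <stdint.h>
    #include <string.h>

    /* Racy: memcpy() may be performed byte by byte, so a concurrent guest
     * update can be observed half old, half new ("torn"). */
    static uint16_t read_off_wrap_racy(const void *shared)
    {
        uint16_t v;
        memcpy(&v, shared, sizeof(v));
        return v;
    }

    /* Atomic: a single aligned 16-bit load, which is what the virtio
     * accessors guarantee for descriptor and event fields. */
    static uint16_t read_off_wrap_atomic(_Atomic uint16_t *shared)
    {
        return atomic_load_explicit(shared, memory_order_relaxed);
    }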

Fixes: 683f766567 ("virtio: event suppression support for packed ring")
Cc: qemu-stable@nongnu.org
Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20211111063854.29060-2-jasowang@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit d152cdd6f6)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:43:25 -06:00
Jason Wang
df1c9c3039 virtio: use virtio accessor to access packed descriptor flags
We used to access packed descriptor flags via
address_space_{write|read}_cached(). When we hit the cache, memcpy()
is used, which is not an atomic operation and may lead to a wrong
value being read or written.

So this patch switches to virtio_{stw|lduw}_phys_cached() to make
sure the access is atomic.

Fixes: 86044b24e8 ("virtio: basic packed virtqueue support")
Cc: qemu-stable@nongnu.org
Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20211111063854.29060-1-jasowang@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit f463e761a4)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:43:18 -06:00
Igor Mammedov
7204b8f3c6 pcie: rename 'native-hotplug' to 'x-native-hotplug'
Mark the property as experimental/internal by adding an 'x-' prefix.

The property was introduced in 6.1 and should have provided the
ability to turn on native PCIe hotplug on a port, even when ACPI PCI
hotplug is in use, if the user explicitly sets the property on the
CLI. However, that never worked since the slot is wired to the ACPI
hotplug controller.
Another unintended use case: disabling native hotplug on a slot when
ACPI based hotplug is disabled, which works, but the slot already has
the 'hotplug' property for this task.

It should be relatively safe to rename it to experimental
as no users should exist for it and given that the property
is broken we don't really want to leave it around for much
longer lest users start using it.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Ani Sinha <ani@anisinha.ca>
Message-Id: <20211112110857.3116853-2-imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 2aa1842d6d)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:38:08 -06:00
Greg Kurz
36c651c226 accel/tcg: Register a force_rcu notifier
A TCG vCPU doing a busy loop systematically hangs the QEMU monitor
if the user passes 'device_add' without arguments. This is because
drain_call_rcu(), which is called from qmp_device_add(), cannot return
if readers don't exit read-side critical sections. That is typically
what busy-looping TCG vCPUs do:

int cpu_exec(CPUState *cpu)
{
[...]
    rcu_read_lock();
[...]
    while (!cpu_handle_exception(cpu, &ret)) {
        // Busy loop keeps vCPU here
    }
[...]
    rcu_read_unlock();

    return ret;
}

For MTTCG, have all vCPU threads register a force_rcu notifier that will
kick them out of the loop using async_run_on_cpu(). The notifier is called
with the rcu_registry_lock mutex held; using async_run_on_cpu() ensures
there are no deadlocks.

For RR, a single thread runs all vCPUs. Just register a single notifier
that kicks the current vCPU to the next one.

For MTTCG:
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>

For RR:
Suggested-by: Richard Henderson <richard.henderson@linaro.org>

Fixes: 7bed89958b ("device_core: use drain_call_rcu in in qmp_device_add")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/650
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211109183523.47726-3-groug@kaod.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit dd47a8f654)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:26:01 -06:00
Greg Kurz
fceaefb43f rcu: Introduce force_rcu notifier
The drain_call_rcu() function can be blocked as long as an RCU reader
stays in a read-side critical section. This is typically what happens
when a TCG vCPU is executing a busy loop. It can deadlock the QEMU
monitor as reported in https://gitlab.com/qemu-project/qemu/-/issues/650.

This can be avoided by allowing drain_call_rcu() to enforce an RCU grace
period. Since each reader might need to do specific actions to end a
read-side critical section, do it with notifiers.

Prepare the ground for this by adding a notifier list to the RCU reader
struct and using it in wait_for_readers() if drain_call_rcu() is in
progress. An API is added for readers to register their notifiers.

This is largely based on a draft from Paolo Bonzini.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211109183523.47726-2-groug@kaod.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit ef149763a8)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:25:55 -06:00
Laurent Vivier
7d71e6bfb0 hw: m68k: virt: Add compat machine for 6.1
Add the missing machine type for m68k/virt

Cc: qemu-stable@nongnu.org
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20211106194158.4068596-2-laurent@vivier.eu>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
(cherry picked from commit 6837f29976)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:23:21 -06:00
Mauro Matteo Cascella
c2c7f108b8 hw/scsi/scsi-disk: MODE_PAGE_ALLS not allowed in MODE SELECT commands
This avoids an off-by-one read of 'mode_sense_valid' buffer in
hw/scsi/scsi-disk.c:mode_sense_page().
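
Off-by-one table reads like this are fixed by tightening the index check; an
illustrative sketch (the array size and type are assumptions, not the actual
scsi-disk definitions):

    #include <stdbool.h>
    #include <stdint.h>

    #define MODE_PAGE_ALLS 0x3f       /* "return all pages" pseudo-page */

    static bool mode_sense_valid[MODE_PAGE_ALLS];   /* valid indexes 0..0x3e */

    /* MODE SELECT must not accept the ALLS pseudo-page: it is only
     * meaningful for MODE SENSE, and using it as an index here would read
     * one element past the table. */
    static bool page_is_selectable(uint8_t page)
    {
        return page < MODE_PAGE_ALLS && mode_sense_valid[page];
    }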

Fixes: CVE-2021-3930
Cc: qemu-stable@nongnu.org
Reported-by: Alexander Bulekov <alxndr@bu.edu>
Fixes: a8f4bbe290 ("scsi-disk: store valid mode pages in a table")
Fixes: #546
Reported-by: Qiuhao Li <Qiuhao.Li@outlook.com>
Signed-off-by: Mauro Matteo Cascella <mcascell@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit b3af7fdf9c)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:22:44 -06:00
Paolo Bonzini
3488bb205d target-i386: mmu: fix handling of noncanonical virtual addresses
mmu_translate is supposed to return an error code for page faults; it is
not able to handle other exceptions.  The #GP case for noncanonical
virtual addresses is not handled correctly, and incorrectly raised as
a page fault with error code 1.  Since it cannot happen for nested
page tables, move it directly to handle_mmu_fault, even before the
invocation of mmu_translate.

Fixes: #676
Fixes: 661ff4879e ("target/i386: extract mmu_translate", 2021-05-11)
Cc: qemu-stable@nongnu.org
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit b04dc92e01)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:19:00 -06:00
Paolo Bonzini
cddfaf96ab target-i386: mmu: use pg_mode instead of HF_LMA_MASK
Correctly look up the paging mode of the hypervisor when it is using 64-bit
mode but the guest is not.

Fixes: 68746930ae ("target/i386: use mmu_translate for NPT walk", 2021-05-11)
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 93eae35832)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:18:25 -06:00
Jessica Clarke
43a457841f Partially revert "build: -no-pie is no functional linker flag"
This partially reverts commit bbd2d5a812.

This commit was misguided and broke using --disable-pie on any distro
that enables PIE by default in their compiler driver, including Debian
and its derivatives. Whilst -no-pie is not a linker flag, it is a
compiler driver flag that ensures -pie is not automatically passed by it
to the linker. Without it, all compile_prog checks will fail as any code
built with the explicit -fno-pie will fail to link with the implicit
default -pie due to trying to use position-dependent relocations. The
only bug that needed fixing was LDFLAGS_NOPIE being used as a flag for
the linker itself in pc-bios/optionrom/Makefile.

Note this does not reinstate exporting LDFLAGS_NOPIE, as it is unused,
since the only previous use was the one that should not have existed. I
have also updated the comment for the -fno-pie and -no-pie checks to
reflect what they're actually needed for.

Fixes: bbd2d5a812
Cc: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Jessica Clarke <jrtc27@jrtc27.com>
Message-Id: <20210805192545.38279-1-jrtc27@jrtc27.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit ffd205ef29)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:13:38 -06:00
Ari Sundholm
ebf660beb1 block/file-posix: Fix return value translation for AIO discards
AIO discards regressed as a result of the following commit:
	0dfc7af2 block/file-posix: Optimize for macOS

When trying to run blkdiscard within a Linux guest, the request would
fail, with some errors in dmesg:

---- [ snip ] ----
[    4.010070] sd 2:0:0:0: [sda] tag#0 FAILED Result: hostbyte=DID_OK
driverbyte=DRIVER_SENSE
[    4.011061] sd 2:0:0:0: [sda] tag#0 Sense Key : Aborted Command
[current]
[    4.011061] sd 2:0:0:0: [sda] tag#0 Add. Sense: I/O process
terminated
[    4.011061] sd 2:0:0:0: [sda] tag#0 CDB: Unmap/Read sub-channel 42
00 00 00 00 00 00 00 18 00
[    4.011061] blk_update_request: I/O error, dev sda, sector 0
---- [ snip ] ----

This turns out to be a result of a flaw in changes to the error value
translation logic in handle_aiocb_discard(). The default return value
may be left untranslated in some configurations, and the wrong variable
is used in one translation.

Fix both issues.
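
Both flaws reduce to one rule when translating syscall results: return the
negative errno of the call that actually failed. A hedged sketch (not the
actual handle_aiocb_discard() code, although BLKDISCARD itself is the real
Linux ioctl):

    #include <errno.h>
    #include <linux/fs.h>            /* BLKDISCARD */
    #include <stdint.h>
    #include <sys/ioctl.h>

    /* Return 0 on success and a negative errno on failure -- never a raw -1,
     * and never errno left over from some earlier, unrelated call. */
    static int do_discard(int fd, uint64_t offset, uint64_t bytes)
    {
        uint64_t range[2] = { offset, bytes };

        if (ioctl(fd, BLKDISCARD, range) == 0) {
            return 0;
        }
        return -errno;               /* translate exactly this call's errno */
    }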

Fixes: 0dfc7af2b2 ("block/file-posix: Optimize for macOS")
Cc: qemu-stable@nongnu.org
Signed-off-by: Ari Sundholm <ari@tuxera.com>
Signed-off-by: Emil Karlson <jkarlson@tuxera.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20211019110954.4170931-1-ari@tuxera.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 13a028336f)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:12:50 -06:00
Ani Sinha
bbbdedb386 tests/acpi/bios-tables-test: update DSDT blob for multifunction bridge test
We added a new unit test for testing ACPI hotplug on multifunction bridges in
q35 machines. Here, we update the DSDT table golden master blob for this unit
test.

The test adds the following devices to qemu and then checks the changes
introduced in the DSDT table due to the addition of the following devices:

(a) a multifunction bridge device
(b) a bridge device with function 1
(c) a non-bridge device with function 2

In the DSDT table, we should see AML hotplug descriptions for (a) and (b).
For (a) we should find a hotplug AML description for function 0.

Following is the ASL diff between the original DSDT table and the DSDT table
as modified by the unit test. We see that the multifunction bridge on bus 2 and
the single-function bridge on bus 3, function 1 are described, but not the
non-bridge balloon device on bus 4, function 2.

@@ -1,30 +1,30 @@
 /*
  * Intel ACPI Component Architecture
  * AML/ASL+ Disassembler version 20190509 (64-bit version)
  * Copyright (c) 2000 - 2019 Intel Corporation
  *
  * Disassembling to symbolic ASL+ operators
  *
- * Disassembly of tests/data/acpi/q35/DSDT, Thu Oct  7 18:29:19 2021
+ * Disassembly of /tmp/aml-C7JCA1, Thu Oct  7 18:29:19 2021
  *
  * Original Table Header:
  *     Signature        "DSDT"
- *     Length           0x00002061 (8289)
+ *     Length           0x00002187 (8583)
  *     Revision         0x01 **** 32-bit table (V1), no 64-bit math support
- *     Checksum         0xF9
+ *     Checksum         0x8D
  *     OEM ID           "BOCHS "
  *     OEM Table ID     "BXPC    "
  *     OEM Revision     0x00000001 (1)
  *     Compiler ID      "BXPC"
  *     Compiler Version 0x00000001 (1)
  */
 DefinitionBlock ("", "DSDT", 1, "BOCHS ", "BXPC    ", 0x00000001)
 {
     Scope (\)
     {
         OperationRegion (DBG, SystemIO, 0x0402, One)
         Field (DBG, ByteAcc, NoLock, Preserve)
         {
             DBGB,   8
         }

@@ -3265,23 +3265,95 @@
                 Method (_S1D, 0, NotSerialized)  // _S1D: S1 Device State
                 {
                     Return (Zero)
                 }

                 Method (_S2D, 0, NotSerialized)  // _S2D: S2 Device State
                 {
                     Return (Zero)
                 }

                 Method (_S3D, 0, NotSerialized)  // _S3D: S3 Device State
                 {
                     Return (Zero)
                 }
             }

+            Device (S10)
+            {
+                Name (_ADR, 0x00020000)  // _ADR: Address
+                Name (BSEL, One)
+                Device (S00)
+                {
+                    Name (_SUN, Zero)  // _SUN: Slot User Number
+                    Name (_ADR, Zero)  // _ADR: Address
+                    Method (_EJ0, 1, NotSerialized)  // _EJx: Eject Device, x=0-9
+                    {
+                        PCEJ (BSEL, _SUN)
+                    }
+
+                    Method (_DSM, 4, Serialized)  // _DSM: Device-Specific Method
+                    {
+                        Return (PDSM (Arg0, Arg1, Arg2, Arg3, BSEL, _SUN))
+                    }
+                }
+
+                Method (DVNT, 2, NotSerialized)
+                {
+                    If ((Arg0 & One))
+                    {
+                        Notify (S00, Arg1)
+                    }
+                }
+
+                Method (PCNT, 0, NotSerialized)
+                {
+                    BNUM = One
+                    DVNT (PCIU, One)
+                    DVNT (PCID, 0x03)
+                }
+            }
+
+            Device (S19)
+            {
+                Name (_ADR, 0x00030001)  // _ADR: Address
+                Name (BSEL, Zero)
+                Device (S00)
+                {
+                    Name (_SUN, Zero)  // _SUN: Slot User Number
+                    Name (_ADR, Zero)  // _ADR: Address
+                    Method (_EJ0, 1, NotSerialized)  // _EJx: Eject Device, x=0-9
+                    {
+                        PCEJ (BSEL, _SUN)
+                    }
+
+                    Method (_DSM, 4, Serialized)  // _DSM: Device-Specific Method
+                    {
+                        Return (PDSM (Arg0, Arg1, Arg2, Arg3, BSEL, _SUN))
+                    }
+                }
+
+                Method (DVNT, 2, NotSerialized)
+                {
+                    If ((Arg0 & One))
+                    {
+                        Notify (S00, Arg1)
+                    }
+                }
+
+                Method (PCNT, 0, NotSerialized)
+                {
+                    BNUM = Zero
+                    DVNT (PCIU, One)
+                    DVNT (PCID, 0x03)
+                }
+            }
+
             Method (PCNT, 0, NotSerialized)
             {
+                ^S19.PCNT ()
+                ^S10.PCNT ()
             }
         }
     }
 }

Signed-off-by: Ani Sinha <ani@anisinha.ca>
Message-Id: <20211007135750.1277213-4-ani@anisinha.ca>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
(cherry picked from commit a8339e07f9)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:05:24 -06:00
Ani Sinha
8319de607f tests/acpi/pcihp: add unit tests for hotplug on multifunction bridges for q35
commit d7346e614f ("acpi: x86: pcihp: add support hotplug on multifunction bridges")
added ACPI hotplug descriptions for cold plugged bridges for functions other
than 0. For all other devices, the ACPI hotplug descriptions are limited to
function 0 only. This change adds unit tests for this feature.

This test adds the following devices to qemu and then checks the changes
introduced in the DSDT table due to the addition of the following devices:

(a) a multifunction bridge device
(b) a bridge device with function 1
(c) a non-bridge device with function 2

In the DSDT table, we should see AML hotplug descriptions for (a) and (b).
For (a) we should find a hotplug AML description for function 0.

The following diff compares the DSDT table AML produced with the new unit test
before and after change d7346e614f is introduced. In other words,
this diff reflects the changes that occur in the DSDT table due to change
d7346e614f .

@@ -1,60 +1,38 @@
 /*
  * Intel ACPI Component Architecture
  * AML/ASL+ Disassembler version 20190509 (64-bit version)
  * Copyright (c) 2000 - 2019 Intel Corporation
  *
  * Disassembling to symbolic ASL+ operators
  *
- * Disassembly of tests/data/acpi/q35/DSDT.multi-bridge, Thu Oct  7 18:56:05 2021
+ * Disassembly of /tmp/aml-AN0DA1, Thu Oct  7 18:56:05 2021
  *
  * Original Table Header:
  *     Signature        "DSDT"
- *     Length           0x000020FE (8446)
+ *     Length           0x00002187 (8583)
  *     Revision         0x01 **** 32-bit table (V1), no 64-bit math support
- *     Checksum         0xDE
+ *     Checksum         0x8D
  *     OEM ID           "BOCHS "
  *     OEM Table ID     "BXPC    "
  *     OEM Revision     0x00000001 (1)
  *     Compiler ID      "BXPC"
  *     Compiler Version 0x00000001 (1)
  */
 DefinitionBlock ("", "DSDT", 1, "BOCHS ", "BXPC    ", 0x00000001)
 {
-    /*
-     * iASL Warning: There was 1 external control method found during
-     * disassembly, but only 0 were resolved (1 unresolved). Additional
-     * ACPI tables may be required to properly disassemble the code. This
-     * resulting disassembler output file may not compile because the
-     * disassembler did not know how many arguments to assign to the
-     * unresolved methods. Note: SSDTs can be dynamically loaded at
-     * runtime and may or may not be available via the host OS.
-     *
-     * In addition, the -fe option can be used to specify a file containing
-     * control method external declarations with the associated method
-     * argument counts. Each line of the file must be of the form:
-     *     External (<method pathname>, MethodObj, <argument count>)
-     * Invocation:
-     *     iasl -fe refs.txt -d dsdt.aml
-     *
-     * The following methods were unresolved and many not compile properly
-     * because the disassembler had to guess at the number of arguments
-     * required for each:
-     */
-    External (_SB_.PCI0.S19_.PCNT, MethodObj)    // Warning: Unknown method, guessing 1 arguments
-
     Scope (\)
     {
         OperationRegion (DBG, SystemIO, 0x0402, One)
         Field (DBG, ByteAcc, NoLock, Preserve)
         {
             DBGB,   8
         }

         Method (DBUG, 1, NotSerialized)
         {
             ToHexString (Arg0, Local0)
             ToBuffer (Local0, Local0)
             Local1 = (SizeOf (Local0) - One)
             Local2 = Zero
             While ((Local2 < Local1))
             {
@@ -3322,24 +3300,60 @@
                 Method (DVNT, 2, NotSerialized)
                 {
                     If ((Arg0 & One))
                     {
                         Notify (S00, Arg1)
                     }
                 }

                 Method (PCNT, 0, NotSerialized)
                 {
                     BNUM = One
                     DVNT (PCIU, One)
                     DVNT (PCID, 0x03)
                 }
             }

+            Device (S19)
+            {
+                Name (_ADR, 0x00030001)  // _ADR: Address
+                Name (BSEL, Zero)
+                Device (S00)
+                {
+                    Name (_SUN, Zero)  // _SUN: Slot User Number
+                    Name (_ADR, Zero)  // _ADR: Address
+                    Method (_EJ0, 1, NotSerialized)  // _EJx: Eject Device, x=0-9
+                    {
+                        PCEJ (BSEL, _SUN)
+                    }
+
+                    Method (_DSM, 4, Serialized)  // _DSM: Device-Specific Method
+                    {
+                        Return (PDSM (Arg0, Arg1, Arg2, Arg3, BSEL, _SUN))
+                    }
+                }
+
+                Method (DVNT, 2, NotSerialized)
+                {
+                    If ((Arg0 & One))
+                    {
+                        Notify (S00, Arg1)
+                    }
+                }
+
+                Method (PCNT, 0, NotSerialized)
+                {
+                    BNUM = Zero
+                    DVNT (PCIU, One)
+                    DVNT (PCID, 0x03)
+                }
+            }
+
             Method (PCNT, 0, NotSerialized)
             {
-                ^S19.PCNT (^S10.PCNT ())
+                ^S19.PCNT ()
+                ^S10.PCNT ()
             }
         }
     }
 }

Signed-off-by: Ani Sinha <ani@anisinha.ca>
Message-Id: <20211007135750.1277213-3-ani@anisinha.ca>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
(cherry picked from commit 04dd78b9e8)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:05:20 -06:00
Ani Sinha
a759dc19ec tests/acpi/bios-tables-test: add and allow changes to a new q35 DSDT table blob
We are adding a new unit test to cover the acpi hotplug support in q35 for
multi-function bridges. This test uses a new table DSDT.multi-bridge.
We need to allow changes in DSDT acpi table for addition of this new
unit test.

Signed-off-by: Ani Sinha <ani@anisinha.ca>
Message-Id: <20211007135750.1277213-2-ani@anisinha.ca>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
(cherry picked from commit 6dcb1cc951)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:05:15 -06:00
Michael S. Tsirkin
24101e36f1 pci: fix PCI resource reserve capability on BE
The PCI resource reserve capability should use LE format, as all other PCI
structures do. If we don't, SeaBIOS won't boot:

=== PCI new allocation pass #1 ===
PCI: check devices
PCI: QEMU resource reserve cap: size 10000000000000 type io
PCI: secondary bus 1 size 10000000000000 type io
PCI: secondary bus 1 size 00200000 type mem
PCI: secondary bus 1 size 00200000 type prefmem
=== PCI new allocation pass #2 ===
PCI: out of I/O address space

This became more important since we started reserving IO by default;
previously no one noticed.
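
Multi-byte fields in PCI capabilities are little-endian by definition, so they
must be stored with an explicit LE conversion on big-endian hosts; a sketch of
the pattern (offsets and field names are illustrative, not the exact
capability layout):

    #include <stdint.h>

    /* Store a 32-bit value into a capability buffer in little-endian byte
     * order regardless of host endianness (what QEMU's cpu_to_le32()-style
     * helpers provide). */
    static void stl_le(uint8_t *p, uint32_t v)
    {
        p[0] = v & 0xff;
        p[1] = (v >> 8) & 0xff;
        p[2] = (v >> 16) & 0xff;
        p[3] = (v >> 24) & 0xff;
    }

    static void fill_reserve_cap(uint8_t *cap, uint32_t bus_reserve,
                                 uint32_t io_limit)
    {
        stl_le(cap + 4, bus_reserve);    /* illustrative offsets only */
        stl_le(cap + 8, io_limit);
    }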

Fixes: e2a6290aab ("hw/pcie-root-port: Fix hotplug for PCI devices requiring IO")
Cc: marcel.apfelbaum@gmail.com
Fixes: 226263fb5c ("hw/pci: add QEMU-specific PCI capability to the Generic PCI Express Root Port")
Cc: zuban32s@gmail.com
Fixes: 6755e618d0 ("hw/pci: add PCI resource reserve capability to legacy PCI bridge")
Cc: jing2.liu@linux.intel.com
Tested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
(cherry picked from commit 0e464f7d99)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 14:01:41 -06:00
Paolo Bonzini
a43e057bd6 block: introduce max_hw_iov for use in scsi-generic
Linux limits the size of iovecs to 1024 (UIO_MAXIOV in the kernel
sources, IOV_MAX in POSIX).  Because of this, on some host adapters
requests with many iovecs are rejected with -EINVAL by the
io_submit() or readv()/writev() system calls.

In fact, the same limit applies to SG_IO as well.  To fix both the
EINVAL and the possible performance issues from using fewer iovecs
than allowed by Linux (some HBAs have max_segments as low as 128),
introduce a separate entry in BlockLimits to hold the max_segments
value from sysfs.  This new limit is used only for SG_IO and clamped
to bs->bl.max_iov anyway, just like max_hw_transfer is clamped to
bs->bl.max_transfer.
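
The clamping fits in a few lines; a sketch with simplified names (the real
fields live in BlockLimits; the IOV_MAX fallback value is the Linux limit
mentioned above):

    #include <limits.h>
    #include <stdint.h>

    #ifndef IOV_MAX
    #define IOV_MAX 1024
    #endif

    /* Number of iovecs usable for an SG_IO passthrough request: the host
     * device's max_segments, but never more than what the generic block
     * layer allows (max_iov, itself bounded by IOV_MAX). */
    static uint32_t sg_max_iov(uint32_t max_hw_iov, uint32_t max_iov)
    {
        uint32_t limit = max_hw_iov ? max_hw_iov : IOV_MAX;
        return limit < max_iov ? limit : max_iov;
    }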

Reported-by: Halil Pasic <pasic@linux.ibm.com>
Cc: Hanna Reitz <hreitz@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-block@nongnu.org
Cc: qemu-stable@nongnu.org
Fixes: 18473467d5 ("file-posix: try BLKSECTGET on block devices too, do not round to power of 2", 2021-06-25)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210923130436.1187591-1-pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit cc07162953)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 13:53:15 -06:00
Ani Sinha
3aa2c2cd67 bios-tables-test: Update ACPI DSDT table golden blobs for q35
We have modified the IO address range for ACPI pci hotplug in q35. See change:

5adcc9e39e6a5 ("hw/i386/acpi: fix conflicting IO address range for acpi pci hotplug in q35")

The ACPI DSDT table golden blobs must be regenerated in order to make the unit tests
pass. This change updates the golden ACPI DSDT table blobs.

Following is the ASL diff between the blobs:

@@ -1,30 +1,30 @@
 /*
  * Intel ACPI Component Architecture
  * AML/ASL+ Disassembler version 20190509 (64-bit version)
  * Copyright (c) 2000 - 2019 Intel Corporation
  *
  * Disassembling to symbolic ASL+ operators
  *
- * Disassembly of tests/data/acpi/q35/DSDT, Tue Sep 14 09:04:06 2021
+ * Disassembly of /tmp/aml-52DP90, Tue Sep 14 09:04:06 2021
  *
  * Original Table Header:
  *     Signature        "DSDT"
  *     Length           0x00002061 (8289)
  *     Revision         0x01 **** 32-bit table (V1), no 64-bit math support
- *     Checksum         0xE5
+ *     Checksum         0xF9
  *     OEM ID           "BOCHS "
  *     OEM Table ID     "BXPC    "
  *     OEM Revision     0x00000001 (1)
  *     Compiler ID      "BXPC"
  *     Compiler Version 0x00000001 (1)
  */
 DefinitionBlock ("", "DSDT", 1, "BOCHS ", "BXPC    ", 0x00000001)
 {
     Scope (\)
     {
         OperationRegion (DBG, SystemIO, 0x0402, One)
         Field (DBG, ByteAcc, NoLock, Preserve)
         {
             DBGB,   8
         }

@@ -226,46 +226,46 @@
             Name (_CRS, ResourceTemplate ()  // _CRS: Current Resource Settings
             {
                 IO (Decode16,
                     0x0070,             // Range Minimum
                     0x0070,             // Range Maximum
                     0x01,               // Alignment
                     0x08,               // Length
                     )
                 IRQNoFlags ()
                     {8}
             })
         }
     }

     Scope (_SB.PCI0)
     {
-        OperationRegion (PCST, SystemIO, 0x0CC4, 0x08)
+        OperationRegion (PCST, SystemIO, 0x0CC0, 0x08)
         Field (PCST, DWordAcc, NoLock, WriteAsZeros)
         {
             PCIU,   32,
             PCID,   32
         }

-        OperationRegion (SEJ, SystemIO, 0x0CCC, 0x04)
+        OperationRegion (SEJ, SystemIO, 0x0CC8, 0x04)
         Field (SEJ, DWordAcc, NoLock, WriteAsZeros)
         {
             B0EJ,   32
         }

-        OperationRegion (BNMR, SystemIO, 0x0CD4, 0x08)
+        OperationRegion (BNMR, SystemIO, 0x0CD0, 0x08)
         Field (BNMR, DWordAcc, NoLock, WriteAsZeros)
         {
             BNUM,   32,
             PIDX,   32
         }

         Mutex (BLCK, 0x00)
         Method (PCEJ, 2, NotSerialized)
         {
             Acquire (BLCK, 0xFFFF)
             BNUM = Arg0
             B0EJ = (One << Arg1)
             Release (BLCK)
             Return (Zero)
         }

@@ -3185,34 +3185,34 @@
                     0x0620,             // Range Minimum
                     0x0620,             // Range Maximum
                     0x01,               // Alignment
                     0x10,               // Length
                     )
             })
         }

         Device (PHPR)
         {
             Name (_HID, "PNP0A06" /* Generic Container Device */)  // _HID: Hardware ID
             Name (_UID, "PCI Hotplug resources")  // _UID: Unique ID
             Name (_STA, 0x0B)  // _STA: Status
             Name (_CRS, ResourceTemplate ()  // _CRS: Current Resource Settings
             {
                 IO (Decode16,
-                    0x0CC4,             // Range Minimum
-                    0x0CC4,             // Range Maximum
+                    0x0CC0,             // Range Minimum
+                    0x0CC0,             // Range Maximum
                     0x01,               // Alignment
                     0x18,               // Length
                     )
             })
         }
     }

     Scope (\)
     {
         Name (_S3, Package (0x04)  // _S3_: S3 System State
         {
             One,
             One,
             Zero,
             Zero
         })

Signed-off-by: Ani Sinha <ani@anisinha.ca>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20210916132838.3469580-4-ani@anisinha.ca>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 500eb21cff)
*drop dependency on 75539b886a ("tests: acpi: tpm1.2: Add expected TPM 1.2 ACPI blobs")
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 13:27:44 -06:00
Ani Sinha
9e80a430ed hw/i386/acpi: fix conflicting IO address range for acpi pci hotplug in q35
Change caf108bc58 ("hw/i386/acpi-build: Add ACPI PCI hot-plug methods to Q35")
arbitrarily selects an IO address range for ACPI based PCI hotplug for q35. It
starts at address 0x0cc4 and ends at 0x0cdb. At the time the patch was
written (but before its final version was pushed upstream), this
address range was free and did not conflict with any other IO address ranges.
However, with the following change, this address range was no
longer conflict free, because in that change the IO address range length
(the value of ACPI_PCIHP_SIZE) was incremented by four bytes:

b32bd763a1 ("pci: introduce acpi-index property for PCI device")

This can be seen from the output of QMP command 'info mtree' :

0000000000000600-0000000000000603 (prio 0, i/o): acpi-evt
0000000000000604-0000000000000605 (prio 0, i/o): acpi-cnt
0000000000000608-000000000000060b (prio 0, i/o): acpi-tmr
0000000000000620-000000000000062f (prio 0, i/o): acpi-gpe0
0000000000000630-0000000000000637 (prio 0, i/o): acpi-smi
0000000000000cc4-0000000000000cdb (prio 0, i/o): acpi-pci-hotplug
0000000000000cd8-0000000000000ce3 (prio 0, i/o): acpi-cpu-hotplug

It shows that there is a region of conflict between IO regions of acpi
pci hotplug and acpi cpu hotplug.

Unfortunately, change caf108bc58 did not update the IO address range
appropriately before it was pushed upstream to accommodate the increased
length of the IO address space introduced in change b32bd763a1.

Due to this bug, Windows guests complain 'This device cannot find
enough free resources it can use' in the Device Manager panel for extended
IO buses. This issue also breaks the correct functioning of PCI hotplug, as
the following shows that the IO space for PCI hotplug has been truncated:

(qemu) info mtree -f
FlatView #0
 AS "I/O", root: io
 Root memory region: io
  0000000000000cc4-0000000000000cd7 (prio 0, i/o): acpi-pci-hotplug
  0000000000000cd8-0000000000000cf7 (prio 0, i/o): acpi-cpu-hotplug

Therefore, in this fix, we adjust the IO address range for ACPI PCI
hotplug so that it does not conflict with CPU hotplug and no IO space
is truncated. The starting IO address of the PCI hotplug region
has been decremented by four bytes in order to accommodate the four-byte
increment in the IO address space introduced by change
b32bd763a1 ("pci: introduce acpi-index property for PCI device")

After fixing, the following are the corrected IO ranges:

0000000000000600-0000000000000603 (prio 0, i/o): acpi-evt
0000000000000604-0000000000000605 (prio 0, i/o): acpi-cnt
0000000000000608-000000000000060b (prio 0, i/o): acpi-tmr
0000000000000620-000000000000062f (prio 0, i/o): acpi-gpe0
0000000000000630-0000000000000637 (prio 0, i/o): acpi-smi
0000000000000cc0-0000000000000cd7 (prio 0, i/o): acpi-pci-hotplug
0000000000000cd8-0000000000000ce3 (prio 0, i/o): acpi-cpu-hotplug

This change has been tested using a Windows Server 2019 guest VM. Windows
no longer complains after this change.

Fixes: caf108bc58 ("hw/i386/acpi-build: Add ACPI PCI hot-plug methods to Q35")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/561

Signed-off-by: Ani Sinha <ani@anisinha.ca>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Julia Suvorova <jusual@redhat.com>
Message-Id: <20210916132838.3469580-3-ani@anisinha.ca>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 0e780da76a)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 13:12:38 -06:00
Ani Sinha
c66f5dfc12 bios-tables-test: allow changes in DSDT ACPI tables for q35
We are going to commit a change to fix the IO address range allocated for ACPI
PCI hotplug in q35. This affects DSDT tables. This change allows DSDT table
modification so that the unit tests are not broken.

Signed-off-by: Ani Sinha <ani@anisinha.ca>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20210916132838.3469580-2-ani@anisinha.ca>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 9f29e872d5)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 13:12:31 -06:00
Jean-Philippe Brucker
5cf977a2a1 hw/i386: Rename default_bus_bypass_iommu
Since commit d8fb7d0969 ("vl: switch -M parsing to keyval"), machine
parameter definitions cannot use underscores, because keyval_dashify()
transforms them to dashes and the parser doesn't find the parameter.

This affects option default_bus_bypass_iommu which was introduced in the
same release:

$ qemu-system-x86_64 -M q35,default_bus_bypass_iommu=on
qemu-system-x86_64: Property 'pc-q35-6.1-machine.default-bus-bypass-iommu' not found

Rename the parameter to "default-bus-bypass-iommu". Passing
"default_bus_bypass_iommu" is still valid since the underscores are
transformed automatically.

Fixes: c9e96b04fc ("hw/i386: Add a default_bus_bypass_iommu pc machine option")
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-Id: <20211025104737.1560274-1-jean-philippe@linaro.org>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 739b38630c)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 13:07:48 -06:00
Jean-Philippe Brucker
36cfd11a86 hw/arm/virt: Rename default_bus_bypass_iommu
Since commit d8fb7d0969 ("vl: switch -M parsing to keyval"), machine
parameter definitions cannot use underscores, because keyval_dashify()
transforms them to dashes and the parser doesn't find the parameter.

This affects option default_bus_bypass_iommu which was introduced in the
same release:

$ qemu-system-aarch64 -M virt,default_bus_bypass_iommu=on
qemu-system-aarch64: Property 'virt-6.1-machine.default-bus-bypass-iommu' not found

Rename the parameter to "default-bus-bypass-iommu". Passing
"default_bus_bypass_iommu" is still valid since the underscores are
transformed automatically.

Fixes: 6d7a85483a ("hw/arm/virt: Add default_bus_bypass_iommu machine option")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Tested-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211026093733.2144161-1-jean-philippe@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 9dad363a22)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 13:07:27 -06:00
Stefano Garzarella
246ccfbf44 vhost-vsock: fix migration issue when seqpacket is supported
Commit 1e08fd0a46 ("vhost-vsock: SOCK_SEQPACKET feature bit support")
enabled the SEQPACKET feature bit.
This commit was released with QEMU 6.1, so if we try to migrate a VM where
the host kernel supports SEQPACKET but the machine type version is less than
6.1, we get the following errors:

    Features 0x130000002 unsupported. Allowed features: 0x179000000
    Failed to load virtio-vhost_vsock:virtio
    error while loading state for instance 0x0 of device '0000:00:05.0/virtio-vhost_vsock'
    load of migration failed: Operation not permitted

Let's disable the feature bit for machine types < 6.1.
We add a new OnOffAuto property for this, called `seqpacket`.
When it is `auto` (the default), QEMU behaves as before, trying to enable the
feature; when it is `on`, QEMU will fail if the backend (vhost-vsock
kernel module) doesn't support it.

Fixes: 1e08fd0a46 ("vhost-vsock: SOCK_SEQPACKET feature bit support")
Cc: qemu-stable@nongnu.org
Reported-by: Jiang Wang <jiang.wang@bytedance.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Message-Id: <20210921161642.206461-2-sgarzare@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit d6a9378f47)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 13:04:13 -06:00
Michael Tokarev
3ee93e456d qemu-sockets: fix unix socket path copy (again)
Commit 4cfd970ec1 added an
assert which ensures the path within an address of a unix
socket returned from the kernel is at least one byte long and
does not exceed the sun_path buffer. Both of these constraints
are wrong:

A unix socket can be unnamed; in this case the path is
completely empty (not even a \0).

And some implementations (notably Linux) can add an extra
trailing byte (\0) _after_ the sun_path buffer if we
passed a buffer larger than it (and we do).

So remove the assertion (since it causes real-life breakage),
but at the same time fix the usage of sun_path. Namely,
we should not access sun_path[0] if the kernel did not return
it at all (this is the case for unnamed sockets), and we should
use the returned salen when copying the actual path, as an
upper bound on the number of bytes to copy. This ensures we won't
exceed the information provided by the kernel, regardless of
whether there is a trailing \0 or not. It also helps with
unnamed sockets.

Note the case of abstract sockets: sun_path is actually
a blob there and can contain \0 characters; it should not be
passed to g_strndup and the like, but accessed with
memcpy-like functions.
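
The safe copy uses the kernel-returned salen, not strlen() or
sizeof(sun_path), as the bound; a simplified, hedged sketch of that logic
(the real code also has to treat abstract sockets specially):

    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    /* Copy the path of a unix socket address returned by getsockname()/
     * getpeername().  'salen' is what the kernel reported; nothing beyond
     * it (including a possible trailing '\0') can be trusted. */
    static char *copy_sun_path(const struct sockaddr_un *su, socklen_t salen)
    {
        size_t pathlen;

        if (salen <= offsetof(struct sockaddr_un, sun_path)) {
            return strdup("");               /* unnamed socket: empty path */
        }
        pathlen = salen - offsetof(struct sockaddr_un, sun_path);
        if (pathlen > sizeof(su->sun_path)) {
            pathlen = sizeof(su->sun_path);  /* ignore the extra trailing byte */
        }
        char *path = malloc(pathlen + 1);
        if (path) {
            memcpy(path, su->sun_path, pathlen);
            path[pathlen] = '\0';    /* NB: abstract paths may contain '\0' */
        }
        return path;
    }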

Fixes: 4cfd970ec1
Fixes: http://bugs.debian.org/993145
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
CC: qemu-stable@nongnu.org
(cherry picked from commit 118d527f2e)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 12:55:36 -06:00
Paolo Bonzini
ec08035102 plugins: do not limit exported symbols if modules are active
On Mac, --enable-modules and --enable-plugins are currently incompatible, because the
Apple -Wl,-exported_symbols_list command line option prevents the export of any
symbols needed by the modules.  On x86 -Wl,--dynamic-list does not have this effect,
but only because the -Wl,--export-dynamic option provided by gmodule-2.0.pc overrides
it.  On Apple there is no -Wl,--export-dynamic, because it is the default, and thus
no override.

Either way, when modules are active there is no reason to include the plugin_ldflags.
While at it, avoid the useless -Wl,--export-dynamic when --enable-plugins is
specified but --enable-modules is not; this way, the GNU and Apple configurations
are more similar.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/516
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[AJB: fix noexport to no-export]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210811100550.54714-1-pbonzini@redhat.com>
Cc: qemu-stable@nongnu.org
(cherry picked from commit b906acace2)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 12:49:09 -06:00
Mahmoud Mandour
f97853c8cb plugins/execlog: removed unintended "s" at the end of log lines.
Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210803151428.125323-1-ma.mandourr@gmail.com>
Message-Id: <20210806141015.2487502-2-alex.bennee@linaro.org>
Cc: qemu-stable@nongnu.org
(cherry picked from commit b40310616d)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 12:48:25 -06:00
Christian Schoenebeck
abeee2a470 9pfs: fix crash in v9fs_walk()
v9fs_walk() utilizes the v9fs_co_run_in_worker({...}) macro to run the
supplied fs driver code block on a background worker thread.

When either the 'Twalk' client request was interrupted or the fid the client
requested for that 'Twalk' request caused a stat error, the
fs driver code block was left via the 'break' keyword, with the intention of
returning from the worker thread back to the main thread as well:

    v9fs_co_run_in_worker({
        if (v9fs_request_cancelled(pdu)) {
            err = -EINTR;
            break;
        }
        err = s->ops->lstat(&s->ctx, &dpath, &fidst);
        if (err < 0) {
            err = -errno;
            break;
        }
        ...
    });

However that 'break;' statement also skipped the v9fs_co_run_in_worker()
macro's final and mandatory

    /* re-enter back to qemu thread */
    qemu_coroutine_yield();

call and thus caused the rest of v9fs_walk() to continue being
executed on the worker thread instead of the main thread, eventually
leading to a crash in the virtio transport driver.

To fix this issue, and to prevent other users of v9fs_co_run_in_worker()
from making the same mistake in the future, automatically wrap the supplied
code block in its own

    do { } while (0);

loop inside the 'v9fs_co_run_in_worker' macro definition.
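
Wrapping the user-supplied block in its own loop is a standard macro-hygiene
trick: 'break' then terminates only the wrapper loop, so the epilogue after
the block can never be skipped. A standalone sketch of the idea (not the
actual v9fs macro):

    #include <stdio.h>

    /* 'break' inside code_block exits only the inner do/while, so the
     * mandatory epilogue after it always runs. */
    #define RUN_WITH_EPILOGUE(code_block) do {          \
            do {                                        \
                code_block                              \
            } while (0);                                \
            puts("epilogue always runs");               \
        } while (0)

    int main(void)
    {
        RUN_WITH_EPILOGUE({
            puts("body");
            break;                   /* leaves only the inner loop */
        });
        return 0;
    }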

Full discussion and backtrace:
https://lists.gnu.org/archive/html/qemu-devel/2021-08/msg05209.html
https://lists.gnu.org/archive/html/qemu-devel/2021-09/msg00174.html

Fixes: 8d6cb10073
Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
Cc: qemu-stable@nongnu.org
Reviewed-by: Greg Kurz <groug@kaod.org>
Message-Id: <E1mLTBg-0002Bh-2D@lizzy.crudebyte.com>
(cherry picked from commit f83df00900)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 12:46:48 -06:00
Yang Zhong
ff6d391e10 i386/cpu: Remove AVX_VNNI feature from Cooperlake cpu model
The AVX_VNNI feature is not available on the Cooperlake platform; remove it
from the CPU model.

Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20210820054611.84303-1-yang.zhong@intel.com>
Fixes: c1826ea6a0 ("i386/cpu: Expose AVX_VNNI instruction to guest")
Cc: qemu-stable@nongnu.org
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
(cherry picked from commit f429dbf8fc)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 12:43:09 -06:00
Helge Deller
b19de1137b hw/display/artist: Fix bug in coordinate extraction in artist_vram_read() and artist_vram_write()
The CDE desktop on HP-UX 10 shows wrongly rendered pixels when the local screen
menu is closed. This bug was introduced by commit c7050f3f16
("hw/display/artist: Refactor x/y coordination extraction") which converted the
coordinate extraction in artist_vram_read() and artist_vram_write() to use the
ADDR_TO_X and ADDR_TO_Y macros, but forgot to right-shift the address by 2 as
it was done before.
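
The essence of the fix is restoring the >> 2 that turns a byte address into a
32-bit-word index before the X/Y extraction; a sketch with stand-in macros
(the bit layout is illustrative, not the exact artist register format):

    #include <stdint.h>

    #define ADDR_TO_X(a)  ((a) & 0x7ff)          /* illustrative layout */
    #define ADDR_TO_Y(a)  (((a) >> 11) & 0x7ff)

    static void vram_addr_to_xy(uint64_t byte_addr, unsigned *x, unsigned *y)
    {
        uint64_t word_addr = byte_addr >> 2;     /* the shift the refactor lost */
        *x = ADDR_TO_X(word_addr);
        *y = ADDR_TO_Y(word_addr);
    }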

Signed-off-by: Helge Deller <deller@gmx.de>
Fixes: c7050f3f16 ("hw/display/artist: Refactor x/y coordination extraction")
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <YK1aPb8keur9W7h2@ls3530>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 01f750f5fe)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 08:57:12 -06:00
David Hildenbrand
3c6e5df1f6 libvhost-user: fix VHOST_USER_REM_MEM_REG skipping mmap_addr
We end up not copying the mmap_addr of all existing regions, resulting
in a SEGFAULT once we actually try to map/access anything within our
memory regions.
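
A minimal sketch of the pattern (types and names invented for
illustration, not libvhost-user's actual ones): when one region is
removed, every field of the regions shifted down to fill the gap --
including mmap_addr -- has to be carried over:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct Region {
        uint64_t gpa, size, mmap_offset;
        void    *mmap_addr;   /* the field the buggy copy left out */
    } Region;

    static void remove_region(Region *regions, unsigned *nregions, unsigned idx)
    {
        for (unsigned i = idx; i + 1 < *nregions; i++) {
            regions[i] = regions[i + 1]; /* struct copy keeps mmap_addr */
        }
        (*nregions)--;
        memset(&regions[*nregions], 0, sizeof(regions[0]));
    }

    int main(void)
    {
        Region r[2] = { { .gpa = 0 }, { .gpa = 0x1000, .mmap_addr = r } };
        unsigned n = 2;
        remove_region(r, &n, 0);
        printf("n=%u mmap_addr=%p\n", n, r[0].mmap_addr);
        return 0;
    }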

Fixes: 875b9fd97b ("Support individual region unmap in libvhost-user")
Cc: qemu-stable@nongnu.org
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Raphael Norwitz <raphael.norwitz@nutanix.com>
Cc: "Marc-André Lureau" <marcandre.lureau@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Coiby Xu <coiby.xu@gmail.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20211011201047.62587-1-david@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 6889eb2d43)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 08:57:07 -06:00
Xueming Li
695c25e167 vhost-user: fix duplicated notifier MR init
When the device resumes after suspend, the VQ notifier MR is still valid.
Duplicated registrations blow up the memory block list and slow down device
resume.
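
As a rough sketch of the idea (field names are made up, this is not the
actual vhost-user code): remember whether the notifier MR was already
set up and only register it the first time, so a resume merely refreshes
the mapping:

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        void *addr;       /* mapped notifier page */
        bool  mr_inited;  /* MemoryRegion already registered? */
    } Notifier;

    static void set_host_notifier(Notifier *n, void *addr)
    {
        if (!n->mr_inited) {
            /* first call: create and register the MR (details omitted) */
            n->mr_inited = true;
        }
        /* later calls (e.g. after resume) just refresh the mapping */
        n->addr = addr;
    }

    int main(void)
    {
        static char page[4096];
        Notifier n = { 0 };
        set_host_notifier(&n, page);
        set_host_notifier(&n, page);  /* resume: no duplicate registration */
        printf("inited=%d addr=%p\n", n.mr_inited, n.addr);
        return 0;
    }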

Fixes: 44866521bd ("vhost-user: support registering external host notifiers")
Cc: tiwei.bie@intel.com
Cc: qemu-stable@nongnu.org
Cc: Yuwei Zhang <zhangyuwei.9149@bytedance.com>

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Message-Id: <20211008080215.590292-1-xuemingl@nvidia.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit a1ed9ef1de)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 08:57:02 -06:00
Gerd Hoffmann
23ba9f170f uas: add stream number sanity checks.
The device uses the guest-supplied stream number unchecked, which can
lead to guest-triggered out-of-bounds access to the UASDevice->data3 and
UASDevice->status3 fields. Add the missing checks.
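
A minimal sketch of the kind of check that is needed (the array size and
names below are assumptions for illustration, not the device's actual
definitions):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MAX_STREAMS 256

    typedef struct {
        void *data3[MAX_STREAMS + 1];    /* indexed by stream id */
        void *status3[MAX_STREAMS + 1];
    } Device;

    /* validate the guest-supplied stream id before indexing the arrays */
    static bool stream_valid(uint32_t stream)
    {
        return stream <= MAX_STREAMS;
    }

    int main(void)
    {
        printf("%d %d\n", stream_valid(16), stream_valid(4096));
        return 0;
    }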

Fixes: CVE-2021-3713
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reported-by: Chen Zhe <chenzhe@huawei.com>
Reported-by: Tan Jingguo <tanjingguo@huawei.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20210818120505.1258262-2-kraxel@redhat.com>
(cherry picked from commit 13b250b12a)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 08:56:53 -06:00
David Hildenbrand
f0dee5a40d virtio-mem-pci: Fix memory leak when creating MEMORY_DEVICE_SIZE_CHANGE event
Apparently, we don't have to duplicate the string.

Fixes: 722a3c783e ("virtio-pci: Send qapi events when the virtio-mem size changes")
Cc: qemu-stable@nongnu.org
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20210929162445.64060-2-david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 75b98cb9f6)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 08:56:49 -06:00
Markus Armbruster
7637373b23 hmp: Unbreak "change vnc"
HMP command "change vnc" can take the password as argument, or prompt
for it:

    (qemu) change vnc password 123
    (qemu) change vnc password
    Password: ***
    (qemu)

This regressed in commit cfb5387a1d "hmp: remove "change vnc TARGET"
command", v6.0.0.

    (qemu) change vnc password 123
    Password: ***
    (qemu) change vnc password
    (qemu)

The latter passes NULL to qmp_change_vnc_password(), which is a no-no.
Looks like it puts the display into "password required, but none set"
state.

The logic error is easy to miss in review, but testing should've
caught it.

Fix the obvious way.
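
A rough sketch of the intended control flow (the helpers are stand-ins,
not the actual HMP/QMP functions):

    #include <stdio.h>

    /* stand-ins for the QMP call and the interactive monitor prompt */
    static void qmp_set_password(const char *pw) { printf("set: %s\n", pw); }
    static void prompt_for_password(void)        { qmp_set_password("***"); }

    static void change_vnc_password(const char *arg)
    {
        if (arg) {
            qmp_set_password(arg);  /* password given on the command line */
        } else {
            prompt_for_password();  /* otherwise prompt; never pass NULL on */
        }
    }

    int main(void)
    {
        change_vnc_password("123");
        change_vnc_password(NULL);
        return 0;
    }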

Fixes: cfb5387a1d
Cc: qemu-stable@nongnu.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Message-Id: <20210909081219.308065-2-armbru@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
(cherry picked from commit 6193344f93)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 08:56:41 -06:00
Nir Soffer
4c34ef3d34 qemu-nbd: Change default cache mode to writeback
Both qemu and qemu-img use writeback cache mode by default, which is
already documented in qemu(1). qemu-nbd uses writethrough cache mode by
default, and the default cache mode is not documented.

According to the qemu-nbd(8):

   --cache=CACHE
          The  cache  mode  to be used with the file.  See the
          documentation of the emulator's -drive cache=... option for
          allowed values.

qemu(1) says:

    The default mode is cache=writeback.

So users have no reason to assume that qemu-nbd is using writethrough
cache mode. The only hint is the painfully slow writing when using the
defaults.

Looking in git history, it seems that qemu used writethrough in the past
to support broken guests that did not flush data properly, or could not
flush due to limitations in qemu. But qemu-nbd clients can use
NBD_CMD_FLUSH to flush data, so using writethrough does not help anyone.

Change the default cache mode to writeback, and document the default and
available values properly in the online help and manual.

With this change, converting an image via qemu-nbd is 3.5 times faster.

    $ qemu-img create dst.img 50g
    $ qemu-nbd -t -f raw -k /tmp/nbd.sock dst.img

Before this change:

    $ hyperfine -r3 "./qemu-img convert -p -f raw -O raw -T none -W fedora34.img nbd+unix:///?socket=/tmp/nbd.sock"
    Benchmark #1: ./qemu-img convert -p -f raw -O raw -T none -W fedora34.img nbd+unix:///?socket=/tmp/nbd.sock
      Time (mean ± σ):     83.639 s ±  5.970 s    [User: 2.733 s, System: 6.112 s]
      Range (min … max):   76.749 s … 87.245 s    3 runs

After this change:

    $ hyperfine -r3 "./qemu-img convert -p -f raw -O raw -T none -W fedora34.img nbd+unix:///?socket=/tmp/nbd.sock"
    Benchmark #1: ./qemu-img convert -p -f raw -O raw -T none -W fedora34.img nbd+unix:///?socket=/tmp/nbd.sock
      Time (mean ± σ):     23.522 s ±  0.433 s    [User: 2.083 s, System: 5.475 s]
      Range (min … max):   23.234 s … 24.019 s    3 runs

Users can avoid the issue by using --cache=writeback[1] but the defaults
should give good performance for the common use case.

[1] https://bugzilla.redhat.com/1990656

Signed-off-by: Nir Soffer <nsoffer@redhat.com>
Message-Id: <20210813205519.50518-1-nsoffer@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
(cherry picked from commit 0961525705)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 08:56:36 -06:00
Jason Wang
9e41f16fca virtio-net: fix use after unmap/free for sg
When mergeable buffers are enabled, we try to set num_buffers after the
virtqueue elem has been unmapped. This can lead to several issues,
e.g. a use-after-free when the descriptor has an address which belongs
to a non-direct-access region. In that case we use a bounce buffer
that is allocated during address_space_map() and freed during
address_space_unmap().

Fix this by storing the elems temporarily in an array and delaying the
unmap until after we set num_buffers.

This addresses CVE-2021-3748.
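
A minimal sketch of the pattern with stand-in types (not the actual
virtio-net code): keep every element mapped, patch num_buffers into the
first header, and only then complete/unmap the elements:

    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint16_t num_buffers;  /* stand-in for the merged-rx header field */
        int      mapped;
    } Elem;

    /* stand-in for virtqueue_fill()/address_space_unmap(): after this the
     * element's guest memory must no longer be touched */
    static void push_and_unmap(Elem *e)
    {
        e->mapped = 0;
    }

    static void deliver(Elem **elems, unsigned n)
    {
        /* write num_buffers while every element is still mapped ... */
        elems[0]->num_buffers = (uint16_t)n;

        /* ... and only unmap afterwards, avoiding the use-after-unmap */
        for (unsigned i = 0; i < n; i++) {
            push_and_unmap(elems[i]);
        }
    }

    int main(void)
    {
        Elem a = { 0, 1 }, b = { 0, 1 };
        Elem *elems[] = { &a, &b };
        deliver(elems, 2);
        printf("num_buffers=%u\n", (unsigned)a.num_buffers);
        return 0;
    }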

Reported-by: Alexander Bulekov <alxndr@bu.edu>
Fixes: fbe78f4f55 ("virtio-net support")
Cc: qemu-stable@nongnu.org
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit bedd7e93d0)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 08:56:31 -06:00
Peter Maydell
3054f772de target/arm: Don't skip M-profile reset entirely in user mode
Currently all of the M-profile specific code in arm_cpu_reset() is
inside a !defined(CONFIG_USER_ONLY) ifdef block.  This is
unintentional: it happened because originally the only
M-profile-specific handling was the setup of the initial SP and PC
from the vector table, which is system-emulation only.  But then we
added a lot of other M-profile setup to the same "if (ARM_FEATURE_M)"
code block without noticing that it was all inside a not-user-mode
ifdef.  This has generally been harmless, but with the addition of
v8.1M low-overhead-loop support we ran into a problem: the reset of
FPSCR.LTPSIZE to 4 was only being done for system emulation mode, so
if a user-mode guest tried to execute the LE instruction it would
incorrectly take a UsageFault.

Adjust the ifdefs so only the really system-emulation specific parts
are covered.  Because this means we now run some reset code that sets
up initial values in the FPCCR and similar FPU related registers,
explicitly set up the registers controlling FPU context handling in
user-emulation mode so that the FPU works by design and not by
chance.
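
The shape of the change, as a small self-contained sketch (the structure
and helpers are invented for illustration, not the actual arm_cpu_reset()
code): the M-profile register setup runs unconditionally, and only the
vector-table SP/PC load stays behind the CONFIG_USER_ONLY guard:

    #include <stdbool.h>
    #include <stdio.h>

    /* define this to mimic a user-mode-only build */
    /* #define CONFIG_USER_ONLY */

    typedef struct {
        bool m_profile;
        unsigned fpscr_ltpsize;
        unsigned pc;
    } CPU;

    static void cpu_reset(CPU *cpu)
    {
        if (cpu->m_profile) {
            cpu->fpscr_ltpsize = 4;  /* now done for user emulation too */
    #ifndef CONFIG_USER_ONLY
            cpu->pc = 0;  /* stands in for the vector-table SP/PC load */
    #endif
        }
    }

    int main(void)
    {
        CPU cpu = { .m_profile = true };
        cpu_reset(&cpu);
        printf("LTPSIZE=%u\n", cpu.fpscr_ltpsize);
        return 0;
    }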

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/613
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210914120725.24992-2-peter.maydell@linaro.org
(cherry picked from commit b62ceeaf80)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 08:56:25 -06:00
David Hildenbrand
aa77e375a5 virtio-balloon: don't start free page hinting if postcopy is possible
Postcopy never worked properly with 'free-page-hint=on', as there are
at least two issues:

1) With postcopy, the guest will never receive a VIRTIO_BALLOON_CMD_ID_DONE
   and consequently won't release free pages back to the OS once
   migration finishes.

   The issue is that for postcopy, we won't do a final bitmap sync while
   the guest is stopped on the source and
   virtio_balloon_free_page_hint_notify() will only call
   virtio_balloon_free_page_done() on the source during
   PRECOPY_NOTIFY_CLEANUP, after the VM state was already migrated to
   the destination.

2) Once the VM touches a page on the destination that has been excluded
   from migration on the source via qemu_guest_free_page_hint() while
   postcopy is active, that thread will stall until postcopy finishes
   and all threads are woken up. (with older Linux kernels that won't
   retry faults when woken up via userfaultfd, we might actually get a
   SEGFAULT)

   The issue is that the source will refuse to migrate any pages that
   are not marked as dirty in the dirty bmap -- for example, because the
   page might just have been sent. Consequently, the faulting thread will
   stall, waiting for the page to be migrated -- which could take quite
   a while and result in guest OS issues.

While we could fix 1) comparatively easily, 2) is harder to get right and
might require more involved RAM migration changes on source and destination
[1].

As it never worked properly, fix it the easy way: don't start free page
hinting in the precopy notifier if the postcopy migration capability was
enabled. Capabilities cannot be enabled once migration is already
running.
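
A minimal sketch of the resulting check (the predicate is a stand-in for
querying the postcopy-ram capability, not the actual migration API):

    #include <stdbool.h>
    #include <stdio.h>

    static bool postcopy_ram_enabled;  /* stand-in for the capability query */

    static bool start_free_page_hinting(void)
    {
        if (postcopy_ram_enabled) {
            /* capabilities cannot change once migration runs, so it is
             * safe to decide here and simply not start hinting at all */
            return false;
        }
        /* ... kick off free page hinting as before ... */
        return true;
    }

    int main(void)
    {
        postcopy_ram_enabled = true;
        printf("hinting started: %d\n", start_free_page_hinting());
        return 0;
    }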

Note 1: in the future we might either adjust migration code on the source
        to track pages that have actually been sent, or adjust
        migration code on source and destination to eventually send
        pages multiple times from the source and deal with pages
        that are sent multiple times on the destination.

Note 2: virtio-mem has similar issues, however, access to "unplugged"
        memory by the guest is very rare and we would have to be very
        lucky for it to happen during migration. The spec states
        "The driver SHOULD NOT read from unplugged memory blocks ..."
        and "The driver MUST NOT write to unplugged memory blocks".
        virtio-mem will move away from virtio_balloon_free_page_done()
        soon and handle this case explicitly on the destination.

[1] https://lkml.kernel.org/r/e79fd18c-aa62-c1d8-c7f3-ba3fc2c25fc8@redhat.com

Fixes: c13c4153f7 ("virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT")
Cc: qemu-stable@nongnu.org
Cc: Wei Wang <wei.w.wang@intel.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Philippe Mathieu-Daudé <philmd@redhat.com>
Cc: Alexander Duyck <alexander.duyck@gmail.com>
Cc: Juan Quintela <quintela@redhat.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210708095339.20274-2-david@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
(cherry picked from commit fd51e54fa1)
Signed-off-by: Michael Roth <michael.roth@amd.com>
2021-12-14 08:56:18 -06:00
8457 changed files with 550169 additions and 1007709 deletions

View File

@@ -1,14 +0,0 @@
#
# Common b4 settings that can be used to send patches to QEMU upstream.
# https://b4.docs.kernel.org/
#
[b4]
send-series-to = qemu-devel@nongnu.org
send-auto-to-cmd = echo
send-auto-cc-cmd = scripts/get_maintainer.pl --noroles --norolestats --nogit --nogit-fallback
am-perpatch-check-cmd = scripts/checkpatch.pl -q --terse --no-summary --mailback -
prep-perpatch-check-cmd = scripts/checkpatch.pl -q --terse --no-summary --mailback -
searchmask = https://lore.kernel.org/qemu-devel/?x=m&t=1&q=%s
linkmask = https://lore.kernel.org/qemu-devel/%s
linktrailermask = Message-ID: <%s>

110
.cirrus.yml Normal file
View File

@@ -0,0 +1,110 @@
env:
CIRRUS_CLONE_DEPTH: 1
windows_msys2_task:
timeout_in: 90m
windows_container:
image: cirrusci/windowsservercore:2019
os_version: 2019
cpu: 8
memory: 8G
env:
CIRRUS_SHELL: powershell
MSYS: winsymlinks:nativestrict
MSYSTEM: MINGW64
MSYS2_URL: https://github.com/msys2/msys2-installer/releases/download/2021-04-19/msys2-base-x86_64-20210419.sfx.exe
MSYS2_FINGERPRINT: 0
MSYS2_PACKAGES: "
diffutils git grep make pkg-config sed
mingw-w64-x86_64-python
mingw-w64-x86_64-python-sphinx
mingw-w64-x86_64-toolchain
mingw-w64-x86_64-SDL2
mingw-w64-x86_64-SDL2_image
mingw-w64-x86_64-gtk3
mingw-w64-x86_64-glib2
mingw-w64-x86_64-ninja
mingw-w64-x86_64-jemalloc
mingw-w64-x86_64-lzo2
mingw-w64-x86_64-zstd
mingw-w64-x86_64-libjpeg-turbo
mingw-w64-x86_64-pixman
mingw-w64-x86_64-libgcrypt
mingw-w64-x86_64-libpng
mingw-w64-x86_64-libssh
mingw-w64-x86_64-libxml2
mingw-w64-x86_64-snappy
mingw-w64-x86_64-libusb
mingw-w64-x86_64-usbredir
mingw-w64-x86_64-libtasn1
mingw-w64-x86_64-nettle
mingw-w64-x86_64-cyrus-sasl
mingw-w64-x86_64-curl
mingw-w64-x86_64-gnutls
mingw-w64-x86_64-libnfs
"
CHERE_INVOKING: 1
msys2_cache:
folder: C:\tools\archive
reupload_on_changes: false
# These env variables are used to generate fingerprint to trigger the cache procedure
# If wanna to force re-populate msys2, increase MSYS2_FINGERPRINT
fingerprint_script:
- |
echo $env:CIRRUS_TASK_NAME
echo $env:MSYS2_URL
echo $env:MSYS2_FINGERPRINT
echo $env:MSYS2_PACKAGES
populate_script:
- |
md -Force C:\tools\archive\pkg
$start_time = Get-Date
bitsadmin /transfer msys_download /dynamic /download /priority FOREGROUND $env:MSYS2_URL C:\tools\archive\base.exe
Write-Output "Download time taken: $((Get-Date).Subtract($start_time))"
cd C:\tools
C:\tools\archive\base.exe -y
del -Force C:\tools\archive\base.exe
Write-Output "Base install time taken: $((Get-Date).Subtract($start_time))"
$start_time = Get-Date
((Get-Content -path C:\tools\msys64\etc\\post-install\\07-pacman-key.post -Raw) -replace '--refresh-keys', '--version') | Set-Content -Path C:\tools\msys64\etc\\post-install\\07-pacman-key.post
C:\tools\msys64\usr\bin\bash.exe -lc "sed -i 's/^CheckSpace/#CheckSpace/g' /etc/pacman.conf"
C:\tools\msys64\usr\bin\bash.exe -lc "export"
C:\tools\msys64\usr\bin\pacman.exe --noconfirm -Sy
echo Y | C:\tools\msys64\usr\bin\pacman.exe --noconfirm -Suu --overwrite=*
taskkill /F /FI "MODULES eq msys-2.0.dll"
tasklist
C:\tools\msys64\usr\bin\bash.exe -lc "mv -f /etc/pacman.conf.pacnew /etc/pacman.conf || true"
C:\tools\msys64\usr\bin\bash.exe -lc "pacman --noconfirm -Syuu --overwrite=*"
Write-Output "Core install time taken: $((Get-Date).Subtract($start_time))"
$start_time = Get-Date
C:\tools\msys64\usr\bin\bash.exe -lc "pacman --noconfirm -S --needed $env:MSYS2_PACKAGES"
Write-Output "Package install time taken: $((Get-Date).Subtract($start_time))"
$start_time = Get-Date
del -Force -ErrorAction SilentlyContinue C:\tools\msys64\etc\mtab
del -Force -ErrorAction SilentlyContinue C:\tools\msys64\dev\fd
del -Force -ErrorAction SilentlyContinue C:\tools\msys64\dev\stderr
del -Force -ErrorAction SilentlyContinue C:\tools\msys64\dev\stdin
del -Force -ErrorAction SilentlyContinue C:\tools\msys64\dev\stdout
del -Force -Recurse -ErrorAction SilentlyContinue C:\tools\msys64\var\cache\pacman\pkg
tar cf C:\tools\archive\msys64.tar -C C:\tools\ msys64
Write-Output "Package archive time taken: $((Get-Date).Subtract($start_time))"
del -Force -Recurse -ErrorAction SilentlyContinue c:\tools\msys64
install_script:
- |
$start_time = Get-Date
cd C:\tools
ls C:\tools\archive\msys64.tar
tar xf C:\tools\archive\msys64.tar
Write-Output "Extract msys2 time taken: $((Get-Date).Subtract($start_time))"
script:
- C:\tools\msys64\usr\bin\bash.exe -lc "mkdir build"
- C:\tools\msys64\usr\bin\bash.exe -lc "cd build && ../configure --python=python3"
- C:\tools\msys64\usr\bin\bash.exe -lc "cd build && make -j8"
- exit $LastExitCode
test_script:
- C:\tools\msys64\usr\bin\bash.exe -lc "cd build && make V=1 check"
- exit $LastExitCode

View File

@@ -47,16 +47,3 @@ emacs_mode = glsl
[*.json]
indent_style = space
emacs_mode = python
# by default follow QEMU's style
[*.pl]
indent_style = space
indent_size = 4
emacs_mode = perl
# but user kernel "style" for imported scripts
[scripts/{kernel-doc,get_maintainer.pl,checkpatch.pl}]
indent_style = tab
indent_size = 8
emacs_mode = perl

View File

@@ -1,21 +0,0 @@
#
# List of code-formatting clean ups the git blame can ignore
#
# git blame --ignore-revs-file .git-blame-ignore-revs
#
# or
#
# git config blame.ignoreRevsFile .git-blame-ignore-revs
#
# gdbstub: clean-up indents
ad9e4585b3c7425759d3eea697afbca71d2c2082
# e1000e: fix code style
0eadd56bf53ab196a16d492d7dd31c62e1c24c32
# target/riscv: coding style fixes
8c7feddddd9218b407792120bcfda0347ed16205
# replace TABs with spaces
48805df9c22a0700fba4b3b548fafaa21726ca68

6
.gitattributes vendored
View File

@@ -1,9 +1,3 @@
*.c.inc diff=c
*.h.inc diff=c
*.m diff=objc
*.py diff=python
*.rs diff=rust
*.rs.inc diff=rust
Cargo.lock diff=toml merge=binary
*.patch -text -whitespace

34
.github/lockdown.yml vendored Normal file
View File

@@ -0,0 +1,34 @@
# Configuration for Repo Lockdown - https://github.com/dessant/repo-lockdown
# Close issues and pull requests
close: true
# Lock issues and pull requests
lock: true
issues:
comment: |
Thank you for your interest in the QEMU project.
This repository is a read-only mirror of the project's repostories hosted
at https://gitlab.com/qemu-project/qemu.git.
The project does not process issues filed on GitHub.
The project issues are tracked on GitLab:
https://gitlab.com/qemu-project/qemu/-/issues
QEMU welcomes bug report contributions. You can file new ones on:
https://gitlab.com/qemu-project/qemu/-/issues/new
pulls:
comment: |
Thank you for your interest in the QEMU project.
This repository is a read-only mirror of the project's repostories hosted
on https://gitlab.com/qemu-project/qemu.git.
The project does not process merge requests filed on GitHub.
QEMU welcomes contributions of code (either fixing bugs or adding new
functionality). However, we get a lot of patches, and so we have some
guidelines about contributing on the project website:
https://www.qemu.org/contribute/

View File

@@ -1,30 +0,0 @@
# Configuration for Repo Lockdown - https://github.com/dessant/repo-lockdown
name: 'Repo Lockdown'
on:
pull_request_target:
types: opened
permissions:
pull-requests: write
jobs:
action:
runs-on: ubuntu-latest
steps:
- uses: dessant/repo-lockdown@v2
with:
pr-comment: |
Thank you for your interest in the QEMU project.
This repository is a read-only mirror of the project's repostories hosted
on https://gitlab.com/qemu-project/qemu.git.
The project does not process merge requests filed on GitHub.
QEMU welcomes contributions of code (either fixing bugs or adding new
functionality). However, we get a lot of patches, and so we have some
guidelines about contributing on the project website:
https://www.qemu.org/contribute/
lock-pr: true
close-pr: true

5
.gitignore vendored
View File

@@ -1,13 +1,9 @@
/GNUmakefile
/build/
/.cache/
/.vscode/
*.pyc
.sdk
.stgit-*
.git-submodule-status
.clang-format
.gdb_history
cscope.*
tags
TAGS
@@ -19,4 +15,3 @@ GTAGS
*.depend_raw
*.swp
*.patch
*.gcov

View File

@@ -1,136 +0,0 @@
variables:
# On stable branches this is changed by later rules. Should also
# be overridden per pipeline if running pipelines concurrently
# for different branches in contributor forks.
QEMU_CI_CONTAINER_TAG: latest
# For purposes of CI rules, upstream is the gitlab.com/qemu-project
# namespace. When testing CI, it might be usefult to override this
# to point to a fork repo
QEMU_CI_UPSTREAM: qemu-project
# The order of rules defined here is critically important.
# They are evaluated in order and first match wins.
#
# Thus we group them into a number of stages, ordered from
# most restrictive to least restrictive
#
# For pipelines running for stable "staging-X.Y" branches
# we must override QEMU_CI_CONTAINER_TAG
#
.base_job_template:
variables:
# Each script line from will be in a collapsible section in the job output
# and show the duration of each line.
FF_SCRIPT_SECTIONS: 1
# The project has a fairly fat GIT repo so we try and avoid bringing in things
# we don't need. The --filter options avoid blobs and tree references we aren't going to use
# and we also avoid fetching tags.
GIT_FETCH_EXTRA_FLAGS: --filter=blob:none --filter=tree:0 --no-tags --prune --quiet
interruptible: true
rules:
#############################################################
# Stage 1: exclude scenarios where we definitely don't
# want jobs to run
#############################################################
# Never run jobs upstream on stable branch, staging branch jobs already ran
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_BRANCH =~ /^stable-/'
when: never
# Never run jobs upstream on tags, staging branch jobs already ran
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_TAG'
when: never
# Scheduled runs on mainline don't get pipelines except for the special Coverity job
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_PIPELINE_SOURCE == "schedule"'
when: never
# Cirrus jobs can't run unless the creds / target repo are set
- if: '$QEMU_JOB_CIRRUS && ($CIRRUS_GITHUB_REPO == null || $CIRRUS_API_TOKEN == null)'
when: never
# Publishing jobs should only run on the default branch in upstream
- if: '$QEMU_JOB_PUBLISH == "1" && $CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH'
when: never
# Non-publishing jobs should only run on staging branches in upstream
- if: '$QEMU_JOB_PUBLISH != "1" && $CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_BRANCH !~ /staging/'
when: never
# Jobs only intended for forks should always be skipped on upstream
- if: '$QEMU_JOB_ONLY_FORKS == "1" && $CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM'
when: never
# Forks don't get pipelines unless QEMU_CI=1 or QEMU_CI=2 is set
- if: '$QEMU_CI != "1" && $QEMU_CI != "2" && $CI_PROJECT_NAMESPACE != $QEMU_CI_UPSTREAM'
when: never
# Avocado jobs don't run in forks unless $QEMU_CI_AVOCADO_TESTING is set
- if: '$QEMU_JOB_AVOCADO && $QEMU_CI_AVOCADO_TESTING != "1" && $CI_PROJECT_NAMESPACE != $QEMU_CI_UPSTREAM'
when: never
#############################################################
# Stage 2: fine tune execution of jobs in specific scenarios
# where the catch all logic is inappropriate
#############################################################
# Optional jobs should not be run unless manually triggered
- if: '$QEMU_JOB_OPTIONAL && $CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_BRANCH =~ /staging-[[:digit:]]+\.[[:digit:]]/'
when: manual
allow_failure: true
variables:
QEMU_CI_CONTAINER_TAG: $CI_COMMIT_REF_SLUG
- if: '$QEMU_JOB_OPTIONAL'
when: manual
allow_failure: true
# Skipped jobs should not be run unless manually triggered
- if: '$QEMU_JOB_SKIPPED && $CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_BRANCH =~ /staging-[[:digit:]]+\.[[:digit:]]/'
when: manual
allow_failure: true
variables:
QEMU_CI_CONTAINER_TAG: $CI_COMMIT_REF_SLUG
- if: '$QEMU_JOB_SKIPPED'
when: manual
allow_failure: true
# Avocado jobs can be manually start in forks if $QEMU_CI_AVOCADO_TESTING is unset
- if: '$QEMU_JOB_AVOCADO && $CI_PROJECT_NAMESPACE != $QEMU_CI_UPSTREAM'
when: manual
allow_failure: true
#############################################################
# Stage 3: catch all logic applying to any job not matching
# an earlier criteria
#############################################################
# Forks pipeline jobs don't start automatically unless
# QEMU_CI=2 is set
- if: '$QEMU_CI != "2" && $CI_PROJECT_NAMESPACE != $QEMU_CI_UPSTREAM'
when: manual
# Upstream pipeline jobs start automatically unless told not to
# by setting QEMU_CI=1
- if: '$QEMU_CI == "1" && $CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_BRANCH =~ /staging-[[:digit:]]+\.[[:digit:]]/'
when: manual
variables:
QEMU_CI_CONTAINER_TAG: $CI_COMMIT_REF_SLUG
- if: '$QEMU_CI == "1" && $CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM'
when: manual
# Jobs can run if any jobs they depend on were successful
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_BRANCH =~ /staging-[[:digit:]]+\.[[:digit:]]/'
when: on_success
variables:
QEMU_CI_CONTAINER_TAG: $CI_COMMIT_REF_SLUG
- when: on_success

View File

@@ -1,111 +1,56 @@
.native_build_job_template:
extends: .base_job_template
stage: build
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
cache:
paths:
- ccache
key: "$CI_JOB_NAME"
when: always
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
before_script:
- source scripts/ci/gitlab-ci-section
- section_start setup "Pre-script setup"
- JOBS=$(expr $(nproc) + 1)
- cat /packages.txt
- section_end setup
script:
- export CCACHE_BASEDIR="$(pwd)"
- export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
- export CCACHE_MAXSIZE="500M"
- export PATH="$CCACHE_WRAPPERSDIR:$PATH"
- du -sh .git
- mkdir build
- cd build
- ccache --zero-stats
- section_start configure "Running configure"
- ../configure --enable-werror --disable-docs --enable-fdt=system
${TARGETS:+--target-list="$TARGETS"}
$CONFIGURE_ARGS ||
{ cat config.log meson-logs/meson-log.txt && exit 1; }
- if test -n "$LD_JOBS";
then
pyvenv/bin/meson configure . -Dbackend_max_links="$LD_JOBS" ;
scripts/git-submodule.sh update meson ;
fi
- mkdir build
- cd build
- if test -n "$TARGETS";
then
../configure --enable-werror --disable-docs ${LD_JOBS:+--meson=git} $CONFIGURE_ARGS --target-list="$TARGETS" ;
else
../configure --enable-werror --disable-docs ${LD_JOBS:+--meson=git} $CONFIGURE_ARGS ;
fi || { cat config.log meson-logs/meson-log.txt && exit 1; }
- if test -n "$LD_JOBS";
then
../meson/meson.py configure . -Dbackend_max_links="$LD_JOBS" ;
fi || exit 1;
- section_end configure
- section_start build "Building QEMU"
- $MAKE -j"$JOBS"
- section_end build
- section_start test "Running tests"
- make -j"$JOBS"
- if test -n "$MAKE_CHECK_ARGS";
then
$MAKE -j"$JOBS" $MAKE_CHECK_ARGS ;
make -j"$JOBS" $MAKE_CHECK_ARGS ;
fi
- section_end test
- ccache --show-stats
# We jump some hoops in common_test_job_template to avoid
# rebuilding all the object files we skip in the artifacts
.native_build_artifact_template:
artifacts:
when: on_success
expire_in: 2 days
paths:
- build
- .git-submodule-status
exclude:
- build/**/*.p
- build/**/*.a.p
- build/**/*.c.o
- build/**/*.c.o.d
.common_test_job_template:
extends: .base_job_template
.native_test_job_template:
stage: test
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
script:
- source scripts/ci/gitlab-ci-section
- section_start buildenv "Setting up to run tests"
- scripts/git-submodule.sh update roms/SLOF
- build/pyvenv/bin/meson subprojects download $(cd build/subprojects && echo *)
- scripts/git-submodule.sh update
$(sed -n '/GIT_SUBMODULES=/ s/.*=// p' build/config-host.mak)
- cd build
- find . -type f -exec touch {} +
# Avoid recompiling by hiding ninja with NINJA=":"
# We also have to pre-cache the functional tests manually in this case
- if [ "x${QEMU_TEST_CACHE_DIR}" != "x" ]; then
$MAKE precache-functional ;
fi
- section_end buildenv
- section_start test "Running tests"
- $MAKE NINJA=":" $MAKE_CHECK_ARGS
- section_end test
- make NINJA=":" $MAKE_CHECK_ARGS
.native_test_job_template:
extends: .common_test_job_template
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
when: always
expire_in: 7 days
paths:
- build/meson-logs/testlog.txt
reports:
junit: build/meson-logs/testlog.junit.xml
.functional_test_job_template:
extends: .common_test_job_template
.acceptance_test_job_template:
extends: .native_test_job_template
cache:
key: "${CI_JOB_NAME}-cache"
paths:
- ${CI_PROJECT_DIR}/avocado-cache
- ${CI_PROJECT_DIR}/functional-cache
policy: pull-push
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
when: always
when: on_failure
expire_in: 7 days
paths:
- build/tests/results/latest/results.xml
- build/tests/results/latest/test-results
- build/tests/functional/*/*/*.log
reports:
junit: build/tests/results/latest/results.xml
before_script:
@@ -116,13 +61,21 @@
- echo -e '[job.output.testlogs]\nstatuses = ["FAIL", "INTERRUPT"]'
>> ~/.config/avocado/avocado.conf
- if [ -d ${CI_PROJECT_DIR}/avocado-cache ]; then
du -chs ${CI_PROJECT_DIR}/*-cache ;
du -chs ${CI_PROJECT_DIR}/avocado-cache ;
fi
- export AVOCADO_ALLOW_UNTRUSTED_CODE=1
- export QEMU_TEST_ALLOW_UNTRUSTED_CODE=1
- export QEMU_TEST_CACHE_DIR=${CI_PROJECT_DIR}/functional-cache
after_script:
- cd build
- du -chs ${CI_PROJECT_DIR}/*-cache
variables:
QEMU_JOB_AVOCADO: 1
- du -chs ${CI_PROJECT_DIR}/avocado-cache
rules:
# Only run these jobs if running on the mainstream namespace,
# or if the user set the QEMU_CI_AVOCADO_TESTING variable (either
# in its namespace setting or via git-push option, see documentation
# in /.gitlab-ci.yml of this repository).
- if: '$CI_PROJECT_NAMESPACE == "qemu-project"'
when: on_success
- if: '$QEMU_CI_AVOCADO_TESTING'
when: on_success
# Otherwise, set to manual (the jobs are created but not run).
- when: manual
allow_failure: true

File diff suppressed because it is too large

View File

@@ -19,9 +19,10 @@ cwd = os.getcwd()
reponame = os.path.basename(cwd)
repourl = "https://gitlab.com/%s/%s.git" % (namespace, reponame)
print(f"adding upstream git repo @ {repourl}")
subprocess.check_call(["git", "remote", "add", "check-dco", repourl])
subprocess.check_call(["git", "fetch", "--refetch", "check-dco", "master"])
subprocess.check_call(["git", "fetch", "check-dco", "master"],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL)
ancestor = subprocess.check_output(["git", "merge-base",
"check-dco/master", "HEAD"],
@@ -78,10 +79,7 @@ of Origin 1.1 (DCO):
To indicate acceptance of the DCO every commit must have a tag
Signed-off-by: YOUR NAME <EMAIL>
where "YOUR NAME" is your commonly known identity in the context
of the community.
Signed-off-by: REAL NAME <EMAIL>
This can be achieved by passing the "-s" flag to the "git commit" command.

View File

@@ -19,12 +19,13 @@ cwd = os.getcwd()
reponame = os.path.basename(cwd)
repourl = "https://gitlab.com/%s/%s.git" % (namespace, reponame)
print(f"adding upstream git repo @ {repourl}")
# GitLab CI environment does not give us any direct info about the
# base for the user's branch. We thus need to figure out a common
# ancestor between the user's branch and current git master.
subprocess.check_call(["git", "remote", "add", "check-patch", repourl])
subprocess.check_call(["git", "fetch", "--refetch", "check-patch", "master"])
subprocess.check_call(["git", "fetch", "check-patch", "master"],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL)
ancestor = subprocess.check_output(["git", "merge-base",
"check-patch/master", "HEAD"],

View File

@@ -1,66 +0,0 @@
#!/usr/bin/env python3
#
# check-units.py: check the number of compilation units and identify
# those that are rebuilt multiple times
#
# Copyright (C) 2025 Linaro Ltd.
#
# SPDX-License-Identifier: GPL-2.0-or-later
from os import access, R_OK, path
from sys import argv, exit
import json
from collections import Counter
def extract_build_units(cc_path):
"""
Extract the build units and their counds from compile_commands.json file.
Returns:
Hash table of ["unit"] = count
"""
j = json.load(open(cc_path, 'r'))
files = [f['file'] for f in j]
build_units = Counter(files)
return build_units
def analyse_units(build_units):
"""
Analyse the build units and report stats and the top 10 rebuilds
"""
print(f"Total source files: {len(build_units.keys())}")
print(f"Total build units: {sum(units.values())}")
# Create a sorted list by number of rebuilds
sorted_build_units = sorted(build_units.items(),
key=lambda item: item[1],
reverse=True)
print("Most rebuilt units:")
for unit, count in sorted_build_units[:20]:
print(f" {unit} built {count} times")
print("Least rebuilt units:")
for unit, count in sorted_build_units[-10:]:
print(f" {unit} built {count} times")
if __name__ == "__main__":
if len(argv) != 2:
script_name = path.basename(argv[0])
print(f"Usage: {script_name} <path_to_compile_commands.json>")
exit(1)
cc_path = argv[1]
if path.isfile(cc_path) and access(cc_path, R_OK):
units = extract_build_units(cc_path)
analyse_units(units)
exit(0)
else:
print(f"{cc_path} doesn't exist or isn't readable")
exit(1)

View File

@@ -11,50 +11,77 @@
# special care, because we can't just override it at the GitLab CI job
# definition level or we risk breaking it completely.
.cirrus_build_job:
extends: .base_job_template
stage: build
image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:latest
image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:master
needs: []
allow_failure:
exit_codes: 3
# 20 mins larger than "timeout_in" in cirrus/build.yml
# as there's often a 5-10 minute delay before Cirrus CI
# actually starts the task
timeout: 80m
allow_failure: true
script:
- set -o allexport
- source .gitlab-ci.d/cirrus/$NAME.vars
- set +o allexport
- cirrus-vars <.gitlab-ci.d/cirrus/build.yml >.gitlab-ci.d/cirrus/$NAME.yml
- sed -e "s|[@]CI_REPOSITORY_URL@|$CI_REPOSITORY_URL|g"
-e "s|[@]CI_COMMIT_REF_NAME@|$CI_COMMIT_REF_NAME|g"
-e "s|[@]CI_COMMIT_SHA@|$CI_COMMIT_SHA|g"
-e "s|[@]CIRRUS_VM_INSTANCE_TYPE@|$CIRRUS_VM_INSTANCE_TYPE|g"
-e "s|[@]CIRRUS_VM_IMAGE_SELECTOR@|$CIRRUS_VM_IMAGE_SELECTOR|g"
-e "s|[@]CIRRUS_VM_IMAGE_NAME@|$CIRRUS_VM_IMAGE_NAME|g"
-e "s|[@]CIRRUS_VM_CPUS@|$CIRRUS_VM_CPUS|g"
-e "s|[@]CIRRUS_VM_RAM@|$CIRRUS_VM_RAM|g"
-e "s|[@]UPDATE_COMMAND@|$UPDATE_COMMAND|g"
-e "s|[@]INSTALL_COMMAND@|$INSTALL_COMMAND|g"
-e "s|[@]PATH@|$PATH_EXTRA${PATH_EXTRA:+:}\$PATH|g"
-e "s|[@]PKG_CONFIG_PATH@|$PKG_CONFIG_PATH|g"
-e "s|[@]PKGS@|$PKGS|g"
-e "s|[@]MAKE@|$MAKE|g"
-e "s|[@]PYTHON@|$PYTHON|g"
-e "s|[@]PIP3@|$PIP3|g"
-e "s|[@]PYPI_PKGS@|$PYPI_PKGS|g"
-e "s|[@]CONFIGURE_ARGS@|$CONFIGURE_ARGS|g"
-e "s|[@]TEST_TARGETSS@|$TEST_TARGETSS|g"
<.gitlab-ci.d/cirrus/build.yml >.gitlab-ci.d/cirrus/$NAME.yml
- cat .gitlab-ci.d/cirrus/$NAME.yml
- cirrus-run -v --show-build-log always .gitlab-ci.d/cirrus/$NAME.yml
variables:
QEMU_JOB_CIRRUS: 1
rules:
- if: "$CIRRUS_GITHUB_REPO && $CIRRUS_API_TOKEN"
x64-freebsd-14-build:
x64-freebsd-12-build:
extends: .cirrus_build_job
variables:
NAME: freebsd-14
NAME: freebsd-12
CIRRUS_VM_INSTANCE_TYPE: freebsd_instance
CIRRUS_VM_IMAGE_SELECTOR: image_family
CIRRUS_VM_IMAGE_NAME: freebsd-14-2
CIRRUS_VM_IMAGE_NAME: freebsd-12-2
CIRRUS_VM_CPUS: 8
CIRRUS_VM_RAM: 8G
UPDATE_COMMAND: pkg update; pkg upgrade -y
UPDATE_COMMAND: pkg update
INSTALL_COMMAND: pkg install -y
CONFIGURE_ARGS: --target-list-exclude=arm-softmmu,i386-softmmu,microblaze-softmmu,mips64el-softmmu,mipsel-softmmu,mips-softmmu,ppc-softmmu,sh4eb-softmmu,xtensa-softmmu
# TODO: Enable gnutls again once FreeBSD's libtasn1 got fixed
# See: https://gitlab.com/gnutls/libtasn1/-/merge_requests/71
CONFIGURE_ARGS: --disable-gnutls
TEST_TARGETS: check
aarch64-macos-build:
x64-freebsd-13-build:
extends: .cirrus_build_job
variables:
NAME: macos-14
CIRRUS_VM_INSTANCE_TYPE: macos_instance
NAME: freebsd-13
CIRRUS_VM_INSTANCE_TYPE: freebsd_instance
CIRRUS_VM_IMAGE_SELECTOR: image_family
CIRRUS_VM_IMAGE_NAME: freebsd-13-0
CIRRUS_VM_CPUS: 8
CIRRUS_VM_RAM: 8G
UPDATE_COMMAND: pkg update
INSTALL_COMMAND: pkg install -y
TEST_TARGETS: check
x64-macos-11-base-build:
extends: .cirrus_build_job
variables:
NAME: macos-11
CIRRUS_VM_INSTANCE_TYPE: osx_instance
CIRRUS_VM_IMAGE_SELECTOR: image
CIRRUS_VM_IMAGE_NAME: ghcr.io/cirruslabs/macos-runner:sonoma
CIRRUS_VM_IMAGE_NAME: big-sur-base
CIRRUS_VM_CPUS: 12
CIRRUS_VM_RAM: 24G
UPDATE_COMMAND: brew update
INSTALL_COMMAND: brew install
PATH_EXTRA: /opt/homebrew/ccache/libexec:/opt/homebrew/gettext/bin
PKG_CONFIG_PATH: /opt/homebrew/curl/lib/pkgconfig:/opt/homebrew/ncurses/lib/pkgconfig:/opt/homebrew/readline/lib/pkgconfig
CONFIGURE_ARGS: --target-list-exclude=arm-softmmu,i386-softmmu,microblazeel-softmmu,mips64-softmmu,mipsel-softmmu,mips-softmmu,ppc-softmmu,sh4-softmmu,xtensaeb-softmmu
PATH_EXTRA: /usr/local/opt/ccache/libexec:/usr/local/opt/gettext/bin
PKG_CONFIG_PATH: /usr/local/opt/curl/lib/pkgconfig:/usr/local/opt/ncurses/lib/pkgconfig:/usr/local/opt/readline/lib/pkgconfig
TEST_TARGETS: check-unit check-block check-qapi-schema check-softfloat check-qtest-x86_64

View File

@@ -8,25 +8,22 @@ env:
CI_REPOSITORY_URL: "@CI_REPOSITORY_URL@"
CI_COMMIT_REF_NAME: "@CI_COMMIT_REF_NAME@"
CI_COMMIT_SHA: "@CI_COMMIT_SHA@"
PATH: "@PATH_EXTRA@:$PATH"
PATH: "@PATH@"
PKG_CONFIG_PATH: "@PKG_CONFIG_PATH@"
PYTHON: "@PYTHON@"
MAKE: "@MAKE@"
CONFIGURE_ARGS: "@CONFIGURE_ARGS@"
TEST_TARGETS: "@TEST_TARGETS@"
build_task:
# A little shorter than GitLab timeout in ../cirrus.yml
timeout_in: 60m
install_script:
- @UPDATE_COMMAND@
- @INSTALL_COMMAND@ @PKGS@
- if test -n "@PYPI_PKGS@" ; then PYLIB=$(@PYTHON@ -c 'import sysconfig; print(sysconfig.get_path("stdlib"))'); rm -f $PYLIB/EXTERNALLY-MANAGED; @PIP3@ install @PYPI_PKGS@ ; fi
- if test -n "@PYPI_PKGS@" ; then @PIP3@ install @PYPI_PKGS@ ; fi
clone_script:
- git clone --depth 100 "$CI_REPOSITORY_URL" .
- git fetch origin "$CI_COMMIT_REF_NAME"
- git reset --hard "$CI_COMMIT_SHA"
step_script:
build_script:
- mkdir build
- cd build
- ../configure --enable-werror $CONFIGURE_ARGS
@@ -36,7 +33,3 @@ build_task:
do
$MAKE -j$(sysctl -n hw.ncpu) $TARGET V=1 ;
done
always:
build_result_artifacts:
path: build/meson-logs/*log.txt
type: text/plain

View File

@@ -0,0 +1,13 @@
# THIS FILE WAS AUTO-GENERATED
#
# $ lcitool variables freebsd-12 qemu
#
# https://gitlab.com/libvirt/libvirt-ci/-/commit/c7e275ab27ac0dcd09da290817b9adeea1fd1eb1
PACKAGING_COMMAND='pkg'
CCACHE='/usr/local/bin/ccache'
MAKE='/usr/local/bin/gmake'
NINJA='/usr/local/bin/ninja'
PYTHON='/usr/local/bin/python3'
PIP3='/usr/local/bin/pip-3.8'
PKGS='alsa-lib bash bzip2 ca_root_nss capstone4 ccache cdrkit-genisoimage ctags curl cyrus-sasl dbus diffutils gettext git glib gmake gnutls gsed gtk3 libepoxy libffi libgcrypt libjpeg-turbo libnfs libspice-server libssh libtasn1 libxml2 llvm lttng-ust lzo2 meson ncurses nettle ninja opencv p5-Test-Harness perl5 pixman pkgconf png py38-numpy py38-pillow py38-pip py38-sphinx py38-sphinx_rtd_theme py38-virtualenv py38-yaml python3 rpm2cpio sdl2 sdl2_image snappy spice-protocol tesseract texinfo usbredir virglrenderer vte3 zstd'

View File

@@ -0,0 +1,13 @@
# THIS FILE WAS AUTO-GENERATED
#
# $ lcitool variables freebsd-13 qemu
#
# https://gitlab.com/libvirt/libvirt-ci/-/commit/c7e275ab27ac0dcd09da290817b9adeea1fd1eb1
PACKAGING_COMMAND='pkg'
CCACHE='/usr/local/bin/ccache'
MAKE='/usr/local/bin/gmake'
NINJA='/usr/local/bin/ninja'
PYTHON='/usr/local/bin/python3'
PIP3='/usr/local/bin/pip-3.8'
PKGS='alsa-lib bash bzip2 ca_root_nss capstone4 ccache cdrkit-genisoimage ctags curl cyrus-sasl dbus diffutils gettext git glib gmake gnutls gsed gtk3 libepoxy libffi libgcrypt libjpeg-turbo libnfs libspice-server libssh libtasn1 libxml2 llvm lttng-ust lzo2 meson ncurses nettle ninja opencv p5-Test-Harness perl5 pixman pkgconf png py38-numpy py38-pillow py38-pip py38-sphinx py38-sphinx_rtd_theme py38-virtualenv py38-yaml python3 rpm2cpio sdl2 sdl2_image snappy spice-protocol tesseract texinfo usbredir virglrenderer vte3 zstd'

View File

@@ -1,16 +0,0 @@
# THIS FILE WAS AUTO-GENERATED
#
# $ lcitool variables freebsd-14 qemu
#
# https://gitlab.com/libvirt/libvirt-ci
CCACHE='/usr/local/bin/ccache'
CPAN_PKGS=''
CROSS_PKGS=''
MAKE='/usr/local/bin/gmake'
NINJA='/usr/local/bin/ninja'
PACKAGING_COMMAND='pkg'
PIP3='/usr/local/bin/pip'
PKGS='alsa-lib bash bison bzip2 ca_root_nss capstone4 ccache4 cmocka ctags curl cyrus-sasl dbus diffutils dtc flex fusefs-libs3 gettext git glib gmake gnutls gsed gtk-vnc gtk3 json-c libepoxy libffi libgcrypt libjpeg-turbo libnfs libslirp libspice-server libssh libtasn1 llvm lzo2 meson mtools ncurses nettle ninja opencv pixman pkgconf png py311-numpy py311-pillow py311-pip py311-pyyaml py311-sphinx py311-sphinx_rtd_theme py311-tomli python3 rpm2cpio rust rust-bindgen-cli sdl2 sdl2_image snappy sndio socat spice-protocol tesseract usbredir virglrenderer vte3 vulkan-tools xorriso zstd'
PYPI_PKGS=''
PYTHON='/usr/local/bin/python3'

View File

@@ -0,0 +1,15 @@
# THIS FILE WAS AUTO-GENERATED
#
# $ lcitool variables macos-11 qemu
#
# https://gitlab.com/libvirt/libvirt-ci/-/commit/c7e275ab27ac0dcd09da290817b9adeea1fd1eb1
PACKAGING_COMMAND='brew'
CCACHE='/usr/local/bin/ccache'
MAKE='/usr/local/bin/gmake'
NINJA='/usr/local/bin/ninja'
PYTHON='/usr/local/bin/python3'
PIP3='/usr/local/bin/pip3'
PKGS='bash bc bzip2 capstone ccache cpanminus ctags curl dbus diffutils gcovr gettext git glib gnu-sed gnutls gtk+3 jemalloc jpeg-turbo libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb libxml2 llvm lzo make meson ncurses nettle ninja perl pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy sparse spice-protocol tesseract texinfo usbredir vde vte3 zlib zstd'
PYPI_PKGS='PyYAML numpy pillow sphinx sphinx-rtd-theme virtualenv'
CPAN_PKGS='Test::Harness'

View File

@@ -1,16 +0,0 @@
# THIS FILE WAS AUTO-GENERATED
#
# $ lcitool variables macos-14 qemu
#
# https://gitlab.com/libvirt/libvirt-ci
CCACHE='/opt/homebrew/bin/ccache'
CPAN_PKGS=''
CROSS_PKGS=''
MAKE='/opt/homebrew/bin/gmake'
NINJA='/opt/homebrew/bin/ninja'
PACKAGING_COMMAND='brew'
PIP3='/opt/homebrew/bin/pip3'
PKGS='bash bc bindgen bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 gtk-vnc jemalloc jpeg-turbo json-c libcbor libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio rust sdl2 sdl2_image snappy socat sparse spice-protocol swtpm tesseract usbredir vde vte3 vulkan-tools xorriso zlib zstd'
PYPI_PKGS='PyYAML numpy pillow sphinx sphinx-rtd-theme tomli'
PYTHON='/opt/homebrew/bin/python3'

View File

@@ -1,12 +1,17 @@
include:
- local: '/.gitlab-ci.d/container-template.yml'
amd64-centos9-container:
amd64-centos8-container:
extends: .container_job_template
variables:
NAME: centos9
NAME: centos8
amd64-fedora-container:
extends: .container_job_template
variables:
NAME: fedora
amd64-debian10-container:
extends: .container_job_template
variables:
NAME: debian10

View File

@@ -1,87 +1,168 @@
alpha-debian-cross-container:
extends: .container_job_template
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian-alpha-cross
amd64-debian-cross-container:
extends: .container_job_template
stage: containers
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian-amd64-cross
amd64-debian-user-cross-container:
extends: .container_job_template
stage: containers
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian-all-test-cross
amd64-debian-legacy-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-legacy-test-cross
arm64-debian-cross-container:
extends: .container_job_template
stage: containers
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian-arm64-cross
arm64-test-debian-cross-container:
extends: .container_job_template
stage: containers-layer2
needs: ['amd64-debian11-container']
variables:
NAME: debian-arm64-test-cross
armel-debian-cross-container:
extends: .container_job_template
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian-armel-cross
armhf-debian-cross-container:
extends: .container_job_template
stage: containers
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian-armhf-cross
# We never want to build hexagon in the CI system and by default we
# always want to refer to the master registry where it lives.
hexagon-cross-container:
extends: .container_job_template
image: docker:stable
stage: containers
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project"'
when: never
- when: always
variables:
NAME: debian-hexagon-cross
GIT_DEPTH: 1
services:
- docker:dind
before_script:
- export TAG="$CI_REGISTRY_IMAGE/qemu/$NAME:latest"
- export COMMON_TAG="$CI_REGISTRY/qemu-project/qemu/qemu/$NAME:latest"
- docker info
- docker login $CI_REGISTRY -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD"
script:
- echo "TAG:$TAG"
- echo "COMMON_TAG:$COMMON_TAG"
- docker pull $COMMON_TAG
- docker tag $COMMON_TAG $TAG
- docker push "$TAG"
after_script:
- docker logout
loongarch-debian-cross-container:
hppa-debian-cross-container:
extends: .container_job_template
stage: containers
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian-loongarch-cross
NAME: debian-hppa-cross
i686-debian-cross-container:
m68k-debian-cross-container:
extends: .container_job_template
stage: containers
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian-i686-cross
NAME: debian-m68k-cross
mips64-debian-cross-container:
extends: .container_job_template
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian-mips64-cross
mips64el-debian-cross-container:
extends: .container_job_template
stage: containers
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian-mips64el-cross
mips-debian-cross-container:
extends: .container_job_template
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian-mips-cross
mipsel-debian-cross-container:
extends: .container_job_template
stage: containers
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian-mipsel-cross
powerpc-test-cross-container:
extends: .container_job_template
stage: containers-layer2
needs: ['amd64-debian11-container']
variables:
NAME: debian-powerpc-test-cross
ppc64el-debian-cross-container:
extends: .container_job_template
stage: containers
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian-ppc64el-cross
riscv64-debian-cross-container:
extends: .container_job_template
stage: containers
# as we are currently based on 'sid/unstable' we may break so...
allow_failure: true
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian-riscv64-cross
QEMU_JOB_OPTIONAL: 1
s390x-debian-cross-container:
extends: .container_job_template
stage: containers
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian-s390x-cross
sh4-debian-cross-container:
extends: .container_job_template
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian-sh4-cross
sparc64-debian-cross-container:
extends: .container_job_template
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian-sparc64-cross
tricore-debian-cross-container:
extends: .container_job_template
stage: containers
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian-tricore-cross
@@ -90,6 +171,21 @@ xtensa-debian-cross-container:
variables:
NAME: debian-xtensa-cross
cris-fedora-cross-container:
extends: .container_job_template
variables:
NAME: fedora-cris-cross
i386-fedora-cross-container:
extends: .container_job_template
variables:
NAME: fedora-i386-cross
win32-fedora-cross-container:
extends: .container_job_template
variables:
NAME: fedora-win32-cross
win64-fedora-cross-container:
extends: .container_job_template
variables:

View File

@@ -1,21 +1,21 @@
.container_job_template:
extends: .base_job_template
image: docker:latest
image: docker:stable
stage: containers
services:
- docker:dind
before_script:
- export TAG="$CI_REGISTRY_IMAGE/qemu/$NAME:$QEMU_CI_CONTAINER_TAG"
# Always ':latest' because we always use upstream as a common cache source
- export COMMON_TAG="$CI_REGISTRY/qemu-project/qemu/qemu/$NAME:latest"
- export TAG="$CI_REGISTRY_IMAGE/qemu/$NAME:latest"
- export COMMON_TAG="$CI_REGISTRY/qemu-project/qemu/$NAME:latest"
- apk add python3
- docker info
- docker login $CI_REGISTRY -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD"
- until docker info; do sleep 1; done
script:
- echo "TAG:$TAG"
- echo "COMMON_TAG:$COMMON_TAG"
- docker build --tag "$TAG" --cache-from "$TAG" --cache-from "$COMMON_TAG"
--build-arg BUILDKIT_INLINE_CACHE=1
-f "tests/docker/dockerfiles/$NAME.docker" "."
- ./tests/docker/docker.py --engine docker build
-t "qemu/$NAME" -f "tests/docker/dockerfiles/$NAME.docker"
-r $CI_REGISTRY/qemu-project/qemu
- docker tag "qemu/$NAME" "$TAG"
- docker push "$TAG"
after_script:
- docker logout

View File

@@ -7,16 +7,32 @@ amd64-alpine-container:
variables:
NAME: alpine
amd64-debian11-container:
extends: .container_job_template
variables:
NAME: debian11
amd64-debian-container:
extends: .container_job_template
stage: containers
stage: containers-layer2
needs: ['amd64-debian10-container']
variables:
NAME: debian
NAME: debian-amd64
amd64-ubuntu2204-container:
amd64-ubuntu1804-container:
extends: .container_job_template
variables:
NAME: ubuntu2204
NAME: ubuntu1804
amd64-ubuntu2004-container:
extends: .container_job_template
variables:
NAME: ubuntu2004
amd64-ubuntu-container:
extends: .container_job_template
variables:
NAME: ubuntu
amd64-opensuse-leap-container:
extends: .container_job_template
@@ -27,9 +43,3 @@ python-container:
extends: .container_job_template
variables:
NAME: python
amd64-fedora-rust-nightly-container:
extends: .container_job_template
variables:
NAME: fedora-rust-nightly
allow_failure: true

View File

@@ -1,52 +1,22 @@
.cross_system_build_job:
extends: .base_job_template
stage: build
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
cache:
paths:
- ccache
key: "$CI_JOB_NAME"
when: always
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
timeout: 80m
before_script:
- source scripts/ci/gitlab-ci-section
- section_start setup "Pre-script setup"
- JOBS=$(expr $(nproc) + 1)
- cat /packages.txt
- section_end setup
script:
- export CCACHE_BASEDIR="$(pwd)"
- export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
- export CCACHE_MAXSIZE="500M"
- export PATH="$CCACHE_WRAPPERSDIR:$PATH"
- mkdir build
- cd build
- ccache --zero-stats
- section_start configure "Running configure"
- ../configure --enable-werror --disable-docs --enable-fdt=system
--disable-user $QEMU_CONFIGURE_OPTS $EXTRA_CONFIGURE_OPTS
--target-list-exclude="arm-softmmu
- PKG_CONFIG_PATH=$PKG_CONFIG_PATH
../configure --enable-werror --disable-docs $QEMU_CONFIGURE_OPTS
--disable-user --target-list-exclude="arm-softmmu cris-softmmu
i386-softmmu microblaze-softmmu mips-softmmu mipsel-softmmu
mips64-softmmu ppc-softmmu riscv32-softmmu sh4-softmmu
sparc-softmmu xtensa-softmmu $CROSS_SKIP_TARGETS"
- section_end configure
- section_start build "Building QEMU"
- make -j"$JOBS" all check-build
- section_end build
- section_start test "Running tests"
- if test -n "$MAKE_CHECK_ARGS";
then
$MAKE -j"$JOBS" $MAKE_CHECK_ARGS ;
fi
- section_end test
- section_start installer "Building the installer"
- make -j$(expr $(nproc) + 1) all check-build $MAKE_CHECK_ARGS
- if grep -q "EXESUF=.exe" config-host.mak;
then make installer;
version="$(git describe --match v[0-9]* 2>/dev/null || git rev-parse --short HEAD)";
version="$(git describe --match v[0-9]*)";
mv -v qemu-setup*.exe qemu-setup-${version}.exe;
fi
- section_end installer
- ccache --show-stats
# Job to cross-build specific accelerators.
#
@@ -54,80 +24,24 @@
# KVM), and set extra options (such disabling other accelerators) via the
# $EXTRA_CONFIGURE_OPTS variable.
.cross_accel_build_job:
extends: .base_job_template
stage: build
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
timeout: 60m
cache:
paths:
- ccache/
key: "$CI_JOB_NAME"
before_script:
- source scripts/ci/gitlab-ci-section
- JOBS=$(expr $(nproc) + 1)
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
timeout: 30m
script:
- export CCACHE_BASEDIR="$(pwd)"
- export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
- export CCACHE_MAXSIZE="500M"
- export PATH="$CCACHE_WRAPPERSDIR:$PATH"
- mkdir build
- cd build
- section_start configure "Running configure"
- ../configure --enable-werror --disable-docs $QEMU_CONFIGURE_OPTS
- PKG_CONFIG_PATH=$PKG_CONFIG_PATH
../configure --enable-werror --disable-docs $QEMU_CONFIGURE_OPTS
--disable-tools --enable-${ACCEL:-kvm} $EXTRA_CONFIGURE_OPTS
- section_end configure
- section_start build "Building QEMU"
- make -j"$JOBS" all check-build
- section_end build
- section_start test "Running tests"
- if test -n "$MAKE_CHECK_ARGS";
then
$MAKE -j"$JOBS" $MAKE_CHECK_ARGS ;
fi
- section_end test
- make -j$(expr $(nproc) + 1) all check-build $MAKE_CHECK_ARGS
.cross_user_build_job:
extends: .base_job_template
stage: build
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
cache:
paths:
- ccache/
key: "$CI_JOB_NAME"
before_script:
- source scripts/ci/gitlab-ci-section
- JOBS=$(expr $(nproc) + 1)
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
script:
- export CCACHE_BASEDIR="$(pwd)"
- export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
- export CCACHE_MAXSIZE="500M"
- mkdir build
- cd build
- section_start configure "Running configure"
- ../configure --enable-werror --disable-docs $QEMU_CONFIGURE_OPTS
--disable-system --target-list-exclude="aarch64_be-linux-user
alpha-linux-user m68k-linux-user microblazeel-linux-user
or1k-linux-user ppc-linux-user sparc-linux-user
xtensa-linux-user $CROSS_SKIP_TARGETS"
- section_end configure
- section_start build "Building QEMU"
- make -j"$JOBS" all check-build
- section_end build
- section_start test "Running tests"
- if test -n "$MAKE_CHECK_ARGS";
then
$MAKE -j"$JOBS" $MAKE_CHECK_ARGS ;
fi
- section_end test
# We can still run some tests on some of our cross build jobs. They can add this
# template to their extends to save the build logs and test results
.cross_test_artifacts:
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
when: always
expire_in: 7 days
paths:
- build/meson-logs/testlog.txt
reports:
junit: build/meson-logs/testlog.junit.xml
- PKG_CONFIG_PATH=$PKG_CONFIG_PATH
../configure --enable-werror --disable-docs $QEMU_CONFIGURE_OPTS
--disable-system
- make -j$(expr $(nproc) + 1) all check-build $MAKE_CHECK_ARGS

View File

@@ -1,6 +1,27 @@
include:
- local: '/.gitlab-ci.d/crossbuild-template.yml'
cross-armel-system:
extends: .cross_system_build_job
needs:
job: armel-debian-cross-container
variables:
IMAGE: debian-armel-cross
cross-armel-user:
extends: .cross_user_build_job
needs:
job: armel-debian-cross-container
variables:
IMAGE: debian-armel-cross
cross-armhf-system:
extends: .cross_system_build_job
needs:
job: armhf-debian-cross-container
variables:
IMAGE: debian-armhf-cross
cross-armhf-user:
extends: .cross_user_build_job
needs:
@@ -22,51 +43,44 @@ cross-arm64-user:
variables:
IMAGE: debian-arm64-cross
cross-arm64-kvm-only:
extends: .cross_accel_build_job
cross-i386-system:
extends: .cross_system_build_job
needs:
job: arm64-debian-cross-container
job: i386-fedora-cross-container
variables:
IMAGE: debian-arm64-cross
EXTRA_CONFIGURE_OPTS: --disable-tcg --without-default-features
cross-i686-system:
extends:
- .cross_system_build_job
- .cross_test_artifacts
needs:
job: i686-debian-cross-container
variables:
IMAGE: debian-i686-cross
EXTRA_CONFIGURE_OPTS: --disable-kvm
IMAGE: fedora-i386-cross
MAKE_CHECK_ARGS: check-qtest
cross-i686-user:
extends:
- .cross_user_build_job
- .cross_test_artifacts
cross-i386-user:
extends: .cross_user_build_job
needs:
job: i686-debian-cross-container
job: i386-fedora-cross-container
variables:
IMAGE: debian-i686-cross
IMAGE: fedora-i386-cross
MAKE_CHECK_ARGS: check
cross-i686-tci:
extends:
- .cross_accel_build_job
- .cross_test_artifacts
cross-i386-tci:
extends: .cross_accel_build_job
timeout: 60m
needs:
job: i686-debian-cross-container
variables:
IMAGE: debian-i686-cross
IMAGE: fedora-i386-cross
ACCEL: tcg-interpreter
EXTRA_CONFIGURE_OPTS: --target-list=i386-softmmu,i386-linux-user,arm-softmmu,arm-linux-user,ppc-softmmu,ppc-linux-user --disable-plugins --disable-kvm
# Force tests to run with reduced parallelism, to see whether this
# reduces the flakiness of this CI job. The CI
# environment by default shows us 8 CPUs and so we
# would otherwise be using a parallelism of 9.
MAKE_CHECK_ARGS: check check-tcg -j2
EXTRA_CONFIGURE_OPTS: --target-list=i386-softmmu,i386-linux-user,aarch64-softmmu,aarch64-linux-user,ppc-softmmu,ppc-linux-user
MAKE_CHECK_ARGS: check check-tcg
cross-mips-system:
extends: .cross_system_build_job
needs:
job: mips-debian-cross-container
variables:
IMAGE: debian-mips-cross
cross-mips-user:
extends: .cross_user_build_job
needs:
job: mips-debian-cross-container
variables:
IMAGE: debian-mips-cross
cross-mipsel-system:
extends: .cross_system_build_job
@@ -110,33 +124,6 @@ cross-ppc64el-user:
variables:
IMAGE: debian-ppc64el-cross
cross-ppc64el-kvm-only:
extends: .cross_accel_build_job
needs:
job: ppc64el-debian-cross-container
variables:
IMAGE: debian-ppc64el-cross
EXTRA_CONFIGURE_OPTS: --disable-tcg --without-default-devices
# The riscv64 cross-builds currently use a 'sid' container to get
# compilers and libraries. Until something more stable is found, we
# allow_failure so as not to block CI.
cross-riscv64-system:
extends: .cross_system_build_job
allow_failure: true
needs:
job: riscv64-debian-cross-container
variables:
IMAGE: debian-riscv64-cross
cross-riscv64-user:
extends: .cross_user_build_job
allow_failure: true
needs:
job: riscv64-debian-cross-container
variables:
IMAGE: debian-riscv64-cross
cross-s390x-system:
extends: .cross_system_build_job
needs:
@@ -157,7 +144,7 @@ cross-s390x-kvm-only:
job: s390x-debian-cross-container
variables:
IMAGE: debian-s390x-cross
EXTRA_CONFIGURE_OPTS: --disable-tcg --enable-trace-backends=ftrace
EXTRA_CONFIGURE_OPTS: --disable-tcg
cross-mips64el-kvm-only:
extends: .cross_accel_build_job
@@ -167,19 +154,27 @@ cross-mips64el-kvm-only:
IMAGE: debian-mips64el-cross
EXTRA_CONFIGURE_OPTS: --disable-tcg --target-list=mips64el-softmmu
cross-win32-system:
extends: .cross_system_build_job
needs:
job: win32-fedora-cross-container
variables:
IMAGE: fedora-win32-cross
CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu m68k-softmmu
microblazeel-softmmu mips64el-softmmu nios2-softmmu
artifacts:
paths:
- build/qemu-setup*.exe
cross-win64-system:
extends: .cross_system_build_job
needs:
job: win64-fedora-cross-container
variables:
IMAGE: fedora-win64-cross
EXTRA_CONFIGURE_OPTS: --enable-fdt=internal --disable-plugins
CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu
m68k-softmmu microblazeel-softmmu
or1k-softmmu rx-softmmu sh4eb-softmmu sparc64-softmmu
CROSS_SKIP_TARGETS: or1k-softmmu rx-softmmu sh4eb-softmmu sparc64-softmmu
tricore-softmmu xtensaeb-softmmu
artifacts:
when: on_success
paths:
- build/qemu-setup*.exe


@@ -10,25 +10,229 @@
# gitlab-runner. To avoid problems that gitlab-runner can cause while
# reusing the GIT repository, let's enable the clone strategy, which
# guarantees a fresh repository on each job run.
variables:
GIT_STRATEGY: clone
# All custom runners can extend this template to upload the testlog
# data as an artifact and also feed the junit report
.custom_runner_template:
extends: .base_job_template
variables:
GIT_STRATEGY: clone
GIT_FETCH_EXTRA_FLAGS: --no-tags --prune --quiet
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
expire_in: 7 days
when: always
paths:
- build/build.ninja
- build/meson-logs
reports:
junit: build/meson-logs/testlog.junit.xml
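A minimal sketch of a job using the template above (the runner tag and job name are hypothetical, not taken from this diff): extending .custom_runner_template is all that is needed to upload build/meson-logs and feed the junit report:

ubuntu-example-custom:
  extends: .custom_runner_template
  needs: []
  stage: build
  tags:
    - example_runner          # hypothetical custom runner tag
  script:
    - mkdir build
    - cd build
    - ../configure || { cat config.log meson-logs/meson-log.txt; exit 1; }
    - make --output-sync -j`nproc`
    - make --output-sync -j`nproc` check

The ubuntu-22.04 jobs further down in this diff extend the template in exactly this way.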
# All ubuntu-18.04 jobs should run successfully in an environment
# set up by the scripts/ci/setup/build-environment.yml task
# "Install basic packages to build QEMU on Ubuntu 18.04/20.04"
ubuntu-18.04-s390x-all-linux-static:
allow_failure: true
needs: []
stage: build
tags:
- ubuntu_18.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
- if: "$S390X_RUNNER_AVAILABLE"
script:
# --disable-libssh is needed because of https://bugs.launchpad.net/qemu/+bug/1838763
# --disable-glusterfs is needed because there's no static version of those libs in distro supplied packages
- mkdir build
- cd build
- ../configure --enable-debug --static --disable-system --disable-glusterfs --disable-libssh
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check V=1
- make --output-sync -j`nproc` check-tcg V=1
include:
- local: '/.gitlab-ci.d/custom-runners/ubuntu-22.04-s390x.yml'
- local: '/.gitlab-ci.d/custom-runners/ubuntu-22.04-aarch64.yml'
- local: '/.gitlab-ci.d/custom-runners/ubuntu-22.04-aarch32.yml'
ubuntu-18.04-s390x-all:
allow_failure: true
needs: []
stage: build
tags:
- ubuntu_18.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
- if: "$S390X_RUNNER_AVAILABLE"
script:
- mkdir build
- cd build
- ../configure --disable-libssh
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check V=1
ubuntu-18.04-s390x-alldbg:
allow_failure: true
needs: []
stage: build
tags:
- ubuntu_18.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
- if: "$S390X_RUNNER_AVAILABLE"
script:
- mkdir build
- cd build
- ../configure --enable-debug --disable-libssh
- make clean
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check V=1
ubuntu-18.04-s390x-clang:
allow_failure: true
needs: []
stage: build
tags:
- ubuntu_18.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
when: manual
- if: "$S390X_RUNNER_AVAILABLE"
when: manual
script:
- mkdir build
- cd build
- ../configure --disable-libssh --cc=clang --cxx=clang++ --enable-sanitizers
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check V=1
ubuntu-18.04-s390x-tci:
allow_failure: true
needs: []
stage: build
tags:
- ubuntu_18.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
- if: "$S390X_RUNNER_AVAILABLE"
script:
- mkdir build
- cd build
- ../configure --disable-libssh --enable-tcg-interpreter
- make --output-sync -j`nproc`
ubuntu-18.04-s390x-notcg:
allow_failure: true
needs: []
stage: build
tags:
- ubuntu_18.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
when: manual
- if: "$S390X_RUNNER_AVAILABLE"
when: manual
script:
- mkdir build
- cd build
- ../configure --disable-libssh --disable-tcg
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check V=1
# All ubuntu-20.04 jobs should run successfully in an environment
# set up by the scripts/ci/setup/qemu/build-environment.yml task
# "Install basic packages to build QEMU on Ubuntu 18.04/20.04"
ubuntu-20.04-aarch64-all-linux-static:
allow_failure: true
needs: []
stage: build
tags:
- ubuntu_20.04
- aarch64
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
- if: "$S390X_RUNNER_AVAILABLE"
script:
# --disable-libssh is needed because of https://bugs.launchpad.net/qemu/+bug/1838763
# --disable-glusterfs is needed because there's no static version of those libs in distro supplied packages
- mkdir build
- cd build
- ../configure --enable-debug --static --disable-system --disable-glusterfs --disable-libssh
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check V=1
- make --output-sync -j`nproc` check-tcg V=1
ubuntu-20.04-aarch64-all:
allow_failure: true
needs: []
stage: build
tags:
- ubuntu_20.04
- aarch64
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
- if: "$S390X_RUNNER_AVAILABLE"
script:
- mkdir build
- cd build
- ../configure --disable-libssh
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check V=1
ubuntu-20.04-aarch64-alldbg:
allow_failure: true
needs: []
stage: build
tags:
- ubuntu_20.04
- aarch64
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
- if: "$S390X_RUNNER_AVAILABLE"
script:
- mkdir build
- cd build
- ../configure --enable-debug --disable-libssh
- make clean
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check V=1
ubuntu-20.04-aarch64-clang:
allow_failure: true
needs: []
stage: build
tags:
- ubuntu_20.04
- aarch64
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
when: manual
- if: "$S390X_RUNNER_AVAILABLE"
when: manual
script:
- mkdir build
- cd build
- ../configure --disable-libssh --cc=clang-10 --cxx=clang++-10 --enable-sanitizers
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check V=1
ubuntu-20.04-aarch64-tci:
allow_failure: true
needs: []
stage: build
tags:
- ubuntu_20.04
- aarch64
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
- if: "$S390X_RUNNER_AVAILABLE"
script:
- mkdir build
- cd build
- ../configure --disable-libssh --enable-tcg-interpreter
- make --output-sync -j`nproc`
ubuntu-20.04-aarch64-notcg:
allow_failure: true
needs: []
stage: build
tags:
- ubuntu_20.04
- aarch64
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
when: manual
- if: "$S390X_RUNNER_AVAILABLE"
when: manual
script:
- mkdir build
- cd build
- ../configure --disable-libssh --disable-tcg
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check V=1


@@ -1,25 +0,0 @@
# All ubuntu-22.04 jobs should run successfully in an environment
# set up by the scripts/ci/setup/ubuntu/build-environment.yml task
# "Install basic packages to build QEMU on Ubuntu 22.04"
ubuntu-22.04-aarch32-all:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- aarch32
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
when: manual
allow_failure: true
- if: "$AARCH32_RUNNER_AVAILABLE"
when: manual
allow_failure: true
script:
- mkdir build
- cd build
- ../configure --cross-prefix=arm-linux-gnueabihf-
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc --ignore=40`
- make --output-sync -j`nproc --ignore=40` check


@@ -1,151 +0,0 @@
# All ubuntu-22.04 jobs should run successfully in an environment
# set up by the scripts/ci/setup/ubuntu/build-environment.yml task
# "Install basic packages to build QEMU on Ubuntu 22.04"
ubuntu-22.04-aarch64-all-linux-static:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- aarch64
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
- if: "$AARCH64_RUNNER_AVAILABLE"
script:
- mkdir build
- cd build
# Disable -static-pie due to build error with system libc:
# https://bugs.launchpad.net/ubuntu/+source/glibc/+bug/1987438
- ../configure --enable-debug --static --disable-system --disable-pie
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc --ignore=40`
- make check-tcg
- make --output-sync -j`nproc --ignore=40` check
ubuntu-22.04-aarch64-all:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- aarch64
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
when: manual
allow_failure: true
- if: "$AARCH64_RUNNER_AVAILABLE"
when: manual
allow_failure: true
script:
- mkdir build
- cd build
- ../configure
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc --ignore=40`
- make --output-sync -j`nproc --ignore=40` check
ubuntu-22.04-aarch64-without-defaults:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- aarch64
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
when: manual
allow_failure: true
- if: "$AARCH64_RUNNER_AVAILABLE"
when: manual
allow_failure: true
script:
- mkdir build
- cd build
- ../configure --disable-user --without-default-devices --without-default-features
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc --ignore=40`
- make --output-sync -j`nproc --ignore=40` check
ubuntu-22.04-aarch64-alldbg:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- aarch64
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
- if: "$AARCH64_RUNNER_AVAILABLE"
script:
- mkdir build
- cd build
- ../configure --enable-debug
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make clean
- make --output-sync -j`nproc --ignore=40`
- make --output-sync -j`nproc --ignore=40` check
ubuntu-22.04-aarch64-clang:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- aarch64
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
when: manual
allow_failure: true
- if: "$AARCH64_RUNNER_AVAILABLE"
when: manual
allow_failure: true
script:
- mkdir build
- cd build
- ../configure --disable-libssh --cc=clang --cxx=clang++ --enable-ubsan
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc --ignore=40`
- make --output-sync -j`nproc --ignore=40` check
ubuntu-22.04-aarch64-tci:
needs: []
stage: build
tags:
- ubuntu_22.04
- aarch64
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
when: manual
allow_failure: true
- if: "$AARCH64_RUNNER_AVAILABLE"
when: manual
allow_failure: true
script:
- mkdir build
- cd build
- ../configure --enable-tcg-interpreter
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc --ignore=40`
ubuntu-22.04-aarch64-notcg:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- aarch64
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
when: manual
allow_failure: true
- if: "$AARCH64_RUNNER_AVAILABLE"
when: manual
allow_failure: true
script:
- mkdir build
- cd build
- ../configure --disable-tcg --with-devices-aarch64=minimal
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc --ignore=40`
- make --output-sync -j`nproc --ignore=40` check


@@ -1,128 +0,0 @@
# All ubuntu-22.04 jobs should run successfully in an environment
# set up by the scripts/ci/setup/ubuntu/build-environment.yml task
# "Install basic packages to build QEMU on Ubuntu 22.04"
ubuntu-22.04-s390x-all-linux:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
- if: "$S390X_RUNNER_AVAILABLE"
script:
- mkdir build
- cd build
- ../configure --enable-debug --disable-system --disable-tools --disable-docs
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc`
- make --output-sync check-tcg
- make --output-sync -j`nproc` check
ubuntu-22.04-s390x-all-system:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- s390x
timeout: 75m
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
- if: "$S390X_RUNNER_AVAILABLE"
script:
- mkdir build
- cd build
- ../configure --disable-user
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check
ubuntu-22.04-s390x-alldbg:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
when: manual
allow_failure: true
- if: "$S390X_RUNNER_AVAILABLE"
when: manual
allow_failure: true
script:
- mkdir build
- cd build
- ../configure --enable-debug
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make clean
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check
ubuntu-22.04-s390x-clang:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
when: manual
allow_failure: true
- if: "$S390X_RUNNER_AVAILABLE"
when: manual
allow_failure: true
script:
- mkdir build
- cd build
- ../configure --cc=clang --cxx=clang++ --enable-ubsan
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check
ubuntu-22.04-s390x-tci:
needs: []
stage: build
tags:
- ubuntu_22.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
when: manual
allow_failure: true
- if: "$S390X_RUNNER_AVAILABLE"
when: manual
allow_failure: true
script:
- mkdir build
- cd build
- ../configure --enable-tcg-interpreter
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc`
ubuntu-22.04-s390x-notcg:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
when: manual
allow_failure: true
- if: "$S390X_RUNNER_AVAILABLE"
when: manual
allow_failure: true
script:
- mkdir build
- cd build
- ../configure --disable-tcg
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check

.gitlab-ci.d/edk2.yml

@@ -0,0 +1,56 @@
# All jobs needing docker-edk2 must use the same rules it uses.
.edk2_job_rules:
rules: # Only run this job when ...
- changes:
# this file is modified
- .gitlab-ci.d/edk2.yml
# or the Dockerfile is modified
- .gitlab-ci.d/edk2/Dockerfile
# or roms/edk2/ is modified (submodule updated)
- roms/edk2/*
when: on_success
- if: '$CI_COMMIT_REF_NAME =~ /^edk2/' # or the branch/tag starts with 'edk2'
when: on_success
- if: '$CI_COMMIT_MESSAGE =~ /edk2/i' # or last commit description contains 'EDK2'
when: on_success
docker-edk2:
extends: .edk2_job_rules
stage: containers
image: docker:19.03.1
services:
- docker:19.03.1-dind
variables:
GIT_DEPTH: 3
IMAGE_TAG: $CI_REGISTRY_IMAGE:edk2-cross-build
# We don't use TLS
DOCKER_HOST: tcp://docker:2375
DOCKER_TLS_CERTDIR: ""
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
script:
- docker pull $IMAGE_TAG || true
- docker build --cache-from $IMAGE_TAG --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
--tag $IMAGE_TAG .gitlab-ci.d/edk2
- docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
- docker push $IMAGE_TAG
build-edk2:
extends: .edk2_job_rules
stage: build
needs: ['docker-edk2']
artifacts:
paths: # 'artifacts.zip' will contain the following files:
- pc-bios/edk2*bz2
- pc-bios/edk2-licenses.txt
- edk2-stdout.log
- edk2-stderr.log
image: $CI_REGISTRY_IMAGE:edk2-cross-build
variables:
GIT_DEPTH: 3
script: # Clone the required submodules and build EDK2
- git submodule update --init roms/edk2
- git -C roms/edk2 submodule update --init
- export JOBS=$(($(getconf _NPROCESSORS_ONLN) + 1))
- echo "=== Using ${JOBS} simultaneous jobs ==="
- make -j${JOBS} -C roms efi 2>&1 1>edk2-stdout.log | tee -a edk2-stderr.log >&2
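The redirections on that last make line are easy to misread, so here is the same pattern as a standalone sketch with placeholder names (some_command and the log file names are illustrative only). The pipe is set up first; 2>&1 then points stderr at the pipe, and 1> redirects stdout into the log file, so only stderr reaches tee, which appends it to the stderr log and echoes it back to the console via >&2:

script:
  # stdout is captured in stdout.log; stderr is written to stderr.log
  # by tee and still shown in the CI job output
  - some_command 2>&1 1>stdout.log | tee -a stderr.log >&2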


@@ -0,0 +1,27 @@
#
# Docker image to cross-compile EDK2 firmware binaries
#
FROM ubuntu:16.04
MAINTAINER Philippe Mathieu-Daudé <philmd@redhat.com>
# Install packages required to build EDK2
RUN apt update \
&& \
\
DEBIAN_FRONTEND=noninteractive \
apt install --assume-yes --no-install-recommends \
build-essential \
ca-certificates \
dos2unix \
gcc-aarch64-linux-gnu \
gcc-arm-linux-gnueabi \
git \
iasl \
make \
nasm \
python \
uuid-dev \
&& \
\
rm -rf /var/lib/apt/lists/*


@@ -1,88 +1,63 @@
# All jobs needing docker-opensbi must use the same rules it uses.
.opensbi_job_rules:
rules:
# Forks don't get pipelines unless QEMU_CI=1 or QEMU_CI=2 is set
- if: '$QEMU_CI != "1" && $QEMU_CI != "2" && $CI_PROJECT_NAMESPACE != "qemu-project"'
when: never
# In forks, if QEMU_CI=1 is set, then create manual job
# if any files affecting the build output are touched
- if: '$QEMU_CI == "1" && $CI_PROJECT_NAMESPACE != "qemu-project"'
changes:
- .gitlab-ci.d/opensbi.yml
- .gitlab-ci.d/opensbi/Dockerfile
- roms/opensbi/*
when: manual
# In forks, if QEMU_CI=1 is set, then create manual job
# if the branch/tag starts with 'opensbi'
- if: '$QEMU_CI == "1" && $CI_PROJECT_NAMESPACE != "qemu-project" && $CI_COMMIT_REF_NAME =~ /^opensbi/'
when: manual
# In forks, if QEMU_CI=1 is set, then create manual job
# if the last commit msg contains 'OpenSBI' (case insensitive)
- if: '$QEMU_CI == "1" && $CI_PROJECT_NAMESPACE != "qemu-project" && $CI_COMMIT_MESSAGE =~ /opensbi/i'
when: manual
# Scheduled runs on mainline don't get pipelines except for the special Coverity job
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_PIPELINE_SOURCE == "schedule"'
when: never
# Run if any files affecting the build output are touched
- changes:
- .gitlab-ci.d/opensbi.yml
- .gitlab-ci.d/opensbi/Dockerfile
- roms/opensbi/*
when: on_success
# Run if the branch/tag starts with 'opensbi'
- if: '$CI_COMMIT_REF_NAME =~ /^opensbi/'
when: on_success
# Run if the last commit msg contains 'OpenSBI' (case insensitive)
- if: '$CI_COMMIT_MESSAGE =~ /opensbi/i'
when: on_success
rules: # Only run this job when ...
- changes:
# this file is modified
- .gitlab-ci.d/opensbi.yml
# or the Dockerfile is modified
- .gitlab-ci.d/opensbi/Dockerfile
when: on_success
- changes: # or roms/opensbi/ is modified (submodule updated)
- roms/opensbi/*
when: on_success
- if: '$CI_COMMIT_REF_NAME =~ /^opensbi/' # or the branch/tag starts with 'opensbi'
when: on_success
- if: '$CI_COMMIT_MESSAGE =~ /opensbi/i' # or last commit description contains 'OpenSBI'
when: on_success
docker-opensbi:
extends: .opensbi_job_rules
stage: containers
image: docker:latest
services:
- docker:dind
variables:
GIT_DEPTH: 3
IMAGE_TAG: $CI_REGISTRY_IMAGE:opensbi-cross-build
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
- until docker info; do sleep 1; done
script:
- docker pull $IMAGE_TAG || true
- docker build --cache-from $IMAGE_TAG --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
--tag $IMAGE_TAG .gitlab-ci.d/opensbi
- docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
- docker push $IMAGE_TAG
extends: .opensbi_job_rules
stage: containers
image: docker:19.03.1
services:
- docker:19.03.1-dind
variables:
GIT_DEPTH: 3
IMAGE_TAG: $CI_REGISTRY_IMAGE:opensbi-cross-build
# We don't use TLS
DOCKER_HOST: tcp://docker:2375
DOCKER_TLS_CERTDIR: ""
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
script:
- docker pull $IMAGE_TAG || true
- docker build --cache-from $IMAGE_TAG --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
--tag $IMAGE_TAG .gitlab-ci.d/opensbi
- docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
- docker push $IMAGE_TAG
build-opensbi:
extends: .opensbi_job_rules
stage: build
needs: ['docker-opensbi']
artifacts:
when: on_success
paths: # 'artifacts.zip' will contain the following files:
- pc-bios/opensbi-riscv32-generic-fw_dynamic.bin
- pc-bios/opensbi-riscv64-generic-fw_dynamic.bin
- opensbi32-generic-stdout.log
- opensbi32-generic-stderr.log
- opensbi64-generic-stdout.log
- opensbi64-generic-stderr.log
image: $CI_REGISTRY_IMAGE:opensbi-cross-build
variables:
GIT_DEPTH: 3
script: # Clone the required submodules and build OpenSBI
- git submodule update --init roms/opensbi
- export JOBS=$(($(getconf _NPROCESSORS_ONLN) + 1))
- echo "=== Using ${JOBS} simultaneous jobs ==="
- make -j${JOBS} -C roms/opensbi clean
- make -j${JOBS} -C roms opensbi32-generic 2>&1 1>opensbi32-generic-stdout.log | tee -a opensbi32-generic-stderr.log >&2
- make -j${JOBS} -C roms/opensbi clean
- make -j${JOBS} -C roms opensbi64-generic 2>&1 1>opensbi64-generic-stdout.log | tee -a opensbi64-generic-stderr.log >&2
extends: .opensbi_job_rules
stage: build
needs: ['docker-opensbi']
artifacts:
paths: # 'artifacts.zip' will contain the following files:
- pc-bios/opensbi-riscv32-generic-fw_dynamic.bin
- pc-bios/opensbi-riscv32-generic-fw_dynamic.elf
- pc-bios/opensbi-riscv64-generic-fw_dynamic.bin
- pc-bios/opensbi-riscv64-generic-fw_dynamic.elf
- opensbi32-generic-stdout.log
- opensbi32-generic-stderr.log
- opensbi64-generic-stdout.log
- opensbi64-generic-stderr.log
image: $CI_REGISTRY_IMAGE:opensbi-cross-build
variables:
GIT_DEPTH: 3
script: # Clone the required submodules and build OpenSBI
- git submodule update --init roms/opensbi
- export JOBS=$(($(getconf _NPROCESSORS_ONLN) + 1))
- echo "=== Using ${JOBS} simultaneous jobs ==="
- make -j${JOBS} -C roms/opensbi clean
- make -j${JOBS} -C roms opensbi32-generic 2>&1 1>opensbi32-generic-stdout.log | tee -a opensbi32-generic-stderr.log >&2
- make -j${JOBS} -C roms/opensbi clean
- make -j${JOBS} -C roms opensbi64-generic 2>&1 1>opensbi64-generic-stdout.log | tee -a opensbi64-generic-stderr.log >&2


@@ -15,7 +15,6 @@ RUN apt update \
ca-certificates \
git \
make \
python3 \
wget \
&& \
\


@@ -1,16 +1,9 @@
# This file contains the set of jobs run by the QEMU project:
# https://gitlab.com/qemu-project/qemu/-/pipelines
variables:
RUNNER_TAG: ""
default:
tags:
- $RUNNER_TAG
include:
- local: '/.gitlab-ci.d/base.yml'
- local: '/.gitlab-ci.d/stages.yml'
- local: '/.gitlab-ci.d/edk2.yml'
- local: '/.gitlab-ci.d/opensbi.yml'
- local: '/.gitlab-ci.d/containers.yml'
- local: '/.gitlab-ci.d/crossbuilds.yml'
@@ -18,4 +11,3 @@ include:
- local: '/.gitlab-ci.d/static_checks.yml'
- local: '/.gitlab-ci.d/custom-runners.yml'
- local: '/.gitlab-ci.d/cirrus.yml'
- local: '/.gitlab-ci.d/windows.yml'


@@ -3,5 +3,6 @@
# - test (for test stages, using build artefacts from a build stage)
stages:
- containers
- containers-layer2
- build
- test


@@ -1,94 +1,49 @@
check-patch:
extends: .base_job_template
stage: build
image: python:3.10-alpine
needs: []
image: $CI_REGISTRY_IMAGE/qemu/centos8:latest
needs:
job: amd64-centos8-container
script:
- .gitlab-ci.d/check-patch.py
variables:
GIT_DEPTH: 1000
QEMU_JOB_ONLY_FORKS: 1
before_script:
- apk -U add git perl
allow_failure: true
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
when: never
- when: on_success
allow_failure: true
check-dco:
extends: .base_job_template
stage: build
image: python:3.10-alpine
needs: []
image: $CI_REGISTRY_IMAGE/qemu/centos8:latest
needs:
job: amd64-centos8-container
script: .gitlab-ci.d/check-dco.py
variables:
GIT_DEPTH: 1000
before_script:
- apk -U add git
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
when: never
- when: on_success
check-python-minreqs:
extends: .base_job_template
check-python-pipenv:
stage: test
image: $CI_REGISTRY_IMAGE/qemu/python:$QEMU_CI_CONTAINER_TAG
image: $CI_REGISTRY_IMAGE/qemu/python:latest
script:
- make -C python check-minreqs
- make -C python check-pipenv
variables:
GIT_DEPTH: 1
needs:
job: python-container
check-python-tox:
extends: .base_job_template
stage: test
image: $CI_REGISTRY_IMAGE/qemu/python:$QEMU_CI_CONTAINER_TAG
image: $CI_REGISTRY_IMAGE/qemu/python:latest
script:
- make -C python check-tox
variables:
GIT_DEPTH: 1
QEMU_TOX_EXTRA_ARGS: --skip-missing-interpreters=false
QEMU_JOB_OPTIONAL: 1
needs:
job: python-container
check-rust-tools-nightly:
extends: .base_job_template
stage: test
image: $CI_REGISTRY_IMAGE/qemu/fedora-rust-nightly:$QEMU_CI_CONTAINER_TAG
script:
- source scripts/ci/gitlab-ci-section
- section_start test "Running Rust code checks"
- cd build
- pyvenv/bin/meson devenv -w ../rust ${CARGO-cargo} fmt --check
- make clippy
- make rustdoc
- section_end test
variables:
GIT_DEPTH: 1
allow_failure: true
needs:
- job: build-system-fedora-rust-nightly
artifacts: true
artifacts:
when: on_success
expire_in: 2 days
paths:
- rust/target/doc
check-build-units:
extends: .base_job_template
stage: build
image: $CI_REGISTRY_IMAGE/qemu/debian:$QEMU_CI_CONTAINER_TAG
needs:
job: amd64-debian-container
before_script:
- source scripts/ci/gitlab-ci-section
- section_start setup "Install Tools"
- apt install --assume-yes --no-install-recommends jq
- section_end setup
script:
- mkdir build
- cd build
- section_start configure "Running configure"
- ../configure
- cd ..
- section_end configure
- section_start analyse "Analyse"
- .gitlab-ci.d/check-units.py build/compile_commands.json
- section_end analyse


@@ -1,106 +0,0 @@
msys2-64bit:
extends: .base_job_template
tags:
- saas-windows-medium-amd64
cache:
key: "$CI_JOB_NAME"
paths:
- msys64/var/cache
- ccache
when: always
needs: []
stage: build
timeout: 100m
variables:
# Select the "64 bit, gcc and MSVCRT" MSYS2 environment
MSYSTEM: MINGW64
# This feature doesn't (currently) work with PowerShell; it stops
# the echoing of commands being run and doesn't show any timing
FF_SCRIPT_SECTIONS: 0
CONFIGURE_ARGS: --disable-system --enable-tools -Ddebug=false -Doptimization=0
# The Windows git is a bit older so override the default
GIT_FETCH_EXTRA_FLAGS: --no-tags --prune --quiet
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
expire_in: 7 days
paths:
- build/meson-logs/testlog.txt
reports:
junit: "build/meson-logs/testlog.junit.xml"
before_script:
- Write-Output "Acquiring msys2.exe installer at $(Get-Date -Format u)"
- If ( !(Test-Path -Path msys64\var\cache ) ) {
mkdir msys64\var\cache
}
- Invoke-WebRequest
"https://repo.msys2.org/distrib/msys2-x86_64-latest.sfx.exe.sig"
-outfile "msys2.exe.sig"
- if ( Test-Path -Path msys64\var\cache\msys2.exe.sig ) {
Write-Output "Cached installer sig" ;
if ( ((Get-FileHash msys2.exe.sig).Hash -ne (Get-FileHash msys64\var\cache\msys2.exe.sig).Hash) ) {
Write-Output "Mis-matched installer sig, new installer download required" ;
Remove-Item -Path msys64\var\cache\msys2.exe.sig ;
if ( Test-Path -Path msys64\var\cache\msys2.exe ) {
Remove-Item -Path msys64\var\cache\msys2.exe
}
} else {
Write-Output "Matched installer sig, cached installer still valid"
}
} else {
Write-Output "No cached installer sig, new installer download required" ;
if ( Test-Path -Path msys64\var\cache\msys2.exe ) {
Remove-Item -Path msys64\var\cache\msys2.exe
}
}
- if ( !(Test-Path -Path msys64\var\cache\msys2.exe ) ) {
Write-Output "Fetching latest installer" ;
Invoke-WebRequest
"https://repo.msys2.org/distrib/msys2-x86_64-latest.sfx.exe"
-outfile "msys64\var\cache\msys2.exe" ;
Copy-Item -Path msys2.exe.sig -Destination msys64\var\cache\msys2.exe.sig
} else {
Write-Output "Using cached installer"
}
- Write-Output "Invoking msys2.exe installer at $(Get-Date -Format u)"
- msys64\var\cache\msys2.exe -y
- ((Get-Content -path .\msys64\etc\\post-install\\07-pacman-key.post -Raw)
-replace '--refresh-keys', '--version') |
Set-Content -Path ${CI_PROJECT_DIR}\msys64\etc\\post-install\\07-pacman-key.post
- .\msys64\usr\bin\bash -lc "sed -i 's/^CheckSpace/#CheckSpace/g' /etc/pacman.conf"
- .\msys64\usr\bin\bash -lc 'pacman --noconfirm -Syuu' # Core update
- .\msys64\usr\bin\bash -lc 'pacman --noconfirm -Syuu' # Normal update
- taskkill /F /FI "MODULES eq msys-2.0.dll"
script:
- Write-Output "Installing mingw packages at $(Get-Date -Format u)"
- .\msys64\usr\bin\bash -lc "pacman -Sy --noconfirm --needed
bison diffutils flex
git grep make sed
mingw-w64-x86_64-binutils
mingw-w64-x86_64-ccache
mingw-w64-x86_64-curl
mingw-w64-x86_64-gcc
mingw-w64-x86_64-glib2
mingw-w64-x86_64-libnfs
mingw-w64-x86_64-libssh
mingw-w64-x86_64-ninja
mingw-w64-x86_64-pixman
mingw-w64-x86_64-pkgconf
mingw-w64-x86_64-python
mingw-w64-x86_64-zstd"
- Write-Output "Running build at $(Get-Date -Format u)"
- $env:JOBS = $(.\msys64\usr\bin\bash -lc nproc)
- $env:CHERE_INVOKING = 'yes' # Preserve the current working directory
- $env:MSYS = 'winsymlinks:native' # Enable native Windows symlink
- $env:CCACHE_BASEDIR = "$env:CI_PROJECT_DIR"
- $env:CCACHE_DIR = "$env:CCACHE_BASEDIR/ccache"
- $env:CCACHE_MAXSIZE = "500M"
- $env:CCACHE_DEPEND = 1 # cache misses are too expensive with preprocessor mode
- $env:CC = "ccache gcc"
- mkdir build
- cd build
- ..\msys64\usr\bin\bash -lc "ccache --zero-stats"
- ..\msys64\usr\bin\bash -lc "../configure $CONFIGURE_ARGS"
- ..\msys64\usr\bin\bash -lc "make -j$env:JOBS"
- ..\msys64\usr\bin\bash -lc "make check MTESTARGS='$TEST_ARGS' || { cat meson-logs/testlog.txt; exit 1; } ;"
- ..\msys64\usr\bin\bash -lc "ccache --show-stats"
- Write-Output "Finished build at $(Get-Date -Format u)"


@@ -18,11 +18,11 @@ https://www.qemu.org/contribute/security-process/
-->
## Host environment
- Operating system: <!-- Windows 10 21H1, Fedora 37, etc. -->
- OS/kernel version: <!-- For POSIX hosts, use `uname -a` -->
- Architecture: <!-- x86, ARM, s390x, etc. -->
- QEMU flavor: <!-- qemu-system-x86_64, qemu-aarch64, qemu-img, etc. -->
- QEMU version: <!-- e.g. `qemu-system-x86_64 --version` -->
- Operating system: (Windows 10 21H1, Fedora 34, etc.)
- OS/kernel version: (For POSIX hosts, use `uname -a`)
- Architecture: (x86, ARM, s390x, etc.)
- QEMU flavor: (qemu-system-x86_64, qemu-aarch64, qemu-img, etc.)
- QEMU version: (e.g. `qemu-system-x86_64 --version`)
- QEMU command line:
<!--
Give the smallest, complete command line that exhibits the problem.
@@ -35,9 +35,9 @@ https://www.qemu.org/contribute/security-process/
```
## Emulated/Virtualized environment
- Operating system: <!-- Windows 10 21H1, Fedora 37, etc. -->
- OS/kernel version: <!-- For POSIX guests, use `uname -a`. -->
- Architecture: <!-- x86, ARM, s390x, etc. -->
- Operating system: (Windows 10 21H1, Fedora 34, etc.)
- OS/kernel version: (For POSIX guests, use `uname -a`.)
- Architecture: (x86, ARM, s390x, etc.)
## Description of problem

.gitmodules

@@ -13,6 +13,12 @@
[submodule "roms/qemu-palcode"]
path = roms/qemu-palcode
url = https://gitlab.com/qemu-project/qemu-palcode.git
[submodule "roms/sgabios"]
path = roms/sgabios
url = https://gitlab.com/qemu-project/sgabios.git
[submodule "dtc"]
path = dtc
url = https://gitlab.com/qemu-project/dtc.git
[submodule "roms/u-boot"]
path = roms/u-boot
url = https://gitlab.com/qemu-project/u-boot.git
@@ -22,24 +28,39 @@
[submodule "roms/QemuMacDrivers"]
path = roms/QemuMacDrivers
url = https://gitlab.com/qemu-project/QemuMacDrivers.git
[submodule "ui/keycodemapdb"]
path = ui/keycodemapdb
url = https://gitlab.com/qemu-project/keycodemapdb.git
[submodule "capstone"]
path = capstone
url = https://gitlab.com/qemu-project/capstone.git
[submodule "roms/seabios-hppa"]
path = roms/seabios-hppa
url = https://gitlab.com/qemu-project/seabios-hppa.git
[submodule "roms/u-boot-sam460ex"]
path = roms/u-boot-sam460ex
url = https://gitlab.com/qemu-project/u-boot-sam460ex.git
[submodule "tests/fp/berkeley-testfloat-3"]
path = tests/fp/berkeley-testfloat-3
url = https://gitlab.com/qemu-project/berkeley-testfloat-3.git
[submodule "tests/fp/berkeley-softfloat-3"]
path = tests/fp/berkeley-softfloat-3
url = https://gitlab.com/qemu-project/berkeley-softfloat-3.git
[submodule "roms/edk2"]
path = roms/edk2
url = https://gitlab.com/qemu-project/edk2.git
[submodule "slirp"]
path = slirp
url = https://gitlab.com/qemu-project/libslirp.git
[submodule "roms/opensbi"]
path = roms/opensbi
url = https://gitlab.com/qemu-project/opensbi.git
[submodule "roms/qboot"]
path = roms/qboot
url = https://gitlab.com/qemu-project/qboot.git
[submodule "meson"]
path = meson
url = https://gitlab.com/qemu-project/meson.git
[submodule "roms/vbootrom"]
path = roms/vbootrom
url = https://gitlab.com/qemu-project/vbootrom.git
[submodule "tests/lcitool/libvirt-ci"]
path = tests/lcitool/libvirt-ci
url = https://gitlab.com/libvirt/libvirt-ci.git


@@ -28,94 +28,47 @@ Thiemo Seufer <ths@networkno.de> ths <ths@c046a42c-6fe2-441c-8c8c-71466251a162>
malc <av1474@comtv.ru> malc <malc@c046a42c-6fe2-441c-8c8c-71466251a162>
# Corrupted Author fields
Aaron Larson <alarson@ddci.com> alarson@ddci.com
Andreas Färber <andreas.faerber@web.de> Andreas Färber <andreas.faerber>
fanwenjie <fanwj@mail.ustc.edu.cn> fanwj@mail.ustc.edu.cn <fanwj@mail.ustc.edu.cn>
Jason Wang <jasowang@redhat.com> Jason Wang <jasowang>
Marek Dolata <mkdolata@us.ibm.com> mkdolata@us.ibm.com <mkdolata@us.ibm.com>
Michael Ellerman <mpe@ellerman.id.au> michael@ozlabs.org <michael@ozlabs.org>
Nick Hudson <hnick@vmware.com> hnick@vmware.com <hnick@vmware.com>
Timothée Cocault <timothee.cocault@gmail.com> timothee.cocault@gmail.com <timothee.cocault@gmail.com>
Stefan Weil <sw@weilnetz.de> <weil@mail.berlios.de>
Stefan Weil <sw@weilnetz.de> Stefan Weil <stefan@kiwi.(none)>
# There is also a:
# (no author) <(no author)@c046a42c-6fe2-441c-8c8c-71466251a162>
# for the cvs2svn initialization commit e63c3dc74bf.
# Next, translate a few commits where mailman rewrote the From: line due
# to strict SPF and DMARC. Usually, our build process should be flagging
# commits like these before maintainer merges; if you find the need to add
# a line here, please also report a bug against the part of the build
# process that let the mis-attribution slip through in the first place.
#
# If the mailing list munges your emails, use:
# git config sendemail.from '"Your Name" <your.email@example.com>'
# the use of "" in that line will differ from the typically unquoted
# 'git config user.name', which in turn is sufficient for 'git send-email'
# to add an extra From: line in the body of your email that takes
# precedence over any munged From: in the mail's headers.
# See https://lists.openembedded.org/g/openembedded-core/message/166515
# and https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg06784.html
# to strict SPF, although we prefer to avoid adding more entries like that.
Ed Swierk <eswierk@skyportsystems.com> Ed Swierk via Qemu-devel <qemu-devel@nongnu.org>
Ian McKellar <ianloic@google.com> Ian McKellar via Qemu-devel <qemu-devel@nongnu.org>
Julia Suvorova <jusual@mail.ru> Julia Suvorova via Qemu-devel <qemu-devel@nongnu.org>
Justin Terry (VM) <juterry@microsoft.com> Justin Terry (VM) via Qemu-devel <qemu-devel@nongnu.org>
Stefan Weil <sw@weilnetz.de> Stefan Weil via <qemu-devel@nongnu.org>
Stefan Weil <sw@weilnetz.de> Stefan Weil via <qemu-trivial@nongnu.org>
Andrey Drobyshev <andrey.drobyshev@virtuozzo.com> Andrey Drobyshev via <qemu-block@nongnu.org>
BALATON Zoltan <balaton@eik.bme.hu> BALATON Zoltan via <qemu-ppc@nongnu.org>
# Next, replace old addresses by a more recent one.
Akihiko Odaki <akihiko.odaki@daynix.com> <akihiko.odaki@gmail.com>
Aleksandar Markovic <aleksandar.qemu.devel@gmail.com> <aleksandar.markovic@mips.com>
Aleksandar Markovic <aleksandar.qemu.devel@gmail.com> <aleksandar.markovic@imgtec.com>
Aleksandar Markovic <aleksandar.qemu.devel@gmail.com> <amarkovic@wavecomp.com>
Aleksandar Rikalo <aleksandar.rikalo@syrmia.com> <arikalo@wavecomp.com>
Aleksandar Rikalo <aleksandar.rikalo@syrmia.com> <aleksandar.rikalo@rt-rk.com>
Alexander Graf <agraf@csgraf.de> <agraf@suse.de>
Ani Sinha <anisinha@redhat.com> <ani@anisinha.ca>
Anthony Liguori <anthony@codemonkey.ws> Anthony Liguori <aliguori@us.ibm.com>
Brian Cain <brian.cain@oss.qualcomm.com> <bcain@quicinc.com>
Brian Cain <brian.cain@oss.qualcomm.com> <quic_bcain@quicinc.com>
Christian Borntraeger <borntraeger@linux.ibm.com> <borntraeger@de.ibm.com>
Damien Hedde <damien.hedde@dahe.fr> <damien.hedde@greensocs.com>
Filip Bozuta <filip.bozuta@syrmia.com> <filip.bozuta@rt-rk.com.com>
Frederic Konrad <konrad.frederic@yahoo.fr> <fred.konrad@greensocs.com>
Frederic Konrad <konrad.frederic@yahoo.fr> <konrad@adacore.com>
Frederic Konrad <konrad@adacore.com> <fred.konrad@greensocs.com>
Greg Kurz <groug@kaod.org> <gkurz@linux.vnet.ibm.com>
Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com>
Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
Juan Quintela <quintela@trasno.org> <quintela@redhat.com>
Leif Lindholm <leif.lindholm@oss.qualcomm.com> <quic_llindhol@quicinc.com>
Leif Lindholm <leif.lindholm@oss.qualcomm.com> <leif.lindholm@linaro.org>
Leif Lindholm <leif.lindholm@oss.qualcomm.com> <leif@nuviainc.com>
Luc Michel <luc@lmichel.fr> <luc.michel@git.antfield.fr>
Luc Michel <luc@lmichel.fr> <luc.michel@greensocs.com>
Luc Michel <luc@lmichel.fr> <lmichel@kalray.eu>
Leif Lindholm <leif@nuviainc.com> <leif.lindholm@linaro.org>
Radoslaw Biernacki <rad@semihalf.com> <radoslaw.biernacki@linaro.org>
Paul Brook <paul@nowt.org> <paul@codesourcery.com>
Paul Burton <paulburton@kernel.org> <paul.burton@mips.com>
Paul Burton <paulburton@kernel.org> <paul.burton@imgtec.com>
Paul Burton <paulburton@kernel.org> <paul@archlinuxmips.org>
Paul Burton <paulburton@kernel.org> <pburton@wavecomp.com>
Philippe Mathieu-Daudé <philmd@linaro.org> <f4bug@amsat.org>
Philippe Mathieu-Daudé <philmd@linaro.org> <philmd@redhat.com>
Philippe Mathieu-Daudé <philmd@linaro.org> <philmd@fungible.com>
Roman Bolshakov <rbolshakov@ddn.com> <r.bolshakov@yadro.com>
Sriram Yagnaraman <sriram.yagnaraman@ericsson.com> <sriram.yagnaraman@est.tech>
Stefan Brankovic <stefan.brankovic@syrmia.com> <stefan.brankovic@rt-rk.com.com>
Stefan Weil <sw@weilnetz.de> Stefan Weil <stefan@weilnetz.de>
Taylor Simpson <ltaylorsimpson@gmail.com> <tsimpson@quicinc.com>
Yongbok Kim <yongbok.kim@mips.com> <yongbok.kim@imgtec.com>
# Also list preferred name forms where people have changed their
# git author config, or had utf8/latin1 encoding issues.
Aaron Lindsay <aaron@os.amperecomputing.com>
Aaron Larson <alarson@ddci.com>
Alexey Gerasimenko <x1917x@gmail.com>
Alex Chen <alex.chen@huawei.com>
Alex Ivanov <void@aleksoft.net>
Andreas Färber <afaerber@suse.de>
Bandan Das <bsd@redhat.com>
@@ -146,11 +99,9 @@ Gautham R. Shenoy <ego@in.ibm.com>
Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Gonglei (Arei) <arei.gonglei@huawei.com>
Guang Wang <wang.guang55@zte.com.cn>
Haibin Zhang <haibinzhang@tencent.com>
Hailiang Zhang <zhang.zhanghailiang@huawei.com>
Hanna Reitz <hreitz@redhat.com> <mreitz@redhat.com>
Hervé Poussineau <hpoussin@reactos.org>
Hyman Huang <huangy81@chinatelecom.cn>
Jakub Jermář <jakub@jermar.eu>
Jakub Jermář <jakub.jermar@kernkonzept.com>
Jean-Christophe Dubois <jcd@tribudubois.net>
@@ -184,11 +135,9 @@ Nicholas Thomas <nick@bytemark.co.uk>
Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Orit Wasserman <owasserm@redhat.com>
Paolo Bonzini <pbonzini@redhat.com>
Pan Nengyuan <pannengyuan@huawei.com>
Pavel Dovgaluk <dovgaluk@ispras.ru>
Pavel Dovgaluk <pavel.dovgaluk@gmail.com>
Pavel Dovgaluk <Pavel.Dovgaluk@ispras.ru>
Peter Chubb <peter.chubb@nicta.com.au>
Peter Crosthwaite <crosthwaite.peter@gmail.com>
Peter Crosthwaite <peter.crosthwaite@petalogix.com>
Peter Crosthwaite <peter.crosthwaite@xilinx.com>


@@ -5,21 +5,16 @@
# Required
version: 2
# Set the version of Python and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "3.11"
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/conf.py
# We recommend specifying your dependencies to enable reproducible builds:
# https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
python:
install:
- requirements: docs/requirements.txt
# We want all the document formats
formats: all
# For consistency, we require that QEMU's Sphinx extensions
# run with at least the same minimum version of Python that
# we require for other Python in our codebase (our conf.py
# enforces this, and some code needs it.)
python:
version: 3.6


@@ -1,5 +1,8 @@
# The current Travis default is a VM-based 16.04 Xenial on GCE
# Additional builds with specific requirements for a full VM need to
# be added as additional matrix: entries later on
os: linux
dist: jammy
dist: focal
language: c
compiler:
- gcc
@@ -7,11 +10,50 @@ cache:
# There is one cache per branch and compiler version.
# characteristics of each job are used to identify the cache:
# - OS name (currently only linux)
# - OS distribution (e.g. "jammy" for Linux)
# - OS distribution (for Linux, bionic or focal)
# - Names and values of visible environment variables set in .travis.yml or Settings panel
timeout: 1200
ccache: true
pip: true
directories:
- $HOME/avocado/data/cache
addons:
apt:
packages:
# Build dependencies
- libaio-dev
- libattr1-dev
- libbrlapi-dev
- libcap-ng-dev
- libcacard-dev
- libgcc-7-dev
- libgnutls28-dev
- libgtk-3-dev
- libiscsi-dev
- liblttng-ust-dev
- libncurses5-dev
- libnfs-dev
- libpixman-1-dev
- libpng-dev
- librados-dev
- libsdl2-dev
- libsdl2-image-dev
- libseccomp-dev
- libspice-protocol-dev
- libspice-server-dev
- libssh-dev
- liburcu-dev
- libusb-1.0-0-dev
- libvdeplug-dev
- libvte-2.91-dev
- libzstd-dev
- ninja-build
- sparse
- uuid-dev
# Tests dependencies
- genisoimage
# The channel name "irc.oftc.net#qemu" is encrypted against qemu/qemu
@@ -32,8 +74,8 @@ env:
- BASE_CONFIG="--disable-docs --disable-tools"
- TEST_BUILD_CMD=""
- TEST_CMD="make check V=1"
# This is broadly a list of "mainline" system targets which have support across the major distros
- MAIN_SYSTEM_TARGETS="aarch64-softmmu,mips64-softmmu,ppc64-softmmu,riscv64-softmmu,s390x-softmmu,x86_64-softmmu"
# This is broadly a list of "mainline" softmmu targets which have support across the major distros
- MAIN_SOFTMMU_TARGETS="aarch64-softmmu,mips64-softmmu,ppc64-softmmu,riscv64-softmmu,s390x-softmmu,x86_64-softmmu"
- CCACHE_SLOPPINESS="include_file_ctime,include_file_mtime"
- CCACHE_MAXSIZE=1G
- G_MESSAGES_DEBUG=error
@@ -81,6 +123,7 @@ jobs:
- name: "[aarch64] GCC check-tcg"
arch: arm64
dist: focal
addons:
apt_packages:
- libaio-dev
@@ -88,7 +131,6 @@ jobs:
- libbrlapi-dev
- libcacard-dev
- libcap-ng-dev
- libfdt-dev
- libgcrypt20-dev
- libgnutls28-dev
- libgtk-3-dev
@@ -106,17 +148,16 @@ jobs:
- libvdeplug-dev
- libvte-2.91-dev
- ninja-build
- python3-tomli
# Tests dependencies
- genisoimage
env:
- TEST_CMD="make check check-tcg V=1"
- CONFIG="--disable-containers --enable-fdt=system
--target-list=${MAIN_SYSTEM_TARGETS} --cxx=/bin/false"
- CONFIG="--disable-containers --target-list=${MAIN_SOFTMMU_TARGETS} --cxx=/bin/false"
- UNRELIABLE=true
- name: "[ppc64] Clang check-tcg"
- name: "[ppc64] GCC check-tcg"
arch: ppc64le
compiler: clang
dist: focal
addons:
apt_packages:
- libaio-dev
@@ -124,7 +165,6 @@ jobs:
- libbrlapi-dev
- libcacard-dev
- libcap-ng-dev
- libfdt-dev
- libgcrypt20-dev
- libgnutls28-dev
- libgtk-3-dev
@@ -142,16 +182,15 @@ jobs:
- libvdeplug-dev
- libvte-2.91-dev
- ninja-build
- python3-tomli
# Tests dependencies
- genisoimage
env:
- TEST_CMD="make check check-tcg V=1"
- CONFIG="--disable-containers --enable-fdt=system
--target-list=ppc64-softmmu,ppc64le-linux-user"
- CONFIG="--disable-containers --target-list=ppc64-softmmu,ppc64le-linux-user"
- name: "[s390x] GCC check-tcg"
arch: s390x
dist: bionic
addons:
apt_packages:
- libaio-dev
@@ -159,7 +198,6 @@ jobs:
- libbrlapi-dev
- libcacard-dev
- libcap-ng-dev
- libfdt-dev
- libgcrypt20-dev
- libgnutls28-dev
- libgtk-3-dev
@@ -177,33 +215,31 @@ jobs:
- libvdeplug-dev
- libvte-2.91-dev
- ninja-build
- python3-tomli
# Tests dependencies
- genisoimage
env:
- TEST_CMD="make check check-tcg V=1"
- CONFIG="--disable-containers
--target-list=hppa-softmmu,mips64-softmmu,ppc64-softmmu,riscv64-softmmu,s390x-softmmu,x86_64-softmmu"
- CONFIG="--disable-containers --target-list=${MAIN_SOFTMMU_TARGETS},s390x-linux-user"
- UNRELIABLE=true
script:
- BUILD_RC=0 && make -j${JOBS} || BUILD_RC=$?
- |
if [ "$BUILD_RC" -eq 0 ] ; then
mv pc-bios/s390-ccw/*.img qemu-bundle/usr/local/share/qemu ;
mv pc-bios/s390-ccw/*.img pc-bios/ ;
${TEST_CMD} ;
else
$(exit $BUILD_RC);
fi
- name: "[s390x] Clang (other-system)"
- name: "[s390x] GCC (other-softmmu)"
arch: s390x
compiler: clang
dist: bionic
addons:
apt_packages:
- libaio-dev
- libattr1-dev
- libcacard-dev
- libcap-ng-dev
- libfdt-dev
- libgnutls28-dev
- libiscsi-dev
- liblttng-ust-dev
@@ -217,31 +253,28 @@ jobs:
- libsnappy-dev
- libzstd-dev
- nettle-dev
- xfslibs-dev
- ninja-build
- python3-tomli
# Tests dependencies
- genisoimage
env:
- CONFIG="--disable-containers --audio-drv-list=sdl --disable-user
--target-list=arm-softmmu,avr-softmmu,microblaze-softmmu,sh4eb-softmmu,sparc64-softmmu,xtensaeb-softmmu"
--target-list-exclude=${MAIN_SOFTMMU_TARGETS}"
- name: "[s390x] GCC (user)"
arch: s390x
dist: bionic
addons:
apt_packages:
- libgcrypt20-dev
- libglib2.0-dev
- libgnutls28-dev
- ninja-build
- flex
- bison
- python3-tomli
env:
- TEST_CMD="make check check-tcg V=1"
- CONFIG="--disable-containers --disable-system"
- name: "[s390x] Clang (disable-tcg)"
arch: s390x
dist: bionic
compiler: clang
addons:
apt_packages:
@@ -250,7 +283,6 @@ jobs:
- libbrlapi-dev
- libcacard-dev
- libcap-ng-dev
- libfdt-dev
- libgcrypt20-dev
- libgnutls28-dev
- libgtk-3-dev
@@ -268,8 +300,31 @@ jobs:
- libvdeplug-dev
- libvte-2.91-dev
- ninja-build
- python3-tomli
env:
- TEST_CMD="make check-unit"
- CONFIG="--disable-containers --disable-tcg --enable-kvm --disable-tools
--enable-fdt=system --host-cc=clang --cxx=clang++"
- CONFIG="--disable-containers --disable-tcg --enable-kvm
--disable-tools --host-cc=clang --cxx=clang++"
- UNRELIABLE=true
# Release builds
# The make-release script expects a QEMU version, so our tag must start with a 'v'.
# This is the case when release candidate tags are created.
- name: "Release tarball"
if: tag IS present AND tag =~ /^v\d+\.\d+(\.\d+)?(-\S*)?$/
env:
# We want to build from the release tarball
- BUILD_DIR="release/build/dir" SRC_DIR="../../.."
- BASE_CONFIG="--prefix=$PWD/dist"
- CONFIG="--target-list=x86_64-softmmu,aarch64-softmmu,armeb-linux-user,ppc-linux-user"
- TEST_CMD="make install -j${JOBS}"
- QEMU_VERSION="${TRAVIS_TAG:1}"
- CACHE_NAME="${TRAVIS_BRANCH}-linux-gcc-default"
script:
- make -C ${SRC_DIR} qemu-${QEMU_VERSION}.tar.bz2
- ls -l ${SRC_DIR}/qemu-${QEMU_VERSION}.tar.bz2
- tar -xf ${SRC_DIR}/qemu-${QEMU_VERSION}.tar.bz2 && cd qemu-${QEMU_VERSION}
- mkdir -p release-build && cd release-build
- ../configure ${BASE_CONFIG} ${CONFIG} || { cat config.log meson-logs/meson-log.txt && exit 1; }
- make install
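One detail of the env block above worth spelling out: the job only runs for tags of the form v<major>.<minor>[.<micro>], and QEMU_VERSION is derived with the shell substring expansion ${TRAVIS_TAG:1}, which drops the leading 'v'. A quick sketch (the tag value is only an example):

env:
  - QEMU_VERSION="${TRAVIS_TAG:1}"   # TRAVIS_TAG=v6.1.1 yields QEMU_VERSION=6.1.1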
allow_failures:
- env: UNRELIABLE=true


@@ -4,4 +4,3 @@ source accel/Kconfig
source target/Kconfig
source hw/Kconfig
source semihosting/Kconfig
source rust/Kconfig


@@ -5,21 +5,12 @@
config LINUX
bool
config LIBCBOR
bool
config GNUTLS
bool
config OPENGL
bool
config X11
bool
config PIXMAN
bool
config SPICE
bool
@@ -29,38 +20,24 @@ config IVSHMEM
config TPM
bool
config FDT
bool
config VHOST_USER
bool
select VHOST
config VHOST_VDPA
bool
select VHOST
config VHOST_KERNEL
bool
select VHOST
config VIRTFS
bool
config PVRDMA
bool
config MULTIPROCESS_ALLOWED
bool
imply MULTIPROCESS
config FUZZ
bool
select SPARSE_MEM
config VFIO_USER_SERVER_ALLOWED
bool
imply VFIO_USER_SERVER
config HV_BALLOON_POSSIBLE
bool
config HAVE_RUST
bool
config MAC_PVG
bool

File diff suppressed because it is too large

Makefile

@@ -26,9 +26,9 @@ quiet-command-run = $(if $(V),,$(if $2,printf " %-7s %s\n" $2 $3 && ))$1
quiet-@ = $(if $(V),,@)
quiet-command = $(quiet-@)$(call quiet-command-run,$1,$2,$3)
UNCHECKED_GOALS := TAGS gtags cscope ctags dist \
UNCHECKED_GOALS := %clean TAGS cscope ctags dist \
help check-help print-% \
docker docker-% lcitool-refresh vm-help vm-test vm-build-%
docker docker-% vm-help vm-test vm-build-%
all:
.PHONY: all clean distclean recurse-all dist msi FORCE
@@ -42,8 +42,17 @@ configure: ;
ifneq ($(wildcard config-host.mak),)
include config-host.mak
include Makefile.prereqs
Makefile.prereqs: config-host.mak
git-submodule-update:
.git-submodule-status: git-submodule-update config-host.mak
Makefile: .git-submodule-status
.PHONY: git-submodule-update
git-submodule-update:
ifneq ($(GIT_SUBMODULES_ACTION),ignore)
$(call quiet-command, \
(GIT="$(GIT)" "$(SRC_PATH)/scripts/git-submodule.sh" $(GIT_SUBMODULES_ACTION) $(GIT_SUBMODULES)), \
"GIT","$(GIT_SUBMODULES)")
endif
# 0. ensure the build tree is okay
@@ -78,23 +87,21 @@ x := $(shell rm -rf meson-private meson-info meson-logs)
endif
# 1. ensure config-host.mak is up-to-date
config-host.mak: $(SRC_PATH)/configure $(SRC_PATH)/scripts/meson-buildoptions.sh \
$(SRC_PATH)/pythondeps.toml $(SRC_PATH)/VERSION
config-host.mak: $(SRC_PATH)/configure $(SRC_PATH)/pc-bios $(SRC_PATH)/VERSION
@echo config-host.mak is out-of-date, running configure
@if test -f meson-private/coredata.dat; then \
./config.status --skip-meson; \
else \
./config.status; \
./config.status && touch build.ninja.stamp; \
fi
# 2. meson.stamp exists if meson has run at least once (so ninja reconfigure
# works), but otherwise never needs to be updated
meson-private/coredata.dat: meson.stamp
meson.stamp: config-host.mak
@touch meson.stamp
# 3. ensure meson-generated build files are up-to-date
# 3. ensure generated build files are up-to-date
ifneq ($(NINJA),)
Makefile.ninja: build.ninja
@@ -105,32 +112,18 @@ Makefile.ninja: build.ninja
$(NINJA) -t query build.ninja | sed -n '1,/^ input:/d; /^ outputs:/q; s/$$/ \\/p'; \
} > $@.tmp && mv $@.tmp $@
-include Makefile.ninja
endif
ifneq ($(MESON),)
# The path to meson always points to pyvenv/bin/meson, but the absolute
# paths could change. In that case, force a regeneration of build.ninja.
# Note that this invocation of $(NINJA), just like when Make rebuilds
# Makefiles, does not include -n.
# A separate rule is needed for Makefile dependencies to avoid -n
build.ninja: build.ninja.stamp
$(build-files):
build.ninja.stamp: meson.stamp $(build-files)
@if test "$$(cat build.ninja.stamp)" = "$(MESON)" && test -n "$(NINJA)"; then \
$(NINJA) build.ninja; \
else \
echo "$(MESON) setup --reconfigure $(SRC_PATH)"; \
$(MESON) setup --reconfigure $(SRC_PATH); \
fi && echo "$(MESON)" > $@
$(NINJA) $(if $V,-v,) build.ninja && touch $@
endif
ifneq ($(MESON),)
Makefile.mtest: build.ninja scripts/mtest2make.py
$(MESON) introspect --targets --tests --benchmarks | $(PYTHON) scripts/mtest2make.py > $@
-include Makefile.mtest
.PHONY: update-buildoptions
all update-buildoptions: $(SRC_PATH)/scripts/meson-buildoptions.sh
$(SRC_PATH)/scripts/meson-buildoptions.sh: $(SRC_PATH)/meson_options.txt
$(MESON) introspect --buildoptions $(SRC_PATH)/meson.build | $(PYTHON) \
scripts/meson-buildoptions.py > $@.tmp && mv $@.tmp $@
endif
# 4. Rules to bridge to other makefiles
@@ -142,18 +135,13 @@ MAKE.n = $(findstring n,$(firstword $(filter-out --%,$(MAKEFLAGS))))
MAKE.k = $(findstring k,$(firstword $(filter-out --%,$(MAKEFLAGS))))
MAKE.q = $(findstring q,$(firstword $(filter-out --%,$(MAKEFLAGS))))
MAKE.nq = $(if $(word 2, $(MAKE.n) $(MAKE.q)),nq)
NINJAFLAGS = \
$(if $V,-v) \
$(if $(MAKE.n), -n) \
$(if $(MAKE.k), -k0) \
$(filter-out -j, \
$(or $(filter -l% -j%, $(MAKEFLAGS)), \
$(if $(filter --jobserver-auth=%, $(MAKEFLAGS)),, -j1))) \
-d keepdepfile
ninja-cmd-goals = $(or $(MAKECMDGOALS), all)
ninja-cmd-goals += $(foreach g, $(MAKECMDGOALS), $(.ninja-goals.$g))
NINJAFLAGS = $(if $V,-v) $(if $(MAKE.n), -n) $(if $(MAKE.k), -k0) \
$(filter-out -j, $(lastword -j1 $(filter -l% -j%, $(MAKEFLAGS)))) \
makefile-targets := build.ninja ctags TAGS cscope dist clean
ninja-cmd-goals = $(or $(MAKECMDGOALS), all)
ninja-cmd-goals += $(foreach t, $(.tests), $(.test.deps.$t))
makefile-targets := build.ninja ctags TAGS cscope dist clean uninstall
# "ninja -t targets" also lists all prerequisites. If build system
# files are marked as PHONY, however, Make will always try to execute
# "ninja build.ninja".
@@ -165,14 +153,27 @@ $(ninja-targets): run-ninja
# --output-sync line.
run-ninja: config-host.mak
ifneq ($(filter $(ninja-targets), $(ninja-cmd-goals)),)
+$(if $(MAKE.nq),@:,$(quiet-@)$(NINJA) $(NINJAFLAGS) \
$(sort $(filter $(ninja-targets), $(ninja-cmd-goals))) | cat)
+$(quiet-@)$(if $(MAKE.nq),@:, $(NINJA) -d keepdepfile \
$(NINJAFLAGS) $(sort $(filter $(ninja-targets), $(ninja-cmd-goals))) | cat)
endif
endif
# Force configure to re-run if the API symbols are updated
ifeq ($(CONFIG_PLUGIN),y)
config-host.mak: $(SRC_PATH)/plugins/qemu-plugins.symbols
.PHONY: plugins
plugins:
$(call quiet-command,\
$(MAKE) $(SUBDIR_MAKEFLAGS) -C contrib/plugins V="$(V)", \
"BUILD", "example plugins")
endif # $(CONFIG_PLUGIN)
else # config-host.mak does not exist
config-host.mak:
ifneq ($(filter-out $(UNCHECKED_GOALS),$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
$(error Please call configure before running make)
@echo "Please call configure before running make!"
@exit 1
endif
endif # config-host.mak does not exist
@@ -182,52 +183,53 @@ include $(SRC_PATH)/tests/Makefile.include
all: recurse-all
SUBDIR_RULES=$(foreach t, all clean distclean, $(addsuffix /$(t), $(SUBDIRS)))
.PHONY: $(SUBDIR_RULES)
$(SUBDIR_RULES):
ROM_DIRS = $(addprefix pc-bios/, $(ROMS))
ROM_DIRS_RULES=$(foreach t, all clean, $(addsuffix /$(t), $(ROM_DIRS)))
# Only keep -O and -g cflags
.PHONY: $(ROM_DIRS_RULES)
$(ROM_DIRS_RULES):
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C $(dir $@) V="$(V)" TARGET_DIR="$(dir $@)" $(notdir $@),)
.PHONY: recurse-all recurse-clean
recurse-all: $(addsuffix /all, $(SUBDIRS))
recurse-clean: $(addsuffix /clean, $(SUBDIRS))
recurse-distclean: $(addsuffix /distclean, $(SUBDIRS))
recurse-all: $(addsuffix /all, $(ROM_DIRS))
recurse-clean: $(addsuffix /clean, $(ROM_DIRS))
######################################################################
clean: recurse-clean
-$(quiet-@)test -f build.ninja && $(NINJA) $(NINJAFLAGS) -t clean || :
-$(quiet-@)test -f build.ninja && $(NINJA) $(NINJAFLAGS) clean-ctlist || :
find . \( -name '*.so' -o -name '*.dll' -o \
-name '*.[oda]' -o -name '*.gcno' \) -type f \
# avoid old build problems by removing potentially incorrect old files
rm -f config.mak op-i386.h opc-i386.h gen-op-i386.h op-arm.h opc-arm.h gen-op-arm.h
find . \( -name '*.so' -o -name '*.dll' -o -name '*.[oda]' \) -type f \
! -path ./roms/edk2/ArmPkg/Library/GccLto/liblto-aarch64.a \
! -path ./roms/edk2/ArmPkg/Library/GccLto/liblto-arm.a \
-exec rm {} +
rm -f TAGS cscope.* *~ */*~
@$(MAKE) -Ctests/qemu-iotests clean
rm -f TAGS cscope.* *.pod *~ */*~
rm -f fsdev/*.pod scsi/*.pod
VERSION = $(shell cat $(SRC_PATH)/VERSION)
dist: qemu-$(VERSION).tar.xz
dist: qemu-$(VERSION).tar.bz2
qemu-%.tar.xz:
$(SRC_PATH)/scripts/make-release "$(SRC_PATH)" "$(patsubst qemu-%.tar.xz,%,$@)"
qemu-%.tar.bz2:
$(SRC_PATH)/scripts/make-release "$(SRC_PATH)" "$(patsubst qemu-%.tar.bz2,%,$@)"
distclean: clean recurse-distclean
distclean: clean
-$(quiet-@)test -f build.ninja && $(NINJA) $(NINJAFLAGS) -t clean -g || :
rm -f config-host.mak Makefile.prereqs
rm -f tests/tcg/*/config-target.mak tests/tcg/config-host.mak
rm -f config.status
rm -f roms/seabios/config.mak
rm -f config-host.mak config-host.h* config-poison.h
rm -f tests/tcg/config-*.mak
rm -f config-all-disas.mak config.status
rm -f roms/seabios/config.mak roms/vgabios/config.mak
rm -f qemu-plugins-ld.symbols qemu-plugins-ld64.symbols
rm -f *-config-target.h *-config-devices.mak *-config-devices.h
rm -rf meson-private meson-logs meson-info compile_commands.json
rm -f Makefile.ninja Makefile.mtest build.ninja.stamp meson.stamp
rm -f config.log
rm -f linux-headers/asm
rm -Rf .sdk qemu-bundle
rm -Rf .sdk
find-src-path = find "$(SRC_PATH)" -path "$(SRC_PATH)/meson" -prune -o \
-type l -prune -o \( -name "*.[chsS]" -o -name "*.[ch].inc" \)
find-src-path = find "$(SRC_PATH)/" -path "$(SRC_PATH)/meson" -prune -o \( -name "*.[chsS]" -o -name "*.[ch].inc" \)
.PHONY: ctags
ctags:
@@ -248,7 +250,7 @@ gtags:
"GTAGS", "Remove old $@ files")
$(call quiet-command, \
(cd $(SRC_PATH) && \
$(find-src-path) -print | gtags -f -), \
$(find-src-path) | gtags -f -), \
"GTAGS", "Re-index $(SRC_PATH)")
.PHONY: TAGS
@@ -278,20 +280,12 @@ cscope:
# Needed by "meson install"
export DESTDIR
include $(SRC_PATH)/tests/lcitool/Makefile.include
include $(SRC_PATH)/tests/docker/Makefile.include
include $(SRC_PATH)/tests/vm/Makefile.include
print-help-run = printf " %-30s - %s\\n" "$1" "$2"
print-help = @$(call print-help-run,$1,$2)
.PHONY: update-linux-vdso
update-linux-vdso:
@for m in $(SRC_PATH)/linux-user/*/Makefile.vdso; do \
$(MAKE) $(SUBDIR_MAKEFLAGS) -C $$(dirname $$m) -f Makefile.vdso \
SRC_PATH=$(SRC_PATH) BUILD_DIR=$(BUILD_DIR); \
done
.PHONY: help
help:
@echo 'Generic targets:'
@@ -302,25 +296,26 @@ help:
$(call print-help,cscope,Generate cscope index)
$(call print-help,sparse,Run sparse on the QEMU source)
@echo ''
ifeq ($(CONFIG_PLUGIN),y)
@echo 'Plugin targets:'
$(call print-help,plugins,Build the example TCG plugins)
@echo ''
endif
@echo 'Cleaning targets:'
$(call print-help,clean,Remove most generated files but keep the config)
$(call print-help,distclean,Remove all generated files)
$(call print-help,dist,Build a distributable tarball)
@echo ''
@echo 'Linux-user targets:'
$(call print-help,update-linux-vdso,Build linux-user vdso images)
@echo ''
@echo 'Test targets:'
$(call print-help,check,Run all tests (check-help for details))
$(call print-help,bench,Run all benchmarks)
$(call print-help,lcitool-help,Help about targets for managing build environment manifests)
$(call print-help,docker-help,Help about targets running tests inside containers)
$(call print-help,vm-help,Help about targets running tests inside VM)
@echo ''
@echo 'Documentation targets:'
$(call print-help,html man,Build documentation in specified format)
@echo ''
ifneq ($(filter msi, $(ninja-targets)),)
ifdef CONFIG_WIN32
@echo 'Windows targets:'
$(call print-help,installer,Build NSIS-based installer for QEMU)
$(call print-help,msi,Build MSI-based installer for qemu-ga)


@@ -39,7 +39,7 @@ Documentation can be found hosted online at
current development version that is available at
`<https://www.qemu.org/docs/master/>`_ is generated from the ``docs/``
folder in the source tree, and is built by `Sphinx
<https://www.sphinx-doc.org/en/master/>`_.
<https://www.sphinx-doc.org/en/master/>_`.
Building
@@ -59,9 +59,9 @@ of other UNIX targets. The simple steps to build QEMU are:
Additional information can also be found online via the QEMU website:
* `<https://wiki.qemu.org/Hosts/Linux>`_
* `<https://wiki.qemu.org/Hosts/Mac>`_
* `<https://wiki.qemu.org/Hosts/W32>`_
* `<https://qemu.org/Hosts/Linux>`_
* `<https://qemu.org/Hosts/Mac>`_
* `<https://qemu.org/Hosts/W32>`_
Submitting patches
@@ -78,14 +78,14 @@ format-patch' and/or 'git send-email' to format & send the mail to the
qemu-devel@nongnu.org mailing list. All patches submitted must contain
a 'Signed-off-by' line from the author. Patches should follow the
guidelines set out in the `style section
<https://www.qemu.org/docs/master/devel/style.html>`_ of
<https://www.qemu.org/docs/master/devel/style.html>` of
the Developers Guide.
Additional information on submitting patches can be found online via
the QEMU website:
the QEMU website
* `<https://wiki.qemu.org/Contribute/SubmitAPatch>`_
* `<https://wiki.qemu.org/Contribute/TrivialPatches>`_
* `<https://qemu.org/Contribute/SubmitAPatch>`_
* `<https://qemu.org/Contribute/TrivialPatches>`_
The QEMU website is also maintained under source control.
@@ -102,7 +102,7 @@ requires a working 'git send-email' setup, and by default doesn't
automate everything, so you may want to go through the above steps
manually for once.
For installation instructions, please go to:
For installation instructions, please go to
* `<https://github.com/stefanha/git-publish>`_
@@ -144,7 +144,7 @@ reported via GitLab.
For additional information on bug reporting consult:
* `<https://wiki.qemu.org/Contribute/ReportABug>`_
* `<https://qemu.org/Contribute/ReportABug>`_
ChangeLog
@@ -159,7 +159,7 @@ Contact
=======
The QEMU community can be contacted in a number of ways, with the two
main methods being email and IRC:
main methods being email and IRC
* `<mailto:qemu-devel@nongnu.org>`_
* `<https://lists.nongnu.org/mailman/listinfo/qemu-devel>`_
@@ -168,4 +168,4 @@ main methods being email and IRC:
Information on additional methods of contacting the community can be
found online via the QEMU website:
* `<https://wiki.qemu.org/Contribute/StartHere>`_
* `<https://qemu.org/Contribute/StartHere>`_


@@ -1 +1 @@
9.2.92
6.1.1


@@ -4,6 +4,9 @@ config WHPX
config NVMM
bool
config HAX
bool
config HVF
bool
@@ -16,5 +19,3 @@ config KVM
config XEN
bool
select FSDEV_9P if VIRTFS
select PCI_EXPRESS_GENERIC_BRIDGE
select XEN_BUS


@@ -1,155 +0,0 @@
/*
* Lock to inhibit accelerator ioctls
*
* Copyright (c) 2022 Red Hat Inc.
*
* Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu/lockcnt.h"
#include "qemu/thread.h"
#include "qemu/main-loop.h"
#include "hw/core/cpu.h"
#include "system/accel-blocker.h"
static QemuLockCnt accel_in_ioctl_lock;
static QemuEvent accel_in_ioctl_event;
void accel_blocker_init(void)
{
qemu_lockcnt_init(&accel_in_ioctl_lock);
qemu_event_init(&accel_in_ioctl_event, false);
}
void accel_ioctl_begin(void)
{
if (likely(bql_locked())) {
return;
}
/* block if lock is taken in kvm_ioctl_inhibit_begin() */
qemu_lockcnt_inc(&accel_in_ioctl_lock);
}
void accel_ioctl_end(void)
{
if (likely(bql_locked())) {
return;
}
qemu_lockcnt_dec(&accel_in_ioctl_lock);
/* change event to SET. If event was BUSY, wake up all waiters */
qemu_event_set(&accel_in_ioctl_event);
}
void accel_cpu_ioctl_begin(CPUState *cpu)
{
if (unlikely(bql_locked())) {
return;
}
/* block if lock is taken in kvm_ioctl_inhibit_begin() */
qemu_lockcnt_inc(&cpu->in_ioctl_lock);
}
void accel_cpu_ioctl_end(CPUState *cpu)
{
if (unlikely(bql_locked())) {
return;
}
qemu_lockcnt_dec(&cpu->in_ioctl_lock);
/* change event to SET. If event was BUSY, wake up all waiters */
qemu_event_set(&accel_in_ioctl_event);
}
static bool accel_has_to_wait(void)
{
CPUState *cpu;
bool needs_to_wait = false;
CPU_FOREACH(cpu) {
if (qemu_lockcnt_count(&cpu->in_ioctl_lock)) {
/* exit the ioctl, if vcpu is running it */
qemu_cpu_kick(cpu);
needs_to_wait = true;
}
}
return needs_to_wait || qemu_lockcnt_count(&accel_in_ioctl_lock);
}
void accel_ioctl_inhibit_begin(void)
{
CPUState *cpu;
/*
* We allow to inhibit only when holding the BQL, so we can identify
* when an inhibitor wants to issue an ioctl easily.
*/
g_assert(bql_locked());
/* Block further invocations of the ioctls outside the BQL. */
CPU_FOREACH(cpu) {
qemu_lockcnt_lock(&cpu->in_ioctl_lock);
}
qemu_lockcnt_lock(&accel_in_ioctl_lock);
/* Keep waiting until there are running ioctls */
while (true) {
/* Reset event to FREE. */
qemu_event_reset(&accel_in_ioctl_event);
if (accel_has_to_wait()) {
/*
* If event is still FREE, and there are ioctls still in progress,
* wait.
*
* If an ioctl finishes before qemu_event_wait(), it will change
* the event state to SET. This will prevent qemu_event_wait() from
* blocking, but it's not a problem because if other ioctls are
* still running the loop will iterate once more and reset the event
* status to FREE so that it can wait properly.
*
* If an ioctl finishes while qemu_event_wait() is blocking, the waiter
* will be woken up, but here too the while loop makes sure to
* re-enter the wait if there are other running ioctls.
*/
qemu_event_wait(&accel_in_ioctl_event);
} else {
/* No ioctl is running */
return;
}
}
}
void accel_ioctl_inhibit_end(void)
{
CPUState *cpu;
qemu_lockcnt_unlock(&accel_in_ioctl_lock);
CPU_FOREACH(cpu) {
qemu_lockcnt_unlock(&cpu->in_ioctl_lock);
}
}
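
The file above pairs a per-ioctl marker (accel_ioctl_begin/end) with a global brake (accel_ioctl_inhibit_begin/end). A minimal usage sketch, assuming a hypothetical do_accel_ioctl() wrapper around a raw ioctl(2) call:

#include <sys/ioctl.h>

/* Hypothetical wrapper: marks the ioctl so an inhibitor can wait for it. */
static long do_accel_ioctl(int fd, unsigned long request, void *arg)
{
    long ret;

    accel_ioctl_begin();            /* counted in accel_in_ioctl_lock       */
    ret = ioctl(fd, request, arg);
    accel_ioctl_end();              /* sets the event, waking any inhibitor */
    return ret;
}

/*
 * An inhibitor runs with the BQL held and brackets its critical section:
 *
 *     accel_ioctl_inhibit_begin();    waits until no marked ioctl is in flight
 *     ...operations that must not race with vCPU ioctls...
 *     accel_ioctl_inhibit_end();
 */
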

accel/accel-common.c (new file, 137 lines)

@@ -0,0 +1,137 @@
/*
* QEMU accel class, components common to system emulation and user mode
*
* Copyright (c) 2003-2008 Fabrice Bellard
* Copyright (c) 2014 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu/accel.h"
#include "cpu.h"
#include "hw/core/accel-cpu.h"
#ifndef CONFIG_USER_ONLY
#include "accel-softmmu.h"
#endif /* !CONFIG_USER_ONLY */
static const TypeInfo accel_type = {
.name = TYPE_ACCEL,
.parent = TYPE_OBJECT,
.class_size = sizeof(AccelClass),
.instance_size = sizeof(AccelState),
};
/* Lookup AccelClass from opt_name. Returns NULL if not found */
AccelClass *accel_find(const char *opt_name)
{
char *class_name = g_strdup_printf(ACCEL_CLASS_NAME("%s"), opt_name);
AccelClass *ac = ACCEL_CLASS(module_object_class_by_name(class_name));
g_free(class_name);
return ac;
}
static void accel_init_cpu_int_aux(ObjectClass *klass, void *opaque)
{
CPUClass *cc = CPU_CLASS(klass);
AccelCPUClass *accel_cpu = opaque;
/*
* The first callback allows accel-cpu to run initializations
* for the CPU, customizing CPU behavior according to the accelerator.
*
* The second one allows the CPU to customize the accel-cpu
* behavior according to the CPU.
*
* The second is currently only used by TCG, to specialize the
* TCGCPUOps depending on the CPU type.
*/
cc->accel_cpu = accel_cpu;
if (accel_cpu->cpu_class_init) {
accel_cpu->cpu_class_init(cc);
}
if (cc->init_accel_cpu) {
cc->init_accel_cpu(accel_cpu, cc);
}
}
/* initialize the arch-specific accel CpuClass interfaces */
static void accel_init_cpu_interfaces(AccelClass *ac)
{
const char *ac_name; /* AccelClass name */
char *acc_name; /* AccelCPUClass name */
ObjectClass *acc; /* AccelCPUClass */
ac_name = object_class_get_name(OBJECT_CLASS(ac));
g_assert(ac_name != NULL);
acc_name = g_strdup_printf("%s-%s", ac_name, CPU_RESOLVING_TYPE);
acc = object_class_by_name(acc_name);
g_free(acc_name);
if (acc) {
object_class_foreach(accel_init_cpu_int_aux,
CPU_RESOLVING_TYPE, false, acc);
}
}
void accel_init_interfaces(AccelClass *ac)
{
#ifndef CONFIG_USER_ONLY
accel_init_ops_interfaces(ac);
#endif /* !CONFIG_USER_ONLY */
accel_init_cpu_interfaces(ac);
}
void accel_cpu_instance_init(CPUState *cpu)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
if (cc->accel_cpu && cc->accel_cpu->cpu_instance_init) {
cc->accel_cpu->cpu_instance_init(cpu);
}
}
bool accel_cpu_realizefn(CPUState *cpu, Error **errp)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
if (cc->accel_cpu && cc->accel_cpu->cpu_realizefn) {
return cc->accel_cpu->cpu_realizefn(cpu, errp);
}
return true;
}
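
A sketch of the accelerator side of this handshake; the "myaccel" names are hypothetical, only the hook fields (cpu_class_init, cpu_instance_init, cpu_realizefn) come from the code above:

static void myaccel_cpu_class_init(CPUClass *cc)
{
    /* customize the CPUClass for this accelerator */
}

static void myaccel_cpu_instance_init(CPUState *cpu)
{
    /* per-CPU-object setup for this accelerator */
}

static bool myaccel_cpu_realizefn(CPUState *cpu, Error **errp)
{
    return true;    /* nothing extra to do in this sketch */
}

static void myaccel_accel_cpu_class_init(ObjectClass *oc, void *data)
{
    AccelCPUClass *acc = ACCEL_CPU_CLASS(oc);   /* assumed QOM class checker */

    acc->cpu_class_init = myaccel_cpu_class_init;
    acc->cpu_instance_init = myaccel_cpu_instance_init;
    acc->cpu_realizefn = myaccel_cpu_realizefn;
}

/*
 * The type would be registered as "<accel type name>-" CPU_RESOLVING_TYPE with
 * parent TYPE_ACCEL_CPU, matching the name built in accel_init_cpu_interfaces().
 */
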
static const TypeInfo accel_cpu_type = {
.name = TYPE_ACCEL_CPU,
.parent = TYPE_OBJECT,
.abstract = true,
.class_size = sizeof(AccelCPUClass),
};
static void register_accel_types(void)
{
type_register_static(&accel_type);
type_register_static(&accel_cpu_type);
}
type_init(register_accel_types);

accel/accel-softmmu.c (new file, 100 lines)

@@ -0,0 +1,100 @@
/*
* QEMU accel class, system emulation components
*
* Copyright (c) 2003-2008 Fabrice Bellard
* Copyright (c) 2014 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu/accel.h"
#include "hw/boards.h"
#include "sysemu/cpus.h"
#include "accel-softmmu.h"
int accel_init_machine(AccelState *accel, MachineState *ms)
{
AccelClass *acc = ACCEL_GET_CLASS(accel);
int ret;
ms->accelerator = accel;
*(acc->allowed) = true;
ret = acc->init_machine(ms);
if (ret < 0) {
ms->accelerator = NULL;
*(acc->allowed) = false;
object_unref(OBJECT(accel));
} else {
object_set_accelerator_compat_props(acc->compat_props);
}
return ret;
}
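
The expected call sequence, sketched with an assumed option-parsing caller (only accel_init_machine() itself comes from the code above; the accelerator name "kvm" is just an example):

    AccelClass *ac = accel_find("kvm");
    AccelState *accel;

    if (ac == NULL) {
        error_report("invalid accelerator kvm");
        exit(1);
    }
    accel = ACCEL(object_new(object_class_get_name(OBJECT_CLASS(ac))));
    if (accel_init_machine(accel, current_machine) < 0) {
        error_report("failed to initialize accelerator kvm");
        exit(1);
    }
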
AccelState *current_accel(void)
{
return current_machine->accelerator;
}
void accel_setup_post(MachineState *ms)
{
AccelState *accel = ms->accelerator;
AccelClass *acc = ACCEL_GET_CLASS(accel);
if (acc->setup_post) {
acc->setup_post(ms, accel);
}
}
/* initialize the arch-independent accel operation interfaces */
void accel_init_ops_interfaces(AccelClass *ac)
{
const char *ac_name;
char *ops_name;
AccelOpsClass *ops;
ac_name = object_class_get_name(OBJECT_CLASS(ac));
g_assert(ac_name != NULL);
ops_name = g_strdup_printf("%s" ACCEL_OPS_SUFFIX, ac_name);
ops = ACCEL_OPS_CLASS(module_object_class_by_name(ops_name));
g_free(ops_name);
/*
* all accelerators need to define ops, providing at least a mandatory
* non-NULL create_vcpu_thread operation.
*/
g_assert(ops != NULL);
if (ops->ops_init) {
ops->ops_init(ops);
}
cpus_register_accel(ops);
}
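
Concrete accelerators satisfy this lookup by registering an ops class named "<accel type>" ACCEL_OPS_SUFFIX, as the hvf and kvm ops classes shown further down do. A minimal sketch with hypothetical "myaccel" names:

static void myaccel_start_vcpu_thread(CPUState *cpu)
{
    /* spawn and set up the vCPU thread; see hvf_start_vcpu_thread() below */
}

static void myaccel_accel_ops_class_init(ObjectClass *oc, void *data)
{
    AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);

    ops->create_vcpu_thread = myaccel_start_vcpu_thread;   /* mandatory hook */
}

static const TypeInfo myaccel_accel_ops_type = {
    .name = ACCEL_OPS_NAME("myaccel"),
    .parent = TYPE_ACCEL_OPS,
    .class_init = myaccel_accel_ops_class_init,
};

static void myaccel_accel_ops_register_types(void)
{
    type_register_static(&myaccel_accel_ops_type);
}
type_init(myaccel_accel_ops_register_types);
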
static const TypeInfo accel_ops_type_info = {
.name = TYPE_ACCEL_OPS,
.parent = TYPE_OBJECT,
.abstract = true,
.class_size = sizeof(AccelOpsClass),
};
static void accel_softmmu_register_types(void)
{
type_register_static(&accel_ops_type_info);
}
type_init(accel_softmmu_register_types);

accel/accel-softmmu.h (new file, 15 lines)

@@ -0,0 +1,15 @@
/*
* QEMU System Emulation accel internal functions
*
* Copyright 2021 SUSE LLC
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#ifndef ACCEL_SOFTMMU_H
#define ACCEL_SOFTMMU_H
void accel_init_ops_interfaces(AccelClass *ac);
#endif /* ACCEL_SOFTMMU_H */


@@ -1,105 +0,0 @@
/*
* QEMU accel class, system emulation components
*
* Copyright (c) 2003-2008 Fabrice Bellard
* Copyright (c) 2014 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu/accel.h"
#include "hw/boards.h"
#include "system/accel-ops.h"
#include "system/cpus.h"
#include "qemu/error-report.h"
#include "accel-system.h"
int accel_init_machine(AccelState *accel, MachineState *ms)
{
AccelClass *acc = ACCEL_GET_CLASS(accel);
int ret;
ms->accelerator = accel;
*(acc->allowed) = true;
ret = acc->init_machine(ms);
if (ret < 0) {
ms->accelerator = NULL;
*(acc->allowed) = false;
object_unref(OBJECT(accel));
} else {
object_set_accelerator_compat_props(acc->compat_props);
}
return ret;
}
AccelState *current_accel(void)
{
return current_machine->accelerator;
}
void accel_setup_post(MachineState *ms)
{
AccelState *accel = ms->accelerator;
AccelClass *acc = ACCEL_GET_CLASS(accel);
if (acc->setup_post) {
acc->setup_post(ms, accel);
}
}
/* initialize the arch-independent accel operation interfaces */
void accel_system_init_ops_interfaces(AccelClass *ac)
{
const char *ac_name;
char *ops_name;
ObjectClass *oc;
AccelOpsClass *ops;
ac_name = object_class_get_name(OBJECT_CLASS(ac));
g_assert(ac_name != NULL);
ops_name = g_strdup_printf("%s" ACCEL_OPS_SUFFIX, ac_name);
oc = module_object_class_by_name(ops_name);
if (!oc) {
error_report("fatal: could not load module for type '%s'", ops_name);
exit(1);
}
g_free(ops_name);
/*
* all accelerators need to define ops, providing at least a mandatory
* non-NULL create_vcpu_thread operation.
*/
ops = ACCEL_OPS_CLASS(oc);
if (ops->ops_init) {
ops->ops_init(ops);
}
cpus_register_accel(ops);
}
static const TypeInfo accel_ops_type_info = {
.name = TYPE_ACCEL_OPS,
.parent = TYPE_OBJECT,
.abstract = true,
.class_size = sizeof(AccelOpsClass),
};
static void accel_system_register_types(void)
{
type_register_static(&accel_ops_type_info);
}
type_init(accel_system_register_types);


@@ -1,15 +0,0 @@
/*
* QEMU System Emulation accel internal functions
*
* Copyright 2021 SUSE LLC
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#ifndef ACCEL_SYSTEM_H
#define ACCEL_SYSTEM_H
void accel_system_init_ops_interfaces(AccelClass *ac);
#endif /* ACCEL_SYSTEM_H */


@@ -1,175 +0,0 @@
/*
* QEMU accel class, components common to system emulation and user mode
*
* Copyright (c) 2003-2008 Fabrice Bellard
* Copyright (c) 2014 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu/accel.h"
#include "cpu.h"
#include "accel/accel-cpu-target.h"
#ifndef CONFIG_USER_ONLY
#include "accel-system.h"
#endif /* !CONFIG_USER_ONLY */
static const TypeInfo accel_type = {
.name = TYPE_ACCEL,
.parent = TYPE_OBJECT,
.class_size = sizeof(AccelClass),
.instance_size = sizeof(AccelState),
.abstract = true,
};
/* Lookup AccelClass from opt_name. Returns NULL if not found */
AccelClass *accel_find(const char *opt_name)
{
char *class_name = g_strdup_printf(ACCEL_CLASS_NAME("%s"), opt_name);
AccelClass *ac = ACCEL_CLASS(module_object_class_by_name(class_name));
g_free(class_name);
return ac;
}
/* Return the name of the current accelerator */
const char *current_accel_name(void)
{
AccelClass *ac = ACCEL_GET_CLASS(current_accel());
return ac->name;
}
static void accel_init_cpu_int_aux(ObjectClass *klass, void *opaque)
{
CPUClass *cc = CPU_CLASS(klass);
AccelCPUClass *accel_cpu = opaque;
/*
* The first callback allows accel-cpu to run initializations
* for the CPU, customizing CPU behavior according to the accelerator.
*
* The second one allows the CPU to customize the accel-cpu
* behavior according to the CPU.
*
* The second is currently only used by TCG, to specialize the
* TCGCPUOps depending on the CPU type.
*/
cc->accel_cpu = accel_cpu;
if (accel_cpu->cpu_class_init) {
accel_cpu->cpu_class_init(cc);
}
if (cc->init_accel_cpu) {
cc->init_accel_cpu(accel_cpu, cc);
}
}
/* initialize the arch-specific accel CpuClass interfaces */
static void accel_init_cpu_interfaces(AccelClass *ac)
{
const char *ac_name; /* AccelClass name */
char *acc_name; /* AccelCPUClass name */
ObjectClass *acc; /* AccelCPUClass */
ac_name = object_class_get_name(OBJECT_CLASS(ac));
g_assert(ac_name != NULL);
acc_name = g_strdup_printf("%s-%s", ac_name, CPU_RESOLVING_TYPE);
acc = object_class_by_name(acc_name);
g_free(acc_name);
if (acc) {
object_class_foreach(accel_init_cpu_int_aux,
CPU_RESOLVING_TYPE, false, acc);
}
}
void accel_init_interfaces(AccelClass *ac)
{
#ifndef CONFIG_USER_ONLY
accel_system_init_ops_interfaces(ac);
#endif /* !CONFIG_USER_ONLY */
accel_init_cpu_interfaces(ac);
}
void accel_cpu_instance_init(CPUState *cpu)
{
if (cpu->cc->accel_cpu && cpu->cc->accel_cpu->cpu_instance_init) {
cpu->cc->accel_cpu->cpu_instance_init(cpu);
}
}
bool accel_cpu_common_realize(CPUState *cpu, Error **errp)
{
AccelState *accel = current_accel();
AccelClass *acc = ACCEL_GET_CLASS(accel);
/* target specific realization */
if (cpu->cc->accel_cpu
&& cpu->cc->accel_cpu->cpu_target_realize
&& !cpu->cc->accel_cpu->cpu_target_realize(cpu, errp)) {
return false;
}
/* generic realization */
if (acc->cpu_common_realize && !acc->cpu_common_realize(cpu, errp)) {
return false;
}
return true;
}
void accel_cpu_common_unrealize(CPUState *cpu)
{
AccelState *accel = current_accel();
AccelClass *acc = ACCEL_GET_CLASS(accel);
/* generic unrealization */
if (acc->cpu_common_unrealize) {
acc->cpu_common_unrealize(cpu);
}
}
int accel_supported_gdbstub_sstep_flags(void)
{
AccelState *accel = current_accel();
AccelClass *acc = ACCEL_GET_CLASS(accel);
if (acc->gdbstub_supported_sstep_flags) {
return acc->gdbstub_supported_sstep_flags();
}
return 0;
}
static const TypeInfo accel_cpu_type = {
.name = TYPE_ACCEL_CPU,
.parent = TYPE_OBJECT,
.abstract = true,
.class_size = sizeof(AccelCPUClass),
};
static void register_accel_types(void)
{
type_register_static(&accel_type);
type_register_static(&accel_cpu_type);
}
type_init(register_accel_types);


@@ -13,7 +13,7 @@
#include "qemu/osdep.h"
#include "qemu/rcu.h"
#include "system/cpus.h"
#include "sysemu/cpus.h"
#include "qemu/guest-random.h"
#include "qemu/main-loop.h"
#include "hw/core/cpu.h"
@@ -21,29 +21,26 @@
static void *dummy_cpu_thread_fn(void *arg)
{
CPUState *cpu = arg;
rcu_register_thread();
bql_lock();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
current_cpu = cpu;
#ifndef _WIN32
sigset_t waitset;
int r;
rcu_register_thread();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->can_do_io = 1;
current_cpu = cpu;
sigemptyset(&waitset);
sigaddset(&waitset, SIG_IPI);
#endif
/* signal CPU creation */
cpu_thread_signal_created(cpu);
qemu_guest_random_seed_thread_part2(cpu->random_seed);
do {
bql_unlock();
#ifndef _WIN32
qemu_mutex_unlock_iothread();
do {
int sig;
r = sigwait(&waitset, &sig);
@@ -52,14 +49,11 @@ static void *dummy_cpu_thread_fn(void *arg)
perror("sigwait");
exit(1);
}
#else
qemu_sem_wait(&cpu->sem);
#endif
bql_lock();
qemu_mutex_lock_iothread();
qemu_wait_io_event(cpu);
} while (!cpu->unplug);
bql_unlock();
qemu_mutex_unlock_iothread();
rcu_unregister_thread();
return NULL;
}
@@ -68,11 +62,11 @@ void dummy_start_vcpu_thread(CPUState *cpu)
{
char thread_name[VCPU_THREAD_NAME_SIZE];
cpu->thread = g_malloc0(sizeof(QemuThread));
cpu->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(cpu->halt_cond);
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/DUMMY",
cpu->cpu_index);
qemu_thread_create(cpu->thread, thread_name, dummy_cpu_thread_fn, cpu,
QEMU_THREAD_JOINABLE);
#ifdef _WIN32
qemu_sem_init(&cpu->sem, 0);
#endif
}


@@ -52,13 +52,10 @@
#include "qemu/main-loop.h"
#include "exec/address-spaces.h"
#include "exec/exec-all.h"
#include "gdbstub/enums.h"
#include "hw/boards.h"
#include "system/accel-ops.h"
#include "system/cpus.h"
#include "system/hvf.h"
#include "system/hvf_int.h"
#include "system/runstate.h"
#include "sysemu/cpus.h"
#include "sysemu/hvf.h"
#include "sysemu/hvf_int.h"
#include "sysemu/runstate.h"
#include "qemu/guest-random.h"
HVFState *hvf_state;
@@ -119,12 +116,11 @@ static void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
{
hvf_slot *mem;
MemoryRegion *area = section->mr;
bool writable = !area->readonly && !area->rom_device;
bool writeable = !area->readonly && !area->rom_device;
hv_memory_flags_t flags;
uint64_t page_size = qemu_real_host_page_size();
if (!memory_region_is_ram(area)) {
if (writable) {
if (writeable) {
return;
} else if (!memory_region_is_romd(area)) {
/*
@@ -135,12 +131,6 @@ static void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
}
}
if (!QEMU_IS_ALIGNED(int128_get64(section->size), page_size) ||
!QEMU_IS_ALIGNED(section->offset_within_address_space, page_size)) {
/* Not page aligned, so we can not map as RAM */
add = false;
}
mem = hvf_find_overlap_slot(
section->offset_within_address_space,
int128_get64(section->size));
@@ -202,15 +192,15 @@ static void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
{
if (!cpu->accel->dirty) {
if (!cpu->vcpu_dirty) {
hvf_get_registers(cpu);
cpu->accel->dirty = true;
cpu->vcpu_dirty = true;
}
}
static void hvf_cpu_synchronize_state(CPUState *cpu)
{
if (!cpu->accel->dirty) {
if (!cpu->vcpu_dirty) {
run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL);
}
}
@@ -219,7 +209,7 @@ static void do_hvf_cpu_synchronize_set_dirty(CPUState *cpu,
run_on_cpu_data arg)
{
/* QEMU state is the reference, push it to HVF now and on next entry */
cpu->accel->dirty = true;
cpu->vcpu_dirty = true;
}
static void hvf_cpu_synchronize_post_reset(CPUState *cpu)
@@ -249,12 +239,12 @@ static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
if (on) {
slot->flags |= HVF_SLOT_LOG;
hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
HV_MEMORY_READ | HV_MEMORY_EXEC);
HV_MEMORY_READ);
/* stop tracking region*/
} else {
slot->flags &= ~HVF_SLOT_LOG;
hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);
HV_MEMORY_READ | HV_MEMORY_WRITE);
}
}
@@ -301,8 +291,7 @@ static void hvf_region_del(MemoryListener *listener,
}
static MemoryListener hvf_memory_listener = {
.name = "hvf",
.priority = MEMORY_LISTENER_PRIORITY_ACCEL,
.priority = 10,
.region_add = hvf_region_add,
.region_del = hvf_region_del,
.log_start = hvf_log_start,
@@ -321,38 +310,21 @@ static int hvf_accel_init(MachineState *ms)
int x;
hv_return_t ret;
HVFState *s;
int pa_range = 36;
MachineClass *mc = MACHINE_GET_CLASS(ms);
if (mc->hvf_get_physical_address_range) {
pa_range = mc->hvf_get_physical_address_range(ms);
if (pa_range < 0) {
return -EINVAL;
}
}
ret = hvf_arch_vm_create(ms, (uint32_t)pa_range);
ret = hv_vm_create(HV_VM_DEFAULT);
assert_hvf_ok(ret);
s = g_new0(HVFState, 1);
s->num_slots = ARRAY_SIZE(s->slots);
s->num_slots = 32;
for (x = 0; x < s->num_slots; ++x) {
s->slots[x].size = 0;
s->slots[x].slot_id = x;
}
QTAILQ_INIT(&s->hvf_sw_breakpoints);
hvf_state = s;
memory_listener_register(&hvf_memory_listener, &address_space_memory);
return hvf_arch_init();
}
static inline int hvf_gdbstub_sstep_flags(void)
{
return SSTEP_ENABLE | SSTEP_NOIRQ;
return 0;
}
static void hvf_accel_class_init(ObjectClass *oc, void *data)
@@ -361,7 +333,6 @@ static void hvf_accel_class_init(ObjectClass *oc, void *data)
ac->name = "HVF";
ac->init_machine = hvf_accel_init;
ac->allowed = &hvf_allowed;
ac->gdbstub_supported_sstep_flags = hvf_gdbstub_sstep_flags;
}
static const TypeInfo hvf_accel_type = {
@@ -379,41 +350,35 @@ type_init(hvf_type_init);
static void hvf_vcpu_destroy(CPUState *cpu)
{
hv_return_t ret = hv_vcpu_destroy(cpu->accel->fd);
hv_return_t ret = hv_vcpu_destroy(cpu->hvf->fd);
assert_hvf_ok(ret);
hvf_arch_vcpu_destroy(cpu);
g_free(cpu->accel);
cpu->accel = NULL;
g_free(cpu->hvf);
cpu->hvf = NULL;
}
static int hvf_init_vcpu(CPUState *cpu)
{
int r;
cpu->accel = g_new0(AccelCPUState, 1);
cpu->hvf = g_malloc0(sizeof(*cpu->hvf));
/* init cpu signals */
sigset_t set;
struct sigaction sigact;
memset(&sigact, 0, sizeof(sigact));
sigact.sa_handler = dummy_signal;
sigaction(SIG_IPI, &sigact, NULL);
pthread_sigmask(SIG_BLOCK, NULL, &cpu->accel->unblock_ipi_mask);
sigdelset(&cpu->accel->unblock_ipi_mask, SIG_IPI);
pthread_sigmask(SIG_BLOCK, NULL, &set);
sigdelset(&set, SIG_IPI);
#ifdef __aarch64__
r = hv_vcpu_create(&cpu->accel->fd,
(hv_vcpu_exit_t **)&cpu->accel->exit, NULL);
#else
r = hv_vcpu_create(&cpu->accel->fd, HV_VCPU_DEFAULT);
#endif
cpu->accel->dirty = true;
r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT);
cpu->vcpu_dirty = 1;
assert_hvf_ok(r);
cpu->accel->guest_debug_enabled = false;
return hvf_arch_init_vcpu(cpu);
}
@@ -431,10 +396,11 @@ static void *hvf_cpu_thread_fn(void *arg)
rcu_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->can_do_io = 1;
current_cpu = cpu;
hvf_init_vcpu(cpu);
@@ -455,7 +421,7 @@ static void *hvf_cpu_thread_fn(void *arg)
hvf_vcpu_destroy(cpu);
cpu_thread_signal_destroyed(cpu);
bql_unlock();
qemu_mutex_unlock_iothread();
rcu_unregister_thread();
return NULL;
}
@@ -470,131 +436,26 @@ static void hvf_start_vcpu_thread(CPUState *cpu)
*/
assert(hvf_enabled());
cpu->thread = g_malloc0(sizeof(QemuThread));
cpu->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(cpu->halt_cond);
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/HVF",
cpu->cpu_index);
qemu_thread_create(cpu->thread, thread_name, hvf_cpu_thread_fn,
cpu, QEMU_THREAD_JOINABLE);
}
static int hvf_insert_breakpoint(CPUState *cpu, int type, vaddr addr, vaddr len)
{
struct hvf_sw_breakpoint *bp;
int err;
if (type == GDB_BREAKPOINT_SW) {
bp = hvf_find_sw_breakpoint(cpu, addr);
if (bp) {
bp->use_count++;
return 0;
}
bp = g_new(struct hvf_sw_breakpoint, 1);
bp->pc = addr;
bp->use_count = 1;
err = hvf_arch_insert_sw_breakpoint(cpu, bp);
if (err) {
g_free(bp);
return err;
}
QTAILQ_INSERT_HEAD(&hvf_state->hvf_sw_breakpoints, bp, entry);
} else {
err = hvf_arch_insert_hw_breakpoint(addr, len, type);
if (err) {
return err;
}
}
CPU_FOREACH(cpu) {
err = hvf_update_guest_debug(cpu);
if (err) {
return err;
}
}
return 0;
}
static int hvf_remove_breakpoint(CPUState *cpu, int type, vaddr addr, vaddr len)
{
struct hvf_sw_breakpoint *bp;
int err;
if (type == GDB_BREAKPOINT_SW) {
bp = hvf_find_sw_breakpoint(cpu, addr);
if (!bp) {
return -ENOENT;
}
if (bp->use_count > 1) {
bp->use_count--;
return 0;
}
err = hvf_arch_remove_sw_breakpoint(cpu, bp);
if (err) {
return err;
}
QTAILQ_REMOVE(&hvf_state->hvf_sw_breakpoints, bp, entry);
g_free(bp);
} else {
err = hvf_arch_remove_hw_breakpoint(addr, len, type);
if (err) {
return err;
}
}
CPU_FOREACH(cpu) {
err = hvf_update_guest_debug(cpu);
if (err) {
return err;
}
}
return 0;
}
static void hvf_remove_all_breakpoints(CPUState *cpu)
{
struct hvf_sw_breakpoint *bp, *next;
CPUState *tmpcpu;
QTAILQ_FOREACH_SAFE(bp, &hvf_state->hvf_sw_breakpoints, entry, next) {
if (hvf_arch_remove_sw_breakpoint(cpu, bp) != 0) {
/* Try harder to find a CPU that currently sees the breakpoint. */
CPU_FOREACH(tmpcpu)
{
if (hvf_arch_remove_sw_breakpoint(tmpcpu, bp) == 0) {
break;
}
}
}
QTAILQ_REMOVE(&hvf_state->hvf_sw_breakpoints, bp, entry);
g_free(bp);
}
hvf_arch_remove_all_hw_breakpoints();
CPU_FOREACH(cpu) {
hvf_update_guest_debug(cpu);
}
}
static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
{
AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
ops->create_vcpu_thread = hvf_start_vcpu_thread;
ops->kick_vcpu_thread = hvf_kick_vcpu_thread;
ops->synchronize_post_reset = hvf_cpu_synchronize_post_reset;
ops->synchronize_post_init = hvf_cpu_synchronize_post_init;
ops->synchronize_state = hvf_cpu_synchronize_state;
ops->synchronize_pre_loadvm = hvf_cpu_synchronize_pre_loadvm;
ops->insert_breakpoint = hvf_insert_breakpoint;
ops->remove_breakpoint = hvf_remove_breakpoint;
ops->remove_all_breakpoints = hvf_remove_all_breakpoints;
ops->update_guest_debug = hvf_update_guest_debug;
ops->supports_guest_debug = hvf_arch_supports_guest_debug;
};
static const TypeInfo hvf_accel_ops_type = {
.name = ACCEL_OPS_NAME("hvf"),


@@ -9,57 +9,39 @@
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "qemu/error-report.h"
#include "system/hvf.h"
#include "system/hvf_int.h"
#include "sysemu/hvf.h"
#include "sysemu/hvf_int.h"
const char *hvf_return_string(hv_return_t ret)
{
switch (ret) {
case HV_SUCCESS: return "HV_SUCCESS";
case HV_ERROR: return "HV_ERROR";
case HV_BUSY: return "HV_BUSY";
case HV_BAD_ARGUMENT: return "HV_BAD_ARGUMENT";
case HV_NO_RESOURCES: return "HV_NO_RESOURCES";
case HV_NO_DEVICE: return "HV_NO_DEVICE";
case HV_UNSUPPORTED: return "HV_UNSUPPORTED";
case HV_DENIED: return "HV_DENIED";
default: return "[unknown hv_return value]";
}
}
void assert_hvf_ok_impl(hv_return_t ret, const char *file, unsigned int line,
const char *exp)
void assert_hvf_ok(hv_return_t ret)
{
if (ret == HV_SUCCESS) {
return;
}
error_report("Error: %s = %s (0x%x, at %s:%u)",
exp, hvf_return_string(ret), ret, file, line);
switch (ret) {
case HV_ERROR:
error_report("Error: HV_ERROR");
break;
case HV_BUSY:
error_report("Error: HV_BUSY");
break;
case HV_BAD_ARGUMENT:
error_report("Error: HV_BAD_ARGUMENT");
break;
case HV_NO_RESOURCES:
error_report("Error: HV_NO_RESOURCES");
break;
case HV_NO_DEVICE:
error_report("Error: HV_NO_DEVICE");
break;
case HV_UNSUPPORTED:
error_report("Error: HV_UNSUPPORTED");
break;
default:
error_report("Unknown Error");
}
abort();
}
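
The _impl variant takes the failing expression and its location so the error message can point at the call site; the wrapper is presumably a macro along these lines (an assumption — the header defining it is not part of this diff):

#define assert_hvf_ok(ret) \
    assert_hvf_ok_impl((ret), __FILE__, __LINE__, #ret)
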
struct hvf_sw_breakpoint *hvf_find_sw_breakpoint(CPUState *cpu, vaddr pc)
{
struct hvf_sw_breakpoint *bp;
QTAILQ_FOREACH(bp, &hvf_state->hvf_sw_breakpoints, entry) {
if (bp->pc == pc) {
return bp;
}
}
return NULL;
}
int hvf_sw_breakpoints_active(CPUState *cpu)
{
return !QTAILQ_EMPTY(&hvf_state->hvf_sw_breakpoints);
}
int hvf_update_guest_debug(CPUState *cpu)
{
hvf_arch_update_guest_debug(cpu);
return 0;
}


@@ -16,15 +16,12 @@
#include "qemu/osdep.h"
#include "qemu/error-report.h"
#include "qemu/main-loop.h"
#include "system/accel-ops.h"
#include "system/kvm.h"
#include "system/kvm_int.h"
#include "system/runstate.h"
#include "system/cpus.h"
#include "sysemu/kvm_int.h"
#include "sysemu/runstate.h"
#include "sysemu/cpus.h"
#include "qemu/guest-random.h"
#include "qapi/error.h"
#include <linux/kvm.h>
#include "kvm-cpus.h"
static void *kvm_vcpu_thread_fn(void *arg)
@@ -34,9 +31,10 @@ static void *kvm_vcpu_thread_fn(void *arg)
rcu_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->can_do_io = 1;
current_cpu = cpu;
r = kvm_init_vcpu(cpu, &error_fatal);
@@ -58,7 +56,7 @@ static void *kvm_vcpu_thread_fn(void *arg)
kvm_destroy_vcpu(cpu);
cpu_thread_signal_destroyed(cpu);
bql_unlock();
qemu_mutex_unlock_iothread();
rcu_unregister_thread();
return NULL;
}
@@ -67,48 +65,24 @@ static void kvm_start_vcpu_thread(CPUState *cpu)
{
char thread_name[VCPU_THREAD_NAME_SIZE];
cpu->thread = g_malloc0(sizeof(QemuThread));
cpu->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(cpu->halt_cond);
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/KVM",
cpu->cpu_index);
qemu_thread_create(cpu->thread, thread_name, kvm_vcpu_thread_fn,
cpu, QEMU_THREAD_JOINABLE);
}
static bool kvm_vcpu_thread_is_idle(CPUState *cpu)
{
return !kvm_halt_in_kernel();
}
static bool kvm_cpus_are_resettable(void)
{
return !kvm_enabled() || !kvm_state->guest_state_protected;
}
#ifdef TARGET_KVM_HAVE_GUEST_DEBUG
static int kvm_update_guest_debug_ops(CPUState *cpu)
{
return kvm_update_guest_debug(cpu, 0);
}
#endif
static void kvm_accel_ops_class_init(ObjectClass *oc, void *data)
{
AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
ops->create_vcpu_thread = kvm_start_vcpu_thread;
ops->cpu_thread_is_idle = kvm_vcpu_thread_is_idle;
ops->cpus_are_resettable = kvm_cpus_are_resettable;
ops->synchronize_post_reset = kvm_cpu_synchronize_post_reset;
ops->synchronize_post_init = kvm_cpu_synchronize_post_init;
ops->synchronize_state = kvm_cpu_synchronize_state;
ops->synchronize_pre_loadvm = kvm_cpu_synchronize_pre_loadvm;
#ifdef TARGET_KVM_HAVE_GUEST_DEBUG
ops->update_guest_debug = kvm_update_guest_debug_ops;
ops->supports_guest_debug = kvm_supports_guest_debug;
ops->insert_breakpoint = kvm_insert_breakpoint;
ops->remove_breakpoint = kvm_remove_breakpoint;
ops->remove_all_breakpoints = kvm_remove_all_breakpoints;
#endif
}
static const TypeInfo kvm_accel_ops_type = {

File diff suppressed because it is too large

@@ -10,14 +10,13 @@
#ifndef KVM_CPUS_H
#define KVM_CPUS_H
#include "sysemu/cpus.h"
int kvm_init_vcpu(CPUState *cpu, Error **errp);
int kvm_cpu_exec(CPUState *cpu);
void kvm_destroy_vcpu(CPUState *cpu);
void kvm_cpu_synchronize_post_reset(CPUState *cpu);
void kvm_cpu_synchronize_post_init(CPUState *cpu);
void kvm_cpu_synchronize_pre_loadvm(CPUState *cpu);
bool kvm_supports_guest_debug(void);
int kvm_insert_breakpoint(CPUState *cpu, int type, vaddr addr, vaddr len);
int kvm_remove_breakpoint(CPUState *cpu, int type, vaddr addr, vaddr len);
void kvm_remove_all_breakpoints(CPUState *cpu);
#endif /* KVM_CPUS_H */


@@ -3,5 +3,6 @@ kvm_ss.add(files(
'kvm-all.c',
'kvm-accel-ops.c',
))
kvm_ss.add(when: 'CONFIG_SEV', if_false: files('sev-stub.c'))
specific_ss.add_all(when: 'CONFIG_KVM', if_true: kvm_ss)

accel/kvm/sev-stub.c (new file, 22 lines)

@@ -0,0 +1,22 @@
/*
* QEMU SEV stub
*
* Copyright Advanced Micro Devices 2018
*
* Authors:
* Brijesh Singh <brijesh.singh@amd.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "sysemu/sev.h"
int sev_kvm_init(ConfidentialGuestSupport *cgs, Error **errp)
{
/* If we get here, cgs must be some non-SEV thing */
return 0;
}


@@ -1,25 +1,21 @@
# See docs/devel/tracing.rst for syntax documentation.
# kvm-all.c
kvm_ioctl(unsigned long type, void *arg) "type 0x%lx, arg %p"
kvm_vm_ioctl(unsigned long type, void *arg) "type 0x%lx, arg %p"
kvm_vcpu_ioctl(int cpu_index, unsigned long type, void *arg) "cpu_index %d, type 0x%lx, arg %p"
kvm_ioctl(int type, void *arg) "type 0x%x, arg %p"
kvm_vm_ioctl(int type, void *arg) "type 0x%x, arg %p"
kvm_vcpu_ioctl(int cpu_index, int type, void *arg) "cpu_index %d, type 0x%x, arg %p"
kvm_run_exit(int cpu_index, uint32_t reason) "cpu_index %d, reason %d"
kvm_device_ioctl(int fd, unsigned long type, void *arg) "dev fd %d, type 0x%lx, arg %p"
kvm_device_ioctl(int fd, int type, void *arg) "dev fd %d, type 0x%x, arg %p"
kvm_failed_reg_get(uint64_t id, const char *msg) "Warning: Unable to retrieve ONEREG %" PRIu64 " from KVM: %s"
kvm_failed_reg_set(uint64_t id, const char *msg) "Warning: Unable to set ONEREG %" PRIu64 " to KVM: %s"
kvm_init_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
kvm_create_vcpu(int cpu_index, unsigned long arch_cpu_id, int kvm_fd) "index: %d, id: %lu, kvm fd: %d"
kvm_destroy_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
kvm_park_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
kvm_unpark_vcpu(unsigned long arch_cpu_id, const char *msg) "id: %lu %s"
kvm_irqchip_commit_routes(void) ""
kvm_irqchip_add_msi_route(char *name, int vector, int virq) "dev %s vector %d virq %d"
kvm_irqchip_update_msi_route(int virq) "Updating MSI route virq=%d"
kvm_irqchip_release_virq(int virq) "virq %d"
kvm_set_ioeventfd_mmio(int fd, uint64_t addr, uint32_t val, bool assign, uint32_t size, bool datamatch) "fd: %d @0x%" PRIx64 " val=0x%x assign: %d size: %d match: %d"
kvm_set_ioeventfd_pio(int fd, uint16_t addr, uint32_t val, bool assign, uint32_t size, bool datamatch) "fd: %d @0x%x val=0x%x assign: %d size: %d match: %d"
kvm_set_user_memory(uint16_t as, uint16_t slot, uint32_t flags, uint64_t guest_phys_addr, uint64_t memory_size, uint64_t userspace_addr, uint32_t fd, uint64_t fd_offset, int ret) "AddrSpace#%d Slot#%d flags=0x%x gpa=0x%"PRIx64 " size=0x%"PRIx64 " ua=0x%"PRIx64 " guest_memfd=%d" " guest_memfd_offset=0x%" PRIx64 " ret=%d"
kvm_set_user_memory(uint32_t slot, uint32_t flags, uint64_t guest_phys_addr, uint64_t memory_size, uint64_t userspace_addr, int ret) "Slot#%d flags=0x%x gpa=0x%"PRIx64 " size=0x%"PRIx64 " ua=0x%"PRIx64 " ret=%d"
kvm_clear_dirty_log(uint32_t slot, uint64_t start, uint32_t size) "slot#%"PRId32" start 0x%"PRIx64" size 0x%"PRIx32
kvm_resample_fd_notify(int gsi) "gsi %d"
kvm_dirty_ring_full(int id) "vcpu %d"
@@ -29,11 +25,4 @@ kvm_dirty_ring_reaper(const char *s) "%s"
kvm_dirty_ring_reap(uint64_t count, int64_t t) "reaped %"PRIu64" pages (took %"PRIi64" us)"
kvm_dirty_ring_reaper_kick(const char *reason) "%s"
kvm_dirty_ring_flush(int finished) "%d"
kvm_failed_get_vcpu_mmap_size(void) ""
kvm_cpu_exec(void) ""
kvm_interrupt_exit_request(void) ""
kvm_io_window_exit(void) ""
kvm_run_exit_system_event(int cpu_index, uint32_t event_type) "cpu_index %d, system_even_type %"PRIu32
kvm_convert_memory(uint64_t start, uint64_t size, const char *msg) "start 0x%" PRIx64 " size 0x%" PRIx64 " %s"
kvm_memory_fault(uint64_t start, uint64_t size, uint64_t flags) "start 0x%" PRIx64 " size 0x%" PRIx64 " flags 0x%" PRIx64
kvm_slots_grow(unsigned int old, unsigned int new) "%u -> %u"


@@ -1,15 +1,18 @@
specific_ss.add(files('accel-target.c'))
system_ss.add(files('accel-system.c', 'accel-blocker.c'))
specific_ss.add(files('accel-common.c'))
softmmu_ss.add(files('accel-softmmu.c'))
user_ss.add(files('accel-user.c'))
subdir('hvf')
subdir('qtest')
subdir('kvm')
subdir('tcg')
if have_system
subdir('hvf')
subdir('qtest')
subdir('kvm')
subdir('xen')
subdir('stubs')
endif
subdir('xen')
subdir('stubs')
# qtest
system_ss.add(files('dummy-cpus.c'))
dummy_ss = ss.source_set()
dummy_ss.add(files(
'dummy-cpus.c',
))
specific_ss.add_all(when: ['CONFIG_SOFTMMU', 'CONFIG_POSIX'], if_true: dummy_ss)
specific_ss.add_all(when: ['CONFIG_XEN'], if_true: dummy_ss)


@@ -1 +1,2 @@
qtest_module_ss.add(when: ['CONFIG_SYSTEM_ONLY'], if_true: files('qtest.c'))
qtest_module_ss.add(when: ['CONFIG_SOFTMMU', 'CONFIG_POSIX'],
if_true: files('qtest.c'))


@@ -18,25 +18,13 @@
#include "qemu/option.h"
#include "qemu/config-file.h"
#include "qemu/accel.h"
#include "system/accel-ops.h"
#include "system/qtest.h"
#include "system/cpus.h"
#include "sysemu/qtest.h"
#include "sysemu/cpus.h"
#include "sysemu/cpu-timers.h"
#include "qemu/guest-random.h"
#include "qemu/main-loop.h"
#include "hw/core/cpu.h"
static int64_t qtest_clock_counter;
static int64_t qtest_get_virtual_clock(void)
{
return qatomic_read_i64(&qtest_clock_counter);
}
static void qtest_set_virtual_clock(int64_t count)
{
qatomic_set_i64(&qtest_clock_counter, count);
}
static int qtest_init_accel(MachineState *ms)
{
return 0;
@@ -65,7 +53,6 @@ static void qtest_accel_ops_class_init(ObjectClass *oc, void *data)
ops->create_vcpu_thread = dummy_start_vcpu_thread;
ops->get_virtual_clock = qtest_get_virtual_clock;
ops->set_virtual_clock = qtest_set_virtual_clock;
};
static const TypeInfo qtest_accel_ops_type = {

accel/stubs/hax-stub.c (new file, 22 lines)

@@ -0,0 +1,22 @@
/*
* QEMU HAXM support
*
* Copyright (c) 2015, Intel Corporation
*
* Copyright 2016 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "sysemu/hax.h"
int hax_sync_vcpus(void)
{
return 0;
}


@@ -11,18 +11,24 @@
*/
#include "qemu/osdep.h"
#include "system/kvm.h"
#include "sysemu/kvm.h"
#ifndef CONFIG_USER_ONLY
#include "hw/pci/msi.h"
#endif
KVMState *kvm_state;
bool kvm_kernel_irqchip;
bool kvm_async_interrupts_allowed;
bool kvm_eventfds_allowed;
bool kvm_irqfds_allowed;
bool kvm_resamplefds_allowed;
bool kvm_msi_via_irqfd_allowed;
bool kvm_gsi_routing_allowed;
bool kvm_gsi_direct_mapping;
bool kvm_allowed;
bool kvm_readonly_mem_allowed;
bool kvm_ioeventfd_any_length_allowed;
bool kvm_msi_use_devid;
void kvm_flush_coalesced_mmio_buffer(void)
@@ -38,6 +44,32 @@ bool kvm_has_sync_mmu(void)
return false;
}
int kvm_has_many_ioeventfds(void)
{
return 0;
}
int kvm_update_guest_debug(CPUState *cpu, unsigned long reinject_trap)
{
return -ENOSYS;
}
int kvm_insert_breakpoint(CPUState *cpu, target_ulong addr,
target_ulong len, int type)
{
return -EINVAL;
}
int kvm_remove_breakpoint(CPUState *cpu, target_ulong addr,
target_ulong len, int type)
{
return -EINVAL;
}
void kvm_remove_all_breakpoints(CPUState *cpu)
{
}
int kvm_on_sigbus_vcpu(CPUState *cpu, int code, void *addr)
{
return 1;
@@ -48,7 +80,8 @@ int kvm_on_sigbus(int code, void *addr)
return 1;
}
int kvm_irqchip_add_msi_route(KVMRouteChange *c, int vector, PCIDevice *dev)
#ifndef CONFIG_USER_ONLY
int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev)
{
return -ENOSYS;
}
@@ -83,6 +116,11 @@ void kvm_irqchip_change_notify(void)
{
}
int kvm_irqchip_add_adapter_route(KVMState *s, AdapterInfo *adapter)
{
return -ENOSYS;
}
int kvm_irqchip_add_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
EventNotifier *rn, int virq)
{
@@ -95,14 +133,9 @@ int kvm_irqchip_remove_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
return -ENOSYS;
}
unsigned int kvm_get_max_memslots(void)
bool kvm_has_free_slot(MachineState *ms)
{
return 0;
}
unsigned int kvm_get_free_memslots(void)
{
return 0;
return false;
}
void kvm_init_cpu_signals(CPUState *cpu)
@@ -114,23 +147,4 @@ bool kvm_arm_supports_user_irq(void)
{
return false;
}
bool kvm_dirty_ring_enabled(void)
{
return false;
}
uint32_t kvm_dirty_ring_size(void)
{
return 0;
}
bool kvm_hwpoisoned_mem(void)
{
return false;
}
int kvm_create_guest_memfd(uint64_t size, uint64_t flags, Error **errp)
{
return -ENOSYS;
}
#endif


@@ -1,6 +1,4 @@
system_stubs_ss = ss.source_set()
system_stubs_ss.add(when: 'CONFIG_XEN', if_false: files('xen-stub.c'))
system_stubs_ss.add(when: 'CONFIG_KVM', if_false: files('kvm-stub.c'))
system_stubs_ss.add(when: 'CONFIG_TCG', if_false: files('tcg-stub.c'))
specific_ss.add_all(when: ['CONFIG_SYSTEM_ONLY'], if_true: system_stubs_ss)
specific_ss.add(when: 'CONFIG_HAX', if_false: files('hax-stub.c'))
specific_ss.add(when: 'CONFIG_XEN', if_false: files('xen-stub.c'))
specific_ss.add(when: 'CONFIG_KVM', if_false: files('kvm-stub.c'))
specific_ss.add(when: 'CONFIG_TCG', if_false: files('tcg-stub.c'))


@@ -11,15 +11,29 @@
*/
#include "qemu/osdep.h"
#include "exec/tb-flush.h"
#include "exec/exec-all.h"
G_NORETURN void cpu_loop_exit(CPUState *cpu)
void tb_flush(CPUState *cpu)
{
}
void tlb_set_dirty(CPUState *cpu, target_ulong vaddr)
{
}
void *probe_access(CPUArchState *env, target_ulong addr, int size,
MMUAccessType access_type, int mmu_idx, uintptr_t retaddr)
{
/* Handled by hardware accelerator. */
g_assert_not_reached();
}
void QEMU_NORETURN cpu_loop_exit(CPUState *cpu)
{
g_assert_not_reached();
}
G_NORETURN void cpu_loop_exit_restore(CPUState *cpu, uintptr_t pc)
void QEMU_NORETURN cpu_loop_exit_restore(CPUState *cpu, uintptr_t pc)
{
g_assert_not_reached();
}


@@ -6,7 +6,7 @@
*/
#include "qemu/osdep.h"
#include "system/xen.h"
#include "sysemu/xen.h"
#include "qapi/qapi-commands-migration.h"
bool xen_allowed;


@@ -13,23 +13,59 @@
* See the COPYING file in the top-level directory.
*/
static void atomic_trace_rmw_post(CPUArchState *env, uint64_t addr,
uint64_t read_value_low,
uint64_t read_value_high,
uint64_t write_value_low,
uint64_t write_value_high,
MemOpIdx oi)
static uint16_t atomic_trace_rmw_pre(CPUArchState *env, target_ulong addr,
TCGMemOpIdx oi)
{
if (cpu_plugin_mem_cbs_enabled(env_cpu(env))) {
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr,
read_value_low, read_value_high,
oi, QEMU_PLUGIN_MEM_R);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr,
write_value_low, write_value_high,
oi, QEMU_PLUGIN_MEM_W);
}
CPUState *cpu = env_cpu(env);
uint16_t info = trace_mem_get_info(get_memop(oi), get_mmuidx(oi), false);
trace_guest_mem_before_exec(cpu, addr, info);
trace_guest_mem_before_exec(cpu, addr, info | TRACE_MEM_ST);
return info;
}
static void atomic_trace_rmw_post(CPUArchState *env, target_ulong addr,
uint16_t info)
{
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, info);
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, info | TRACE_MEM_ST);
}
#if HAVE_ATOMIC128
static uint16_t atomic_trace_ld_pre(CPUArchState *env, target_ulong addr,
TCGMemOpIdx oi)
{
uint16_t info = trace_mem_get_info(get_memop(oi), get_mmuidx(oi), false);
trace_guest_mem_before_exec(env_cpu(env), addr, info);
return info;
}
static void atomic_trace_ld_post(CPUArchState *env, target_ulong addr,
uint16_t info)
{
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, info);
}
static uint16_t atomic_trace_st_pre(CPUArchState *env, target_ulong addr,
TCGMemOpIdx oi)
{
uint16_t info = trace_mem_get_info(get_memop(oi), get_mmuidx(oi), true);
trace_guest_mem_before_exec(env_cpu(env), addr, info);
return info;
}
static void atomic_trace_st_post(CPUArchState *env, target_ulong addr,
uint16_t info)
{
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, info);
}
#endif
/*
* Atomic helpers callable from TCG.
* These have a common interface and all defer to cpu_atomic_*
@@ -37,7 +73,7 @@ static void atomic_trace_rmw_post(CPUArchState *env, uint64_t addr,
*/
#define CMPXCHG_HELPER(OP, TYPE) \
TYPE HELPER(atomic_##OP)(CPUArchState *env, uint64_t addr, \
TYPE HELPER(atomic_##OP)(CPUArchState *env, target_ulong addr, \
TYPE oldv, TYPE newv, uint32_t oi) \
{ return cpu_atomic_##OP##_mmu(env, addr, oldv, newv, oi, GETPC()); }
@@ -52,35 +88,10 @@ CMPXCHG_HELPER(cmpxchgq_be, uint64_t)
CMPXCHG_HELPER(cmpxchgq_le, uint64_t)
#endif
#if HAVE_CMPXCHG128
CMPXCHG_HELPER(cmpxchgo_be, Int128)
CMPXCHG_HELPER(cmpxchgo_le, Int128)
#endif
#undef CMPXCHG_HELPER
Int128 HELPER(nonatomic_cmpxchgo)(CPUArchState *env, uint64_t addr,
Int128 cmpv, Int128 newv, uint32_t oi)
{
#if TCG_TARGET_REG_BITS == 32
uintptr_t ra = GETPC();
Int128 oldv;
oldv = cpu_ld16_mmu(env, addr, oi, ra);
if (int128_eq(oldv, cmpv)) {
cpu_st16_mmu(env, addr, newv, oi, ra);
} else {
/* Even with comparison failure, still need a write cycle. */
probe_write(env, addr, 16, get_mmuidx(oi), ra);
}
return oldv;
#else
g_assert_not_reached();
#endif
}
#define ATOMIC_HELPER(OP, TYPE) \
TYPE HELPER(glue(atomic_,OP))(CPUArchState *env, uint64_t addr, \
TYPE HELPER(glue(atomic_,OP))(CPUArchState *env, target_ulong addr, \
TYPE val, uint32_t oi) \
{ return glue(glue(cpu_atomic_,OP),_mmu)(env, addr, val, oi, GETPC()); }


@@ -19,6 +19,7 @@
*/
#include "qemu/plugin.h"
#include "trace/mem.h"
#if DATA_SIZE == 16
# define SUFFIX o
@@ -53,14 +54,6 @@
# error unsupported data size
#endif
#if DATA_SIZE == 16
# define VALUE_LOW(val) int128_getlo(val)
# define VALUE_HIGH(val) int128_gethi(val)
#else
# define VALUE_LOW(val) val
# define VALUE_HIGH(val) 0
#endif
#if DATA_SIZE >= 4
# define ABI_TYPE DATA_TYPE
#else
@@ -71,19 +64,20 @@
the ATOMIC_NAME macro, and redefined below. */
#if DATA_SIZE == 1
# define END
#elif HOST_BIG_ENDIAN
#elif defined(HOST_WORDS_BIGENDIAN)
# define END _be
#else
# define END _le
#endif
ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
ABI_TYPE cmpv, ABI_TYPE newv,
MemOpIdx oi, uintptr_t retaddr)
TCGMemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi,
DATA_SIZE, retaddr);
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE,
PAGE_READ | PAGE_WRITE, retaddr);
DATA_TYPE ret;
uint16_t info = atomic_trace_rmw_pre(env, addr, oi);
#if DATA_SIZE == 16
ret = atomic16_cmpxchg(haddr, cmpv, newv);
@@ -91,48 +85,64 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
ret = qatomic_cmpxchg__nocheck(haddr, cmpv, newv);
#endif
ATOMIC_MMU_CLEANUP;
atomic_trace_rmw_post(env, addr,
VALUE_LOW(ret),
VALUE_HIGH(ret),
VALUE_LOW(newv),
VALUE_HIGH(newv),
oi);
atomic_trace_rmw_post(env, addr, info);
return ret;
}
#if DATA_SIZE < 16
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val,
MemOpIdx oi, uintptr_t retaddr)
#if DATA_SIZE >= 16
#if HAVE_ATOMIC128
ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr,
TCGMemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi,
DATA_SIZE, retaddr);
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE,
PAGE_READ, retaddr);
DATA_TYPE val;
uint16_t info = atomic_trace_ld_pre(env, addr, oi);
val = atomic16_read(haddr);
ATOMIC_MMU_CLEANUP;
atomic_trace_ld_post(env, addr, info);
return val;
}
void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr, ABI_TYPE val,
TCGMemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE,
PAGE_WRITE, retaddr);
uint16_t info = atomic_trace_st_pre(env, addr, oi);
atomic16_set(haddr, val);
ATOMIC_MMU_CLEANUP;
atomic_trace_st_post(env, addr, info);
}
#endif
#else
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr, ABI_TYPE val,
TCGMemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE,
PAGE_READ | PAGE_WRITE, retaddr);
DATA_TYPE ret;
uint16_t info = atomic_trace_rmw_pre(env, addr, oi);
ret = qatomic_xchg__nocheck(haddr, val);
ATOMIC_MMU_CLEANUP;
atomic_trace_rmw_post(env, addr,
VALUE_LOW(ret),
VALUE_HIGH(ret),
VALUE_LOW(val),
VALUE_HIGH(val),
oi);
atomic_trace_rmw_post(env, addr, info);
return ret;
}
#define GEN_ATOMIC_HELPER(X) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE val, MemOpIdx oi, uintptr_t retaddr) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE val, TCGMemOpIdx oi, uintptr_t retaddr) \
{ \
DATA_TYPE *haddr, ret; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, \
PAGE_READ | PAGE_WRITE, retaddr); \
DATA_TYPE ret; \
uint16_t info = atomic_trace_rmw_pre(env, addr, oi); \
ret = qatomic_##X(haddr, val); \
ATOMIC_MMU_CLEANUP; \
atomic_trace_rmw_post(env, addr, \
VALUE_LOW(ret), \
VALUE_HIGH(ret), \
VALUE_LOW(val), \
VALUE_HIGH(val), \
oi); \
atomic_trace_rmw_post(env, addr, info); \
return ret; \
}
@@ -156,11 +166,13 @@ GEN_ATOMIC_HELPER(xor_fetch)
* of CF_PARALLEL's value, we'll trace just a read and a write.
*/
#define GEN_ATOMIC_HELPER_FN(X, FN, XDATA_TYPE, RET) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE xval, MemOpIdx oi, uintptr_t retaddr) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE xval, TCGMemOpIdx oi, uintptr_t retaddr) \
{ \
XDATA_TYPE *haddr, cmp, old, new, val = xval; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \
XDATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, \
PAGE_READ | PAGE_WRITE, retaddr); \
XDATA_TYPE cmp, old, new, val = xval; \
uint16_t info = atomic_trace_rmw_pre(env, addr, oi); \
smp_mb(); \
cmp = qatomic_read__nocheck(haddr); \
do { \
@@ -168,12 +180,7 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
cmp = qatomic_cmpxchg__nocheck(haddr, old, new); \
} while (cmp != old); \
ATOMIC_MMU_CLEANUP; \
atomic_trace_rmw_post(env, addr, \
VALUE_LOW(old), \
VALUE_HIGH(old), \
VALUE_LOW(xval), \
VALUE_HIGH(xval), \
oi); \
atomic_trace_rmw_post(env, addr, info); \
return RET; \
}
@@ -188,7 +195,7 @@ GEN_ATOMIC_HELPER_FN(smax_fetch, MAX, SDATA_TYPE, new)
GEN_ATOMIC_HELPER_FN(umax_fetch, MAX, DATA_TYPE, new)
#undef GEN_ATOMIC_HELPER_FN
#endif /* DATA SIZE < 16 */
#endif /* DATA SIZE >= 16 */
#undef END
@@ -196,19 +203,20 @@ GEN_ATOMIC_HELPER_FN(umax_fetch, MAX, DATA_TYPE, new)
/* Define reverse-host-endian atomic operations. Note that END is used
within the ATOMIC_NAME macro. */
#if HOST_BIG_ENDIAN
#ifdef HOST_WORDS_BIGENDIAN
# define END _le
#else
# define END _be
#endif
ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
ABI_TYPE cmpv, ABI_TYPE newv,
MemOpIdx oi, uintptr_t retaddr)
TCGMemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi,
DATA_SIZE, retaddr);
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE,
PAGE_READ | PAGE_WRITE, retaddr);
DATA_TYPE ret;
uint16_t info = atomic_trace_rmw_pre(env, addr, oi);
#if DATA_SIZE == 16
ret = atomic16_cmpxchg(haddr, BSWAP(cmpv), BSWAP(newv));
@@ -216,48 +224,65 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
ret = qatomic_cmpxchg__nocheck(haddr, BSWAP(cmpv), BSWAP(newv));
#endif
ATOMIC_MMU_CLEANUP;
atomic_trace_rmw_post(env, addr,
VALUE_LOW(ret),
VALUE_HIGH(ret),
VALUE_LOW(newv),
VALUE_HIGH(newv),
oi);
atomic_trace_rmw_post(env, addr, info);
return BSWAP(ret);
}
#if DATA_SIZE < 16
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val,
MemOpIdx oi, uintptr_t retaddr)
#if DATA_SIZE >= 16
#if HAVE_ATOMIC128
ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr,
TCGMemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi,
DATA_SIZE, retaddr);
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE,
PAGE_READ, retaddr);
DATA_TYPE val;
uint16_t info = atomic_trace_ld_pre(env, addr, oi);
val = atomic16_read(haddr);
ATOMIC_MMU_CLEANUP;
atomic_trace_ld_post(env, addr, info);
return BSWAP(val);
}
void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr, ABI_TYPE val,
TCGMemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE,
PAGE_WRITE, retaddr);
uint16_t info = atomic_trace_st_pre(env, addr, oi);
val = BSWAP(val);
atomic16_set(haddr, val);
ATOMIC_MMU_CLEANUP;
atomic_trace_st_post(env, addr, info);
}
#endif
#else
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr, ABI_TYPE val,
TCGMemOpIdx oi, uintptr_t retaddr)
{
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE,
PAGE_READ | PAGE_WRITE, retaddr);
ABI_TYPE ret;
uint16_t info = atomic_trace_rmw_pre(env, addr, oi);
ret = qatomic_xchg__nocheck(haddr, BSWAP(val));
ATOMIC_MMU_CLEANUP;
atomic_trace_rmw_post(env, addr,
VALUE_LOW(ret),
VALUE_HIGH(ret),
VALUE_LOW(val),
VALUE_HIGH(val),
oi);
atomic_trace_rmw_post(env, addr, info);
return BSWAP(ret);
}
#define GEN_ATOMIC_HELPER(X) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE val, MemOpIdx oi, uintptr_t retaddr) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE val, TCGMemOpIdx oi, uintptr_t retaddr) \
{ \
DATA_TYPE *haddr, ret; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \
DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, \
PAGE_READ | PAGE_WRITE, retaddr); \
DATA_TYPE ret; \
uint16_t info = atomic_trace_rmw_pre(env, addr, oi); \
ret = qatomic_##X(haddr, BSWAP(val)); \
ATOMIC_MMU_CLEANUP; \
atomic_trace_rmw_post(env, addr, \
VALUE_LOW(ret), \
VALUE_HIGH(ret), \
VALUE_LOW(val), \
VALUE_HIGH(val), \
oi); \
atomic_trace_rmw_post(env, addr, info); \
return BSWAP(ret); \
}
@@ -278,11 +303,13 @@ GEN_ATOMIC_HELPER(xor_fetch)
* of CF_PARALLEL's value, we'll trace just a read and a write.
*/
#define GEN_ATOMIC_HELPER_FN(X, FN, XDATA_TYPE, RET) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ABI_TYPE xval, MemOpIdx oi, uintptr_t retaddr) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE xval, TCGMemOpIdx oi, uintptr_t retaddr) \
{ \
XDATA_TYPE *haddr, ldo, ldn, old, new, val = xval; \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \
XDATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, \
PAGE_READ | PAGE_WRITE, retaddr); \
XDATA_TYPE ldo, ldn, old, new, val = xval; \
uint16_t info = atomic_trace_rmw_pre(env, addr, oi); \
smp_mb(); \
ldn = qatomic_read__nocheck(haddr); \
do { \
@@ -290,12 +317,7 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ldn = qatomic_cmpxchg__nocheck(haddr, ldo, BSWAP(new)); \
} while (ldo != ldn); \
ATOMIC_MMU_CLEANUP; \
atomic_trace_rmw_post(env, addr, \
VALUE_LOW(old), \
VALUE_HIGH(old), \
VALUE_LOW(xval), \
VALUE_HIGH(xval), \
oi); \
atomic_trace_rmw_post(env, addr, info); \
return RET; \
}
@@ -317,7 +339,7 @@ GEN_ATOMIC_HELPER_FN(add_fetch, ADD, DATA_TYPE, new)
#undef ADD
#undef GEN_ATOMIC_HELPER_FN
#endif /* DATA_SIZE < 16 */
#endif /* DATA_SIZE >= 16 */
#undef END
#endif /* DATA_SIZE > 1 */
@@ -329,5 +351,3 @@ GEN_ATOMIC_HELPER_FN(add_fetch, ADD, DATA_TYPE, new)
#undef SUFFIX
#undef DATA_SIZE
#undef SHIFT
#undef VALUE_LOW
#undef VALUE_HIGH


@@ -18,45 +18,12 @@
*/
#include "qemu/osdep.h"
#include "exec/log.h"
#include "system/tcg.h"
#include "qemu/plugin.h"
#include "internal-common.h"
#include "sysemu/cpus.h"
#include "sysemu/tcg.h"
#include "exec/exec-all.h"
bool tcg_allowed;
bool tcg_cflags_has(CPUState *cpu, uint32_t flags)
{
return cpu->tcg_cflags & flags;
}
void tcg_cflags_set(CPUState *cpu, uint32_t flags)
{
cpu->tcg_cflags |= flags;
}
uint32_t curr_cflags(CPUState *cpu)
{
uint32_t cflags = cpu->tcg_cflags;
/*
* Record gdb single-step. We should be exiting the TB by raising
* EXCP_DEBUG, but to simplify other tests, disable chaining too.
*
* For singlestep and -d nochain, suppress goto_tb so that
* we can log -d cpu,exec after every TB.
*/
if (unlikely(cpu->singlestep_enabled)) {
cflags |= CF_NO_GOTO_TB | CF_NO_GOTO_PTR | CF_SINGLE_STEP | 1;
} else if (qatomic_read(&one_insn_per_tb)) {
cflags |= CF_NO_GOTO_TB | 1;
} else if (qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
cflags |= CF_NO_GOTO_TB;
}
return cflags;
}
/* exit the current TB, but without causing any exception to be raised */
void cpu_loop_exit_noexc(CPUState *cpu)
{
@@ -64,27 +31,53 @@ void cpu_loop_exit_noexc(CPUState *cpu)
cpu_loop_exit(cpu);
}
#if defined(CONFIG_SOFTMMU)
void cpu_reloading_memory_map(void)
{
if (qemu_in_vcpu_thread() && current_cpu->running) {
/* The guest can in theory prolong the RCU critical section as long
* as it feels like. The major problem with this is that because it
* can do multiple reconfigurations of the memory map within the
* critical section, we could potentially accumulate an unbounded
* collection of memory data structures awaiting reclamation.
*
* Because the only thing we're currently protecting with RCU is the
* memory data structures, it's sufficient to break the critical section
* in this callback, which we know will get called every time the
* memory map is rearranged.
*
* (If we add anything else in the system that uses RCU to protect
* its data structures, we will need to implement some other mechanism
* to force TCG CPUs to exit the critical section, at which point this
* part of this callback might become unnecessary.)
*
* This pair matches cpu_exec's rcu_read_lock()/rcu_read_unlock(), which
* only protects cpu->as->dispatch. Since we know our caller is about
* to reload it, it's safe to split the critical section.
*/
rcu_read_unlock();
rcu_read_lock();
}
}
#endif
void cpu_loop_exit(CPUState *cpu)
{
/* Undo the setting in cpu_tb_exec. */
cpu->neg.can_do_io = true;
/* Undo any setting in generated code. */
qemu_plugin_disable_mem_helpers(cpu);
cpu->can_do_io = 1;
siglongjmp(cpu->jmp_env, 1);
}
void cpu_loop_exit_restore(CPUState *cpu, uintptr_t pc)
{
if (pc) {
cpu_restore_state(cpu, pc);
cpu_restore_state(cpu, pc, true);
}
cpu_loop_exit(cpu);
}
void cpu_loop_exit_atomic(CPUState *cpu, uintptr_t pc)
{
/* Prevent looping if already executing in a serial context. */
g_assert(!cpu_in_serial_context(cpu));
cpu->exception_index = EXCP_ATOMIC;
cpu_loop_exit_restore(cpu, pc);
}

File diff suppressed because it is too large.

File diff suppressed because it is too large.

accel/tcg/hmp.c (new file)

@@ -0,0 +1,29 @@
#include "qemu/osdep.h"
#include "qemu/error-report.h"
#include "exec/exec-all.h"
#include "monitor/monitor.h"
#include "sysemu/tcg.h"
static void hmp_info_jit(Monitor *mon, const QDict *qdict)
{
if (!tcg_enabled()) {
error_report("JIT information is only available with accel=tcg");
return;
}
dump_exec_info();
dump_drift_info();
}
static void hmp_info_opcount(Monitor *mon, const QDict *qdict)
{
dump_opcount_info();
}
static void hmp_tcg_register(void)
{
monitor_register_hmp("jit", true, hmp_info_jit);
monitor_register_hmp("opcount", true, hmp_info_opcount);
}
type_init(hmp_tcg_register);


@@ -1,503 +0,0 @@
/*
* QEMU System Emulator
*
* Copyright (c) 2003-2008 Fabrice Bellard
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu/cutils.h"
#include "migration/vmstate.h"
#include "qapi/error.h"
#include "qemu/error-report.h"
#include "system/cpus.h"
#include "system/qtest.h"
#include "qemu/main-loop.h"
#include "qemu/option.h"
#include "qemu/seqlock.h"
#include "system/replay.h"
#include "system/runstate.h"
#include "hw/core/cpu.h"
#include "system/cpu-timers.h"
#include "system/cpu-timers-internal.h"
/*
* ICOUNT: Instruction Counter
*
* this module is split off from cpu-timers because the icount part
* is TCG-specific, and does not need to be built for other accels.
*/
static bool icount_sleep = true;
/* Arbitrarily pick 1MIPS as the minimum allowable speed. */
#define MAX_ICOUNT_SHIFT 10
bool icount_align_option;
/* Do not count executed instructions */
ICountMode use_icount = ICOUNT_DISABLED;
static void icount_enable_precise(void)
{
/* Fixed conversion of insn to ns via "shift" option */
use_icount = ICOUNT_PRECISE;
}
static void icount_enable_adaptive(void)
{
/* Runtime adaptive algorithm to compute shift */
use_icount = ICOUNT_ADAPTATIVE;
}
/*
* The current number of executed instructions is based on what we
* originally budgeted minus the current state of the decrementing
* icount counters in extra/u16.low.
*/
static int64_t icount_get_executed(CPUState *cpu)
{
return (cpu->icount_budget -
(cpu->neg.icount_decr.u16.low + cpu->icount_extra));
}
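The budget arithmetic above can be checked in isolation; a standalone sketch with made-up numbers:
#include <stdint.h>
#include <assert.h>

static int64_t executed(int64_t budget, uint16_t decr_low, int64_t extra)
{
    /* Mirrors icount_get_executed(): budgeted minus not-yet-executed. */
    return budget - (decr_low + extra);
}

int main(void)
{
    /* Budget of 100000 insns, 300 left in the decrementer, 20000 deferred. */
    assert(executed(100000, 300, 20000) == 79700);
    return 0;
}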
/*
* Update the global shared timer_state.qemu_icount to take into
* account executed instructions. This is done by the TCG vCPU
* thread so the main-loop can see time has moved forward.
*/
static void icount_update_locked(CPUState *cpu)
{
int64_t executed = icount_get_executed(cpu);
cpu->icount_budget -= executed;
qatomic_set_i64(&timers_state.qemu_icount,
timers_state.qemu_icount + executed);
}
/*
* Update the global shared timer_state.qemu_icount to take into
* account executed instructions. This is done by the TCG vCPU
* thread so the main-loop can see time has moved forward.
*/
void icount_update(CPUState *cpu)
{
seqlock_write_lock(&timers_state.vm_clock_seqlock,
&timers_state.vm_clock_lock);
icount_update_locked(cpu);
seqlock_write_unlock(&timers_state.vm_clock_seqlock,
&timers_state.vm_clock_lock);
}
static int64_t icount_get_raw_locked(void)
{
CPUState *cpu = current_cpu;
if (cpu && cpu->running) {
if (!cpu->neg.can_do_io) {
error_report("Bad icount read");
exit(1);
}
/* Take into account what has run */
icount_update_locked(cpu);
}
/* The read is protected by the seqlock, but needs atomic64 to avoid UB */
return qatomic_read_i64(&timers_state.qemu_icount);
}
static int64_t icount_get_locked(void)
{
int64_t icount = icount_get_raw_locked();
return qatomic_read_i64(&timers_state.qemu_icount_bias) +
icount_to_ns(icount);
}
int64_t icount_get_raw(void)
{
int64_t icount;
unsigned start;
do {
start = seqlock_read_begin(&timers_state.vm_clock_seqlock);
icount = icount_get_raw_locked();
} while (seqlock_read_retry(&timers_state.vm_clock_seqlock, start));
return icount;
}
/* Return the virtual CPU time, based on the instruction counter. */
int64_t icount_get(void)
{
int64_t icount;
unsigned start;
do {
start = seqlock_read_begin(&timers_state.vm_clock_seqlock);
icount = icount_get_locked();
} while (seqlock_read_retry(&timers_state.vm_clock_seqlock, start));
return icount;
}
int64_t icount_to_ns(int64_t icount)
{
return icount << qatomic_read(&timers_state.icount_time_shift);
}
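icount_to_ns() is a plain left shift, so the shift value fixes the virtual cost of one instruction; a standalone check:
#include <stdint.h>
#include <assert.h>

static int64_t to_ns(int64_t icount, int shift)
{
    return icount << shift;
}

int main(void)
{
    assert(to_ns(1, 3) == 8);                  /* shift 3: 8 ns per insn       */
    assert(to_ns(125000000, 3) == 1000000000); /* 125M insns fill one second   */
    return 0;
}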
/*
* Correlation between real and virtual time is always going to be
* fairly approximate, so ignore small variation.
* When the guest is idle real and virtual time will be aligned in
* the IO wait loop.
*/
#define ICOUNT_WOBBLE (NANOSECONDS_PER_SECOND / 10)
static void icount_adjust(void)
{
int64_t cur_time;
int64_t cur_icount;
int64_t delta;
/* If the VM is not running, then do nothing. */
if (!runstate_is_running()) {
return;
}
seqlock_write_lock(&timers_state.vm_clock_seqlock,
&timers_state.vm_clock_lock);
cur_time = REPLAY_CLOCK_LOCKED(REPLAY_CLOCK_VIRTUAL_RT,
cpu_get_clock_locked());
cur_icount = icount_get_locked();
delta = cur_icount - cur_time;
/* FIXME: This is a very crude algorithm, somewhat prone to oscillation. */
if (delta > 0
&& timers_state.last_delta + ICOUNT_WOBBLE < delta * 2
&& timers_state.icount_time_shift > 0) {
/* The guest is getting too far ahead. Slow time down. */
qatomic_set(&timers_state.icount_time_shift,
timers_state.icount_time_shift - 1);
}
if (delta < 0
&& timers_state.last_delta - ICOUNT_WOBBLE > delta * 2
&& timers_state.icount_time_shift < MAX_ICOUNT_SHIFT) {
/* The guest is getting too far behind. Speed time up. */
qatomic_set(&timers_state.icount_time_shift,
timers_state.icount_time_shift + 1);
}
timers_state.last_delta = delta;
qatomic_set_i64(&timers_state.qemu_icount_bias,
cur_icount - (timers_state.qemu_icount
<< timers_state.icount_time_shift));
seqlock_write_unlock(&timers_state.vm_clock_seqlock,
&timers_state.vm_clock_lock);
}
static void icount_adjust_rt(void *opaque)
{
timer_mod(timers_state.icount_rt_timer,
qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL_RT) + 1000);
icount_adjust();
}
static void icount_adjust_vm(void *opaque)
{
timer_mod(timers_state.icount_vm_timer,
qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
NANOSECONDS_PER_SECOND / 10);
icount_adjust();
}
int64_t icount_round(int64_t count)
{
int shift = qatomic_read(&timers_state.icount_time_shift);
return (count + (1 << shift) - 1) >> shift;
}
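icount_round() converts a nanosecond deadline into a whole number of instructions, rounding up; a standalone check for shift == 3:
#include <stdint.h>
#include <assert.h>

static int64_t round_insns(int64_t count_ns, int shift)
{
    return (count_ns + (1 << shift) - 1) >> shift;
}

int main(void)
{
    assert(round_insns(17, 3) == 3); /* 17 ns needs 3 insns of 8 ns each   */
    assert(round_insns(16, 3) == 2); /* exact multiples are not rounded up */
    return 0;
}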
static void icount_warp_rt(void)
{
unsigned seq;
int64_t warp_start;
/*
* The icount_warp_timer is rescheduled soon after vm_clock_warp_start
* changes from -1 to another value, so the race here is okay.
*/
do {
seq = seqlock_read_begin(&timers_state.vm_clock_seqlock);
warp_start = timers_state.vm_clock_warp_start;
} while (seqlock_read_retry(&timers_state.vm_clock_seqlock, seq));
if (warp_start == -1) {
return;
}
seqlock_write_lock(&timers_state.vm_clock_seqlock,
&timers_state.vm_clock_lock);
if (runstate_is_running()) {
int64_t clock = REPLAY_CLOCK_LOCKED(REPLAY_CLOCK_VIRTUAL_RT,
cpu_get_clock_locked());
int64_t warp_delta;
warp_delta = clock - timers_state.vm_clock_warp_start;
if (icount_enabled() == ICOUNT_ADAPTATIVE) {
/*
* In adaptive mode, do not let QEMU_CLOCK_VIRTUAL run too far
* ahead of real time (it might already be ahead, so be careful not
* to go backwards).
*/
int64_t cur_icount = icount_get_locked();
int64_t delta = clock - cur_icount;
if (delta < 0) {
delta = 0;
}
warp_delta = MIN(warp_delta, delta);
}
qatomic_set_i64(&timers_state.qemu_icount_bias,
timers_state.qemu_icount_bias + warp_delta);
}
timers_state.vm_clock_warp_start = -1;
seqlock_write_unlock(&timers_state.vm_clock_seqlock,
&timers_state.vm_clock_lock);
if (qemu_clock_expired(QEMU_CLOCK_VIRTUAL)) {
qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
}
}
static void icount_timer_cb(void *opaque)
{
/*
* No need for a checkpoint because the timer already synchronizes
* with CHECKPOINT_CLOCK_VIRTUAL_RT.
*/
icount_warp_rt();
}
void icount_start_warp_timer(void)
{
int64_t clock;
int64_t deadline;
assert(icount_enabled());
/*
* Nothing to do if the VM is stopped: QEMU_CLOCK_VIRTUAL timers
* do not fire, so computing the deadline does not make sense.
*/
if (!runstate_is_running()) {
return;
}
if (replay_mode != REPLAY_MODE_PLAY) {
if (!all_cpu_threads_idle()) {
return;
}
if (qtest_enabled()) {
/* When testing, qtest commands advance icount. */
return;
}
replay_checkpoint(CHECKPOINT_CLOCK_WARP_START);
} else {
/* warp clock deterministically in record/replay mode */
if (!replay_checkpoint(CHECKPOINT_CLOCK_WARP_START)) {
/*
* vCPU is sleeping and warp can't be started.
* It is probably a race condition: the notification sent
* to the vCPU was processed early and the vCPU went back to sleep.
* Therefore we have to wake it up so it can make progress.
*/
if (replay_has_event()) {
qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
}
return;
}
}
/* We want to use the earliest deadline from ALL vm_clocks */
clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT);
deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL,
~QEMU_TIMER_ATTR_EXTERNAL);
if (deadline < 0) {
if (!icount_sleep) {
warn_report_once("icount sleep disabled and no active timers");
}
return;
}
if (deadline > 0) {
/*
* Ensure QEMU_CLOCK_VIRTUAL proceeds even when the virtual CPU goes to
* sleep. Otherwise, the CPU might be waiting for a future timer
* interrupt to wake it up, but the interrupt never comes because
* the vCPU isn't running any insns and thus doesn't advance the
* QEMU_CLOCK_VIRTUAL.
*/
if (!icount_sleep) {
/*
* We never let VCPUs sleep in no-sleep icount mode.
* If there is a pending QEMU_CLOCK_VIRTUAL timer we just advance
* to the next QEMU_CLOCK_VIRTUAL event and notify it.
* This is useful when we want a deterministic execution time,
* isolated from host latencies.
*/
seqlock_write_lock(&timers_state.vm_clock_seqlock,
&timers_state.vm_clock_lock);
qatomic_set_i64(&timers_state.qemu_icount_bias,
timers_state.qemu_icount_bias + deadline);
seqlock_write_unlock(&timers_state.vm_clock_seqlock,
&timers_state.vm_clock_lock);
qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
} else {
/*
* We do stop VCPUs and only advance QEMU_CLOCK_VIRTUAL after some
* "real" time, (related to the time left until the next event) has
* passed. The QEMU_CLOCK_VIRTUAL_RT clock will do this.
* This avoids that the warps are visible externally; for example,
* you will not be sending network packets continuously instead of
* every 100ms.
*/
seqlock_write_lock(&timers_state.vm_clock_seqlock,
&timers_state.vm_clock_lock);
if (timers_state.vm_clock_warp_start == -1
|| timers_state.vm_clock_warp_start > clock) {
timers_state.vm_clock_warp_start = clock;
}
seqlock_write_unlock(&timers_state.vm_clock_seqlock,
&timers_state.vm_clock_lock);
timer_mod_anticipate(timers_state.icount_warp_timer,
clock + deadline);
}
} else if (deadline == 0) {
qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
}
}
void icount_account_warp_timer(void)
{
if (!icount_sleep) {
return;
}
/*
* Nothing to do if the VM is stopped: QEMU_CLOCK_VIRTUAL timers
* do not fire, so computing the deadline does not make sense.
*/
if (!runstate_is_running()) {
return;
}
replay_async_events();
/* warp clock deterministically in record/replay mode */
if (!replay_checkpoint(CHECKPOINT_CLOCK_WARP_ACCOUNT)) {
return;
}
timer_del(timers_state.icount_warp_timer);
icount_warp_rt();
}
bool icount_configure(QemuOpts *opts, Error **errp)
{
const char *option = qemu_opt_get(opts, "shift");
bool sleep = qemu_opt_get_bool(opts, "sleep", true);
bool align = qemu_opt_get_bool(opts, "align", false);
long time_shift = -1;
if (!option) {
if (qemu_opt_get(opts, "align") != NULL) {
error_setg(errp, "Please specify shift option when using align");
return false;
}
return true;
}
if (align && !sleep) {
error_setg(errp, "align=on and sleep=off are incompatible");
return false;
}
if (strcmp(option, "auto") != 0) {
if (qemu_strtol(option, NULL, 0, &time_shift) < 0
|| time_shift < 0 || time_shift > MAX_ICOUNT_SHIFT) {
error_setg(errp, "icount: Invalid shift value");
return false;
}
} else if (icount_align_option) {
error_setg(errp, "shift=auto and align=on are incompatible");
return false;
} else if (!icount_sleep) {
error_setg(errp, "shift=auto and sleep=off are incompatible");
return false;
}
icount_sleep = sleep;
if (icount_sleep) {
timers_state.icount_warp_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL_RT,
icount_timer_cb, NULL);
}
icount_align_option = align;
if (time_shift >= 0) {
timers_state.icount_time_shift = time_shift;
icount_enable_precise();
return true;
}
icount_enable_adaptive();
/*
* 125MIPS seems a reasonable initial guess at the guest speed.
* It will be corrected fairly quickly anyway.
*/
timers_state.icount_time_shift = 3;
/*
* Have both realtime and virtual time triggers for speed adjustment.
* The realtime trigger catches emulated time passing too slowly,
* the virtual time trigger catches emulated time passing too fast.
* Realtime triggers occur even when idle, so use them less frequently
* than VM triggers.
*/
timers_state.vm_clock_warp_start = -1;
timers_state.icount_rt_timer = timer_new_ms(QEMU_CLOCK_VIRTUAL_RT,
icount_adjust_rt, NULL);
timer_mod(timers_state.icount_rt_timer,
qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL_RT) + 1000);
timers_state.icount_vm_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
icount_adjust_vm, NULL);
timer_mod(timers_state.icount_vm_timer,
qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
NANOSECONDS_PER_SECOND / 10);
return true;
}
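This parser backs the -icount command-line option; a sketch of a few invocations built from the option forms handled above:
/*
 * -icount shift=7,sleep=on,align=on   fixed shift: one insn counts as 2^7 ns
 * -icount shift=auto,sleep=on         adaptive shift, starting at 3 (125 MIPS)
 * -icount shift=auto,align=on         rejected: "shift=auto and align=on are
 *                                     incompatible"
 */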
void icount_notify_exit(void)
{
assert(icount_enabled());
if (current_cpu) {
qemu_cpu_kick(current_cpu);
qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
}
}


@@ -1,77 +0,0 @@
/*
* Internal execution defines for qemu (target agnostic)
*
* Copyright (c) 2003 Fabrice Bellard
*
* SPDX-License-Identifier: LGPL-2.1-or-later
*/
#ifndef ACCEL_TCG_INTERNAL_COMMON_H
#define ACCEL_TCG_INTERNAL_COMMON_H
#include "exec/cpu-common.h"
#include "exec/translation-block.h"
extern int64_t max_delay;
extern int64_t max_advance;
extern bool one_insn_per_tb;
extern bool icount_align_option;
/*
* Return true if CS is not running in parallel with other cpus, either
* because there are no other cpus or we are within an exclusive context.
*/
static inline bool cpu_in_serial_context(CPUState *cs)
{
return !tcg_cflags_has(cs, CF_PARALLEL) || cpu_in_exclusive_context(cs);
}
/**
* cpu_plugin_mem_cbs_enabled() - are plugin memory callbacks enabled?
* @cs: CPUState pointer
*
* The memory callbacks are installed if a plugin has instrumented an
* instruction for memory. This can be useful to know if you want to
* force a slow path for a series of memory accesses.
*/
static inline bool cpu_plugin_mem_cbs_enabled(const CPUState *cpu)
{
#ifdef CONFIG_PLUGIN
return !!cpu->neg.plugin_mem_cbs;
#else
return false;
#endif
}
TranslationBlock *tb_gen_code(CPUState *cpu, vaddr pc,
uint64_t cs_base, uint32_t flags,
int cflags);
void page_init(void);
void tb_htable_init(void);
void tb_reset_jump(TranslationBlock *tb, int n);
TranslationBlock *tb_link_page(TranslationBlock *tb);
void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
uintptr_t host_pc);
/**
* tlb_init - initialize a CPU's TLB
* @cpu: CPU whose TLB should be initialized
*/
void tlb_init(CPUState *cpu);
/**
* tlb_destroy - destroy a CPU's TLB
* @cpu: CPU whose TLB should be destroyed
*/
void tlb_destroy(CPUState *cpu);
bool tcg_exec_realizefn(CPUState *cpu, Error **errp);
void tcg_exec_unrealizefn(CPUState *cpu);
/* current cflags for hashing/comparison */
uint32_t curr_cflags(CPUState *cpu);
void tb_check_watchpoint(CPUState *cpu, uintptr_t retaddr);
#endif


@@ -1,79 +0,0 @@
/*
* Internal execution defines for qemu (target specific)
*
* Copyright (c) 2003 Fabrice Bellard
*
* SPDX-License-Identifier: LGPL-2.1-or-later
*/
#ifndef ACCEL_TCG_INTERNAL_TARGET_H
#define ACCEL_TCG_INTERNAL_TARGET_H
#include "exec/exec-all.h"
#include "exec/translation-block.h"
#include "tb-internal.h"
#include "tcg-target-mo.h"
/*
* Access to the various translations structures need to be serialised
* via locks for consistency. In user-mode emulation access to the
* memory related structures are protected with mmap_lock.
* In !user-mode we use per-page locks.
*/
#ifdef CONFIG_USER_ONLY
#define assert_memory_lock() tcg_debug_assert(have_mmap_lock())
#else
#define assert_memory_lock()
#endif
#if defined(CONFIG_SOFTMMU) && defined(CONFIG_DEBUG_TCG)
void assert_no_pages_locked(void);
#else
static inline void assert_no_pages_locked(void) { }
#endif
#ifdef CONFIG_USER_ONLY
static inline void page_table_config_init(void) { }
#else
void page_table_config_init(void);
#endif
#ifndef CONFIG_USER_ONLY
G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
#endif /* CONFIG_USER_ONLY */
/**
* tcg_req_mo:
* @type: TCGBar
*
* Filter @type to the barrier that is required for the guest
* memory ordering vs the host memory ordering. A non-zero
* result indicates that some barrier is required.
*
* If TCG_GUEST_DEFAULT_MO is not defined, assume that the
* guest requires strict ordering.
*
* This is a macro so that it's constant even without optimization.
*/
#ifdef TCG_GUEST_DEFAULT_MO
# define tcg_req_mo(type) \
((type) & TCG_GUEST_DEFAULT_MO & ~TCG_TARGET_DEFAULT_MO)
#else
# define tcg_req_mo(type) ((type) & ~TCG_TARGET_DEFAULT_MO)
#endif
/**
* cpu_req_mo:
* @type: TCGBar
*
* If tcg_req_mo indicates a barrier for @type is required
* for the guest memory model, issue a host memory barrier.
*/
#define cpu_req_mo(type) \
do { \
if (tcg_req_mo(type)) { \
smp_mb(); \
} \
} while (0)
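A hedged sketch of how a memory-access fast path would consult cpu_req_mo(); the function name is illustrative, and the TCG_MO_* bits are the usual TCGBar flags:
static uint64_t example_guest_load(uint64_t *host_addr)
{
    /* Emit a host barrier only if the guest ordering is stronger than
     * what the host already provides via TCG_TARGET_DEFAULT_MO. */
    cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
    return *host_addr;
}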
#endif /* ACCEL_TCG_INTERNAL_H */

accel/tcg/internal.h (new file)

@@ -0,0 +1,22 @@
/*
* Internal execution defines for qemu
*
* Copyright (c) 2003 Fabrice Bellard
*
* SPDX-License-Identifier: LGPL-2.1-or-later
*/
#ifndef ACCEL_TCG_INTERNAL_H
#define ACCEL_TCG_INTERNAL_H
#include "exec/exec-all.h"
TranslationBlock *tb_gen_code(CPUState *cpu, target_ulong pc,
target_ulong cs_base, uint32_t flags,
int cflags);
void QEMU_NORETURN cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
void page_init(void);
void tb_htable_init(void);
#endif /* ACCEL_TCG_INTERNAL_H */

File diff suppressed because it is too large.

@@ -1,560 +0,0 @@
/*
* Routines common to user and system emulation of load/store.
*
* Copyright (c) 2003 Fabrice Bellard
*
* SPDX-License-Identifier: GPL-2.0-or-later
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
/*
* Load helpers for tcg-ldst.h
*/
tcg_target_ulong helper_ldub_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
return do_ld1_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
tcg_target_ulong helper_lduw_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
return do_ld2_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
tcg_target_ulong helper_ldul_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
return do_ld4_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
uint64_t helper_ldq_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
return do_ld8_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
}
/*
* Provide signed versions of the load routines as well. We can of course
* avoid this for 64-bit data, or for 32-bit data on 32-bit host.
*/
tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
return (int8_t)helper_ldub_mmu(env, addr, oi, retaddr);
}
tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
return (int16_t)helper_lduw_mmu(env, addr, oi, retaddr);
}
tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
return (int32_t)helper_ldul_mmu(env, addr, oi, retaddr);
}
Int128 helper_ld16_mmu(CPUArchState *env, uint64_t addr,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
return do_ld16_mmu(env_cpu(env), addr, oi, retaddr);
}
Int128 helper_ld_i128(CPUArchState *env, uint64_t addr, uint32_t oi)
{
return helper_ld16_mmu(env, addr, oi, GETPC());
}
/*
* Store helpers for tcg-ldst.h
*/
void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
do_st1_mmu(env_cpu(env), addr, val, oi, ra);
}
void helper_stw_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
do_st2_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_stl_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
do_st4_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_stq_mmu(CPUArchState *env, uint64_t addr, uint64_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
do_st8_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_st16_mmu(CPUArchState *env, uint64_t addr, Int128 val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
do_st16_mmu(env_cpu(env), addr, val, oi, retaddr);
}
void helper_st_i128(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi)
{
helper_st16_mmu(env, addr, val, oi, GETPC());
}
/*
* Load helpers for cpu_ldst.h
*/
static void plugin_load_cb(CPUArchState *env, abi_ptr addr,
uint64_t value_low,
uint64_t value_high,
MemOpIdx oi)
{
if (cpu_plugin_mem_cbs_enabled(env_cpu(env))) {
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr,
value_low, value_high,
oi, QEMU_PLUGIN_MEM_R);
}
}
uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra)
{
uint8_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_UB);
ret = do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, ret, 0, oi);
return ret;
}
uint16_t cpu_ldw_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint16_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
ret = do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, ret, 0, oi);
return ret;
}
uint32_t cpu_ldl_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint32_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
ret = do_ld4_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, ret, 0, oi);
return ret;
}
uint64_t cpu_ldq_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
uint64_t ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
ret = do_ld8_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
plugin_load_cb(env, addr, ret, 0, oi);
return ret;
}
Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
MemOpIdx oi, uintptr_t ra)
{
Int128 ret;
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
ret = do_ld16_mmu(env_cpu(env), addr, oi, ra);
plugin_load_cb(env, addr, int128_getlo(ret), int128_gethi(ret), oi);
return ret;
}
/*
* Store helpers for cpu_ldst.h
*/
static void plugin_store_cb(CPUArchState *env, abi_ptr addr,
uint64_t value_low,
uint64_t value_high,
MemOpIdx oi)
{
if (cpu_plugin_mem_cbs_enabled(env_cpu(env))) {
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr,
value_low, value_high,
oi, QEMU_PLUGIN_MEM_W);
}
}
void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
MemOpIdx oi, uintptr_t retaddr)
{
helper_stb_mmu(env, addr, val, oi, retaddr);
plugin_store_cb(env, addr, val, 0, oi);
}
void cpu_stw_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
do_st2_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, val, 0, oi);
}
void cpu_stl_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
do_st4_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, val, 0, oi);
}
void cpu_stq_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
do_st8_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, val, 0, oi);
}
void cpu_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val,
MemOpIdx oi, uintptr_t retaddr)
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
do_st16_mmu(env_cpu(env), addr, val, oi, retaddr);
plugin_store_cb(env, addr, int128_getlo(val), int128_gethi(val), oi);
}
/*
* Wrappers of the above
*/
uint32_t cpu_ldub_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra)
{
MemOpIdx oi = make_memop_idx(MO_UB, mmu_idx);
return cpu_ldb_mmu(env, addr, oi, ra);
}
int cpu_ldsb_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra)
{
return (int8_t)cpu_ldub_mmuidx_ra(env, addr, mmu_idx, ra);
}
uint32_t cpu_lduw_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra)
{
MemOpIdx oi = make_memop_idx(MO_BEUW | MO_UNALN, mmu_idx);
return cpu_ldw_mmu(env, addr, oi, ra);
}
int cpu_ldsw_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra)
{
return (int16_t)cpu_lduw_be_mmuidx_ra(env, addr, mmu_idx, ra);
}
uint32_t cpu_ldl_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra)
{
MemOpIdx oi = make_memop_idx(MO_BEUL | MO_UNALN, mmu_idx);
return cpu_ldl_mmu(env, addr, oi, ra);
}
uint64_t cpu_ldq_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra)
{
MemOpIdx oi = make_memop_idx(MO_BEUQ | MO_UNALN, mmu_idx);
return cpu_ldq_mmu(env, addr, oi, ra);
}
uint32_t cpu_lduw_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra)
{
MemOpIdx oi = make_memop_idx(MO_LEUW | MO_UNALN, mmu_idx);
return cpu_ldw_mmu(env, addr, oi, ra);
}
int cpu_ldsw_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra)
{
return (int16_t)cpu_lduw_le_mmuidx_ra(env, addr, mmu_idx, ra);
}
uint32_t cpu_ldl_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra)
{
MemOpIdx oi = make_memop_idx(MO_LEUL | MO_UNALN, mmu_idx);
return cpu_ldl_mmu(env, addr, oi, ra);
}
uint64_t cpu_ldq_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra)
{
MemOpIdx oi = make_memop_idx(MO_LEUQ | MO_UNALN, mmu_idx);
return cpu_ldq_mmu(env, addr, oi, ra);
}
void cpu_stb_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
int mmu_idx, uintptr_t ra)
{
MemOpIdx oi = make_memop_idx(MO_UB, mmu_idx);
cpu_stb_mmu(env, addr, val, oi, ra);
}
void cpu_stw_be_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
int mmu_idx, uintptr_t ra)
{
MemOpIdx oi = make_memop_idx(MO_BEUW | MO_UNALN, mmu_idx);
cpu_stw_mmu(env, addr, val, oi, ra);
}
void cpu_stl_be_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
int mmu_idx, uintptr_t ra)
{
MemOpIdx oi = make_memop_idx(MO_BEUL | MO_UNALN, mmu_idx);
cpu_stl_mmu(env, addr, val, oi, ra);
}
void cpu_stq_be_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint64_t val,
int mmu_idx, uintptr_t ra)
{
MemOpIdx oi = make_memop_idx(MO_BEUQ | MO_UNALN, mmu_idx);
cpu_stq_mmu(env, addr, val, oi, ra);
}
void cpu_stw_le_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
int mmu_idx, uintptr_t ra)
{
MemOpIdx oi = make_memop_idx(MO_LEUW | MO_UNALN, mmu_idx);
cpu_stw_mmu(env, addr, val, oi, ra);
}
void cpu_stl_le_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
int mmu_idx, uintptr_t ra)
{
MemOpIdx oi = make_memop_idx(MO_LEUL | MO_UNALN, mmu_idx);
cpu_stl_mmu(env, addr, val, oi, ra);
}
void cpu_stq_le_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint64_t val,
int mmu_idx, uintptr_t ra)
{
MemOpIdx oi = make_memop_idx(MO_LEUQ | MO_UNALN, mmu_idx);
cpu_stq_mmu(env, addr, val, oi, ra);
}
/*--------------------------*/
uint32_t cpu_ldub_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldub_mmuidx_ra(env, addr, mmu_index, ra);
}
int cpu_ldsb_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
return (int8_t)cpu_ldub_data_ra(env, addr, ra);
}
uint32_t cpu_lduw_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_lduw_be_mmuidx_ra(env, addr, mmu_index, ra);
}
int cpu_ldsw_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
return (int16_t)cpu_lduw_be_data_ra(env, addr, ra);
}
uint32_t cpu_ldl_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldl_be_mmuidx_ra(env, addr, mmu_index, ra);
}
uint64_t cpu_ldq_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldq_be_mmuidx_ra(env, addr, mmu_index, ra);
}
uint32_t cpu_lduw_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_lduw_le_mmuidx_ra(env, addr, mmu_index, ra);
}
int cpu_ldsw_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
return (int16_t)cpu_lduw_le_data_ra(env, addr, ra);
}
uint32_t cpu_ldl_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldl_le_mmuidx_ra(env, addr, mmu_index, ra);
}
uint64_t cpu_ldq_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldq_le_mmuidx_ra(env, addr, mmu_index, ra);
}
void cpu_stb_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stb_mmuidx_ra(env, addr, val, mmu_index, ra);
}
void cpu_stw_be_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stw_be_mmuidx_ra(env, addr, val, mmu_index, ra);
}
void cpu_stl_be_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stl_be_mmuidx_ra(env, addr, val, mmu_index, ra);
}
void cpu_stq_be_data_ra(CPUArchState *env, abi_ptr addr,
uint64_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stq_be_mmuidx_ra(env, addr, val, mmu_index, ra);
}
void cpu_stw_le_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stw_le_mmuidx_ra(env, addr, val, mmu_index, ra);
}
void cpu_stl_le_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stl_le_mmuidx_ra(env, addr, val, mmu_index, ra);
}
void cpu_stq_le_data_ra(CPUArchState *env, abi_ptr addr,
uint64_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stq_le_mmuidx_ra(env, addr, val, mmu_index, ra);
}
/*--------------------------*/
uint32_t cpu_ldub_data(CPUArchState *env, abi_ptr addr)
{
return cpu_ldub_data_ra(env, addr, 0);
}
int cpu_ldsb_data(CPUArchState *env, abi_ptr addr)
{
return (int8_t)cpu_ldub_data(env, addr);
}
uint32_t cpu_lduw_be_data(CPUArchState *env, abi_ptr addr)
{
return cpu_lduw_be_data_ra(env, addr, 0);
}
int cpu_ldsw_be_data(CPUArchState *env, abi_ptr addr)
{
return (int16_t)cpu_lduw_be_data(env, addr);
}
uint32_t cpu_ldl_be_data(CPUArchState *env, abi_ptr addr)
{
return cpu_ldl_be_data_ra(env, addr, 0);
}
uint64_t cpu_ldq_be_data(CPUArchState *env, abi_ptr addr)
{
return cpu_ldq_be_data_ra(env, addr, 0);
}
uint32_t cpu_lduw_le_data(CPUArchState *env, abi_ptr addr)
{
return cpu_lduw_le_data_ra(env, addr, 0);
}
int cpu_ldsw_le_data(CPUArchState *env, abi_ptr addr)
{
return (int16_t)cpu_lduw_le_data(env, addr);
}
uint32_t cpu_ldl_le_data(CPUArchState *env, abi_ptr addr)
{
return cpu_ldl_le_data_ra(env, addr, 0);
}
uint64_t cpu_ldq_le_data(CPUArchState *env, abi_ptr addr)
{
return cpu_ldq_le_data_ra(env, addr, 0);
}
void cpu_stb_data(CPUArchState *env, abi_ptr addr, uint32_t val)
{
cpu_stb_data_ra(env, addr, val, 0);
}
void cpu_stw_be_data(CPUArchState *env, abi_ptr addr, uint32_t val)
{
cpu_stw_be_data_ra(env, addr, val, 0);
}
void cpu_stl_be_data(CPUArchState *env, abi_ptr addr, uint32_t val)
{
cpu_stl_be_data_ra(env, addr, val, 0);
}
void cpu_stq_be_data(CPUArchState *env, abi_ptr addr, uint64_t val)
{
cpu_stq_be_data_ra(env, addr, val, 0);
}
void cpu_stw_le_data(CPUArchState *env, abi_ptr addr, uint32_t val)
{
cpu_stw_le_data_ra(env, addr, val, 0);
}
void cpu_stl_le_data(CPUArchState *env, abi_ptr addr, uint32_t val)
{
cpu_stl_le_data_ra(env, addr, val, 0);
}
void cpu_stq_le_data(CPUArchState *env, abi_ptr addr, uint64_t val)
{
cpu_stq_le_data_ra(env, addr, val, 0);
}


@@ -1,33 +1,26 @@
common_ss.add(when: 'CONFIG_TCG', if_true: files(
'cpu-exec-common.c',
'tcg-runtime.c',
'tcg-runtime-gvec.c',
))
tcg_specific_ss = ss.source_set()
tcg_specific_ss.add(files(
tcg_ss = ss.source_set()
tcg_ss.add(files(
'tcg-all.c',
'cpu-exec-common.c',
'cpu-exec.c',
'tb-maint.c',
'tcg-runtime-gvec.c',
'tcg-runtime.c',
'translate-all.c',
'translator.c',
))
tcg_specific_ss.add(when: 'CONFIG_USER_ONLY', if_true: files('user-exec.c'))
tcg_specific_ss.add(when: 'CONFIG_SYSTEM_ONLY', if_false: files('user-exec-stub.c'))
if get_option('plugins')
tcg_specific_ss.add(files('plugin-gen.c'))
endif
specific_ss.add_all(when: 'CONFIG_TCG', if_true: tcg_specific_ss)
tcg_ss.add(when: 'CONFIG_USER_ONLY', if_true: files('user-exec.c'))
tcg_ss.add(when: 'CONFIG_SOFTMMU', if_false: files('user-exec-stub.c'))
tcg_ss.add(when: 'CONFIG_PLUGIN', if_true: [files('plugin-gen.c'), libdl])
specific_ss.add_all(when: 'CONFIG_TCG', if_true: tcg_ss)
specific_ss.add(when: ['CONFIG_SYSTEM_ONLY', 'CONFIG_TCG'], if_true: files(
specific_ss.add(when: ['CONFIG_SOFTMMU', 'CONFIG_TCG'], if_true: files(
'cputlb.c',
'hmp.c',
))
system_ss.add(when: ['CONFIG_TCG'], if_true: files(
'icount-common.c',
'monitor.c',
tcg_module_ss.add(when: ['CONFIG_SOFTMMU', 'CONFIG_TCG'], if_true: files(
'tcg-accel-ops.c',
'tcg-accel-ops-icount.c',
'tcg-accel-ops-mttcg.c',
'tcg-accel-ops-icount.c',
'tcg-accel-ops-rr.c',
'watchpoint.c',
))


@@ -1,243 +0,0 @@
/*
* SPDX-License-Identifier: LGPL-2.1-or-later
*
* QEMU TCG monitor
*
* Copyright (c) 2003-2005 Fabrice Bellard
*/
#include "qemu/osdep.h"
#include "qemu/accel.h"
#include "qemu/qht.h"
#include "qapi/error.h"
#include "qapi/type-helpers.h"
#include "qapi/qapi-commands-machine.h"
#include "monitor/monitor.h"
#include "system/cpu-timers.h"
#include "system/tcg.h"
#include "tcg/tcg.h"
#include "internal-common.h"
#include "tb-context.h"
static void dump_drift_info(GString *buf)
{
if (!icount_enabled()) {
return;
}
g_string_append_printf(buf, "Host - Guest clock %"PRIi64" ms\n",
(cpu_get_clock() - icount_get()) / SCALE_MS);
if (icount_align_option) {
g_string_append_printf(buf, "Max guest delay %"PRIi64" ms\n",
-max_delay / SCALE_MS);
g_string_append_printf(buf, "Max guest advance %"PRIi64" ms\n",
max_advance / SCALE_MS);
} else {
g_string_append_printf(buf, "Max guest delay NA\n");
g_string_append_printf(buf, "Max guest advance NA\n");
}
}
static void dump_accel_info(GString *buf)
{
AccelState *accel = current_accel();
bool one_insn_per_tb = object_property_get_bool(OBJECT(accel),
"one-insn-per-tb",
&error_fatal);
g_string_append_printf(buf, "Accelerator settings:\n");
g_string_append_printf(buf, "one-insn-per-tb: %s\n\n",
one_insn_per_tb ? "on" : "off");
}
static void print_qht_statistics(struct qht_stats hst, GString *buf)
{
uint32_t hgram_opts;
size_t hgram_bins;
char *hgram;
if (!hst.head_buckets) {
return;
}
g_string_append_printf(buf, "TB hash buckets %zu/%zu "
"(%0.2f%% head buckets used)\n",
hst.used_head_buckets, hst.head_buckets,
(double)hst.used_head_buckets /
hst.head_buckets * 100);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_opts |= QDIST_PR_100X | QDIST_PR_PERCENT;
if (qdist_xmax(&hst.occupancy) - qdist_xmin(&hst.occupancy) == 1) {
hgram_opts |= QDIST_PR_NODECIMAL;
}
hgram = qdist_pr(&hst.occupancy, 10, hgram_opts);
g_string_append_printf(buf, "TB hash occupancy %0.2f%% avg chain occ. "
"Histogram: %s\n",
qdist_avg(&hst.occupancy) * 100, hgram);
g_free(hgram);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_bins = qdist_xmax(&hst.chain) - qdist_xmin(&hst.chain);
if (hgram_bins > 10) {
hgram_bins = 10;
} else {
hgram_bins = 0;
hgram_opts |= QDIST_PR_NODECIMAL | QDIST_PR_NOBINRANGE;
}
hgram = qdist_pr(&hst.chain, hgram_bins, hgram_opts);
g_string_append_printf(buf, "TB hash avg chain %0.3f buckets. "
"Histogram: %s\n",
qdist_avg(&hst.chain), hgram);
g_free(hgram);
}
struct tb_tree_stats {
size_t nb_tbs;
size_t host_size;
size_t target_size;
size_t max_target_size;
size_t direct_jmp_count;
size_t direct_jmp2_count;
size_t cross_page;
};
static gboolean tb_tree_stats_iter(gpointer key, gpointer value, gpointer data)
{
const TranslationBlock *tb = value;
struct tb_tree_stats *tst = data;
tst->nb_tbs++;
tst->host_size += tb->tc.size;
tst->target_size += tb->size;
if (tb->size > tst->max_target_size) {
tst->max_target_size = tb->size;
}
if (tb->page_addr[1] != -1) {
tst->cross_page++;
}
if (tb->jmp_reset_offset[0] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp_count++;
if (tb->jmp_reset_offset[1] != TB_JMP_OFFSET_INVALID) {
tst->direct_jmp2_count++;
}
}
return false;
}
static void tlb_flush_counts(size_t *pfull, size_t *ppart, size_t *pelide)
{
CPUState *cpu;
size_t full = 0, part = 0, elide = 0;
CPU_FOREACH(cpu) {
full += qatomic_read(&cpu->neg.tlb.c.full_flush_count);
part += qatomic_read(&cpu->neg.tlb.c.part_flush_count);
elide += qatomic_read(&cpu->neg.tlb.c.elide_flush_count);
}
*pfull = full;
*ppart = part;
*pelide = elide;
}
static void tcg_dump_info(GString *buf)
{
g_string_append_printf(buf, "[TCG profiler not compiled]\n");
}
static void dump_exec_info(GString *buf)
{
struct tb_tree_stats tst = {};
struct qht_stats hst;
size_t nb_tbs, flush_full, flush_part, flush_elide;
tcg_tb_foreach(tb_tree_stats_iter, &tst);
nb_tbs = tst.nb_tbs;
/* XXX: avoid using doubles ? */
g_string_append_printf(buf, "Translation buffer state:\n");
/*
* Report total code size including the padding and TB structs;
* otherwise users might think "-accel tcg,tb-size" is not honoured.
* For avg host size we use the precise numbers from tb_tree_stats though.
*/
g_string_append_printf(buf, "gen code size %zu/%zu\n",
tcg_code_size(), tcg_code_capacity());
g_string_append_printf(buf, "TB count %zu\n", nb_tbs);
g_string_append_printf(buf, "TB avg target size %zu max=%zu bytes\n",
nb_tbs ? tst.target_size / nb_tbs : 0,
tst.max_target_size);
g_string_append_printf(buf, "TB avg host size %zu bytes "
"(expansion ratio: %0.1f)\n",
nb_tbs ? tst.host_size / nb_tbs : 0,
tst.target_size ?
(double)tst.host_size / tst.target_size : 0);
g_string_append_printf(buf, "cross page TB count %zu (%zu%%)\n",
tst.cross_page,
nb_tbs ? (tst.cross_page * 100) / nb_tbs : 0);
g_string_append_printf(buf, "direct jump count %zu (%zu%%) "
"(2 jumps=%zu %zu%%)\n",
tst.direct_jmp_count,
nb_tbs ? (tst.direct_jmp_count * 100) / nb_tbs : 0,
tst.direct_jmp2_count,
nb_tbs ? (tst.direct_jmp2_count * 100) / nb_tbs : 0);
qht_statistics_init(&tb_ctx.htable, &hst);
print_qht_statistics(hst, buf);
qht_statistics_destroy(&hst);
g_string_append_printf(buf, "\nStatistics:\n");
g_string_append_printf(buf, "TB flush count %u\n",
qatomic_read(&tb_ctx.tb_flush_count));
g_string_append_printf(buf, "TB invalidate count %u\n",
qatomic_read(&tb_ctx.tb_phys_invalidate_count));
tlb_flush_counts(&flush_full, &flush_part, &flush_elide);
g_string_append_printf(buf, "TLB full flushes %zu\n", flush_full);
g_string_append_printf(buf, "TLB partial flushes %zu\n", flush_part);
g_string_append_printf(buf, "TLB elided flushes %zu\n", flush_elide);
tcg_dump_info(buf);
}
HumanReadableText *qmp_x_query_jit(Error **errp)
{
g_autoptr(GString) buf = g_string_new("");
if (!tcg_enabled()) {
error_setg(errp, "JIT information is only available with accel=tcg");
return NULL;
}
dump_accel_info(buf);
dump_exec_info(buf);
dump_drift_info(buf);
return human_readable_text_from_str(buf);
}
static void tcg_dump_op_count(GString *buf)
{
g_string_append_printf(buf, "[TCG profiler not compiled]\n");
}
HumanReadableText *qmp_x_query_opcount(Error **errp)
{
g_autoptr(GString) buf = g_string_new("");
if (!tcg_enabled()) {
error_setg(errp,
"Opcode count information is only available with accel=tcg");
return NULL;
}
tcg_dump_op_count(buf);
return human_readable_text_from_str(buf);
}
static void hmp_tcg_register(void)
{
monitor_register_hmp_info_hrt("jit", qmp_x_query_jit);
monitor_register_hmp_info_hrt("opcount", qmp_x_query_opcount);
}
type_init(hmp_tcg_register);
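These registrations back the HMP monitor commands; a usage sketch:
/*
 * (qemu) info jit        accelerator settings, TB and TLB statistics
 * (qemu) info opcount    opcode counts (stub text when the TCG profiler
 *                        is not compiled in)
 */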

File diff suppressed because it is too large.

@@ -0,0 +1,4 @@
#ifdef CONFIG_PLUGIN
DEF_HELPER_FLAGS_2(plugin_vcpu_udata_cb, TCG_CALL_NO_RWG, void, i32, ptr)
DEF_HELPER_FLAGS_4(plugin_vcpu_mem_cb, TCG_CALL_NO_RWG, void, i32, i32, i64, ptr)
#endif


@@ -22,9 +22,7 @@
#include "exec/cpu-defs.h"
#include "exec/exec-all.h"
#include "exec/translation-block.h"
#include "qemu/xxhash.h"
#include "tb-jmp-cache.h"
#ifdef CONFIG_SOFTMMU
@@ -36,16 +34,16 @@
#define TB_JMP_ADDR_MASK (TB_JMP_PAGE_SIZE - 1)
#define TB_JMP_PAGE_MASK (TB_JMP_CACHE_SIZE - TB_JMP_PAGE_SIZE)
static inline unsigned int tb_jmp_cache_hash_page(vaddr pc)
static inline unsigned int tb_jmp_cache_hash_page(target_ulong pc)
{
vaddr tmp;
target_ulong tmp;
tmp = pc ^ (pc >> (TARGET_PAGE_BITS - TB_JMP_PAGE_BITS));
return (tmp >> (TARGET_PAGE_BITS - TB_JMP_PAGE_BITS)) & TB_JMP_PAGE_MASK;
}
static inline unsigned int tb_jmp_cache_hash_func(vaddr pc)
static inline unsigned int tb_jmp_cache_hash_func(target_ulong pc)
{
vaddr tmp;
target_ulong tmp;
tmp = pc ^ (pc >> (TARGET_PAGE_BITS - TB_JMP_PAGE_BITS));
return (((tmp >> (TARGET_PAGE_BITS - TB_JMP_PAGE_BITS)) & TB_JMP_PAGE_MASK)
| (tmp & TB_JMP_ADDR_MASK));
@@ -54,7 +52,7 @@ static inline unsigned int tb_jmp_cache_hash_func(vaddr pc)
#else
/* In user-mode we can get better hashing because we do not have a TLB */
static inline unsigned int tb_jmp_cache_hash_func(vaddr pc)
static inline unsigned int tb_jmp_cache_hash_func(target_ulong pc)
{
return (pc ^ (pc >> TB_JMP_CACHE_BITS)) & (TB_JMP_CACHE_SIZE - 1);
}
@@ -62,10 +60,10 @@ static inline unsigned int tb_jmp_cache_hash_func(vaddr pc)
#endif /* CONFIG_SOFTMMU */
static inline
uint32_t tb_hash_func(tb_page_addr_t phys_pc, vaddr pc,
uint32_t flags, uint64_t flags2, uint32_t cf_mask)
uint32_t tb_hash_func(tb_page_addr_t phys_pc, target_ulong pc, uint32_t flags,
uint32_t cf_mask, uint32_t trace_vcpu_dstate)
{
return qemu_xxhash8(phys_pc, pc, flags2, flags, cf_mask);
return qemu_xxhash7(phys_pc, pc, flags, cf_mask, trace_vcpu_dstate);
}
#endif


@@ -1,89 +0,0 @@
/*
* TranslationBlock internal declarations (target specific)
*
* Copyright (c) 2003 Fabrice Bellard
*
* SPDX-License-Identifier: LGPL-2.1-or-later
*/
#ifndef ACCEL_TCG_TB_INTERNAL_TARGET_H
#define ACCEL_TCG_TB_INTERNAL_TARGET_H
#include "exec/cpu-all.h"
#include "exec/exec-all.h"
#include "exec/translation-block.h"
/*
* The true return address will often point to a host insn that is part of
* the next translated guest insn. Adjust the address backward to point to
* the middle of the call insn. Subtracting one would do the job except for
* several compressed mode architectures (arm, mips) which set the low bit
* to indicate the compressed mode; subtracting two works around that. It
* is also the case that there are no host isas that contain a call insn
* smaller than 4 bytes, so we don't worry about special-casing this.
*/
#define GETPC_ADJ 2
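A hedged sketch of the usual caller-side pattern (the helper name is illustrative):
static uintptr_t adjust_host_pc(uintptr_t retaddr)
{
    /* Point into the middle of the call insn rather than just after it. */
    return retaddr ? retaddr - GETPC_ADJ : 0;
}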
#ifdef CONFIG_SOFTMMU
#define CPU_TLB_DYN_MIN_BITS 6
#define CPU_TLB_DYN_DEFAULT_BITS 8
# if HOST_LONG_BITS == 32
/* Make sure we do not require a double-word shift for the TLB load */
# define CPU_TLB_DYN_MAX_BITS (32 - TARGET_PAGE_BITS)
# else /* HOST_LONG_BITS == 64 */
/*
* Assuming TARGET_PAGE_BITS==12, with 2**22 entries we can cover 2**(22+12) ==
* 2**34 == 16G of address space. This is roughly what one would expect a
* TLB to cover in a modern (as of 2018) x86_64 CPU. For instance, Intel
* Skylake's Level-2 STLB has 16 1G entries.
* Also, make sure we do not size the TLB past the guest's address space.
*/
# ifdef TARGET_PAGE_BITS_VARY
# define CPU_TLB_DYN_MAX_BITS \
MIN(22, TARGET_VIRT_ADDR_SPACE_BITS - TARGET_PAGE_BITS)
# else
# define CPU_TLB_DYN_MAX_BITS \
MIN_CONST(22, TARGET_VIRT_ADDR_SPACE_BITS - TARGET_PAGE_BITS)
# endif
# endif
#endif /* CONFIG_SOFTMMU */
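A standalone check of the sizing arithmetic in the comment above (4 KiB pages assumed):
#include <stdint.h>
#include <assert.h>

int main(void)
{
    const int target_page_bits = 12;  /* 4 KiB pages                      */
    const int max_tlb_bits = 22;      /* CPU_TLB_DYN_MAX_BITS upper bound */
    uint64_t covered = 1ULL << (max_tlb_bits + target_page_bits);
    assert(covered == (1ULL << 34));  /* 16 GiB of guest address space    */
    return 0;
}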
#ifdef CONFIG_USER_ONLY
#include "user/page-protection.h"
/*
* For user-only, page_protect sets the page read-only.
* Since most execution is already on read-only pages, and we'd need to
* account for other TBs on the same page, defer undoing any page protection
* until we receive the write fault.
*/
static inline void tb_lock_page0(tb_page_addr_t p0)
{
page_protect(p0);
}
static inline void tb_lock_page1(tb_page_addr_t p0, tb_page_addr_t p1)
{
page_protect(p1);
}
static inline void tb_unlock_page1(tb_page_addr_t p0, tb_page_addr_t p1) { }
static inline void tb_unlock_pages(TranslationBlock *tb) { }
#else
void tb_lock_page0(tb_page_addr_t);
void tb_lock_page1(tb_page_addr_t, tb_page_addr_t);
void tb_unlock_page1(tb_page_addr_t, tb_page_addr_t);
void tb_unlock_pages(TranslationBlock *);
#endif
#ifdef CONFIG_SOFTMMU
void tb_invalidate_phys_range_fast(ram_addr_t ram_addr,
unsigned size,
uintptr_t retaddr);
#endif /* CONFIG_SOFTMMU */
bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc);
#endif
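
The comment block above carries the sizing arithmetic for the softmmu TLB: with 4 KiB pages, 2**22 entries cover 2**(22+12) == 16 GiB, and the MIN() against TARGET_VIRT_ADDR_SPACE_BITS - TARGET_PAGE_BITS keeps the TLB from outgrowing the guest's address space. A toy version of that rule follows; tlb_dyn_max_bits and its parameters are made-up names for illustration, not QEMU APIs:

#include <stdint.h>
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

static unsigned tlb_dyn_max_bits(unsigned target_page_bits,
                                 unsigned target_virt_addr_space_bits)
{
    /* Cap at 22 bits of entries, but never past the guest address space. */
    return MIN(22u, target_virt_addr_space_bits - target_page_bits);
}

int main(void)
{
    unsigned page_bits = 12;           /* 4 KiB pages */
    unsigned va_bits   = 48;           /* e.g. x86_64 4-level paging */
    unsigned max_bits  = tlb_dyn_max_bits(page_bits, va_bits);
    uint64_t coverage  = (uint64_t)1 << (max_bits + page_bits);

    printf("max bits: %u, covered: %llu GiB\n",
           max_bits, (unsigned long long)(coverage >> 30));
    return 0;
}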

View File

@@ -1,33 +0,0 @@
/*
* The per-CPU TranslationBlock jump cache.
*
* Copyright (c) 2003 Fabrice Bellard
*
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef ACCEL_TCG_TB_JMP_CACHE_H
#define ACCEL_TCG_TB_JMP_CACHE_H
#include "qemu/rcu.h"
#include "exec/cpu-common.h"
#define TB_JMP_CACHE_BITS 12
#define TB_JMP_CACHE_SIZE (1 << TB_JMP_CACHE_BITS)
/*
* Invalidated in parallel; all accesses to 'tb' must be atomic.
* A valid entry is read/written by a single CPU, therefore there is
* no need for qatomic_rcu_read() and pc is always consistent with a
* non-NULL value of 'tb'. Strictly speaking pc is only needed for
* CF_PCREL, but it's used always for simplicity.
*/
typedef struct CPUJumpCache {
struct rcu_head rcu;
struct {
TranslationBlock *tb;
vaddr pc;
} array[TB_JMP_CACHE_SIZE];
} CPUJumpCache;
#endif /* ACCEL_TCG_TB_JMP_CACHE_H */
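
The struct comment above states that entries are invalidated in parallel, so 'tb' must be read atomically, while 'pc' is only meaningful alongside a non-NULL 'tb'. A hypothetical lookup against that layout could look like this; jc_entry, jc_lookup and the C11 atomics are stand-ins for QEMU's qatomic helpers, not the real code:

#include <stdatomic.h>
#include <stdint.h>

typedef uint64_t vaddr;                      /* stand-in for QEMU's vaddr */
typedef struct TranslationBlock TranslationBlock;

#define TB_JMP_CACHE_BITS 12
#define TB_JMP_CACHE_SIZE (1 << TB_JMP_CACHE_BITS)

struct jc_entry {
    _Atomic(TranslationBlock *) tb;          /* cleared concurrently */
    vaddr pc;                                /* valid only with non-NULL tb */
};

static struct jc_entry jmp_cache[TB_JMP_CACHE_SIZE];

static TranslationBlock *jc_lookup(unsigned int hash, vaddr pc)
{
    TranslationBlock *tb = atomic_load_explicit(&jmp_cache[hash].tb,
                                                memory_order_acquire);
    /* A hit needs both a live tb and a matching pc; stale entries just miss. */
    if (tb && jmp_cache[hash].pc == pc) {
        return tb;
    }
    return NULL;
}

int main(void)
{
    /* Empty cache: every lookup misses and falls back to the slow path. */
    return jc_lookup(0x123, 0x40001000) == NULL ? 0 : 1;
}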

File diff suppressed because it is too large.

View File

@@ -24,11 +24,12 @@
*/
#include "qemu/osdep.h"
#include "system/replay.h"
#include "system/cpu-timers.h"
#include "qemu-common.h"
#include "sysemu/tcg.h"
#include "sysemu/replay.h"
#include "qemu/main-loop.h"
#include "qemu/guest-random.h"
#include "hw/core/cpu.h"
#include "exec/exec-all.h"
#include "tcg-accel-ops.h"
#include "tcg-accel-ops-icount.h"
@@ -84,25 +85,13 @@ void icount_handle_deadline(void)
* Don't interrupt cpu thread, when these events are waiting
* (i.e., there is no checkpoint)
*/
if (deadline == 0) {
if (deadline == 0
&& (replay_mode != REPLAY_MODE_PLAY || replay_has_checkpoint())) {
icount_notify_aio_contexts();
}
}
/* Distribute the budget evenly across all CPUs */
int64_t icount_percpu_budget(int cpu_count)
{
int64_t limit = icount_get_limit();
int64_t timeslice = limit / cpu_count;
if (timeslice == 0) {
timeslice = limit;
}
return timeslice;
}
void icount_prepare_for_run(CPUState *cpu, int64_t cpu_budget)
void icount_prepare_for_run(CPUState *cpu)
{
int insns_left;
@@ -111,24 +100,18 @@ void icount_prepare_for_run(CPUState *cpu, int64_t cpu_budget)
* each vCPU execution. However u16.high can be raised
* asynchronously by cpu_exit/cpu_interrupt/tcg_handle_interrupt
*/
g_assert(cpu->neg.icount_decr.u16.low == 0);
g_assert(cpu_neg(cpu)->icount_decr.u16.low == 0);
g_assert(cpu->icount_extra == 0);
cpu->icount_budget = icount_get_limit();
insns_left = MIN(0xffff, cpu->icount_budget);
cpu_neg(cpu)->icount_decr.u16.low = insns_left;
cpu->icount_extra = cpu->icount_budget - insns_left;
replay_mutex_lock();
cpu->icount_budget = MIN(icount_get_limit(), cpu_budget);
insns_left = MIN(0xffff, cpu->icount_budget);
cpu->neg.icount_decr.u16.low = insns_left;
cpu->icount_extra = cpu->icount_budget - insns_left;
if (cpu->icount_budget == 0) {
/*
* We're called without the BQL, so must take it while
* we're calling timer handlers.
*/
bql_lock();
if (cpu->icount_budget == 0 && replay_has_checkpoint()) {
icount_notify_aio_contexts();
bql_unlock();
}
}
@@ -138,7 +121,7 @@ void icount_process_data(CPUState *cpu)
icount_update(cpu);
/* Reset the counters */
cpu->neg.icount_decr.u16.low = 0;
cpu_neg(cpu)->icount_decr.u16.low = 0;
cpu->icount_extra = 0;
cpu->icount_budget = 0;
@@ -153,7 +136,7 @@ void icount_handle_interrupt(CPUState *cpu, int mask)
tcg_handle_interrupt(cpu, mask);
if (qemu_cpu_is_self(cpu) &&
!cpu->neg.can_do_io
!cpu->can_do_io
&& (mask & ~old_mask) != 0) {
cpu_abort(cpu, "Raised interrupt while not in I/O function");
}
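
The icount hunks above revolve around how the per-run instruction budget is split: icount_percpu_budget divides the global limit evenly across vCPUs (falling back to the full limit when the division truncates to zero), and icount_prepare_for_run then splits each vCPU's share into the 16-bit icount_decr.u16.low decrementer plus an icount_extra remainder. A standalone toy of that arithmetic, not the QEMU functions themselves:

#include <stdint.h>
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

static int64_t percpu_budget(int64_t limit, int cpu_count)
{
    int64_t timeslice = limit / cpu_count;
    /* If the limit is smaller than the CPU count, hand out the full
     * limit rather than a zero slice. */
    return timeslice ? timeslice : limit;
}

int main(void)
{
    int64_t budget = percpu_budget(1000000, 4);    /* 250000 insns per vCPU */
    int64_t low    = MIN((int64_t)0xffff, budget); /* fits the u16 decrementer */
    int64_t extra  = budget - low;                 /* refilled as 'low' drains */

    printf("budget=%lld low=%lld extra=%lld\n",
           (long long)budget, (long long)low, (long long)extra);
    return 0;
}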

View File

@@ -7,14 +7,13 @@
* See the COPYING file in the top-level directory.
*/
#ifndef TCG_ACCEL_OPS_ICOUNT_H
#define TCG_ACCEL_OPS_ICOUNT_H
#ifndef TCG_CPUS_ICOUNT_H
#define TCG_CPUS_ICOUNT_H
void icount_handle_deadline(void);
void icount_prepare_for_run(CPUState *cpu, int64_t cpu_budget);
int64_t icount_percpu_budget(int cpu_count);
void icount_prepare_for_run(CPUState *cpu);
void icount_process_data(CPUState *cpu);
void icount_handle_interrupt(CPUState *cpu, int mask);
#endif /* TCG_ACCEL_OPS_ICOUNT_H */
#endif /* TCG_CPUS_ICOUNT_H */

View File

@@ -24,14 +24,15 @@
*/
#include "qemu/osdep.h"
#include "system/tcg.h"
#include "system/replay.h"
#include "system/cpu-timers.h"
#include "qemu-common.h"
#include "sysemu/tcg.h"
#include "sysemu/replay.h"
#include "qemu/main-loop.h"
#include "qemu/notify.h"
#include "qemu/guest-random.h"
#include "exec/exec-all.h"
#include "hw/boards.h"
#include "tcg/startup.h"
#include "tcg-accel-ops.h"
#include "tcg-accel-ops-mttcg.h"
@@ -75,11 +76,11 @@ static void *mttcg_cpu_thread_fn(void *arg)
rcu_add_force_rcu_notifier(&force_rcu.notifier);
tcg_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->neg.can_do_io = true;
cpu->can_do_io = 1;
current_cpu = cpu;
cpu_thread_signal_created(cpu);
qemu_guest_random_seed_thread_part2(cpu->random_seed);
@@ -90,35 +91,40 @@ static void *mttcg_cpu_thread_fn(void *arg)
do {
if (cpu_can_run(cpu)) {
int r;
bql_unlock();
r = tcg_cpu_exec(cpu);
bql_lock();
qemu_mutex_unlock_iothread();
r = tcg_cpus_exec(cpu);
qemu_mutex_lock_iothread();
switch (r) {
case EXCP_DEBUG:
cpu_handle_guest_debug(cpu);
break;
case EXCP_HALTED:
/*
* Usually cpu->halted is set, but may have already been
* reset by another thread by the time we arrive here.
* during start-up the vCPU is reset and the thread is
* kicked several times. If we don't ensure we go back
* to sleep in the halted state we won't cleanly
* start-up when the vCPU is enabled.
*
* cpu->halted should ensure we sleep in wait_io_event
*/
g_assert(cpu->halted);
break;
case EXCP_ATOMIC:
bql_unlock();
qemu_mutex_unlock_iothread();
cpu_exec_step_atomic(cpu);
bql_lock();
qemu_mutex_lock_iothread();
default:
/* Ignore everything else? */
break;
}
}
qatomic_set_mb(&cpu->exit_request, 0);
qatomic_mb_set(&cpu->exit_request, 0);
qemu_wait_io_event(cpu);
} while (!cpu->unplug || cpu_can_run(cpu));
tcg_cpu_destroy(cpu);
bql_unlock();
tcg_cpus_destroy(cpu);
qemu_mutex_unlock_iothread();
rcu_remove_force_rcu_notifier(&force_rcu.notifier);
rcu_unregister_thread();
return NULL;
@@ -136,10 +142,18 @@ void mttcg_start_vcpu_thread(CPUState *cpu)
g_assert(tcg_enabled());
tcg_cpu_init_cflags(cpu, current_machine->smp.max_cpus > 1);
cpu->thread = g_malloc0(sizeof(QemuThread));
cpu->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(cpu->halt_cond);
/* create a thread per vCPU with TCG (MTTCG) */
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/TCG",
cpu->cpu_index);
qemu_thread_create(cpu->thread, thread_name, mttcg_cpu_thread_fn,
cpu, QEMU_THREAD_JOINABLE);
#ifdef _WIN32
cpu->hThread = qemu_thread_get_handle(cpu->thread);
#endif
}
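
The MTTCG thread function above is the per-vCPU loop: run translated code while the vCPU can run, handle debug/halted/atomic exits, then wait for I/O events until the vCPU is unplugged. A stripped-down sketch of that one-thread-per-vCPU shape, using plain pthreads in place of qemu_thread_create; the vcpu struct and the empty run step are illustrative only:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct vcpu {
    int index;
    atomic_bool unplug;
    pthread_t thread;
};

static void *vcpu_thread_fn(void *arg)
{
    struct vcpu *cpu = arg;
    do {
        /* Run a slice of translated guest code, then service async
         * work and I/O events (left empty in this sketch). */
    } while (!atomic_load(&cpu->unplug));
    return NULL;
}

static void start_vcpu_thread(struct vcpu *cpu)
{
    char name[16];
    /* QEMU names the thread "CPU <n>/TCG"; here we only print the name. */
    snprintf(name, sizeof(name), "CPU %d/TCG", cpu->index);
    printf("starting %s\n", name);
    pthread_create(&cpu->thread, NULL, vcpu_thread_fn, cpu);
}

int main(void)
{
    struct vcpu cpus[2] = { { .index = 0 }, { .index = 1 } };

    for (int i = 0; i < 2; i++) {
        start_vcpu_thread(&cpus[i]);
    }
    for (int i = 0; i < 2; i++) {
        atomic_store(&cpus[i].unplug, true);   /* ask the thread to exit */
        pthread_join(cpus[i].thread, NULL);
    }
    return 0;
}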

View File

@@ -7,8 +7,8 @@
* See the COPYING file in the top-level directory.
*/
#ifndef TCG_ACCEL_OPS_MTTCG_H
#define TCG_ACCEL_OPS_MTTCG_H
#ifndef TCG_CPUS_MTTCG_H
#define TCG_CPUS_MTTCG_H
/* kick MTTCG vCPU thread */
void mttcg_kick_vcpu_thread(CPUState *cpu);
@@ -16,4 +16,4 @@ void mttcg_kick_vcpu_thread(CPUState *cpu);
/* start an mttcg vCPU thread */
void mttcg_start_vcpu_thread(CPUState *cpu);
#endif /* TCG_ACCEL_OPS_MTTCG_H */
#endif /* TCG_CPUS_MTTCG_H */

View File

@@ -24,15 +24,14 @@
*/
#include "qemu/osdep.h"
#include "qemu/lockable.h"
#include "system/tcg.h"
#include "system/replay.h"
#include "system/cpu-timers.h"
#include "qemu-common.h"
#include "sysemu/tcg.h"
#include "sysemu/replay.h"
#include "qemu/main-loop.h"
#include "qemu/notify.h"
#include "qemu/guest-random.h"
#include "exec/cpu-common.h"
#include "tcg/startup.h"
#include "exec/exec-all.h"
#include "tcg-accel-ops.h"
#include "tcg-accel-ops-rr.h"
#include "tcg-accel-ops-icount.h"
@@ -52,7 +51,7 @@ void rr_kick_vcpu_thread(CPUState *unused)
*
* The kick timer is responsible for moving single threaded vCPU
* emulation on to the next vCPU. If more than one vCPU is running a
* timer event we force a cpu->exit so the next vCPU can get
* timer event with force a cpu->exit so the next vCPU can get
* scheduled.
*
* The timer is removed if all vCPUs are idle and restarted again once
@@ -62,6 +61,8 @@ void rr_kick_vcpu_thread(CPUState *unused)
static QEMUTimer *rr_kick_vcpu_timer;
static CPUState *rr_current_cpu;
#define TCG_KICK_PERIOD (NANOSECONDS_PER_SECOND / 10)
static inline int64_t rr_next_kick_time(void)
{
return qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + TCG_KICK_PERIOD;
@@ -72,13 +73,11 @@ static void rr_kick_next_cpu(void)
{
CPUState *cpu;
do {
cpu = qatomic_read(&rr_current_cpu);
cpu = qatomic_mb_read(&rr_current_cpu);
if (cpu) {
cpu_exit(cpu);
}
/* Finish kicking this cpu before reading again. */
smp_mb();
} while (cpu != qatomic_read(&rr_current_cpu));
} while (cpu != qatomic_mb_read(&rr_current_cpu));
}
static void rr_kick_thread(void *opaque)
@@ -111,7 +110,7 @@ static void rr_wait_io_event(void)
while (all_cpu_threads_idle()) {
rr_stop_kick_timer();
qemu_cond_wait_bql(first_cpu->halt_cond);
qemu_cond_wait_iothread(first_cpu->halt_cond);
}
rr_start_kick_timer();
@@ -131,7 +130,7 @@ static void rr_deal_with_unplugged_cpus(void)
CPU_FOREACH(cpu) {
if (cpu->unplug && !cpu_can_run(cpu)) {
tcg_cpu_destroy(cpu);
tcg_cpus_destroy(cpu);
break;
}
}
@@ -142,33 +141,6 @@ static void rr_force_rcu(Notifier *notify, void *data)
rr_kick_next_cpu();
}
/*
* Calculate the number of CPUs that we will process in a single iteration of
* the main CPU thread loop so that we can fairly distribute the instruction
* count across CPUs.
*
* The CPU count is cached based on the CPU list generation ID to avoid
* iterating the list every time.
*/
static int rr_cpu_count(void)
{
static unsigned int last_gen_id = ~0;
static int cpu_count;
CPUState *cpu;
QEMU_LOCK_GUARD(&qemu_cpu_list_lock);
if (cpu_list_generation_id_get() != last_gen_id) {
cpu_count = 0;
CPU_FOREACH(cpu) {
++cpu_count;
}
last_gen_id = cpu_list_generation_id_get();
}
return cpu_count;
}
/*
* In the single-threaded case each vCPU is simulated in turn. If
* there is more than a single vCPU we create a simple timer to kick
@@ -188,17 +160,17 @@ static void *rr_cpu_thread_fn(void *arg)
rcu_add_force_rcu_notifier(&force_rcu);
tcg_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->neg.can_do_io = true;
cpu->can_do_io = 1;
cpu_thread_signal_created(cpu);
qemu_guest_random_seed_thread_part2(cpu->random_seed);
/* wait for initial kick-off after machine start */
while (first_cpu->stopped) {
qemu_cond_wait_bql(first_cpu->halt_cond);
qemu_cond_wait_iothread(first_cpu->halt_cond);
/* process any pending work */
CPU_FOREACH(cpu) {
@@ -215,16 +187,11 @@ static void *rr_cpu_thread_fn(void *arg)
cpu->exit_request = 1;
while (1) {
/* Only used for icount_enabled() */
int64_t cpu_budget = 0;
bql_unlock();
qemu_mutex_unlock_iothread();
replay_mutex_lock();
bql_lock();
qemu_mutex_lock_iothread();
if (icount_enabled()) {
int cpu_count = rr_cpu_count();
/* Account partial waits to QEMU_CLOCK_VIRTUAL. */
icount_account_warp_timer();
/*
@@ -232,8 +199,6 @@ static void *rr_cpu_thread_fn(void *arg)
* waking up the I/O thread and waiting for completion.
*/
icount_handle_deadline();
cpu_budget = icount_percpu_budget(cpu_count);
}
replay_mutex_unlock();
@@ -243,9 +208,8 @@ static void *rr_cpu_thread_fn(void *arg)
}
while (cpu && cpu_work_list_empty(cpu) && !cpu->exit_request) {
/* Store rr_current_cpu before evaluating cpu_can_run(). */
qatomic_set_mb(&rr_current_cpu, cpu);
qatomic_mb_set(&rr_current_cpu, cpu);
current_cpu = cpu;
qemu_clock_enable(QEMU_CLOCK_VIRTUAL,
@@ -254,23 +218,23 @@ static void *rr_cpu_thread_fn(void *arg)
if (cpu_can_run(cpu)) {
int r;
bql_unlock();
qemu_mutex_unlock_iothread();
if (icount_enabled()) {
icount_prepare_for_run(cpu, cpu_budget);
icount_prepare_for_run(cpu);
}
r = tcg_cpu_exec(cpu);
r = tcg_cpus_exec(cpu);
if (icount_enabled()) {
icount_process_data(cpu);
}
bql_lock();
qemu_mutex_lock_iothread();
if (r == EXCP_DEBUG) {
cpu_handle_guest_debug(cpu);
break;
} else if (r == EXCP_ATOMIC) {
bql_unlock();
qemu_mutex_unlock_iothread();
cpu_exec_step_atomic(cpu);
bql_lock();
qemu_mutex_lock_iothread();
break;
}
} else if (cpu->stop) {
@@ -283,11 +247,11 @@ static void *rr_cpu_thread_fn(void *arg)
cpu = CPU_NEXT(cpu);
} /* while (cpu && !cpu->exit_request).. */
/* Does not need a memory barrier because a spurious wakeup is okay. */
/* Does not need qatomic_mb_set because a spurious wakeup is okay. */
qatomic_set(&rr_current_cpu, NULL);
if (cpu && cpu->exit_request) {
qatomic_set_mb(&cpu->exit_request, 0);
qatomic_mb_set(&cpu->exit_request, 0);
}
if (icount_enabled() && all_cpu_threads_idle()) {
@@ -302,7 +266,9 @@ static void *rr_cpu_thread_fn(void *arg)
rr_deal_with_unplugged_cpus();
}
g_assert_not_reached();
rcu_remove_force_rcu_notifier(&force_rcu);
rcu_unregister_thread();
return NULL;
}
void rr_start_vcpu_thread(CPUState *cpu)
@@ -315,25 +281,27 @@ void rr_start_vcpu_thread(CPUState *cpu)
tcg_cpu_init_cflags(cpu, false);
if (!single_tcg_cpu_thread) {
single_tcg_halt_cond = cpu->halt_cond;
single_tcg_cpu_thread = cpu->thread;
cpu->thread = g_malloc0(sizeof(QemuThread));
cpu->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(cpu->halt_cond);
/* share a single thread for all cpus with TCG */
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "ALL CPUs/TCG");
qemu_thread_create(cpu->thread, thread_name,
rr_cpu_thread_fn,
cpu, QEMU_THREAD_JOINABLE);
single_tcg_halt_cond = cpu->halt_cond;
single_tcg_cpu_thread = cpu->thread;
#ifdef _WIN32
cpu->hThread = qemu_thread_get_handle(cpu->thread);
#endif
} else {
/* we share the thread, dump spare data */
g_free(cpu->thread);
qemu_cond_destroy(cpu->halt_cond);
g_free(cpu->halt_cond);
/* we share the thread */
cpu->thread = single_tcg_cpu_thread;
cpu->halt_cond = single_tcg_halt_cond;
/* copy the stuff done at start of rr_cpu_thread_fn */
cpu->thread_id = first_cpu->thread_id;
cpu->neg.can_do_io = 1;
cpu->can_do_io = 1;
cpu->created = true;
}
}
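
rr_cpu_count, removed in the hunk above, caches the CPU count keyed on the CPU list generation ID so the list is only re-walked after hotplug or unplug. The same pattern in miniature; the lock, generation counter and list length below are stand-ins for qemu_cpu_list_lock, cpu_list_generation_id_get() and a CPU_FOREACH walk:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t cpu_list_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int cpu_list_generation;    /* bumped on hotplug/unplug */
static int cpu_list_len;                    /* current number of vCPUs */

static int cached_cpu_count(void)
{
    static unsigned int last_gen_id = ~0u;
    static int cpu_count;

    pthread_mutex_lock(&cpu_list_lock);
    if (cpu_list_generation != last_gen_id) {
        /* List changed since the last call: re-read it
         * (QEMU walks CPU_FOREACH here). */
        cpu_count = cpu_list_len;
        last_gen_id = cpu_list_generation;
    }
    pthread_mutex_unlock(&cpu_list_lock);
    return cpu_count;
}

int main(void)
{
    cpu_list_len = 4;
    printf("%d\n", cached_cpu_count());   /* recounts: 4 */
    cpu_list_len = 8;                     /* change without bumping the id */
    printf("%d\n", cached_cpu_count());   /* still 4: cache hit */
    cpu_list_generation++;                /* hotplug bumps the generation */
    printf("%d\n", cached_cpu_count());   /* recounts: 8 */
    return 0;
}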

Some files were not shown because too many files have changed in this diff.