Compare commits


74 Commits

Author SHA1 Message Date
Xiaoyao Li
d20f93da31 docs: Add TDX documentation
Add docs/system/i386/tdx.rst for TDX support, and add tdx to
confidential-guest-support.rst

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>

---
Changes since v1:
 - Add prerequisite of private gmem;
 - update example command to launch TD;

Changes since RFC v4:
 - add the restriction that kernel-irqchip must be split
2023-11-15 00:57:10 -05:00
Sean Christopherson
18f152401f i386/tdx: Don't get/put guest state for TDX VMs
Don't get/put state of TDX VMs since accessing/mutating guest state of
production TDs is not supported.

Note, it will be allowed for a debug TD; corresponding support will be
added when debug TD support is implemented in the future.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
5c8d76a6e4 i386/tdx: Skip kvm_put_apicbase() for TDs
KVM doesn't allow writing to MSR_IA32_APICBASE for TDs.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
f226ab5a34 i386/tdx: Only configure MSR_IA32_UCODE_REV in kvm_init_msrs() for TDs
For TDs, MSR_IA32_UCODE_REV is the only MSR in kvm_init_msrs() that the
VMM can configure; the features enumerated/controlled by the other MSRs
in kvm_init_msrs() are not under VMM control.

Only configure MSR_IA32_UCODE_REV for TDs.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Isaku Yamahata
e954cd4b22 i386/tdx: Don't synchronize guest tsc for TDs
The TSC of TDs is not accessible and KVM doesn't allow access to
MSR_IA32_TSC for TDs. To avoid the assert() in kvm_get_tsc, make
kvm_synchronize_all_tsc() a no-op for TDs.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Connor Kuehl <ckuehl@redhat.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Isaku Yamahata
2dbde0df24 hw/i386: add option to forcibly report edge trigger in acpi tables
When level trigger isn't supported on the x86 platform,
forcibly report edge trigger in the ACPI tables.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
9faa2b03a9 hw/i386: add eoi_intercept_unsupported member to X86MachineState
Add a new bool member, eoi_intercept_unsupported, to X86MachineState
with default value false. Set it to true for TDX VMs.

The inability to intercept EOI makes it impossible to re-inject a
level-triggered interrupt while the level is still held active, which
affects interrupt controller emulation.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
d04ea558aa i386/tdx: LMCE is not supported for TDX
LMCE is not supported for TDX since KVM doesn't provide emulation of
MSR_IA32_FEAT_CTL.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
378e61b145 i386/tdx: Don't allow system reset for TDX VMs
TDX CPU state is protected and thus vcpu state can't be reset by the VMM.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
fdd53e1a81 i386/tdx: Disable PIC for TDX VMs
The legacy PIC (8259) cannot be supported for TDX VMs since the TDX
module doesn't allow direct interrupt injection.  Using posted
interrupts for the PIC is not a viable option as the guest BIOS/kernel
will not do EOI for PIC IRQs, i.e. will leave the vIRR bit set.

Hence disable the PIC for TDX VMs and error out if the user asks for one.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
e1ce29aa33 i386/tdx: Disable SMM for TDX VMs
TDX doesn't support SMM and the VMM cannot emulate SMM for TDX VMs
because the VMM cannot manipulate TDX VM's memory.

Disable SMM for TDX VMs and error out if the user requests to enable SMM.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Isaku Yamahata
ffe9a6dccb q35: Introduce smm_ranges property for q35-pci-host
Add a q35 property to check whether or not SMM ranges, e.g. SMRAM, TSEG,
etc... exist for the target platform.  TDX doesn't support SMM and doesn't
play nice with QEMU modifying related guest memory ranges.

Signed-off-by: Isaku Yamahata <isaku.yamahata@linux.intel.com>
Co-developed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Isaku Yamahata
7163da7797 pci-host/q35: Move PAM initialization above SMRAM initialization
In mch_realize(), process PAM initialization before SMRAM initialization so
that a later patch can skip all the SMRAM-related code with a single check.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
a5368c6202 i386/tdx: Wire TDX_REPORT_FATAL_ERROR with GuestPanic facility
Integrate TDX's TDX_REPORT_FATAL_ERROR into the QEMU GuestPanic facility.

Originated-from: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
Changes from v2:
- Add documentation of the new type and struct (Daniel)
- refine the error message handling (Daniel)
2023-11-15 00:57:10 -05:00
Xiaoyao Li
f597afe309 i386/tdx: Handle TDG.VP.VMCALL<REPORT_FATAL_ERROR>
A TD guest can use TDG.VP.VMCALL<REPORT_FATAL_ERROR> to request termination
with an error message encoded in GPRs.

Parse and print the error message, and terminate the TD guest in the
handler.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Isaku Yamahata
a23e29229e i386/tdx: Limit the range size for MapGPA
If the range for TDG.VP.VMCALL<MapGPA> is too large, process only a
limited size and return a retry error.  It's bad for the VMM to block
vcpu execution for too long, e.g. on the order of seconds; that results
in too many missed timer interrupts.
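
A minimal, self-contained sketch of the idea; the 1GB per-call cap, the
status values, and the function names here are illustrative assumptions,
not the series' actual code:

  #include <stdint.h>
  #include <stdio.h>

  #define MAP_GPA_MAX_BYTES (1ULL << 30)  /* assumed per-call cap: 1GB  */
  #define VMCALL_SUCCESS    0
  #define VMCALL_RETRY      1             /* hypothetical retry status  */

  /* Process at most MAP_GPA_MAX_BYTES; ask the guest to retry the rest. */
  static int handle_map_gpa(uint64_t gpa, uint64_t size, uint64_t *retry_gpa)
  {
      uint64_t len = size > MAP_GPA_MAX_BYTES ? MAP_GPA_MAX_BYTES : size;

      /* ... convert [gpa, gpa + len) here ... */

      if (len < size) {
          *retry_gpa = gpa + len;   /* guest resumes the call from here */
          return VMCALL_RETRY;
      }
      return VMCALL_SUCCESS;
  }

  int main(void)
  {
      uint64_t resume = 0;
      int ret = handle_map_gpa(0x100000000ULL, 3ULL << 30, &resume);
      printf("status=%d resume=0x%llx\n", ret, (unsigned long long)resume);
      return 0;
  }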

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Isaku Yamahata
addb6b676d i386/tdx: handle TDG.VP.VMCALL<MapGPA> hypercall
MapGPA is a hypercall to convert a GPA between private and shared.
As the conversion function is already implemented as kvm_convert_memory,
wire it to TDX hypercall exit.
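
As a rough illustration of what the wiring does, assuming a 48-bit guest
physical address width so the shared bit is GPA bit 47 (in reality the
bit position presumably depends on the configured GPA width):

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define TDX_SHARED_BIT (1ULL << 47)   /* assumed: GPAW of 48 */

  int main(void)
  {
      uint64_t gpa = 0xfee00000ULL | TDX_SHARED_BIT;  /* example request */
      bool to_private = !(gpa & TDX_SHARED_BIT);
      uint64_t raw_gpa = gpa & ~TDX_SHARED_BIT;       /* strip shared bit */

      /* The handler would now call kvm_convert_memory(raw_gpa, size,
       * to_private) and report success back to the guest. */
      printf("convert 0x%llx to %s\n", (unsigned long long)raw_gpa,
             to_private ? "private" : "shared");
      return 0;
  }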

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Chenyi Qiang
824f58fdba i386/tdx: setup a timer for the qio channel
To avoid waiting forever on the QGS server, set up a timer for the
transaction. On timeout, treat it as an error and interrupt the guest.
Define the threshold as 30s for now; it may be changed later if that
proves inappropriate.

Extract the common cleanup code to make it clearer.

Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
Changes in v3:
 - Use t->timer_armed to track if t->timer is initialized;
2023-11-15 00:57:10 -05:00
Isaku Yamahata
9955e91601 i386/tdx: handle TDG.VP.VMCALL<GetQuote>
For GetQuote, delegate the request to the Quote Generation Service.
Add a property "quote-generation-socket" to tdx-guest, which is a
property of type SocketAddress to specify the Quote Generation
Service (QGS).

On request, connect to the QGS, read the request buffer from shared
guest memory, send it to the server, store the response into shared
guest memory, and notify the TD guest by interrupt.

command line example:
  qemu-system-x86_64 \
    -object '{"qom-type":"tdx-guest","id":"tdx0","quote-generation-socket":{"type": "vsock", "cid":"2","port":"1234"}}' \
    -machine confidential-guest-support=tdx0

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Codeveloped-by: Chenyi Qiang <chenyi.qiang@intel.com>
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
Changes in v3:
- rename property "quote-generation-service" to "quote-generation-socket";
- change the type of "quote-generation-socket" from str to
  SocketAddress;
- squash next patch into this one;
2023-11-15 00:57:10 -05:00
Isaku Yamahata
80417c539e i386/tdx: handle TDG.VP.VMCALL<SetupEventNotifyInterrupt>
For SetupEventNotifyInterrupt, record the interrupt vector and the APIC
ID of the vcpu that received this TDVMCALL.

Later, an interrupt with the given vector can be injected to the
specific vcpu that received SetupEventNotifyInterrupt.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
511b09205f i386/tdx: Finalize TDX VM
Invoke KVM_TDX_FINALIZE_VM to finalize the TD's measurement and make
the TD vCPUs runnable once machine initialization is complete.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
85a8152691 i386/tdx: Call KVM_TDX_INIT_VCPU to initialize TDX vcpu
A TDX vcpu needs to be initialized via SEAMCALL(TDH.VP.INIT), and KVM
provides the vcpu-level ioctl KVM_TDX_INIT_VCPU for it.

KVM_TDX_INIT_VCPU needs the address of the HOB as input. Invoke it for
each vcpu after the HOB list is created.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Chao Peng
08c095f67a i386/tdx: register TDVF as private memory
Allocate private guest memfd memory for the BIOS if it's a TD VM.

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
d8da77238e memory: Introduce memory_region_init_ram_guest_memfd()
Introduce memory_region_init_ram_guest_memfd() to allocate a private
guest memfd at MemoryRegion initialization. It's for the use case of
TDVF, which must be private in the TDX case.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Isaku Yamahata
ab857b85a5 i386/tdx: Add TDVF memory via KVM_TDX_INIT_MEM_REGION
TDVF firmware (CODE and VARS) needs to be added/copied to TD's private
memory via KVM_TDX_INIT_MEM_REGION, as well as TD HOB and TEMP memory.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>

---
Changes in v1:
  - rename variable @metadata to @flags
2023-11-15 00:57:10 -05:00
Xiaoyao Li
517fb00637 i386/tdx: Setup the TD HOB list
The TD HOB list is used to pass the information from VMM to TDVF. The TD
HOB must include PHIT HOB and Resource Descriptor HOB. More details can
be found in TDVF specification and PI specification.

Build the TD HOB in TDX's machine_init_done callback.

Co-developed-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Co-developed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>

---
Changes in v1:
  - drop the code of adding mmio resources since OVMF prepares all the
    MMIO hob itself.
2023-11-15 00:57:10 -05:00
Xiaoyao Li
359be44b10 headers: Add definitions from UEFI spec for volumes, resources, etc...
Add UEFI definitions for literals, enums, structs, GUIDs, etc... that
will be used by TDX to build the UEFI Hand-Off Block (HOB) that is passed
to the Trusted Domain Virtual Firmware (TDVF).

All values come from the UEFI specification [1], the PI spec [2] and the
TDVF design guide [3].

[1] UEFI Specification v2.10 https://uefi.org/sites/default/files/resources/UEFI_Spec_2_10_Aug29.pdf
[2] UEFI PI spec v1.8 https://uefi.org/sites/default/files/resources/UEFI_PI_Spec_1_8_March3.pdf
[3] https://software.intel.com/content/dam/develop/external/us/en/documents/tdx-virtual-firmware-design-guide-rev-1.pdf

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
0222b6f05d i386/tdx: Track RAM entries for TDX VM
The RAM of TDX VM can be classified into two types:

 - TDX_RAM_UNACCEPTED: default type of TDX memory, which needs to be
   accepted by TDX guest before it can be used and will be all-zeros
   after being accepted.

 - TDX_RAM_ADDED: the RAM that is ADD'ed to the TD guest before running,
   and can be used directly. E.g., the TD HOB and TEMP MEM needed by TDVF.

Maintain TdxRamEntries[], which grabs the initial RAM info from the e820
table and marks each RAM range with the default type TDX_RAM_UNACCEPTED.

Then turn the ranges of the TD HOB and TEMP MEM to TDX_RAM_ADDED, since
these ranges will be ADD'ed before the TD runs and don't need to be
accepted at runtime.

The TdxRamEntries[] are later used to set up the memory TD resource HOB
that passes memory info from QEMU to TDVF.
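
A self-contained sketch of the three-step split described in the v1
changelog below; the struct follows the commit message, while the helper
and the values are illustrative (it also assumes the accepted range fits
inside the entry, and simplifies to a single entry instead of scanning
the whole TdxRamEntries[] array):

  #include <stdint.h>
  #include <stdio.h>

  enum TdxRamType { TDX_RAM_UNACCEPTED, TDX_RAM_ADDED };

  struct TdxRamEntry { uint64_t address, length; enum TdxRamType type; };

  static int split_entry(struct TdxRamEntry e, uint64_t addr, uint64_t len,
                         struct TdxRamEntry out[3])
  {
      int n = 0;
      if (addr > e.address) {       /* room before the accepted range */
          out[n++] = (struct TdxRamEntry){ e.address, addr - e.address,
                                           TDX_RAM_UNACCEPTED };
      }
      out[n++] = (struct TdxRamEntry){ addr, len, TDX_RAM_ADDED };
      if (addr + len < e.address + e.length) {   /* room after */
          out[n++] = (struct TdxRamEntry){ addr + len,
                                           e.address + e.length - (addr + len),
                                           TDX_RAM_UNACCEPTED };
      }
      return n;
  }

  int main(void)
  {
      struct TdxRamEntry e = { 0x0, 0x80000000, TDX_RAM_UNACCEPTED }, out[3];
      int n = split_entry(e, 0x800000, 0x100000, out); /* accept 1MB at 8MB */
      for (int i = 0; i < n; i++) {
          printf("0x%llx +0x%llx type=%d\n",
                 (unsigned long long)out[i].address,
                 (unsigned long long)out[i].length, out[i].type);
      }
      return 0;
  }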

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
---
Changes in v3:
- use enum TdxRamType in struct TdxRamEntry; (Isaku)
- Fix the indentation; (Daniel)

Changes in v1:
  - simplify the algorithm of tdx_accept_ram_range() (Suggested-by: Gerd Hoffman)
    (1) Change the existing entry to cover the accepted ram range.
    (2) If there is room before the accepted ram range add a
	TDX_RAM_UNACCEPTED entry for that.
    (3) If there is room after the accepted ram range add a
	TDX_RAM_UNACCEPTED entry for that.
2023-11-15 00:57:10 -05:00
Xiaoyao Li
8b7763f7ca i386/tdx: Track mem_ptr for each firmware entry of TDVF
For each TDVF section, QEMU needs to copy the content to guest
private memory via the KVM API (KVM_TDX_INIT_MEM_REGION).

Introduce a field @mem_ptr in TdxFirmwareEntry to track the memory
pointer of each TDVF section, so that QEMU can add/copy them to guest
private memory later.

TDVF sections can be classified into two groups:
 - The firmware itself, e.g., TDVF BFV and CFV, which is located
   separately from guest RAM. Its memory pointer is the bios pointer.

 - Sections located in guest RAM, e.g., TEMP_MEM and TD_HOB.
   mmap a new memory range for them.

Register a machine_init_done callback to do this work.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
33edcc8426 i386/tdx: Don't initialize pc.rom for TDX VMs
For TDX, the addresses below 1MB are entirely general RAM. No need to
initialize the pc.rom memory region for TDs.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
This is more of a workaround for the issue that, for the q35 machine
type, the real memslot update for pc.rom (which requires memslot
deletion) happens after tdx_init_memory_region. It causes the private
memory ADD'ed before to get lost. I haven't worked out a good solution
to the ordering issue, so just skip the pc.rom setup to avoid memslot
deletion.
2023-11-15 00:57:10 -05:00
Xiaoyao Li
19195c88df i386/tdx: Skip BIOS shadowing setup
TDX doesn't support mapping different GPAs to the same private memory.
Thus, aliasing the top 128KB of the BIOS as isa-bios is not supported.

On the other hand, a TDX guest cannot enter real mode, so it works fine
without isa-bios.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
---
Changes in v1:
 - update commit message and comment to clarify
2023-11-15 00:57:10 -05:00
Xiaoyao Li
954b9e7a42 i386/tdx: Parse TDVF metadata for TDX VM
TDX cannot support the pflash device since it supports neither read-only
memslots nor emulation. Load TDVF (OVMF) with the -bios option for TDs.

When booting a TD, besides loading TDVF at the address below 4G, QEMU
needs to parse the TDVF metadata.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Isaku Yamahata
5aad9329d2 i386/tdvf: Introduce function to parse TDVF metadata
A TDX VM needs to boot with its specialized firmware, the Trusted
Domain Virtual Firmware (TDVF). QEMU needs to parse TDVF and map it into
TD guest memory prior to running the TDX VM.

The TDVF metadata in the TDVF image describes the structure of the
firmware; QEMU refers to it to set up memory for TDVF. Introduce the
function tdvf_parse_metadata() to parse the metadata from the TDVF image
and store the info of each TDVF section.

The TDX metadata is located by a TDX metadata offset block, which is a
GUID-ed structure. The data portion of the GUID structure contains
only a 4-byte field that is the offset of the TDX metadata from the end
of the firmware file.
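
In other words, once the 4-byte value has been read out of the GUID-ed
block, locating the metadata is just the following (a toy example with
made-up numbers):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint64_t file_size = 4ULL * 1024 * 1024;  /* e.g. a 4MB TDVF image */
      uint32_t offset_from_end = 0x1000;  /* value read from the block   */

      /* The metadata sits that many bytes before the end of the file. */
      uint64_t metadata_pos = file_size - offset_from_end;

      printf("TDVF metadata at file offset 0x%llx\n",
             (unsigned long long)metadata_pos);
      return 0;
  }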

Select X86_FW_OVMF when TDX is enabled to leverage the existing
functions that parse and search OVMF's GUID-ed structures.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>

---
Changes in v1:
 - rename tdvf_parse_section_entry() to
   tdvf_parse_and_check_section_entry()
Changes in RFC v4:
 - rename TDX_METADATA_GUID to TDX_METADATA_OFFSET_GUID
2023-11-15 00:57:10 -05:00
Isaku Yamahata
583aae3302 kvm/tdx: Ignore memory conversion to shared of unassigned region
TDX requires vMMIO regions to be shared.  For KVM, an MMIO region is a
region that no kvm memslot is assigned to (in-kernel emulation aside).
QEMU has the memory region for vMMIO at each device level.

While OVMF issues MapGPA(to-shared) conservatively on the 32-bit PCI
MMIO region, QEMU doesn't find a corresponding vMMIO region, because
this happens before PCI device allocation and memory_region_find() finds
the device region, not the PCI bus region.  It's safe to ignore
MapGPA(to-shared) here, because when the guest accesses those regions it
uses a GPA with the shared bit set for vMMIO.  So ignore memory
conversion requests that turn an unassigned region to shared and return
success; otherwise OVMF gets confused and panics there.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Isaku Yamahata
80af3f2547 kvm/tdx: Don't complain when converting vMMIO region to shared
Because vMMIO regions need to be shared, the guest TD may explicitly
convert such regions from private to shared.  Don't complain about such
conversions.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
ef90193248 i386/tdx: Make memory type private by default
By default (due to the recent UPM change), the restricted memory
attribute is shared.  Convert the memory region from shared to private
at memory slot creation time.

Add a kvm region-registering function to check the flag and convert the
region, and add a memory listener to the TDX guest code to set the flag
on the relevant memory regions.

Without this patch
- Secure-EPT violation on the private area
- KVM_MEMORY_FAULT exit (kvm -> qemu)
- qemu converts the 4K page from shared to private
- Resume vcpu execution
- Secure-EPT violation again
- KVM resolves the EPT violation
This also prevents huge pages, because page conversion is done at 4K
granularity.  Although it's possible to merge 4K private mappings into
a 2M large page, it slows guest boot.

With this patch
- After memory slot creation, convert the region from shared to private
- Secure-EPT violation on the private area
- KVM resolves the EPT violation

Originated-from: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
769b8158f6 kvm/memory: Introduce the infrastructure to set the default shared/private value
Introduce a new flag RAM_DEFAULT_PRIVATE for RAMBlock. It's used to
indicate the default attribute, private or not.

Set the RAM range to private explicitly when its default is private.

Originated-from: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
d7db7d2ea1 i386/tdx: Set kvm_readonly_mem_enabled to false for TDX VM
TDX only supports read-only for shared memory, not for private memory.

QEMU has no idea whether a memslot is used as shared memory or private.
Thus just set kvm_readonly_mem_enabled to false for TDX VMs for
simplicity.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
004db71f60 i386/tdx: Implement user specified tsc frequency
Reuse "-cpu,tsc-frequency=" to get user wanted tsc frequency and call VM
scope VM_SET_TSC_KHZ to set the tsc frequency of TD before KVM_TDX_INIT_VM.

Besides, sanity check the tsc frequency to be in the legal range and
legal granularity (required by TDX module).
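
A self-contained sketch of such a check; the exact bounds (100 MHz to
10 GHz) and the 25 MHz granularity are assumptions drawn from the TDX
module's documented requirements, not copied from the patch:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define TDX_MIN_TSC_FREQ_KHZ    (100 * 1000)          /* assumed: 100 MHz */
  #define TDX_MAX_TSC_FREQ_KHZ    (10ULL * 1000 * 1000) /* assumed: 10 GHz  */
  #define TDX_TSC_KHZ_GRANULARITY (25 * 1000)           /* assumed: 25 MHz  */

  static bool tsc_khz_is_valid(uint64_t tsc_khz)
  {
      return tsc_khz >= TDX_MIN_TSC_FREQ_KHZ &&
             tsc_khz <= TDX_MAX_TSC_FREQ_KHZ &&
             tsc_khz % TDX_TSC_KHZ_GRANULARITY == 0;
  }

  int main(void)
  {
      printf("%d %d\n", tsc_khz_is_valid(3000000),  /* 3 GHz    -> valid   */
                        tsc_khz_is_valid(3010000)); /* 3.01 GHz -> invalid */
      return 0;
  }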

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
---
Changes in v3:
- use @errp to report error info; (Daniel)

Changes in v1:
- Use the VM-scope KVM_SET_TSC_KHZ to set the TSC frequency of the TD
  since the KVM side dropped the @tsc_khz field in struct kvm_tdx_init_vm
2023-11-15 00:57:09 -05:00
Isaku Yamahata
bbe50409ce i386/tdx: Allow mrconfigid/mrowner/mrownerconfig for TDX_INIT_VM
Three sha384 hash values, mrconfigid, mrowner and mrownerconfig, of a TD
can be provided for TDX attestation.

So far they were hard-coded as 0. Now allow the user to specify those
values via the properties mrconfigid, mrowner and mrownerconfig. They
are all in base64 format.

example:
-object tdx-guest, \
  mrconfigid=ASNFZ4mrze8BI0VniavN7wEjRWeJq83vASNFZ4mrze8BI0VniavN7wEjRWeJq83v,\
  mrowner=ASNFZ4mrze8BI0VniavN7wEjRWeJq83vASNFZ4mrze8BI0VniavN7wEjRWeJq83v,\
  mrownerconfig=ASNFZ4mrze8BI0VniavN7wEjRWeJq83vASNFZ4mrze8BI0VniavN7wEjRWeJq83v
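
For reference, a minimal sketch of the decode-and-validate step using
glib (which QEMU already depends on); the 48-byte SHA384 length check is
the only real constraint assumed here:

  #include <glib.h>
  #include <stdio.h>

  int main(void)
  {
      const char *mrconfigid =
          "ASNFZ4mrze8BI0VniavN7wEjRWeJq83vASNFZ4mrze8BI0VniavN7wEjRWeJq83v";
      gsize len = 0;
      guchar *buf = g_base64_decode(mrconfigid, &len);

      if (len != 48) {   /* SHA384 digest size */
          fprintf(stderr, "must decode to 48 bytes, got %lu\n",
                  (unsigned long)len);
          g_free(buf);
          return 1;
      }
      g_free(buf);
      printf("ok\n");
      return 0;
  }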

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
Changes in v3:
 - use base64 encoding instead of hex-string;
2023-11-15 00:57:09 -05:00
Xiaoyao Li
2ac24a3f82 i386/tdx: Validate TD attributes
Validate TD attributes against tdx_caps: fixed-0 bits must be zero and
fixed-1 bits must be set.

Besides, sanity check the attribute bits that are not yet supported by
QEMU, e.g., the debug bit; it will be allowed in the future when debug
TD support lands in QEMU.
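
A self-contained sketch of the check; the mask convention (bits clear in
attrs_fixed0 are fixed to 0, bits set in attrs_fixed1 must be 1) is an
assumption about how tdx_caps reports the masks:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  static bool tdx_attrs_valid(uint64_t attrs, uint64_t fixed0, uint64_t fixed1)
  {
      if (attrs & ~fixed0) {             /* a fixed-0 bit is set   */
          return false;
      }
      if ((attrs & fixed1) != fixed1) {  /* a fixed-1 bit is clear */
          return false;
      }
      return true;
  }

  int main(void)
  {
      /* Example: bit 28 (SEPT_VE_DISABLE) settable, bit 0 fixed to 1. */
      uint64_t fixed0 = (1ULL << 28) | 1ULL, fixed1 = 1ULL;
      printf("%d\n", tdx_attrs_valid((1ULL << 28) | 1ULL, fixed0, fixed1));
      return 0;
  }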

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
---
Changes in v3:
- using error_setg() for error report; (Daniel)
2023-11-15 00:57:09 -05:00
Xiaoyao Li
7671a8d293 i386/tdx: Wire CPU features up with attributes of TD guest
For QEMU VMs, PKS is configured via CPUID_7_0_ECX_PKS and PMU is
configured by x86cpu->enable_pmu. Reuse the existing configuration
interface for TDX VMs.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:09 -05:00
Isaku Yamahata
5bf04c14d8 i386/tdx: Make sept_ve_disable set by default
For the TDX KVM use case, a Linux guest is the major one, and it
requires sept_ve_disable to be set.  Make that the default for the main
use case.  For other use cases, it can be enabled/disabled via the qemu
command line.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
f503878704 i386/tdx: Add property sept-ve-disable for tdx-guest object
Bit 28 of the TD attributes is named SEPT_VE_DISABLE. When set to 1, it
disables the conversion of EPT violations to #VE on guest TD access of
PENDING pages.

Some guest OSes (e.g., a Linux TD guest) may require this bit to be 1
and otherwise refuse to boot.

Add the sept-ve-disable property to the tdx-guest object, for the user
to configure this bit.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
---
Changes in v3:
- update the comment of property @sept-ve-disable to make it more
  descriptive and use new format. (Daniel and Markus)
2023-11-15 00:57:09 -05:00
Xiaoyao Li
98f599ec0b i386/tdx: Initialize TDX before creating TD vcpus
Invoke KVM_TDX_INIT_VM in kvm_arch_pre_create_vcpu(); KVM_TDX_INIT_VM
configures the global TD state, e.g. the canonical CPUID config, and
must be executed prior to creating vCPUs.

Use kvm_x86_arch_cpuid() to set up the CPUID settings for the TDX VM.

Note, this doesn't address the fact that QEMU may change the CPUID
configuration when creating vCPUs, i.e. it punts on refactoring QEMU to
provide a stable CPUID config prior to kvm_arch_init().

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
---
Changes in v3:
- Pass @errp in tdx_pre_create_vcpu() and pass error info to it. (Daniel)
2023-11-15 00:57:09 -05:00
Xiaoyao Li
a1b994d89a kvm: Introduce kvm_arch_pre_create_vcpu()
Introduce kvm_arch_pre_create_vcpu() to perform arch-dependent work
prior to creating any vcpu. This is for i386 TDX because it needs to
call KVM_TDX_INIT_VM before creating any vcpu.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
---
Changes in v3:
- pass @errp to kvm_arch_pre_create_vcpu(); (Per Daniel)
2023-11-15 00:57:09 -05:00
Sean Christopherson
04fc588ea9 i386/kvm: Move architectural CPUID leaf generation to separate helper
Move the architectural (for lack of a better term) CPUID leaf generation
to a separate helper so that the generation code can be reused by TDX,
which needs to generate a canonical VM-scoped configuration.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
7e454d2ca4 i386/tdx: Integrate tdx_caps->attrs_fixed0/1 to tdx_cpuid_lookup
Some bits in the TD attributes have corresponding CPUID feature bits.
Reflect the fixed0/1 restrictions on TD attributes onto their
corresponding CPUID bits in tdx_cpuid_lookup[] as well.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
b4a0470949 i386/tdx: Integrate tdx_caps->xfam_fixed0/1 into tdx_cpuid_lookup
KVM requires userspace to pass the XFAM configuration via CPUID 0xD
leaves.

Convert tdx_caps->xfam_fixed0/1 into the corresponding
tdx_cpuid_lookup[].tdx_fixed0/1 fields of the CPUID 0xD leaves. Thus the
requirement can be applied naturally.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
5fdefc08b3 i386/tdx: Update tdx_cpuid_lookup[].tdx_fixed0/1 by tdx_caps.cpuid_config[]
tdx_cpuid_lookup[].tdx_fixed0/1 is QEMU-maintained data which reflects
TDX restrictions regarding how some CPUID bits are virtualized by TDX.

It's retrieved from the TDX spec. However, TDX may change some fixed
fields to configurable in the future. Update the
tdx_cpuid_lookup[].tdx_fixed0/1 fields by removing the bits that are
reported by the TDX module as configurable. This adapts to an updated
TDX (module) automatically.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
d54122cefb i386/tdx: Adjust the supported CPUID based on TDX restrictions
According to Chapter "CPUID Virtualization" in TDX module spec, CPUID
bits of TD can be classified into 6 types:

------------------------------------------------------------------------
1 | As configured | configurable by VMM, independent of native value;
------------------------------------------------------------------------
2 | As configured | configurable by VMM if the bit is supported natively;
    (if native)   | otherwise it equals the native value (0).
------------------------------------------------------------------------
3 | Fixed         | fixed to 0/1
------------------------------------------------------------------------
4 | Native        | reflect the native value
------------------------------------------------------------------------
5 | Calculated    | calculated by TDX module.
------------------------------------------------------------------------
6 | Inducing #VE  | get #VE exception
------------------------------------------------------------------------

Note:
1. All the configurable XFAM-related features and TD-attributes-related
   features fall into type #2, and the fixed0/1 bits of XFAM and TD
   attributes fall into type #3.

2. For CPUID leaves not listed in the "CPUID virtualization Overview"
   table in the TDX module spec, the TDX module injects #VE to TDs when
   they are queried. For this case, TDs can request CPUID emulation from
   the VMM via TDVMCALL and the values are fully controlled by the VMM.

Because the TDX module has its own virtualization policy on CPUID bits,
what is reported via KVM_GET_SUPPORTED_CPUID diverges from the CPUID
bits actually supported for TDs. In order to keep a consistent CPUID
configuration between the VMM and TDs, adjust the supported CPUID for
TDs based on the TDX restrictions.

Currently, only focus on the CPUID leaves recognized by QEMU's
feature_word_info[] that are indexed by a FeatureWord.

Introduce a TDX CPUID lookup table, which maintains one entry for each
FeatureWord. Each entry has the fields below:

 - tdx_fixed0/1: The bits that are fixed as 0/1;

 - vmm_fixup:   The bits that are configurable from the view of the TDX
                module, but require emulation by the VMM when enabled.
                They are not supported if the VMM doesn't report them as
                supported, so they need to be fixed up by checking
                whether the VMM supports them.

 - inducing_ve: The TD gets #VE when querying this CPUID leaf. The
                result is totally configurable by the VMM.

 - supported_on_ve: Valid only when @inducing_ve is true. It represents
                    the maximum feature set that can be emulated for
                    TDs.

By applying the TDX CPUID lookup table and the TDX capabilities reported
by the TDX module, the supported CPUID for TDs can be obtained with the
following steps (a toy sketch follows the list):

- get the base of the VMM-supported feature set;

- if the leaf is not a FeatureWord, just return the VMM's value without
  modification;

- if the leaf is an inducing_ve type, apply the supported_on_ve mask and
  return;

- include all native bits; this covers type #2, #4, and parts of type #1
  (it also includes some unsupported bits; the following steps correct
  that);

- apply fixed0/1 to it (this covers #3, and rectifies the previous step);

- add the configurable bits (this covers the other part of type #1);

- fix up the bits in vmm_fixup;

- filter with the valid .supported field;

(The Calculated type is ignored since it's determined at runtime.)
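
Here is that pipeline as a toy, self-contained calculation; every mask
value below is made up purely for illustration:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint32_t vmm_supported = 0x00ff;  /* base: what KVM/QEMU report    */
      uint32_t native        = 0x0f0f;  /* host native value             */
      uint32_t tdx_fixed1    = 0x0001;  /* bits TDX fixes to 1           */
      uint32_t tdx_fixed0    = 0x8008;  /* bits TDX fixes to 0           */
      uint32_t configurable  = 0x0030;  /* bits TDX reports configurable */
      uint32_t vmm_fixup     = 0x0c00;  /* need VMM emulation if set     */
      uint32_t supported     = 0x7fff;  /* the FeatureWord's .supported  */

      uint32_t v = vmm_supported;         /* base of VMM feature set     */
      v |= native;                        /* include all native bits     */
      v &= ~tdx_fixed0;                   /* apply fixed-0               */
      v |= tdx_fixed1;                    /* apply fixed-1               */
      v |= configurable;                  /* add configurable bits       */
      v &= ~(vmm_fixup & ~vmm_supported); /* drop fixup bits VMM lacks   */
      v &= supported;                     /* filter by .supported        */

      printf("supported for TDs: 0x%04x\n", v);
      return 0;
  }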

Co-developed-by: Chenyi Qiang <chenyi.qiang@intel.com>
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
ef64621235 i386/tdx: Introduce is_tdx_vm() helper and cache tdx_guest object
TDX VMs need special handling in various places all around QEMU.
Introduce the is_tdx_vm() helper to query whether it's a TDX VM.

Cache the tdx_guest object so there is no need to cast from ms->cgs
every time.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
---
changes in v3:
- replace object_dynamic_cast with TDX_GUEST();
2023-11-15 00:57:09 -05:00
Xiaoyao Li
575bfcd358 i386/tdx: Get tdx_capabilities via KVM_TDX_CAPABILITIES
KVM provides the TDX capabilities via the sub-command
KVM_TDX_CAPABILITIES of IOCTL(KVM_MEMORY_ENCRYPT_OP). Get the
capabilities when initializing the TDX context. They will be used to
validate the user's settings later.

Since there is no interface reporting how many cpuid configs are
contained in KVM_TDX_CAPABILITIES, QEMU chooses to try starting with a
known number and aborts when it exceeds KVM_MAX_CPUID_ENTRIES.

Besides, introduce the interfaces to invoke TDX "ioctls" at the
different scopes (KVM, VM and VCPU) in preparation.
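
A sketch of the grow-and-retry pattern; since the TDX ioctls are not in
upstream headers, the ioctl is mocked here and all the sizes and limits
are placeholders:

  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define KVM_MAX_CPUID_ENTRIES 256   /* placeholder for the real limit */

  /* Stand-in for ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd) with
   * cmd.id == KVM_TDX_CAPABILITIES: fails with E2BIG until the buffer
   * has room for all cpuid configs (pretend the kernel needs 11). */
  static int mock_tdx_capabilities(void *caps, int nr_cpuid_configs)
  {
      if (nr_cpuid_configs < 11) {
          errno = E2BIG;
          return -1;
      }
      return 0;
  }

  int main(void)
  {
      int nr = 6;   /* start from a known number */

      for (;;) {
          void *caps = malloc(1024 + (size_t)nr * 64); /* header + configs */
          int ret = mock_tdx_capabilities(caps, nr);
          int err = errno;
          free(caps);
          if (ret == 0) {
              break;
          }
          if (err != E2BIG || (nr *= 2) > KVM_MAX_CPUID_ENTRIES) {
              fprintf(stderr, "KVM_TDX_CAPABILITIES failed\n");
              return 1;
          }
      }
      printf("capabilities fit with %d cpuid configs\n", nr);
      return 0;
  }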

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
Changes in v3:
- rename __tdx_ioctl() to tdx_ioctl_internal()
- Pass errp in get_tdx_capabilities();

changes in v2:
  - Make the error message more clear;

changes in v1:
  - start from nr_cpuid_configs = 6 for the loop;
  - stop the loop when nr_cpuid_configs exceeds KVM_MAX_CPUID_ENTRIES;
2023-11-15 00:57:09 -05:00
Xiaoyao Li
38b04243ce i386/tdx: Implement tdx_kvm_init() to initialize TDX VM context
Introduce tdx_kvm_init() and invoke it in kvm_confidential_guest_init()
if it's a TDX VM.

Set ms->require_guest_memfd to require KVM guest memfd allocation for
any memory backend. More TDX-specific initialization will be added
later.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
c64b4c4e28 target/i386: Introduce kvm_confidential_guest_init()
Introduce a separate function kvm_confidential_guest_init(), which
dispatches specific confidential guest initialization function by
ms->cgs type.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
871c9f21ee target/i386: Parse TDX vm type
A TDX VM requires the VM type KVM_X86_TDX_VM to be passed to
kvm_ioctl(KVM_CREATE_VM).

If a tdx-guest object is specified as confidential-guest-support, like

  qemu -machine ...,confidential-guest-support=tdx0 \
       -object tdx-guest,id=tdx0,...

the VM type is parsed as KVM_X86_TDX_VM.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
3770c1f6cb target/i386: Implement mc->kvm_type() to get VM type
Implement mc->kvm_type() for i386 machines. It provides a way for the
user to create a KVM_X86_SW_PROTECTED_VM.

Also store the vm_type in MachineState so other code can query what the
VM type is.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
636758fe40 i386: Introduce tdx-guest object
Introduce the tdx-guest object, which implements the interface of
CONFIDENTIAL_GUEST_SUPPORT and will be used to create TDX VMs (TDs) by

  qemu -machine ...,confidential-guest-support=tdx0	\
       -object tdx-guest,id=tdx0

It has only one member, 'attributes', with fixed value 0, not
configurable so far.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Markus Armbruster <armbru@redhat.com>
---
changes in v1
- make @attributes not user-settable
2023-11-15 00:57:09 -05:00
Xiaoyao Li
c9636b4bf5 *** HACK *** linux-headers: Update headers to pull in TDX API changes
Pull in the recent TDX updates, which are not backwards compatible.

It's just to make this series runnable. It will be updated by the script

	scripts/update-linux-headers.sh

once TDX support is upstreamed in the Linux kernel.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Isaku Yamahata
d39ad1ccfd trace/kvm: Add trace for page conversion between shared and private
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Chao Peng
552a4e57a8 kvm: handle KVM_EXIT_MEMORY_FAULT
Currently, only KVM_MEMORY_EXIT_FLAG_PRIVATE in flags is valid when
KVM_EXIT_MEMORY_FAULT happens. It indicates that userspace needs to do
the memory conversion on the RAMBlock to turn the memory into the
desired attribute, i.e., private/shared.

Note, KVM_EXIT_MEMORY_FAULT makes sense only when the RAMBlock has a
guest_memfd memory backend.

Note, KVM_EXIT_MEMORY_FAULT returns with -EFAULT, so special handling is
added.
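
A sketch of the dispatch; KVM_MEMORY_EXIT_FLAG_PRIVATE comes from the
commit message, while the struct layout, the bit position, and the
kvm_convert_memory stub are mocked assumptions:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define KVM_MEMORY_EXIT_FLAG_PRIVATE (1ULL << 3)  /* assumed bit */

  struct memory_fault { uint64_t flags, gpa, size; };

  /* Stand-in for the real conversion path. */
  static int kvm_convert_memory(uint64_t gpa, uint64_t size, bool to_private)
  {
      printf("convert 0x%llx +0x%llx -> %s\n", (unsigned long long)gpa,
             (unsigned long long)size, to_private ? "private" : "shared");
      return 0;
  }

  static int handle_memory_fault(const struct memory_fault *mf)
  {
      /* KVM tells us the attribute it needs; convert the range to match. */
      bool to_private = mf->flags & KVM_MEMORY_EXIT_FLAG_PRIVATE;
      return kvm_convert_memory(mf->gpa, mf->size, to_private);
  }

  int main(void)
  {
      struct memory_fault mf = { KVM_MEMORY_EXIT_FLAG_PRIVATE,
                                 0x100000, 0x1000 };
      return handle_memory_fault(&mf);
  }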

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
105fa48cab physmem: Introduce ram_block_convert_range() for page conversion
It's used for discarding the opposite memory after a memory conversion
for a confidential guest.

When a page is converted from shared to private, the original shared
memory can be discarded via ram_block_discard_range().

When a page is converted from private to shared, the original private
memory is backed by guest_memfd. Introduce
ram_block_discard_guest_memfd_range() for discarding memory in
guest_memfd.
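
A sketch of the dispatch; the function names come from the commit
message, the bodies are print-only stubs, and the real functions
presumably operate on a RAMBlock rather than raw offsets:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  static int ram_block_discard_range(uint64_t start, uint64_t len)
  {
      printf("discard shared backing      [0x%llx, +0x%llx)\n",
             (unsigned long long)start, (unsigned long long)len);
      return 0;
  }

  static int ram_block_discard_guest_memfd_range(uint64_t start, uint64_t len)
  {
      printf("discard guest_memfd backing [0x%llx, +0x%llx)\n",
             (unsigned long long)start, (unsigned long long)len);
      return 0;
  }

  /* After a conversion, throw away the side that is no longer in use. */
  static int ram_block_convert_range(uint64_t start, uint64_t len,
                                     bool to_private)
  {
      return to_private ? ram_block_discard_range(start, len)
                        : ram_block_discard_guest_memfd_range(start, len);
  }

  int main(void)
  {
      ram_block_convert_range(0x100000, 0x1000, true);  /* shared -> private */
      ram_block_convert_range(0x200000, 0x1000, false); /* private -> shared */
      return 0;
  }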

Originally-from: Isaku Yamahata <isaku.yamahata@intel.com>
Codeveloped-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
9824769418 physmem: replace function name with __func__ in ram_block_discard_range()
Use __func__ to avoid hard-coded function name.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
02fda3dd28 physmem: Relax the alignment check of host_startaddr in ram_block_discard_range()
Commit d3a5038c46 ("exec: ram_block_discard_range") introduced
ram_block_discard_range(), which grabs some code from
ram_discard_range(). However, during the code movement, it changed the
alignment check of host_startaddr from qemu_host_page_size to
rb->page_size.

When the ramblock is backed by hugepages, this requires the startaddr to
be huge-page-size aligned, which is overkill. E.g., TDX's private-shared
page conversion is done at 4KB granularity. A shared page is discarded
when it gets converted to private, and when the shared page is backed by
a hugepage it is going to fail this check.

So change the alignment check back to qemu_host_page_size.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
Changes in v3:
 - Newly added in v3;
2023-11-15 00:57:09 -05:00
Xiaoyao Li
609e0ca1d8 kvm: Introduce support for memory_attributes
Introduce helper functions to set the attributes of a range of memory
to private or shared.

This is necessary to notify KVM of the private/shared attribute of each
GPA range. KVM needs the information to decide whether a GPA should be
mapped to hva-based shared memory or guest_memfd-based private memory.
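
A sketch of what such a helper boils down to; the struct layout and the
PRIVATE attribute bit follow the in-flight kernel series as I understand
it, and the ioctl is replaced by a print stub:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3)  /* assumed bit */

  struct kvm_memory_attributes {
      uint64_t address;
      uint64_t size;
      uint64_t attributes;
      uint64_t flags;      /* must be zero, per the assumed uapi */
  };

  /* Stand-in for ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs). */
  static int vm_ioctl_set_memory_attributes(struct kvm_memory_attributes *a)
  {
      printf("set [0x%llx, +0x%llx) attrs=0x%llx\n",
             (unsigned long long)a->address, (unsigned long long)a->size,
             (unsigned long long)a->attributes);
      return 0;
  }

  static int kvm_set_memory_attributes(uint64_t start, uint64_t size,
                                       bool private)
  {
      struct kvm_memory_attributes attrs = {
          .address = start,
          .size = size,
          .attributes = private ? KVM_MEMORY_ATTRIBUTE_PRIVATE : 0,
      };
      return vm_ioctl_set_memory_attributes(&attrs);
  }

  int main(void)
  {
      return kvm_set_memory_attributes(0x100000, 0x200000, true);
  }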

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Chao Peng
d21132fe6e kvm: Enable KVM_SET_USER_MEMORY_REGION2 for memslot
Switch to KVM_SET_USER_MEMORY_REGION2 when supported by KVM.

With KVM_SET_USER_MEMORY_REGION2, QEMU can set up a memory region that
is backed both by hva-based shared memory and by guest_memfd-based
private memory.

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
902dc0c4a7 HostMem: Add mechanism to opt in kvm guest memfd via MachineState
Add a new member "require_guest_memfd" to memory backends. When it's set
to true, it enables RAM_GUEST_MEMFD in ram_flags, thus private kvm
guest_memfd will be allocated during RAMBlock allocation.

Memory backend's @require_guest_memfd is wired with @require_guest_memfd
field of MachineState. MachineState::require_guest_memfd is supposed to
be set by any VMs that requires KVM guest memfd as private memory, e.g.,
TDX VM.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
7bd3bf6642 RAMBlock/guest_memfd: Enable KVM_GUEST_MEMFD_ALLOW_HUGEPAGE
KVM allows KVM_GUEST_MEMFD_ALLOW_HUGEPAGE for guest memfd. When the
flag is set, KVM tries to allocate memory with transparent hugepages
first and falls back to non-hugepage on failure.

However, KVM defines the restriction that the size must be hugepage-size
aligned when KVM_GUEST_MEMFD_ALLOW_HUGEPAGE is set.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
v3:
 - New one in v3.
2023-11-15 00:57:09 -05:00
Xiaoyao Li
843bbbd03b RAMBlock: Add support of KVM private guest memfd
Add KVM guest_memfd support to RAMBlock so that both normal hva-based
memory and KVM guest_memfd-based private memory can be associated with
one RAMBlock.

Introduce the new flag RAM_GUEST_MEMFD. When it's set, a KVM ioctl is
called to create the private guest_memfd during RAMBlock setup.

Note, RAM_GUEST_MEMFD is supposed to be set for the memory backends of
confidential guests, such as TDX VMs. How and when to set it for memory
backends will be implemented in the following patches.

Introduce memory_region_has_guest_memfd() to query whether the
MemoryRegion has KVM guest_memfd allocated.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
Changes in v3:
- rename gmem to guest_memfd;
- close(guest_memfd) when the RAMBlock is released; (Daniel P. Berrangé)
- Squash the patch that introduces memory_region_has_guest_memfd().
2023-11-15 00:57:09 -05:00
Xiaoyao Li
3091ad45a5 *** HACK *** linux-headers: Update headers to pull in gmem APIs
This patch needs to be updated by the script

	scripts/update-linux-headers.sh

once gmem fd support is upstreamed in the Linux kernel.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
8d635c6681 trace/kvm: Split address space and slot id in trace_kvm_set_user_memory()
The upper 16 bits of kvm_userspace_memory_region::slot are the address
space id. Parse it out separately in trace_kvm_set_user_memory().

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
f5fd218755 i386/cpuid: Remove subleaf constraint on CPUID leaf 1F
There is no such constraint that the subleaf index needs to be less than 64.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
d5591ff440 i386/cpuid: Decrease cpuid_i when skipping CPUID leaf 1F
Decrease the array index cpuid_i when CPUID leaf 1F is skipped;
otherwise the result contains an all-zeroed CPUID entry with leaf 0 and
subleaf 0, which conflicts with the correct leaf 0 entry.
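
A self-contained sketch of the fix; the loop bounds, entry type and the
has_leaf_1f condition are contrived for illustration:

  #include <stdint.h>
  #include <stdio.h>

  struct kvm_cpuid_entry { uint32_t function; };

  int main(void)
  {
      struct kvm_cpuid_entry entries[64];
      int cpuid_i = 0, has_leaf_1f = 0;  /* pretend 0x1f is unsupported */

      for (uint32_t func = 0x1e; func <= 0x20; func++) {
          struct kvm_cpuid_entry *e = &entries[cpuid_i++];
          if (func == 0x1f && !has_leaf_1f) {
              cpuid_i--;                 /* give the slot back: no zeroed hole */
              continue;
          }
          e->function = func;
      }
      printf("%d entries\n", cpuid_i);   /* 2, not 3 */
      return 0;
  }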

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
605d572f9c i386/pc: Drop pc_machine_kvm_type()
pc_machine_kvm_type() was introduced by commit e21be724ea ("i386/xen:
add pc_machine_kvm_type to initialize XEN_EMULATE mode") to do Xen
specific initialization by utilizing the kvm_type method.

Commit eeedfe6c63 ("hw/xen: Simplify emulated Xen platform init")
moved the Xen specific initialization to pc_basic_device_init().

There is no need to keep the PC-specific kvm_type() implementation
anymore. On the other hand, a later patch will implement the kvm_type()
method for all x86/i386 machines to support KVM_X86_SW_PROTECTED_VM.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
2023-11-15 00:57:09 -05:00
3742 changed files with 114442 additions and 165623 deletions

View File

@@ -24,10 +24,6 @@ variables:
# Each script line from will be in a collapsible section in the job output
# and show the duration of each line.
FF_SCRIPT_SECTIONS: 1
# The project has a fairly fat GIT repo so we try and avoid bringing in things
# we don't need. The --filter options avoid blobs and tree references we aren't going to use
# and we also avoid fetching tags.
GIT_FETCH_EXTRA_FLAGS: --filter=blob:none --filter=tree:0 --no-tags --prune --quiet
interruptible: true
@@ -45,10 +41,6 @@ variables:
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_COMMIT_TAG'
when: never
# Scheduled runs on mainline don't get pipelines except for the special Coverity job
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_PIPELINE_SOURCE == "schedule"'
when: never
# Cirrus jobs can't run unless the creds / target repo are set
- if: '$QEMU_JOB_CIRRUS && ($CIRRUS_GITHUB_REPO == null || $CIRRUS_API_TOKEN == null)'
when: never

View File

@@ -9,13 +9,11 @@
when: always
before_script:
- JOBS=$(expr $(nproc) + 1)
- cat /packages.txt
script:
- export CCACHE_BASEDIR="$(pwd)"
- export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
- export CCACHE_MAXSIZE="500M"
- export PATH="$CCACHE_WRAPPERSDIR:$PATH"
- du -sh .git
- mkdir build
- cd build
- ccache --zero-stats
@@ -27,10 +25,10 @@
then
pyvenv/bin/meson configure . -Dbackend_max_links="$LD_JOBS" ;
fi || exit 1;
- $MAKE -j"$JOBS"
- make -j"$JOBS"
- if test -n "$MAKE_CHECK_ARGS";
then
$MAKE -j"$JOBS" $MAKE_CHECK_ARGS ;
make -j"$JOBS" $MAKE_CHECK_ARGS ;
fi
- ccache --show-stats
@@ -46,8 +44,10 @@
exclude:
- build/**/*.p
- build/**/*.a.p
- build/**/*.fa.p
- build/**/*.c.o
- build/**/*.c.o.d
- build/**/*.fa
.common_test_job_template:
extends: .base_job_template
@@ -59,7 +59,7 @@
- cd build
- find . -type f -exec touch {} +
# Avoid recompiling by hiding ninja with NINJA=":"
- $MAKE NINJA=":" $MAKE_CHECK_ARGS
- make NINJA=":" $MAKE_CHECK_ARGS
.native_test_job_template:
extends: .common_test_job_template

View File

@@ -61,7 +61,7 @@ avocado-system-ubuntu:
variables:
IMAGE: ubuntu2204
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:alpha arch:microblazeel arch:mips64el
AVOCADO_TAGS: arch:alpha arch:microblaze arch:mips64el
build-system-debian:
extends:
@@ -70,7 +70,7 @@ build-system-debian:
needs:
job: amd64-debian-container
variables:
IMAGE: debian
IMAGE: debian-amd64
CONFIGURE_ARGS: --with-coroutine=sigaltstack
TARGETS: arm-softmmu i386-softmmu riscv64-softmmu sh4eb-softmmu
sparc-softmmu xtensa-softmmu
@@ -82,7 +82,7 @@ check-system-debian:
- job: build-system-debian
artifacts: true
variables:
IMAGE: debian
IMAGE: debian-amd64
MAKE_CHECK_ARGS: check
avocado-system-debian:
@@ -91,7 +91,7 @@ avocado-system-debian:
- job: build-system-debian
artifacts: true
variables:
IMAGE: debian
IMAGE: debian-amd64
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:arm arch:i386 arch:riscv64 arch:sh4 arch:sparc arch:xtensa
@@ -101,7 +101,7 @@ crash-test-debian:
- job: build-system-debian
artifacts: true
variables:
IMAGE: debian
IMAGE: debian-amd64
script:
- cd build
- make NINJA=":" check-venv
@@ -158,89 +158,22 @@ build-system-centos:
- .native_build_job_template
- .native_build_artifact_template
needs:
job: amd64-centos9-container
job: amd64-centos8-container
variables:
IMAGE: centos9
IMAGE: centos8
CONFIGURE_ARGS: --disable-nettle --enable-gcrypt --enable-vfio-user-server
--enable-modules --enable-trace-backends=dtrace --enable-docs
TARGETS: ppc64-softmmu or1k-softmmu s390x-softmmu
x86_64-softmmu rx-softmmu sh4-softmmu
x86_64-softmmu rx-softmmu sh4-softmmu nios2-softmmu
MAKE_CHECK_ARGS: check-build
# Previous QEMU release. Used for cross-version migration tests.
build-previous-qemu:
extends: .native_build_job_template
artifacts:
when: on_success
expire_in: 2 days
paths:
- build-previous
exclude:
- build-previous/**/*.p
- build-previous/**/*.a.p
- build-previous/**/*.c.o
- build-previous/**/*.c.o.d
needs:
job: amd64-opensuse-leap-container
variables:
IMAGE: opensuse-leap
TARGETS: x86_64-softmmu aarch64-softmmu
# Override the default flags as we need more to grab the old version
GIT_FETCH_EXTRA_FLAGS: --prune --quiet
before_script:
- export QEMU_PREV_VERSION="$(sed 's/\([0-9.]*\)\.[0-9]*/v\1.0/' VERSION)"
- git remote add upstream https://gitlab.com/qemu-project/qemu
- git fetch upstream refs/tags/$QEMU_PREV_VERSION:refs/tags/$QEMU_PREV_VERSION
- git checkout $QEMU_PREV_VERSION
after_script:
- mv build build-previous
.migration-compat-common:
extends: .common_test_job_template
needs:
- job: build-previous-qemu
- job: build-system-opensuse
# The old QEMU could have bugs unrelated to migration that are
# already fixed in the current development branch, so this test
# might fail.
allow_failure: true
variables:
IMAGE: opensuse-leap
MAKE_CHECK_ARGS: check-build
script:
# Use the migration-tests from the older QEMU tree. This avoids
# testing an old QEMU against new features/tests that it is not
# compatible with.
- cd build-previous
# old to new
- QTEST_QEMU_BINARY_SRC=./qemu-system-${TARGET}
QTEST_QEMU_BINARY=../build/qemu-system-${TARGET} ./tests/qtest/migration-test
# new to old
- QTEST_QEMU_BINARY_DST=./qemu-system-${TARGET}
QTEST_QEMU_BINARY=../build/qemu-system-${TARGET} ./tests/qtest/migration-test
# This job needs to be disabled until we can have an aarch64 CPU model that
# will both (1) support both KVM and TCG, and (2) provide a stable ABI.
# Currently only "-cpu max" can provide (1), however it doesn't guarantee
# (2). Mark this test skipped until later.
migration-compat-aarch64:
extends: .migration-compat-common
variables:
TARGET: aarch64
QEMU_JOB_SKIPPED: 1
migration-compat-x86_64:
extends: .migration-compat-common
variables:
TARGET: x86_64
check-system-centos:
extends: .native_test_job_template
needs:
- job: build-system-centos
artifacts: true
variables:
IMAGE: centos9
IMAGE: centos8
MAKE_CHECK_ARGS: check
avocado-system-centos:
@@ -249,10 +182,10 @@ avocado-system-centos:
- job: build-system-centos
artifacts: true
variables:
IMAGE: centos9
IMAGE: centos8
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:ppc64 arch:or1k arch:s390x arch:x86_64 arch:rx
arch:sh4
AVOCADO_TAGS: arch:ppc64 arch:or1k arch:390x arch:x86_64 arch:rx
arch:sh4 arch:nios2
build-system-opensuse:
extends:
@@ -284,36 +217,6 @@ avocado-system-opensuse:
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:s390x arch:x86_64 arch:aarch64
#
# Flaky tests. We don't run these by default and they are allow fail
# but often the CI system is the only way to trigger the failures.
#
build-system-flaky:
extends:
- .native_build_job_template
- .native_build_artifact_template
needs:
job: amd64-debian-container
variables:
IMAGE: debian
QEMU_JOB_OPTIONAL: 1
TARGETS: aarch64-softmmu arm-softmmu mips64el-softmmu
ppc64-softmmu rx-softmmu s390x-softmmu sh4-softmmu x86_64-softmmu
MAKE_CHECK_ARGS: check-build
avocado-system-flaky:
extends: .avocado_test_job_template
needs:
- job: build-system-flaky
artifacts: true
allow_failure: true
variables:
IMAGE: debian
MAKE_CHECK_ARGS: check-avocado
QEMU_JOB_OPTIONAL: 1
QEMU_TEST_FLAKY_TESTS: 1
AVOCADO_TAGS: flaky
# This jobs explicitly disable TCG (--disable-tcg), KVM is detected by
# the configure script. The container doesn't contain Xen headers so
@@ -325,9 +228,9 @@ avocado-system-flaky:
build-tcg-disabled:
extends: .native_build_job_template
needs:
job: amd64-centos9-container
job: amd64-centos8-container
variables:
IMAGE: centos9
IMAGE: centos8
script:
- mkdir build
- cd build
@@ -340,7 +243,7 @@ build-tcg-disabled:
- cd tests/qemu-iotests/
- ./check -raw 001 002 003 004 005 008 009 010 011 012 021 025 032 033 048
052 063 077 086 101 104 106 113 148 150 151 152 157 159 160 163
170 171 184 192 194 208 221 226 227 236 253 277 image-fleecing
170 171 183 184 192 194 208 221 226 227 236 253 277 image-fleecing
- ./check -qcow2 028 051 056 057 058 065 068 082 085 091 095 096 102 122
124 132 139 142 144 145 151 152 155 157 165 194 196 200 202
208 209 216 218 227 234 246 247 248 250 254 255 257 258
@@ -430,7 +333,6 @@ clang-system:
IMAGE: fedora
CONFIGURE_ARGS: --cc=clang --cxx=clang++
--extra-cflags=-fsanitize=undefined --extra-cflags=-fno-sanitize-recover=undefined
--extra-cflags=-fno-sanitize=function
TARGETS: alpha-softmmu arm-softmmu m68k-softmmu mips64-softmmu s390x-softmmu
MAKE_CHECK_ARGS: check-qtest check-tcg
@@ -444,7 +346,6 @@ clang-user:
CONFIGURE_ARGS: --cc=clang --cxx=clang++ --disable-system
--target-list-exclude=alpha-linux-user,microblazeel-linux-user,aarch64_be-linux-user,i386-linux-user,m68k-linux-user,mipsn32el-linux-user,xtensaeb-linux-user
--extra-cflags=-fsanitize=undefined --extra-cflags=-fno-sanitize-recover=undefined
--extra-cflags=-fno-sanitize=function
MAKE_CHECK_ARGS: check-unit check-tcg
# Set LD_JOBS=1 because this requires LTO and ld consumes a large amount of memory.
@@ -575,9 +476,6 @@ tsan-build:
CONFIGURE_ARGS: --enable-tsan --cc=clang --cxx=clang++
--enable-trace-backends=ust --disable-slirp
TARGETS: x86_64-softmmu ppc64-softmmu riscv64-softmmu x86_64-linux-user
# Remove when we switch to a distro with clang >= 18
# https://github.com/google/sanitizers/issues/1716
MAKE: setarch -R make
# gcov is a GCC features
gcov:
@@ -636,7 +534,7 @@ build-tci:
- TARGETS="aarch64 arm hppa m68k microblaze ppc64 s390x x86_64"
- mkdir build
- cd build
- ../configure --enable-tcg-interpreter --disable-kvm --disable-docs --disable-gtk --disable-vnc
- ../configure --enable-tcg-interpreter --disable-docs --disable-gtk --disable-vnc
--target-list="$(for tg in $TARGETS; do echo -n ${tg}'-softmmu '; done)"
|| { cat config.log meson-logs/meson-log.txt && exit 1; }
- make -j"$JOBS"
@@ -651,15 +549,12 @@ build-tci:
- make check-tcg
# Check our reduced build configurations
# requires libfdt: aarch64, arm, loongarch64, microblaze, microblazeel,
# or1k, ppc64, riscv32, riscv64, rx
# fails qtest without boards: i386, x86_64
build-without-defaults:
extends: .native_build_job_template
needs:
job: amd64-centos9-container
job: amd64-centos8-container
variables:
IMAGE: centos9
IMAGE: centos8
CONFIGURE_ARGS:
--without-default-devices
--without-default-features
@@ -667,11 +562,8 @@ build-without-defaults:
--disable-pie
--disable-qom-cast-debug
--disable-strip
TARGETS: alpha-softmmu avr-softmmu cris-softmmu hppa-softmmu m68k-softmmu
mips-softmmu mips64-softmmu mipsel-softmmu mips64el-softmmu
ppc-softmmu s390x-softmmu sh4-softmmu sh4eb-softmmu sparc-softmmu
sparc64-softmmu tricore-softmmu xtensa-softmmu xtensaeb-softmmu
hexagon-linux-user i386-linux-user s390x-linux-user
TARGETS: avr-softmmu mips64-softmmu s390x-softmmu sh4-softmmu
sparc64-softmmu hexagon-linux-user i386-linux-user s390x-linux-user
MAKE_CHECK_ARGS: check
build-libvhost-user:
@@ -697,7 +589,7 @@ build-tools-and-docs-debian:
# when running on 'master' we use pre-existing container
optional: true
variables:
IMAGE: debian
IMAGE: debian-amd64
MAKE_CHECK_ARGS: check-unit ctags TAGS cscope
CONFIGURE_ARGS: --disable-system --disable-user --enable-docs --enable-tools
QEMU_JOB_PUBLISH: 1
@@ -717,7 +609,7 @@ build-tools-and-docs-debian:
# of what topic branch they're currently using
pages:
extends: .base_job_template
image: $CI_REGISTRY_IMAGE/qemu/debian:$QEMU_CI_CONTAINER_TAG
image: $CI_REGISTRY_IMAGE/qemu/debian-amd64:$QEMU_CI_CONTAINER_TAG
stage: test
needs:
- job: build-tools-and-docs-debian
@@ -725,10 +617,7 @@ pages:
- mkdir -p public
# HTML-ised source tree
- make gtags
# We unset variables to work around a bug in some htags versions
# which causes it to fail when the environment is large
- CI_COMMIT_MESSAGE= CI_COMMIT_TAG_MESSAGE= htags
-anT --tree-view=filetree -m qemu_init
- htags -anT --tree-view=filetree -m qemu_init
-t "Welcome to the QEMU sourcecode"
- mv HTML public/src
# Project documentation
@@ -740,40 +629,3 @@ pages:
- public
variables:
QEMU_JOB_PUBLISH: 1
coverity:
image: $CI_REGISTRY_IMAGE/qemu/fedora:$QEMU_CI_CONTAINER_TAG
stage: build
allow_failure: true
timeout: 3h
needs:
- job: amd64-fedora-container
optional: true
before_script:
- dnf install -y curl wget
script:
# would be nice to cancel the job if over quota (https://gitlab.com/gitlab-org/gitlab/-/issues/256089)
# for example:
# curl --request POST --header "PRIVATE-TOKEN: $CI_JOB_TOKEN" "${CI_SERVER_URL}/api/v4/projects/${CI_PROJECT_ID}/jobs/${CI_JOB_ID}/cancel
- 'scripts/coverity-scan/run-coverity-scan --check-upload-only || { exitcode=$?; if test $exitcode = 1; then
exit 0;
else
exit $exitcode;
fi; };
scripts/coverity-scan/run-coverity-scan --update-tools-only > update-tools.log 2>&1 || { cat update-tools.log; exit 1; };
scripts/coverity-scan/run-coverity-scan --no-update-tools'
rules:
- if: '$COVERITY_TOKEN == null'
when: never
- if: '$COVERITY_EMAIL == null'
when: never
# Never included on upstream pipelines, except for schedules
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_PIPELINE_SOURCE == "schedule"'
when: on_success
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM'
when: never
# Forks don't get any pipeline unless QEMU_CI=1 or QEMU_CI=2 is set
- if: '$QEMU_CI != "1" && $QEMU_CI != "2"'
when: never
# Always manual on forks even if $QEMU_CI == "2"
- when: manual

View File

@@ -13,7 +13,7 @@
.cirrus_build_job:
extends: .base_job_template
stage: build
image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:latest
image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:master
needs: []
# 20 mins larger than "timeout_in" in cirrus/build.yml
# as there's often a 5-10 minute delay before Cirrus CI
@@ -52,42 +52,61 @@ x64-freebsd-13-build:
NAME: freebsd-13
CIRRUS_VM_INSTANCE_TYPE: freebsd_instance
CIRRUS_VM_IMAGE_SELECTOR: image_family
CIRRUS_VM_IMAGE_NAME: freebsd-13-3
CIRRUS_VM_IMAGE_NAME: freebsd-13-2
CIRRUS_VM_CPUS: 8
CIRRUS_VM_RAM: 8G
UPDATE_COMMAND: pkg update; pkg upgrade -y
INSTALL_COMMAND: pkg install -y
CONFIGURE_ARGS: --target-list-exclude=arm-softmmu,i386-softmmu,microblaze-softmmu,mips64el-softmmu,mipsel-softmmu,mips-softmmu,ppc-softmmu,sh4eb-softmmu,xtensa-softmmu
TEST_TARGETS: check
aarch64-macos-13-base-build:
aarch64-macos-12-base-build:
extends: .cirrus_build_job
variables:
NAME: macos-13
NAME: macos-12
CIRRUS_VM_INSTANCE_TYPE: macos_instance
CIRRUS_VM_IMAGE_SELECTOR: image
CIRRUS_VM_IMAGE_NAME: ghcr.io/cirruslabs/macos-ventura-base:latest
CIRRUS_VM_IMAGE_NAME: ghcr.io/cirruslabs/macos-monterey-base:latest
CIRRUS_VM_CPUS: 12
CIRRUS_VM_RAM: 24G
UPDATE_COMMAND: brew update
INSTALL_COMMAND: brew install
PATH_EXTRA: /opt/homebrew/ccache/libexec:/opt/homebrew/gettext/bin
PKG_CONFIG_PATH: /opt/homebrew/curl/lib/pkgconfig:/opt/homebrew/ncurses/lib/pkgconfig:/opt/homebrew/readline/lib/pkgconfig
CONFIGURE_ARGS: --target-list-exclude=arm-softmmu,i386-softmmu,microblazeel-softmmu,mips64-softmmu,mipsel-softmmu,mips-softmmu,ppc-softmmu,sh4-softmmu,xtensaeb-softmmu
TEST_TARGETS: check-unit check-block check-qapi-schema check-softfloat check-qtest-x86_64
aarch64-macos-14-base-build:
extends: .cirrus_build_job
# The following jobs run VM-based tests via KVM on a Linux-based Cirrus-CI job
.cirrus_kvm_job:
extends: .base_job_template
stage: build
image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:master
needs: []
timeout: 80m
script:
- sed -e "s|[@]CI_REPOSITORY_URL@|$CI_REPOSITORY_URL|g"
-e "s|[@]CI_COMMIT_REF_NAME@|$CI_COMMIT_REF_NAME|g"
-e "s|[@]CI_COMMIT_SHA@|$CI_COMMIT_SHA|g"
-e "s|[@]NAME@|$NAME|g"
-e "s|[@]CONFIGURE_ARGS@|$CONFIGURE_ARGS|g"
-e "s|[@]TEST_TARGETS@|$TEST_TARGETS|g"
<.gitlab-ci.d/cirrus/kvm-build.yml >.gitlab-ci.d/cirrus/$NAME.yml
- cat .gitlab-ci.d/cirrus/$NAME.yml
- cirrus-run -v --show-build-log always .gitlab-ci.d/cirrus/$NAME.yml
variables:
NAME: macos-14
CIRRUS_VM_INSTANCE_TYPE: macos_instance
CIRRUS_VM_IMAGE_SELECTOR: image
CIRRUS_VM_IMAGE_NAME: ghcr.io/cirruslabs/macos-sonoma-base:latest
CIRRUS_VM_CPUS: 12
CIRRUS_VM_RAM: 24G
UPDATE_COMMAND: brew update
INSTALL_COMMAND: brew install
PATH_EXTRA: /opt/homebrew/ccache/libexec:/opt/homebrew/gettext/bin
PKG_CONFIG_PATH: /opt/homebrew/curl/lib/pkgconfig:/opt/homebrew/ncurses/lib/pkgconfig:/opt/homebrew/readline/lib/pkgconfig
TEST_TARGETS: check-unit check-block check-qapi-schema check-softfloat check-qtest-x86_64
QEMU_JOB_CIRRUS: 1
QEMU_JOB_OPTIONAL: 1
x86-netbsd:
extends: .cirrus_kvm_job
variables:
NAME: netbsd
CONFIGURE_ARGS: --target-list=x86_64-softmmu,ppc64-softmmu,aarch64-softmmu
TEST_TARGETS: check
x86-openbsd:
extends: .cirrus_kvm_job
variables:
NAME: openbsd
CONFIGURE_ARGS: --target-list=i386-softmmu,riscv64-softmmu,mips64-softmmu
TEST_TARGETS: check


@@ -21,7 +21,7 @@ build_task:
install_script:
- @UPDATE_COMMAND@
- @INSTALL_COMMAND@ @PKGS@
- if test -n "@PYPI_PKGS@" ; then PYLIB=$(@PYTHON@ -c 'import sysconfig; print(sysconfig.get_path("stdlib"))'); rm -f $PYLIB/EXTERNALLY-MANAGED; @PIP3@ install @PYPI_PKGS@ ; fi
- if test -n "@PYPI_PKGS@" ; then @PIP3@ install @PYPI_PKGS@ ; fi
clone_script:
- git clone --depth 100 "$CI_REPOSITORY_URL" .
- git fetch origin "$CI_COMMIT_REF_NAME"


@@ -11,6 +11,6 @@ MAKE='/usr/local/bin/gmake'
NINJA='/usr/local/bin/ninja'
PACKAGING_COMMAND='pkg'
PIP3='/usr/local/bin/pip-3.8'
PKGS='alsa-lib bash bison bzip2 ca_root_nss capstone4 ccache cmocka ctags curl cyrus-sasl dbus diffutils dtc flex fusefs-libs3 gettext git glib gmake gnutls gsed gtk-vnc gtk3 json-c libepoxy libffi libgcrypt libjpeg-turbo libnfs libslirp libspice-server libssh libtasn1 llvm lzo2 meson mtools ncurses nettle ninja opencv pixman pkgconf png py311-numpy py311-pillow py311-pip py311-sphinx py311-sphinx_rtd_theme py311-tomli py311-yaml python3 rpm2cpio sdl2 sdl2_image snappy sndio socat spice-protocol tesseract usbredir virglrenderer vte3 xorriso zstd'
PKGS='alsa-lib bash bison bzip2 ca_root_nss capstone4 ccache cmocka ctags curl cyrus-sasl dbus diffutils dtc flex fusefs-libs3 gettext git glib gmake gnutls gsed gtk3 json-c libepoxy libffi libgcrypt libjpeg-turbo libnfs libslirp libspice-server libssh libtasn1 llvm lzo2 meson mtools ncurses nettle ninja opencv pixman pkgconf png py39-numpy py39-pillow py39-pip py39-sphinx py39-sphinx_rtd_theme py39-tomli py39-yaml python3 rpm2cpio sdl2 sdl2_image snappy sndio socat spice-protocol tesseract usbredir virglrenderer vte3 xorriso zstd'
PYPI_PKGS=''
PYTHON='/usr/local/bin/python3'


@@ -0,0 +1,31 @@
container:
image: fedora:35
cpu: 4
memory: 8Gb
kvm: true
env:
CIRRUS_CLONE_DEPTH: 1
CI_REPOSITORY_URL: "@CI_REPOSITORY_URL@"
CI_COMMIT_REF_NAME: "@CI_COMMIT_REF_NAME@"
CI_COMMIT_SHA: "@CI_COMMIT_SHA@"
@NAME@_task:
@NAME@_vm_cache:
folder: $HOME/.cache/qemu-vm
install_script:
- dnf update -y
- dnf install -y git make openssh-clients qemu-img qemu-system-x86 wget meson
clone_script:
- git clone --depth 100 "$CI_REPOSITORY_URL" .
- git fetch origin "$CI_COMMIT_REF_NAME"
- git reset --hard "$CI_COMMIT_SHA"
build_script:
- if [ -f $HOME/.cache/qemu-vm/images/@NAME@.img ]; then
make vm-build-@NAME@ J=$(getconf _NPROCESSORS_ONLN)
EXTRA_CONFIGURE_OPTS="@CONFIGURE_ARGS@"
BUILD_TARGET="@TEST_TARGETS@" ;
else
make vm-build-@NAME@ J=$(getconf _NPROCESSORS_ONLN) BUILD_TARGET=help
EXTRA_CONFIGURE_OPTS="--disable-system --disable-user --disable-tools" ;
fi


@@ -1,6 +1,6 @@
# THIS FILE WAS AUTO-GENERATED
#
# $ lcitool variables macos-13 qemu
# $ lcitool variables macos-12 qemu
#
# https://gitlab.com/libvirt/libvirt-ci
@@ -11,6 +11,6 @@ MAKE='/opt/homebrew/bin/gmake'
NINJA='/opt/homebrew/bin/ninja'
PACKAGING_COMMAND='brew'
PIP3='/opt/homebrew/bin/pip3'
PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 gtk-vnc jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy socat sparse spice-protocol swtpm tesseract usbredir vde vte3 xorriso zlib zstd'
PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy socat sparse spice-protocol swtpm tesseract usbredir vde vte3 xorriso zlib zstd'
PYPI_PKGS='PyYAML numpy pillow sphinx sphinx-rtd-theme tomli'
PYTHON='/opt/homebrew/bin/python3'


@@ -1,16 +0,0 @@
# THIS FILE WAS AUTO-GENERATED
#
# $ lcitool variables macos-14 qemu
#
# https://gitlab.com/libvirt/libvirt-ci
CCACHE='/opt/homebrew/bin/ccache'
CPAN_PKGS=''
CROSS_PKGS=''
MAKE='/opt/homebrew/bin/gmake'
NINJA='/opt/homebrew/bin/ninja'
PACKAGING_COMMAND='brew'
PIP3='/opt/homebrew/bin/pip3'
PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 gtk-vnc jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy socat sparse spice-protocol swtpm tesseract usbredir vde vte3 xorriso zlib zstd'
PYPI_PKGS='PyYAML numpy pillow sphinx sphinx-rtd-theme tomli'
PYTHON='/opt/homebrew/bin/python3'


@@ -1,10 +1,10 @@
include:
- local: '/.gitlab-ci.d/container-template.yml'
amd64-centos9-container:
amd64-centos8-container:
extends: .container_job_template
variables:
NAME: centos9
NAME: centos8
amd64-fedora-container:
extends: .container_job_template


@@ -46,12 +46,6 @@ loongarch-debian-cross-container:
variables:
NAME: debian-loongarch-cross
i686-debian-cross-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian-i686-cross
mips64el-debian-cross-container:
extends: .container_job_template
stage: containers
@@ -101,6 +95,16 @@ cris-fedora-cross-container:
variables:
NAME: fedora-cris-cross
i386-fedora-cross-container:
extends: .container_job_template
variables:
NAME: fedora-i386-cross
win32-fedora-cross-container:
extends: .container_job_template
variables:
NAME: fedora-win32-cross
win64-fedora-cross-container:
extends: .container_job_template
variables:


@@ -11,7 +11,7 @@ amd64-debian-container:
extends: .container_job_template
stage: containers
variables:
NAME: debian
NAME: debian-amd64
amd64-ubuntu2204-container:
extends: .container_job_template


@@ -8,8 +8,6 @@
key: "$CI_JOB_NAME"
when: always
timeout: 80m
before_script:
- cat /packages.txt
script:
- export CCACHE_BASEDIR="$(pwd)"
- export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
@@ -74,7 +72,7 @@
- ../configure --enable-werror --disable-docs $QEMU_CONFIGURE_OPTS
--disable-system --target-list-exclude="aarch64_be-linux-user
alpha-linux-user cris-linux-user m68k-linux-user microblazeel-linux-user
or1k-linux-user ppc-linux-user sparc-linux-user
nios2-linux-user or1k-linux-user ppc-linux-user sparc-linux-user
xtensa-linux-user $CROSS_SKIP_TARGETS"
- make -j$(expr $(nproc) + 1) all check-build $MAKE_CHECK_ARGS


@@ -37,38 +37,27 @@ cross-arm64-kvm-only:
IMAGE: debian-arm64-cross
EXTRA_CONFIGURE_OPTS: --disable-tcg --without-default-features
cross-i686-system:
extends:
- .cross_system_build_job
- .cross_test_artifacts
needs:
job: i686-debian-cross-container
variables:
IMAGE: debian-i686-cross
EXTRA_CONFIGURE_OPTS: --disable-kvm
MAKE_CHECK_ARGS: check-qtest
cross-i686-user:
cross-i386-user:
extends:
- .cross_user_build_job
- .cross_test_artifacts
needs:
job: i686-debian-cross-container
job: i386-fedora-cross-container
variables:
IMAGE: debian-i686-cross
IMAGE: fedora-i386-cross
MAKE_CHECK_ARGS: check
cross-i686-tci:
cross-i386-tci:
extends:
- .cross_accel_build_job
- .cross_test_artifacts
timeout: 60m
needs:
job: i686-debian-cross-container
job: i386-fedora-cross-container
variables:
IMAGE: debian-i686-cross
IMAGE: fedora-i386-cross
ACCEL: tcg-interpreter
EXTRA_CONFIGURE_OPTS: --target-list=i386-softmmu,i386-linux-user,aarch64-softmmu,aarch64-linux-user,ppc-softmmu,ppc-linux-user --disable-plugins --disable-kvm
EXTRA_CONFIGURE_OPTS: --target-list=i386-softmmu,i386-linux-user,aarch64-softmmu,aarch64-linux-user,ppc-softmmu,ppc-linux-user --disable-plugins
MAKE_CHECK_ARGS: check check-tcg
cross-mipsel-system:
@@ -170,6 +159,20 @@ cross-mips64el-kvm-only:
IMAGE: debian-mips64el-cross
EXTRA_CONFIGURE_OPTS: --disable-tcg --target-list=mips64el-softmmu
cross-win32-system:
extends: .cross_system_build_job
needs:
job: win32-fedora-cross-container
variables:
IMAGE: fedora-win32-cross
EXTRA_CONFIGURE_OPTS: --enable-fdt=internal --disable-plugins
CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu m68k-softmmu
microblazeel-softmmu mips64el-softmmu nios2-softmmu
artifacts:
when: on_success
paths:
- build/qemu-setup*.exe
cross-win64-system:
extends: .cross_system_build_job
needs:
@@ -178,7 +181,7 @@ cross-win64-system:
IMAGE: fedora-win64-cross
EXTRA_CONFIGURE_OPTS: --enable-fdt=internal --disable-plugins
CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu
m68k-softmmu microblazeel-softmmu
m68k-softmmu microblazeel-softmmu nios2-softmmu
or1k-softmmu rx-softmmu sh4eb-softmmu sparc64-softmmu
tricore-softmmu xtensaeb-softmmu
artifacts:


@@ -10,14 +10,13 @@
# gitlab-runner. To avoid problems that gitlab-runner can cause while
# reusing the GIT repository, let's enable the clone strategy, which
# guarantees a fresh repository on each job run.
variables:
GIT_STRATEGY: clone
# All custom runners can extend this template to upload the testlog
# data as an artifact and also feed the junit report
.custom_runner_template:
extends: .base_job_template
variables:
GIT_STRATEGY: clone
GIT_FETCH_EXTRA_FLAGS: --no-tags --prune --quiet
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
expire_in: 7 days
@@ -29,6 +28,7 @@
junit: build/meson-logs/testlog.junit.xml
include:
- local: '/.gitlab-ci.d/custom-runners/ubuntu-22.04-s390x.yml'
- local: '/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml'
- local: '/.gitlab-ci.d/custom-runners/ubuntu-22.04-aarch64.yml'
- local: '/.gitlab-ci.d/custom-runners/ubuntu-22.04-aarch32.yml'
- local: '/.gitlab-ci.d/custom-runners/centos-stream-8-x86_64.yml'


@@ -0,0 +1,24 @@
# All centos-stream-8 jobs should run successfully in an environment
# setup by the scripts/ci/setup/stream/8/build-environment.yml task
# "Installation of extra packages to build QEMU"
centos-stream-8-x86_64:
extends: .custom_runner_template
allow_failure: true
needs: []
stage: build
tags:
- centos_stream_8
- x86_64
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
- if: "$CENTOS_STREAM_8_x86_64_RUNNER_AVAILABLE"
before_script:
- JOBS=$(expr $(nproc) + 1)
script:
- mkdir build
- cd build
- ../scripts/ci/org.centos/stream/8/x86_64/configure
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make -j"$JOBS"
- make NINJA=":" check check-avocado


@@ -1,32 +1,34 @@
# All ubuntu-22.04 jobs should run successfully in an environment
# setup by the scripts/ci/setup/ubuntu/build-environment.yml task
# "Install basic packages to build QEMU on Ubuntu 22.04"
# All ubuntu-20.04 jobs should run successfully in an environment
# setup by the scripts/ci/setup/build-environment.yml task
# "Install basic packages to build QEMU on Ubuntu 20.04/20.04"
ubuntu-22.04-s390x-all-linux:
ubuntu-20.04-s390x-all-linux-static:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- ubuntu_20.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
- if: "$S390X_RUNNER_AVAILABLE"
script:
# --disable-libssh is needed because of https://bugs.launchpad.net/qemu/+bug/1838763
# --disable-glusterfs is needed because there's no static version of those libs in distro supplied packages
- mkdir build
- cd build
- ../configure --enable-debug --disable-system --disable-tools --disable-docs
- ../configure --enable-debug --static --disable-system --disable-glusterfs --disable-libssh
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc`
- make --output-sync check-tcg
- make --output-sync -j`nproc` check
ubuntu-22.04-s390x-all-system:
ubuntu-20.04-s390x-all:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- ubuntu_20.04
- s390x
timeout: 75m
rules:
@@ -35,17 +37,17 @@ ubuntu-22.04-s390x-all-system:
script:
- mkdir build
- cd build
- ../configure --disable-user
- ../configure --disable-libssh
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check
ubuntu-22.04-s390x-alldbg:
ubuntu-20.04-s390x-alldbg:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- ubuntu_20.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
@@ -57,18 +59,18 @@ ubuntu-22.04-s390x-alldbg:
script:
- mkdir build
- cd build
- ../configure --enable-debug
- ../configure --enable-debug --disable-libssh
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make clean
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check
ubuntu-22.04-s390x-clang:
ubuntu-20.04-s390x-clang:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- ubuntu_20.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
@@ -80,16 +82,16 @@ ubuntu-22.04-s390x-clang:
script:
- mkdir build
- cd build
- ../configure --cc=clang --cxx=clang++ --enable-sanitizers
- ../configure --disable-libssh --cc=clang --cxx=clang++ --enable-sanitizers
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check
ubuntu-22.04-s390x-tci:
ubuntu-20.04-s390x-tci:
needs: []
stage: build
tags:
- ubuntu_22.04
- ubuntu_20.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
@@ -101,16 +103,16 @@ ubuntu-22.04-s390x-tci:
script:
- mkdir build
- cd build
- ../configure --enable-tcg-interpreter
- ../configure --disable-libssh --enable-tcg-interpreter
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc`
ubuntu-22.04-s390x-notcg:
ubuntu-20.04-s390x-notcg:
extends: .custom_runner_template
needs: []
stage: build
tags:
- ubuntu_22.04
- ubuntu_20.04
- s390x
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
@@ -122,7 +124,7 @@ ubuntu-22.04-s390x-notcg:
script:
- mkdir build
- cd build
- ../configure --disable-tcg
- ../configure --disable-libssh --disable-tcg
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make --output-sync -j`nproc`
- make --output-sync -j`nproc` check


@@ -1,5 +1,5 @@
# All ubuntu-22.04 jobs should run successfully in an environment
# setup by the scripts/ci/setup/ubuntu/build-environment.yml task
# setup by the scripts/ci/setup/qemu/build-environment.yml task
# "Install basic packages to build QEMU on Ubuntu 22.04"
ubuntu-22.04-aarch32-all:


@@ -1,5 +1,5 @@
# All ubuntu-22.04 jobs should run successfully in an environment
# setup by the scripts/ci/setup/ubuntu/build-environment.yml task
# setup by the scripts/ci/setup/qemu/build-environment.yml task
# "Install basic packages to build QEMU on Ubuntu 22.04"
ubuntu-22.04-aarch64-all-linux-static:


@@ -24,10 +24,6 @@
- if: '$QEMU_CI == "1" && $CI_PROJECT_NAMESPACE != "qemu-project" && $CI_COMMIT_MESSAGE =~ /opensbi/i'
when: manual
# Scheduled runs on mainline don't get pipelines except for the special Coverity job
- if: '$CI_PROJECT_NAMESPACE == $QEMU_CI_UPSTREAM && $CI_PIPELINE_SOURCE == "schedule"'
when: never
# Run if any files affecting the build output are touched
- changes:
- .gitlab-ci.d/opensbi.yml


@@ -1,7 +1,9 @@
msys2-64bit:
.shared_msys2_builder:
extends: .base_job_template
tags:
- saas-windows-medium-amd64
- shared-windows
- windows
- windows-1809
cache:
key: "$CI_JOB_NAME"
paths:
@@ -12,19 +14,9 @@ msys2-64bit:
stage: build
timeout: 100m
variables:
# Select the "64 bit, gcc and MSVCRT" MSYS2 environment
MSYSTEM: MINGW64
# This feature doesn't (currently) work with PowerShell: it stops
# the echoing of commands being run and doesn't show any timing
FF_SCRIPT_SECTIONS: 0
# do not remove "--without-default-devices"!
# commit 9f8e6cad65a6 ("gitlab-ci: Speed up the msys2-64bit job by using --without-default-devices")
# changed to compile QEMU with the --without-default-devices switch
# for this job, because otherwise the build could not complete within
# the project timeout.
CONFIGURE_ARGS: --target-list=sparc-softmmu --without-default-devices -Ddebug=false -Doptimization=0
# The Windows git is a bit older so override the default
GIT_FETCH_EXTRA_FLAGS: --no-tags --prune --quiet
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
expire_in: 7 days
@@ -80,35 +72,35 @@ msys2-64bit:
- .\msys64\usr\bin\bash -lc "pacman -Sy --noconfirm --needed
bison diffutils flex
git grep make sed
mingw-w64-x86_64-binutils
mingw-w64-x86_64-capstone
mingw-w64-x86_64-ccache
mingw-w64-x86_64-curl
mingw-w64-x86_64-cyrus-sasl
mingw-w64-x86_64-dtc
mingw-w64-x86_64-gcc
mingw-w64-x86_64-glib2
mingw-w64-x86_64-gnutls
mingw-w64-x86_64-gtk3
mingw-w64-x86_64-libgcrypt
mingw-w64-x86_64-libjpeg-turbo
mingw-w64-x86_64-libnfs
mingw-w64-x86_64-libpng
mingw-w64-x86_64-libssh
mingw-w64-x86_64-libtasn1
mingw-w64-x86_64-libusb
mingw-w64-x86_64-lzo2
mingw-w64-x86_64-nettle
mingw-w64-x86_64-ninja
mingw-w64-x86_64-pixman
mingw-w64-x86_64-pkgconf
mingw-w64-x86_64-python
mingw-w64-x86_64-SDL2
mingw-w64-x86_64-SDL2_image
mingw-w64-x86_64-snappy
mingw-w64-x86_64-spice
mingw-w64-x86_64-usbredir
mingw-w64-x86_64-zstd"
$MINGW_TARGET-binutils
$MINGW_TARGET-capstone
$MINGW_TARGET-ccache
$MINGW_TARGET-curl
$MINGW_TARGET-cyrus-sasl
$MINGW_TARGET-dtc
$MINGW_TARGET-gcc
$MINGW_TARGET-glib2
$MINGW_TARGET-gnutls
$MINGW_TARGET-gtk3
$MINGW_TARGET-libgcrypt
$MINGW_TARGET-libjpeg-turbo
$MINGW_TARGET-libnfs
$MINGW_TARGET-libpng
$MINGW_TARGET-libssh
$MINGW_TARGET-libtasn1
$MINGW_TARGET-libusb
$MINGW_TARGET-lzo2
$MINGW_TARGET-nettle
$MINGW_TARGET-ninja
$MINGW_TARGET-pixman
$MINGW_TARGET-pkgconf
$MINGW_TARGET-python
$MINGW_TARGET-SDL2
$MINGW_TARGET-SDL2_image
$MINGW_TARGET-snappy
$MINGW_TARGET-spice
$MINGW_TARGET-usbredir
$MINGW_TARGET-zstd "
- Write-Output "Running build at $(Get-Date -Format u)"
- $env:CHERE_INVOKING = 'yes' # Preserve the current working directory
- $env:MSYS = 'winsymlinks:native' # Enable native Windows symlink
@@ -125,3 +117,25 @@ msys2-64bit:
- ..\msys64\usr\bin\bash -lc "make check MTESTARGS='$TEST_ARGS' || { cat meson-logs/testlog.txt; exit 1; } ;"
- ..\msys64\usr\bin\bash -lc "ccache --show-stats"
- Write-Output "Finished build at $(Get-Date -Format u)"
msys2-64bit:
extends: .shared_msys2_builder
variables:
MINGW_TARGET: mingw-w64-x86_64
MSYSTEM: MINGW64
# do not remove "--without-default-devices"!
# commit 9f8e6cad65a6 ("gitlab-ci: Speed up the msys2-64bit job by using --without-default-devices")
# changed to compile QEMU with the --without-default-devices switch
# for the msys2 64-bit job, because otherwise the build could not complete
# within the project timeout.
CONFIGURE_ARGS: --target-list=x86_64-softmmu --without-default-devices -Ddebug=false -Doptimization=0
# qTests don't run successfully with "--without-default-devices",
# so let's exclude the qtests from CI for now.
TEST_ARGS: --no-suite qtest
msys2-32bit:
extends: .shared_msys2_builder
variables:
MINGW_TARGET: mingw-w64-i686
MSYSTEM: MINGW32
CONFIGURE_ARGS: --target-list=ppc64-softmmu -Ddebug=false -Doptimization=0
TEST_ARGS: --no-suite qtest


@@ -36,8 +36,6 @@ Marek Dolata <mkdolata@us.ibm.com> mkdolata@us.ibm.com <mkdolata@us.ibm.com>
Michael Ellerman <mpe@ellerman.id.au> michael@ozlabs.org <michael@ozlabs.org>
Nick Hudson <hnick@vmware.com> hnick@vmware.com <hnick@vmware.com>
Timothée Cocault <timothee.cocault@gmail.com> timothee.cocault@gmail.com <timothee.cocault@gmail.com>
Stefan Weil <sw@weilnetz.de> <weil@mail.berlios.de>
Stefan Weil <sw@weilnetz.de> Stefan Weil <stefan@kiwi.(none)>
# There is also a:
# (no author) <(no author)@c046a42c-6fe2-441c-8c8c-71466251a162>
@@ -62,7 +60,6 @@ Ian McKellar <ianloic@google.com> Ian McKellar via Qemu-devel <qemu-devel@nongnu
Julia Suvorova <jusual@mail.ru> Julia Suvorova via Qemu-devel <qemu-devel@nongnu.org>
Justin Terry (VM) <juterry@microsoft.com> Justin Terry (VM) via Qemu-devel <qemu-devel@nongnu.org>
Stefan Weil <sw@weilnetz.de> Stefan Weil via <qemu-devel@nongnu.org>
Stefan Weil <sw@weilnetz.de> Stefan Weil via <qemu-trivial@nongnu.org>
Andrey Drobyshev <andrey.drobyshev@virtuozzo.com> Andrey Drobyshev via <qemu-block@nongnu.org>
BALATON Zoltan <balaton@eik.bme.hu> BALATON Zoltan via <qemu-ppc@nongnu.org>
@@ -84,7 +81,6 @@ Greg Kurz <groug@kaod.org> <gkurz@linux.vnet.ibm.com>
Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com>
Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
Juan Quintela <quintela@trasno.org> <quintela@redhat.com>
Leif Lindholm <quic_llindhol@quicinc.com> <leif.lindholm@linaro.org>
Leif Lindholm <quic_llindhol@quicinc.com> <leif@nuviainc.com>
Luc Michel <luc@lmichel.fr> <luc.michel@git.antfield.fr>
@@ -100,9 +96,7 @@ Philippe Mathieu-Daudé <philmd@linaro.org> <f4bug@amsat.org>
Philippe Mathieu-Daudé <philmd@linaro.org> <philmd@redhat.com>
Philippe Mathieu-Daudé <philmd@linaro.org> <philmd@fungible.com>
Roman Bolshakov <rbolshakov@ddn.com> <r.bolshakov@yadro.com>
Sriram Yagnaraman <sriram.yagnaraman@ericsson.com> <sriram.yagnaraman@est.tech>
Stefan Brankovic <stefan.brankovic@syrmia.com> <stefan.brankovic@rt-rk.com.com>
Stefan Weil <sw@weilnetz.de> Stefan Weil <stefan@weilnetz.de>
Taylor Simpson <ltaylorsimpson@gmail.com> <tsimpson@quicinc.com>
Yongbok Kim <yongbok.kim@mips.com> <yongbok.kim@imgtec.com>


@@ -5,21 +5,16 @@
# Required
version: 2
# Set the version of Python and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "3.11"
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/conf.py
# We recommend specifying your dependencies to enable reproducible builds:
# https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
python:
install:
- requirements: docs/requirements.txt
# We want all the document formats
formats: all
# For consistency, we require that QEMU's Sphinx extensions
# run with at least the same minimum version of Python that
# we require for other Python in our codebase (our conf.py
# enforces this, and some code needs it.)
python:
version: 3.6


@@ -1,5 +1,5 @@
os: linux
dist: jammy
dist: focal
language: c
compiler:
- gcc
@@ -7,11 +7,13 @@ cache:
# There is one cache per branch and compiler version.
# characteristics of each job are used to identify the cache:
# - OS name (currently only linux)
# - OS distribution (e.g. "jammy" for Linux)
# - OS distribution (for Linux, bionic or focal)
# - Names and values of visible environment variables set in .travis.yml or Settings panel
timeout: 1200
ccache: true
pip: true
directories:
- $HOME/avocado/data/cache
# The channel name "irc.oftc.net#qemu" is encrypted against qemu/qemu
@@ -33,7 +35,7 @@ env:
- TEST_BUILD_CMD=""
- TEST_CMD="make check V=1"
# This is broadly a list of "mainline" system targets which have support across the major distros
- MAIN_SYSTEM_TARGETS="aarch64-softmmu,mips64-softmmu,ppc64-softmmu,riscv64-softmmu,s390x-softmmu,x86_64-softmmu"
- MAIN_SOFTMMU_TARGETS="aarch64-softmmu,mips64-softmmu,ppc64-softmmu,riscv64-softmmu,s390x-softmmu,x86_64-softmmu"
- CCACHE_SLOPPINESS="include_file_ctime,include_file_mtime"
- CCACHE_MAXSIZE=1G
- G_MESSAGES_DEBUG=error
@@ -81,6 +83,7 @@ jobs:
- name: "[aarch64] GCC check-tcg"
arch: arm64
dist: focal
addons:
apt_packages:
- libaio-dev
@@ -106,17 +109,17 @@ jobs:
- libvdeplug-dev
- libvte-2.91-dev
- ninja-build
- python3-tomli
# Tests dependencies
- genisoimage
env:
- TEST_CMD="make check check-tcg V=1"
- CONFIG="--disable-containers --enable-fdt=system
--target-list=${MAIN_SYSTEM_TARGETS} --cxx=/bin/false"
--target-list=${MAIN_SOFTMMU_TARGETS} --cxx=/bin/false"
- UNRELIABLE=true
- name: "[ppc64] Clang check-tcg"
- name: "[ppc64] GCC check-tcg"
arch: ppc64le
compiler: clang
dist: focal
addons:
apt_packages:
- libaio-dev
@@ -142,7 +145,6 @@ jobs:
- libvdeplug-dev
- libvte-2.91-dev
- ninja-build
- python3-tomli
# Tests dependencies
- genisoimage
env:
@@ -152,6 +154,7 @@ jobs:
- name: "[s390x] GCC check-tcg"
arch: s390x
dist: focal
addons:
apt_packages:
- libaio-dev
@@ -177,13 +180,13 @@ jobs:
- libvdeplug-dev
- libvte-2.91-dev
- ninja-build
- python3-tomli
# Tests dependencies
- genisoimage
env:
- TEST_CMD="make check check-tcg V=1"
- CONFIG="--disable-containers
--target-list=hppa-softmmu,mips64-softmmu,ppc64-softmmu,riscv64-softmmu,s390x-softmmu,x86_64-softmmu"
- CONFIG="--disable-containers --enable-fdt=system
--target-list=${MAIN_SOFTMMU_TARGETS},s390x-linux-user"
- UNRELIABLE=true
script:
- BUILD_RC=0 && make -j${JOBS} || BUILD_RC=$?
- |
@@ -194,9 +197,9 @@ jobs:
$(exit $BUILD_RC);
fi
- name: "[s390x] Clang (other-system)"
- name: "[s390x] GCC (other-system)"
arch: s390x
compiler: clang
dist: focal
addons:
apt_packages:
- libaio-dev
@@ -217,16 +220,17 @@ jobs:
- libsnappy-dev
- libzstd-dev
- nettle-dev
- xfslibs-dev
- ninja-build
- python3-tomli
# Tests dependencies
- genisoimage
env:
- CONFIG="--disable-containers --audio-drv-list=sdl --disable-user
--target-list=arm-softmmu,avr-softmmu,microblaze-softmmu,sh4eb-softmmu,sparc64-softmmu,xtensaeb-softmmu"
- CONFIG="--disable-containers --enable-fdt=system --audio-drv-list=sdl
--disable-user --target-list-exclude=${MAIN_SOFTMMU_TARGETS}"
- name: "[s390x] GCC (user)"
arch: s390x
dist: focal
addons:
apt_packages:
- libgcrypt20-dev
@@ -235,14 +239,13 @@ jobs:
- ninja-build
- flex
- bison
- python3-tomli
env:
- TEST_CMD="make check check-tcg V=1"
- CONFIG="--disable-containers --disable-system"
- name: "[s390x] Clang (disable-tcg)"
arch: s390x
compiler: clang
dist: focal
compiler: clang-10
addons:
apt_packages:
- libaio-dev
@@ -268,8 +271,9 @@ jobs:
- libvdeplug-dev
- libvte-2.91-dev
- ninja-build
- python3-tomli
- clang-10
env:
- TEST_CMD="make check-unit"
- CONFIG="--disable-containers --disable-tcg --enable-kvm --disable-tools
--enable-fdt=system --host-cc=clang --cxx=clang++"
- UNRELIABLE=true


@@ -23,9 +23,6 @@ config IVSHMEM
config TPM
bool
config FDT
bool
config VHOST_USER
bool
@@ -38,6 +35,9 @@ config VHOST_KERNEL
config VIRTFS
bool
config PVRDMA
bool
config MULTIPROCESS_ALLOWED
bool
imply MULTIPROCESS


@@ -70,6 +70,7 @@ R: Daniel P. Berrangé <berrange@redhat.com>
R: Thomas Huth <thuth@redhat.com>
R: Markus Armbruster <armbru@redhat.com>
R: Philippe Mathieu-Daudé <philmd@linaro.org>
R: Juan Quintela <quintela@redhat.com>
W: https://www.qemu.org/docs/master/devel/index.html
S: Odd Fixes
F: docs/devel/style.rst
@@ -140,7 +141,6 @@ F: docs/system/target-i386*
F: target/i386/*.[ch]
F: target/i386/Kconfig
F: target/i386/meson.build
F: tools/i386/
Guest CPU cores (TCG)
---------------------
@@ -168,14 +168,12 @@ F: include/exec/target_long.h
F: include/exec/helper*.h
F: include/exec/helper*.h.inc
F: include/exec/helper-info.c.inc
F: include/exec/page-protection.h
F: include/sysemu/cpus.h
F: include/sysemu/tcg.h
F: include/hw/core/tcg-cpu-ops.h
F: host/include/*/host/cpuinfo.h
F: util/cpuinfo-*.c
F: include/tcg/
F: tests/decode/
FPU emulation
M: Aurelien Jarno <aurelien@aurel32.net>
@@ -245,7 +243,6 @@ F: disas/hexagon.c
F: configs/targets/hexagon-linux-user/default.mak
F: docker/dockerfiles/debian-hexagon-cross.docker
F: gdb-xml/hexagon*.xml
T: git https://github.com/quic/qemu.git hex-next
Hexagon idef-parser
M: Alessandro Di Federico <ale@rev.ng>
@@ -287,13 +284,26 @@ MIPS TCG CPUs
M: Philippe Mathieu-Daudé <philmd@linaro.org>
R: Aurelien Jarno <aurelien@aurel32.net>
R: Jiaxun Yang <jiaxun.yang@flygoat.com>
R: Aleksandar Rikalo <arikalo@gmail.com>
R: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
S: Odd Fixes
F: target/mips/
F: disas/*mips.c
F: docs/system/cpu-models-mips.rst.inc
F: tests/tcg/mips/
NiosII TCG CPUs
R: Chris Wulff <crwulff@gmail.com>
R: Marek Vasut <marex@denx.de>
S: Orphan
F: target/nios2/
F: hw/nios2/
F: hw/intc/nios2_vic.c
F: disas/nios2.c
F: include/hw/intc/nios2_vic.h
F: configs/devices/nios2-softmmu/default.mak
F: tests/docker/dockerfiles/debian-nios2-cross.d/build-toolchain.sh
F: tests/tcg/nios2/
OpenRISC TCG CPUs
M: Stafford Horne <shorne@gmail.com>
S: Odd Fixes
@@ -306,6 +316,7 @@ F: tests/tcg/openrisc/
PowerPC TCG CPUs
M: Nicholas Piggin <npiggin@gmail.com>
M: Daniel Henrique Barboza <danielhb413@gmail.com>
R: Cédric Le Goater <clg@kaod.org>
L: qemu-ppc@nongnu.org
S: Odd Fixes
F: target/ppc/
@@ -322,7 +333,7 @@ F: tests/tcg/ppc*/*
RISC-V TCG CPUs
M: Palmer Dabbelt <palmer@dabbelt.com>
M: Alistair Francis <alistair.francis@wdc.com>
M: Bin Meng <bmeng.cn@gmail.com>
M: Bin Meng <bin.meng@windriver.com>
R: Weiwei Li <liwei1518@gmail.com>
R: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
R: Liu Zhiwei <zhiwei_liu@linux.alibaba.com>
@@ -345,7 +356,6 @@ L: qemu-riscv@nongnu.org
S: Supported
F: target/riscv/insn_trans/trans_xthead.c.inc
F: target/riscv/xthead*.decode
F: target/riscv/th_*
F: disas/riscv-xthead*
RISC-V XVentanaCondOps extension
@@ -458,6 +468,7 @@ F: target/mips/sysemu/
PPC KVM CPUs
M: Nicholas Piggin <npiggin@gmail.com>
R: Daniel Henrique Barboza <danielhb413@gmail.com>
R: Cédric Le Goater <clg@kaod.org>
S: Odd Fixes
F: target/ppc/kvm.c
@@ -536,9 +547,8 @@ Guest CPU Cores (Xen)
---------------------
X86 Xen CPUs
M: Stefano Stabellini <sstabellini@kernel.org>
M: Anthony PERARD <anthony@xenproject.org>
M: Anthony Perard <anthony.perard@citrix.com>
M: Paul Durrant <paul@xen.org>
M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
L: xen-devel@lists.xenproject.org
S: Supported
F: */xen*
@@ -632,7 +642,6 @@ R: Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
L: qemu-arm@nongnu.org
S: Odd Fixes
F: hw/*/allwinner*
F: hw/ide/ahci-allwinner.c
F: include/hw/*/allwinner*
F: hw/arm/cubieboard.c
F: docs/system/arm/cubieboard.rst
@@ -810,13 +819,12 @@ F: include/hw/misc/imx7_*.h
F: hw/pci-host/designware.c
F: include/hw/pci-host/designware.h
MPS2 / MPS3
MPS2
M: Peter Maydell <peter.maydell@linaro.org>
L: qemu-arm@nongnu.org
S: Maintained
F: hw/arm/mps2.c
F: hw/arm/mps2-tz.c
F: hw/arm/mps3r.c
F: hw/misc/mps2-*.c
F: include/hw/misc/mps2-*.h
F: hw/arm/armsse.c
@@ -1036,7 +1044,6 @@ F: hw/adc/zynq-xadc.c
F: include/hw/misc/zynq_slcr.h
F: include/hw/adc/zynq-xadc.h
X: hw/ssi/xilinx_*
F: docs/system/arm/xlnx-zynq.rst
Xilinx ZynqMP and Versal
M: Alistair Francis <alistair@alistair23.me>
@@ -1115,26 +1122,6 @@ L: qemu-arm@nongnu.org
S: Maintained
F: hw/arm/olimex-stm32-h405.c
STM32L4x5 SoC Family
M: Arnaud Minier <arnaud.minier@telecom-paris.fr>
M: Inès Varhol <ines.varhol@telecom-paris.fr>
L: qemu-arm@nongnu.org
S: Maintained
F: hw/arm/stm32l4x5_soc.c
F: hw/char/stm32l4x5_usart.c
F: hw/misc/stm32l4x5_exti.c
F: hw/misc/stm32l4x5_syscfg.c
F: hw/misc/stm32l4x5_rcc.c
F: hw/gpio/stm32l4x5_gpio.c
F: include/hw/*/stm32l4x5_*.h
B-L475E-IOT01A IoT Node
M: Arnaud Minier <arnaud.minier@telecom-paris.fr>
M: Inès Varhol <ines.varhol@telecom-paris.fr>
L: qemu-arm@nongnu.org
S: Maintained
F: hw/arm/b-l475e-iot01a.c
SmartFusion2
M: Subbaraya Sundeep <sundeep.lkml@gmail.com>
M: Peter Maydell <peter.maydell@linaro.org>
@@ -1162,15 +1149,14 @@ F: docs/system/arm/emcraft-sf2.rst
ASPEED BMCs
M: Cédric Le Goater <clg@kaod.org>
M: Peter Maydell <peter.maydell@linaro.org>
R: Steven Lee <steven_lee@aspeedtech.com>
R: Troy Lee <leetroy@gmail.com>
R: Jamin Lin <jamin_lin@aspeedtech.com>
R: Andrew Jeffery <andrew@codeconstruct.com.au>
R: Joel Stanley <joel@jms.id.au>
L: qemu-arm@nongnu.org
S: Maintained
F: hw/*/*aspeed*
F: hw/misc/pca9552.c
F: include/hw/*/*aspeed*
F: include/hw/misc/pca9552*.h
F: hw/net/ftgmac100.c
F: include/hw/net/ftgmac100.h
F: docs/system/arm/aspeed.rst
@@ -1243,7 +1229,6 @@ LoongArch Machines
------------------
Virt
M: Song Gao <gaosong@loongson.cn>
R: Jiaxun Yang <jiaxun.yang@flygoat.com>
S: Maintained
F: docs/system/loongarch/virt.rst
F: configs/targets/loongarch64-softmmu.mak
@@ -1251,9 +1236,7 @@ F: configs/devices/loongarch64-softmmu/default.mak
F: hw/loongarch/
F: include/hw/loongarch/virt.h
F: include/hw/intc/loongarch_*.h
F: include/hw/intc/loongson_ipi_common.h
F: hw/intc/loongarch_*.c
F: hw/intc/loongson_ipi_common.c
F: include/hw/pci-host/ls7a.h
F: hw/rtc/ls7a_rtc.c
F: gdb-xml/loongarch*.xml
@@ -1346,7 +1329,7 @@ F: include/hw/mips/
Jazz
M: Hervé Poussineau <hpoussin@reactos.org>
R: Aleksandar Rikalo <arikalo@gmail.com>
R: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
S: Maintained
F: hw/mips/jazz.c
F: hw/display/g364fb.c
@@ -1359,7 +1342,6 @@ M: Philippe Mathieu-Daudé <philmd@linaro.org>
R: Aurelien Jarno <aurelien@aurel32.net>
S: Odd Fixes
F: hw/isa/piix.c
F: hw/isa/fdc37m81x-superio.c
F: hw/acpi/piix4.c
F: hw/mips/malta.c
F: hw/pci-host/gt64120.c
@@ -1368,7 +1350,7 @@ F: tests/avocado/linux_ssh_mips_malta.py
F: tests/avocado/machine_mips_malta.py
Mipssim
R: Aleksandar Rikalo <arikalo@gmail.com>
R: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
S: Orphan
F: hw/mips/mipssim.c
F: hw/net/mipsnet.c
@@ -1387,20 +1369,16 @@ Loongson-3 virtual platforms
M: Huacai Chen <chenhuacai@kernel.org>
R: Jiaxun Yang <jiaxun.yang@flygoat.com>
S: Maintained
F: hw/intc/loongson_ipi_common.c
F: hw/intc/loongson_ipi.c
F: hw/intc/loongson_liointc.c
F: hw/mips/loongson3_bootp.c
F: hw/mips/loongson3_bootp.h
F: hw/mips/loongson3_virt.c
F: include/hw/intc/loongson_ipi_common.h
F: include/hw/intc/loongson_ipi.h
F: include/hw/intc/loongson_liointc.h
F: tests/avocado/machine_mips_loongson3v.py
Boston
M: Paul Burton <paulburton@kernel.org>
R: Aleksandar Rikalo <arikalo@gmail.com>
R: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
S: Odd Fixes
F: hw/core/loader-fit.c
F: hw/mips/boston.c
@@ -1428,7 +1406,6 @@ Bamboo
L: qemu-ppc@nongnu.org
S: Orphan
F: hw/ppc/ppc440_bamboo.c
F: hw/pci-host/ppc4xx_pci.c
F: tests/avocado/ppc_bamboo.py
e500
@@ -1510,6 +1487,7 @@ F: tests/avocado/ppc_prep_40p.py
sPAPR (pseries)
M: Nicholas Piggin <npiggin@gmail.com>
R: Daniel Henrique Barboza <danielhb413@gmail.com>
R: Cédric Le Goater <clg@kaod.org>
R: David Gibson <david@gibson.dropbear.id.au>
R: Harsh Prateek Bora <harshpb@linux.ibm.com>
L: qemu-ppc@nongnu.org
@@ -1530,7 +1508,6 @@ F: tests/qtest/libqos/*spapr*
F: tests/qtest/rtas*
F: tests/qtest/libqos/rtas*
F: tests/avocado/ppc_pseries.py
F: tests/avocado/ppc_hv_tests.py
PowerNV (Non-Virtualized)
M: Cédric Le Goater <clg@kaod.org>
@@ -1548,14 +1525,6 @@ F: include/hw/pci-host/pnv*
F: pc-bios/skiboot.lid
F: tests/qtest/pnv*
pca955x
M: Glenn Miles <milesg@linux.ibm.com>
L: qemu-ppc@nongnu.org
L: qemu-arm@nongnu.org
S: Odd Fixes
F: hw/gpio/pca955*.c
F: include/hw/gpio/pca955*.h
virtex_ml507
M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
L: qemu-ppc@nongnu.org
@@ -1569,14 +1538,13 @@ L: qemu-ppc@nongnu.org
S: Maintained
F: hw/ppc/sam460ex.c
F: hw/ppc/ppc440_uc.c
F: hw/pci-host/ppc440_pcix.c
F: hw/ppc/ppc440_pcix.c
F: hw/display/sm501*
F: hw/ide/sii3112.c
F: hw/rtc/m41t80.c
F: pc-bios/canyonlands.dt[sb]
F: pc-bios/u-boot-sam460ex-20100605.bin
F: roms/u-boot-sam460ex
F: docs/system/ppc/amigang.rst
pegasos2
M: BALATON Zoltan <balaton@eik.bme.hu>
@@ -1618,7 +1586,7 @@ F: include/hw/riscv/opentitan.h
F: include/hw/*/ibex_*.h
Microchip PolarFire SoC Icicle Kit
M: Bin Meng <bmeng.cn@gmail.com>
M: Bin Meng <bin.meng@windriver.com>
L: qemu-riscv@nongnu.org
S: Supported
F: docs/system/riscv/microchip-icicle-kit.rst
@@ -1645,7 +1613,7 @@ F: include/hw/char/shakti_uart.h
SiFive Machines
M: Alistair Francis <Alistair.Francis@wdc.com>
M: Bin Meng <bmeng.cn@gmail.com>
M: Bin Meng <bin.meng@windriver.com>
M: Palmer Dabbelt <palmer@dabbelt.com>
L: qemu-riscv@nongnu.org
S: Supported
@@ -1725,12 +1693,13 @@ F: hw/rtc/sun4v-rtc.c
F: include/hw/rtc/sun4v-rtc.h
Leon3
M: Clément Chigot <chigot@adacore.com>
M: Fabien Chouteau <chouteau@adacore.com>
M: Frederic Konrad <konrad.frederic@yahoo.fr>
S: Maintained
F: hw/sparc/leon3.c
F: hw/*/grlib*
F: include/hw/*/grlib*
F: tests/avocado/machine_sparc_leon3.py
S390 Machines
-------------
@@ -1882,10 +1851,8 @@ M: Eduardo Habkost <eduardo@habkost.net>
M: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
R: Philippe Mathieu-Daudé <philmd@linaro.org>
R: Yanan Wang <wangyanan55@huawei.com>
R: Zhao Liu <zhao1.liu@intel.com>
S: Supported
F: hw/core/cpu-common.c
F: hw/core/cpu-sysemu.c
F: hw/core/cpu.c
F: hw/core/machine-qmp-cmds.c
F: hw/core/machine.c
F: hw/core/machine-smp.c
@@ -1951,6 +1918,7 @@ IDE
M: John Snow <jsnow@redhat.com>
L: qemu-block@nongnu.org
S: Odd Fixes
F: include/hw/ide.h
F: include/hw/ide/
F: hw/ide/
F: hw/block/block.c
@@ -2084,7 +2052,6 @@ F: hw/ppc/ppc4xx*.c
F: hw/ppc/ppc440_uc.c
F: hw/ppc/ppc440.h
F: hw/i2c/ppc4xx_i2c.c
F: include/hw/pci-host/ppc4xx.h
F: include/hw/ppc/ppc4xx.h
F: include/hw/i2c/ppc4xx_i2c.h
F: hw/intc/ppc-uic.c
@@ -2141,7 +2108,7 @@ F: hw/ssi/xilinx_*
SD (Secure Card)
M: Philippe Mathieu-Daudé <philmd@linaro.org>
M: Bin Meng <bmeng.cn@gmail.com>
M: Bin Meng <bin.meng@windriver.com>
L: qemu-block@nongnu.org
S: Odd Fixes
F: include/hw/sd/sd*
@@ -2152,7 +2119,8 @@ F: tests/qtest/fuzz-sdcard-test.c
F: tests/qtest/sdhci-test.c
USB
S: Orphan
M: Gerd Hoffmann <kraxel@redhat.com>
S: Odd Fixes
F: hw/usb/*
F: stubs/usb-dev-stub.c
F: tests/qtest/usb-*-test.c
@@ -2161,6 +2129,7 @@ F: include/hw/usb.h
F: include/hw/usb/
USB (serial adapter)
R: Gerd Hoffmann <kraxel@redhat.com>
M: Samuel Thibault <samuel.thibault@ens-lyon.org>
S: Maintained
F: hw/usb/dev-serial.c
@@ -2172,8 +2141,7 @@ S: Supported
F: hw/vfio/*
F: include/hw/vfio/
F: docs/igd-assign.txt
F: docs/devel/migration/vfio.rst
F: qapi/vfio.json
F: docs/devel/vfio-migration.rst
vfio-ccw
M: Eric Farman <farman@linux.ibm.com>
@@ -2198,22 +2166,8 @@ F: hw/vfio/ap.c
F: docs/system/s390x/vfio-ap.rst
L: qemu-s390x@nongnu.org
iommufd
M: Yi Liu <yi.l.liu@intel.com>
M: Eric Auger <eric.auger@redhat.com>
M: Zhenzhong Duan <zhenzhong.duan@intel.com>
S: Supported
F: backends/iommufd.c
F: include/sysemu/iommufd.h
F: backends/host_iommu_device.c
F: include/sysemu/host_iommu_device.h
F: include/qemu/chardev_open.h
F: util/chardev_open.c
F: docs/devel/vfio-iommufd.rst
vhost
M: Michael S. Tsirkin <mst@redhat.com>
R: Stefano Garzarella <sgarzare@redhat.com>
S: Supported
F: hw/*/*vhost*
F: docs/interop/vhost-user.json
@@ -2237,7 +2191,6 @@ F: qapi/virtio.json
F: net/vhost-user.c
F: include/hw/virtio/
F: docs/devel/virtio*
F: docs/devel/migration/virtio.rst
virtio-balloon
M: Michael S. Tsirkin <mst@redhat.com>
@@ -2309,9 +2262,8 @@ L: virtio-fs@lists.linux.dev
virtio-input
M: Gerd Hoffmann <kraxel@redhat.com>
S: Odd Fixes
F: docs/system/devices/vhost-user-input.rst
F: hw/input/vhost-user-input.c
F: hw/input/virtio-input*.c
F: hw/virtio/vhost-user-input.c
F: include/hw/virtio/virtio-input.h
F: contrib/vhost-user-input/*
@@ -2340,12 +2292,6 @@ F: include/sysemu/rng*.h
F: backends/rng*.c
F: tests/qtest/virtio-rng-test.c
vhost-user-stubs
M: Alex Bennée <alex.bennee@linaro.org>
S: Maintained
F: hw/virtio/vhost-user-base.c
F: hw/virtio/vhost-user-device*
vhost-user-rng
M: Mathieu Poirier <mathieu.poirier@linaro.org>
S: Supported
@@ -2363,13 +2309,6 @@ F: hw/virtio/vhost-user-gpio*
F: include/hw/virtio/vhost-user-gpio.h
F: tests/qtest/libqos/virtio-gpio.*
vhost-user-snd
M: Alex Bennée <alex.bennee@linaro.org>
R: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
S: Maintained
F: hw/virtio/vhost-user-snd*
F: include/hw/virtio/vhost-user-snd.h
vhost-user-scmi
R: mzamazal@redhat.com
S: Supported
@@ -2412,7 +2351,6 @@ F: docs/system/devices/virtio-snd.rst
nvme
M: Keith Busch <kbusch@kernel.org>
M: Klaus Jensen <its@irrelevant.dk>
R: Jesper Devantier <foss@defmacro.it>
L: qemu-block@nongnu.org
S: Supported
F: hw/nvme/*
@@ -2449,13 +2387,8 @@ F: hw/net/net_tx_pkt*
Vmware
M: Dmitry Fleytman <dmitry.fleytman@gmail.com>
S: Maintained
F: docs/specs/vmw_pvscsi-spec.txt
F: hw/display/vmware_vga.c
F: hw/net/vmxnet*
F: hw/scsi/vmw_pvscsi*
F: pc-bios/efi-vmxnet3.rom
F: pc-bios/vgabios-vmware.bin
F: roms/config.vga-vmware
F: tests/qtest/vmxnet3-test.c
F: docs/specs/vmw_pvscsi-spec.rst
@@ -2465,7 +2398,7 @@ S: Maintained
F: hw/net/rocker/
F: qapi/rocker.json
F: tests/rocker/
F: docs/specs/rocker.rst
F: docs/specs/rocker.txt
e1000x
M: Dmitry Fleytman <dmitry.fleytman@gmail.com>
@@ -2484,7 +2417,7 @@ F: tests/qtest/libqos/e1000e.*
igb
M: Akihiko Odaki <akihiko.odaki@daynix.com>
R: Sriram Yagnaraman <sriram.yagnaraman@ericsson.com>
R: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
S: Maintained
F: docs/system/devices/igb.rst
F: hw/net/igb*
@@ -2504,17 +2437,11 @@ F: hw/net/tulip.c
F: hw/net/tulip.h
pca954x
M: Patrick Leis <venture@google.com>
M: Patrick Venture <venture@google.com>
S: Maintained
F: hw/i2c/i2c_mux_pca954x.c
F: include/hw/i2c/i2c_mux_pca954x.h
pcf8574
M: Dmitrii Sharikhin <d.sharikhin@yadro.com>
S: Maintained
F: hw/gpio/pcf8574.c
F: include/hw/gpio/pcf8574.h
Generic Loader
M: Alistair Francis <alistair@alistair23.me>
S: Maintained
@@ -2589,14 +2516,15 @@ F: hw/display/ramfb*.c
F: include/hw/display/ramfb.h
virtio-gpu
S: Orphan
M: Gerd Hoffmann <kraxel@redhat.com>
S: Odd Fixes
F: hw/display/virtio-gpu*
F: hw/display/virtio-vga.*
F: include/hw/virtio/virtio-gpu.h
F: docs/system/devices/virtio-gpu.rst
vhost-user-blk
M: Raphael Norwitz <raphael@enfabrica.net>
M: Raphael Norwitz <raphael.norwitz@nutanix.com>
S: Maintained
F: contrib/vhost-user-blk/
F: contrib/vhost-user-scsi/
@@ -2611,6 +2539,7 @@ F: include/hw/virtio/virtio-blk-common.h
vhost-user-gpu
M: Marc-André Lureau <marcandre.lureau@redhat.com>
R: Gerd Hoffmann <kraxel@redhat.com>
S: Maintained
F: docs/interop/vhost-user-gpu.rst
F: contrib/vhost-user-gpu
@@ -2881,6 +2810,7 @@ F: util/aio-*.h
F: util/defer-call.c
F: util/fdmon-*.c
F: block/io.c
F: migration/block*
F: include/block/aio.h
F: include/block/aio-wait.h
F: include/qemu/defer-call.h
@@ -2932,7 +2862,6 @@ S: Supported
F: hw/cxl/
F: hw/mem/cxl_type3.c
F: include/hw/cxl/
F: qapi/cxl.json
Dirty Bitmaps
M: Eric Blake <eblake@redhat.com>
@@ -3012,7 +2941,7 @@ F: include/qapi/error.h
F: include/qemu/error-report.h
F: qapi/error.json
F: util/error.c
F: util/error-report.c
F: util/qemu-error.c
F: scripts/coccinelle/err-bad-newline.cocci
F: scripts/coccinelle/error-use-after-free.cocci
F: scripts/coccinelle/error_propagate_null.cocci
@@ -3068,7 +2997,8 @@ F: stubs/memory_device.c
F: docs/nvdimm.txt
SPICE
S: Orphan
M: Gerd Hoffmann <kraxel@redhat.com>
S: Odd Fixes
F: include/ui/qemu-spice.h
F: include/ui/spice-display.h
F: ui/spice-*.c
@@ -3078,6 +3008,7 @@ F: qapi/ui.json
F: docs/spice-port-fqdn.txt
Graphics
M: Gerd Hoffmann <kraxel@redhat.com>
M: Marc-André Lureau <marcandre.lureau@redhat.com>
S: Odd Fixes
F: ui/
@@ -3224,7 +3155,6 @@ M: Eric Blake <eblake@redhat.com>
M: Markus Armbruster <armbru@redhat.com>
S: Supported
F: qapi/*.json
F: qga/qapi-schema.json
T: git https://repo.or.cz/qemu/armbru.git qapi-next
QObject
@@ -3323,7 +3253,6 @@ F: tests/qtest/
F: docs/devel/qgraph.rst
F: docs/devel/qtest.rst
X: tests/qtest/bios-tables-test*
X: tests/qtest/migration-*
Device Fuzzing
M: Alexander Bulekov <alxndr@bu.edu>
@@ -3359,7 +3288,6 @@ Stats
S: Orphan
F: include/sysemu/stats.h
F: stats/
F: qapi/stats.json
Streams
M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
@@ -3405,19 +3333,15 @@ F: tests/qtest/*tpm*
F: docs/specs/tpm.rst
T: git https://github.com/stefanberger/qemu-tpm.git tpm-next
SPDM
M: Alistair Francis <alistair.francis@wdc.com>
S: Maintained
F: backends/spdm-socket.c
F: include/sysemu/spdm-socket.h
Checkpatch
S: Odd Fixes
F: scripts/checkpatch.pl
Migration
M: Juan Quintela <quintela@redhat.com>
M: Peter Xu <peterx@redhat.com>
M: Fabiano Rosas <farosas@suse.de>
R: Leonardo Bras <leobras@redhat.com>
S: Maintained
F: hw/core/vmstate-if.c
F: include/hw/vmstate-if.h
@@ -3426,16 +3350,18 @@ F: include/qemu/userfaultfd.h
F: migration/
F: scripts/vmstate-static-checker.py
F: tests/vmstate-static-checker-data/
F: tests/qtest/migration-*
F: docs/devel/migration/
F: tests/qtest/migration-test.c
F: docs/devel/migration.rst
F: qapi/migration.json
F: tests/migration/
F: util/userfaultfd.c
X: migration/rdma*
RDMA Migration
M: Juan Quintela <quintela@redhat.com>
R: Li Zhijian <lizhijian@fujitsu.com>
R: Peter Xu <peterx@redhat.com>
R: Leonardo Bras <leobras@redhat.com>
S: Odd Fixes
F: migration/rdma*
@@ -3447,13 +3373,6 @@ F: include/sysemu/dirtylimit.h
F: migration/dirtyrate.c
F: migration/dirtyrate.h
F: include/sysemu/dirtyrate.h
F: docs/devel/migration/dirty-limit.rst
Detached LUKS header
M: Hyman Huang <yong.huang@smartx.com>
S: Maintained
F: tests/qemu-iotests/tests/luks-detached-header
F: docs/devel/luks-detached-header.rst
D-Bus
M: Marc-André Lureau <marcandre.lureau@redhat.com>
@@ -3487,7 +3406,7 @@ F: qapi/crypto.json
F: tests/unit/test-crypto-*
F: tests/bench/benchmark-crypto-*
F: tests/unit/crypto-tls-*
F: tests/unit/pkix_asn1_tab.c.inc
F: tests/unit/pkix_asn1_tab.c
F: qemu.sasl
Coroutines
@@ -3604,7 +3523,6 @@ F: util/iova-tree.c
elf2dmp
M: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
R: Akihiko Odaki <akihiko.odaki@daynix.com>
S: Maintained
F: contrib/elf2dmp/
@@ -3639,15 +3557,6 @@ F: tests/qtest/adm1272-test.c
F: tests/qtest/max34451-test.c
F: tests/qtest/isl_pmbus_vr-test.c
FSI
M: Ninad Palsule <ninad@linux.ibm.com>
R: Cédric Le Goater <clg@kaod.org>
S: Maintained
F: hw/fsi/*
F: include/hw/fsi/*
F: docs/specs/fsi.rst
F: tests/qtest/aspeed_fsi-test.c
Firmware schema specifications
M: Philippe Mathieu-Daudé <philmd@linaro.org>
R: Daniel P. Berrange <berrange@redhat.com>
@@ -3670,8 +3579,8 @@ F: tests/uefi-test-tools/
VT-d Emulation
M: Michael S. Tsirkin <mst@redhat.com>
M: Peter Xu <peterx@redhat.com>
R: Jason Wang <jasowang@redhat.com>
R: Yi Liu <yi.l.liu@intel.com>
S: Supported
F: hw/i386/intel_iommu.c
F: hw/i386/intel_iommu_internal.h
@@ -3699,16 +3608,6 @@ F: hw/core/clock-vmstate.c
F: hw/core/qdev-clock.c
F: docs/devel/clocks.rst
Reset framework
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: include/hw/resettable.h
F: include/hw/core/resetcontainer.h
F: include/sysemu/reset.h
F: hw/core/reset.c
F: hw/core/resettable.c
F: hw/core/resetcontainer.c
Usermode Emulation
------------------
Overall usermode emulation
@@ -3749,11 +3648,10 @@ TCG Plugins
M: Alex Bennée <alex.bennee@linaro.org>
R: Alexandre Iooss <erdnaxe@crans.org>
R: Mahmoud Mandour <ma.mandourr@gmail.com>
R: Pierrick Bouvier <pierrick.bouvier@linaro.org>
S: Maintained
F: docs/devel/tcg-plugins.rst
F: plugins/
F: tests/tcg/plugins/
F: tests/plugin/
F: tests/avocado/tcg_plugins.py
F: contrib/plugins/
@@ -3784,7 +3682,7 @@ M: Philippe Mathieu-Daudé <philmd@linaro.org>
R: Aurelien Jarno <aurelien@aurel32.net>
R: Huacai Chen <chenhuacai@kernel.org>
R: Jiaxun Yang <jiaxun.yang@flygoat.com>
R: Aleksandar Rikalo <arikalo@gmail.com>
R: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
S: Odd Fixes
F: tcg/mips/
@@ -3829,7 +3727,7 @@ F: block/vmdk.c
RBD
M: Ilya Dryomov <idryomov@gmail.com>
R: Peter Lieven <pl@dlhnet.de>
R: Peter Lieven <pl@kamp.de>
L: qemu-block@nongnu.org
S: Supported
F: block/rbd.c
@@ -3855,7 +3753,7 @@ F: block/blkio.c
iSCSI
M: Ronnie Sahlberg <ronniesahlberg@gmail.com>
M: Paolo Bonzini <pbonzini@redhat.com>
M: Peter Lieven <pl@dlhnet.de>
M: Peter Lieven <pl@kamp.de>
L: qemu-block@nongnu.org
S: Odd Fixes
F: block/iscsi.c
@@ -3871,14 +3769,14 @@ F: nbd/
F: include/block/nbd*
F: qemu-nbd.*
F: blockdev-nbd.c
F: docs/interop/nbd.rst
F: docs/interop/nbd.txt
F: docs/tools/qemu-nbd.rst
F: tests/qemu-iotests/tests/*nbd*
T: git https://repo.or.cz/qemu/ericb.git nbd
T: git https://gitlab.com/vsementsov/qemu.git block
NFS
M: Peter Lieven <pl@dlhnet.de>
M: Peter Lieven <pl@kamp.de>
L: qemu-block@nongnu.org
S: Maintained
F: block/nfs.c
@@ -3964,8 +3862,7 @@ L: qemu-block@nongnu.org
S: Supported
F: block/parallels.c
F: block/parallels-ext.c
F: docs/interop/parallels.rst
F: docs/interop/prl-xml.rst
F: docs/interop/parallels.txt
T: git https://src.openvz.org/scm/~den/qemu.git parallels
qed
@@ -4069,6 +3966,16 @@ F: block/replication.c
F: tests/unit/test-replication.c
F: docs/block-replication.txt
PVRDMA
M: Yuval Shaia <yuval.shaia.ml@gmail.com>
M: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
S: Odd Fixes
F: hw/rdma/*
F: hw/rdma/vmw/*
F: docs/pvrdma.txt
F: contrib/rdmacm-mux/*
F: qapi/rdma.json
Semihosting
M: Alex Bennée <alex.bennee@linaro.org>
S: Maintained
@@ -4241,7 +4148,6 @@ F: docs/conf.py
F: docs/*/conf.py
F: docs/sphinx/
F: docs/_templates/
F: docs/devel/docs.rst
Miscellaneous
-------------
@@ -4254,8 +4160,3 @@ Code Coverage Tools
M: Alex Bennée <alex.bennee@linaro.org>
S: Odd Fixes
F: scripts/coverage/
Machine development tool
M: Maksim Davydov <davydov-max@yandex-team.ru>
S: Supported
F: scripts/compare-machine-types.py


@@ -78,8 +78,7 @@ x := $(shell rm -rf meson-private meson-info meson-logs)
endif
# 1. ensure config-host.mak is up-to-date
config-host.mak: $(SRC_PATH)/configure $(SRC_PATH)/scripts/meson-buildoptions.sh \
$(SRC_PATH)/pythondeps.toml $(SRC_PATH)/VERSION
config-host.mak: $(SRC_PATH)/configure $(SRC_PATH)/scripts/meson-buildoptions.sh $(SRC_PATH)/VERSION
@echo config-host.mak is out-of-date, running configure
@if test -f meson-private/coredata.dat; then \
./config.status --skip-meson; \
@@ -142,13 +141,8 @@ MAKE.n = $(findstring n,$(firstword $(filter-out --%,$(MAKEFLAGS))))
MAKE.k = $(findstring k,$(firstword $(filter-out --%,$(MAKEFLAGS))))
MAKE.q = $(findstring q,$(firstword $(filter-out --%,$(MAKEFLAGS))))
MAKE.nq = $(if $(word 2, $(MAKE.n) $(MAKE.q)),nq)
NINJAFLAGS = \
$(if $V,-v) \
$(if $(MAKE.n), -n) \
$(if $(MAKE.k), -k0) \
$(filter-out -j, \
$(or $(filter -l% -j%, $(MAKEFLAGS)), \
$(if $(filter --jobserver-auth=%, $(MAKEFLAGS)),, -j1))) \
NINJAFLAGS = $(if $V,-v) $(if $(MAKE.n), -n) $(if $(MAKE.k), -k0) \
$(filter-out -j, $(lastword -j1 $(filter -l% -j%, $(MAKEFLAGS)))) \
-d keepdepfile
ninja-cmd-goals = $(or $(MAKECMDGOALS), all)
ninja-cmd-goals += $(foreach g, $(MAKECMDGOALS), $(.ninja-goals.$g))
@@ -208,7 +202,6 @@ clean: recurse-clean
! -path ./roms/edk2/ArmPkg/Library/GccLto/liblto-arm.a \
-exec rm {} +
rm -f TAGS cscope.* *~ */*~
@$(MAKE) -Ctests/qemu-iotests clean
VERSION = $(shell cat $(SRC_PATH)/VERSION)


@@ -82,7 +82,7 @@ guidelines set out in the `style section
the Developers Guide.
Additional information on submitting patches can be found online via
the QEMU website:
the QEMU website
* `<https://wiki.qemu.org/Contribute/SubmitAPatch>`_
* `<https://wiki.qemu.org/Contribute/TrivialPatches>`_
@@ -102,7 +102,7 @@ requires a working 'git send-email' setup, and by default doesn't
automate everything, so you may want to go through the above steps
manually for once.
For installation instructions, please go to:
For installation instructions, please go to
* `<https://github.com/stefanha/git-publish>`_
@@ -159,7 +159,7 @@ Contact
=======
The QEMU community can be contacted in a number of ways, with the two
main methods being email and IRC:
main methods being email and IRC
* `<mailto:qemu-devel@nongnu.org>`_
* `<https://lists.nongnu.org/mailman/listinfo/qemu-devel>`_


@@ -1 +1 @@
9.0.93
8.1.90


@@ -16,4 +16,3 @@ config KVM
config XEN
bool
select FSDEV_9P if VIRTFS
select XEN_BUS


@@ -41,7 +41,7 @@ void accel_blocker_init(void)
void accel_ioctl_begin(void)
{
if (likely(bql_locked())) {
if (likely(qemu_mutex_iothread_locked())) {
return;
}
@@ -51,7 +51,7 @@ void accel_ioctl_begin(void)
void accel_ioctl_end(void)
{
if (likely(bql_locked())) {
if (likely(qemu_mutex_iothread_locked())) {
return;
}
@@ -62,7 +62,7 @@ void accel_ioctl_end(void)
void accel_cpu_ioctl_begin(CPUState *cpu)
{
if (unlikely(bql_locked())) {
if (unlikely(qemu_mutex_iothread_locked())) {
return;
}
@@ -72,7 +72,7 @@ void accel_cpu_ioctl_begin(CPUState *cpu)
void accel_cpu_ioctl_end(CPUState *cpu)
{
if (unlikely(bql_locked())) {
if (unlikely(qemu_mutex_iothread_locked())) {
return;
}
@@ -105,7 +105,7 @@ void accel_ioctl_inhibit_begin(void)
* We only allow inhibiting while holding the BQL, so we can easily
* identify when an inhibitor wants to issue an ioctl.
*/
g_assert(bql_locked());
g_assert(qemu_mutex_iothread_locked());
/* Block further invocations of the ioctls outside the BQL. */
CPU_FOREACH(cpu) {


@@ -62,7 +62,7 @@ void accel_setup_post(MachineState *ms)
}
/* initialize the arch-independent accel operation interfaces */
void accel_system_init_ops_interfaces(AccelClass *ac)
void accel_init_ops_interfaces(AccelClass *ac)
{
const char *ac_name;
char *ops_name;
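
Only the helper's name changes in this hunk; the declarations left visible (ac_name, ops_name) hint at what the body goes on to do. A sketch of the usual QOM lookup it performs, assuming QEMU's "<accel>-ops" naming convention (ACCEL_OPS_SUFFIX and the lookup calls are recalled from context, not shown in this diff):

/* Derive the ops class name from the accelerator's QOM name,
 * e.g. "kvm-accel" -> "kvm-accel-ops", then look up that class. */
ac_name = object_class_get_name(OBJECT_CLASS(ac));
ops_name = g_strdup_printf("%s" ACCEL_OPS_SUFFIX, ac_name);
ops = ACCEL_OPS_CLASS(module_object_class_by_name(ops_name));
g_free(ops_name);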


@@ -10,6 +10,6 @@
#ifndef ACCEL_SYSTEM_H
#define ACCEL_SYSTEM_H
void accel_system_init_ops_interfaces(AccelClass *ac);
void accel_init_ops_interfaces(AccelClass *ac);
#endif /* ACCEL_SYSTEM_H */


@@ -104,7 +104,7 @@ static void accel_init_cpu_interfaces(AccelClass *ac)
void accel_init_interfaces(AccelClass *ac)
{
#ifndef CONFIG_USER_ONLY
accel_system_init_ops_interfaces(ac);
accel_init_ops_interfaces(ac);
#endif /* !CONFIG_USER_ONLY */
accel_init_cpu_interfaces(ac);


@@ -24,9 +24,10 @@ static void *dummy_cpu_thread_fn(void *arg)
rcu_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->neg.can_do_io = true;
current_cpu = cpu;
#ifndef _WIN32
@@ -42,7 +43,7 @@ static void *dummy_cpu_thread_fn(void *arg)
qemu_guest_random_seed_thread_part2(cpu->random_seed);
do {
bql_unlock();
qemu_mutex_unlock_iothread();
#ifndef _WIN32
do {
int sig;
@@ -55,11 +56,11 @@ static void *dummy_cpu_thread_fn(void *arg)
#else
qemu_sem_wait(&cpu->sem);
#endif
bql_lock();
qemu_mutex_lock_iothread();
qemu_wait_io_event(cpu);
} while (!cpu->unplug);
bql_unlock();
qemu_mutex_unlock_iothread();
rcu_unregister_thread();
return NULL;
}
@@ -68,6 +69,9 @@ void dummy_start_vcpu_thread(CPUState *cpu)
{
char thread_name[VCPU_THREAD_NAME_SIZE];
cpu->thread = g_malloc0(sizeof(QemuThread));
cpu->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(cpu->halt_cond);
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/DUMMY",
cpu->cpu_index);
qemu_thread_create(cpu->thread, thread_name, dummy_cpu_thread_fn, cpu,


@@ -52,7 +52,7 @@
#include "qemu/main-loop.h"
#include "exec/address-spaces.h"
#include "exec/exec-all.h"
#include "gdbstub/enums.h"
#include "exec/gdbstub.h"
#include "sysemu/cpus.h"
#include "sysemu/hvf.h"
#include "sysemu/hvf_int.h"
@@ -204,15 +204,15 @@ static void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
{
if (!cpu->accel->dirty) {
if (!cpu->vcpu_dirty) {
hvf_get_registers(cpu);
cpu->accel->dirty = true;
cpu->vcpu_dirty = true;
}
}
static void hvf_cpu_synchronize_state(CPUState *cpu)
{
if (!cpu->accel->dirty) {
if (!cpu->vcpu_dirty) {
run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL);
}
}
@@ -221,7 +221,7 @@ static void do_hvf_cpu_synchronize_set_dirty(CPUState *cpu,
run_on_cpu_data arg)
{
/* QEMU state is the reference, push it to HVF now and on next entry */
cpu->accel->dirty = true;
cpu->vcpu_dirty = true;
}
static void hvf_cpu_synchronize_post_reset(CPUState *cpu)
@@ -400,9 +400,9 @@ static int hvf_init_vcpu(CPUState *cpu)
r = hv_vcpu_create(&cpu->accel->fd,
(hv_vcpu_exit_t **)&cpu->accel->exit, NULL);
#else
r = hv_vcpu_create(&cpu->accel->fd, HV_VCPU_DEFAULT);
r = hv_vcpu_create((hv_vcpuid_t *)&cpu->accel->fd, HV_VCPU_DEFAULT);
#endif
cpu->accel->dirty = true;
cpu->vcpu_dirty = 1;
assert_hvf_ok(r);
cpu->accel->guest_debug_enabled = false;
@@ -424,10 +424,11 @@ static void *hvf_cpu_thread_fn(void *arg)
rcu_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->neg.can_do_io = true;
current_cpu = cpu;
hvf_init_vcpu(cpu);
@@ -448,7 +449,7 @@ static void *hvf_cpu_thread_fn(void *arg)
hvf_vcpu_destroy(cpu);
cpu_thread_signal_destroyed(cpu);
bql_unlock();
qemu_mutex_unlock_iothread();
rcu_unregister_thread();
return NULL;
}
@@ -463,6 +464,10 @@ static void hvf_start_vcpu_thread(CPUState *cpu)
*/
assert(hvf_enabled());
cpu->thread = g_malloc0(sizeof(QemuThread));
cpu->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(cpu->halt_cond);
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/HVF",
cpu->cpu_index);
qemu_thread_create(cpu->thread, thread_name, hvf_cpu_thread_fn,

View File

@@ -13,30 +13,40 @@
#include "sysemu/hvf.h"
#include "sysemu/hvf_int.h"
const char *hvf_return_string(hv_return_t ret)
{
switch (ret) {
case HV_SUCCESS: return "HV_SUCCESS";
case HV_ERROR: return "HV_ERROR";
case HV_BUSY: return "HV_BUSY";
case HV_BAD_ARGUMENT: return "HV_BAD_ARGUMENT";
case HV_NO_RESOURCES: return "HV_NO_RESOURCES";
case HV_NO_DEVICE: return "HV_NO_DEVICE";
case HV_UNSUPPORTED: return "HV_UNSUPPORTED";
case HV_DENIED: return "HV_DENIED";
default: return "[unknown hv_return value]";
}
}
void assert_hvf_ok_impl(hv_return_t ret, const char *file, unsigned int line,
const char *exp)
void assert_hvf_ok(hv_return_t ret)
{
if (ret == HV_SUCCESS) {
return;
}
error_report("Error: %s = %s (0x%x, at %s:%u)",
exp, hvf_return_string(ret), ret, file, line);
switch (ret) {
case HV_ERROR:
error_report("Error: HV_ERROR");
break;
case HV_BUSY:
error_report("Error: HV_BUSY");
break;
case HV_BAD_ARGUMENT:
error_report("Error: HV_BAD_ARGUMENT");
break;
case HV_NO_RESOURCES:
error_report("Error: HV_NO_RESOURCES");
break;
case HV_NO_DEVICE:
error_report("Error: HV_NO_DEVICE");
break;
case HV_UNSUPPORTED:
error_report("Error: HV_UNSUPPORTED");
break;
#if defined(MAC_OS_VERSION_11_0) && \
MAC_OS_X_VERSION_MIN_REQUIRED >= MAC_OS_VERSION_11_0
case HV_DENIED:
error_report("Error: HV_DENIED");
break;
#endif
default:
error_report("Unknown Error");
}
abort();
}
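
The reworked assert_hvf_ok_impl() above threads the failing expression and its call site into the report; the matching wrapper macro is outside this hunk, but presumably reads along these lines (a sketch, assuming it sits next to the declaration in sysemu/hvf_int.h):

/* Stringize the expression and capture the call site for the report. */
#define assert_hvf_ok(EX) \
    assert_hvf_ok_impl((EX), __FILE__, __LINE__, #EX)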

View File

@@ -33,9 +33,10 @@ static void *kvm_vcpu_thread_fn(void *arg)
rcu_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
cpu->neg.can_do_io = true;
current_cpu = cpu;
r = kvm_init_vcpu(cpu, &error_fatal);
@@ -57,7 +58,7 @@ static void *kvm_vcpu_thread_fn(void *arg)
kvm_destroy_vcpu(cpu);
cpu_thread_signal_destroyed(cpu);
bql_unlock();
qemu_mutex_unlock_iothread();
rcu_unregister_thread();
return NULL;
}
@@ -66,6 +67,9 @@ static void kvm_start_vcpu_thread(CPUState *cpu)
{
char thread_name[VCPU_THREAD_NAME_SIZE];
cpu->thread = g_malloc0(sizeof(QemuThread));
cpu->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(cpu->halt_cond);
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/KVM",
cpu->cpu_index);
qemu_thread_create(cpu->thread, thread_name, kvm_vcpu_thread_fn,
@@ -79,10 +83,10 @@ static bool kvm_vcpu_thread_is_idle(CPUState *cpu)
static bool kvm_cpus_are_resettable(void)
{
return !kvm_enabled() || !kvm_state->guest_state_protected;
return !kvm_enabled() || kvm_cpu_check_are_resettable();
}
#ifdef TARGET_KVM_HAVE_GUEST_DEBUG
#ifdef KVM_CAP_SET_GUEST_DEBUG
static int kvm_update_guest_debug_ops(CPUState *cpu)
{
return kvm_update_guest_debug(cpu, 0);
@@ -101,7 +105,7 @@ static void kvm_accel_ops_class_init(ObjectClass *oc, void *data)
ops->synchronize_state = kvm_cpu_synchronize_state;
ops->synchronize_pre_loadvm = kvm_cpu_synchronize_pre_loadvm;
#ifdef TARGET_KVM_HAVE_GUEST_DEBUG
#ifdef KVM_CAP_SET_GUEST_DEBUG
ops->update_guest_debug = kvm_update_guest_debug_ops;
ops->supports_guest_debug = kvm_supports_guest_debug;
ops->insert_breakpoint = kvm_insert_breakpoint;

View File

@@ -27,7 +27,7 @@
#include "hw/pci/msi.h"
#include "hw/pci/msix.h"
#include "hw/s390x/adapter.h"
#include "gdbstub/enums.h"
#include "exec/gdbstub.h"
#include "sysemu/kvm_int.h"
#include "sysemu/runstate.h"
#include "sysemu/cpus.h"
@@ -69,6 +69,16 @@
#define KVM_GUESTDBG_BLOCKIRQ 0
#endif
//#define DEBUG_KVM
#ifdef DEBUG_KVM
#define DPRINTF(fmt, ...) \
do { fprintf(stderr, fmt, ## __VA_ARGS__); } while (0)
#else
#define DPRINTF(fmt, ...) \
do { } while (0)
#endif
struct KVMParkedVcpu {
unsigned long vcpu_id;
int kvm_fd;
@@ -88,11 +98,11 @@ bool kvm_allowed;
bool kvm_readonly_mem_allowed;
bool kvm_vm_attributes_allowed;
bool kvm_msi_use_devid;
static bool kvm_has_guest_debug;
bool kvm_has_guest_debug;
static int kvm_sstep_flags;
static bool kvm_immediate_exit;
static uint64_t kvm_supported_memory_attributes;
static bool kvm_guest_memfd_supported;
static uint64_t kvm_supported_memory_attributes;
static hwaddr kvm_max_slot_size = ~0;
static const KVMCapabilityInfo kvm_required_capabilites[] = {
@@ -285,8 +295,19 @@ static int kvm_set_user_memory_region(KVMMemoryListener *kml, KVMSlot *slot, boo
{
KVMState *s = kvm_state;
struct kvm_userspace_memory_region2 mem;
static int cap_user_memory2 = -1;
int ret;
if (cap_user_memory2 == -1) {
cap_user_memory2 = kvm_check_extension(s, KVM_CAP_USER_MEMORY2);
}
if (!cap_user_memory2 && slot->guest_memfd >= 0) {
error_report("%s, KVM doesn't support KVM_CAP_USER_MEMORY2,"
" which is required by guest memfd!", __func__);
exit(1);
}
mem.slot = slot->slot | (kml->as_id << 16);
mem.guest_phys_addr = slot->start_addr;
mem.userspace_addr = (unsigned long)slot->ram;
@@ -299,17 +320,17 @@ static int kvm_set_user_memory_region(KVMMemoryListener *kml, KVMSlot *slot, boo
* value. This is needed based on KVM commit 75d61fbc. */
mem.memory_size = 0;
if (kvm_guest_memfd_supported) {
if (cap_user_memory2) {
ret = kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION2, &mem);
} else {
ret = kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
}
}
if (ret < 0) {
goto err;
}
}
mem.memory_size = slot->memory_size;
if (kvm_guest_memfd_supported) {
if (cap_user_memory2) {
ret = kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION2, &mem);
} else {
ret = kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
@@ -321,7 +342,7 @@ err:
mem.userspace_addr, mem.guest_memfd,
mem.guest_memfd_offset, ret);
if (ret < 0) {
if (kvm_guest_memfd_supported) {
if (cap_user_memory2) {
error_report("%s: KVM_SET_USER_MEMORY_REGION2 failed, slot=%d,"
" start=0x%" PRIx64 ", size=0x%" PRIx64 ","
" flags=0x%" PRIx32 ", guest_memfd=%" PRId32 ","
@@ -340,84 +361,14 @@ err:
return ret;
}
void kvm_park_vcpu(CPUState *cpu)
{
struct KVMParkedVcpu *vcpu;
trace_kvm_park_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
vcpu = g_malloc0(sizeof(*vcpu));
vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
vcpu->kvm_fd = cpu->kvm_fd;
QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
}
int kvm_unpark_vcpu(KVMState *s, unsigned long vcpu_id)
{
struct KVMParkedVcpu *cpu;
int kvm_fd = -ENOENT;
QLIST_FOREACH(cpu, &s->kvm_parked_vcpus, node) {
if (cpu->vcpu_id == vcpu_id) {
QLIST_REMOVE(cpu, node);
kvm_fd = cpu->kvm_fd;
g_free(cpu);
break;
}
}
trace_kvm_unpark_vcpu(vcpu_id, kvm_fd > 0 ? "unparked" : "!found parked");
return kvm_fd;
}
int kvm_create_vcpu(CPUState *cpu)
{
unsigned long vcpu_id = kvm_arch_vcpu_id(cpu);
KVMState *s = kvm_state;
int kvm_fd;
/* check if the KVM vCPU already exist but is parked */
kvm_fd = kvm_unpark_vcpu(s, vcpu_id);
if (kvm_fd < 0) {
/* vCPU not parked: create a new KVM vCPU */
kvm_fd = kvm_vm_ioctl(s, KVM_CREATE_VCPU, vcpu_id);
if (kvm_fd < 0) {
error_report("KVM_CREATE_VCPU IOCTL failed for vCPU %lu", vcpu_id);
return kvm_fd;
}
}
cpu->kvm_fd = kvm_fd;
cpu->kvm_state = s;
cpu->vcpu_dirty = true;
cpu->dirty_pages = 0;
cpu->throttle_us_per_full = 0;
trace_kvm_create_vcpu(cpu->cpu_index, vcpu_id, kvm_fd);
return 0;
}
int kvm_create_and_park_vcpu(CPUState *cpu)
{
int ret = 0;
ret = kvm_create_vcpu(cpu);
if (!ret) {
kvm_park_vcpu(cpu);
}
return ret;
}
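
Together these helpers decouple the KVM vCPU file descriptor from the CPUState lifetime: unplug parks the fd under its vcpu_id, and a later hot-add with the same id reuses it instead of issuing a fresh KVM_CREATE_VCPU. A sketch of the re-plug path (hypothetical caller, using only the functions shown):

static int kvm_replug_vcpu_fd(KVMState *s, CPUState *cpu)
{
    /* Parked fd if this vCPU existed before, -ENOENT otherwise. */
    int fd = kvm_unpark_vcpu(s, kvm_arch_vcpu_id(cpu));

    if (fd < 0) {
        fd = kvm_vm_ioctl(s, KVM_CREATE_VCPU, kvm_arch_vcpu_id(cpu));
    }
    if (fd < 0) {
        return fd;
    }
    cpu->kvm_fd = fd;
    return 0;
}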
static int do_kvm_destroy_vcpu(CPUState *cpu)
{
KVMState *s = kvm_state;
long mmap_size;
struct KVMParkedVcpu *vcpu = NULL;
int ret = 0;
trace_kvm_destroy_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
DPRINTF("kvm_destroy_vcpu\n");
ret = kvm_arch_destroy_vcpu(cpu);
if (ret < 0) {
@@ -427,7 +378,7 @@ static int do_kvm_destroy_vcpu(CPUState *cpu)
mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
if (mmap_size < 0) {
ret = mmap_size;
trace_kvm_failed_get_vcpu_mmap_size();
DPRINTF("KVM_GET_VCPU_MMAP_SIZE failed\n");
goto err;
}
@@ -443,7 +394,10 @@ static int do_kvm_destroy_vcpu(CPUState *cpu)
}
}
kvm_park_vcpu(cpu);
vcpu = g_malloc0(sizeof(*vcpu));
vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
vcpu->kvm_fd = cpu->kvm_fd;
QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
err:
return ret;
}
@@ -456,6 +410,29 @@ void kvm_destroy_vcpu(CPUState *cpu)
}
}
static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id)
{
struct KVMParkedVcpu *cpu;
QLIST_FOREACH(cpu, &s->kvm_parked_vcpus, node) {
if (cpu->vcpu_id == vcpu_id) {
int kvm_fd;
QLIST_REMOVE(cpu, node);
kvm_fd = cpu->kvm_fd;
g_free(cpu);
return kvm_fd;
}
}
return kvm_vm_ioctl(s, KVM_CREATE_VCPU, (void *)vcpu_id);
}
int __attribute__ ((weak)) kvm_arch_pre_create_vcpu(CPUState *cpu, Error **errp)
{
return 0;
}
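
The weak stub above makes the hook optional per target. Going by the comment in kvm_init_vcpu() below, x86 is expected to override it so that TDX vCPU setup (tdx_pre_create_vcpu()) runs before KVM_CREATE_VCPU; a sketch of such an override, with the is_tdx_vm() guard and the errp-taking signature both assumed:

int kvm_arch_pre_create_vcpu(CPUState *cpu, Error **errp)
{
    if (is_tdx_vm()) {
        return tdx_pre_create_vcpu(cpu, errp);
    }
    return 0;
}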
int kvm_init_vcpu(CPUState *cpu, Error **errp)
{
KVMState *s = kvm_state;
@@ -464,14 +441,31 @@ int kvm_init_vcpu(CPUState *cpu, Error **errp)
trace_kvm_init_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
ret = kvm_create_vcpu(cpu);
/*
* tdx_pre_create_vcpu() may call cpu_x86_cpuid(). It in turn may call
* kvm_vm_ioctl(). Set cpu->kvm_state in advance to avoid NULL pointer
* dereference.
*/
cpu->kvm_state = s;
ret = kvm_arch_pre_create_vcpu(cpu, errp);
if (ret < 0) {
error_setg_errno(errp, -ret,
"kvm_init_vcpu: kvm_create_vcpu failed (%lu)",
kvm_arch_vcpu_id(cpu));
cpu->kvm_state = NULL;
goto err;
}
ret = kvm_get_vcpu(s, kvm_arch_vcpu_id(cpu));
if (ret < 0) {
error_setg_errno(errp, -ret, "kvm_init_vcpu: kvm_get_vcpu failed (%lu)",
kvm_arch_vcpu_id(cpu));
cpu->kvm_state = NULL;
goto err;
}
cpu->kvm_fd = ret;
cpu->vcpu_dirty = true;
cpu->dirty_pages = 0;
cpu->throttle_us_per_full = 0;
mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
if (mmap_size < 0) {
ret = mmap_size;
@@ -503,6 +497,7 @@ int kvm_init_vcpu(CPUState *cpu, Error **errp)
PAGE_SIZE * KVM_DIRTY_LOG_PAGE_OFFSET);
if (cpu->kvm_dirty_gfns == MAP_FAILED) {
ret = -errno;
DPRINTF("mmap'ing vcpu dirty gfns failed: %d\n", ret);
goto err;
}
}
@@ -535,8 +530,7 @@ static int kvm_mem_flags(MemoryRegion *mr)
flags |= KVM_MEM_READONLY;
}
if (memory_region_has_guest_memfd(mr)) {
assert(kvm_guest_memfd_supported);
flags |= KVM_MEM_GUEST_MEMFD;
flags |= KVM_MEM_PRIVATE;
}
return flags;
}
@@ -880,7 +874,7 @@ static void kvm_dirty_ring_flush(void)
* should always be with BQL held, serialization is guaranteed.
* However, let's be sure of it.
*/
assert(bql_locked());
assert(qemu_mutex_iothread_locked());
/*
* First make sure to flush the hardware buffers by kicking all
* vcpus out in a synchronous way.
@@ -1193,11 +1187,6 @@ int kvm_vm_check_extension(KVMState *s, unsigned int extension)
return ret;
}
/*
* We track the poisoned pages to be able to:
* - replace them on VM reset
* - block a migration for a VM with a poisoned page
*/
typedef struct HWPoisonPage {
ram_addr_t ram_addr;
QLIST_ENTRY(HWPoisonPage) list;
@@ -1231,11 +1220,6 @@ void kvm_hwpoison_page_add(ram_addr_t ram_addr)
QLIST_INSERT_HEAD(&hwpoison_page_list, page, list);
}
bool kvm_hwpoisoned_mem(void)
{
return !QLIST_EMPTY(&hwpoison_page_list);
}
static uint32_t adjust_ioeventfd_endianness(uint32_t val, uint32_t size)
{
#if HOST_BIG_ENDIAN != TARGET_BIG_ENDIAN
@@ -1339,12 +1323,11 @@ void kvm_set_max_memslot_size(hwaddr max_slot_size)
kvm_max_slot_size = max_slot_size;
}
static int kvm_set_memory_attributes(hwaddr start, uint64_t size, uint64_t attr)
static int kvm_set_memory_attributes(hwaddr start, hwaddr size, uint64_t attr)
{
struct kvm_memory_attributes attrs;
int r;
assert((attr & kvm_supported_memory_attributes) == attr);
attrs.attributes = attr;
attrs.address = start;
attrs.size = size;
@@ -1352,20 +1335,29 @@ static int kvm_set_memory_attributes(hwaddr start, uint64_t size, uint64_t attr)
r = kvm_vm_ioctl(kvm_state, KVM_SET_MEMORY_ATTRIBUTES, &attrs);
if (r) {
error_report("failed to set memory (0x%" HWADDR_PRIx "+0x%" PRIx64 ") "
"with attr 0x%" PRIx64 " error '%s'",
start, size, attr, strerror(errno));
warn_report("%s: failed to set memory (0x%lx+%#zx) with attr 0x%lx error '%s'",
__func__, start, size, attr, strerror(errno));
}
return r;
}
int kvm_set_memory_attributes_private(hwaddr start, uint64_t size)
int kvm_set_memory_attributes_private(hwaddr start, hwaddr size)
{
if (!(kvm_supported_memory_attributes & KVM_MEMORY_ATTRIBUTE_PRIVATE)) {
error_report("KVM doesn't support PRIVATE memory attribute\n");
return -EINVAL;
}
return kvm_set_memory_attributes(start, size, KVM_MEMORY_ATTRIBUTE_PRIVATE);
}
int kvm_set_memory_attributes_shared(hwaddr start, uint64_t size)
int kvm_set_memory_attributes_shared(hwaddr start, hwaddr size)
{
if (!(kvm_supported_memory_attributes & KVM_MEMORY_ATTRIBUTE_PRIVATE)) {
error_report("KVM doesn't support PRIVATE memory attribute\n");
return -EINVAL;
}
return kvm_set_memory_attributes(start, size, 0);
}
@@ -1476,10 +1468,10 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
abort();
}
if (memory_region_has_guest_memfd(mr)) {
if (memory_region_is_default_private(mr)) {
err = kvm_set_memory_attributes_private(start_addr, slot_size);
if (err) {
error_report("%s: failed to set memory attribute private: %s",
error_report("%s: failed to set memory attribute private: %s\n",
__func__, strerror(-err));
exit(1);
}
@@ -1518,9 +1510,9 @@ static void *kvm_dirty_ring_reaper_thread(void *data)
trace_kvm_dirty_ring_reaper("wakeup");
r->reaper_state = KVM_DIRTY_RING_REAPER_REAPING;
bql_lock();
qemu_mutex_lock_iothread();
kvm_dirty_ring_reap(s, NULL);
bql_unlock();
qemu_mutex_unlock_iothread();
r->reaper_iteration++;
}
@@ -1953,8 +1945,8 @@ void kvm_irqchip_commit_routes(KVMState *s)
assert(ret == 0);
}
void kvm_add_routing_entry(KVMState *s,
struct kvm_irq_routing_entry *entry)
static void kvm_add_routing_entry(KVMState *s,
struct kvm_irq_routing_entry *entry)
{
struct kvm_irq_routing_entry *new;
int n, size;
@@ -2051,7 +2043,7 @@ void kvm_irqchip_change_notify(void)
notifier_list_notify(&kvm_irqchip_change_notifiers, NULL);
}
int kvm_irqchip_get_virq(KVMState *s)
static int kvm_irqchip_get_virq(KVMState *s)
{
int next_virq;
@@ -2116,17 +2108,12 @@ int kvm_irqchip_add_msi_route(KVMRouteChange *c, int vector, PCIDevice *dev)
return -EINVAL;
}
if (s->irq_routes->nr < s->gsi_count) {
trace_kvm_irqchip_add_msi_route(dev ? dev->name : (char *)"N/A",
vector, virq);
trace_kvm_irqchip_add_msi_route(dev ? dev->name : (char *)"N/A",
vector, virq);
kvm_add_routing_entry(s, &kroute);
kvm_arch_add_msi_route_post(&kroute, vector, dev);
c->changes++;
} else {
kvm_irqchip_release_virq(s, virq);
return -ENOSPC;
}
kvm_add_routing_entry(s, &kroute);
kvm_arch_add_msi_route_post(&kroute, vector, dev);
c->changes++;
return virq;
}
@@ -2209,6 +2196,62 @@ static int kvm_irqchip_assign_irqfd(KVMState *s, EventNotifier *event,
return kvm_vm_ioctl(s, KVM_IRQFD, &irqfd);
}
int kvm_irqchip_add_adapter_route(KVMState *s, AdapterInfo *adapter)
{
struct kvm_irq_routing_entry kroute = {};
int virq;
if (!kvm_gsi_routing_enabled()) {
return -ENOSYS;
}
virq = kvm_irqchip_get_virq(s);
if (virq < 0) {
return virq;
}
kroute.gsi = virq;
kroute.type = KVM_IRQ_ROUTING_S390_ADAPTER;
kroute.flags = 0;
kroute.u.adapter.summary_addr = adapter->summary_addr;
kroute.u.adapter.ind_addr = adapter->ind_addr;
kroute.u.adapter.summary_offset = adapter->summary_offset;
kroute.u.adapter.ind_offset = adapter->ind_offset;
kroute.u.adapter.adapter_id = adapter->adapter_id;
kvm_add_routing_entry(s, &kroute);
return virq;
}
int kvm_irqchip_add_hv_sint_route(KVMState *s, uint32_t vcpu, uint32_t sint)
{
struct kvm_irq_routing_entry kroute = {};
int virq;
if (!kvm_gsi_routing_enabled()) {
return -ENOSYS;
}
if (!kvm_check_extension(s, KVM_CAP_HYPERV_SYNIC)) {
return -ENOSYS;
}
virq = kvm_irqchip_get_virq(s);
if (virq < 0) {
return virq;
}
kroute.gsi = virq;
kroute.type = KVM_IRQ_ROUTING_HV_SINT;
kroute.flags = 0;
kroute.u.hv_sint.vcpu = vcpu;
kroute.u.hv_sint.sint = sint;
kvm_add_routing_entry(s, &kroute);
kvm_irqchip_commit_routes(s);
return virq;
}
#else /* !KVM_CAP_IRQ_ROUTING */
void kvm_init_irq_routing(KVMState *s)
@@ -2373,7 +2416,7 @@ bool kvm_vcpu_id_is_valid(int vcpu_id)
bool kvm_dirty_ring_enabled(void)
{
return kvm_state && kvm_state->kvm_dirty_ring_size;
return kvm_state->kvm_dirty_ring_size ? true : false;
}
static void query_stats_cb(StatsResultList **result, StatsTarget target,
@@ -2421,11 +2464,11 @@ static int kvm_init(MachineState *ms)
s->sigmask_len = 8;
accel_blocker_init();
#ifdef TARGET_KVM_HAVE_GUEST_DEBUG
#ifdef KVM_CAP_SET_GUEST_DEBUG
QTAILQ_INIT(&s->kvm_sw_breakpoints);
#endif
QLIST_INIT(&s->kvm_parked_vcpus);
s->fd = qemu_open_old(s->device ?: "/dev/kvm", O_RDWR);
s->fd = qemu_open_old("/dev/kvm", O_RDWR);
if (s->fd == -1) {
fprintf(stderr, "Could not access KVM kernel module: %m\n");
ret = -errno;
@@ -2447,12 +2490,6 @@ static int kvm_init(MachineState *ms)
goto err;
}
kvm_supported_memory_attributes = kvm_check_extension(s, KVM_CAP_MEMORY_ATTRIBUTES);
kvm_guest_memfd_supported =
kvm_check_extension(s, KVM_CAP_GUEST_MEMFD) &&
kvm_check_extension(s, KVM_CAP_USER_MEMORY2) &&
(kvm_supported_memory_attributes & KVM_MEMORY_ATTRIBUTE_PRIVATE);
kvm_immediate_exit = kvm_check_extension(s, KVM_CAP_IMMEDIATE_EXIT);
s->nr_slots = kvm_check_extension(s, KVM_CAP_NR_MEMSLOTS);
@@ -2467,6 +2504,11 @@ static int kvm_init(MachineState *ms)
}
s->as = g_new0(struct KVMAs, s->nr_as);
kvm_guest_memfd_supported = kvm_check_extension(s, KVM_CAP_GUEST_MEMFD);
ret = kvm_check_extension(s, KVM_CAP_MEMORY_ATTRIBUTES);
kvm_supported_memory_attributes = ret > 0 ? ret : 0;
if (object_property_find(OBJECT(current_machine), "kvm-type")) {
g_autofree char *kvm_type = object_property_get_str(OBJECT(current_machine),
"kvm-type",
@@ -2611,7 +2653,7 @@ static int kvm_init(MachineState *ms)
kvm_vm_attributes_allowed =
(kvm_check_extension(s, KVM_CAP_VM_ATTRIBUTES) > 0);
#ifdef TARGET_KVM_HAVE_GUEST_DEBUG
#ifdef KVM_CAP_SET_GUEST_DEBUG
kvm_has_guest_debug =
(kvm_check_extension(s, KVM_CAP_SET_GUEST_DEBUG) > 0);
#endif
@@ -2620,7 +2662,7 @@ static int kvm_init(MachineState *ms)
if (kvm_has_guest_debug) {
kvm_sstep_flags = SSTEP_ENABLE;
#if defined TARGET_KVM_HAVE_GUEST_DEBUG
#if defined KVM_CAP_SET_GUEST_DEBUG2
int guest_debug_flags =
kvm_check_extension(s, KVM_CAP_SET_GUEST_DEBUG2);
@@ -2763,9 +2805,14 @@ void kvm_flush_coalesced_mmio_buffer(void)
s->coalesced_flush_in_progress = false;
}
bool kvm_cpu_check_are_resettable(void)
{
return kvm_arch_cpu_check_are_resettable();
}
static void do_kvm_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
{
if (!cpu->vcpu_dirty && !kvm_state->guest_state_protected) {
if (!cpu->vcpu_dirty) {
int ret = kvm_arch_get_registers(cpu);
if (ret) {
error_report("Failed to get registers: %s", strerror(-ret));
@@ -2779,7 +2826,7 @@ static void do_kvm_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
void kvm_cpu_synchronize_state(CPUState *cpu)
{
if (!cpu->vcpu_dirty && !kvm_state->guest_state_protected) {
if (!cpu->vcpu_dirty) {
run_on_cpu(cpu, do_kvm_cpu_synchronize_state, RUN_ON_CPU_NULL);
}
}
@@ -2814,13 +2861,7 @@ static void do_kvm_cpu_synchronize_post_init(CPUState *cpu, run_on_cpu_data arg)
void kvm_cpu_synchronize_post_init(CPUState *cpu)
{
if (!kvm_state->guest_state_protected) {
/*
* This runs before the machine_init_done notifiers, and is the last
* opportunity to synchronize the state of confidential guests.
*/
run_on_cpu(cpu, do_kvm_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
}
run_on_cpu(cpu, do_kvm_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
}
static void do_kvm_cpu_synchronize_pre_loadvm(CPUState *cpu, run_on_cpu_data arg)
@@ -2898,16 +2939,6 @@ int kvm_convert_memory(hwaddr start, hwaddr size, bool to_private)
int ret = -1;
trace_kvm_convert_memory(start, size, to_private ? "shared_to_private" : "private_to_shared");
if (!QEMU_PTR_IS_ALIGNED(start, qemu_real_host_page_size()) ||
!QEMU_PTR_IS_ALIGNED(size, qemu_real_host_page_size())) {
return -1;
}
if (!size) {
return -1;
}
section = memory_region_find(get_system_memory(), start, size);
mr = section.mr;
if (!mr) {
@@ -2921,12 +2952,33 @@ int kvm_convert_memory(hwaddr start, hwaddr size, bool to_private)
* [top of low memory: typically 2GB=0xC000000, 0xFC00000)
*/
if (!to_private) {
return 0;
ret = 0;
}
return -1;
return ret;
}
if (!memory_region_has_guest_memfd(mr)) {
if (memory_region_has_guest_memfd(mr)) {
if (to_private) {
ret = kvm_set_memory_attributes_private(start, size);
} else {
ret = kvm_set_memory_attributes_shared(start, size);
}
if (ret) {
memory_region_unref(section.mr);
return ret;
}
addr = memory_region_get_ram_ptr(section.mr) +
section.offset_within_region;
rb = qemu_ram_block_from_host(addr, false, &offset);
/*
* With KVM_SET_MEMORY_ATTRIBUTES by kvm_set_memory_attributes(),
* operation on underlying file descriptor is only for releasing
* unnecessary pages.
*/
ram_block_convert_range(rb, offset, size, to_private);
} else {
/*
* Because vMMIO region must be shared, guest TD may convert vMMIO
* region to shared explicitly. Don't complain such case. See
@@ -2937,42 +2989,15 @@ int kvm_convert_memory(hwaddr start, hwaddr size, bool to_private)
!memory_region_is_ram_device(mr) &&
!memory_region_is_rom(mr) &&
!memory_region_is_romd(mr)) {
ret = 0;
} else {
error_report("Convert non guest_memfd backed memory region "
ret = 0;
} else {
warn_report("Convert non guest_memfd backed memory region "
"(0x%"HWADDR_PRIx" ,+ 0x%"HWADDR_PRIx") to %s",
start, size, to_private ? "private" : "shared");
}
goto out_unref;
}
if (to_private) {
ret = kvm_set_memory_attributes_private(start, size);
} else {
ret = kvm_set_memory_attributes_shared(start, size);
}
if (ret) {
goto out_unref;
}
addr = memory_region_get_ram_ptr(mr) + section.offset_within_region;
rb = qemu_ram_block_from_host(addr, false, &offset);
if (to_private) {
if (rb->page_size != qemu_real_host_page_size()) {
/*
* shared memory is backed by hugetlb, which is supposed to be
* pre-allocated and doesn't need to be discarded
*/
goto out_unref;
}
ret = ram_block_discard_range(rb, offset, size);
} else {
ret = ram_block_discard_guest_memfd_range(rb, offset, size);
}
out_unref:
memory_region_unref(mr);
memory_region_unref(section.mr);
return ret;
}
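
Stripped of the vMMIO and hugetlb special cases, the conversion above is a two-step dance: flip the KVM memory attribute first, then discard the backing pages on whichever side the guest can no longer reach. A condensed sketch over the same helpers (assuming a guest_memfd-backed region and page-aligned start/size):

static int convert_range(RAMBlock *rb, ram_addr_t offset,
                         hwaddr start, hwaddr size, bool to_private)
{
    int ret = to_private ? kvm_set_memory_attributes_private(start, size)
                         : kvm_set_memory_attributes_shared(start, size);
    if (ret) {
        return ret;
    }
    /* Release the side that just became unreachable for the guest. */
    return to_private ? ram_block_discard_range(rb, offset, size)
                      : ram_block_discard_guest_memfd_range(rb, offset, size);
}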
@@ -2981,14 +3006,14 @@ int kvm_cpu_exec(CPUState *cpu)
struct kvm_run *run = cpu->kvm_run;
int ret, run_ret;
trace_kvm_cpu_exec();
DPRINTF("kvm_cpu_exec()\n");
if (kvm_arch_process_async_events(cpu)) {
qatomic_set(&cpu->exit_request, 0);
return EXCP_HLT;
}
bql_unlock();
qemu_mutex_unlock_iothread();
cpu_exec_start(cpu);
do {
@@ -3008,7 +3033,7 @@ int kvm_cpu_exec(CPUState *cpu)
kvm_arch_pre_run(cpu, run);
if (qatomic_read(&cpu->exit_request)) {
trace_kvm_interrupt_exit_request();
DPRINTF("interrupt exit requested\n");
/*
* KVM requires us to reenter the kernel after IO exits to complete
* instruction emulation. This self-signal will ensure that we
@@ -3028,17 +3053,17 @@ int kvm_cpu_exec(CPUState *cpu)
#ifdef KVM_HAVE_MCE_INJECTION
if (unlikely(have_sigbus_pending)) {
bql_lock();
qemu_mutex_lock_iothread();
kvm_arch_on_sigbus_vcpu(cpu, pending_sigbus_code,
pending_sigbus_addr);
have_sigbus_pending = false;
bql_unlock();
qemu_mutex_unlock_iothread();
}
#endif
if (run_ret < 0) {
if (run_ret == -EINTR || run_ret == -EAGAIN) {
trace_kvm_io_window_exit();
DPRINTF("io window exit\n");
kvm_eat_signals(cpu);
ret = EXCP_INTERRUPT;
break;
@@ -3062,6 +3087,7 @@ int kvm_cpu_exec(CPUState *cpu)
trace_kvm_run_exit(cpu->cpu_index, run->exit_reason);
switch (run->exit_reason) {
case KVM_EXIT_IO:
DPRINTF("handle_io\n");
/* Called outside BQL */
kvm_handle_io(run->io.port, attrs,
(uint8_t *)run + run->io.data_offset,
@@ -3071,6 +3097,7 @@ int kvm_cpu_exec(CPUState *cpu)
ret = 0;
break;
case KVM_EXIT_MMIO:
DPRINTF("handle_mmio\n");
/* Called outside BQL */
address_space_rw(&address_space_memory,
run->mmio.phys_addr, attrs,
@@ -3080,9 +3107,11 @@ int kvm_cpu_exec(CPUState *cpu)
ret = 0;
break;
case KVM_EXIT_IRQ_WINDOW_OPEN:
DPRINTF("irq_window_open\n");
ret = EXCP_INTERRUPT;
break;
case KVM_EXIT_SHUTDOWN:
DPRINTF("shutdown\n");
qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
ret = EXCP_INTERRUPT;
break;
@@ -3100,7 +3129,7 @@ int kvm_cpu_exec(CPUState *cpu)
* still full. Got kicked by KVM_RESET_DIRTY_RINGS.
*/
trace_kvm_dirty_ring_full(cpu->cpu_index);
bql_lock();
qemu_mutex_lock_iothread();
/*
* We throttle vCPU by making it sleep once it exit from kernel
* due to dirty ring full. In the dirtylimit scenario, reaping
@@ -3112,12 +3141,11 @@ int kvm_cpu_exec(CPUState *cpu)
} else {
kvm_dirty_ring_reap(kvm_state, NULL);
}
bql_unlock();
qemu_mutex_unlock_iothread();
dirtylimit_vcpu_execute(cpu);
ret = 0;
break;
case KVM_EXIT_SYSTEM_EVENT:
trace_kvm_run_exit_system_event(cpu->cpu_index, run->system_event.type);
switch (run->system_event.type) {
case KVM_SYSTEM_EVENT_SHUTDOWN:
qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
@@ -3129,20 +3157,18 @@ int kvm_cpu_exec(CPUState *cpu)
break;
case KVM_SYSTEM_EVENT_CRASH:
kvm_cpu_synchronize_state(cpu);
bql_lock();
qemu_mutex_lock_iothread();
qemu_system_guest_panicked(cpu_get_crash_info(cpu));
bql_unlock();
qemu_mutex_unlock_iothread();
ret = 0;
break;
default:
DPRINTF("kvm_arch_handle_exit\n");
ret = kvm_arch_handle_exit(cpu, run);
break;
}
break;
case KVM_EXIT_MEMORY_FAULT:
trace_kvm_memory_fault(run->memory_fault.gpa,
run->memory_fault.size,
run->memory_fault.flags);
if (run->memory_fault.flags & ~KVM_MEMORY_EXIT_FLAG_PRIVATE) {
error_report("KVM_EXIT_MEMORY_FAULT: Unknown flag 0x%" PRIx64,
(uint64_t)run->memory_fault.flags);
@@ -3153,13 +3179,14 @@ int kvm_cpu_exec(CPUState *cpu)
run->memory_fault.flags & KVM_MEMORY_EXIT_FLAG_PRIVATE);
break;
default:
DPRINTF("kvm_arch_handle_exit\n");
ret = kvm_arch_handle_exit(cpu, run);
break;
}
} while (ret == 0);
cpu_exec_end(cpu);
bql_lock();
qemu_mutex_lock_iothread();
if (ret < 0) {
cpu_dump_state(cpu, stderr, CPU_DUMP_CODE);
@@ -3328,7 +3355,7 @@ bool kvm_arm_supports_user_irq(void)
return kvm_check_extension(kvm_state, KVM_CAP_ARM_USER_IRQ);
}
#ifdef TARGET_KVM_HAVE_GUEST_DEBUG
#ifdef KVM_CAP_SET_GUEST_DEBUG
struct kvm_sw_breakpoint *kvm_find_sw_breakpoint(CPUState *cpu, vaddr pc)
{
struct kvm_sw_breakpoint *bp;
@@ -3488,7 +3515,7 @@ void kvm_remove_all_breakpoints(CPUState *cpu)
}
}
#endif /* !TARGET_KVM_HAVE_GUEST_DEBUG */
#endif /* !KVM_CAP_SET_GUEST_DEBUG */
static int kvm_set_signal_mask(CPUState *cpu, const sigset_t *sigset)
{
@@ -3771,39 +3798,6 @@ static void kvm_set_dirty_ring_size(Object *obj, Visitor *v,
s->kvm_dirty_ring_size = value;
}
static char *kvm_get_device(Object *obj,
Error **errp G_GNUC_UNUSED)
{
KVMState *s = KVM_STATE(obj);
return g_strdup(s->device);
}
static void kvm_set_device(Object *obj,
const char *value,
Error **errp G_GNUC_UNUSED)
{
KVMState *s = KVM_STATE(obj);
g_free(s->device);
s->device = g_strdup(value);
}
static void kvm_set_kvm_rapl(Object *obj, bool value, Error **errp)
{
KVMState *s = KVM_STATE(obj);
s->msr_energy.enable = value;
}
static void kvm_set_kvm_rapl_socket_path(Object *obj,
const char *str,
Error **errp)
{
KVMState *s = KVM_STATE(obj);
g_free(s->msr_energy.socket_path);
s->msr_energy.socket_path = g_strdup(str);
}
static void kvm_accel_instance_init(Object *obj)
{
KVMState *s = KVM_STATE(obj);
@@ -3822,8 +3816,6 @@ static void kvm_accel_instance_init(Object *obj)
s->xen_version = 0;
s->xen_gnttab_max_frames = 64;
s->xen_evtchn_max_pirq = 256;
s->device = NULL;
s->msr_energy.enable = false;
}
/**
@@ -3864,21 +3856,6 @@ static void kvm_accel_class_init(ObjectClass *oc, void *data)
object_class_property_set_description(oc, "dirty-ring-size",
"Size of KVM dirty page ring buffer (default: 0, i.e. use bitmap)");
object_class_property_add_str(oc, "device", kvm_get_device, kvm_set_device);
object_class_property_set_description(oc, "device",
"Path to the device node to use (default: /dev/kvm)");
object_class_property_add_bool(oc, "rapl",
NULL,
kvm_set_kvm_rapl);
object_class_property_set_description(oc, "rapl",
"Allow energy related MSRs for RAPL interface in Guest");
object_class_property_add_str(oc, "rapl-helper-socket", NULL,
kvm_set_kvm_rapl_socket_path);
object_class_property_set_description(oc, "rapl-helper-socket",
"Socket Path for comminucating with the Virtual MSR helper daemon");
kvm_arch_accel_class_init(oc);
}
@@ -3949,7 +3926,7 @@ static StatsList *add_kvmstat_entry(struct kvm_stats_desc *pdesc,
/* Alloc and populate data list */
stats = g_new0(Stats, 1);
stats->name = g_strdup(pdesc->name);
stats->value = g_new0(StatsValue, 1);
stats->value = g_new0(StatsValue, 1);;
if ((pdesc->flags & KVM_STATS_UNIT_MASK) == KVM_STATS_UNIT_BOOLEAN) {
stats->value->u.boolean = *stats_data;
@@ -4298,11 +4275,6 @@ void query_stats_schemas_cb(StatsSchemaList **result, Error **errp)
}
}
void kvm_mark_guest_state_protected(void)
{
kvm_state->guest_state_protected = true;
}
int kvm_create_guest_memfd(uint64_t size, uint64_t flags, Error **errp)
{
int fd;
@@ -4312,14 +4284,13 @@ int kvm_create_guest_memfd(uint64_t size, uint64_t flags, Error **errp)
};
if (!kvm_guest_memfd_supported) {
error_setg(errp, "KVM does not support guest_memfd");
return -1;
error_setg(errp, "KVM doesn't support guest memfd\n");
return -EOPNOTSUPP;
}
fd = kvm_vm_ioctl(kvm_state, KVM_CREATE_GUEST_MEMFD, &guest_memfd);
if (fd < 0) {
error_setg_errno(errp, errno, "Error creating KVM guest_memfd");
return -1;
error_setg_errno(errp, errno, "%s: error creating kvm guest memfd\n", __func__);
}
return fd;
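
A consumer of this function would create one guest_memfd per private-capable RAM region and wire it into the memslot code as slot->guest_memfd, alongside the ordinary shared mapping. A minimal, hypothetical call site (flags assumed zero, errors fatal for the sketch):

static int alloc_guest_memfd(uint64_t region_size)
{
    /* Later attached as slot->guest_memfd / slot->guest_memfd_offset. */
    return kvm_create_guest_memfd(region_size, 0, &error_fatal);
}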

View File

@@ -22,4 +22,5 @@ bool kvm_supports_guest_debug(void);
int kvm_insert_breakpoint(CPUState *cpu, int type, vaddr addr, vaddr len);
int kvm_remove_breakpoint(CPUState *cpu, int type, vaddr addr, vaddr len);
void kvm_remove_all_breakpoints(CPUState *cpu);
#endif /* KVM_CPUS_H */

View File

@@ -9,10 +9,6 @@ kvm_device_ioctl(int fd, int type, void *arg) "dev fd %d, type 0x%x, arg %p"
kvm_failed_reg_get(uint64_t id, const char *msg) "Warning: Unable to retrieve ONEREG %" PRIu64 " from KVM: %s"
kvm_failed_reg_set(uint64_t id, const char *msg) "Warning: Unable to set ONEREG %" PRIu64 " to KVM: %s"
kvm_init_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
kvm_create_vcpu(int cpu_index, unsigned long arch_cpu_id, int kvm_fd) "index: %d, id: %lu, kvm fd: %d"
kvm_destroy_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
kvm_park_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
kvm_unpark_vcpu(unsigned long arch_cpu_id, const char *msg) "id: %lu %s"
kvm_irqchip_commit_routes(void) ""
kvm_irqchip_add_msi_route(char *name, int vector, int virq) "dev %s vector %d virq %d"
kvm_irqchip_update_msi_route(int virq) "Updating MSI route virq=%d"
@@ -29,10 +25,4 @@ kvm_dirty_ring_reaper(const char *s) "%s"
kvm_dirty_ring_reap(uint64_t count, int64_t t) "reaped %"PRIu64" pages (took %"PRIi64" us)"
kvm_dirty_ring_reaper_kick(const char *reason) "%s"
kvm_dirty_ring_flush(int finished) "%d"
kvm_failed_get_vcpu_mmap_size(void) ""
kvm_cpu_exec(void) ""
kvm_interrupt_exit_request(void) ""
kvm_io_window_exit(void) ""
kvm_run_exit_system_event(int cpu_index, uint32_t event_type) "cpu_index %d, system_event_type %"PRIu32
kvm_convert_memory(uint64_t start, uint64_t size, const char *msg) "start 0x%" PRIx64 " size 0x%" PRIx64 " %s"
kvm_memory_fault(uint64_t start, uint64_t size, uint64_t flags) "start 0x%" PRIx64 " size 0x%" PRIx64 " flags 0x%" PRIx64

View File

@@ -24,18 +24,6 @@
#include "qemu/main-loop.h"
#include "hw/core/cpu.h"
static int64_t qtest_clock_counter;
static int64_t qtest_get_virtual_clock(void)
{
return qatomic_read_i64(&qtest_clock_counter);
}
static void qtest_set_virtual_clock(int64_t count)
{
qatomic_set_i64(&qtest_clock_counter, count);
}
static int qtest_init_accel(MachineState *ms)
{
return 0;
@@ -64,7 +52,6 @@ static void qtest_accel_ops_class_init(ObjectClass *oc, void *data)
ops->create_vcpu_thread = dummy_start_vcpu_thread;
ops->get_virtual_clock = qtest_get_virtual_clock;
ops->set_virtual_clock = qtest_set_virtual_clock;
};
static const TypeInfo qtest_accel_ops_type = {

View File

@@ -124,13 +124,3 @@ uint32_t kvm_dirty_ring_size(void)
{
return 0;
}
bool kvm_hwpoisoned_mem(void)
{
return false;
}
int kvm_create_guest_memfd(uint64_t size, uint64_t flags, Error **errp)
{
return -ENOSYS;
}

View File

@@ -18,6 +18,24 @@ void tb_flush(CPUState *cpu)
{
}
void tlb_set_dirty(CPUState *cpu, vaddr vaddr)
{
}
int probe_access_flags(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx,
bool nonfault, void **phost, uintptr_t retaddr)
{
g_assert_not_reached();
}
void *probe_access(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx, uintptr_t retaddr)
{
/* Handled by hardware accelerator. */
g_assert_not_reached();
}
G_NORETURN void cpu_loop_exit(CPUState *cpu)
{
g_assert_not_reached();

View File

@@ -30,6 +30,9 @@
#include "qemu/rcu.h"
#include "exec/log.h"
#include "qemu/main-loop.h"
#if defined(TARGET_I386) && !defined(CONFIG_USER_ONLY)
#include "hw/i386/apic.h"
#endif
#include "sysemu/cpus.h"
#include "exec/cpu-all.h"
#include "sysemu/cpu-timers.h"
@@ -144,16 +147,6 @@ static void init_delay_params(SyncClocks *sc, const CPUState *cpu)
}
#endif /* CONFIG USER ONLY */
bool tcg_cflags_has(CPUState *cpu, uint32_t flags)
{
return cpu->tcg_cflags & flags;
}
void tcg_cflags_set(CPUState *cpu, uint32_t flags)
{
cpu->tcg_cflags |= flags;
}
uint32_t curr_cflags(CPUState *cpu)
{
uint32_t cflags = cpu->tcg_cflags;
@@ -260,29 +253,43 @@ static inline TranslationBlock *tb_lookup(CPUState *cpu, vaddr pc,
hash = tb_jmp_cache_hash_func(pc);
jc = cpu->tb_jmp_cache;
tb = qatomic_read(&jc->array[hash].tb);
if (likely(tb &&
jc->array[hash].pc == pc &&
tb->cs_base == cs_base &&
tb->flags == flags &&
tb_cflags(tb) == cflags)) {
goto hit;
if (cflags & CF_PCREL) {
/* Use acquire to ensure current load of pc from jc. */
tb = qatomic_load_acquire(&jc->array[hash].tb);
if (likely(tb &&
jc->array[hash].pc == pc &&
tb->cs_base == cs_base &&
tb->flags == flags &&
tb_cflags(tb) == cflags)) {
return tb;
}
tb = tb_htable_lookup(cpu, pc, cs_base, flags, cflags);
if (tb == NULL) {
return NULL;
}
jc->array[hash].pc = pc;
/* Ensure pc is written first. */
qatomic_store_release(&jc->array[hash].tb, tb);
} else {
/* Use rcu_read to ensure current load of pc from *tb. */
tb = qatomic_rcu_read(&jc->array[hash].tb);
if (likely(tb &&
tb->pc == pc &&
tb->cs_base == cs_base &&
tb->flags == flags &&
tb_cflags(tb) == cflags)) {
return tb;
}
tb = tb_htable_lookup(cpu, pc, cs_base, flags, cflags);
if (tb == NULL) {
return NULL;
}
/* Use the pc value already stored in tb->pc. */
qatomic_set(&jc->array[hash].tb, tb);
}
tb = tb_htable_lookup(cpu, pc, cs_base, flags, cflags);
if (tb == NULL) {
return NULL;
}
jc->array[hash].pc = pc;
qatomic_set(&jc->array[hash].tb, tb);
hit:
/*
* As long as tb is not NULL, the contents are consistent. Therefore,
* the virtual PC has to match for non-CF_PCREL translations.
*/
assert((tb_cflags(tb) & CF_PCREL) || tb->pc == pc);
return tb;
}
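
The CF_PCREL branch above publishes the (pc, tb) pair without a lock, so ordering carries the correctness: pc is stored first, tb is store-released, and the reader load-acquires tb before trusting pc. The pairing in isolation (a sketch over the qatomic helpers used above):

typedef struct { vaddr pc; TranslationBlock *tb; } JmpCacheEnt;

static void jc_publish(JmpCacheEnt *e, vaddr pc, TranslationBlock *tb)
{
    e->pc = pc;                         /* 1: plain store of the key */
    qatomic_store_release(&e->tb, tb);  /* 2: release makes pc visible first */
}

static TranslationBlock *jc_consume(JmpCacheEnt *e, vaddr pc)
{
    TranslationBlock *tb = qatomic_load_acquire(&e->tb); /* pairs with release */
    return (tb && e->pc == pc) ? tb : NULL;
}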
@@ -350,9 +357,9 @@ static bool check_for_breakpoints_slow(CPUState *cpu, vaddr pc,
#ifdef CONFIG_USER_ONLY
g_assert_not_reached();
#else
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
assert(tcg_ops->debug_check_breakpoint);
match_bp = tcg_ops->debug_check_breakpoint(cpu);
CPUClass *cc = CPU_GET_CLASS(cpu);
assert(cc->tcg_ops->debug_check_breakpoint);
match_bp = cc->tcg_ops->debug_check_breakpoint(cpu);
#endif
}
@@ -378,7 +385,7 @@ static bool check_for_breakpoints_slow(CPUState *cpu, vaddr pc,
* breakpoints are removed.
*/
if (match_page) {
*cflags = (*cflags & ~CF_COUNT_MASK) | CF_NO_GOTO_TB | CF_BP_PAGE | 1;
*cflags = (*cflags & ~CF_COUNT_MASK) | CF_NO_GOTO_TB | 1;
}
return false;
}
@@ -406,14 +413,6 @@ const void *HELPER(lookup_tb_ptr)(CPUArchState *env)
uint64_t cs_base;
uint32_t flags, cflags;
/*
* By definition we've just finished a TB, so I/O is OK.
* Avoid the possibility of calling cpu_io_recompile() if
* a page table walk triggered by tb_lookup() calling
* probe_access_internal() happens to touch an MMIO device.
* The next TB, if we chain to it, will clear the flag again.
*/
cpu->neg.can_do_io = true;
cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
cflags = curr_cflags(cpu);
@@ -446,6 +445,7 @@ const void *HELPER(lookup_tb_ptr)(CPUArchState *env)
static inline TranslationBlock * QEMU_DISABLE_CFI
cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
{
CPUArchState *env = cpu_env(cpu);
uintptr_t ret;
TranslationBlock *last_tb;
const void *tb_ptr = itb->tc.ptr;
@@ -455,7 +455,7 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
}
qemu_thread_jit_execute();
ret = tcg_qemu_tb_exec(cpu_env(cpu), tb_ptr);
ret = tcg_qemu_tb_exec(env, tb_ptr);
cpu->neg.can_do_io = true;
qemu_plugin_disable_mem_helpers(cpu);
/*
@@ -476,11 +476,10 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
* counter hit zero); we must restore the guest PC to the address
* of the start of the TB.
*/
CPUClass *cc = cpu->cc;
const TCGCPUOps *tcg_ops = cc->tcg_ops;
CPUClass *cc = CPU_GET_CLASS(cpu);
if (tcg_ops->synchronize_from_tb) {
tcg_ops->synchronize_from_tb(cpu, last_tb);
if (cc->tcg_ops->synchronize_from_tb) {
cc->tcg_ops->synchronize_from_tb(cpu, last_tb);
} else {
tcg_debug_assert(!(tb_cflags(last_tb) & CF_PCREL));
assert(cc->set_pc);
@@ -512,19 +511,19 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
static void cpu_exec_enter(CPUState *cpu)
{
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
CPUClass *cc = CPU_GET_CLASS(cpu);
if (tcg_ops->cpu_exec_enter) {
tcg_ops->cpu_exec_enter(cpu);
if (cc->tcg_ops->cpu_exec_enter) {
cc->tcg_ops->cpu_exec_enter(cpu);
}
}
static void cpu_exec_exit(CPUState *cpu)
{
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
CPUClass *cc = CPU_GET_CLASS(cpu);
if (tcg_ops->cpu_exec_exit) {
tcg_ops->cpu_exec_exit(cpu);
if (cc->tcg_ops->cpu_exec_exit) {
cc->tcg_ops->cpu_exec_exit(cpu);
}
}
@@ -559,8 +558,8 @@ static void cpu_exec_longjmp_cleanup(CPUState *cpu)
tcg_ctx->gen_tb = NULL;
}
#endif
if (bql_locked()) {
bql_unlock();
if (qemu_mutex_iothread_locked()) {
qemu_mutex_unlock_iothread();
}
assert_no_pages_locked();
}
@@ -678,10 +677,16 @@ static inline bool cpu_handle_halt(CPUState *cpu)
{
#ifndef CONFIG_USER_ONLY
if (cpu->halted) {
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
bool leave_halt = tcg_ops->cpu_exec_halt(cpu);
if (!leave_halt) {
#if defined(TARGET_I386)
if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
X86CPU *x86_cpu = X86_CPU(cpu);
qemu_mutex_lock_iothread();
apic_poll_irq(x86_cpu->apic_state);
cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
qemu_mutex_unlock_iothread();
}
#endif /* TARGET_I386 */
if (!cpu_has_work(cpu)) {
return true;
}
@@ -694,7 +699,7 @@ static inline bool cpu_handle_halt(CPUState *cpu)
static inline void cpu_handle_debug_exception(CPUState *cpu)
{
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
CPUClass *cc = CPU_GET_CLASS(cpu);
CPUWatchpoint *wp;
if (!cpu->watchpoint_hit) {
@@ -703,8 +708,8 @@ static inline void cpu_handle_debug_exception(CPUState *cpu)
}
}
if (tcg_ops->debug_excp_handler) {
tcg_ops->debug_excp_handler(cpu);
if (cc->tcg_ops->debug_excp_handler) {
cc->tcg_ops->debug_excp_handler(cpu);
}
}
@@ -716,12 +721,11 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
&& cpu->neg.icount_decr.u16.low + cpu->icount_extra == 0) {
/* Execute just one insn to trigger exception pending in the log */
cpu->cflags_next_tb = (curr_cflags(cpu) & ~CF_USE_ICOUNT)
| CF_NOIRQ | 1;
| CF_LAST_IO | CF_NOIRQ | 1;
}
#endif
return false;
}
if (cpu->exception_index >= EXCP_INTERRUPT) {
/* exit request from the cpu execution loop */
*ret = cpu->exception_index;
@@ -730,59 +734,62 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
}
cpu->exception_index = -1;
return true;
}
} else {
#if defined(CONFIG_USER_ONLY)
/*
* If user mode only, we simulate a fake exception which will be
* handled outside the cpu execution loop.
*/
/* if user mode only, we simulate a fake exception
which will be handled outside the cpu execution
loop */
#if defined(TARGET_I386)
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
tcg_ops->fake_user_interrupt(cpu);
CPUClass *cc = CPU_GET_CLASS(cpu);
cc->tcg_ops->fake_user_interrupt(cpu);
#endif /* TARGET_I386 */
*ret = cpu->exception_index;
cpu->exception_index = -1;
return true;
#else
if (replay_exception()) {
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
bql_lock();
tcg_ops->do_interrupt(cpu);
bql_unlock();
*ret = cpu->exception_index;
cpu->exception_index = -1;
return true;
#else
if (replay_exception()) {
CPUClass *cc = CPU_GET_CLASS(cpu);
qemu_mutex_lock_iothread();
cc->tcg_ops->do_interrupt(cpu);
qemu_mutex_unlock_iothread();
cpu->exception_index = -1;
if (unlikely(cpu->singlestep_enabled)) {
/*
* After processing the exception, ensure an EXCP_DEBUG is
* raised when single-stepping so that GDB doesn't miss the
* next instruction.
*/
*ret = EXCP_DEBUG;
cpu_handle_debug_exception(cpu);
if (unlikely(cpu->singlestep_enabled)) {
/*
* After processing the exception, ensure an EXCP_DEBUG is
* raised when single-stepping so that GDB doesn't miss the
* next instruction.
*/
*ret = EXCP_DEBUG;
cpu_handle_debug_exception(cpu);
return true;
}
} else if (!replay_has_interrupt()) {
/* give a chance to iothread in replay mode */
*ret = EXCP_INTERRUPT;
return true;
}
} else if (!replay_has_interrupt()) {
/* give a chance to iothread in replay mode */
*ret = EXCP_INTERRUPT;
return true;
}
#endif
}
return false;
}
static inline bool icount_exit_request(CPUState *cpu)
#ifndef CONFIG_USER_ONLY
/*
* CPU_INTERRUPT_POLL is a virtual event which gets converted into a
* "real" interrupt event later. It does not need to be recorded for
* replay purposes.
*/
static inline bool need_replay_interrupt(int interrupt_request)
{
if (!icount_enabled()) {
return false;
}
if (cpu->cflags_next_tb != -1 && !(cpu->cflags_next_tb & CF_USE_ICOUNT)) {
return false;
}
return cpu->neg.icount_decr.u16.low + cpu->icount_extra == 0;
#if defined(TARGET_I386)
return !(interrupt_request & CPU_INTERRUPT_POLL);
#else
return true;
#endif
}
#endif /* !CONFIG_USER_ONLY */
static inline bool cpu_handle_interrupt(CPUState *cpu,
TranslationBlock **last_tb)
@@ -805,7 +812,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
if (unlikely(qatomic_read(&cpu->interrupt_request))) {
int interrupt_request;
bql_lock();
qemu_mutex_lock_iothread();
interrupt_request = cpu->interrupt_request;
if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
/* Mask out external interrupts for this step. */
@@ -814,7 +821,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
if (interrupt_request & CPU_INTERRUPT_DEBUG) {
cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
cpu->exception_index = EXCP_DEBUG;
bql_unlock();
qemu_mutex_unlock_iothread();
return true;
}
#if !defined(CONFIG_USER_ONLY)
@@ -825,7 +832,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
cpu->interrupt_request &= ~CPU_INTERRUPT_HALT;
cpu->halted = 1;
cpu->exception_index = EXCP_HLT;
bql_unlock();
qemu_mutex_unlock_iothread();
return true;
}
#if defined(TARGET_I386)
@@ -836,14 +843,14 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
cpu_svm_check_intercept_param(env, SVM_EXIT_INIT, 0, 0);
do_cpu_init(x86_cpu);
cpu->exception_index = EXCP_HALTED;
bql_unlock();
qemu_mutex_unlock_iothread();
return true;
}
#else
else if (interrupt_request & CPU_INTERRUPT_RESET) {
replay_interrupt();
cpu_reset(cpu);
bql_unlock();
qemu_mutex_unlock_iothread();
return true;
}
#endif /* !TARGET_I386 */
@@ -852,11 +859,11 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
True when it is, and we should restart on a new TB,
and via longjmp via cpu_loop_exit. */
else {
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
CPUClass *cc = CPU_GET_CLASS(cpu);
if (tcg_ops->cpu_exec_interrupt(cpu, interrupt_request)) {
if (!tcg_ops->need_replay_interrupt ||
tcg_ops->need_replay_interrupt(interrupt_request)) {
if (cc->tcg_ops->cpu_exec_interrupt &&
cc->tcg_ops->cpu_exec_interrupt(cpu, interrupt_request)) {
if (need_replay_interrupt(interrupt_request)) {
replay_interrupt();
}
/*
@@ -866,7 +873,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
*/
if (unlikely(cpu->singlestep_enabled)) {
cpu->exception_index = EXCP_DEBUG;
bql_unlock();
qemu_mutex_unlock_iothread();
return true;
}
cpu->exception_index = -1;
@@ -885,11 +892,14 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
}
/* If we exit via cpu_loop_exit/longjmp it is reset in cpu_exec */
bql_unlock();
qemu_mutex_unlock_iothread();
}
/* Finally, check if we need to exit to the main loop. */
if (unlikely(qatomic_read(&cpu->exit_request)) || icount_exit_request(cpu)) {
if (unlikely(qatomic_read(&cpu->exit_request))
|| (icount_enabled()
&& (cpu->cflags_next_tb == -1 || cpu->cflags_next_tb & CF_USE_ICOUNT)
&& cpu->neg.icount_decr.u16.low + cpu->icount_extra == 0)) {
qatomic_set(&cpu->exit_request, 0);
if (cpu->exception_index == -1) {
cpu->exception_index = EXCP_INTERRUPT;
@@ -904,6 +914,8 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
vaddr pc, TranslationBlock **last_tb,
int *tb_exit)
{
int32_t insns_left;
trace_exec_tb(tb, pc);
tb = cpu_tb_exec(cpu, tb, tb_exit);
if (*tb_exit != TB_EXIT_REQUESTED) {
@@ -912,7 +924,8 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
}
*last_tb = NULL;
if (cpu_loop_exit_requested(cpu)) {
insns_left = qatomic_read(&cpu->neg.icount_decr.u32);
if (insns_left < 0) {
/* Something asked us to stop executing chained TBs; just
* continue round the main loop. Whatever requested the exit
* will also have set something else (eg exit_request or
@@ -929,7 +942,7 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
/* Ensure global icount has gone forward */
icount_update(cpu);
/* Refill decrementer and continue execution. */
int32_t insns_left = MIN(0xffff, cpu->icount_budget);
insns_left = MIN(0xffff, cpu->icount_budget);
cpu->neg.icount_decr.u16.low = insns_left;
cpu->icount_extra = cpu->icount_budget - insns_left;
@@ -999,8 +1012,14 @@ cpu_exec_loop(CPUState *cpu, SyncClocks *sc)
*/
h = tb_jmp_cache_hash_func(pc);
jc = cpu->tb_jmp_cache;
jc->array[h].pc = pc;
qatomic_set(&jc->array[h].tb, tb);
if (cflags & CF_PCREL) {
jc->array[h].pc = pc;
/* Ensure pc is written first. */
qatomic_store_release(&jc->array[h].tb, tb);
} else {
/* Use the pc value already stored in tb->pc. */
qatomic_set(&jc->array[h].tb, tb);
}
}
#ifndef CONFIG_USER_ONLY
@@ -1051,7 +1070,7 @@ int cpu_exec(CPUState *cpu)
return EXCP_HALTED;
}
RCU_READ_LOCK_GUARD();
rcu_read_lock();
cpu_exec_enter(cpu);
/*
@@ -1065,20 +1084,18 @@ int cpu_exec(CPUState *cpu)
ret = cpu_exec_setjmp(cpu, &sc);
cpu_exec_exit(cpu);
rcu_read_unlock();
return ret;
}
bool tcg_exec_realizefn(CPUState *cpu, Error **errp)
{
static bool tcg_target_initialized;
CPUClass *cc = CPU_GET_CLASS(cpu);
if (!tcg_target_initialized) {
/* Check mandatory TCGCPUOps handlers */
#ifndef CONFIG_USER_ONLY
assert(cpu->cc->tcg_ops->cpu_exec_halt);
assert(cpu->cc->tcg_ops->cpu_exec_interrupt);
#endif /* !CONFIG_USER_ONLY */
cpu->cc->tcg_ops->initialize();
cc->tcg_ops->initialize();
tcg_target_initialized = true;
}

View File

@@ -21,16 +21,12 @@
#include "qemu/main-loop.h"
#include "hw/core/tcg-cpu-ops.h"
#include "exec/exec-all.h"
#include "exec/page-protection.h"
#include "exec/memory.h"
#include "exec/cpu_ldst.h"
#include "exec/cputlb.h"
#include "exec/tb-flush.h"
#include "exec/memory-internal.h"
#include "exec/ram_addr.h"
#include "exec/mmu-access-type.h"
#include "exec/tlb-common.h"
#include "exec/vaddr.h"
#include "tcg/tcg.h"
#include "qemu/error-report.h"
#include "exec/log.h"
@@ -99,54 +95,6 @@ static inline size_t sizeof_tlb(CPUTLBDescFast *fast)
return fast->mask + (1 << CPU_TLB_ENTRY_BITS);
}
static inline uint64_t tlb_read_idx(const CPUTLBEntry *entry,
MMUAccessType access_type)
{
/* Do not rearrange the CPUTLBEntry structure members. */
QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_read) !=
MMU_DATA_LOAD * sizeof(uint64_t));
QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_write) !=
MMU_DATA_STORE * sizeof(uint64_t));
QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_code) !=
MMU_INST_FETCH * sizeof(uint64_t));
#if TARGET_LONG_BITS == 32
/* Use qatomic_read, in case of addr_write; only care about low bits. */
const uint32_t *ptr = (uint32_t *)&entry->addr_idx[access_type];
ptr += HOST_BIG_ENDIAN;
return qatomic_read(ptr);
#else
const uint64_t *ptr = &entry->addr_idx[access_type];
# if TCG_OVERSIZED_GUEST
return *ptr;
# else
/* ofs might correspond to .addr_write, so use qatomic_read */
return qatomic_read(ptr);
# endif
#endif
}
static inline uint64_t tlb_addr_write(const CPUTLBEntry *entry)
{
return tlb_read_idx(entry, MMU_DATA_STORE);
}
/* Find the TLB index corresponding to the mmu_idx + address pair. */
static inline uintptr_t tlb_index(CPUState *cpu, uintptr_t mmu_idx,
vaddr addr)
{
uintptr_t size_mask = cpu->neg.tlb.f[mmu_idx].mask >> CPU_TLB_ENTRY_BITS;
return (addr >> TARGET_PAGE_BITS) & size_mask;
}
/* Find the TLB entry corresponding to the mmu_idx + address pair. */
static inline CPUTLBEntry *tlb_entry(CPUState *cpu, uintptr_t mmu_idx,
vaddr addr)
{
return &cpu->neg.tlb.f[mmu_idx].table[tlb_index(cpu, mmu_idx, addr)];
}
static void tlb_window_reset(CPUTLBDesc *desc, int64_t ns,
size_t max_entries)
{
@@ -418,9 +366,12 @@ void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
{
tlb_debug("mmu_idx: 0x%" PRIx16 "\n", idxmap);
assert_cpu_is_self(cpu);
tlb_flush_by_mmuidx_async_work(cpu, RUN_ON_CPU_HOST_INT(idxmap));
if (cpu->created && !qemu_cpu_is_self(cpu)) {
async_run_on_cpu(cpu, tlb_flush_by_mmuidx_async_work,
RUN_ON_CPU_HOST_INT(idxmap));
} else {
tlb_flush_by_mmuidx_async_work(cpu, RUN_ON_CPU_HOST_INT(idxmap));
}
}
void tlb_flush(CPUState *cpu)
@@ -428,6 +379,21 @@ void tlb_flush(CPUState *cpu)
tlb_flush_by_mmuidx(cpu, ALL_MMUIDX_BITS);
}
void tlb_flush_by_mmuidx_all_cpus(CPUState *src_cpu, uint16_t idxmap)
{
const run_on_cpu_func fn = tlb_flush_by_mmuidx_async_work;
tlb_debug("mmu_idx: 0x%"PRIx16"\n", idxmap);
flush_all_helper(src_cpu, fn, RUN_ON_CPU_HOST_INT(idxmap));
fn(src_cpu, RUN_ON_CPU_HOST_INT(idxmap));
}
void tlb_flush_all_cpus(CPUState *src_cpu)
{
tlb_flush_by_mmuidx_all_cpus(src_cpu, ALL_MMUIDX_BITS);
}
void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src_cpu, uint16_t idxmap)
{
const run_on_cpu_func fn = tlb_flush_by_mmuidx_async_work;
@@ -609,12 +575,28 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, vaddr addr, uint16_t idxmap)
{
tlb_debug("addr: %016" VADDR_PRIx " mmu_idx:%" PRIx16 "\n", addr, idxmap);
assert_cpu_is_self(cpu);
/* This should already be page aligned */
addr &= TARGET_PAGE_MASK;
tlb_flush_page_by_mmuidx_async_0(cpu, addr, idxmap);
if (qemu_cpu_is_self(cpu)) {
tlb_flush_page_by_mmuidx_async_0(cpu, addr, idxmap);
} else if (idxmap < TARGET_PAGE_SIZE) {
/*
* Most targets have only a few mmu_idx. In the case where
* we can stuff idxmap into the low TARGET_PAGE_BITS, avoid
* allocating memory for this operation.
*/
async_run_on_cpu(cpu, tlb_flush_page_by_mmuidx_async_1,
RUN_ON_CPU_TARGET_PTR(addr | idxmap));
} else {
TLBFlushPageByMMUIdxData *d = g_new(TLBFlushPageByMMUIdxData, 1);
/* Otherwise allocate a structure, freed by the worker. */
d->addr = addr;
d->idxmap = idxmap;
async_run_on_cpu(cpu, tlb_flush_page_by_mmuidx_async_2,
RUN_ON_CPU_HOST_PTR(d));
}
}
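
The idxmap < TARGET_PAGE_SIZE guard above works because addr is already page aligned, so its low TARGET_PAGE_BITS are free to carry the bitmap without a heap allocation; the worker on the other end presumably splits the pair back apart:

/* Assumed decoder for the addr|idxmap encoding used above. */
static void tlb_flush_page_by_mmuidx_async_1(CPUState *cpu,
                                             run_on_cpu_data data)
{
    vaddr addr_and_idxmap = data.target_ptr;
    vaddr addr = addr_and_idxmap & TARGET_PAGE_MASK;
    uint16_t idxmap = addr_and_idxmap & ~TARGET_PAGE_MASK;

    tlb_flush_page_by_mmuidx_async_0(cpu, addr, idxmap);
}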
void tlb_flush_page(CPUState *cpu, vaddr addr)
@@ -622,6 +604,46 @@ void tlb_flush_page(CPUState *cpu, vaddr addr)
tlb_flush_page_by_mmuidx(cpu, addr, ALL_MMUIDX_BITS);
}
void tlb_flush_page_by_mmuidx_all_cpus(CPUState *src_cpu, vaddr addr,
uint16_t idxmap)
{
tlb_debug("addr: %016" VADDR_PRIx " mmu_idx:%"PRIx16"\n", addr, idxmap);
/* This should already be page aligned */
addr &= TARGET_PAGE_MASK;
/*
* Allocate memory to hold addr+idxmap only when needed.
* See tlb_flush_page_by_mmuidx for details.
*/
if (idxmap < TARGET_PAGE_SIZE) {
flush_all_helper(src_cpu, tlb_flush_page_by_mmuidx_async_1,
RUN_ON_CPU_TARGET_PTR(addr | idxmap));
} else {
CPUState *dst_cpu;
/* Allocate a separate data block for each destination cpu. */
CPU_FOREACH(dst_cpu) {
if (dst_cpu != src_cpu) {
TLBFlushPageByMMUIdxData *d
= g_new(TLBFlushPageByMMUIdxData, 1);
d->addr = addr;
d->idxmap = idxmap;
async_run_on_cpu(dst_cpu, tlb_flush_page_by_mmuidx_async_2,
RUN_ON_CPU_HOST_PTR(d));
}
}
}
tlb_flush_page_by_mmuidx_async_0(src_cpu, addr, idxmap);
}
void tlb_flush_page_all_cpus(CPUState *src, vaddr addr)
{
tlb_flush_page_by_mmuidx_all_cpus(src, addr, ALL_MMUIDX_BITS);
}
void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
vaddr addr,
uint16_t idxmap)
@@ -777,8 +799,6 @@ void tlb_flush_range_by_mmuidx(CPUState *cpu, vaddr addr,
{
TLBFlushRangeData d;
assert_cpu_is_self(cpu);
/*
* If all bits are significant, and len is small,
* this devolves to tlb_flush_page.
@@ -799,7 +819,14 @@ void tlb_flush_range_by_mmuidx(CPUState *cpu, vaddr addr,
d.idxmap = idxmap;
d.bits = bits;
tlb_flush_range_by_mmuidx_async_0(cpu, d);
if (qemu_cpu_is_self(cpu)) {
tlb_flush_range_by_mmuidx_async_0(cpu, d);
} else {
/* Otherwise allocate a structure, freed by the worker. */
TLBFlushRangeData *p = g_memdup(&d, sizeof(d));
async_run_on_cpu(cpu, tlb_flush_range_by_mmuidx_async_1,
RUN_ON_CPU_HOST_PTR(p));
}
}
void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, vaddr addr,
@@ -808,6 +835,54 @@ void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, vaddr addr,
tlb_flush_range_by_mmuidx(cpu, addr, TARGET_PAGE_SIZE, idxmap, bits);
}
void tlb_flush_range_by_mmuidx_all_cpus(CPUState *src_cpu,
vaddr addr, vaddr len,
uint16_t idxmap, unsigned bits)
{
TLBFlushRangeData d;
CPUState *dst_cpu;
/*
* If all bits are significant, and len is small,
* this devolves to tlb_flush_page.
*/
if (bits >= TARGET_LONG_BITS && len <= TARGET_PAGE_SIZE) {
tlb_flush_page_by_mmuidx_all_cpus(src_cpu, addr, idxmap);
return;
}
/* If no page bits are significant, this devolves to tlb_flush. */
if (bits < TARGET_PAGE_BITS) {
tlb_flush_by_mmuidx_all_cpus(src_cpu, idxmap);
return;
}
/* This should already be page aligned */
d.addr = addr & TARGET_PAGE_MASK;
d.len = len;
d.idxmap = idxmap;
d.bits = bits;
/* Allocate a separate data block for each destination cpu. */
CPU_FOREACH(dst_cpu) {
if (dst_cpu != src_cpu) {
TLBFlushRangeData *p = g_memdup(&d, sizeof(d));
async_run_on_cpu(dst_cpu,
tlb_flush_range_by_mmuidx_async_1,
RUN_ON_CPU_HOST_PTR(p));
}
}
tlb_flush_range_by_mmuidx_async_0(src_cpu, d);
}
void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
vaddr addr, uint16_t idxmap,
unsigned bits)
{
tlb_flush_range_by_mmuidx_all_cpus(src_cpu, addr, TARGET_PAGE_SIZE,
idxmap, bits);
}
void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
vaddr addr,
vaddr len,
@@ -964,7 +1039,7 @@ static inline void tlb_set_dirty1_locked(CPUTLBEntry *tlb_entry,
/* update the TLB corresponding to virtual page vaddr
so that it is no longer dirty */
static void tlb_set_dirty(CPUState *cpu, vaddr addr)
void tlb_set_dirty(CPUState *cpu, vaddr addr)
{
int mmu_idx;
@@ -1070,11 +1145,14 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
" prot=%x idx=%d\n",
addr, full->phys_addr, prot, mmu_idx);
read_flags = full->tlb_fill_flags;
read_flags = 0;
if (full->lg_page_size < TARGET_PAGE_BITS) {
/* Repeat the MMU check and TLB fill on every access. */
read_flags |= TLB_INVALID_MASK;
}
if (full->attrs.byte_swap) {
read_flags |= TLB_BSWAP;
}
is_ram = memory_region_is_ram(section->mr);
is_romd = memory_region_is_romd(section->mr);
@@ -1378,8 +1456,9 @@ static int probe_access_internal(CPUState *cpu, vaddr addr,
flags |= full->slow_flags[access_type];
/* Fold all "mmio-like" bits into TLB_MMIO. This is not RAM. */
if (unlikely(flags & ~(TLB_WATCHPOINT | TLB_NOTDIRTY | TLB_CHECK_ALIGNED))
|| (access_type != MMU_INST_FETCH && force_mmio)) {
if (unlikely(flags & ~(TLB_WATCHPOINT | TLB_NOTDIRTY))
||
(access_type != MMU_INST_FETCH && force_mmio)) {
*phost = NULL;
return TLB_MMIO;
}
@@ -1400,8 +1479,7 @@ int probe_access_full(CPUArchState *env, vaddr addr, int size,
/* Handle clean RAM pages. */
if (unlikely(flags & TLB_NOTDIRTY)) {
int dirtysize = size == 0 ? 1 : size;
notdirty_write(env_cpu(env), addr, dirtysize, *pfull, retaddr);
notdirty_write(env_cpu(env), addr, 1, *pfull, retaddr);
flags &= ~TLB_NOTDIRTY;
}
@@ -1424,8 +1502,7 @@ int probe_access_full_mmu(CPUArchState *env, vaddr addr, int size,
/* Handle clean RAM pages. */
if (unlikely(flags & TLB_NOTDIRTY)) {
int dirtysize = size == 0 ? 1 : size;
notdirty_write(env_cpu(env), addr, dirtysize, *pfull, 0);
notdirty_write(env_cpu(env), addr, 1, *pfull, 0);
flags &= ~TLB_NOTDIRTY;
}
@@ -1447,8 +1524,7 @@ int probe_access_flags(CPUArchState *env, vaddr addr, int size,
/* Handle clean RAM pages. */
if (unlikely(flags & TLB_NOTDIRTY)) {
int dirtysize = size == 0 ? 1 : size;
notdirty_write(env_cpu(env), addr, dirtysize, full, retaddr);
notdirty_write(env_cpu(env), addr, 1, full, retaddr);
flags &= ~TLB_NOTDIRTY;
}
@@ -1484,7 +1560,7 @@ void *probe_access(CPUArchState *env, vaddr addr, int size,
/* Handle clean RAM pages. */
if (flags & TLB_NOTDIRTY) {
notdirty_write(env_cpu(env), addr, size, full, retaddr);
notdirty_write(env_cpu(env), addr, 1, full, retaddr);
}
}
@@ -1522,7 +1598,7 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, vaddr addr,
void *p;
(void)probe_access_internal(env_cpu(env), addr, 1, MMU_INST_FETCH,
cpu_mmu_index(env_cpu(env), true), false,
cpu_mmu_index(env, true), false,
&p, &full, 0, false);
if (p == NULL) {
return -1;
@@ -1760,31 +1836,6 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
tcg_debug_assert((flags & TLB_BSWAP) == 0);
}
/*
* This alignment check differs from the one above, in that this is
* based on the atomicity of the operation. The intended use case is
* the ARM memory type field of each PTE, where access to pages with
* Device memory type require alignment.
*/
if (unlikely(flags & TLB_CHECK_ALIGNED)) {
MemOp size = l->memop & MO_SIZE;
switch (l->memop & MO_ATOM_MASK) {
case MO_ATOM_NONE:
size = MO_8;
break;
case MO_ATOM_IFALIGN_PAIR:
case MO_ATOM_WITHIN16_PAIR:
size = size ? size - 1 : 0;
break;
default:
break;
}
if (addr & ((1 << size) - 1)) {
cpu_unaligned_access(cpu, addr, type, l->mmu_idx, ra);
}
}
return crosspage;
}
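A worked instance of the atomicity-based reduction above (illustration, not diff content):
/*
 * An 8-byte load (MO_SIZE == MO_64 == 3) tagged MO_ATOM_WITHIN16_PAIR
 * only guarantees atomicity per half, so the switch reduces the size:
 *     size = 3 ? 3 - 1 : 0   ->   2 (MO_32)
 * and the final test, addr & ((1 << 2) - 1), demands 4-byte rather
 * than 8-byte alignment for the Device-memory access.
 */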
@@ -1921,7 +1972,7 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
* @size: number of bytes
* @mmu_idx: virtual address context
* @ra: return address into tcg generated code, or 0
* Context: BQL held
* Context: iothread lock held
*
* Load @size bytes from @addr, which is memory-mapped i/o.
* The bytes are concatenated in big-endian order with @ret_be.
@@ -1968,6 +2019,7 @@ static uint64_t do_ld_mmio_beN(CPUState *cpu, CPUTLBEntryFull *full,
MemoryRegion *mr;
hwaddr mr_offset;
MemTxAttrs attrs;
uint64_t ret;
tcg_debug_assert(size > 0 && size <= 8);
@@ -1975,9 +2027,12 @@ static uint64_t do_ld_mmio_beN(CPUState *cpu, CPUTLBEntryFull *full,
section = io_prepare(&mr_offset, cpu, full->xlat_section, attrs, addr, ra);
mr = section->mr;
BQL_LOCK_GUARD();
return int_ld_mmio_beN(cpu, full, ret_be, addr, size, mmu_idx,
type, ra, mr, mr_offset);
qemu_mutex_lock_iothread();
ret = int_ld_mmio_beN(cpu, full, ret_be, addr, size, mmu_idx,
type, ra, mr, mr_offset);
qemu_mutex_unlock_iothread();
return ret;
}
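Sketch of why the scoped-guard form on one side of this hunk needs no ret temporary (example_locked_read() is a hypothetical helper):
static uint64_t example_locked_read(void)
{
    BQL_LOCK_GUARD();   /* scoped: the BQL is dropped at scope exit */
    return 42;          /* safe to return directly, no explicit unlock */
}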
static Int128 do_ld16_mmio_beN(CPUState *cpu, CPUTLBEntryFull *full,
@@ -1996,11 +2051,13 @@ static Int128 do_ld16_mmio_beN(CPUState *cpu, CPUTLBEntryFull *full,
section = io_prepare(&mr_offset, cpu, full->xlat_section, attrs, addr, ra);
mr = section->mr;
BQL_LOCK_GUARD();
qemu_mutex_lock_iothread();
a = int_ld_mmio_beN(cpu, full, ret_be, addr, size - 8, mmu_idx,
MMU_DATA_LOAD, ra, mr, mr_offset);
b = int_ld_mmio_beN(cpu, full, ret_be, addr + size - 8, 8, mmu_idx,
MMU_DATA_LOAD, ra, mr, mr_offset + size - 8);
qemu_mutex_unlock_iothread();
return int128_make128(b, a);
}
@@ -2461,7 +2518,7 @@ static Int128 do_ld16_mmu(CPUState *cpu, vaddr addr,
* @size: number of bytes
* @mmu_idx: virtual address context
* @ra: return address into tcg generated code, or 0
* Context: BQL held
* Context: iothread lock held
*
* Store @size bytes at @addr, which is memory-mapped i/o.
* The bytes to store are extracted in little-endian order from @val_le;
@@ -2509,6 +2566,7 @@ static uint64_t do_st_mmio_leN(CPUState *cpu, CPUTLBEntryFull *full,
hwaddr mr_offset;
MemoryRegion *mr;
MemTxAttrs attrs;
uint64_t ret;
tcg_debug_assert(size > 0 && size <= 8);
@@ -2516,9 +2574,12 @@ static uint64_t do_st_mmio_leN(CPUState *cpu, CPUTLBEntryFull *full,
section = io_prepare(&mr_offset, cpu, full->xlat_section, attrs, addr, ra);
mr = section->mr;
BQL_LOCK_GUARD();
return int_st_mmio_leN(cpu, full, val_le, addr, size, mmu_idx,
ra, mr, mr_offset);
qemu_mutex_lock_iothread();
ret = int_st_mmio_leN(cpu, full, val_le, addr, size, mmu_idx,
ra, mr, mr_offset);
qemu_mutex_unlock_iothread();
return ret;
}
static uint64_t do_st16_mmio_leN(CPUState *cpu, CPUTLBEntryFull *full,
@@ -2529,6 +2590,7 @@ static uint64_t do_st16_mmio_leN(CPUState *cpu, CPUTLBEntryFull *full,
MemoryRegion *mr;
hwaddr mr_offset;
MemTxAttrs attrs;
uint64_t ret;
tcg_debug_assert(size > 8 && size <= 16);
@@ -2536,11 +2598,14 @@ static uint64_t do_st16_mmio_leN(CPUState *cpu, CPUTLBEntryFull *full,
section = io_prepare(&mr_offset, cpu, full->xlat_section, attrs, addr, ra);
mr = section->mr;
BQL_LOCK_GUARD();
qemu_mutex_lock_iothread();
int_st_mmio_leN(cpu, full, int128_getlo(val_le), addr, 8,
mmu_idx, ra, mr, mr_offset);
return int_st_mmio_leN(cpu, full, int128_gethi(val_le), addr + 8,
size - 8, mmu_idx, ra, mr, mr_offset + 8);
ret = int_st_mmio_leN(cpu, full, int128_gethi(val_le), addr + 8,
size - 8, mmu_idx, ra, mr, mr_offset + 8);
qemu_mutex_unlock_iothread();
return ret;
}
/*
@@ -2891,30 +2956,26 @@ static void do_st16_mmu(CPUState *cpu, vaddr addr, Int128 val,
uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr addr)
{
CPUState *cs = env_cpu(env);
MemOpIdx oi = make_memop_idx(MO_UB, cpu_mmu_index(cs, true));
return do_ld1_mmu(cs, addr, oi, 0, MMU_INST_FETCH);
MemOpIdx oi = make_memop_idx(MO_UB, cpu_mmu_index(env, true));
return do_ld1_mmu(env_cpu(env), addr, oi, 0, MMU_INST_FETCH);
}
uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr addr)
{
CPUState *cs = env_cpu(env);
MemOpIdx oi = make_memop_idx(MO_TEUW, cpu_mmu_index(cs, true));
return do_ld2_mmu(cs, addr, oi, 0, MMU_INST_FETCH);
MemOpIdx oi = make_memop_idx(MO_TEUW, cpu_mmu_index(env, true));
return do_ld2_mmu(env_cpu(env), addr, oi, 0, MMU_INST_FETCH);
}
uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr addr)
{
CPUState *cs = env_cpu(env);
MemOpIdx oi = make_memop_idx(MO_TEUL, cpu_mmu_index(cs, true));
return do_ld4_mmu(cs, addr, oi, 0, MMU_INST_FETCH);
MemOpIdx oi = make_memop_idx(MO_TEUL, cpu_mmu_index(env, true));
return do_ld4_mmu(env_cpu(env), addr, oi, 0, MMU_INST_FETCH);
}
uint64_t cpu_ldq_code(CPUArchState *env, abi_ptr addr)
{
CPUState *cs = env_cpu(env);
MemOpIdx oi = make_memop_idx(MO_TEUQ, cpu_mmu_index(cs, true));
return do_ld8_mmu(cs, addr, oi, 0, MMU_INST_FETCH);
MemOpIdx oi = make_memop_idx(MO_TEUQ, cpu_mmu_index(env, true));
return do_ld8_mmu(env_cpu(env), addr, oi, 0, MMU_INST_FETCH);
}
uint8_t cpu_ldb_code_mmu(CPUArchState *env, abi_ptr addr,


@@ -6,10 +6,11 @@
#include "qemu/osdep.h"
#include "qemu/lockable.h"
#include "tcg/debuginfo.h"
#include <elfutils/libdwfl.h>
#include "debuginfo.h"
static QemuMutex lock;
static Dwfl *dwfl;
static const Dwfl_Callbacks dwfl_callbacks = {


@@ -4,8 +4,8 @@
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef TCG_DEBUGINFO_H
#define TCG_DEBUGINFO_H
#ifndef ACCEL_TCG_DEBUGINFO_H
#define ACCEL_TCG_DEBUGINFO_H
#include "qemu/bitops.h"


@@ -49,19 +49,21 @@ static bool icount_sleep = true;
/* Arbitrarily pick 1MIPS as the minimum allowable speed. */
#define MAX_ICOUNT_SHIFT 10
/* Do not count executed instructions */
ICountMode use_icount = ICOUNT_DISABLED;
/*
* 0 = Do not count executed instructions.
* 1 = Fixed conversion of insn to ns via "shift" option
* 2 = Runtime adaptive algorithm to compute shift
*/
int use_icount;
static void icount_enable_precise(void)
{
/* Fixed conversion of insn to ns via "shift" option */
use_icount = ICOUNT_PRECISE;
use_icount = 1;
}
static void icount_enable_adaptive(void)
{
/* Runtime adaptive algorithm to compute shift */
use_icount = ICOUNT_ADAPTATIVE;
use_icount = 2;
}
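Shape of the enum implied by the new-side lines above (a sketch; the actual definition lives in a header outside this compare):
typedef enum {
    ICOUNT_DISABLED = 0,   /* do not count executed instructions */
    ICOUNT_PRECISE,        /* fixed insn->ns conversion via "shift" */
    ICOUNT_ADAPTATIVE,     /* runtime adaptive shift computation */
} ICountMode;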
/*
@@ -254,7 +256,7 @@ static void icount_warp_rt(void)
int64_t warp_delta;
warp_delta = clock - timers_state.vm_clock_warp_start;
if (icount_enabled() == ICOUNT_ADAPTATIVE) {
if (icount_enabled() == 2) {
/*
* In adaptive mode, do not let QEMU_CLOCK_VIRTUAL run too far
* ahead of real time (it might already be ahead so careful not
@@ -336,8 +338,10 @@ void icount_start_warp_timer(void)
deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL,
~QEMU_TIMER_ATTR_EXTERNAL);
if (deadline < 0) {
if (!icount_sleep) {
warn_report_once("icount sleep disabled and no active timers");
static bool notified;
if (!icount_sleep && !notified) {
warn_report("icount sleep disabled and no active timers");
notified = true;
}
return;
}
@@ -415,7 +419,7 @@ void icount_account_warp_timer(void)
icount_warp_rt();
}
bool icount_configure(QemuOpts *opts, Error **errp)
void icount_configure(QemuOpts *opts, Error **errp)
{
const char *option = qemu_opt_get(opts, "shift");
bool sleep = qemu_opt_get_bool(opts, "sleep", true);
@@ -425,28 +429,27 @@ bool icount_configure(QemuOpts *opts, Error **errp)
if (!option) {
if (qemu_opt_get(opts, "align") != NULL) {
error_setg(errp, "Please specify shift option when using align");
return false;
}
return true;
return;
}
if (align && !sleep) {
error_setg(errp, "align=on and sleep=off are incompatible");
return false;
return;
}
if (strcmp(option, "auto") != 0) {
if (qemu_strtol(option, NULL, 0, &time_shift) < 0
|| time_shift < 0 || time_shift > MAX_ICOUNT_SHIFT) {
error_setg(errp, "icount: Invalid shift value");
return false;
return;
}
} else if (icount_align_option) {
error_setg(errp, "shift=auto and align=on are incompatible");
return false;
return;
} else if (!icount_sleep) {
error_setg(errp, "shift=auto and sleep=off are incompatible");
return false;
return;
}
icount_sleep = sleep;
@@ -460,7 +463,7 @@ bool icount_configure(QemuOpts *opts, Error **errp)
if (time_shift >= 0) {
timers_state.icount_time_shift = time_shift;
icount_enable_precise();
return true;
return;
}
icount_enable_adaptive();
@@ -488,14 +491,11 @@ bool icount_configure(QemuOpts *opts, Error **errp)
timer_mod(timers_state.icount_vm_timer,
qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
NANOSECONDS_PER_SECOND / 10);
return true;
}
void icount_notify_exit(void)
{
assert(icount_enabled());
if (current_cpu) {
if (icount_enabled() && current_cpu) {
qemu_cpu_kick(current_cpu);
qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
}
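Hedged caller sketch for the bool-returning icount_configure() above (example_configure() is hypothetical):
static void example_configure(QemuOpts *opts)
{
    Error *err = NULL;

    if (!icount_configure(opts, &err)) {
        error_report_err(err);   /* print and free the error */
        exit(1);
    }
}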


@@ -9,51 +9,18 @@
#ifndef ACCEL_TCG_INTERNAL_COMMON_H
#define ACCEL_TCG_INTERNAL_COMMON_H
#include "exec/cpu-common.h"
#include "exec/translation-block.h"
extern int64_t max_delay;
extern int64_t max_advance;
extern bool one_insn_per_tb;
/*
* Return true if CS is not running in parallel with other cpus, either
* because there are no other cpus or we are within an exclusive context.
*/
static inline bool cpu_in_serial_context(CPUState *cs)
{
return !tcg_cflags_has(cs, CF_PARALLEL) || cpu_in_exclusive_context(cs);
return !(cs->tcg_cflags & CF_PARALLEL) || cpu_in_exclusive_context(cs);
}
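Usage note for the helper above (illustration only):
/*
 * A typical caller picks a cheaper path when no race is possible:
 * if cpu_in_serial_context(cs) is true, plain stores suffice since
 * no other vCPU runs concurrently; otherwise the work is deferred
 * to an exclusive context, e.g. via async_safe_run_on_cpu().
 */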
/**
* cpu_plugin_mem_cbs_enabled() - are plugin memory callbacks enabled?
* @cs: CPUState pointer
*
* The memory callbacks are installed if a plugin has instrumented an
* instruction for memory. This can be useful to know if you want to
* force a slow path for a series of memory accesses.
*/
static inline bool cpu_plugin_mem_cbs_enabled(const CPUState *cpu)
{
#ifdef CONFIG_PLUGIN
return !!cpu->neg.plugin_mem_cbs;
#else
return false;
#endif
}
TranslationBlock *tb_gen_code(CPUState *cpu, vaddr pc,
uint64_t cs_base, uint32_t flags,
int cflags);
void page_init(void);
void tb_htable_init(void);
void tb_reset_jump(TranslationBlock *tb, int n);
TranslationBlock *tb_link_page(TranslationBlock *tb);
void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
uintptr_t host_pc);
bool tcg_exec_realizefn(CPUState *cpu, Error **errp);
void tcg_exec_unrealizefn(CPUState *cpu);
#endif


@@ -69,7 +69,19 @@ void tb_invalidate_phys_range_fast(ram_addr_t ram_addr,
G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
#endif /* CONFIG_SOFTMMU */
TranslationBlock *tb_gen_code(CPUState *cpu, vaddr pc,
uint64_t cs_base, uint32_t flags,
int cflags);
void page_init(void);
void tb_htable_init(void);
void tb_reset_jump(TranslationBlock *tb, int n);
TranslationBlock *tb_link_page(TranslationBlock *tb);
bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc);
void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
uintptr_t host_pc);
bool tcg_exec_realizefn(CPUState *cpu, Error **errp);
void tcg_exec_unrealizefn(CPUState *cpu);
/* Return the current PC from CPU, which may be cached in TB. */
static inline vaddr log_pc(CPUState *cpu, const TranslationBlock *tb)
@@ -81,6 +93,8 @@ static inline vaddr log_pc(CPUState *cpu, const TranslationBlock *tb)
}
}
extern bool one_insn_per_tb;
/**
* tcg_req_mo:
* @type: TCGBar


@@ -9,8 +9,8 @@
* See the COPYING file in the top-level directory.
*/
#include "host/load-extract-al16-al8.h.inc"
#include "host/store-insert-al16.h.inc"
#include "host/load-extract-al16-al8.h"
#include "host/store-insert-al16.h"
#ifdef CONFIG_ATOMIC64
# define HAVE_al8 true
@@ -76,7 +76,7 @@ static int required_atomicity(CPUState *cpu, uintptr_t p, MemOp memop)
/*
* Examine the alignment of p to determine if there are subobjects
* that must be aligned. Note that we only really need ctz4() --
* any more significant bits are discarded by the immediately
* any more sigificant bits are discarded by the immediately
* following comparison.
*/
tmp = ctz32(p);


@@ -125,9 +125,7 @@ void helper_st_i128(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi)
static void plugin_load_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi)
{
if (cpu_plugin_mem_cbs_enabled(env_cpu(env))) {
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
}
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
}
uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra)
@@ -190,9 +188,7 @@ Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
static void plugin_store_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi)
{
if (cpu_plugin_mem_cbs_enabled(env_cpu(env))) {
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
}
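Note on this pair of hunks (illustration only):
/*
 * One side gates the qemu_plugin_vcpu_mem_cb() call on
 * cpu_plugin_mem_cbs_enabled(), so when no plugin has instrumented
 * the instruction for memory, the dispatch is skipped entirely on
 * this hot load/store path.
 */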
void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
@@ -358,8 +354,7 @@ void cpu_stq_le_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint64_t val,
uint32_t cpu_ldub_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldub_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_ldub_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
int cpu_ldsb_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
@@ -369,8 +364,7 @@ int cpu_ldsb_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
uint32_t cpu_lduw_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_lduw_be_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_lduw_be_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
int cpu_ldsw_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
@@ -380,20 +374,17 @@ int cpu_ldsw_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
uint32_t cpu_ldl_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldl_be_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_ldl_be_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
uint64_t cpu_ldq_be_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldq_be_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_ldq_be_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
uint32_t cpu_lduw_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_lduw_le_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_lduw_le_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
int cpu_ldsw_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
@@ -403,63 +394,54 @@ int cpu_ldsw_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
uint32_t cpu_ldl_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldl_le_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_ldl_le_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
uint64_t cpu_ldq_le_data_ra(CPUArchState *env, abi_ptr addr, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
return cpu_ldq_le_mmuidx_ra(env, addr, mmu_index, ra);
return cpu_ldq_le_mmuidx_ra(env, addr, cpu_mmu_index(env, false), ra);
}
void cpu_stb_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stb_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stb_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
void cpu_stw_be_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stw_be_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stw_be_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
void cpu_stl_be_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stl_be_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stl_be_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
void cpu_stq_be_data_ra(CPUArchState *env, abi_ptr addr,
uint64_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stq_be_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stq_be_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
void cpu_stw_le_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stw_le_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stw_le_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
void cpu_stl_le_data_ra(CPUArchState *env, abi_ptr addr,
uint32_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stl_le_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stl_le_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
void cpu_stq_le_data_ra(CPUArchState *env, abi_ptr addr,
uint64_t val, uintptr_t ra)
{
int mmu_index = cpu_mmu_index(env_cpu(env), false);
cpu_stq_le_mmuidx_ra(env, addr, val, mmu_index, ra);
cpu_stq_le_mmuidx_ra(env, addr, val, cpu_mmu_index(env, false), ra);
}
/*--------------------------*/


@@ -1,8 +1,8 @@
tcg_ss = ss.source_set()
common_ss.add(when: 'CONFIG_TCG', if_true: files(
'cpu-exec-common.c',
))
tcg_specific_ss = ss.source_set()
tcg_specific_ss.add(files(
tcg_ss.add(files(
'tcg-all.c',
'cpu-exec.c',
'tb-maint.c',
@@ -11,16 +11,17 @@ tcg_specific_ss.add(files(
'translate-all.c',
'translator.c',
))
tcg_specific_ss.add(when: 'CONFIG_USER_ONLY', if_true: files('user-exec.c'))
tcg_specific_ss.add(when: 'CONFIG_SYSTEM_ONLY', if_false: files('user-exec-stub.c'))
tcg_ss.add(when: 'CONFIG_USER_ONLY', if_true: files('user-exec.c'))
tcg_ss.add(when: 'CONFIG_SYSTEM_ONLY', if_false: files('user-exec-stub.c'))
if get_option('plugins')
tcg_specific_ss.add(files('plugin-gen.c'))
tcg_ss.add(files('plugin-gen.c'))
endif
specific_ss.add_all(when: 'CONFIG_TCG', if_true: tcg_specific_ss)
tcg_ss.add(when: libdw, if_true: files('debuginfo.c'))
tcg_ss.add(when: 'CONFIG_LINUX', if_true: files('perf.c'))
specific_ss.add_all(when: 'CONFIG_TCG', if_true: tcg_ss)
specific_ss.add(when: ['CONFIG_SYSTEM_ONLY', 'CONFIG_TCG'], if_true: files(
'cputlb.c',
'watchpoint.c',
))
system_ss.add(when: ['CONFIG_TCG'], if_true: files(


@@ -10,13 +10,13 @@
#include "qemu/osdep.h"
#include "elf.h"
#include "exec/target_page.h"
#include "exec/translation-block.h"
#include "exec/exec-all.h"
#include "qemu/timer.h"
#include "tcg/debuginfo.h"
#include "tcg/perf.h"
#include "tcg/tcg.h"
#include "debuginfo.h"
#include "perf.h"
static FILE *safe_fopen_w(const char *path)
{
int saved_errno;
@@ -335,7 +335,11 @@ void perf_report_code(uint64_t guest_pc, TranslationBlock *tb,
/* FIXME: This replicates the restore_state_to_opc() logic. */
q[insn].address = gen_insn_data[insn * start_words + 0];
if (tb_cflags(tb) & CF_PCREL) {
q[insn].address |= (guest_pc & qemu_target_page_mask());
q[insn].address |= (guest_pc & TARGET_PAGE_MASK);
} else {
#if defined(TARGET_I386)
q[insn].address -= tb->cs_base;
#endif
}
q[insn].flags = DEBUGINFO_SYMBOL | (jitdump ? DEBUGINFO_LINE : 0);
}


@@ -4,8 +4,8 @@
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef TCG_PERF_H
#define TCG_PERF_H
#ifndef ACCEL_TCG_PERF_H
#define ACCEL_TCG_PERF_H
#if defined(CONFIG_TCG) && defined(CONFIG_LINUX)
/* Start writing perf-<pid>.map. */

File diff suppressed because it is too large

@@ -0,0 +1,4 @@
#ifdef CONFIG_PLUGIN
DEF_HELPER_FLAGS_2(plugin_vcpu_udata_cb, TCG_CALL_NO_RWG | TCG_CALL_PLUGIN, void, i32, ptr)
DEF_HELPER_FLAGS_4(plugin_vcpu_mem_cb, TCG_CALL_NO_RWG | TCG_CALL_PLUGIN, void, i32, i32, i64, ptr)
#endif


@@ -9,25 +9,20 @@
#ifndef ACCEL_TCG_TB_JMP_CACHE_H
#define ACCEL_TCG_TB_JMP_CACHE_H
#include "qemu/rcu.h"
#include "exec/cpu-common.h"
#define TB_JMP_CACHE_BITS 12
#define TB_JMP_CACHE_SIZE (1 << TB_JMP_CACHE_BITS)
/*
* Invalidated in parallel; all accesses to 'tb' must be atomic.
* A valid entry is read/written by a single CPU, therefore there is
* no need for qatomic_rcu_read() and pc is always consistent with a
* non-NULL value of 'tb'. Strictly speaking pc is only needed for
* CF_PCREL, but it's used always for simplicity.
* Accessed in parallel; all accesses to 'tb' must be atomic.
* For CF_PCREL, accesses to 'pc' must be protected by a
* load_acquire/store_release to 'tb'.
*/
typedef struct CPUJumpCache {
struct CPUJumpCache {
struct rcu_head rcu;
struct {
TranslationBlock *tb;
vaddr pc;
} array[TB_JMP_CACHE_SIZE];
} CPUJumpCache;
};
#endif /* ACCEL_TCG_TB_JMP_CACHE_H */
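A sketch of the acquire/release pairing referenced in this hunk's comment, assuming the qatomic_* helpers from qemu/atomic.h; jc_publish() and jc_lookup() are hypothetical names:
static void jc_publish(CPUJumpCache *jc, unsigned hash,
                       TranslationBlock *tb, vaddr pc)
{
    jc->array[hash].pc = pc;                        /* write pc first */
    qatomic_store_release(&jc->array[hash].tb, tb); /* then publish tb */
}

static TranslationBlock *jc_lookup(CPUJumpCache *jc, unsigned hash,
                                   vaddr *pc_out)
{
    TranslationBlock *tb = qatomic_load_acquire(&jc->array[hash].tb);
    if (tb) {
        *pc_out = jc->array[hash].pc;  /* ordered after the tb load */
    }
    return tb;
}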


@@ -23,7 +23,6 @@
#include "exec/cputlb.h"
#include "exec/log.h"
#include "exec/exec-all.h"
#include "exec/page-protection.h"
#include "exec/tb-flush.h"
#include "exec/translate-all.h"
#include "sysemu/tcg.h"
@@ -713,7 +712,7 @@ static void tb_record(TranslationBlock *tb)
tb_page_addr_t paddr0 = tb_page_addr0(tb);
tb_page_addr_t paddr1 = tb_page_addr1(tb);
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr0 >> TARGET_PAGE_BITS;
assert(paddr0 != -1);
if (unlikely(paddr1 != -1) && pindex0 != pindex1) {
@@ -745,7 +744,7 @@ static void tb_remove(TranslationBlock *tb)
tb_page_addr_t paddr0 = tb_page_addr0(tb);
tb_page_addr_t paddr1 = tb_page_addr1(tb);
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr0 >> TARGET_PAGE_BITS;
assert(paddr0 != -1);
if (unlikely(paddr1 != -1) && pindex0 != pindex1) {
@@ -1022,7 +1021,7 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t last)
* Called with mmap_lock held for user-mode emulation
* NOTE: this function must not be called while a TB is running.
*/
static void tb_invalidate_phys_page(tb_page_addr_t addr)
void tb_invalidate_phys_page(tb_page_addr_t addr)
{
tb_page_addr_t start, last;
@@ -1084,7 +1083,8 @@ bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
if (current_tb_modified) {
/* Force execution of one insn next time. */
CPUState *cpu = current_cpu;
cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
cpu->cflags_next_tb =
1 | CF_LAST_IO | CF_NOIRQ | curr_cflags(current_cpu);
return true;
}
return false;
@@ -1154,13 +1154,36 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
if (current_tb_modified) {
page_collection_unlock(pages);
/* Force execution of one insn next time. */
current_cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
current_cpu->cflags_next_tb =
1 | CF_LAST_IO | CF_NOIRQ | curr_cflags(current_cpu);
mmap_unlock();
cpu_loop_exit_noexc(current_cpu);
}
#endif
}
/*
* Invalidate all TBs which intersect with the target physical
* address page @addr.
*/
void tb_invalidate_phys_page(tb_page_addr_t addr)
{
struct page_collection *pages;
tb_page_addr_t start, last;
PageDesc *p;
p = page_find(addr >> TARGET_PAGE_BITS);
if (p == NULL) {
return;
}
start = addr & TARGET_PAGE_MASK;
last = addr | ~TARGET_PAGE_MASK;
pages = page_collection_lock(start, last);
tb_invalidate_phys_page_range__locked(pages, p, start, last, 0);
page_collection_unlock(pages);
}
/*
* Invalidate all TBs which intersect with the target physical address range
* [start;last]. NOTE: start and end may refer to *different* physical pages.


@@ -123,12 +123,12 @@ void icount_prepare_for_run(CPUState *cpu, int64_t cpu_budget)
if (cpu->icount_budget == 0) {
/*
* We're called without the BQL, so must take it while
* We're called without the iothread lock, so must take it while
* we're calling timer handlers.
*/
bql_lock();
qemu_mutex_lock_iothread();
icount_notify_aio_contexts();
bql_unlock();
qemu_mutex_unlock_iothread();
}
}


@@ -76,7 +76,7 @@ static void *mttcg_cpu_thread_fn(void *arg)
rcu_add_force_rcu_notifier(&force_rcu.notifier);
tcg_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
@@ -91,9 +91,9 @@ static void *mttcg_cpu_thread_fn(void *arg)
do {
if (cpu_can_run(cpu)) {
int r;
bql_unlock();
r = tcg_cpu_exec(cpu);
bql_lock();
qemu_mutex_unlock_iothread();
r = tcg_cpus_exec(cpu);
qemu_mutex_lock_iothread();
switch (r) {
case EXCP_DEBUG:
cpu_handle_guest_debug(cpu);
@@ -105,9 +105,9 @@ static void *mttcg_cpu_thread_fn(void *arg)
*/
break;
case EXCP_ATOMIC:
bql_unlock();
qemu_mutex_unlock_iothread();
cpu_exec_step_atomic(cpu);
bql_lock();
qemu_mutex_lock_iothread();
default:
/* Ignore everything else? */
break;
@@ -118,8 +118,8 @@ static void *mttcg_cpu_thread_fn(void *arg)
qemu_wait_io_event(cpu);
} while (!cpu->unplug || cpu_can_run(cpu));
tcg_cpu_destroy(cpu);
bql_unlock();
tcg_cpus_destroy(cpu);
qemu_mutex_unlock_iothread();
rcu_remove_force_rcu_notifier(&force_rcu.notifier);
rcu_unregister_thread();
return NULL;
@@ -137,6 +137,10 @@ void mttcg_start_vcpu_thread(CPUState *cpu)
g_assert(tcg_enabled());
tcg_cpu_init_cflags(cpu, current_machine->smp.max_cpus > 1);
cpu->thread = g_new0(QemuThread, 1);
cpu->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(cpu->halt_cond);
/* create a thread per vCPU with TCG (MTTCG) */
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/TCG",
cpu->cpu_index);


@@ -111,7 +111,7 @@ static void rr_wait_io_event(void)
while (all_cpu_threads_idle()) {
rr_stop_kick_timer();
qemu_cond_wait_bql(first_cpu->halt_cond);
qemu_cond_wait_iothread(first_cpu->halt_cond);
}
rr_start_kick_timer();
@@ -131,7 +131,7 @@ static void rr_deal_with_unplugged_cpus(void)
CPU_FOREACH(cpu) {
if (cpu->unplug && !cpu_can_run(cpu)) {
tcg_cpu_destroy(cpu);
tcg_cpus_destroy(cpu);
break;
}
}
@@ -188,7 +188,7 @@ static void *rr_cpu_thread_fn(void *arg)
rcu_add_force_rcu_notifier(&force_rcu);
tcg_register_thread();
bql_lock();
qemu_mutex_lock_iothread();
qemu_thread_get_self(cpu->thread);
cpu->thread_id = qemu_get_thread_id();
@@ -198,7 +198,7 @@ static void *rr_cpu_thread_fn(void *arg)
/* wait for initial kick-off after machine start */
while (first_cpu->stopped) {
qemu_cond_wait_bql(first_cpu->halt_cond);
qemu_cond_wait_iothread(first_cpu->halt_cond);
/* process any pending work */
CPU_FOREACH(cpu) {
@@ -218,9 +218,9 @@ static void *rr_cpu_thread_fn(void *arg)
/* Only used for icount_enabled() */
int64_t cpu_budget = 0;
bql_unlock();
qemu_mutex_unlock_iothread();
replay_mutex_lock();
bql_lock();
qemu_mutex_lock_iothread();
if (icount_enabled()) {
int cpu_count = rr_cpu_count();
@@ -254,23 +254,23 @@ static void *rr_cpu_thread_fn(void *arg)
if (cpu_can_run(cpu)) {
int r;
bql_unlock();
qemu_mutex_unlock_iothread();
if (icount_enabled()) {
icount_prepare_for_run(cpu, cpu_budget);
}
r = tcg_cpu_exec(cpu);
r = tcg_cpus_exec(cpu);
if (icount_enabled()) {
icount_process_data(cpu);
}
bql_lock();
qemu_mutex_lock_iothread();
if (r == EXCP_DEBUG) {
cpu_handle_guest_debug(cpu);
break;
} else if (r == EXCP_ATOMIC) {
bql_unlock();
qemu_mutex_unlock_iothread();
cpu_exec_step_atomic(cpu);
bql_lock();
qemu_mutex_lock_iothread();
break;
}
} else if (cpu->stop) {
@@ -317,23 +317,22 @@ void rr_start_vcpu_thread(CPUState *cpu)
tcg_cpu_init_cflags(cpu, false);
if (!single_tcg_cpu_thread) {
single_tcg_halt_cond = cpu->halt_cond;
single_tcg_cpu_thread = cpu->thread;
cpu->thread = g_new0(QemuThread, 1);
cpu->halt_cond = g_new0(QemuCond, 1);
qemu_cond_init(cpu->halt_cond);
/* share a single thread for all cpus with TCG */
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "ALL CPUs/TCG");
qemu_thread_create(cpu->thread, thread_name,
rr_cpu_thread_fn,
cpu, QEMU_THREAD_JOINABLE);
single_tcg_halt_cond = cpu->halt_cond;
single_tcg_cpu_thread = cpu->thread;
} else {
/* we share the thread, dump spare data */
g_free(cpu->thread);
qemu_cond_destroy(cpu->halt_cond);
g_free(cpu->halt_cond);
/* we share the thread */
cpu->thread = single_tcg_cpu_thread;
cpu->halt_cond = single_tcg_halt_cond;
/* copy the stuff done at start of rr_cpu_thread_fn */
cpu->thread_id = first_cpu->thread_id;
cpu->neg.can_do_io = 1;
cpu->created = true;


@@ -35,9 +35,7 @@
#include "exec/exec-all.h"
#include "exec/hwaddr.h"
#include "exec/tb-flush.h"
#include "gdbstub/enums.h"
#include "hw/core/cpu.h"
#include "exec/gdbstub.h"
#include "tcg-accel-ops.h"
#include "tcg-accel-ops-mttcg.h"
@@ -62,15 +60,15 @@ void tcg_cpu_init_cflags(CPUState *cpu, bool parallel)
cflags |= parallel ? CF_PARALLEL : 0;
cflags |= icount_enabled() ? CF_USE_ICOUNT : 0;
tcg_cflags_set(cpu, cflags);
cpu->tcg_cflags |= cflags;
}
void tcg_cpu_destroy(CPUState *cpu)
void tcg_cpus_destroy(CPUState *cpu)
{
cpu_thread_signal_destroyed(cpu);
}
int tcg_cpu_exec(CPUState *cpu)
int tcg_cpus_exec(CPUState *cpu)
{
int ret;
assert(tcg_enabled());
@@ -90,7 +88,7 @@ static void tcg_cpu_reset_hold(CPUState *cpu)
/* mask must never be zero, except for A20 change call */
void tcg_handle_interrupt(CPUState *cpu, int mask)
{
g_assert(bql_locked());
g_assert(qemu_mutex_iothread_locked());
cpu->interrupt_request |= mask;


@@ -14,8 +14,8 @@
#include "sysemu/cpus.h"
void tcg_cpu_destroy(CPUState *cpu);
int tcg_cpu_exec(CPUState *cpu);
void tcg_cpus_destroy(CPUState *cpu);
int tcg_cpus_exec(CPUState *cpu);
void tcg_handle_interrupt(CPUState *cpu, int mask);
void tcg_cpu_init_cflags(CPUState *cpu, bool parallel);


@@ -38,7 +38,7 @@
#if !defined(CONFIG_USER_ONLY)
#include "hw/boards.h"
#endif
#include "internal-common.h"
#include "internal-target.h"
struct TCGState {
AccelState parent_obj;


@@ -63,7 +63,7 @@
#include "tb-context.h"
#include "internal-common.h"
#include "internal-target.h"
#include "tcg/perf.h"
#include "perf.h"
#include "tcg/insn-start-words.h"
TBContext tb_ctx;
@@ -256,6 +256,7 @@ bool cpu_unwind_state_data(CPUState *cpu, uintptr_t host_pc, uint64_t *data)
void page_init(void)
{
page_size_init();
page_table_config_init();
}
@@ -303,7 +304,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
if (phys_pc == -1) {
/* Generate a one-shot TB with 1 insn in it */
cflags = (cflags & ~CF_COUNT_MASK) | 1;
cflags = (cflags & ~CF_COUNT_MASK) | CF_LAST_IO | 1;
}
max_insns = cflags & CF_COUNT_MASK;
@@ -631,10 +632,10 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
* operations only (which execute after completion) so we don't
* double instrument the instruction.
*/
cpu->cflags_next_tb = curr_cflags(cpu) | CF_MEMI_ONLY | n;
cpu->cflags_next_tb = curr_cflags(cpu) | CF_MEMI_ONLY | CF_LAST_IO | n;
if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
vaddr pc = cpu->cc->get_pc(cpu);
vaddr pc = log_pc(cpu, tb);
if (qemu_log_in_addr_range(pc)) {
qemu_log("cpu_io_recompile: rewound execution of TB to %016"
VADDR_PRIx "\n", pc);
@@ -644,6 +645,15 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
cpu_loop_exit_noexc(cpu);
}
#else /* CONFIG_USER_ONLY */
void cpu_interrupt(CPUState *cpu, int mask)
{
g_assert(qemu_mutex_iothread_locked());
cpu->interrupt_request |= mask;
qatomic_set(&cpu->neg.icount_decr.u16.high, -1);
}
#endif /* CONFIG_USER_ONLY */
/*


@@ -12,23 +12,26 @@
#include "qemu/error-report.h"
#include "exec/exec-all.h"
#include "exec/translator.h"
#include "exec/cpu_ldst.h"
#include "exec/plugin-gen.h"
#include "exec/cpu_ldst.h"
#include "tcg/tcg-op-common.h"
#include "internal-target.h"
#include "disas/disas.h"
static void set_can_do_io(DisasContextBase *db, bool val)
{
QEMU_BUILD_BUG_ON(sizeof_field(CPUState, neg.can_do_io) != 1);
tcg_gen_st8_i32(tcg_constant_i32(val), tcg_env,
offsetof(ArchCPU, parent_obj.neg.can_do_io) -
offsetof(ArchCPU, env));
if (db->saved_can_do_io != val) {
db->saved_can_do_io = val;
QEMU_BUILD_BUG_ON(sizeof_field(CPUState, neg.can_do_io) != 1);
tcg_gen_st8_i32(tcg_constant_i32(val), tcg_env,
offsetof(ArchCPU, parent_obj.neg.can_do_io) -
offsetof(ArchCPU, env));
}
}
bool translator_io_start(DisasContextBase *db)
{
set_can_do_io(db, true);
/*
* Ensure that this instruction will be the last in the TB.
* The target may override this to something more forceful.
@@ -81,6 +84,13 @@ static TCGOp *gen_tb_start(DisasContextBase *db, uint32_t cflags)
- offsetof(ArchCPU, env));
}
/*
* cpu->neg.can_do_io is set automatically here at the beginning of
* each translation block. The cost is minimal, plus it would be
* very easy to forget doing it in the translator.
*/
set_can_do_io(db, db->max_insns == 1 && (cflags & CF_LAST_IO));
return icount_start_insn;
}
@@ -119,7 +129,6 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
{
uint32_t cflags = tb_cflags(tb);
TCGOp *icount_start_insn;
TCGOp *first_insn_start = NULL;
bool plugin_enabled;
/* Initialize DisasContext */
@@ -130,12 +139,9 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
db->num_insns = 0;
db->max_insns = *max_insns;
db->singlestep_enabled = cflags & CF_SINGLE_STEP;
db->insn_start = NULL;
db->fake_insn = false;
db->saved_can_do_io = -1;
db->host_addr[0] = host_pc;
db->host_addr[1] = NULL;
db->record_start = 0;
db->record_len = 0;
ops->init_disas_context(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
@@ -145,28 +151,32 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
ops->tb_start(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
plugin_enabled = plugin_gen_tb_start(cpu, db);
if (cflags & CF_MEMI_ONLY) {
/* We should only see CF_MEMI_ONLY for io_recompile. */
assert(cflags & CF_LAST_IO);
plugin_enabled = plugin_gen_tb_start(cpu, db, true);
} else {
plugin_enabled = plugin_gen_tb_start(cpu, db, false);
}
db->plugin_enabled = plugin_enabled;
while (true) {
*max_insns = ++db->num_insns;
ops->insn_start(db, cpu);
db->insn_start = tcg_last_op();
if (first_insn_start == NULL) {
first_insn_start = db->insn_start;
}
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
if (plugin_enabled) {
plugin_gen_insn_start(cpu, db);
}
/*
* Disassemble one instruction. The translate_insn hook should
* update db->pc_next and db->is_jmp to indicate what should be
* done next -- either exiting this loop or locate the start of
* the next instruction.
*/
/* Disassemble one instruction. The translate_insn hook should
update db->pc_next and db->is_jmp to indicate what should be
done next -- either exiting this loop or locate the start of
the next instruction. */
if (db->num_insns == db->max_insns && (cflags & CF_LAST_IO)) {
/* Accept I/O on the last instruction. */
set_can_do_io(db, true);
}
ops->translate_insn(db, cpu);
/*
@@ -199,277 +209,172 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
ops->tb_stop(db, cpu);
gen_tb_end(tb, cflags, icount_start_insn, db->num_insns);
/*
* Manage can_do_io for the translation block: set to false before
* the first insn and set to true before the last insn.
*/
if (db->num_insns == 1) {
tcg_debug_assert(first_insn_start == db->insn_start);
} else {
tcg_debug_assert(first_insn_start != db->insn_start);
tcg_ctx->emit_before_op = first_insn_start;
set_can_do_io(db, false);
}
tcg_ctx->emit_before_op = db->insn_start;
set_can_do_io(db, true);
tcg_ctx->emit_before_op = NULL;
/* May be used by disas_log or plugin callbacks. */
tb->size = db->pc_next - db->pc_first;
tb->icount = db->num_insns;
if (plugin_enabled) {
plugin_gen_tb_end(cpu, db->num_insns);
}
/* The disas_log hook may use these values rather than recompute. */
tb->size = db->pc_next - db->pc_first;
tb->icount = db->num_insns;
if (qemu_loglevel_mask(CPU_LOG_TB_IN_ASM)
&& qemu_log_in_addr_range(db->pc_first)) {
FILE *logfile = qemu_log_trylock();
if (logfile) {
fprintf(logfile, "----------------\n");
if (!ops->disas_log ||
!ops->disas_log(db, cpu, logfile)) {
fprintf(logfile, "IN: %s\n", lookup_symbol(db->pc_first));
target_disas(logfile, cpu, db);
}
ops->disas_log(db, cpu, logfile);
fprintf(logfile, "\n");
qemu_log_unlock(logfile);
}
}
}
static bool translator_ld(CPUArchState *env, DisasContextBase *db,
void *dest, vaddr pc, size_t len)
static void *translator_access(CPUArchState *env, DisasContextBase *db,
vaddr pc, size_t len)
{
TranslationBlock *tb = db->tb;
vaddr last = pc + len - 1;
void *host;
vaddr base;
vaddr base, end;
TranslationBlock *tb;
tb = db->tb;
/* Use slow path if first page is MMIO. */
if (unlikely(tb_page_addr0(tb) == -1)) {
/* We capped translation with first page MMIO in tb_gen_code. */
tcg_debug_assert(db->max_insns == 1);
return false;
return NULL;
}
host = db->host_addr[0];
base = db->pc_first;
if (likely(((base ^ last) & TARGET_PAGE_MASK) == 0)) {
/* Entire read is from the first page. */
memcpy(dest, host + (pc - base), len);
return true;
}
if (unlikely(((base ^ pc) & TARGET_PAGE_MASK) == 0)) {
/* Read begins on the first page and extends to the second. */
size_t len0 = -(pc | TARGET_PAGE_MASK);
memcpy(dest, host + (pc - base), len0);
pc += len0;
dest += len0;
len -= len0;
}
/*
* The read must conclude on the second page and not extend to a third.
*
* TODO: We could allow the two pages to be virtually discontiguous,
* since we already allow the two pages to be physically discontiguous.
* The only reasonable use case would be executing an insn at the end
* of the address space wrapping around to the beginning. For that,
* we would need to know the current width of the address space.
* In the meantime, assert.
*/
base = (base & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
assert(((base ^ pc) & TARGET_PAGE_MASK) == 0);
assert(((base ^ last) & TARGET_PAGE_MASK) == 0);
host = db->host_addr[1];
if (host == NULL) {
tb_page_addr_t page0, old_page1, new_page1;
new_page1 = get_page_addr_code_hostp(env, base, &db->host_addr[1]);
/*
* If the second page is MMIO, treat as if the first page
* was MMIO as well, so that we do not cache the TB.
*/
if (unlikely(new_page1 == -1)) {
tb_unlock_pages(tb);
tb_set_page_addr0(tb, -1);
/* Require that this be the final insn. */
db->max_insns = db->num_insns;
return false;
}
/*
* If this is not the first time around, and page1 matches,
* then we already have the page locked. Alternately, we're
* not doing anything to prevent the PTE from changing, so
* we might wind up with a different page, requiring us to
* re-do the locking.
*/
old_page1 = tb_page_addr1(tb);
if (likely(new_page1 != old_page1)) {
page0 = tb_page_addr0(tb);
if (unlikely(old_page1 != -1)) {
tb_unlock_page1(page0, old_page1);
}
tb_set_page_addr1(tb, new_page1);
tb_lock_page1(page0, new_page1);
}
end = pc + len - 1;
if (likely(is_same_page(db, end))) {
host = db->host_addr[0];
base = db->pc_first;
} else {
host = db->host_addr[1];
base = TARGET_PAGE_ALIGN(db->pc_first);
if (host == NULL) {
tb_page_addr_t page0, old_page1, new_page1;
new_page1 = get_page_addr_code_hostp(env, base, &db->host_addr[1]);
/*
* If the second page is MMIO, treat as if the first page
* was MMIO as well, so that we do not cache the TB.
*/
if (unlikely(new_page1 == -1)) {
tb_unlock_pages(tb);
tb_set_page_addr0(tb, -1);
return NULL;
}
/*
* If this is not the first time around, and page1 matches,
* then we already have the page locked. Alternately, we're
* not doing anything to prevent the PTE from changing, so
* we might wind up with a different page, requiring us to
* re-do the locking.
*/
old_page1 = tb_page_addr1(tb);
if (likely(new_page1 != old_page1)) {
page0 = tb_page_addr0(tb);
if (unlikely(old_page1 != -1)) {
tb_unlock_page1(page0, old_page1);
}
tb_set_page_addr1(tb, new_page1);
tb_lock_page1(page0, new_page1);
}
host = db->host_addr[1];
}
/* Use slow path when crossing pages. */
if (is_same_page(db, pc)) {
return NULL;
}
}
memcpy(dest, host + (pc - base), len);
return true;
tcg_debug_assert(pc >= base);
return host + (pc - base);
}
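Worked arithmetic for the split-read path above (illustration, assuming 4 KiB target pages):
/*
 * len0 = -(pc | TARGET_PAGE_MASK) is the number of bytes left on
 * pc's page: with TARGET_PAGE_MASK == ~0xfff and a page offset of
 * 0xffe, pc | TARGET_PAGE_MASK == -2 as a signed value, so
 * len0 == 2 -- exactly the two bytes before the page boundary.
 */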
static void record_save(DisasContextBase *db, vaddr pc,
const void *from, int size)
static void plugin_insn_append(abi_ptr pc, const void *from, size_t size)
{
int offset;
#ifdef CONFIG_PLUGIN
struct qemu_plugin_insn *insn = tcg_ctx->plugin_insn;
abi_ptr off;
/* Do not record probes before the start of TB. */
if (pc < db->pc_first) {
if (insn == NULL) {
return;
}
/*
* In translator_access, we verified that pc is within 2 pages
* of pc_first, thus this will never overflow.
*/
offset = pc - db->pc_first;
/*
* Either the first or second page may be I/O. If it is the second,
* then the first byte we need to record will be at a non-zero offset.
* In either case, we should not need to record but a single insn.
*/
if (db->record_len == 0) {
db->record_start = offset;
db->record_len = size;
} else {
assert(offset == db->record_start + db->record_len);
assert(db->record_len + size <= sizeof(db->record));
db->record_len += size;
off = pc - insn->vaddr;
if (off < insn->data->len) {
g_byte_array_set_size(insn->data, off);
} else if (off > insn->data->len) {
/* we have an unexpected gap */
g_assert_not_reached();
}
memcpy(db->record + (offset - db->record_start), from, size);
insn->data = g_byte_array_append(insn->data, from, size);
#endif
}
size_t translator_st_len(const DisasContextBase *db)
uint8_t translator_ldub(CPUArchState *env, DisasContextBase *db, abi_ptr pc)
{
return db->fake_insn ? db->record_len : db->tb->size;
uint8_t ret;
void *p = translator_access(env, db, pc, sizeof(ret));
if (p) {
plugin_insn_append(pc, p, sizeof(ret));
return ldub_p(p);
}
ret = cpu_ldub_code(env, pc);
plugin_insn_append(pc, &ret, sizeof(ret));
return ret;
}
bool translator_st(const DisasContextBase *db, void *dest,
vaddr addr, size_t len)
uint16_t translator_lduw(CPUArchState *env, DisasContextBase *db, abi_ptr pc)
{
size_t offset, offset_end;
uint16_t ret, plug;
void *p = translator_access(env, db, pc, sizeof(ret));
if (addr < db->pc_first) {
return false;
if (p) {
plugin_insn_append(pc, p, sizeof(ret));
return lduw_p(p);
}
offset = addr - db->pc_first;
offset_end = offset + len;
if (offset_end > translator_st_len(db)) {
return false;
}
if (!db->fake_insn) {
size_t offset_page1 = -(db->pc_first | TARGET_PAGE_MASK);
/* Get all the bytes from the first page. */
if (db->host_addr[0]) {
if (offset_end <= offset_page1) {
memcpy(dest, db->host_addr[0] + offset, len);
return true;
}
if (offset < offset_page1) {
size_t len0 = offset_page1 - offset;
memcpy(dest, db->host_addr[0] + offset, len0);
offset += len0;
dest += len0;
}
}
/* Get any bytes from the second page. */
if (db->host_addr[1] && offset >= offset_page1) {
memcpy(dest, db->host_addr[1] + (offset - offset_page1),
offset_end - offset);
return true;
}
}
/* Else get recorded bytes. */
if (db->record_len != 0 &&
offset >= db->record_start &&
offset_end <= db->record_start + db->record_len) {
memcpy(dest, db->record + (offset - db->record_start),
offset_end - offset);
return true;
}
return false;
ret = cpu_lduw_code(env, pc);
plug = tswap16(ret);
plugin_insn_append(pc, &plug, sizeof(ret));
return ret;
}
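Why the tswap16() in the fallback above (illustration):
/*
 * cpu_lduw_code() returns the value in host byte order, while the
 * plugin recording wants the raw guest-memory bytes. For a
 * big-endian guest on a little-endian host, ret == 0x1234 was
 * stored in guest memory as the bytes 12 34; plug = tswap16(ret)
 * restores that byte layout before the bytes are appended.
 */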
uint8_t translator_ldub(CPUArchState *env, DisasContextBase *db, vaddr pc)
uint32_t translator_ldl(CPUArchState *env, DisasContextBase *db, abi_ptr pc)
{
uint8_t raw;
uint32_t ret, plug;
void *p = translator_access(env, db, pc, sizeof(ret));
if (!translator_ld(env, db, &raw, pc, sizeof(raw))) {
raw = cpu_ldub_code(env, pc);
record_save(db, pc, &raw, sizeof(raw));
if (p) {
plugin_insn_append(pc, p, sizeof(ret));
return ldl_p(p);
}
return raw;
ret = cpu_ldl_code(env, pc);
plug = tswap32(ret);
plugin_insn_append(pc, &plug, sizeof(ret));
return ret;
}
uint16_t translator_lduw(CPUArchState *env, DisasContextBase *db, vaddr pc)
uint64_t translator_ldq(CPUArchState *env, DisasContextBase *db, abi_ptr pc)
{
uint16_t raw, tgt;
uint64_t ret, plug;
void *p = translator_access(env, db, pc, sizeof(ret));
if (translator_ld(env, db, &raw, pc, sizeof(raw))) {
tgt = tswap16(raw);
} else {
tgt = cpu_lduw_code(env, pc);
raw = tswap16(tgt);
record_save(db, pc, &raw, sizeof(raw));
if (p) {
plugin_insn_append(pc, p, sizeof(ret));
return ldq_p(p);
}
return tgt;
ret = cpu_ldq_code(env, pc);
plug = tswap64(ret);
plugin_insn_append(pc, &plug, sizeof(ret));
return ret;
}
uint32_t translator_ldl(CPUArchState *env, DisasContextBase *db, vaddr pc)
void translator_fake_ldb(uint8_t insn8, abi_ptr pc)
{
uint32_t raw, tgt;
if (translator_ld(env, db, &raw, pc, sizeof(raw))) {
tgt = tswap32(raw);
} else {
tgt = cpu_ldl_code(env, pc);
raw = tswap32(tgt);
record_save(db, pc, &raw, sizeof(raw));
}
return tgt;
}
uint64_t translator_ldq(CPUArchState *env, DisasContextBase *db, vaddr pc)
{
uint64_t raw, tgt;
if (translator_ld(env, db, &raw, pc, sizeof(raw))) {
tgt = tswap64(raw);
} else {
tgt = cpu_ldq_code(env, pc);
raw = tswap64(tgt);
record_save(db, pc, &raw, sizeof(raw));
}
return tgt;
}
void translator_fake_ld(DisasContextBase *db, const void *data, size_t len)
{
db->fake_insn = true;
record_save(db, db->pc_first, data, len);
plugin_insn_append(pc, &insn8, sizeof(insn8));
}


@@ -24,9 +24,7 @@
#include "qemu/bitops.h"
#include "qemu/rcu.h"
#include "exec/cpu_ldst.h"
#include "qemu/main-loop.h"
#include "exec/translate-all.h"
#include "exec/page-protection.h"
#include "exec/helper-proto.h"
#include "qemu/atomic128.h"
#include "trace/trace-root.h"
@@ -38,13 +36,6 @@ __thread uintptr_t helper_retaddr;
//#define DEBUG_SIGNAL
void cpu_interrupt(CPUState *cpu, int mask)
{
g_assert(bql_locked());
cpu->interrupt_request |= mask;
qatomic_set(&cpu->neg.icount_decr.u16.high, -1);
}
/*
* Adjust the pc to pass to cpu_restore_state; return the memop type.
*/
@@ -660,17 +651,16 @@ void page_protect(tb_page_addr_t address)
{
PageFlagsNode *p;
target_ulong start, last;
int host_page_size = qemu_real_host_page_size();
int prot;
assert_memory_lock();
if (host_page_size <= TARGET_PAGE_SIZE) {
if (qemu_host_page_size <= TARGET_PAGE_SIZE) {
start = address & TARGET_PAGE_MASK;
last = start + TARGET_PAGE_SIZE - 1;
} else {
start = address & -host_page_size;
last = start + host_page_size - 1;
start = address & qemu_host_page_mask;
last = start + qemu_host_page_size - 1;
}
p = pageflags_find(start, last);
@@ -681,7 +671,7 @@ void page_protect(tb_page_addr_t address)
if (unlikely(p->itree.last < last)) {
/* More than one protection region covers the one host page. */
assert(TARGET_PAGE_SIZE < host_page_size);
assert(TARGET_PAGE_SIZE < qemu_host_page_size);
while ((p = pageflags_next(p, start, last)) != NULL) {
prot |= p->flags;
}
@@ -689,7 +679,7 @@ void page_protect(tb_page_addr_t address)
if (prot & PAGE_WRITE) {
pageflags_set_clear(start, last, 0, PAGE_WRITE);
mprotect(g2h_untagged(start), last - start + 1,
mprotect(g2h_untagged(start), qemu_host_page_size,
prot & (PAGE_READ | PAGE_EXEC) ? PROT_READ : PROT_NONE);
}
}
@@ -735,19 +725,18 @@ int page_unprotect(target_ulong address, uintptr_t pc)
}
#endif
} else {
int host_page_size = qemu_real_host_page_size();
target_ulong start, len, i;
int prot;
if (host_page_size <= TARGET_PAGE_SIZE) {
if (qemu_host_page_size <= TARGET_PAGE_SIZE) {
start = address & TARGET_PAGE_MASK;
len = TARGET_PAGE_SIZE;
prot = p->flags | PAGE_WRITE;
pageflags_set_clear(start, start + len - 1, PAGE_WRITE, 0);
current_tb_invalidated = tb_invalidate_phys_page_unwind(start, pc);
} else {
start = address & -host_page_size;
len = host_page_size;
start = address & qemu_host_page_mask;
len = qemu_host_page_size;
prot = 0;
for (i = 0; i < len; i += TARGET_PAGE_SIZE) {
@@ -773,7 +762,7 @@ int page_unprotect(target_ulong address, uintptr_t pc)
if (prot & PAGE_EXEC) {
prot = (prot & ~PAGE_EXEC) | PAGE_READ;
}
mprotect((void *)g2h_untagged(start), len, prot & PAGE_RWX);
mprotect((void *)g2h_untagged(start), len, prot & PAGE_BITS);
}
mmap_unlock();
@@ -873,7 +862,7 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, vaddr addr,
typedef struct TargetPageDataNode {
struct rcu_head rcu;
IntervalTreeNode itree;
char data[] __attribute__((aligned));
char data[TPD_PAGES][TARGET_PAGE_DATA_SIZE] __attribute__((aligned));
} TargetPageDataNode;
static IntervalTreeRoot targetdata_root;
@@ -911,8 +900,7 @@ void page_reset_target_data(target_ulong start, target_ulong last)
n_last = MIN(last, n->last);
p_len = (n_last + 1 - n_start) >> TARGET_PAGE_BITS;
memset(t->data + p_ofs * TARGET_PAGE_DATA_SIZE, 0,
p_len * TARGET_PAGE_DATA_SIZE);
memset(t->data[p_ofs], 0, p_len * TARGET_PAGE_DATA_SIZE);
}
}
@@ -920,7 +908,7 @@ void *page_get_target_data(target_ulong address)
{
IntervalTreeNode *n;
TargetPageDataNode *t;
target_ulong page, region, p_ofs;
target_ulong page, region;
page = address & TARGET_PAGE_MASK;
region = address & TBD_MASK;
@@ -936,8 +924,7 @@ void *page_get_target_data(target_ulong address)
mmap_lock();
n = interval_tree_iter_first(&targetdata_root, page, page);
if (!n) {
t = g_malloc0(sizeof(TargetPageDataNode)
+ TPD_PAGES * TARGET_PAGE_DATA_SIZE);
t = g_new0(TargetPageDataNode, 1);
n = &t->itree;
n->start = region;
n->last = region | ~TBD_MASK;
@@ -947,8 +934,7 @@ void *page_get_target_data(target_ulong address)
}
t = container_of(n, TargetPageDataNode, itree);
p_ofs = (page - region) >> TARGET_PAGE_BITS;
return t->data + p_ofs * TARGET_PAGE_DATA_SIZE;
return t->data[(page - region) >> TARGET_PAGE_BITS];
}
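Indexing equivalence for this hunk (illustration only):
/*
 * With the flattened flexible-array layout,
 *     t->data + p_ofs * TARGET_PAGE_DATA_SIZE
 * names the same bytes that the two-dimensional form
 *     t->data[p_ofs]
 * did; only the allocation changes, sizing each node as
 * sizeof(TargetPageDataNode) + TPD_PAGES * TARGET_PAGE_DATA_SIZE.
 */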
#else
void page_reset_target_data(target_ulong start, target_ulong last) { }


@@ -1,18 +0,0 @@
/*
* SPDX-FileContributor: Philippe Mathieu-Daudé <philmd@linaro.org>
* SPDX-FileCopyrightText: 2023 Linaro Ltd.
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef ACCEL_TCG_VCPU_STATE_H
#define ACCEL_TCG_VCPU_STATE_H
#include "hw/core/cpu.h"
#ifdef CONFIG_USER_ONLY
static inline TaskState *get_task_state(const CPUState *cs)
{
return cs->opaque;
}
#endif
#endif


@@ -1,143 +0,0 @@
/*
* CPU watchpoints
*
* Copyright (c) 2003 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "qemu/main-loop.h"
#include "qemu/error-report.h"
#include "exec/exec-all.h"
#include "exec/translate-all.h"
#include "sysemu/tcg.h"
#include "sysemu/replay.h"
#include "hw/core/tcg-cpu-ops.h"
#include "hw/core/cpu.h"
/*
* Return true if this watchpoint address matches the specified
* access (ie the address range covered by the watchpoint overlaps
* partially or completely with the address range covered by the
* access).
*/
static inline bool watchpoint_address_matches(CPUWatchpoint *wp,
vaddr addr, vaddr len)
{
/*
* We know the lengths are non-zero, but a little caution is
* required to avoid errors in the case where the range ends
* exactly at the top of the address space and so addr + len
* wraps round to zero.
*/
vaddr wpend = wp->vaddr + wp->len - 1;
vaddr addrend = addr + len - 1;
return !(addr > wpend || wp->vaddr > addrend);
}
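Worked example of the wrap hazard the comment above avoids (illustration):
/*
 * With 64-bit vaddr, addr == UINT64_MAX and len == 1 make addr + len
 * wrap to 0, so a naive "addr + len <= wp->vaddr" test would misfire.
 * Comparing inclusive end points (addr + len - 1, wp->vaddr + wp->len
 * - 1) keeps every comparison well-defined.
 */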
/* Return flags for watchpoints that match addr + prot. */
int cpu_watchpoint_address_matches(CPUState *cpu, vaddr addr, vaddr len)
{
CPUWatchpoint *wp;
int ret = 0;
QTAILQ_FOREACH(wp, &cpu->watchpoints, entry) {
if (watchpoint_address_matches(wp, addr, len)) {
ret |= wp->flags;
}
}
return ret;
}
/* Generate a debug exception if a watchpoint has been hit. */
void cpu_check_watchpoint(CPUState *cpu, vaddr addr, vaddr len,
MemTxAttrs attrs, int flags, uintptr_t ra)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
CPUWatchpoint *wp;
assert(tcg_enabled());
if (cpu->watchpoint_hit) {
/*
* We re-entered the check after replacing the TB.
* Now raise the debug interrupt so that it will
* trigger after the current instruction.
*/
bql_lock();
cpu_interrupt(cpu, CPU_INTERRUPT_DEBUG);
bql_unlock();
return;
}
if (cc->tcg_ops->adjust_watchpoint_address) {
/* this is currently used only by ARM BE32 */
addr = cc->tcg_ops->adjust_watchpoint_address(cpu, addr, len);
}
assert((flags & ~BP_MEM_ACCESS) == 0);
QTAILQ_FOREACH(wp, &cpu->watchpoints, entry) {
int hit_flags = wp->flags & flags;
if (hit_flags && watchpoint_address_matches(wp, addr, len)) {
if (replay_running_debug()) {
/*
* replay_breakpoint reads icount.
* Force recompile to succeed, because icount may
* be read only at the end of the block.
*/
if (!cpu->neg.can_do_io) {
/* Force execution of one insn next time. */
cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
cpu_loop_exit_restore(cpu, ra);
}
/*
* Don't process the watchpoints when we are
* in a reverse debugging operation.
*/
replay_breakpoint();
return;
}
wp->flags |= hit_flags << BP_HIT_SHIFT;
wp->hitaddr = MAX(addr, wp->vaddr);
wp->hitattrs = attrs;
if (wp->flags & BP_CPU
&& cc->tcg_ops->debug_check_watchpoint
&& !cc->tcg_ops->debug_check_watchpoint(cpu, wp)) {
wp->flags &= ~BP_WATCHPOINT_HIT;
continue;
}
cpu->watchpoint_hit = wp;
mmap_lock();
/* This call also restores vCPU state */
tb_check_watchpoint(cpu, ra);
if (wp->flags & BP_STOP_BEFORE_ACCESS) {
cpu->exception_index = EXCP_DEBUG;
mmap_unlock();
cpu_loop_exit(cpu);
} else {
/* Force execution of one insn next time. */
cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
mmap_unlock();
cpu_loop_exit_noexc(cpu);
}
} else {
wp->flags &= ~BP_WATCHPOINT_HIT;
}
}
}


@@ -15,7 +15,6 @@
#include "hw/xen/xen_native.h"
#include "hw/xen/xen-legacy-backend.h"
#include "hw/xen/xen_pt.h"
#include "hw/xen/xen_igd.h"
#include "chardev/char.h"
#include "qemu/accel.h"
#include "sysemu/cpus.h"


@@ -1683,7 +1683,7 @@ static const VMStateDescription vmstate_audio = {
.version_id = 1,
.minimum_version_id = 1,
.needed = vmstate_audio_needed,
.fields = (const VMStateField[]) {
.fields = (VMStateField[]) {
VMSTATE_END_OF_LIST()
}
};
@@ -1744,7 +1744,7 @@ static AudioState *audio_init(Audiodev *dev, Error **errp)
if (driver) {
done = !audio_driver_init(s, driver, dev, errp);
} else {
error_setg(errp, "Unknown audio driver `%s'", drvname);
error_setg(errp, "Unknown audio driver `%s'\n", drvname);
}
if (!done) {
goto out;
@@ -1758,15 +1758,12 @@ static AudioState *audio_init(Audiodev *dev, Error **errp)
goto out;
}
s->dev = dev = e->dev;
QSIMPLEQ_REMOVE_HEAD(&default_audiodevs, next);
g_free(e);
drvname = AudiodevDriver_str(dev->driver);
driver = audio_driver_lookup(drvname);
if (!audio_driver_init(s, driver, dev, NULL)) {
break;
}
qapi_free_Audiodev(dev);
s->dev = NULL;
QSIMPLEQ_REMOVE_HEAD(&default_audiodevs, next);
}
}


@@ -44,6 +44,11 @@ typedef struct coreaudioVoiceOut {
bool enabled;
} coreaudioVoiceOut;
#if !defined(MAC_OS_VERSION_12_0) \
|| (MAC_OS_X_VERSION_MIN_REQUIRED < MAC_OS_VERSION_12_0)
#define kAudioObjectPropertyElementMain kAudioObjectPropertyElementMaster
#endif
static const AudioObjectPropertyAddress voice_addr = {
kAudioHardwarePropertyDefaultOutputDevice,
kAudioObjectPropertyScopeGlobal,
@@ -294,7 +299,7 @@ COREAUDIO_WRAPPER_FUNC(write, size_t, (HWVoiceOut *hw, void *buf, size_t size),
#undef COREAUDIO_WRAPPER_FUNC
/*
* callback to feed audiooutput buffer. called without BQL.
* callback to feed audiooutput buffer. called without iothread lock.
* allowed to lock "buf_mutex", but disallowed to have any other locks.
*/
static OSStatus audioDeviceIOProc(
@@ -533,7 +538,7 @@ static void update_device_playback_state(coreaudioVoiceOut *core)
}
}
/* called without BQL. */
/* called without iothread lock. */
static OSStatus handle_voice_change(
AudioObjectID in_object_id,
UInt32 in_number_addresses,
@@ -542,7 +547,7 @@ static OSStatus handle_voice_change(
{
coreaudioVoiceOut *core = in_client_data;
bql_lock();
qemu_mutex_lock_iothread();
if (core->outputDeviceID) {
fini_out_device(core);
@@ -552,7 +557,7 @@ static OSStatus handle_voice_change(
update_device_playback_state(core);
}
bql_unlock();
qemu_mutex_unlock_iothread();
return 0;
}


@@ -105,7 +105,7 @@ static size_t dbus_put_buffer_out(HWVoiceOut *hw, void *buf, size_t size)
assert(buf == vo->buf + vo->buf_pos && vo->buf_pos + size <= vo->buf_size);
vo->buf_pos += size;
trace_dbus_audio_put_buffer_out(vo->buf_pos, vo->buf_size);
trace_dbus_audio_put_buffer_out(size);
if (vo->buf_pos < vo->buf_size) {
return size;


@@ -30,8 +30,7 @@ endforeach
if dbus_display
module_ss = ss.source_set()
module_ss.add(when: [gio, pixman],
if_true: [dbus_display1, files('dbusaudio.c')])
module_ss.add(when: [gio, pixman], if_true: files('dbusaudio.c'))
audio_modules += {'dbus': module_ss}
endif

View File

@@ -11,6 +11,7 @@
#include "qemu/osdep.h"
#include "qemu/module.h"
#include "audio.h"
#include <errno.h>
#include "qemu/error-report.h"
#include "qapi/error.h"
#include <spa/param/audio/format-utils.h>

View File

@@ -15,7 +15,7 @@ oss_version(int version) "OSS version = 0x%x"
# dbusaudio.c
dbus_audio_register(const char *s, const char *dir) "sender = %s, dir = %s"
dbus_audio_put_buffer_out(size_t pos, size_t size) "buf_pos = %zu, buf_size = %zu"
dbus_audio_put_buffer_out(size_t len) "len = %zu"
dbus_audio_read(size_t len) "len = %zu"
# pwaudio.c

View File

@@ -1,9 +1 @@
source tpm/Kconfig
config IOMMUFD
bool
depends on VFIO
config SPDM_SOCKET
bool
default y

View File

@@ -23,7 +23,6 @@
#include "qemu/osdep.h"
#include "sysemu/cryptodev.h"
#include "qemu/error-report.h"
#include "qapi/error.h"
#include "standard-headers/linux/virtio_crypto.h"
#include "crypto/cipher.h"
@@ -397,8 +396,8 @@ static int cryptodev_builtin_create_session(
case VIRTIO_CRYPTO_HASH_CREATE_SESSION:
case VIRTIO_CRYPTO_MAC_CREATE_SESSION:
default:
error_report("Unsupported opcode :%" PRIu32 "",
sess_info->op_code);
error_setg(&local_error, "Unsupported opcode :%" PRIu32 "",
sess_info->op_code);
return -VIRTIO_CRYPTO_NOTSUPP;
}
@@ -428,9 +427,7 @@ static int cryptodev_builtin_close_session(
CRYPTODEV_BACKEND_BUILTIN(backend);
CryptoDevBackendBuiltinSession *session;
if (session_id >= MAX_NUM_SESSIONS || !builtin->sessions[session_id]) {
return -VIRTIO_CRYPTO_INVSESS;
}
assert(session_id < MAX_NUM_SESSIONS && builtin->sessions[session_id]);
session = builtin->sessions[session_id];
if (session->cipher) {
@@ -555,8 +552,8 @@ static int cryptodev_builtin_operation(
if (op_info->session_id >= MAX_NUM_SESSIONS ||
builtin->sessions[op_info->session_id] == NULL) {
error_report("Cannot find a valid session id: %" PRIu64 "",
op_info->session_id);
error_setg(&local_error, "Cannot find a valid session id: %" PRIu64 "",
op_info->session_id);
return -VIRTIO_CRYPTO_INVSESS;
}
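Both hunks above convert error_report() calls into error_setg() on a local Error object; a minimal sketch of the converted shape (the helper name is illustrative, assuming QEMU's qapi/error.h is in scope):

    static int check_session(CryptoDevBackendBuiltin *builtin,
                             uint64_t session_id, Error **errp)
    {
        if (session_id >= MAX_NUM_SESSIONS || !builtin->sessions[session_id]) {
            error_setg(errp, "Cannot find a valid session id: %" PRIu64,
                       session_id);
            return -VIRTIO_CRYPTO_INVSESS;
        }
        return 0;
    }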

View File

@@ -398,7 +398,6 @@ static void cryptodev_backend_set_ops(Object *obj, Visitor *v,
static void
cryptodev_backend_complete(UserCreatable *uc, Error **errp)
{
ERRP_GUARD();
CryptoDevBackend *backend = CRYPTODEV_BACKEND(uc);
CryptoDevBackendClass *bc = CRYPTODEV_BACKEND_GET_CLASS(uc);
uint32_t services;
@@ -407,20 +406,11 @@ cryptodev_backend_complete(UserCreatable *uc, Error **errp)
QTAILQ_INIT(&backend->opinfos);
value = backend->tc.buckets[THROTTLE_OPS_TOTAL].avg;
cryptodev_backend_set_throttle(backend, THROTTLE_OPS_TOTAL, value, errp);
if (*errp) {
return;
}
value = backend->tc.buckets[THROTTLE_BPS_TOTAL].avg;
cryptodev_backend_set_throttle(backend, THROTTLE_BPS_TOTAL, value, errp);
if (*errp) {
return;
}
if (bc->init) {
bc->init(backend, errp);
if (*errp) {
return;
}
}
services = backend->conf.crypto_services;
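The hunk above drops ERRP_GUARD() along with the `if (*errp)` early returns it enables; for reference, the guarded style being removed looks like this (the step name is illustrative):

    void backend_complete(UserCreatable *uc, Error **errp)
    {
        ERRP_GUARD();    /* makes *errp safe to test even when the caller passed NULL */

        set_throttle(uc, errp);
        if (*errp) {
            return;
        }
    }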

View File

@@ -393,7 +393,7 @@ static const VMStateDescription dbus_vmstate = {
.version_id = 0,
.pre_save = dbus_vmstate_pre_save,
.post_load = dbus_vmstate_post_load,
.fields = (const VMStateField[]) {
.fields = (VMStateField[]) {
VMSTATE_UINT32(data_size, DBusVMState),
VMSTATE_VBUFFER_ALLOC_UINT32(data, DBusVMState, 0, 0, data_size),
VMSTATE_END_OF_LIST()

View File

@@ -1,33 +0,0 @@
/*
* Host IOMMU device abstract
*
* Copyright (C) 2024 Intel Corporation.
*
* Authors: Zhenzhong Duan <zhenzhong.duan@intel.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/host_iommu_device.h"
OBJECT_DEFINE_ABSTRACT_TYPE(HostIOMMUDevice,
host_iommu_device,
HOST_IOMMU_DEVICE,
OBJECT)
static void host_iommu_device_class_init(ObjectClass *oc, void *data)
{
}
static void host_iommu_device_init(Object *obj)
{
}
static void host_iommu_device_finalize(Object *obj)
{
HostIOMMUDevice *hiod = HOST_IOMMU_DEVICE(obj);
g_free(hiod->name);
}

View File

@@ -17,28 +17,31 @@
#include "sysemu/hostmem.h"
#include "hw/i386/hostmem-epc.h"
static bool
static void
sgx_epc_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
{
g_autofree char *name = NULL;
uint32_t ram_flags;
char *name;
int fd;
if (!backend->size) {
error_setg(errp, "can't create backend with size 0");
return false;
return;
}
fd = qemu_open("/dev/sgx_vepc", O_RDWR, errp);
fd = qemu_open_old("/dev/sgx_vepc", O_RDWR);
if (fd < 0) {
return false;
error_setg_errno(errp, errno,
"failed to open /dev/sgx_vepc to alloc SGX EPC");
return;
}
backend->aligned = true;
name = object_get_canonical_path(OBJECT(backend));
ram_flags = (backend->share ? RAM_SHARED : 0) | RAM_PROTECTED;
return memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend), name,
backend->size, ram_flags, fd, 0, errp);
memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend),
name, backend->size, ram_flags,
fd, 0, errp);
g_free(name);
}
static void sgx_epc_backend_instance_init(Object *obj)
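For reference, a typical command line exercising this backend (sizes and topology are illustrative; see docs/system/i386/sgx.rst):

    qemu-system-x86_64 \
        -object memory-backend-epc,id=mem1,size=64M,prealloc=on \
        -M sgx-epc.0.memdev=mem1,sgx-epc.0.node=0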

View File

@@ -36,25 +36,24 @@ struct HostMemoryBackendFile {
OnOffAuto rom;
};
static bool
static void
file_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
{
#ifndef CONFIG_POSIX
error_setg(errp, "backend '%s' not supported on this host",
object_get_typename(OBJECT(backend)));
return false;
#else
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(backend);
g_autofree gchar *name = NULL;
uint32_t ram_flags;
gchar *name;
if (!backend->size) {
error_setg(errp, "can't create backend with size 0");
return false;
return;
}
if (!fb->mem_path) {
error_setg(errp, "mem-path property not set");
return false;
return;
}
switch (fb->rom) {
@@ -66,32 +65,32 @@ file_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
if (!fb->readonly) {
error_setg(errp, "property 'rom' = 'on' is not supported with"
" 'readonly' = 'off'");
return false;
return;
}
break;
case ON_OFF_AUTO_OFF:
if (fb->readonly && backend->share) {
error_setg(errp, "property 'rom' = 'off' is incompatible with"
" 'readonly' = 'on' and 'share' = 'on'");
return false;
return;
}
break;
default:
g_assert_not_reached();
assert(false);
}
backend->aligned = true;
name = host_memory_backend_get_name(backend);
ram_flags = backend->share ? RAM_SHARED : 0;
ram_flags |= fb->readonly ? RAM_READONLY_FD : 0;
ram_flags |= fb->rom == ON_OFF_AUTO_ON ? RAM_READONLY : 0;
ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
ram_flags |= backend->guest_memfd ? RAM_GUEST_MEMFD : 0;
ram_flags |= backend->require_guest_memfd ? RAM_GUEST_MEMFD : 0;
ram_flags |= fb->is_pmem ? RAM_PMEM : 0;
ram_flags |= RAM_NAMED_FILE;
return memory_region_init_ram_from_file(&backend->mr, OBJECT(backend), name,
backend->size, fb->align, ram_flags,
fb->mem_path, fb->offset, errp);
memory_region_init_ram_from_file(&backend->mr, OBJECT(backend), name,
backend->size, fb->align, ram_flags,
fb->mem_path, fb->offset, errp);
g_free(name);
#endif
}
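As a usage sketch, the flags assembled above map directly onto -object properties (the path and size are illustrative):

    qemu-system-x86_64 \
        -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on,prealloc=on \
        -numa node,memdev=mem0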

View File

@@ -31,17 +31,17 @@ struct HostMemoryBackendMemfd {
bool seal;
};
static bool
static void
memfd_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
{
HostMemoryBackendMemfd *m = MEMORY_BACKEND_MEMFD(backend);
g_autofree char *name = NULL;
uint32_t ram_flags;
char *name;
int fd;
if (!backend->size) {
error_setg(errp, "can't create backend with size 0");
return false;
return;
}
fd = qemu_memfd_create(TYPE_MEMORY_BACKEND_MEMFD, backend->size,
@@ -49,16 +49,16 @@ memfd_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_SEAL : 0,
errp);
if (fd == -1) {
return false;
return;
}
backend->aligned = true;
name = host_memory_backend_get_name(backend);
ram_flags = backend->share ? RAM_SHARED : 0;
ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
ram_flags |= backend->guest_memfd ? RAM_GUEST_MEMFD : 0;
return memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend), name,
backend->size, ram_flags, fd, 0, errp);
ram_flags |= backend->require_guest_memfd ? RAM_GUEST_MEMFD : 0;
memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend), name,
backend->size, ram_flags, fd, 0, errp);
g_free(name);
}
static bool

View File

@@ -16,24 +16,24 @@
#include "qemu/module.h"
#include "qom/object_interfaces.h"
static bool
static void
ram_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
{
g_autofree char *name = NULL;
uint32_t ram_flags;
char *name;
if (!backend->size) {
error_setg(errp, "can't create backend with size 0");
return false;
return;
}
name = host_memory_backend_get_name(backend);
ram_flags = backend->share ? RAM_SHARED : 0;
ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
ram_flags |= backend->guest_memfd ? RAM_GUEST_MEMFD : 0;
return memory_region_init_ram_flags_nomigrate(&backend->mr, OBJECT(backend),
name, backend->size,
ram_flags, errp);
ram_flags |= backend->require_guest_memfd ? RAM_GUEST_MEMFD : 0;
memory_region_init_ram_flags_nomigrate(&backend->mr, OBJECT(backend), name,
backend->size, ram_flags, errp);
g_free(name);
}
static void

View File

@@ -1,123 +0,0 @@
/*
* QEMU host POSIX shared memory object backend
*
* Copyright (C) 2024 Red Hat Inc
*
* Authors:
* Stefano Garzarella <sgarzare@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/hostmem.h"
#include "qapi/error.h"
#define TYPE_MEMORY_BACKEND_SHM "memory-backend-shm"
OBJECT_DECLARE_SIMPLE_TYPE(HostMemoryBackendShm, MEMORY_BACKEND_SHM)
struct HostMemoryBackendShm {
HostMemoryBackend parent_obj;
};
static bool
shm_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
{
g_autoptr(GString) shm_name = g_string_new(NULL);
g_autofree char *backend_name = NULL;
uint32_t ram_flags;
int fd, oflag;
mode_t mode;
if (!backend->size) {
error_setg(errp, "can't create shm backend with size 0");
return false;
}
if (!backend->share) {
error_setg(errp, "can't create shm backend with `share=off`");
return false;
}
/*
* Let's use `mode = 0` because we don't want other processes to open our
* memory unless we share the file descriptor with them.
*/
mode = 0;
oflag = O_RDWR | O_CREAT | O_EXCL;
backend_name = host_memory_backend_get_name(backend);
/*
* Some operating systems allow creating anonymous POSIX shared memory
* objects (e.g. FreeBSD provides the SHM_ANON constant), but this is not
* defined by POSIX, so let's create a unique name.
*
* From Linux's shm_open(3) man-page:
* For portable use, a shared memory object should be identified
* by a name of the form /somename;"
*/
g_string_printf(shm_name, "/qemu-" FMT_pid "-shm-%s", getpid(),
backend_name);
fd = shm_open(shm_name->str, oflag, mode);
if (fd < 0) {
error_setg_errno(errp, errno,
"failed to create POSIX shared memory");
return false;
}
/*
* We have the file descriptor, so we no longer need to expose the
* POSIX shared memory object. However it will remain allocated as long as
* there are file descriptors pointing to it.
*/
shm_unlink(shm_name->str);
if (ftruncate(fd, backend->size) == -1) {
error_setg_errno(errp, errno,
"failed to resize POSIX shared memory to %" PRIu64,
backend->size);
close(fd);
return false;
}
ram_flags = RAM_SHARED;
ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
return memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend),
backend_name, backend->size,
ram_flags, fd, 0, errp);
}
static void
shm_backend_instance_init(Object *obj)
{
HostMemoryBackendShm *m = MEMORY_BACKEND_SHM(obj);
MEMORY_BACKEND(m)->share = true;
}
static void
shm_backend_class_init(ObjectClass *oc, void *data)
{
HostMemoryBackendClass *bc = MEMORY_BACKEND_CLASS(oc);
bc->alloc = shm_backend_memory_alloc;
}
static const TypeInfo shm_backend_info = {
.name = TYPE_MEMORY_BACKEND_SHM,
.parent = TYPE_MEMORY_BACKEND,
.instance_init = shm_backend_instance_init,
.class_init = shm_backend_class_init,
.instance_size = sizeof(HostMemoryBackendShm),
};
static void register_types(void)
{
type_register_static(&shm_backend_info);
}
type_init(register_types);
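The backend deleted above was driven roughly as follows (id and size are illustrative); note that the code insists on share=on:

    qemu-system-x86_64 \
        -object memory-backend-shm,id=shm0,size=1G,share=on \
        -numa node,memdev=shm0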

View File

@@ -20,8 +20,6 @@
#include "qom/object_interfaces.h"
#include "qemu/mmap-alloc.h"
#include "qemu/madvise.h"
#include "qemu/cutils.h"
#include "hw/qdev-core.h"
#ifdef CONFIG_NUMA
#include <numaif.h>
@@ -170,24 +168,19 @@ static void host_memory_backend_set_merge(Object *obj, bool value, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
if (QEMU_MADV_MERGEABLE == QEMU_MADV_INVALID) {
if (value) {
error_setg(errp, "Memory merging is not supported on this host");
}
assert(!backend->merge);
if (!host_memory_backend_mr_inited(backend)) {
backend->merge = value;
return;
}
if (!host_memory_backend_mr_inited(backend) &&
value != backend->merge) {
if (value != backend->merge) {
void *ptr = memory_region_get_ram_ptr(&backend->mr);
uint64_t sz = memory_region_size(&backend->mr);
qemu_madvise(ptr, sz,
value ? QEMU_MADV_MERGEABLE : QEMU_MADV_UNMERGEABLE);
backend->merge = value;
}
backend->merge = value;
}
static bool host_memory_backend_get_dump(Object *obj, Error **errp)
@@ -201,24 +194,19 @@ static void host_memory_backend_set_dump(Object *obj, bool value, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
if (QEMU_MADV_DONTDUMP == QEMU_MADV_INVALID) {
if (!value) {
error_setg(errp, "Dumping guest memory cannot be disabled on this host");
}
assert(backend->dump);
if (!host_memory_backend_mr_inited(backend)) {
backend->dump = value;
return;
}
if (host_memory_backend_mr_inited(backend) &&
value != backend->dump) {
if (value != backend->dump) {
void *ptr = memory_region_get_ram_ptr(&backend->mr);
uint64_t sz = memory_region_size(&backend->mr);
qemu_madvise(ptr, sz,
value ? QEMU_MADV_DODUMP : QEMU_MADV_DONTDUMP);
backend->dump = value;
}
backend->dump = value;
}
static bool host_memory_backend_get_prealloc(Object *obj, Error **errp)
@@ -231,6 +219,7 @@ static bool host_memory_backend_get_prealloc(Object *obj, Error **errp)
static void host_memory_backend_set_prealloc(Object *obj, bool value,
Error **errp)
{
Error *local_err = NULL;
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
if (!backend->reserve && value) {
@@ -248,8 +237,10 @@ static void host_memory_backend_set_prealloc(Object *obj, bool value,
void *ptr = memory_region_get_ram_ptr(&backend->mr);
uint64_t sz = memory_region_size(&backend->mr);
if (!qemu_prealloc_mem(fd, ptr, sz, backend->prealloc_threads,
backend->prealloc_context, false, errp)) {
qemu_prealloc_mem(fd, ptr, sz, backend->prealloc_threads,
backend->prealloc_context, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
backend->prealloc = true;
@@ -288,7 +279,7 @@ static void host_memory_backend_init(Object *obj)
/* TODO: convert access to globals to compat properties */
backend->merge = machine_mem_merge(machine);
backend->dump = machine_dump_guest_core(machine);
backend->guest_memfd = machine_require_guest_memfd(machine);
backend->require_guest_memfd = machine_require_guest_memfd(machine);
backend->reserve = true;
backend->prealloc_threads = machine->smp.cpus;
}
@@ -334,101 +325,91 @@ host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(uc);
HostMemoryBackendClass *bc = MEMORY_BACKEND_GET_CLASS(uc);
Error *local_err = NULL;
void *ptr;
uint64_t sz;
size_t pagesize;
bool async = !phase_check(PHASE_LATE_BACKENDS_CREATED);
if (!bc->alloc) {
return;
}
if (!bc->alloc(backend, errp)) {
return;
}
if (bc->alloc) {
bc->alloc(backend, &local_err);
if (local_err) {
goto out;
}
ptr = memory_region_get_ram_ptr(&backend->mr);
sz = memory_region_size(&backend->mr);
pagesize = qemu_ram_pagesize(backend->mr.ram_block);
ptr = memory_region_get_ram_ptr(&backend->mr);
sz = memory_region_size(&backend->mr);
if (backend->aligned && !QEMU_IS_ALIGNED(sz, pagesize)) {
g_autofree char *pagesize_str = size_to_str(pagesize);
error_setg(errp, "backend '%s' memory size must be multiple of %s",
object_get_typename(OBJECT(uc)), pagesize_str);
return;
}
if (backend->merge) {
qemu_madvise(ptr, sz, QEMU_MADV_MERGEABLE);
}
if (!backend->dump) {
qemu_madvise(ptr, sz, QEMU_MADV_DONTDUMP);
}
if (backend->merge) {
qemu_madvise(ptr, sz, QEMU_MADV_MERGEABLE);
}
if (!backend->dump) {
qemu_madvise(ptr, sz, QEMU_MADV_DONTDUMP);
}
#ifdef CONFIG_NUMA
unsigned long lastbit = find_last_bit(backend->host_nodes, MAX_NODES);
/* lastbit == MAX_NODES means maxnode = 0 */
unsigned long maxnode = (lastbit + 1) % (MAX_NODES + 1);
/*
* Ensure policy won't be ignored in case memory is preallocated
* before mbind(). note: MPOL_MF_STRICT is ignored on hugepages so
* this doesn't catch hugepage case.
*/
unsigned flags = MPOL_MF_STRICT | MPOL_MF_MOVE;
int mode = backend->policy;
unsigned long lastbit = find_last_bit(backend->host_nodes, MAX_NODES);
/* lastbit == MAX_NODES means maxnode = 0 */
unsigned long maxnode = (lastbit + 1) % (MAX_NODES + 1);
/* ensure policy won't be ignored in case memory is preallocated
* before mbind(). note: MPOL_MF_STRICT is ignored on hugepages so
* this doesn't catch hugepage case. */
unsigned flags = MPOL_MF_STRICT | MPOL_MF_MOVE;
int mode = backend->policy;
/* check for invalid host-nodes and policies and give more verbose
* error messages than mbind(). */
if (maxnode && backend->policy == MPOL_DEFAULT) {
error_setg(errp, "host-nodes must be empty for policy default,"
" or you should explicitly specify a policy other"
" than default");
return;
} else if (maxnode == 0 && backend->policy != MPOL_DEFAULT) {
error_setg(errp, "host-nodes must be set for policy %s",
HostMemPolicy_str(backend->policy));
return;
}
/*
* We can have up to MAX_NODES nodes, but we need to pass maxnode+1
* as argument to mbind() due to an old Linux bug (feature?) which
* cuts off the last specified node. This means backend->host_nodes
* must have MAX_NODES+1 bits available.
*/
assert(sizeof(backend->host_nodes) >=
BITS_TO_LONGS(MAX_NODES + 1) * sizeof(unsigned long));
assert(maxnode <= MAX_NODES);
#ifdef HAVE_NUMA_HAS_PREFERRED_MANY
if (mode == MPOL_PREFERRED && numa_has_preferred_many() > 0) {
/*
* Replace with MPOL_PREFERRED_MANY otherwise the mbind() below
* silently picks the first node.
*/
mode = MPOL_PREFERRED_MANY;
}
#endif
if (maxnode &&
mbind(ptr, sz, mode, backend->host_nodes, maxnode + 1, flags)) {
if (backend->policy != MPOL_DEFAULT || errno != ENOSYS) {
error_setg_errno(errp, errno,
"cannot bind memory to host NUMA nodes");
/* check for invalid host-nodes and policies and give more verbose
* error messages than mbind(). */
if (maxnode && backend->policy == MPOL_DEFAULT) {
error_setg(errp, "host-nodes must be empty for policy default,"
" or you should explicitly specify a policy other"
" than default");
return;
} else if (maxnode == 0 && backend->policy != MPOL_DEFAULT) {
error_setg(errp, "host-nodes must be set for policy %s",
HostMemPolicy_str(backend->policy));
return;
}
}
/* We can have up to MAX_NODES nodes, but we need to pass maxnode+1
* as argument to mbind() due to an old Linux bug (feature?) which
* cuts off the last specified node. This means backend->host_nodes
* must have MAX_NODES+1 bits available.
*/
assert(sizeof(backend->host_nodes) >=
BITS_TO_LONGS(MAX_NODES + 1) * sizeof(unsigned long));
assert(maxnode <= MAX_NODES);
#ifdef HAVE_NUMA_HAS_PREFERRED_MANY
if (mode == MPOL_PREFERRED && numa_has_preferred_many() > 0) {
/*
* Replace with MPOL_PREFERRED_MANY otherwise the mbind() below
* silently picks the first node.
*/
mode = MPOL_PREFERRED_MANY;
}
#endif
/*
* Preallocate memory after the NUMA policy has been instantiated.
* This is necessary to guarantee memory is allocated with
* specified NUMA policy in place.
*/
if (backend->prealloc && !qemu_prealloc_mem(memory_region_get_fd(&backend->mr),
ptr, sz,
backend->prealloc_threads,
backend->prealloc_context,
async, errp)) {
return;
if (maxnode &&
mbind(ptr, sz, mode, backend->host_nodes, maxnode + 1, flags)) {
if (backend->policy != MPOL_DEFAULT || errno != ENOSYS) {
error_setg_errno(errp, errno,
"cannot bind memory to host NUMA nodes");
return;
}
}
#endif
/* Preallocate memory after the NUMA policy has been instantiated.
* This is necessary to guarantee memory is allocated with
* specified NUMA policy in place.
*/
if (backend->prealloc) {
qemu_prealloc_mem(memory_region_get_fd(&backend->mr), ptr, sz,
backend->prealloc_threads,
backend->prealloc_context, &local_err);
if (local_err) {
goto out;
}
}
}
out:
error_propagate(errp, local_err);
}
static bool
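A usage sketch tying the NUMA-policy and preallocation paths above together (the node number is illustrative):

    qemu-system-x86_64 \
        -object memory-backend-ram,id=m0,size=2G,host-nodes=0,policy=bind,prealloc=on \
        -numa node,memdev=m0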

View File

@@ -1,360 +0,0 @@
/*
* iommufd container backend
*
* Copyright (C) 2023 Intel Corporation.
* Copyright Red Hat, Inc. 2023
*
* Authors: Yi Liu <yi.l.liu@intel.com>
* Eric Auger <eric.auger@redhat.com>
*
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#include "qemu/osdep.h"
#include "sysemu/iommufd.h"
#include "qapi/error.h"
#include "qemu/module.h"
#include "qom/object_interfaces.h"
#include "qemu/error-report.h"
#include "monitor/monitor.h"
#include "trace.h"
#include "hw/vfio/vfio-common.h"
#include <sys/ioctl.h>
#include <linux/iommufd.h>
static void iommufd_backend_init(Object *obj)
{
IOMMUFDBackend *be = IOMMUFD_BACKEND(obj);
be->fd = -1;
be->users = 0;
be->owned = true;
}
static void iommufd_backend_finalize(Object *obj)
{
IOMMUFDBackend *be = IOMMUFD_BACKEND(obj);
if (be->owned) {
close(be->fd);
be->fd = -1;
}
}
static void iommufd_backend_set_fd(Object *obj, const char *str, Error **errp)
{
ERRP_GUARD();
IOMMUFDBackend *be = IOMMUFD_BACKEND(obj);
int fd = -1;
fd = monitor_fd_param(monitor_cur(), str, errp);
if (fd == -1) {
error_prepend(errp, "Could not parse remote object fd %s:", str);
return;
}
be->fd = fd;
be->owned = false;
trace_iommu_backend_set_fd(be->fd);
}
static bool iommufd_backend_can_be_deleted(UserCreatable *uc)
{
IOMMUFDBackend *be = IOMMUFD_BACKEND(uc);
return !be->users;
}
static void iommufd_backend_class_init(ObjectClass *oc, void *data)
{
UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
ucc->can_be_deleted = iommufd_backend_can_be_deleted;
object_class_property_add_str(oc, "fd", NULL, iommufd_backend_set_fd);
}
bool iommufd_backend_connect(IOMMUFDBackend *be, Error **errp)
{
int fd;
if (be->owned && !be->users) {
fd = qemu_open("/dev/iommu", O_RDWR, errp);
if (fd < 0) {
return false;
}
be->fd = fd;
}
be->users++;
trace_iommufd_backend_connect(be->fd, be->owned, be->users);
return true;
}
void iommufd_backend_disconnect(IOMMUFDBackend *be)
{
if (!be->users) {
goto out;
}
be->users--;
if (!be->users && be->owned) {
close(be->fd);
be->fd = -1;
}
out:
trace_iommufd_backend_disconnect(be->fd, be->users);
}
bool iommufd_backend_alloc_ioas(IOMMUFDBackend *be, uint32_t *ioas_id,
Error **errp)
{
int fd = be->fd;
struct iommu_ioas_alloc alloc_data = {
.size = sizeof(alloc_data),
.flags = 0,
};
if (ioctl(fd, IOMMU_IOAS_ALLOC, &alloc_data)) {
error_setg_errno(errp, errno, "Failed to allocate ioas");
return false;
}
*ioas_id = alloc_data.out_ioas_id;
trace_iommufd_backend_alloc_ioas(fd, *ioas_id);
return true;
}
void iommufd_backend_free_id(IOMMUFDBackend *be, uint32_t id)
{
int ret, fd = be->fd;
struct iommu_destroy des = {
.size = sizeof(des),
.id = id,
};
ret = ioctl(fd, IOMMU_DESTROY, &des);
trace_iommufd_backend_free_id(fd, id, ret);
if (ret) {
error_report("Failed to free id: %u %m", id);
}
}
int iommufd_backend_map_dma(IOMMUFDBackend *be, uint32_t ioas_id, hwaddr iova,
ram_addr_t size, void *vaddr, bool readonly)
{
int ret, fd = be->fd;
struct iommu_ioas_map map = {
.size = sizeof(map),
.flags = IOMMU_IOAS_MAP_READABLE |
IOMMU_IOAS_MAP_FIXED_IOVA,
.ioas_id = ioas_id,
.__reserved = 0,
.user_va = (uintptr_t)vaddr,
.iova = iova,
.length = size,
};
if (!readonly) {
map.flags |= IOMMU_IOAS_MAP_WRITEABLE;
}
ret = ioctl(fd, IOMMU_IOAS_MAP, &map);
trace_iommufd_backend_map_dma(fd, ioas_id, iova, size,
vaddr, readonly, ret);
if (ret) {
ret = -errno;
/* TODO: Not support mapping hardware PCI BAR region for now. */
if (errno == EFAULT) {
warn_report("IOMMU_IOAS_MAP failed: %m, PCI BAR?");
} else {
error_report("IOMMU_IOAS_MAP failed: %m");
}
}
return ret;
}
int iommufd_backend_unmap_dma(IOMMUFDBackend *be, uint32_t ioas_id,
hwaddr iova, ram_addr_t size)
{
int ret, fd = be->fd;
struct iommu_ioas_unmap unmap = {
.size = sizeof(unmap),
.ioas_id = ioas_id,
.iova = iova,
.length = size,
};
ret = ioctl(fd, IOMMU_IOAS_UNMAP, &unmap);
/*
* IOMMUFD treats a mapping as an object: unmapping a
* nonexistent mapping is treated as deleting a nonexistent
* object and returns ENOENT. This differs from the legacy
* backend, which allows it. A vIOMMU may trigger a lot of
* redundant unmapping; to avoid flooding the log, treat it
* as success for IOMMUFD, just like the legacy backend.
*/
if (ret && errno == ENOENT) {
trace_iommufd_backend_unmap_dma_non_exist(fd, ioas_id, iova, size, ret);
ret = 0;
} else {
trace_iommufd_backend_unmap_dma(fd, ioas_id, iova, size, ret);
}
if (ret) {
ret = -errno;
error_report("IOMMU_IOAS_UNMAP failed: %m");
}
return ret;
}
bool iommufd_backend_alloc_hwpt(IOMMUFDBackend *be, uint32_t dev_id,
uint32_t pt_id, uint32_t flags,
uint32_t data_type, uint32_t data_len,
void *data_ptr, uint32_t *out_hwpt,
Error **errp)
{
int ret, fd = be->fd;
struct iommu_hwpt_alloc alloc_hwpt = {
.size = sizeof(struct iommu_hwpt_alloc),
.flags = flags,
.dev_id = dev_id,
.pt_id = pt_id,
.data_type = data_type,
.data_len = data_len,
.data_uptr = (uintptr_t)data_ptr,
};
ret = ioctl(fd, IOMMU_HWPT_ALLOC, &alloc_hwpt);
trace_iommufd_backend_alloc_hwpt(fd, dev_id, pt_id, flags, data_type,
data_len, (uintptr_t)data_ptr,
alloc_hwpt.out_hwpt_id, ret);
if (ret) {
error_setg_errno(errp, errno, "Failed to allocate hwpt");
return false;
}
*out_hwpt = alloc_hwpt.out_hwpt_id;
return true;
}
bool iommufd_backend_set_dirty_tracking(IOMMUFDBackend *be,
uint32_t hwpt_id, bool start,
Error **errp)
{
int ret;
struct iommu_hwpt_set_dirty_tracking set_dirty = {
.size = sizeof(set_dirty),
.hwpt_id = hwpt_id,
.flags = start ? IOMMU_HWPT_DIRTY_TRACKING_ENABLE : 0,
};
ret = ioctl(be->fd, IOMMU_HWPT_SET_DIRTY_TRACKING, &set_dirty);
trace_iommufd_backend_set_dirty(be->fd, hwpt_id, start, ret ? errno : 0);
if (ret) {
error_setg_errno(errp, errno,
"IOMMU_HWPT_SET_DIRTY_TRACKING(hwpt_id %u) failed",
hwpt_id);
return false;
}
return true;
}
bool iommufd_backend_get_dirty_bitmap(IOMMUFDBackend *be,
uint32_t hwpt_id,
uint64_t iova, ram_addr_t size,
uint64_t page_size, uint64_t *data,
Error **errp)
{
int ret;
struct iommu_hwpt_get_dirty_bitmap get_dirty_bitmap = {
.size = sizeof(get_dirty_bitmap),
.hwpt_id = hwpt_id,
.iova = iova,
.length = size,
.page_size = page_size,
.data = (uintptr_t)data,
};
ret = ioctl(be->fd, IOMMU_HWPT_GET_DIRTY_BITMAP, &get_dirty_bitmap);
trace_iommufd_backend_get_dirty_bitmap(be->fd, hwpt_id, iova, size,
page_size, ret ? errno : 0);
if (ret) {
error_setg_errno(errp, errno,
"IOMMU_HWPT_GET_DIRTY_BITMAP (iova: 0x%"HWADDR_PRIx
" size: 0x"RAM_ADDR_FMT") failed", iova, size);
return false;
}
return true;
}
bool iommufd_backend_get_device_info(IOMMUFDBackend *be, uint32_t devid,
uint32_t *type, void *data, uint32_t len,
uint64_t *caps, Error **errp)
{
struct iommu_hw_info info = {
.size = sizeof(info),
.dev_id = devid,
.data_len = len,
.data_uptr = (uintptr_t)data,
};
if (ioctl(be->fd, IOMMU_GET_HW_INFO, &info)) {
error_setg_errno(errp, errno, "Failed to get hardware info");
return false;
}
g_assert(type);
*type = info.out_data_type;
g_assert(caps);
*caps = info.out_capabilities;
return true;
}
static int hiod_iommufd_get_cap(HostIOMMUDevice *hiod, int cap, Error **errp)
{
HostIOMMUDeviceCaps *caps = &hiod->caps;
switch (cap) {
case HOST_IOMMU_DEVICE_CAP_IOMMU_TYPE:
return caps->type;
case HOST_IOMMU_DEVICE_CAP_AW_BITS:
return vfio_device_get_aw_bits(hiod->agent);
default:
error_setg(errp, "%s: unsupported capability %x", hiod->name, cap);
return -EINVAL;
}
}
static void hiod_iommufd_class_init(ObjectClass *oc, void *data)
{
HostIOMMUDeviceClass *hioc = HOST_IOMMU_DEVICE_CLASS(oc);
hioc->get_cap = hiod_iommufd_get_cap;
};
static const TypeInfo types[] = {
{
.name = TYPE_IOMMUFD_BACKEND,
.parent = TYPE_OBJECT,
.instance_size = sizeof(IOMMUFDBackend),
.instance_init = iommufd_backend_init,
.instance_finalize = iommufd_backend_finalize,
.class_size = sizeof(IOMMUFDBackendClass),
.class_init = iommufd_backend_class_init,
.interfaces = (InterfaceInfo[]) {
{ TYPE_USER_CREATABLE },
{ }
}
}, {
.name = TYPE_HOST_IOMMU_DEVICE_IOMMUFD,
.parent = TYPE_HOST_IOMMU_DEVICE,
.class_init = hiod_iommufd_class_init,
.abstract = true,
}
};
DEFINE_TYPES(types)
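For reference, the object deleted above is typically wired to a VFIO device like this (the PCI address is illustrative):

    qemu-system-x86_64 \
        -object iommufd,id=iommufd0 \
        -device vfio-pci,host=0000:02:00.0,iommufd=iommufd0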

View File

@@ -10,15 +10,9 @@ system_ss.add([files(
'confidential-guest-support.c',
), numa])
if host_os != 'windows'
system_ss.add(files('rng-random.c'))
system_ss.add(files('hostmem-file.c'))
system_ss.add([files('hostmem-shm.c'), rt])
endif
if host_os == 'linux'
system_ss.add(files('hostmem-memfd.c'))
system_ss.add(files('host_iommu_device.c'))
endif
system_ss.add(when: 'CONFIG_POSIX', if_true: files('rng-random.c'))
system_ss.add(when: 'CONFIG_POSIX', if_true: files('hostmem-file.c'))
system_ss.add(when: 'CONFIG_LINUX', if_true: files('hostmem-memfd.c'))
if keyutils.found()
system_ss.add(keyutils, files('cryptodev-lkcf.c'))
endif
@@ -26,13 +20,10 @@ if have_vhost_user
system_ss.add(when: 'CONFIG_VIRTIO', if_true: files('vhost-user.c'))
endif
system_ss.add(when: 'CONFIG_VIRTIO_CRYPTO', if_true: files('cryptodev-vhost.c'))
system_ss.add(when: 'CONFIG_IOMMUFD', if_true: files('iommufd.c'))
if have_vhost_user_crypto
system_ss.add(when: 'CONFIG_VIRTIO_CRYPTO', if_true: files('cryptodev-vhost-user.c'))
endif
system_ss.add(when: gio, if_true: files('dbus-vmstate.c'))
system_ss.add(when: 'CONFIG_SGX', if_true: files('hostmem-epc.c'))
system_ss.add(when: 'CONFIG_SPDM_SOCKET', if_true: files('spdm-socket.c'))
subdir('tpm')

View File

@@ -75,7 +75,10 @@ static void rng_random_opened(RngBackend *b, Error **errp)
error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
"filename", "a valid filename");
} else {
s->fd = qemu_open(s->filename, O_RDONLY | O_NONBLOCK, errp);
s->fd = qemu_open_old(s->filename, O_RDONLY | O_NONBLOCK);
if (s->fd == -1) {
error_setg_file_open(errp, errno, s->filename);
}
}
}
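The two open calls above capture the whole API difference: the errp-aware qemu_open() folds error reporting into the call itself. A minimal sketch:

    s->fd = qemu_open(s->filename, O_RDONLY | O_NONBLOCK, errp);
    if (s->fd == -1) {
        return;    /* qemu_open() has already populated errp */
    }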

View File

@@ -1,216 +0,0 @@
/* SPDX-License-Identifier: BSD-3-Clause */
/*
* QEMU SPDM socket support
*
* This is based on:
* https://github.com/DMTF/spdm-emu/blob/07c0a838bcc1c6207c656ac75885c0603e344b6f/spdm_emu/spdm_emu_common/command.c
* but has been re-written to match QEMU style
*
* Copyright (c) 2021, DMTF. All rights reserved.
* Copyright (c) 2023. Western Digital Corporation or its affiliates.
*/
#include "qemu/osdep.h"
#include "sysemu/spdm-socket.h"
#include "qapi/error.h"
static bool read_bytes(const int socket, uint8_t *buffer,
size_t number_of_bytes)
{
ssize_t number_received = 0;
ssize_t result;
while (number_received < number_of_bytes) {
result = recv(socket, buffer + number_received,
number_of_bytes - number_received, 0);
if (result <= 0) {
return false;
}
number_received += result;
}
return true;
}
static bool read_data32(const int socket, uint32_t *data)
{
bool result;
result = read_bytes(socket, (uint8_t *)data, sizeof(uint32_t));
if (!result) {
return result;
}
*data = ntohl(*data);
return true;
}
static bool read_multiple_bytes(const int socket, uint8_t *buffer,
uint32_t *bytes_received,
uint32_t max_buffer_length)
{
uint32_t length;
bool result;
result = read_data32(socket, &length);
if (!result) {
return result;
}
if (length > max_buffer_length) {
return false;
}
if (bytes_received) {
*bytes_received = length;
}
if (length == 0) {
return true;
}
return read_bytes(socket, buffer, length);
}
static bool receive_platform_data(const int socket,
uint32_t transport_type,
uint32_t *command,
uint8_t *receive_buffer,
uint32_t *bytes_to_receive)
{
bool result;
uint32_t response;
uint32_t bytes_received;
result = read_data32(socket, &response);
if (!result) {
return result;
}
*command = response;
result = read_data32(socket, &transport_type);
if (!result) {
return result;
}
bytes_received = 0;
result = read_multiple_bytes(socket, receive_buffer, &bytes_received,
*bytes_to_receive);
if (!result) {
return result;
}
*bytes_to_receive = bytes_received;
return result;
}
static bool write_bytes(const int socket, const uint8_t *buffer,
uint32_t number_of_bytes)
{
ssize_t number_sent = 0;
ssize_t result;
while (number_sent < number_of_bytes) {
result = send(socket, buffer + number_sent,
number_of_bytes - number_sent, 0);
if (result == -1) {
return false;
}
number_sent += result;
}
return true;
}
static bool write_data32(const int socket, uint32_t data)
{
data = htonl(data);
return write_bytes(socket, (uint8_t *)&data, sizeof(uint32_t));
}
static bool write_multiple_bytes(const int socket, const uint8_t *buffer,
uint32_t bytes_to_send)
{
bool result;
result = write_data32(socket, bytes_to_send);
if (!result) {
return result;
}
return write_bytes(socket, buffer, bytes_to_send);
}
static bool send_platform_data(const int socket,
uint32_t transport_type, uint32_t command,
const uint8_t *send_buffer, size_t bytes_to_send)
{
bool result;
result = write_data32(socket, command);
if (!result) {
return result;
}
result = write_data32(socket, transport_type);
if (!result) {
return result;
}
return write_multiple_bytes(socket, send_buffer, bytes_to_send);
}
int spdm_socket_connect(uint16_t port, Error **errp)
{
int client_socket;
struct sockaddr_in server_addr;
client_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
if (client_socket < 0) {
error_setg(errp, "cannot create socket: %s", strerror(errno));
return -1;
}
memset((char *)&server_addr, 0, sizeof(server_addr));
server_addr.sin_family = AF_INET;
server_addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
server_addr.sin_port = htons(port);
if (connect(client_socket, (struct sockaddr *)&server_addr,
sizeof(server_addr)) < 0) {
error_setg(errp, "cannot connect: %s", strerror(errno));
close(client_socket);
return -1;
}
return client_socket;
}
uint32_t spdm_socket_rsp(const int socket, uint32_t transport_type,
void *req, uint32_t req_len,
void *rsp, uint32_t rsp_len)
{
uint32_t command;
bool result;
result = send_platform_data(socket, transport_type,
SPDM_SOCKET_COMMAND_NORMAL,
req, req_len);
if (!result) {
return 0;
}
result = receive_platform_data(socket, transport_type, &command,
(uint8_t *)rsp, &rsp_len);
if (!result) {
return 0;
}
assert(command != 0);
return rsp_len;
}
void spdm_socket_close(const int socket, uint32_t transport_type)
{
send_platform_data(socket, transport_type,
SPDM_SOCKET_COMMAND_SHUTDOWN, NULL, 0);
}
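A usage sketch for the deleted helpers (the port, transport constant, and request bytes are assumptions for illustration):

    Error *err = NULL;
    uint8_t req[] = { 0x10, 0x84, 0x00, 0x00 };   /* hypothetical SPDM request */
    uint8_t rsp[4096];
    int fd = spdm_socket_connect(2323, &err);     /* assumed spdm-emu port */

    if (fd >= 0) {
        uint32_t got = spdm_socket_rsp(fd, SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE,
                                       req, sizeof(req), rsp, sizeof(rsp));
        /* got == 0 indicates a transport failure; otherwise rsp holds got bytes */
        spdm_socket_close(fd, SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE);
    }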

View File

@@ -904,7 +904,7 @@ static void tpm_emulator_vm_state_change(void *opaque, bool running,
trace_tpm_emulator_vm_state_change(running, state);
if (!running || !tpm_emu->relock_storage) {
if (!running || state != RUN_STATE_RUNNING || !tpm_emu->relock_storage) {
return;
}
@@ -939,7 +939,7 @@ static const VMStateDescription vmstate_tpm_emulator = {
.version_id = 0,
.pre_save = tpm_emulator_pre_save,
.post_load = tpm_emulator_post_load,
.fields = (const VMStateField[]) {
.fields = (VMStateField[]) {
VMSTATE_UINT32(state_blobs.permanent_flags, TPMEmulator),
VMSTATE_UINT32(state_blobs.permanent.size, TPMEmulator),
VMSTATE_VBUFFER_ALLOC_UINT32(state_blobs.permanent.buffer,

View File

@@ -339,11 +339,10 @@ void tpm_util_show_buffer(const unsigned char *buffer,
size_t len, i;
char *line_buffer, *p;
if (!trace_event_get_state_backends(TRACE_TPM_UTIL_SHOW_BUFFER_CONTENT)) {
if (!trace_event_get_state_backends(TRACE_TPM_UTIL_SHOW_BUFFER)) {
return;
}
len = MIN(tpm_cmd_get_size(buffer), buffer_size);
trace_tpm_util_show_buffer_header(string, len);
/*
* allocate enough room for 3 chars per buffer entry plus a
@@ -357,7 +356,7 @@ void tpm_util_show_buffer(const unsigned char *buffer,
}
p += sprintf(p, "%.2X ", buffer[i]);
}
trace_tpm_util_show_buffer_content(line_buffer);
trace_tpm_util_show_buffer(string, len, line_buffer);
g_free(line_buffer);
}

View File

@@ -10,8 +10,7 @@ tpm_util_get_buffer_size_len(uint32_t len, size_t expected) "tpm_resp->len = %u,
tpm_util_get_buffer_size_hdr_len2(uint32_t len, size_t expected) "tpm2_resp->hdr.len = %u, expected = %zu"
tpm_util_get_buffer_size_len2(uint32_t len, size_t expected) "tpm2_resp->len = %u, expected = %zu"
tpm_util_get_buffer_size(size_t len) "buffersize of device: %zu"
tpm_util_show_buffer_header(const char *direction, size_t len) "direction: %s len: %zu"
tpm_util_show_buffer_content(const char *buf) "%s"
tpm_util_show_buffer(const char *direction, size_t len, const char *buf) "direction: %s len: %zu\n%s"
# tpm_emulator.c
tpm_emulator_set_locality(uint8_t locty) "setting locality to %d"

View File

@@ -5,16 +5,3 @@ dbus_vmstate_pre_save(void)
dbus_vmstate_post_load(int version_id) "version_id: %d"
dbus_vmstate_loading(const char *id) "id: %s"
dbus_vmstate_saving(const char *id) "id: %s"
# iommufd.c
iommufd_backend_connect(int fd, bool owned, uint32_t users) "fd=%d owned=%d users=%d"
iommufd_backend_disconnect(int fd, uint32_t users) "fd=%d users=%d"
iommu_backend_set_fd(int fd) "pre-opened /dev/iommu fd=%d"
iommufd_backend_map_dma(int iommufd, uint32_t ioas, uint64_t iova, uint64_t size, void *vaddr, bool readonly, int ret) " iommufd=%d ioas=%d iova=0x%"PRIx64" size=0x%"PRIx64" addr=%p readonly=%d (%d)"
iommufd_backend_unmap_dma_non_exist(int iommufd, uint32_t ioas, uint64_t iova, uint64_t size, int ret) " Unmap nonexistent mapping: iommufd=%d ioas=%d iova=0x%"PRIx64" size=0x%"PRIx64" (%d)"
iommufd_backend_unmap_dma(int iommufd, uint32_t ioas, uint64_t iova, uint64_t size, int ret) " iommufd=%d ioas=%d iova=0x%"PRIx64" size=0x%"PRIx64" (%d)"
iommufd_backend_alloc_ioas(int iommufd, uint32_t ioas) " iommufd=%d ioas=%d"
iommufd_backend_alloc_hwpt(int iommufd, uint32_t dev_id, uint32_t pt_id, uint32_t flags, uint32_t hwpt_type, uint32_t len, uint64_t data_ptr, uint32_t out_hwpt_id, int ret) " iommufd=%d dev_id=%u pt_id=%u flags=0x%x hwpt_type=%u len=%u data_ptr=0x%"PRIx64" out_hwpt=%u (%d)"
iommufd_backend_free_id(int iommufd, uint32_t id, int ret) " iommufd=%d id=%d (%d)"
iommufd_backend_set_dirty(int iommufd, uint32_t hwpt_id, bool start, int ret) " iommufd=%d hwpt=%u enable=%d (%d)"
iommufd_backend_get_dirty_bitmap(int iommufd, uint32_t hwpt_id, uint64_t iova, uint64_t size, uint64_t page_size, int ret) " iommufd=%d hwpt=%u iova=0x%"PRIx64" size=0x%"PRIx64" page_size=0x%"PRIx64" (%d)"

block.c: file diff suppressed because it is too large.

View File

@@ -356,7 +356,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
BlockDriverState *target, int64_t speed,
MirrorSyncMode sync_mode, BdrvDirtyBitmap *sync_bitmap,
BitmapSyncMode bitmap_mode,
bool compress, bool discard_source,
bool compress,
const char *filter_node_name,
BackupPerf *perf,
BlockdevOnError on_source_error,
@@ -457,8 +457,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
goto error;
}
cbw = bdrv_cbw_append(bs, target, filter_node_name, discard_source,
&bcs, errp);
cbw = bdrv_cbw_append(bs, target, filter_node_name, &bcs, errp);
if (!cbw) {
goto error;
}
@@ -497,7 +496,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
block_copy_set_speed(bcs, speed);
/* Required permissions are taken by copy-before-write filter target */
bdrv_graph_wrlock();
bdrv_graph_wrlock(target);
block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
&error_abort);
bdrv_graph_wrunlock();

View File

@@ -1073,7 +1073,7 @@ static BlockDriver bdrv_blkdebug = {
.is_filter = true,
.bdrv_parse_filename = blkdebug_parse_filename,
.bdrv_open = blkdebug_open,
.bdrv_file_open = blkdebug_open,
.bdrv_close = blkdebug_close,
.bdrv_reopen_prepare = blkdebug_reopen_prepare,
.bdrv_child_perm = blkdebug_child_perm,

Some files were not shown because too many files have changed in this diff Show More