Compare commits

...

74 Commits

Author SHA1 Message Date
Xiaoyao Li
d20f93da31 docs: Add TDX documentation
Add docs/system/i386/tdx.rst for TDX support, and add tdx in
confidential-guest-support.rst

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>

---
Changes since v1:
 - Add prerequisite of private gmem;
 - update example command to launch TD;

Changes since RFC v4:
 - add the restriction that kernel-irqchip must be split
2023-11-15 00:57:10 -05:00
Sean Christopherson
18f152401f i386/tdx: Don't get/put guest state for TDX VMs
Don't get/put state of TDX VMs since accessing/mutating guest state of
production TDs is not supported.

Note, it will be allowed for a debug TD. Corresponding support will be
introduced when debug TD support is implemented in the future.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
5c8d76a6e4 i386/tdx: Skip kvm_put_apicbase() for TDs
KVM doesn't allow writing to MSR_IA32_APICBASE for TDs.
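
A minimal sketch of the idea, reusing the existing kvm_put_apicbase() helper
in target/i386/kvm/kvm.c and the is_tdx_vm() helper from this series:

    void kvm_put_apicbase(X86CPU *cpu, uint64_t value)
    {
        int ret;

        /* TDs don't allow the VMM to write MSR_IA32_APICBASE; skip it */
        if (is_tdx_vm()) {
            return;
        }

        ret = kvm_put_one_msr(cpu, MSR_IA32_APICBASE, value);
        assert(ret == 1);
    }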

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
f226ab5a34 i386/tdx: Only configure MSR_IA32_UCODE_REV in kvm_init_msrs() for TDs
For TDs, only MSR_IA32_UCODE_REV in kvm_init_msrs() can be configured
by the VMM; the features enumerated/controlled by the other MSRs in
kvm_init_msrs() are not under VMM control.

Only configure MSR_IA32_UCODE_REV for TDs.
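
A hedged sketch of the resulting shape of kvm_init_msrs(); the exact set of
skipped MSRs is abbreviated here to one example:

    static void kvm_init_msrs(X86CPU *cpu)
    {
        CPUX86State *env = &cpu->env;

        kvm_msr_buf_reset(cpu);

        if (has_msr_ucode_rev) {
            kvm_msr_entry_add(cpu, MSR_IA32_UCODE_REV, cpu->ucode_rev);
        }

        /* Feature MSRs such as MSR_IA32_ARCH_CAPABILITIES are not under
         * VMM control for TDs, so only configure them for non-TD VMs. */
        if (!is_tdx_vm() && has_msr_arch_capabs) {
            kvm_msr_entry_add(cpu, MSR_IA32_ARCH_CAPABILITIES,
                              env->features[FEAT_ARCH_CAPABILITIES]);
        }

        assert(kvm_buf_set_msrs(cpu) == 0);
    }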

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Isaku Yamahata
e954cd4b22 i386/tdx: Don't synchronize guest tsc for TDs
The TSC of TDs is not accessible and KVM doesn't allow access to
MSR_IA32_TSC for TDs. To avoid the assert() in kvm_get_tsc, make
kvm_synchronize_all_tsc() a no-op for TDs.
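
A minimal sketch, reusing the existing kvm_synchronize_all_tsc() shape:

    void kvm_synchronize_all_tsc(void)
    {
        CPUState *cpu;

        /* The TSC of a TD is inaccessible, so don't touch it */
        if (kvm_enabled() && !is_tdx_vm()) {
            CPU_FOREACH(cpu) {
                run_on_cpu(cpu, do_kvm_synchronize_tsc, RUN_ON_CPU_NULL);
            }
        }
    }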

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Connor Kuehl <ckuehl@redhat.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Isaku Yamahata
2dbde0df24 hw/i386: add option to forcibly report edge trigger in acpi tables
When level trigger isn't supported on the x86 platform,
forcibly report edge trigger in the ACPI tables.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
9faa2b03a9 hw/i386: add eoi_intercept_unsupported member to X86MachineState
Add a new bool member, eoi_intercept_unsupported, to X86MachineState
with default value false. Set it to true for TDX VMs.

The inability to intercept EOI makes it impossible to re-inject an
emulated level-triggered interrupt while the level is still kept active,
which affects interrupt controller emulation.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
d04ea558aa i386/tdx: LMCE is not supported for TDX
LMCE is not supported for TDX since KVM doesn't provide emulation of
MSR_IA32_FEAT_CTL.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
378e61b145 i386/tdx: Don't allow system reset for TDX VMs
TDX CPU state is protected, and thus vcpu state cannot be reset by the VMM.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
fdd53e1a81 i386/tdx: Disable PIC for TDX VMs
Legacy PIC (8259) cannot be supported for TDX VMs since the TDX module
doesn't allow direct interrupt injection.  Using posted interrupts for
the PIC is not a viable option either, as the guest BIOS/kernel will not
do EOI for PIC IRQs, i.e., it will leave the vIRR bit set.

Hence, disable the PIC for TDX VMs and error out if the user requests one.
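
A sketch of the check, assuming it runs during TDX machine initialization
(x86ms->pic is the existing OnOffAuto machine property):

    if (x86ms->pic == ON_OFF_AUTO_ON) {
        error_report("TDX VM doesn't support PIC");
        exit(1);
    }
    /* default/auto: silently disable the PIC for TDs */
    x86ms->pic = ON_OFF_AUTO_OFF;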

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
e1ce29aa33 i386/tdx: Disable SMM for TDX VMs
TDX doesn't support SMM, and the VMM cannot emulate SMM for TDX VMs
because it cannot manipulate the TDX VM's memory.

Disable SMM for TDX VMs and error out if the user requests to enable SMM.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Isaku Yamahata
ffe9a6dccb q35: Introduce smm_ranges property for q35-pci-host
Add a q35 property to check whether or not SMM ranges, e.g. SMRAM, TSEG,
etc... exist for the target platform.  TDX doesn't support SMM and doesn't
play nice with QEMU modifying related guest memory ranges.

Signed-off-by: Isaku Yamahata <isaku.yamahata@linux.intel.com>
Co-developed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Isaku Yamahata
7163da7797 pci-host/q35: Move PAM initialization above SMRAM initialization
In mch_realize(), process PAM initialization before SMRAM initialization so
that a later patch can skip all the SMRAM-related code with a single check.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
a5368c6202 i386/tdx: Wire TDX_REPORT_FATAL_ERROR with GuestPanic facility
Integrate TDX's TDX_REPORT_FATAL_ERROR into the QEMU GuestPanic facility.

Originated-from: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
Changes from v2:
- Add documentation of new type and struct (Daniel)
- refine the error message handling (Daniel)
2023-11-15 00:57:10 -05:00
Xiaoyao Li
f597afe309 i386/tdx: Handle TDG.VP.VMCALL<REPORT_FATAL_ERROR>
TD guest can use TDG.VP.VMCALL<REPORT_FATAL_ERROR> to request termination
with error message encoded in GPRs.

Parse and print the error message, and terminate the TD guest in the
handler.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Isaku Yamahata
a23e29229e i386/tdx: Limit the range size for MapGPA
If the range for TDG.VP.VMCALL<MapGPA> is too large, process only a limited
size and return a retry error.  It's bad for the VMM to block vcpu execution
for too long, e.g., on the order of seconds; it results in too many missed
timer interrupts.
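
A hypothetical sketch of the clamping; the threshold constant and the use of
the GHCI retry status are assumptions:

    #define TDX_MAP_GPA_MAX_LEN (32 * MiB)   /* assumed per-call bound */

    bool retry = false;

    if (size > TDX_MAP_GPA_MAX_LEN) {
        size = TDX_MAP_GPA_MAX_LEN;          /* convert only a bounded chunk */
        retry = true;                        /* guest re-issues MapGPA for the rest */
    }
    if (kvm_convert_memory(gpa, size, to_private) == 0 && retry) {
        vmcall->status_code = TDG_VP_VMCALL_RETRY;
    }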

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Isaku Yamahata
addb6b676d i386/tdx: handle TDG.VP.VMCALL<MapGPA> hypercall
MapGPA is a hypercall to convert a GPA between private and shared.
As the conversion function is already implemented as kvm_convert_memory,
wire it to TDX hypercall exit.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Chenyi Qiang
824f58fdba i386/tdx: setup a timer for the qio channel
To avoid waiting indefinitely for a response from the QGS server, set up a
timer for the transaction. On timeout, treat it as an error and interrupt
the guest. Define the time threshold as 30s at present; it may be changed
later if that proves inappropriate.

Extract the common cleanup code to make it clearer.
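
A sketch using QEMU's timer API; the callback and field names are
assumptions, the 30s threshold is from the message above:

    #define TRANSACTION_TIMEOUT_MS 30000     /* 30s, may be tuned later */

    /* arm a one-shot timeout when the QGS transaction starts */
    timer_init_ms(&t->timer, QEMU_CLOCK_VIRTUAL, getquote_timeout_cb, t);
    timer_mod(&t->timer,
              qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + TRANSACTION_TIMEOUT_MS);
    t->timer_armed = true;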

Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
Changes in v3:
 - Use t->timer_armed to track if t->timer is initialized;
2023-11-15 00:57:10 -05:00
Isaku Yamahata
9955e91601 i386/tdx: handle TDG.VP.VMCALL<GetQuote>
For GetQuote, delegate the request to the Quote Generation Service (QGS).
Add a property "quote-generation-socket" to tdx-guest, which is a property
of type SocketAddress, to specify the QGS.

On request, connect to the QGS, read the request buffer from shared guest
memory, send the request buffer to the server, store the response into
shared guest memory, and notify the TD guest by interrupt.

command line example:
  qemu-system-x86_64 \
    -object '{"qom-type":"tdx-guest","id":"tdx0","quote-generation-socket":{"type": "vsock", "cid":"2","port":"1234"}}' \
    -machine confidential-guest-support=tdx0

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Codeveloped-by: Chenyi Qiang <chenyi.qiang@intel.com>
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
Changes in v3:
- rename property "quote-generation-service" to "quote-generation-socket";
- change the type of "quote-generation-socket" from str to
  SocketAddress;
- squash next patch into this one;
2023-11-15 00:57:10 -05:00
Isaku Yamahata
80417c539e i386/tdx: handle TDG.VP.VMCALL<SetupEventNotifyInterrupt>
For SetupEventNotifyInterrupt, record the interrupt vector and the APIC ID
of the vcpu that received this TDVMCALL.

Later, QEMU can inject an interrupt with the given vector to the specific
vcpu that issued SetupEventNotifyInterrupt.
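
A sketch of the bookkeeping; the struct fields and vmcall layout are
assumptions based on this series:

    /* GHCI: R12 carries the interrupt vector (valid range 32-255) */
    int vector = vmcall->in_r12;

    if (vector >= 32 && vector <= 255) {
        tdx_guest->event_notify_vector = vector;
        tdx_guest->event_notify_apicid = cpu->apic_id;
        vmcall->status_code = TDG_VP_VMCALL_SUCCESS;
    }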

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
511b09205f i386/tdx: Finalize TDX VM
Invoke KVM_TDX_FINALIZE_VM to finalize the TD's measurement and make
the TD vCPUs runnable once machine initialization is complete.
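
A sketch, assuming the VM-scope ioctl wrapper introduced elsewhere in this
series (name assumed) and a machine-done notifier:

    static void tdx_finalize_vm(Notifier *notifier, void *unused)
    {
        /* the TD measurement becomes final; vCPUs become runnable */
        tdx_vm_ioctl(KVM_TDX_FINALIZE_VM, 0, NULL);
    }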

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
85a8152691 i386/tdx: Call KVM_TDX_INIT_VCPU to initialize TDX vcpu
A TDX vcpu needs to be initialized by SEAMCALL(TDH.VP.INIT), and KVM
provides the vcpu-level IOCTL KVM_TDX_INIT_VCPU for it.

KVM_TDX_INIT_VCPU needs the address of the TD HOB as input. Invoke it for
each vcpu after the HOB list is created.
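
A sketch of the per-vcpu call; the vcpu-scope wrapper name and the hob_addr
variable are assumptions:

    CPUState *cpu;

    CPU_FOREACH(cpu) {
        /* TDH.VP.INIT takes the TD HOB address as its input */
        tdx_vcpu_ioctl(cpu, KVM_TDX_INIT_VCPU, 0, (void *)hob_addr);
    }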

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Chao Peng
08c095f67a i386/tdx: register TDVF as private memory
Allocate private guest memfd memory for the BIOS if it's a TD VM.

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
d8da77238e memory: Introduce memory_region_init_ram_guest_memfd()
Introduce memory_region_init_ram_guest_memfd() to allocate a private
guest memfd at MemoryRegion initialization. It's for the use case of
TDVF, which must be private in the TDX case.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Isaku Yamahata
ab857b85a5 i386/tdx: Add TDVF memory via KVM_TDX_INIT_MEM_REGION
TDVF firmware (CODE and VARS) needs to be added/copied to the TD's private
memory via KVM_TDX_INIT_MEM_REGION, as do the TD HOB and TEMP memory.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>

---
Changes in v1:
  - rename variable @metadata to @flags
2023-11-15 00:57:10 -05:00
Xiaoyao Li
517fb00637 i386/tdx: Setup the TD HOB list
The TD HOB list is used to pass information from the VMM to TDVF. The TD
HOB must include the PHIT HOB and the Resource Descriptor HOB. More details
can be found in the TDVF specification and the PI specification.

Build the TD HOB in TDX's machine_init_done callback.

Co-developed-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Co-developed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>

---
Changes in v1:
  - drop the code of adding mmio resources since OVMF prepares all the
    MMIO hob itself.
2023-11-15 00:57:10 -05:00
Xiaoyao Li
359be44b10 headers: Add definitions from UEFI spec for volumes, resources, etc...
Add UEFI definitions for literals, enums, structs, GUIDs, etc... that
will be used by TDX to build the UEFI Hand-Off Block (HOB) that is passed
to the Trusted Domain Virtual Firmware (TDVF).

All values come from the UEFI specification [1], the PI spec [2] and the
TDVF design guide [3].

[1] UEFI Specification v2.10 https://uefi.org/sites/default/files/resources/UEFI_Spec_2_10_Aug29.pdf
[2] UEFI PI spec v1.8 https://uefi.org/sites/default/files/resources/UEFI_PI_Spec_1_8_March3.pdf
[3] https://software.intel.com/content/dam/develop/external/us/en/documents/tdx-virtual-firmware-design-guide-rev-1.pdf

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
0222b6f05d i386/tdx: Track RAM entries for TDX VM
The RAM of a TDX VM can be classified into two types:

 - TDX_RAM_UNACCEPTED: the default type of TDX memory, which needs to be
   accepted by the TDX guest before it can be used and will be all-zeros
   after being accepted.

 - TDX_RAM_ADDED: the RAM that is ADD'ed to the TD guest before running and
   can be used directly. E.g., the TD HOB and TEMP MEM needed by TDVF.

Maintain TdxRamEntries[], which grabs the initial RAM info from the e820
table and marks each RAM range with the default type TDX_RAM_UNACCEPTED.

Then turn the ranges of TD HOB and TEMP MEM to TDX_RAM_ADDED, since these
ranges will be ADD'ed before the TD runs and need not be accepted at
runtime.

The TdxRamEntries[] are later used to set up the memory TD resource HOB
that passes memory info from QEMU to TDVF.
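
A sketch of the tracking structure described above (field names are
assumptions):

    typedef enum TdxRamType {
        TDX_RAM_UNACCEPTED,   /* default; TD must accept it before use */
        TDX_RAM_ADDED,        /* ADD'ed before running, e.g. TD HOB, TEMP MEM */
    } TdxRamType;

    typedef struct TdxRamEntry {
        uint64_t address;
        uint64_t length;
        TdxRamType type;
    } TdxRamEntry;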

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
---
Changes in v3:
- use enum TdxRamType in struct TdxRamEntry; (Isaku)
- Fix the indentation; (Daniel)

Changes in v1:
  - simplify the algorithm of tdx_accept_ram_range() (Suggested-by: Gerd Hoffmann)
    (1) Change the existing entry to cover the accepted ram range.
    (2) If there is room before the accepted ram range add a
	TDX_RAM_UNACCEPTED entry for that.
    (3) If there is room after the accepted ram range add a
	TDX_RAM_UNACCEPTED entry for that.
2023-11-15 00:57:10 -05:00
Xiaoyao Li
8b7763f7ca i386/tdx: Track mem_ptr for each firmware entry of TDVF
For each TDVF section, QEMU needs to copy the content to guest private
memory via the KVM API (KVM_TDX_INIT_MEM_REGION).

Introduce a field @mem_ptr in TdxFirmwareEntry to track the memory
pointer of each TDVF section, so that QEMU can add/copy them to guest
private memory later.

TDVF sections can be classified into two groups:
 - The firmware itself, e.g., TDVF BFV and CFV, which is located
   separately from guest RAM; its memory pointer is the bios pointer.

 - Sections located in guest RAM, e.g., TEMP_MEM and TD_HOB;
   mmap a new memory range for them.

Register a machine_init_done callback to do this work.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
33edcc8426 i386/tdx: Don't initialize pc.rom for TDX VMs
For TDX, the addresses below 1MB are entirely general RAM, so there is no
need to initialize the pc.rom memory region for TDs.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
This is more of a workaround: for the q35 machine type, the real memslot
update for pc.rom (which requires memslot deletion) happens after
tdx_init_memory_region. It leads to the private memory ADD'ed before that
point getting lost. I haven't worked out a good solution to resolve the
ordering issue, so just skip the pc.rom setup to avoid the memslot deletion.
2023-11-15 00:57:10 -05:00
Xiaoyao Li
19195c88df i386/tdx: Skip BIOS shadowing setup
TDX doesn't support mapping different GPAs to the same private memory.
Thus, aliasing the top 128KB of the BIOS as isa-bios is not supported.

On the other hand, a TDX guest cannot enter real mode, so it works fine
without isa-bios.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
---
Changes in v1:
 - update commit message and comment to clarify
2023-11-15 00:57:10 -05:00
Xiaoyao Li
954b9e7a42 i386/tdx: Parse TDVF metadata for TDX VM
TDX cannot support the pflash device since it supports neither read-only
memslots nor emulation. Load TDVF (OVMF) with the -bios option for TDs.

When booting a TD, besides loading TDVF at the address below 4G, QEMU
needs to parse the TDVF metadata.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:10 -05:00
Isaku Yamahata
5aad9329d2 i386/tdvf: Introduce function to parse TDVF metadata
A TDX VM needs to boot with its specialized firmware, the Trusted Domain
Virtual Firmware (TDVF). QEMU needs to parse TDVF and map it into TD
guest memory prior to running the TDX VM.

The TDVF metadata in the TDVF image describes the structure of the
firmware; QEMU refers to it to set up memory for TDVF. Introduce the
function tdvf_parse_metadata() to parse the metadata from the TDVF image
and store the info of each TDVF section.

The TDX metadata is located via a TDX metadata offset block, which is a
GUID-ed structure. The data portion of the GUID structure contains only
a 4-byte field that is the offset of the TDX metadata from the end of
the firmware file.

Select X86_FW_OVMF when TDX is enabled to leverage the existing functions
for parsing and searching OVMF's GUID-ed structures.
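
A sketch of the lookup, reusing the existing pc_system_ovmf_table_find()
helper; the TdvfMetadata type, GUID macro and flash_ptr/flash_size
variables are assumptions:

    uint8_t *data;
    uint32_t offset;

    /* find the GUID-ed block that holds the metadata offset */
    if (!pc_system_ovmf_table_find(TDX_METADATA_OFFSET_GUID, &data, NULL)) {
        return NULL;
    }
    /* its 4-byte payload is the offset of the metadata from end of file */
    offset = le32_to_cpu(*(uint32_t *)data);
    return (TdvfMetadata *)(flash_ptr + flash_size - offset);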

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>

---
Changes in v1:
 - rename tdvf_parse_section_entry() to
   tdvf_parse_and_check_section_entry()
Changes in RFC v4:
 - rename TDX_METADATA_GUID to TDX_METADATA_OFFSET_GUID
2023-11-15 00:57:10 -05:00
Isaku Yamahata
583aae3302 kvm/tdx: Ignore memory conversion to shared of unassigned region
TDX requires vMMIO regions to be shared.  For KVM, an MMIO region is a
region that no kvm memslot is assigned to (except for in-kernel emulation).
qemu has the memory region for vMMIO at each device level.

While OVMF conservatively issues MapGPA(to-shared) on the 32bit PCI MMIO
region, qemu doesn't find the corresponding vMMIO region because this
happens before PCI device allocation, and memory_region_find() finds the
device region, not the PCI bus region.  It's safe to ignore
MapGPA(to-shared) here because when the guest accesses those regions it
uses GPAs with the shared bit set for vMMIO.  Ignore memory conversion
requests of non-assigned regions to shared and return success; otherwise
OVMF gets confused and panics.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Isaku Yamahata
80af3f2547 kvm/tdx: Don't complain when converting vMMIO region to shared
Because a vMMIO region needs to be a shared region, the guest TD may
explicitly convert such a region from private to shared.  Don't complain
about such conversions.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
ef90193248 i386/tdx: Make memory type private by default
By default (due to the recent UPM change), the restricted memory attribute
is shared.  Convert the memory region from shared to private at memory
slot creation time.

Add a kvm region registering function to check the flag and convert the
region, and add a memory listener to the TDX guest code to set the flag
on the possible memory regions.

Without this patch
- Secure-EPT violation on private area
- KVM_MEMORY_FAULT EXIT (kvm -> qemu)
- qemu converts the 4K page from shared to private
- Resume VCPU execution
- Secure-EPT violation again
- KVM resolves EPT Violation
This also prevents huge pages, because page conversion is done at 4K
granularity.  Although it's possible to merge 4K private mappings into a
2M large page, doing so slows guest boot.

With this patch
- After memory slot creation, convert the region from shared to private
- Secure-EPT violation on private area.
- KVM resolves EPT Violation

Originated-from: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
769b8158f6 kvm/memory: Introduce the infrastructure to set the default shared/private value
Introduce a new flag, RAM_DEFAULT_PRIVATE, for RAMBlock. It's used to
indicate the default attribute, private or not.

Set the RAM range to private explicitly when its default is private.

Originated-from: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:10 -05:00
Xiaoyao Li
d7db7d2ea1 i386/tdx: Set kvm_readonly_mem_enabled to false for TDX VM
TDX only supports readonly for shared memory, not for private memory.

QEMU has no idea whether a memslot is used as shared or private memory.
Thus just set kvm_readonly_mem_enabled to false for TDX VMs for simplicity.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
004db71f60 i386/tdx: Implement user specified tsc frequency
Reuse "-cpu,tsc-frequency=" to get user wanted tsc frequency and call VM
scope VM_SET_TSC_KHZ to set the tsc frequency of TD before KVM_TDX_INIT_VM.

Besides, sanity check the tsc frequency to be in the legal range and
legal granularity (required by TDX module).

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
---
Changes in v3:
- use @errp to report error info; (Daniel)

Changes in v1:
- Use the VM scope KVM_SET_TSC_KHZ to set the TSC frequency of the TD since
  the KVM side dropped the @tsc_khz field in struct kvm_tdx_init_vm
2023-11-15 00:57:09 -05:00
Isaku Yamahata
bbe50409ce i386/tdx: Allows mrconfigid/mrowner/mrownerconfig for TDX_INIT_VM
Three sha384 hash values of a TD, mrconfigid, mrowner and mrownerconfig,
can be provided for TDX attestation.

So far they have been hard-coded as 0. Now allow the user to specify those
values via the properties mrconfigid, mrowner and mrownerconfig, all in
base64 format.

Example:
-object tdx-guest, \
  mrconfigid=ASNFZ4mrze8BI0VniavN7wEjRWeJq83vASNFZ4mrze8BI0VniavN7wEjRWeJq83v,\
  mrowner=ASNFZ4mrze8BI0VniavN7wEjRWeJq83vASNFZ4mrze8BI0VniavN7wEjRWeJq83v,\
  mrownerconfig=ASNFZ4mrze8BI0VniavN7wEjRWeJq83vASNFZ4mrze8BI0VniavN7wEjRWeJq83v
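
A sketch of decoding one such property with GLib; the kvm_tdx_init_vm
field name is an assumption:

    gsize len;
    g_autofree guchar *data = g_base64_decode(tdx_guest->mrconfigid, &len);

    if (!data || len != 48) {        /* sha384 digest is 48 bytes */
        error_setg(errp, "invalid mrconfigid: expected 48 bytes in base64");
        return -1;
    }
    memcpy(init_vm->mrconfigid, data, len);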

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
Changes in v3:
 - use base64 encoding instead of hex-string;
2023-11-15 00:57:09 -05:00
Xiaoyao Li
2ac24a3f82 i386/tdx: Validate TD attributes
Validate the TD attributes against tdx_caps: fixed-0 bits must be zero and
fixed-1 bits must be set.

Besides, sanity check the attribute bits that are not yet supported by
QEMU, e.g., the debug bit; it will be allowed in the future when debug
TD support lands in QEMU.
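
A sketch of the fixed-bits check (semantics as stated above; the tdx_caps
field names follow this series):

    uint64_t attrs = tdx_guest->attributes;

    /* fixed-0 bits (clear in attrs_fixed0) must be 0;
     * fixed-1 bits (set in attrs_fixed1) must be 1 */
    if (((attrs & tdx_caps->attrs_fixed0) | tdx_caps->attrs_fixed1) != attrs) {
        error_setg(errp, "Invalid attributes 0x%" PRIx64, attrs);
        return -EINVAL;
    }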

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
---
Changes in v3:
- using error_setg() for error report; (Daniel)
2023-11-15 00:57:09 -05:00
Xiaoyao Li
7671a8d293 i386/tdx: Wire CPU features up with attributes of TD guest
For QEMU VMs, PKS is configured via CPUID_7_0_ECX_PKS and PMU is
configured by x86cpu->enable_pmu. Reuse the existing configuration
interface for TDX VMs.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:09 -05:00
Isaku Yamahata
5bf04c14d8 i386/tdx: Make sept_ve_disable set by default
For the TDX KVM use case, a Linux guest is the major one.  It requires
sept_ve_disable to be set, so make that the default for the main use case.
For other use cases, it can be enabled/disabled via the QEMU command line.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
f503878704 i386/tdx: Add property sept-ve-disable for tdx-guest object
Bit 28 of the TD attributes is named SEPT_VE_DISABLE. When set to 1, it
disables the conversion of EPT violations to #VE on guest TD access of
PENDING pages.

Some guest OSes (e.g., Linux TD guests) require this bit to be 1 and
otherwise refuse to boot.

Add the sept-ve-disable property to the tdx-guest object, so the user can
configure this bit.
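
A sketch of the property registration, using the standard QOM helper
(getter/setter names assumed):

    object_class_property_add_bool(oc, "sept-ve-disable",
                                   tdx_guest_get_sept_ve_disable,
                                   tdx_guest_set_sept_ve_disable);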

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
---
Changes in v3:
- update the comment of property @sept-ve-disable to make it more
  descriptive and use new format. (Daniel and Markus)
2023-11-15 00:57:09 -05:00
Xiaoyao Li
98f599ec0b i386/tdx: Initialize TDX before creating TD vcpus
Invoke KVM_TDX_INIT_VM in kvm_arch_pre_create_vcpu(); KVM_TDX_INIT_VM
configures global TD settings, e.g. the canonical CPUID config, and must
be executed prior to creating vCPUs.

Use kvm_x86_arch_cpuid() to set up the CPUID settings for the TDX VM.

Note, this doesn't address the fact that QEMU may change the CPUID
configuration when creating vCPUs, i.e. punts on refactoring QEMU to
provide a stable CPUID config prior to kvm_arch_init().

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
---
Changes in v3:
- Pass @errp in tdx_pre_create_vcpu() and pass error info to it. (Daniel)
2023-11-15 00:57:09 -05:00
Xiaoyao Li
a1b994d89a kvm: Introduce kvm_arch_pre_create_vcpu()
Introduce kvm_arch_pre_create_vcpu() to perform arch-dependent work prior
to creating any vcpu. This is for i386 TDX, because it needs to call
KVM_TDX_INIT_VM before creating any vcpu.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
---
Changes in v3:
- pass @errp to kvm_arch_pre_create_vcpu(); (Per Daniel)
2023-11-15 00:57:09 -05:00
Sean Christopherson
04fc588ea9 i386/kvm: Move architectural CPUID leaf generation to separate helper
Move the architectural (for lack of a better term) CPUID leaf generation
to a separate helper so that the generation code can be reused by TDX,
which needs to generate a canonical VM-scoped configuration.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
7e454d2ca4 i386/tdx: Integrate tdx_caps->attrs_fixed0/1 to tdx_cpuid_lookup
Some bits in TD attributes have corresponding CPUID feature bits. Reflect
the fixed0/1 restriction on TD attributes to their corresponding CPUID
bits in tdx_cpuid_lookup[] as well.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
b4a0470949 i386/tdx: Integrate tdx_caps->xfam_fixed0/1 into tdx_cpuid_lookup
KVM requires userspace to pass XFAM configuration via CPUID 0xD leaves.

Convert tdx_caps->xfam_fixed0/1 into the corresponding
tdx_cpuid_lookup[].tdx_fixed0/1 fields of the CPUID 0xD leaves. Thus the
requirement can be applied naturally.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
5fdefc08b3 i386/tdx: Update tdx_cpuid_lookup[].tdx_fixed0/1 by tdx_caps.cpuid_config[]
tdx_cpuid_lookup[].tdx_fixed0/1 is QEMU-maintained data which reflects the
TDX restrictions regarding how some CPUIDs are virtualized by TDX.

It's retrieved from the TDX spec. However, TDX may change some fixed fields
to configurable in the future. Update the tdx_cpuid_lookup[].tdx_fixed0/1
fields by removing the bits that are reported by the TDX module as
configurable. This adapts to updated TDX (module) versions automatically.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
d54122cefb i386/tdx: Adjust the supported CPUID based on TDX restrictions
According to Chapter "CPUID Virtualization" in TDX module spec, CPUID
bits of TD can be classified into 6 types:

------------------------------------------------------------------------
1 | As configured | configurable by VMM, independent of native value;
------------------------------------------------------------------------
2 | As configured | configurable by VMM if the bit is supported natively
    (if native)   | Otherwise it equals the native value (0).
------------------------------------------------------------------------
3 | Fixed         | fixed to 0/1
------------------------------------------------------------------------
4 | Native        | reflect the native value
------------------------------------------------------------------------
5 | Calculated    | calculated by TDX module.
------------------------------------------------------------------------
6 | Inducing #VE  | get #VE exception
------------------------------------------------------------------------

Note:
1. All the configurable XFAM-related features and TD-attribute-related
   features fall into type #2, and the fixed0/1 bits of XFAM and TD
   attributes fall into type #3.

2. For CPUID leaves not listed in the "CPUID virtualization Overview" table
   in the TDX module spec, the TDX module injects #VE to TDs when those are
   queried. In this case, TDs can request CPUID emulation from the VMM via
   TDVMCALL, and the values are fully controlled by the VMM.

Because the TDX module has its own virtualization policy on CPUID bits,
what is reported via KVM_GET_SUPPORTED_CPUID diverges from the CPUID bits
actually supported for TDs. In order to keep a consistent CPUID
configuration between the VMM and TDs, adjust the supported CPUID for TDs
based on the TDX restrictions.

Currently we only focus on the CPUID leaves recognized by QEMU's
feature_word_info[], which are indexed by a FeatureWord.

Introduce a TDX CPUID lookup table, which maintains one entry for each
FeatureWord. Each entry has the following fields:

 - tdx_fixed0/1: The bits that are fixed as 0/1;

 - vmm_fixup:   The bits that are configurable from the view of the TDX
                module but require VMM emulation when configured as
                enabled. They are not supported if the VMM doesn't report
                them as supported, so they need to be fixed up by checking
                whether the VMM supports them.

 - inducing_ve: TD gets #VE when querying this CPUID leaf. The result is
                totally configurable by VMM.

 - supported_on_ve: Valid only when @inducing_ve is true. It represents
                    the maximum feature set that can be emulated for TDs.

By applying TDX CPUID lookup table and TDX capabilities reported from
TDX module, the supported CPUID for TDs can be obtained from following
steps:

- get the base of VMM supported feature set;

- if the leaf is not a FeatureWord just return VMM's value without
  modification;

- if the leaf is an inducing_ve type, apply the supported_on_ve mask and
  return;

- include all native bits; this covers types #2 and #4, and parts of type #1
  (it also includes some unsupported bits, which the following steps
  correct);

- apply fixed0/1 to it (it covers #3, and rectifies the previous step);

- add configurable bits (it covers the other part of type #1);

- fix the ones in vmm_fixup;

- filter the ones that have a valid .supported field;

(Calculated type is ignored since it's determined at runtime).

Co-developed-by: Chenyi Qiang <chenyi.qiang@intel.com>
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
ef64621235 i386/tdx: Introduce is_tdx_vm() helper and cache tdx_guest object
TDX VMs need special handling in many places all around QEMU. Introduce an
is_tdx_vm() helper to query whether the current VM is a TDX VM.

Cache the tdx_guest object so there is no need to cast from ms->cgs every
time.
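
A minimal sketch, per the message:

    /* cached at TDX init time */
    static TdxGuest *tdx_guest;

    bool is_tdx_vm(void)
    {
        return !!tdx_guest;
    }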

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
---
changes in v3:
- replace object_dynamic_cast with TDX_GUEST();
2023-11-15 00:57:09 -05:00
Xiaoyao Li
575bfcd358 i386/tdx: Get tdx_capabilities via KVM_TDX_CAPABILITIES
KVM provides TDX capabilities via the sub-command KVM_TDX_CAPABILITIES of
IOCTL(KVM_MEMORY_ENCRYPT_OP). Get the capabilities when initializing the
TDX context. They will be used to validate the user's settings later.

Since there is no interface reporting how many cpuid configs are contained
in KVM_TDX_CAPABILITIES, QEMU starts with a known number and aborts when
it exceeds KVM_MAX_CPUID_ENTRIES.

Besides, introduce the interfaces to invoke TDX "ioctls" at different
scopes (KVM, VM and VCPU) in preparation.
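
A hedged sketch of the probing loop; the wrapper name, the struct layout
and the -E2BIG convention are assumptions:

    struct kvm_tdx_capabilities *caps = NULL;
    int r, nr = 6;                    /* known starting number, per v1 note */

    do {
        caps = g_realloc(caps, sizeof(*caps) +
                         nr * sizeof(caps->cpuid_configs[0]));
        caps->nr_cpuid_configs = nr;
        r = tdx_platform_ioctl(KVM_TDX_CAPABILITIES, 0, caps);
        nr *= 2;
    } while (r == -E2BIG && nr <= KVM_MAX_CPUID_ENTRIES);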

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
Changes in v3:
- rename __tdx_ioctl() to tdx_ioctl_internal()
- Pass errp in get_tdx_capabilities();

changes in v2:
  - Make the error message more clear;

changes in v1:
  - start from nr_cpuid_configs = 6 for the loop;
  - stop the loop when nr_cpuid_configs exceeds KVM_MAX_CPUID_ENTRIES;
2023-11-15 00:57:09 -05:00
Xiaoyao Li
38b04243ce i386/tdx: Implement tdx_kvm_init() to initialize TDX VM context
Introduce tdx_kvm_init() and invoke it in kvm_confidential_guest_init()
if it's a TDX VM.

Set ms->require_guest_memfd to require KVM guest memfd allocation for any
memory backend. More TDX-specific initialization will be added later.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
c64b4c4e28 target/i386: Introduce kvm_confidential_guest_init()
Introduce a separate function, kvm_confidential_guest_init(), which
dispatches to the specific confidential guest initialization function
based on the ms->cgs type.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
871c9f21ee target/i386: Parse TDX vm type
A TDX VM requires the VM type KVM_X86_TDX_VM to be passed to
kvm_ioctl(KVM_CREATE_VM).

If a tdx-guest object is specified as confidential-guest-support, like,

  qemu -machine ...,confidential-guest-support=tdx0 \
       -object tdx-guest,id=tdx0,...

the VM type is parsed as KVM_X86_TDX_VM.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
3770c1f6cb target/i386: Implement mc->kvm_type() to get VM type
Implement mc->kvm_type() for i386 machines. It provides a way for the user
to create a SW_PROTECTED_VM.

Also store the vm_type in MachineState so that other code can query what
the VM type is.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
636758fe40 i386: Introduce tdx-guest object
Introduce the tdx-guest object, which implements the interface of
CONFIDENTIAL_GUEST_SUPPORT and will be used to create TDX VMs (TDs) by
  qemu -machine ...,confidential-guest-support=tdx0	\
       -object tdx-guest,id=tdx0

It has only one member, 'attributes', with a fixed value of 0; it is not
configurable so far.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Markus Armbruster <armbru@redhat.com>
---
changes in v1
- make @attributes not user-settable
2023-11-15 00:57:09 -05:00
Xiaoyao Li
c9636b4bf5 *** HACK *** linux-headers: Update headers to pull in TDX API changes
Pull in recent TDX updates, which are not backwards compatible.

It's just to make this series runnable. It will be updated by script

	scripts/update-linux-headers.sh

once TDX support is upstreamed in the Linux kernel

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Isaku Yamahata
d39ad1ccfd trace/kvm: Add trace for page conversion between shared and private
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Chao Peng
552a4e57a8 kvm: handle KVM_EXIT_MEMORY_FAULT
Currently only KVM_MEMORY_EXIT_FLAG_PRIVATE in flags is valid when
KVM_EXIT_MEMORY_FAULT happens. It indicates that userspace needs to do
the memory conversion on the RAMBlock to turn the memory into the desired
attribute, i.e., private/shared.

Note, KVM_EXIT_MEMORY_FAULT makes sense only when the RAMBlock has
guest_memfd memory backend.

Note, KVM_EXIT_MEMORY_FAULT returns with -EFAULT, so special handling is
added.

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
105fa48cab physmem: Introduce ram_block_convert_range() for page conversion
It's used for discarding the opposite counterpart memory after a memory
conversion, for confidential guests.

When a page is converted from shared to private, the original shared
memory can be discarded via ram_block_discard_range().

When a page is converted from private to shared, the original private
memory is backed by guest_memfd. Introduce
ram_block_discard_guest_memfd_range() for discarding memory in
guest_memfd.
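
A sketch of the guest_memfd discard, assuming the fd can be punched like a
regular file and that rb->guest_memfd is the fd introduced earlier in this
series:

    int ram_block_discard_guest_memfd_range(RAMBlock *rb, uint64_t start,
                                            size_t length)
    {
        /* release the now-unused private pages backing this range */
        int ret = fallocate(rb->guest_memfd,
                            FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                            start, length);
        return ret < 0 ? -errno : 0;
    }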

Originally-from: Isaku Yamahata <isaku.yamahata@intel.com>
Codeveloped-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
9824769418 physmem: replace function name with __func__ in ram_block_discard_range()
Use __func__ to avoid hard-coded function name.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
02fda3dd28 physmem: Relax the alignment check of host_startaddr in ram_block_discard_range()
Commit d3a5038c46 ("exec: ram_block_discard_range") introduced
ram_block_discard_range(), which grabs some code from
ram_discard_range(). However, during the code movement, it changed the
alignment check of host_startaddr from qemu_host_page_size to
rb->page_size.

When a ramblock is backed by hugepages, this requires the startaddr to be
hugepage-size aligned, which is overkill. E.g., TDX's private-shared
page conversion is done at 4KB granularity. A shared page is discarded
when it gets converted to private, and when the shared page is backed by
hugepages it will fail this check.

So change the alignment check back to qemu_host_page_size.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
Changes in v3:
 - Newly added in v3;
2023-11-15 00:57:09 -05:00
Xiaoyao Li
609e0ca1d8 kvm: Introduce support for memory_attributes
Introduce the helper functions to set the attributes of a range of
memory to private or shared.

This is necessary to notify KVM of the private/shared attribute of each GPA
range. KVM needs this information to decide whether a GPA should be mapped
as hva-based shared memory or guest_memfd-based private memory.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Chao Peng
d21132fe6e kvm: Enable KVM_SET_USER_MEMORY_REGION2 for memslot
Switch to KVM_SET_USER_MEMORY_REGION2 when supported by KVM.

With KVM_SET_USER_MEMORY_REGION2, QEMU can set up a memory region backed
both by hva-based shared memory and guest-memfd-based private memory.

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
902dc0c4a7 HostMem: Add mechanism to opt in kvm guest memfd via MachineState
Add a new member "require_guest_memfd" to memory backends. When it's set
to true, it enables RAM_GUEST_MEMFD in ram_flags, and thus a private KVM
guest_memfd will be allocated during RAMBlock allocation.

The memory backend's @require_guest_memfd is wired to the
@require_guest_memfd field of MachineState.
MachineState::require_guest_memfd is supposed to be set by any VM that
requires KVM guest memfd as private memory, e.g., a TDX VM.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
7bd3bf6642 RAMBlock/guest_memfd: Enable KVM_GUEST_MEMFD_ALLOW_HUGEPAGE
KVM allows KVM_GUEST_MEMFD_ALLOW_HUGEPAGE for guest memfd. When the
flag is set, KVM tries to allocate memory with transparent hugepages
first and falls back to non-hugepage on failure.

However, KVM imposes one restriction: the size must be hugepage-size
aligned when KVM_GUEST_MEMFD_ALLOW_HUGEPAGE is set.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
v3:
 - New one in v3.
2023-11-15 00:57:09 -05:00
Xiaoyao Li
843bbbd03b RAMBlock: Add support of KVM private guest memfd
Add KVM guest_memfd support to RAMBlock so that both normal hva-based
memory and KVM guest-memfd-based private memory can be associated with one
RAMBlock.

Introduce a new flag, RAM_GUEST_MEMFD. When it's set, QEMU calls a KVM
ioctl to create a private guest_memfd during RAMBlock setup.

Note, RAM_GUEST_MEMFD is supposed to be set for memory backends of
confidential guests, such as TDX VM. How and when to set it for memory
backends will be implemented in the following patches.

Introduce memory_region_has_guest_memfd() to query if the MemoryRegion has
KVM guest_memfd allocated.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
Changes in v3:
- rename gmem to guest_memfd;
- close(guest_memfd) when RAMBlock is released; (Daniel P. Berrangé)
- Squash the patch that introduces memory_region_has_guest_memfd().
2023-11-15 00:57:09 -05:00
Xiaoyao Li
3091ad45a5 *** HACK *** linux-headers: Update headers to pull in gmem APIs
This patch needs to be updated by script

	scripts/update-linux-headers.sh

once gmem fd support is upstreamed in the Linux kernel.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
8d635c6681 trace/kvm: Split address space and slot id in trace_kvm_set_user_memory()
The upper 16 bits of kvm_userspace_memory_region::slot are the address
space ID. Parse it separately in trace_kvm_set_user_memory().

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
f5fd218755 i386/cpuid: Remove subleaf constraint on CPUID leaf 1F
There is no constraint that the subleaf index needs to be less than 64.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
d5591ff440 i386/cpuid: Decrease cpuid_i when skipping CPUID leaf 1F
Decrease the array index cpuid_i when CPUID leaf 1F is skipped; otherwise
there will be an all-zeroed CPUID entry with leaf 0 and subleaf 0, which
conflicts with the correct leaf 0.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
2023-11-15 00:57:09 -05:00
Xiaoyao Li
605d572f9c i386/pc: Drop pc_machine_kvm_type()
pc_machine_kvm_type() was introduced by commit e21be724ea ("i386/xen:
add pc_machine_kvm_type to initialize XEN_EMULATE mode") to do Xen
specific initialization by utilizing kvm_type method.

Commit eeedfe6c63 ("hw/xen: Simplify emulated Xen platform init")
moved the Xen-specific initialization to pc_basic_device_init().

There is no need to keep the PC-specific kvm_type() implementation
anymore. On the other hand, a later patch will implement the kvm_type()
method for all x86/i386 machines to support KVM_X86_SW_PROTECTED_VM.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
2023-11-15 00:57:09 -05:00
54 changed files with 3860 additions and 416 deletions

View File

@@ -101,6 +101,8 @@ bool kvm_msi_use_devid;
bool kvm_has_guest_debug;
static int kvm_sstep_flags;
static bool kvm_immediate_exit;
static bool kvm_guest_memfd_supported;
static uint64_t kvm_supported_memory_attributes;
static hwaddr kvm_max_slot_size = ~0;
static const KVMCapabilityInfo kvm_required_capabilites[] = {
@@ -292,34 +294,69 @@ int kvm_physical_memory_addr_from_host(KVMState *s, void *ram,
static int kvm_set_user_memory_region(KVMMemoryListener *kml, KVMSlot *slot, bool new)
{
KVMState *s = kvm_state;
-struct kvm_userspace_memory_region mem;
+struct kvm_userspace_memory_region2 mem;
static int cap_user_memory2 = -1;
int ret;
if (cap_user_memory2 == -1) {
cap_user_memory2 = kvm_check_extension(s, KVM_CAP_USER_MEMORY2);
}
if (!cap_user_memory2 && slot->guest_memfd >= 0) {
error_report("%s, KVM doesn't support KVM_CAP_USER_MEMORY2,"
" which is required by guest memfd!", __func__);
exit(1);
}
mem.slot = slot->slot | (kml->as_id << 16);
mem.guest_phys_addr = slot->start_addr;
mem.userspace_addr = (unsigned long)slot->ram;
mem.flags = slot->flags;
mem.guest_memfd = slot->guest_memfd;
mem.guest_memfd_offset = slot->guest_memfd_offset;
if (slot->memory_size && !new && (mem.flags ^ slot->old_flags) & KVM_MEM_READONLY) {
/* Set the slot size to 0 before setting the slot to the desired
* value. This is needed based on KVM commit 75d61fbc. */
mem.memory_size = 0;
-ret = kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
+if (cap_user_memory2) {
+    ret = kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION2, &mem);
+} else {
+    ret = kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
+}
if (ret < 0) {
goto err;
}
}
mem.memory_size = slot->memory_size;
-ret = kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
+if (cap_user_memory2) {
+    ret = kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION2, &mem);
+} else {
+    ret = kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
+}
slot->old_flags = mem.flags;
err:
-trace_kvm_set_user_memory(mem.slot, mem.flags, mem.guest_phys_addr,
-                          mem.memory_size, mem.userspace_addr, ret);
+trace_kvm_set_user_memory(mem.slot >> 16, (uint16_t)mem.slot, mem.flags,
+                          mem.guest_phys_addr, mem.memory_size,
+                          mem.userspace_addr, mem.guest_memfd,
+                          mem.guest_memfd_offset, ret);
if (ret < 0) {
error_report("%s: KVM_SET_USER_MEMORY_REGION failed, slot=%d,"
" start=0x%" PRIx64 ", size=0x%" PRIx64 ": %s",
__func__, mem.slot, slot->start_addr,
(uint64_t)mem.memory_size, strerror(errno));
if (cap_user_memory2) {
error_report("%s: KVM_SET_USER_MEMORY_REGION2 failed, slot=%d,"
" start=0x%" PRIx64 ", size=0x%" PRIx64 ","
" flags=0x%" PRIx32 ", guest_memfd=%" PRId32 ","
" guest_memfd_offset=0x%" PRIx64 ": %s",
__func__, mem.slot, slot->start_addr,
(uint64_t)mem.memory_size, mem.flags,
mem.guest_memfd, (uint64_t)mem.guest_memfd_offset,
strerror(errno));
} else {
error_report("%s: KVM_SET_USER_MEMORY_REGION failed, slot=%d,"
" start=0x%" PRIx64 ", size=0x%" PRIx64 ": %s",
__func__, mem.slot, slot->start_addr,
(uint64_t)mem.memory_size, strerror(errno));
}
}
return ret;
}
@@ -391,6 +428,11 @@ static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id)
return kvm_vm_ioctl(s, KVM_CREATE_VCPU, (void *)vcpu_id);
}
int __attribute__ ((weak)) kvm_arch_pre_create_vcpu(CPUState *cpu, Error **errp)
{
return 0;
}
int kvm_init_vcpu(CPUState *cpu, Error **errp)
{
KVMState *s = kvm_state;
@@ -399,15 +441,27 @@ int kvm_init_vcpu(CPUState *cpu, Error **errp)
trace_kvm_init_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
/*
* tdx_pre_create_vcpu() may call cpu_x86_cpuid(). It in turn may call
* kvm_vm_ioctl(). Set cpu->kvm_state in advance to avoid NULL pointer
* dereference.
*/
cpu->kvm_state = s;
ret = kvm_arch_pre_create_vcpu(cpu, errp);
if (ret < 0) {
cpu->kvm_state = NULL;
goto err;
}
ret = kvm_get_vcpu(s, kvm_arch_vcpu_id(cpu));
if (ret < 0) {
error_setg_errno(errp, -ret, "kvm_init_vcpu: kvm_get_vcpu failed (%lu)",
kvm_arch_vcpu_id(cpu));
cpu->kvm_state = NULL;
goto err;
}
cpu->kvm_fd = ret;
cpu->kvm_state = s;
cpu->vcpu_dirty = true;
cpu->dirty_pages = 0;
cpu->throttle_us_per_full = 0;
@@ -475,6 +529,9 @@ static int kvm_mem_flags(MemoryRegion *mr)
if (readonly && kvm_readonly_mem_allowed) {
flags |= KVM_MEM_READONLY;
}
if (memory_region_has_guest_memfd(mr)) {
flags |= KVM_MEM_PRIVATE;
}
return flags;
}
@@ -1266,6 +1323,44 @@ void kvm_set_max_memslot_size(hwaddr max_slot_size)
kvm_max_slot_size = max_slot_size;
}
static int kvm_set_memory_attributes(hwaddr start, hwaddr size, uint64_t attr)
{
struct kvm_memory_attributes attrs;
int r;
attrs.attributes = attr;
attrs.address = start;
attrs.size = size;
attrs.flags = 0;
r = kvm_vm_ioctl(kvm_state, KVM_SET_MEMORY_ATTRIBUTES, &attrs);
if (r) {
warn_report("%s: failed to set memory (0x%lx+%#zx) with attr 0x%lx error '%s'",
__func__, start, size, attr, strerror(errno));
}
return r;
}
int kvm_set_memory_attributes_private(hwaddr start, hwaddr size)
{
if (!(kvm_supported_memory_attributes & KVM_MEMORY_ATTRIBUTE_PRIVATE)) {
error_report("KVM doesn't support PRIVATE memory attribute\n");
return -EINVAL;
}
return kvm_set_memory_attributes(start, size, KVM_MEMORY_ATTRIBUTE_PRIVATE);
}
int kvm_set_memory_attributes_shared(hwaddr start, hwaddr size)
{
if (!(kvm_supported_memory_attributes & KVM_MEMORY_ATTRIBUTE_PRIVATE)) {
error_report("KVM doesn't support PRIVATE memory attribute\n");
return -EINVAL;
}
return kvm_set_memory_attributes(start, size, 0);
}
/* Called with KVMMemoryListener.slots_lock held */
static void kvm_set_phys_mem(KVMMemoryListener *kml,
MemoryRegionSection *section, bool add)
@@ -1362,6 +1457,9 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
mem->ram_start_offset = ram_start_offset;
mem->ram = ram;
mem->flags = kvm_mem_flags(mr);
mem->guest_memfd = mr->ram_block->guest_memfd;
mem->guest_memfd_offset = (uint8_t*)ram - mr->ram_block->host;
kvm_slot_init_dirty_bitmap(mem);
err = kvm_set_user_memory_region(kml, mem, true);
if (err) {
@@ -1369,6 +1467,16 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
strerror(-err));
abort();
}
if (memory_region_is_default_private(mr)) {
err = kvm_set_memory_attributes_private(start_addr, slot_size);
if (err) {
error_report("%s: failed to set memory attribute private: %s\n",
__func__, strerror(-err));
exit(1);
}
}
start_addr += slot_size;
ram_start_offset += slot_size;
ram += slot_size;
@@ -2396,6 +2504,11 @@ static int kvm_init(MachineState *ms)
}
s->as = g_new0(struct KVMAs, s->nr_as);
kvm_guest_memfd_supported = kvm_check_extension(s, KVM_CAP_GUEST_MEMFD);
ret = kvm_check_extension(s, KVM_CAP_MEMORY_ATTRIBUTES);
kvm_supported_memory_attributes = ret > 0 ? ret : 0;
if (object_property_find(OBJECT(current_machine), "kvm-type")) {
g_autofree char *kvm_type = object_property_get_str(OBJECT(current_machine),
"kvm-type",
@@ -2816,6 +2929,78 @@ static void kvm_eat_signals(CPUState *cpu)
} while (sigismember(&chkset, SIG_IPI));
}
int kvm_convert_memory(hwaddr start, hwaddr size, bool to_private)
{
MemoryRegionSection section;
ram_addr_t offset;
MemoryRegion *mr;
RAMBlock *rb;
void *addr;
int ret = -1;
trace_kvm_convert_memory(start, size, to_private ? "shared_to_private" : "private_to_shared");
section = memory_region_find(get_system_memory(), start, size);
mr = section.mr;
if (!mr) {
/*
* Ignore converting non-assigned region to shared.
*
* TDX requires vMMIO region to be shared to inject #VE to guest.
* OVMF conservatively issues MapGPA(shared) on the 32bit PCI MMIO region
* and the vIO-APIC 0xFEC00000 4K page.
* OVMF assigns the 32bit PCI MMIO region to
* [top of low memory: typically 3GB=0xC0000000, 0xFC000000)
*/
if (!to_private) {
ret = 0;
}
return ret;
}
if (memory_region_has_guest_memfd(mr)) {
if (to_private) {
ret = kvm_set_memory_attributes_private(start, size);
} else {
ret = kvm_set_memory_attributes_shared(start, size);
}
if (ret) {
memory_region_unref(section.mr);
return ret;
}
addr = memory_region_get_ram_ptr(section.mr) +
section.offset_within_region;
rb = qemu_ram_block_from_host(addr, false, &offset);
/*
* With KVM_SET_MEMORY_ATTRIBUTES by kvm_set_memory_attributes(),
* operation on underlying file descriptor is only for releasing
* unnecessary pages.
*/
ram_block_convert_range(rb, offset, size, to_private);
} else {
/*
* Because vMMIO region must be shared, guest TD may convert vMMIO
* region to shared explicitly. Don't complain about such a case. See
* memory_region_type() for checking if the region is MMIO region.
*/
if (!to_private &&
!memory_region_is_ram(mr) &&
!memory_region_is_ram_device(mr) &&
!memory_region_is_rom(mr) &&
!memory_region_is_romd(mr)) {
ret = 0;
} else {
warn_report("Convert non guest_memfd backed memory region "
"(0x%"HWADDR_PRIx" ,+ 0x%"HWADDR_PRIx") to %s",
start, size, to_private ? "private" : "shared");
}
}
memory_region_unref(section.mr);
return ret;
}
int kvm_cpu_exec(CPUState *cpu)
{
struct kvm_run *run = cpu->kvm_run;
@@ -2883,18 +3068,20 @@ int kvm_cpu_exec(CPUState *cpu)
ret = EXCP_INTERRUPT;
break;
}
-fprintf(stderr, "error: kvm run failed %s\n",
-        strerror(-run_ret));
+if (!(run_ret == -EFAULT && run->exit_reason == KVM_EXIT_MEMORY_FAULT)) {
+    fprintf(stderr, "error: kvm run failed %s\n",
+            strerror(-run_ret));
#ifdef TARGET_PPC
-if (run_ret == -EBUSY) {
-    fprintf(stderr,
-            "This is probably because your SMT is enabled.\n"
-            "VCPU can only run on primary threads with all "
-            "secondary threads offline.\n");
-}
+    if (run_ret == -EBUSY) {
+        fprintf(stderr,
+                "This is probably because your SMT is enabled.\n"
+                "VCPU can only run on primary threads with all "
+                "secondary threads offline.\n");
+    }
#endif
-ret = -1;
-break;
+    ret = -1;
+    break;
+}
}
trace_kvm_run_exit(cpu->cpu_index, run->exit_reason);
@@ -2981,6 +3168,16 @@ int kvm_cpu_exec(CPUState *cpu)
break;
}
break;
case KVM_EXIT_MEMORY_FAULT:
if (run->memory_fault.flags & ~KVM_MEMORY_EXIT_FLAG_PRIVATE) {
error_report("KVM_EXIT_MEMORY_FAULT: Unknown flag 0x%" PRIx64,
(uint64_t)run->memory_fault.flags);
ret = -1;
break;
}
ret = kvm_convert_memory(run->memory_fault.gpa, run->memory_fault.size,
run->memory_fault.flags & KVM_MEMORY_EXIT_FLAG_PRIVATE);
break;
default:
DPRINTF("kvm_arch_handle_exit\n");
ret = kvm_arch_handle_exit(cpu, run);
@@ -4077,3 +4274,24 @@ void query_stats_schemas_cb(StatsSchemaList **result, Error **errp)
query_stats_schema_vcpu(first_cpu, &stats_args);
}
}
int kvm_create_guest_memfd(uint64_t size, uint64_t flags, Error **errp)
{
int fd;
struct kvm_create_guest_memfd guest_memfd = {
.size = size,
.flags = flags,
};
if (!kvm_guest_memfd_supported) {
error_setg(errp, "KVM doesn't support guest memfd");
return -EOPNOTSUPP;
}
fd = kvm_vm_ioctl(kvm_state, KVM_CREATE_GUEST_MEMFD, &guest_memfd);
if (fd < 0) {
error_setg_errno(errp, errno, "%s: error creating kvm guest memfd", __func__);
}
return fd;
}

View File

@@ -15,7 +15,7 @@ kvm_irqchip_update_msi_route(int virq) "Updating MSI route virq=%d"
kvm_irqchip_release_virq(int virq) "virq %d"
kvm_set_ioeventfd_mmio(int fd, uint64_t addr, uint32_t val, bool assign, uint32_t size, bool datamatch) "fd: %d @0x%" PRIx64 " val=0x%x assign: %d size: %d match: %d"
kvm_set_ioeventfd_pio(int fd, uint16_t addr, uint32_t val, bool assign, uint32_t size, bool datamatch) "fd: %d @0x%x val=0x%x assign: %d size: %d match: %d"
-kvm_set_user_memory(uint32_t slot, uint32_t flags, uint64_t guest_phys_addr, uint64_t memory_size, uint64_t userspace_addr, int ret) "Slot#%d flags=0x%x gpa=0x%"PRIx64 " size=0x%"PRIx64 " ua=0x%"PRIx64 " ret=%d"
+kvm_set_user_memory(uint16_t as, uint16_t slot, uint32_t flags, uint64_t guest_phys_addr, uint64_t memory_size, uint64_t userspace_addr, uint32_t fd, uint64_t fd_offset, int ret) "AddrSpace#%d Slot#%d flags=0x%x gpa=0x%"PRIx64 " size=0x%"PRIx64 " ua=0x%"PRIx64 " guest_memfd=%d" " guest_memfd_offset=0x%" PRIx64 " ret=%d"
kvm_clear_dirty_log(uint32_t slot, uint64_t start, uint32_t size) "slot#%"PRId32" start 0x%"PRIx64" size 0x%"PRIx32
kvm_resample_fd_notify(int gsi) "gsi %d"
kvm_dirty_ring_full(int id) "vcpu %d"
@@ -25,4 +25,4 @@ kvm_dirty_ring_reaper(const char *s) "%s"
kvm_dirty_ring_reap(uint64_t count, int64_t t) "reaped %"PRIu64" pages (took %"PRIi64" us)"
kvm_dirty_ring_reaper_kick(const char *reason) "%s"
kvm_dirty_ring_flush(int finished) "%d"
kvm_convert_memory(uint64_t start, uint64_t size, const char *msg) "start 0x%" PRIx64 " size 0x%" PRIx64 " %s"

View File

@@ -84,6 +84,7 @@ file_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
ram_flags |= fb->readonly ? RAM_READONLY_FD : 0;
ram_flags |= fb->rom == ON_OFF_AUTO_ON ? RAM_READONLY : 0;
ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
ram_flags |= backend->require_guest_memfd ? RAM_GUEST_MEMFD : 0;
ram_flags |= fb->is_pmem ? RAM_PMEM : 0;
ram_flags |= RAM_NAMED_FILE;
memory_region_init_ram_from_file(&backend->mr, OBJECT(backend), name,

View File

@@ -55,6 +55,7 @@ memfd_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
name = host_memory_backend_get_name(backend);
ram_flags = backend->share ? RAM_SHARED : 0;
ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
ram_flags |= backend->require_guest_memfd ? RAM_GUEST_MEMFD : 0;
memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend), name,
backend->size, ram_flags, fd, 0, errp);
g_free(name);

View File

@@ -30,6 +30,7 @@ ram_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
name = host_memory_backend_get_name(backend);
ram_flags = backend->share ? RAM_SHARED : 0;
ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
ram_flags |= backend->require_guest_memfd ? RAM_GUEST_MEMFD : 0;
memory_region_init_ram_flags_nomigrate(&backend->mr, OBJECT(backend), name,
backend->size, ram_flags, errp);
g_free(name);

View File

@@ -279,6 +279,7 @@ static void host_memory_backend_init(Object *obj)
/* TODO: convert access to globals to compat properties */
backend->merge = machine_mem_merge(machine);
backend->dump = machine_dump_guest_core(machine);
backend->require_guest_memfd = machine_require_guest_memfd(machine);
backend->reserve = true;
backend->prealloc_threads = machine->smp.cpus;
}

View File

@@ -18,6 +18,7 @@
#CONFIG_QXL=n
#CONFIG_SEV=n
#CONFIG_SGA=n
#CONFIG_TDX=n
#CONFIG_TEST_DEVICES=n
#CONFIG_TPM_CRB=n
#CONFIG_TPM_TIS_ISA=n

View File

@@ -38,6 +38,7 @@ Supported mechanisms
Currently supported confidential guest mechanisms are:
* AMD Secure Encrypted Virtualization (SEV) (see :doc:`i386/amd-memory-encryption`)
* Intel Trust Domain Extension (TDX) (see :doc:`i386/tdx`)
* POWER Protected Execution Facility (PEF) (see :ref:`power-papr-protected-execution-facility-pef`)
* s390x Protected Virtualization (PV) (see :doc:`s390x/protvirt`)

113
docs/system/i386/tdx.rst Normal file
View File

@@ -0,0 +1,113 @@
Intel Trusted Domain eXtension (TDX)
====================================
Intel Trusted Domain eXtensions (TDX) refers to an Intel technology that extends
Virtual Machine Extensions (VMX) and Multi-Key Total Memory Encryption (MKTME)
with a new kind of virtual machine guest called a Trust Domain (TD). A TD runs
in a CPU mode that is designed to protect the confidentiality of its memory
contents and its CPU state from any other software, including the hosting
Virtual Machine Monitor (VMM), unless explicitly shared by the TD itself.
Prerequisites
-------------
To run a TD, the physical machine needs to have the TDX module loaded and
initialized, and the KVM hypervisor must support TDX and have it enabled. If
those requirements are met, ``KVM_CAP_VM_TYPES`` will report support for
``KVM_X86_TDX_VM``.
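For illustration only, a minimal user-space probe of this capability might look
as follows (the helper name and direct ``/dev/kvm`` access are assumptions, not
part of this series)::

    #include <fcntl.h>
    #include <stdbool.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* KVM_CAP_VM_TYPES returns a bitmask of the supported KVM_X86_*_VM types. */
    static bool host_supports_tdx_vm(void)
    {
        int kvm_fd = open("/dev/kvm", O_RDWR);
        int types;

        if (kvm_fd < 0) {
            return false;
        }
        types = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_VM_TYPES);
        close(kvm_fd);
        return types > 0 && (types & (1 << KVM_X86_TDX_VM));
    }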
Trust Domain Virtual Firmware (TDVF)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Trust Domain Virtual Firmware (TDVF) is required to provide TD services and to
boot the TD guest OS. TDVF needs to be copied to guest private memory and
measured before a TD boots.
The VM scope ``MEMORY_ENCRYPT_OP`` ioctl provides command ``KVM_TDX_INIT_MEM_REGION``
to copy the TDVF image to TD's private memory space.
Since TDX doesn't support read-only memslots, TDVF cannot be mapped as a pflash
device; it actually works as RAM. The ``-bios`` option is therefore used to
load TDVF.
OVMF is the open-source firmware that implements TDVF support. Thus the
command line option to specify and load TDVF is ``-bios OVMF.fd``.
KVM private gmem
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A TD's memory (RAM) needs to be convertible between private and shared, and its
BIOS (OVMF/TDVF) needs to be mapped as private. Thus QEMU needs to allocate
private gmem for them via KVM's ``KVM_CREATE_GUEST_MEMFD`` ioctl, which
requires a KVM new enough to report ``KVM_CAP_GUEST_MEMFD``.
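A minimal sketch of that allocation, assuming a valid VM file descriptor
``vm_fd`` and the ``struct kvm_create_guest_memfd`` definition added to the
uapi headers later in this series::

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Returns a guest_memfd backing 'size' bytes of private memory, or -1. */
    static int alloc_private_gmem(int vm_fd, uint64_t size)
    {
        struct kvm_create_guest_memfd args = {
            .size = size,  /* must cover the whole RAM region being backed */
            .flags = 0,    /* optionally KVM_GUEST_MEMFD_ALLOW_HUGEPAGE */
        };

        return ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &args);
    }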
Feature Control
---------------
Unlike a non-TDX VM, the CPU features (enumerated by CPUID or MSRs) of a TD are
not under full control of the VMM. The VMM can only configure a subset of a
TD's features via the ``KVM_TDX_INIT_VM`` command of the VM scope
``MEMORY_ENCRYPT_OP`` ioctl.
The configurable features fall into three categories:
- Attributes:
  - PKS (bit 30) controls whether Supervisor Protection Keys (PKS) is exposed
    to the TD, which determines the related CPUID bit and CR4 bit (see the
    sketch after this list);
  - PERFMON (bit 63) controls whether the PMU is exposed to the TD.
- XSAVE related features (XFAM):
  XFAM is a 64-bit mask, which has the same format as XCR0 or the IA32_XSS MSR.
  It determines the set of extended features available for use by the guest TD.
- CPUID features:
  Only some bits of some CPUID leaves are directly configurable by the VMM.
  Which features can be configured is reported via TDX capabilities.
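As a sketch of the attribute handling (bit positions taken from the list above;
the macro and helper names are hypothetical)::

    #include <stdbool.h>
    #include <stdint.h>

    #define TDX_TD_ATTRIBUTES_PKS      (1ULL << 30)
    #define TDX_TD_ATTRIBUTES_PERFMON  (1ULL << 63)

    /* Fold user-requested features into the TD attributes mask. */
    static uint64_t tdx_compose_attributes(bool pks, bool perfmon)
    {
        uint64_t attributes = 0;

        if (pks) {
            attributes |= TDX_TD_ATTRIBUTES_PKS;
        }
        if (perfmon) {
            attributes |= TDX_TD_ATTRIBUTES_PERFMON;
        }
        return attributes;
    }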
TDX capabilities
~~~~~~~~~~~~~~~~
The VM scope ``MEMORY_ENCRYPT_OP`` ioctl provides the command
``KVM_TDX_CAPABILITIES`` to get the TDX capabilities from KVM. It returns a
``struct kvm_tdx_capabilities``, which reports the supported configurations of
attributes, XFAM and CPUID leaves.
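A sketch of that query, assuming a valid ``vm_fd`` and the ``struct
kvm_tdx_cmd`` / ``struct kvm_tdx_capabilities`` definitions from the uapi
headers later in this series (the retry policy is an assumption)::

    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static struct kvm_tdx_capabilities *tdx_get_caps(int vm_fd, uint32_t nr)
    {
        struct kvm_tdx_capabilities *caps;
        struct kvm_tdx_cmd cmd = { .id = KVM_TDX_CAPABILITIES };

        /* The structure ends with a flexible array of nr cpuid_configs. */
        caps = calloc(1, sizeof(*caps) +
                         nr * sizeof(struct kvm_tdx_cpuid_config));
        caps->nr_cpuid_configs = nr;
        cmd.data = (uintptr_t)caps;

        if (ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd) < 0) {
            free(caps);   /* a larger nr may be needed on -E2BIG */
            return NULL;
        }
        return caps;
    }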
Launching a TD (TDX VM)
-----------------------
To launch a TDX guest, the following options are newly added and required:
.. parsed-literal::
|qemu_system_x86| \\
-object tdx-guest,id=tdx0 \\
-machine ...,kernel-irqchip=split,confidential-guest-support=tdx0 \\
-bios OVMF.fd \\
Debugging
---------
Bit 0 of the TD attributes is the DEBUG bit, which decides whether the TD runs
in off-TD debug mode. When in off-TD debug mode, a TD's VCPU state and private
memory are accessible via dedicated SEAMCALLs. This requires KVM to expose APIs
to invoke those SEAMCALLs and corresponding QEMU changes.
It's targeted as future work.
Restrictions
------------
- kernel-irqchip must be split;
- No read-only support for private memory;
- No SMM support: SMM support requires manipulating the guest register states,
  which is not allowed;
Live Migration
--------------
TODO
References
----------
- `TDX Homepage <https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html>`__

View File

@@ -29,6 +29,7 @@ Architectural features
i386/kvm-pv
i386/sgx
i386/amd-memory-encryption
i386/tdx
OS requirements
~~~~~~~~~~~~~~~

View File

@@ -1189,6 +1189,11 @@ bool machine_mem_merge(MachineState *machine)
return machine->mem_merge;
}
bool machine_require_guest_memfd(MachineState *machine)
{
return machine->require_guest_memfd;
}
static char *cpu_slot_to_string(const CPUArchId *cpu)
{
GString *s = g_string_new(NULL);

View File

@@ -10,6 +10,11 @@ config SGX
bool
depends on KVM
config TDX
bool
select X86_FW_OVMF
depends on KVM
config PC
bool
imply APPLESMC
@@ -26,6 +31,7 @@ config PC
imply QXL
imply SEV
imply SGX
imply TDX
imply TEST_DEVICES
imply TPM_CRB
imply TPM_TIS_ISA

View File

@@ -975,7 +975,8 @@ static void build_dbg_aml(Aml *table)
aml_append(table, scope);
}
static Aml *build_link_dev(const char *name, uint8_t uid, Aml *reg)
static Aml *build_link_dev(const char *name, uint8_t uid, Aml *reg,
bool level_trigger_unsupported)
{
Aml *dev;
Aml *crs;
@@ -987,7 +988,10 @@ static Aml *build_link_dev(const char *name, uint8_t uid, Aml *reg)
aml_append(dev, aml_name_decl("_UID", aml_int(uid)));
crs = aml_resource_template();
aml_append(crs, aml_interrupt(AML_CONSUMER, AML_LEVEL, AML_ACTIVE_HIGH,
aml_append(crs, aml_interrupt(AML_CONSUMER,
level_trigger_unsupported ?
AML_EDGE : AML_LEVEL,
AML_ACTIVE_HIGH,
AML_SHARED, irqs, ARRAY_SIZE(irqs)));
aml_append(dev, aml_name_decl("_PRS", crs));
@@ -1011,7 +1015,8 @@ static Aml *build_link_dev(const char *name, uint8_t uid, Aml *reg)
return dev;
}
static Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi)
static Aml *build_gsi_link_dev(const char *name, uint8_t uid,
uint8_t gsi, bool level_trigger_unsupported)
{
Aml *dev;
Aml *crs;
@@ -1024,7 +1029,10 @@ static Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi)
crs = aml_resource_template();
irqs = gsi;
aml_append(crs, aml_interrupt(AML_CONSUMER, AML_LEVEL, AML_ACTIVE_HIGH,
aml_append(crs, aml_interrupt(AML_CONSUMER,
level_trigger_unsupported ?
AML_EDGE : AML_LEVEL,
AML_ACTIVE_HIGH,
AML_SHARED, &irqs, 1));
aml_append(dev, aml_name_decl("_PRS", crs));
@@ -1043,7 +1051,7 @@ static Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi)
}
/* _CRS method - get current settings */
static Aml *build_iqcr_method(bool is_piix4)
static Aml *build_iqcr_method(bool is_piix4, bool level_trigger_unsupported)
{
Aml *if_ctx;
uint32_t irqs;
@@ -1051,7 +1059,9 @@ static Aml *build_iqcr_method(bool is_piix4)
Aml *crs = aml_resource_template();
irqs = 0;
aml_append(crs, aml_interrupt(AML_CONSUMER, AML_LEVEL,
aml_append(crs, aml_interrupt(AML_CONSUMER,
level_trigger_unsupported ?
AML_EDGE : AML_LEVEL,
AML_ACTIVE_HIGH, AML_SHARED, &irqs, 1));
aml_append(method, aml_name_decl("PRR0", crs));
@@ -1085,7 +1095,7 @@ static Aml *build_irq_status_method(void)
return method;
}
static void build_piix4_pci0_int(Aml *table)
static void build_piix4_pci0_int(Aml *table, bool level_trigger_unsupported)
{
Aml *dev;
Aml *crs;
@@ -1098,12 +1108,16 @@ static void build_piix4_pci0_int(Aml *table)
aml_append(sb_scope, pci0_scope);
aml_append(sb_scope, build_irq_status_method());
aml_append(sb_scope, build_iqcr_method(true));
aml_append(sb_scope, build_iqcr_method(true, level_trigger_unsupported));
aml_append(sb_scope, build_link_dev("LNKA", 0, aml_name("PRQ0")));
aml_append(sb_scope, build_link_dev("LNKB", 1, aml_name("PRQ1")));
aml_append(sb_scope, build_link_dev("LNKC", 2, aml_name("PRQ2")));
aml_append(sb_scope, build_link_dev("LNKD", 3, aml_name("PRQ3")));
aml_append(sb_scope, build_link_dev("LNKA", 0, aml_name("PRQ0"),
level_trigger_unsupported));
aml_append(sb_scope, build_link_dev("LNKB", 1, aml_name("PRQ1"),
level_trigger_unsupported));
aml_append(sb_scope, build_link_dev("LNKC", 2, aml_name("PRQ2"),
level_trigger_unsupported));
aml_append(sb_scope, build_link_dev("LNKD", 3, aml_name("PRQ3"),
level_trigger_unsupported));
dev = aml_device("LNKS");
{
@@ -1112,7 +1126,9 @@ static void build_piix4_pci0_int(Aml *table)
crs = aml_resource_template();
irqs = 9;
aml_append(crs, aml_interrupt(AML_CONSUMER, AML_LEVEL,
aml_append(crs, aml_interrupt(AML_CONSUMER,
level_trigger_unsupported ?
AML_EDGE : AML_LEVEL,
AML_ACTIVE_HIGH, AML_SHARED,
&irqs, 1));
aml_append(dev, aml_name_decl("_PRS", crs));
@@ -1198,7 +1214,7 @@ static Aml *build_q35_routing_table(const char *str)
return pkg;
}
static void build_q35_pci0_int(Aml *table)
static void build_q35_pci0_int(Aml *table, bool level_trigger_unsupported)
{
Aml *method;
Aml *sb_scope = aml_scope("_SB");
@@ -1237,25 +1253,41 @@ static void build_q35_pci0_int(Aml *table)
aml_append(sb_scope, pci0_scope);
aml_append(sb_scope, build_irq_status_method());
aml_append(sb_scope, build_iqcr_method(false));
aml_append(sb_scope, build_iqcr_method(false, level_trigger_unsupported));
aml_append(sb_scope, build_link_dev("LNKA", 0, aml_name("PRQA")));
aml_append(sb_scope, build_link_dev("LNKB", 1, aml_name("PRQB")));
aml_append(sb_scope, build_link_dev("LNKC", 2, aml_name("PRQC")));
aml_append(sb_scope, build_link_dev("LNKD", 3, aml_name("PRQD")));
aml_append(sb_scope, build_link_dev("LNKE", 4, aml_name("PRQE")));
aml_append(sb_scope, build_link_dev("LNKF", 5, aml_name("PRQF")));
aml_append(sb_scope, build_link_dev("LNKG", 6, aml_name("PRQG")));
aml_append(sb_scope, build_link_dev("LNKH", 7, aml_name("PRQH")));
aml_append(sb_scope, build_link_dev("LNKA", 0, aml_name("PRQA"),
level_trigger_unsupported));
aml_append(sb_scope, build_link_dev("LNKB", 1, aml_name("PRQB"),
level_trigger_unsupported));
aml_append(sb_scope, build_link_dev("LNKC", 2, aml_name("PRQC"),
level_trigger_unsupported));
aml_append(sb_scope, build_link_dev("LNKD", 3, aml_name("PRQD"),
level_trigger_unsupported));
aml_append(sb_scope, build_link_dev("LNKE", 4, aml_name("PRQE"),
level_trigger_unsupported));
aml_append(sb_scope, build_link_dev("LNKF", 5, aml_name("PRQF"),
level_trigger_unsupported));
aml_append(sb_scope, build_link_dev("LNKG", 6, aml_name("PRQG"),
level_trigger_unsupported));
aml_append(sb_scope, build_link_dev("LNKH", 7, aml_name("PRQH"),
level_trigger_unsupported));
aml_append(sb_scope, build_gsi_link_dev("GSIA", 0x10, 0x10));
aml_append(sb_scope, build_gsi_link_dev("GSIB", 0x11, 0x11));
aml_append(sb_scope, build_gsi_link_dev("GSIC", 0x12, 0x12));
aml_append(sb_scope, build_gsi_link_dev("GSID", 0x13, 0x13));
aml_append(sb_scope, build_gsi_link_dev("GSIE", 0x14, 0x14));
aml_append(sb_scope, build_gsi_link_dev("GSIF", 0x15, 0x15));
aml_append(sb_scope, build_gsi_link_dev("GSIG", 0x16, 0x16));
aml_append(sb_scope, build_gsi_link_dev("GSIH", 0x17, 0x17));
aml_append(sb_scope, build_gsi_link_dev("GSIA", 0x10, 0x10,
level_trigger_unsupported));
aml_append(sb_scope, build_gsi_link_dev("GSIB", 0x11, 0x11,
level_trigger_unsupported));
aml_append(sb_scope, build_gsi_link_dev("GSIC", 0x12, 0x12,
level_trigger_unsupported));
aml_append(sb_scope, build_gsi_link_dev("GSID", 0x13, 0x13,
level_trigger_unsupported));
aml_append(sb_scope, build_gsi_link_dev("GSIE", 0x14, 0x14,
level_trigger_unsupported));
aml_append(sb_scope, build_gsi_link_dev("GSIF", 0x15, 0x15,
level_trigger_unsupported));
aml_append(sb_scope, build_gsi_link_dev("GSIG", 0x16, 0x16,
level_trigger_unsupported));
aml_append(sb_scope, build_gsi_link_dev("GSIH", 0x17, 0x17,
level_trigger_unsupported));
aml_append(table, sb_scope);
}
@@ -1436,6 +1468,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
PCMachineState *pcms = PC_MACHINE(machine);
PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(machine);
X86MachineState *x86ms = X86_MACHINE(machine);
bool level_trigger_unsupported = x86ms->eoi_intercept_unsupported;
AcpiMcfgInfo mcfg;
bool mcfg_valid = !!acpi_get_mcfg(&mcfg);
uint32_t nr_mem = machine->ram_slots;
@@ -1468,7 +1501,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
if (pm->pcihp_bridge_en || pm->pcihp_root_en) {
build_x86_acpi_pci_hotplug(dsdt, pm->pcihp_io_base);
}
build_piix4_pci0_int(dsdt);
build_piix4_pci0_int(dsdt, level_trigger_unsupported);
} else if (q35) {
sb_scope = aml_scope("_SB");
dev = aml_device("PCI0");
@@ -1512,7 +1545,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
if (pm->pcihp_bridge_en) {
build_x86_acpi_pci_hotplug(dsdt, pm->pcihp_io_base);
}
build_q35_pci0_int(dsdt);
build_q35_pci0_int(dsdt, level_trigger_unsupported);
}
if (misc->has_hpet) {

View File

@@ -103,6 +103,7 @@ void acpi_build_madt(GArray *table_data, BIOSLinker *linker,
const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(MACHINE(x86ms));
AcpiTable table = { .sig = "APIC", .rev = 3, .oem_id = oem_id,
.oem_table_id = oem_table_id };
bool level_trigger_unsupported = x86ms->eoi_intercept_unsupported;
acpi_table_begin(&table, table_data);
/* Local APIC Address */
@@ -122,18 +123,43 @@ void acpi_build_madt(GArray *table_data, BIOSLinker *linker,
IO_APIC_SECONDARY_ADDRESS, IO_APIC_SECONDARY_IRQBASE);
}
if (x86ms->apic_xrupt_override) {
build_xrupt_override(table_data, 0, 2,
0 /* Flags: Conforms to the specifications of the bus */);
}
for (i = 1; i < 16; i++) {
if (!(x86ms->pci_irq_mask & (1 << i))) {
/* No need for a INT source override structure. */
continue;
if (level_trigger_unsupported) {
/* Force edge trigger */
if (x86ms->apic_xrupt_override) {
build_xrupt_override(table_data, 0, 2,
/* Flags: active high, edge triggered */
1 | (1 << 2));
}
for (i = x86ms->apic_xrupt_override ? 1 : 0; i < 16; i++) {
build_xrupt_override(table_data, i, i,
/* Flags: active high, edge triggered */
1 | (1 << 2));
}
if (x86ms->ioapic2) {
for (i = 0; i < 16; i++) {
build_xrupt_override(table_data, IO_APIC_SECONDARY_IRQBASE + i,
IO_APIC_SECONDARY_IRQBASE + i,
/* Flags: active high, edge triggered */
1 | (1 << 2));
}
}
} else {
if (x86ms->apic_xrupt_override) {
build_xrupt_override(table_data, 0, 2,
0 /* Flags: Conforms to the specifications of the bus */);
}
for (i = 1; i < 16; i++) {
if (!(x86ms->pci_irq_mask & (1 << i))) {
/* No need for a INT source override structure. */
continue;
}
build_xrupt_override(table_data, i, i,
0xd /* Flags: Active high, Level Triggered */);
}
build_xrupt_override(table_data, i, i,
0xd /* Flags: Active high, Level Triggered */);
}
if (x2apic_mode) {

View File

@@ -27,6 +27,7 @@ i386_ss.add(when: 'CONFIG_PC', if_true: files(
'port92.c'))
i386_ss.add(when: 'CONFIG_X86_FW_OVMF', if_true: files('pc_sysfw_ovmf.c'),
if_false: files('pc_sysfw_ovmf-stubs.c'))
i386_ss.add(when: 'CONFIG_TDX', if_true: files('tdvf.c', 'tdvf-hob.c'))
subdir('kvm')
subdir('xen')

View File

@@ -43,6 +43,7 @@
#include "sysemu/xen.h"
#include "sysemu/reset.h"
#include "kvm/kvm_i386.h"
#include "kvm/tdx.h"
#include "hw/xen/xen.h"
#include "qapi/qmp/qlist.h"
#include "qemu/error-report.h"
@@ -1036,16 +1037,18 @@ void pc_memory_init(PCMachineState *pcms,
/* Initialize PC system firmware */
pc_system_firmware_init(pcms, rom_memory);
option_rom_mr = g_malloc(sizeof(*option_rom_mr));
memory_region_init_ram(option_rom_mr, NULL, "pc.rom", PC_ROM_SIZE,
&error_fatal);
if (pcmc->pci_enabled) {
memory_region_set_readonly(option_rom_mr, true);
if (!is_tdx_vm()) {
option_rom_mr = g_malloc(sizeof(*option_rom_mr));
memory_region_init_ram(option_rom_mr, NULL, "pc.rom", PC_ROM_SIZE,
&error_fatal);
if (pcmc->pci_enabled) {
memory_region_set_readonly(option_rom_mr, true);
}
memory_region_add_subregion_overlap(rom_memory,
PC_ROM_MIN_VGA,
option_rom_mr,
1);
}
memory_region_add_subregion_overlap(rom_memory,
PC_ROM_MIN_VGA,
option_rom_mr,
1);
fw_cfg = fw_cfg_arch_create(machine,
x86ms->boot_cpus, x86ms->apic_id_limit);
@@ -1755,11 +1758,6 @@ static void pc_machine_initfn(Object *obj)
cxl_machine_init(obj, &pcms->cxl_devices_state);
}
int pc_machine_kvm_type(MachineState *machine, const char *kvm_type)
{
return 0;
}
static void pc_machine_reset(MachineState *machine, ShutdownCause reason)
{
CPUState *cs;

View File

@@ -236,6 +236,8 @@ static void pc_q35_init(MachineState *machine)
x86ms->above_4g_mem_size, NULL);
object_property_set_bool(phb, PCI_HOST_BYPASS_IOMMU,
pcms->default_bus_bypass_iommu, NULL);
object_property_set_bool(phb, PCI_HOST_PROP_SMM_RANGES,
x86_machine_is_smm_enabled(x86ms), NULL);
sysbus_realize_and_unref(SYS_BUS_DEVICE(phb), &error_fatal);
/* pci */

View File

@@ -37,6 +37,7 @@
#include "hw/block/flash.h"
#include "sysemu/kvm.h"
#include "sev.h"
#include "kvm/tdx.h"
#define FLASH_SECTOR_SIZE 4096
@@ -265,5 +266,11 @@ void x86_firmware_configure(void *ptr, int size)
}
sev_encrypt_flash(ptr, size, &error_fatal);
} else if (is_tdx_vm()) {
ret = tdx_parse_tdvf(ptr, size);
if (ret) {
error_report("failed to parse TDVF for TDX VM");
exit(1);
}
}
}

147
hw/i386/tdvf-hob.c Normal file
View File

@@ -0,0 +1,147 @@
/*
* SPDX-License-Identifier: GPL-2.0-or-later
* Copyright (c) 2020 Intel Corporation
* Author: Isaku Yamahata <isaku.yamahata at gmail.com>
* <isaku.yamahata at intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
* You should have received a copy of the GNU General Public License along
* with this program; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "qemu/log.h"
#include "qemu/error-report.h"
#include "e820_memory_layout.h"
#include "hw/i386/pc.h"
#include "hw/i386/x86.h"
#include "hw/pci/pcie_host.h"
#include "sysemu/kvm.h"
#include "standard-headers/uefi/uefi.h"
#include "tdvf-hob.h"
typedef struct TdvfHob {
hwaddr hob_addr;
void *ptr;
int size;
/* working area */
void *current;
void *end;
} TdvfHob;
static uint64_t tdvf_current_guest_addr(const TdvfHob *hob)
{
return hob->hob_addr + (hob->current - hob->ptr);
}
static void tdvf_align(TdvfHob *hob, size_t align)
{
hob->current = QEMU_ALIGN_PTR_UP(hob->current, align);
}
static void *tdvf_get_area(TdvfHob *hob, uint64_t size)
{
void *ret;
if (hob->current + size > hob->end) {
error_report("TD_HOB overrun, size = 0x%" PRIx64, size);
exit(1);
}
ret = hob->current;
hob->current += size;
tdvf_align(hob, 8);
return ret;
}
static void tdvf_hob_add_memory_resources(TdxGuest *tdx, TdvfHob *hob)
{
EFI_HOB_RESOURCE_DESCRIPTOR *region;
EFI_RESOURCE_ATTRIBUTE_TYPE attr;
EFI_RESOURCE_TYPE resource_type;
TdxRamEntry *e;
int i;
for (i = 0; i < tdx->nr_ram_entries; i++) {
e = &tdx->ram_entries[i];
if (e->type == TDX_RAM_UNACCEPTED) {
resource_type = EFI_RESOURCE_MEMORY_UNACCEPTED;
attr = EFI_RESOURCE_ATTRIBUTE_TDVF_UNACCEPTED;
} else if (e->type == TDX_RAM_ADDED) {
resource_type = EFI_RESOURCE_SYSTEM_MEMORY;
attr = EFI_RESOURCE_ATTRIBUTE_TDVF_PRIVATE;
} else {
error_report("unknown TDX_RAM_ENTRY type %d", e->type);
exit(1);
}
region = tdvf_get_area(hob, sizeof(*region));
*region = (EFI_HOB_RESOURCE_DESCRIPTOR) {
.Header = {
.HobType = EFI_HOB_TYPE_RESOURCE_DESCRIPTOR,
.HobLength = cpu_to_le16(sizeof(*region)),
.Reserved = cpu_to_le32(0),
},
.Owner = EFI_HOB_OWNER_ZERO,
.ResourceType = cpu_to_le32(resource_type),
.ResourceAttribute = cpu_to_le32(attr),
.PhysicalStart = cpu_to_le64(e->address),
.ResourceLength = cpu_to_le64(e->length),
};
}
}
void tdvf_hob_create(TdxGuest *tdx, TdxFirmwareEntry *td_hob)
{
TdvfHob hob = {
.hob_addr = td_hob->address,
.size = td_hob->size,
.ptr = td_hob->mem_ptr,
.current = td_hob->mem_ptr,
.end = td_hob->mem_ptr + td_hob->size,
};
EFI_HOB_GENERIC_HEADER *last_hob;
EFI_HOB_HANDOFF_INFO_TABLE *hit;
/* Note, Efi{Free}Memory{Bottom,Top} are ignored, leave 'em zeroed. */
hit = tdvf_get_area(&hob, sizeof(*hit));
*hit = (EFI_HOB_HANDOFF_INFO_TABLE) {
.Header = {
.HobType = EFI_HOB_TYPE_HANDOFF,
.HobLength = cpu_to_le16(sizeof(*hit)),
.Reserved = cpu_to_le32(0),
},
.Version = cpu_to_le32(EFI_HOB_HANDOFF_TABLE_VERSION),
.BootMode = cpu_to_le32(0),
.EfiMemoryTop = cpu_to_le64(0),
.EfiMemoryBottom = cpu_to_le64(0),
.EfiFreeMemoryTop = cpu_to_le64(0),
.EfiFreeMemoryBottom = cpu_to_le64(0),
.EfiEndOfHobList = cpu_to_le64(0), /* initialized later */
};
tdvf_hob_add_memory_resources(tdx, &hob);
last_hob = tdvf_get_area(&hob, sizeof(*last_hob));
*last_hob = (EFI_HOB_GENERIC_HEADER) {
.HobType = EFI_HOB_TYPE_END_OF_HOB_LIST,
.HobLength = cpu_to_le16(sizeof(*last_hob)),
.Reserved = cpu_to_le32(0),
};
hit->EfiEndOfHobList = tdvf_current_guest_addr(&hob);
}

24
hw/i386/tdvf-hob.h Normal file
View File

@@ -0,0 +1,24 @@
#ifndef HW_I386_TD_HOB_H
#define HW_I386_TD_HOB_H
#include "hw/i386/tdvf.h"
#include "target/i386/kvm/tdx.h"
void tdvf_hob_create(TdxGuest *tdx, TdxFirmwareEntry *td_hob);
#define EFI_RESOURCE_ATTRIBUTE_TDVF_PRIVATE \
(EFI_RESOURCE_ATTRIBUTE_PRESENT | \
EFI_RESOURCE_ATTRIBUTE_INITIALIZED | \
EFI_RESOURCE_ATTRIBUTE_TESTED)
#define EFI_RESOURCE_ATTRIBUTE_TDVF_UNACCEPTED \
(EFI_RESOURCE_ATTRIBUTE_PRESENT | \
EFI_RESOURCE_ATTRIBUTE_INITIALIZED | \
EFI_RESOURCE_ATTRIBUTE_TESTED)
#define EFI_RESOURCE_ATTRIBUTE_TDVF_MMIO \
(EFI_RESOURCE_ATTRIBUTE_PRESENT | \
EFI_RESOURCE_ATTRIBUTE_INITIALIZED | \
EFI_RESOURCE_ATTRIBUTE_UNCACHEABLE)
#endif

200
hw/i386/tdvf.c Normal file
View File

@@ -0,0 +1,200 @@
/*
* SPDX-License-Identifier: GPL-2.0-or-later
* Copyright (c) 2020 Intel Corporation
* Author: Isaku Yamahata <isaku.yamahata at gmail.com>
* <isaku.yamahata at intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
* You should have received a copy of the GNU General Public License along
* with this program; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "qemu/error-report.h"
#include "hw/i386/pc.h"
#include "hw/i386/tdvf.h"
#include "sysemu/kvm.h"
#define TDX_METADATA_OFFSET_GUID "e47a6535-984a-4798-865e-4685a7bf8ec2"
#define TDX_METADATA_VERSION 1
#define TDVF_SIGNATURE 0x46564454 /* TDVF as little endian */
typedef struct {
uint32_t DataOffset;
uint32_t RawDataSize;
uint64_t MemoryAddress;
uint64_t MemoryDataSize;
uint32_t Type;
uint32_t Attributes;
} TdvfSectionEntry;
typedef struct {
uint32_t Signature;
uint32_t Length;
uint32_t Version;
uint32_t NumberOfSectionEntries;
TdvfSectionEntry SectionEntries[];
} TdvfMetadata;
struct tdx_metadata_offset {
uint32_t offset;
};
static TdvfMetadata *tdvf_get_metadata(void *flash_ptr, int size)
{
TdvfMetadata *metadata;
uint32_t offset = 0;
uint8_t *data;
if ((uint32_t) size != size) {
return NULL;
}
if (pc_system_ovmf_table_find(TDX_METADATA_OFFSET_GUID, &data, NULL)) {
offset = size - le32_to_cpu(((struct tdx_metadata_offset *)data)->offset);
if (offset + sizeof(*metadata) > size) {
return NULL;
}
} else {
error_report("Cannot find TDX_METADATA_OFFSET_GUID");
return NULL;
}
metadata = flash_ptr + offset;
/* Finally, verify the signature to determine if this is a TDVF image. */
metadata->Signature = le32_to_cpu(metadata->Signature);
if (metadata->Signature != TDVF_SIGNATURE) {
error_report("Invalid TDVF signature in metadata!");
return NULL;
}
/* Sanity check that the TDVF doesn't overlap its own metadata. */
metadata->Length = le32_to_cpu(metadata->Length);
if (offset + metadata->Length > size) {
return NULL;
}
/* Only version 1 is supported/defined. */
metadata->Version = le32_to_cpu(metadata->Version);
if (metadata->Version != TDX_METADATA_VERSION) {
return NULL;
}
return metadata;
}
static int tdvf_parse_and_check_section_entry(const TdvfSectionEntry *src,
TdxFirmwareEntry *entry)
{
entry->data_offset = le32_to_cpu(src->DataOffset);
entry->data_len = le32_to_cpu(src->RawDataSize);
entry->address = le64_to_cpu(src->MemoryAddress);
entry->size = le64_to_cpu(src->MemoryDataSize);
entry->type = le32_to_cpu(src->Type);
entry->attributes = le32_to_cpu(src->Attributes);
/* sanity check */
if (entry->size < entry->data_len) {
error_report("Broken metadata RawDataSize 0x%x MemoryDataSize 0x%lx",
entry->data_len, entry->size);
return -1;
}
if (!QEMU_IS_ALIGNED(entry->address, TARGET_PAGE_SIZE)) {
error_report("MemoryAddress 0x%lx not page aligned", entry->address);
return -1;
}
if (!QEMU_IS_ALIGNED(entry->size, TARGET_PAGE_SIZE)) {
error_report("MemoryDataSize 0x%lx not page aligned", entry->size);
return -1;
}
switch (entry->type) {
case TDVF_SECTION_TYPE_BFV:
case TDVF_SECTION_TYPE_CFV:
/* The sections that must be copied from firmware image to TD memory */
if (entry->data_len == 0) {
error_report("%d section with RawDataSize == 0", entry->type);
return -1;
}
break;
case TDVF_SECTION_TYPE_TD_HOB:
case TDVF_SECTION_TYPE_TEMP_MEM:
/* Sections that need not be copied from the firmware image */
if (entry->data_len != 0) {
error_report("%d section with RawDataSize 0x%x != 0",
entry->type, entry->data_len);
return -1;
}
break;
default:
error_report("TDVF contains unsupported section type %d", entry->type);
return -1;
}
return 0;
}
int tdvf_parse_metadata(TdxFirmware *fw, void *flash_ptr, int size)
{
TdvfSectionEntry *sections;
TdvfMetadata *metadata;
ssize_t entries_size;
uint32_t len, i;
metadata = tdvf_get_metadata(flash_ptr, size);
if (!metadata) {
return -EINVAL;
}
/* Load and parse the metadata section entries. */
fw->nr_entries = le32_to_cpu(metadata->NumberOfSectionEntries);
if (fw->nr_entries < 2) {
error_report("Invalid number of fw entries (%u) in TDVF", fw->nr_entries);
return -EINVAL;
}
len = le32_to_cpu(metadata->Length);
entries_size = fw->nr_entries * sizeof(TdvfSectionEntry);
if (len != sizeof(*metadata) + entries_size) {
error_report("TDVF metadata len (0x%x) mismatch, expected (0x%x)",
len, (uint32_t)(sizeof(*metadata) + entries_size));
return -EINVAL;
}
fw->entries = g_new(TdxFirmwareEntry, fw->nr_entries);
sections = g_new(TdvfSectionEntry, fw->nr_entries);
/* memcpy() cannot fail; the entries were length-checked against the image. */
memcpy(sections, (void *)metadata + sizeof(*metadata), entries_size);
for (i = 0; i < fw->nr_entries; i++) {
if (tdvf_parse_and_check_section_entry(&sections[i], &fw->entries[i])) {
goto err;
}
}
g_free(sections);
fw->mem_ptr = flash_ptr;
return 0;
err:
g_free(sections);
g_free(fw->entries);
fw->entries = NULL;
return -EINVAL;
}

View File

@@ -47,6 +47,7 @@
#include "hw/intc/i8259.h"
#include "hw/rtc/mc146818rtc.h"
#include "target/i386/sev.h"
#include "kvm/tdx.h"
#include "hw/acpi/cpu_hotplug.h"
#include "hw/irq.h"
@@ -1145,9 +1146,17 @@ void x86_bios_rom_init(MachineState *ms, const char *default_firmware,
(bios_size % 65536) != 0) {
goto bios_error;
}
bios = g_malloc(sizeof(*bios));
memory_region_init_ram(bios, NULL, "pc.bios", bios_size, &error_fatal);
if (sev_enabled()) {
if (is_tdx_vm()) {
memory_region_init_ram_guest_memfd(bios, NULL, "pc.bios", bios_size,
&error_fatal);
tdx_set_tdvf_region(bios);
} else {
memory_region_init_ram(bios, NULL, "pc.bios", bios_size, &error_fatal);
}
if (sev_enabled() || is_tdx_vm()) {
/*
* The concept of a "reset" simply doesn't exist for
* confidential computing guests, we have to destroy and
@@ -1169,17 +1178,20 @@ void x86_bios_rom_init(MachineState *ms, const char *default_firmware,
}
g_free(filename);
/* map the last 128KB of the BIOS in ISA space */
isa_bios_size = MIN(bios_size, 128 * KiB);
isa_bios = g_malloc(sizeof(*isa_bios));
memory_region_init_alias(isa_bios, NULL, "isa-bios", bios,
bios_size - isa_bios_size, isa_bios_size);
memory_region_add_subregion_overlap(rom_memory,
0x100000 - isa_bios_size,
isa_bios,
1);
if (!isapc_ram_fw) {
memory_region_set_readonly(isa_bios, true);
/* For TDX, aliasing different GPAs to the same private memory is not supported */
if (!is_tdx_vm()) {
/* map the last 128KB of the BIOS in ISA space */
isa_bios_size = MIN(bios_size, 128 * KiB);
isa_bios = g_malloc(sizeof(*isa_bios));
memory_region_init_alias(isa_bios, NULL, "isa-bios", bios,
bios_size - isa_bios_size, isa_bios_size);
memory_region_add_subregion_overlap(rom_memory,
0x100000 - isa_bios_size,
isa_bios,
1);
if (!isapc_ram_fw) {
memory_region_set_readonly(isa_bios, true);
}
}
/* map all the bios at the top of memory */
@@ -1377,6 +1389,17 @@ static void machine_set_sgx_epc(Object *obj, Visitor *v, const char *name,
qapi_free_SgxEPCList(list);
}
static int x86_kvm_type(MachineState *ms, const char *vm_type)
{
X86MachineState *x86ms = X86_MACHINE(ms);
int kvm_type;
kvm_type = kvm_get_vm_type(ms, vm_type);
x86ms->vm_type = kvm_type;
return kvm_type;
}
static void x86_machine_initfn(Object *obj)
{
X86MachineState *x86ms = X86_MACHINE(obj);
@@ -1390,6 +1413,7 @@ static void x86_machine_initfn(Object *obj)
x86ms->oem_table_id = g_strndup(ACPI_BUILD_APPNAME8, 8);
x86ms->bus_lock_ratelimit = 0;
x86ms->above_4g_mem_start = 4 * GiB;
x86ms->eoi_intercept_unsupported = false;
}
static void x86_machine_class_init(ObjectClass *oc, void *data)
@@ -1401,6 +1425,7 @@ static void x86_machine_class_init(ObjectClass *oc, void *data)
mc->cpu_index_to_instance_props = x86_cpu_index_to_props;
mc->get_default_cpu_node_id = x86_get_default_cpu_node_id;
mc->possible_cpu_arch_ids = x86_possible_cpu_arch_ids;
mc->kvm_type = x86_kvm_type;
x86mc->save_tsc_khz = true;
x86mc->fwcfg_dma_enabled = true;
nc->nmi_monitor_handler = x86_nmi;

View File

@@ -179,6 +179,8 @@ static Property q35_host_props[] = {
mch.below_4g_mem_size, 0),
DEFINE_PROP_SIZE(PCI_HOST_ABOVE_4G_MEM_SIZE, Q35PCIHost,
mch.above_4g_mem_size, 0),
DEFINE_PROP_BOOL(PCI_HOST_PROP_SMM_RANGES, Q35PCIHost,
mch.has_smm_ranges, true),
DEFINE_PROP_BOOL("x-pci-hole64-fix", Q35PCIHost, pci_hole64_fix, true),
DEFINE_PROP_END_OF_LIST(),
};
@@ -214,6 +216,7 @@ static void q35_host_initfn(Object *obj)
/* mch's object_initialize resets the default value, set it again */
qdev_prop_set_uint64(DEVICE(s), PCI_HOST_PROP_PCI_HOLE64_SIZE,
Q35_PCI_HOST_HOLE64_SIZE_DEFAULT);
object_property_add(obj, PCI_HOST_PROP_PCI_HOLE_START, "uint32",
q35_host_get_pci_hole_start,
NULL, NULL, NULL);
@@ -476,6 +479,10 @@ static void mch_write_config(PCIDevice *d,
mch_update_pciexbar(mch);
}
if (!mch->has_smm_ranges) {
return;
}
if (ranges_overlap(address, len, MCH_HOST_BRIDGE_SMRAM,
MCH_HOST_BRIDGE_SMRAM_SIZE)) {
mch_update_smram(mch);
@@ -494,10 +501,13 @@ static void mch_write_config(PCIDevice *d,
static void mch_update(MCHPCIState *mch)
{
mch_update_pciexbar(mch);
mch_update_pam(mch);
mch_update_smram(mch);
mch_update_ext_tseg_mbytes(mch);
mch_update_smbase_smram(mch);
if (mch->has_smm_ranges) {
mch_update_smram(mch);
mch_update_ext_tseg_mbytes(mch);
mch_update_smbase_smram(mch);
}
/*
* pci hole goes from end-of-low-ram to io-apic.
@@ -538,19 +548,21 @@ static void mch_reset(DeviceState *qdev)
pci_set_quad(d->config + MCH_HOST_BRIDGE_PCIEXBAR,
MCH_HOST_BRIDGE_PCIEXBAR_DEFAULT);
d->config[MCH_HOST_BRIDGE_SMRAM] = MCH_HOST_BRIDGE_SMRAM_DEFAULT;
d->config[MCH_HOST_BRIDGE_ESMRAMC] = MCH_HOST_BRIDGE_ESMRAMC_DEFAULT;
d->wmask[MCH_HOST_BRIDGE_SMRAM] = MCH_HOST_BRIDGE_SMRAM_WMASK;
d->wmask[MCH_HOST_BRIDGE_ESMRAMC] = MCH_HOST_BRIDGE_ESMRAMC_WMASK;
if (mch->has_smm_ranges) {
d->config[MCH_HOST_BRIDGE_SMRAM] = MCH_HOST_BRIDGE_SMRAM_DEFAULT;
d->config[MCH_HOST_BRIDGE_ESMRAMC] = MCH_HOST_BRIDGE_ESMRAMC_DEFAULT;
d->wmask[MCH_HOST_BRIDGE_SMRAM] = MCH_HOST_BRIDGE_SMRAM_WMASK;
d->wmask[MCH_HOST_BRIDGE_ESMRAMC] = MCH_HOST_BRIDGE_ESMRAMC_WMASK;
if (mch->ext_tseg_mbytes > 0) {
pci_set_word(d->config + MCH_HOST_BRIDGE_EXT_TSEG_MBYTES,
MCH_HOST_BRIDGE_EXT_TSEG_MBYTES_QUERY);
if (mch->ext_tseg_mbytes > 0) {
pci_set_word(d->config + MCH_HOST_BRIDGE_EXT_TSEG_MBYTES,
MCH_HOST_BRIDGE_EXT_TSEG_MBYTES_QUERY);
}
d->config[MCH_HOST_BRIDGE_F_SMBASE] = 0;
d->wmask[MCH_HOST_BRIDGE_F_SMBASE] = 0xff;
}
d->config[MCH_HOST_BRIDGE_F_SMBASE] = 0;
d->wmask[MCH_HOST_BRIDGE_F_SMBASE] = 0xff;
mch_update(mch);
}
@@ -568,6 +580,20 @@ static void mch_realize(PCIDevice *d, Error **errp)
/* setup pci memory mapping */
pc_pci_as_mapping_init(mch->system_memory, mch->pci_address_space);
/* PAM */
init_pam(&mch->pam_regions[0], OBJECT(mch), mch->ram_memory,
mch->system_memory, mch->pci_address_space,
PAM_BIOS_BASE, PAM_BIOS_SIZE);
for (i = 0; i < ARRAY_SIZE(mch->pam_regions) - 1; ++i) {
init_pam(&mch->pam_regions[i + 1], OBJECT(mch), mch->ram_memory,
mch->system_memory, mch->pci_address_space,
PAM_EXPAN_BASE + i * PAM_EXPAN_SIZE, PAM_EXPAN_SIZE);
}
if (!mch->has_smm_ranges) {
return;
}
/* if *disabled* show SMRAM to all CPUs */
memory_region_init_alias(&mch->smram_region, OBJECT(mch), "smram-region",
mch->pci_address_space, MCH_HOST_BRIDGE_SMRAM_C_BASE,
@@ -634,15 +660,6 @@ static void mch_realize(PCIDevice *d, Error **errp)
object_property_add_const_link(qdev_get_machine(), "smram",
OBJECT(&mch->smram));
init_pam(&mch->pam_regions[0], OBJECT(mch), mch->ram_memory,
mch->system_memory, mch->pci_address_space,
PAM_BIOS_BASE, PAM_BIOS_SIZE);
for (i = 0; i < ARRAY_SIZE(mch->pam_regions) - 1; ++i) {
init_pam(&mch->pam_regions[i + 1], OBJECT(mch), mch->ram_memory,
mch->system_memory, mch->pci_address_space,
PAM_EXPAN_BASE + i * PAM_EXPAN_SIZE, PAM_EXPAN_SIZE);
}
}
uint64_t mch_mcfg_base(void)

View File

@@ -175,6 +175,8 @@ typedef int (RAMBlockIterFunc)(RAMBlock *rb, void *opaque);
int qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque);
int ram_block_discard_range(RAMBlock *rb, uint64_t start, size_t length);
int ram_block_convert_range(RAMBlock *rb, uint64_t start, size_t length,
bool shared_to_private);
#endif

View File

@@ -243,6 +243,12 @@ typedef struct IOMMUTLBEvent {
/* RAM FD is opened read-only */
#define RAM_READONLY_FD (1 << 11)
/* RAM can be private, backed by a KVM guest memfd */
#define RAM_GUEST_MEMFD (1 << 12)
/* RAM is private by default */
#define RAM_DEFAULT_PRIVATE (1 << 13)
static inline void iommu_notifier_init(IOMMUNotifier *n, IOMMUNotify fn,
IOMMUNotifierFlag flags,
hwaddr start, hwaddr end,
@@ -844,6 +850,7 @@ struct IOMMUMemoryRegion {
#define MEMORY_LISTENER_PRIORITY_MIN 0
#define MEMORY_LISTENER_PRIORITY_ACCEL 10
#define MEMORY_LISTENER_PRIORITY_DEV_BACKEND 10
#define MEMORY_LISTENER_PRIORITY_ACCEL_HIGH 20
/**
* struct MemoryListener: callbacks structure for updates to the physical memory map
@@ -1583,6 +1590,12 @@ void memory_region_init_ram(MemoryRegion *mr,
uint64_t size,
Error **errp);
void memory_region_init_ram_guest_memfd(MemoryRegion *mr,
Object *owner,
const char *name,
uint64_t size,
Error **errp);
/**
* memory_region_init_rom: Initialize a ROM memory region.
*
@@ -1702,6 +1715,19 @@ static inline bool memory_region_is_romd(MemoryRegion *mr)
*/
bool memory_region_is_protected(MemoryRegion *mr);
/**
* memory_region_has_guest_memfd: check whether a memory region has guest_memfd
* associated
*
* Returns %true if a memory region's ram_block has valid guest_memfd assigned.
*
* @mr: the memory region being queried
*/
bool memory_region_has_guest_memfd(MemoryRegion *mr);
void memory_region_set_default_private(MemoryRegion *mr);
bool memory_region_is_default_private(MemoryRegion *mr);
/**
* memory_region_get_iommu: check whether a memory region is an iommu
*

View File

@@ -41,6 +41,7 @@ struct RAMBlock {
QLIST_HEAD(, RAMBlockNotifier) ramblock_notifiers;
int fd;
uint64_t fd_offset;
int guest_memfd;
size_t page_size;
/* dirty bitmap used during migration */
unsigned long *bmap;

View File

@@ -30,6 +30,7 @@ bool machine_usb(MachineState *machine);
int machine_phandle_start(MachineState *machine);
bool machine_dump_guest_core(MachineState *machine);
bool machine_mem_merge(MachineState *machine);
bool machine_require_guest_memfd(MachineState *machine);
HotpluggableCPUList *machine_query_hotpluggable_cpus(MachineState *machine);
void machine_set_cpu_numa_node(MachineState *machine,
const CpuInstanceProperties *props,
@@ -364,6 +365,7 @@ struct MachineState {
char *dt_compatible;
bool dump_guest_core;
bool mem_merge;
bool require_guest_memfd;
bool usb;
bool usb_disabled;
char *firmware;

View File

@@ -165,6 +165,7 @@ void pc_guest_info_init(PCMachineState *pcms);
#define PCI_HOST_PROP_PCI_HOLE64_SIZE "pci-hole64-size"
#define PCI_HOST_BELOW_4G_MEM_SIZE "below-4g-mem-size"
#define PCI_HOST_ABOVE_4G_MEM_SIZE "above-4g-mem-size"
#define PCI_HOST_PROP_SMM_RANGES "smm-ranges"
void pc_pci_as_mapping_init(MemoryRegion *system_memory,
@@ -309,15 +310,12 @@ extern const size_t pc_compat_1_5_len;
extern GlobalProperty pc_compat_1_4[];
extern const size_t pc_compat_1_4_len;
int pc_machine_kvm_type(MachineState *machine, const char *vm_type);
#define DEFINE_PC_MACHINE(suffix, namestr, initfn, optsfn) \
static void pc_machine_##suffix##_class_init(ObjectClass *oc, void *data) \
{ \
MachineClass *mc = MACHINE_CLASS(oc); \
optsfn(mc); \
mc->init = initfn; \
mc->kvm_type = pc_machine_kvm_type; \
} \
static const TypeInfo pc_machine_type_##suffix = { \
.name = namestr TYPE_MACHINE_SUFFIX, \

58
include/hw/i386/tdvf.h Normal file
View File

@@ -0,0 +1,58 @@
/*
* SPDX-License-Identifier: GPL-2.0-or-later
* Copyright (c) 2020 Intel Corporation
* Author: Isaku Yamahata <isaku.yamahata at gmail.com>
* <isaku.yamahata at intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
* You should have received a copy of the GNU General Public License along
* with this program; if not, see <http://www.gnu.org/licenses/>.
*/
#ifndef HW_I386_TDVF_H
#define HW_I386_TDVF_H
#include "qemu/osdep.h"
#define TDVF_SECTION_TYPE_BFV 0
#define TDVF_SECTION_TYPE_CFV 1
#define TDVF_SECTION_TYPE_TD_HOB 2
#define TDVF_SECTION_TYPE_TEMP_MEM 3
#define TDVF_SECTION_ATTRIBUTES_MR_EXTEND (1U << 0)
#define TDVF_SECTION_ATTRIBUTES_PAGE_AUG (1U << 1)
typedef struct TdxFirmwareEntry {
uint32_t data_offset;
uint32_t data_len;
uint64_t address;
uint64_t size;
uint32_t type;
uint32_t attributes;
void *mem_ptr;
} TdxFirmwareEntry;
typedef struct TdxFirmware {
void *mem_ptr;
uint32_t nr_entries;
TdxFirmwareEntry *entries;
} TdxFirmware;
#define for_each_tdx_fw_entry(fw, e) \
for (e = (fw)->entries; e != (fw)->entries + (fw)->nr_entries; e++)
int tdvf_parse_metadata(TdxFirmware *fw, void *flash_ptr, int size);
#endif /* HW_I386_TDVF_H */
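As a usage illustration (a hypothetical caller, not part of this series), the
iterator macro above walks every parsed section after a successful
tdvf_parse_metadata():

/* Hypothetical caller sketch, assuming 'fw' was filled by tdvf_parse_metadata(). */
TdxFirmwareEntry *entry;

for_each_tdx_fw_entry(&fw, entry) {
    if (entry->type == TDVF_SECTION_TYPE_BFV ||
        entry->type == TDVF_SECTION_TYPE_CFV) {
        /* BFV/CFV sections carry data_len bytes to copy into TD memory. */
    }
}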

View File

@@ -41,6 +41,7 @@ struct X86MachineState {
MachineState parent;
/*< public >*/
unsigned int vm_type;
/* Pointers to devices and objects: */
ISADevice *rtc;
@@ -58,6 +59,7 @@ struct X86MachineState {
/* CPU and apic information: */
bool apic_xrupt_override;
bool eoi_intercept_unsupported;
unsigned pci_irq_mask;
unsigned apic_id_limit;
uint16_t boot_cpus;

View File

@@ -50,6 +50,7 @@ struct MCHPCIState {
MemoryRegion tseg_blackhole, tseg_window;
MemoryRegion smbase_blackhole, smbase_window;
bool has_smram_at_smbase;
bool has_smm_ranges;
Range pci_hole;
uint64_t below_4g_mem_size;
uint64_t above_4g_mem_size;

View File

@@ -0,0 +1,198 @@
/*
* Copyright (C) 2020 Intel Corporation
*
* Author: Isaku Yamahata <isaku.yamahata at gmail.com>
* <isaku.yamahata at intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
* You should have received a copy of the GNU General Public License along
* with this program; if not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef HW_I386_UEFI_H
#define HW_I386_UEFI_H
/***************************************************************************/
/*
* basic EFI definitions
* supplemented with UEFI Specification Version 2.8 (Errata A)
* released February 2020
*/
/* UEFI integer is little endian */
typedef struct {
uint32_t Data1;
uint16_t Data2;
uint16_t Data3;
uint8_t Data4[8];
} EFI_GUID;
typedef enum {
EfiReservedMemoryType,
EfiLoaderCode,
EfiLoaderData,
EfiBootServicesCode,
EfiBootServicesData,
EfiRuntimeServicesCode,
EfiRuntimeServicesData,
EfiConventionalMemory,
EfiUnusableMemory,
EfiACPIReclaimMemory,
EfiACPIMemoryNVS,
EfiMemoryMappedIO,
EfiMemoryMappedIOPortSpace,
EfiPalCode,
EfiPersistentMemory,
EfiUnacceptedMemoryType,
EfiMaxMemoryType
} EFI_MEMORY_TYPE;
#define EFI_HOB_HANDOFF_TABLE_VERSION 0x0009
#define EFI_HOB_TYPE_HANDOFF 0x0001
#define EFI_HOB_TYPE_MEMORY_ALLOCATION 0x0002
#define EFI_HOB_TYPE_RESOURCE_DESCRIPTOR 0x0003
#define EFI_HOB_TYPE_GUID_EXTENSION 0x0004
#define EFI_HOB_TYPE_FV 0x0005
#define EFI_HOB_TYPE_CPU 0x0006
#define EFI_HOB_TYPE_MEMORY_POOL 0x0007
#define EFI_HOB_TYPE_FV2 0x0009
#define EFI_HOB_TYPE_LOAD_PEIM_UNUSED 0x000A
#define EFI_HOB_TYPE_UEFI_CAPSULE 0x000B
#define EFI_HOB_TYPE_FV3 0x000C
#define EFI_HOB_TYPE_UNUSED 0xFFFE
#define EFI_HOB_TYPE_END_OF_HOB_LIST 0xFFFF
typedef struct {
uint16_t HobType;
uint16_t HobLength;
uint32_t Reserved;
} EFI_HOB_GENERIC_HEADER;
typedef uint64_t EFI_PHYSICAL_ADDRESS;
typedef uint32_t EFI_BOOT_MODE;
typedef struct {
EFI_HOB_GENERIC_HEADER Header;
uint32_t Version;
EFI_BOOT_MODE BootMode;
EFI_PHYSICAL_ADDRESS EfiMemoryTop;
EFI_PHYSICAL_ADDRESS EfiMemoryBottom;
EFI_PHYSICAL_ADDRESS EfiFreeMemoryTop;
EFI_PHYSICAL_ADDRESS EfiFreeMemoryBottom;
EFI_PHYSICAL_ADDRESS EfiEndOfHobList;
} EFI_HOB_HANDOFF_INFO_TABLE;
#define EFI_RESOURCE_SYSTEM_MEMORY 0x00000000
#define EFI_RESOURCE_MEMORY_MAPPED_IO 0x00000001
#define EFI_RESOURCE_IO 0x00000002
#define EFI_RESOURCE_FIRMWARE_DEVICE 0x00000003
#define EFI_RESOURCE_MEMORY_MAPPED_IO_PORT 0x00000004
#define EFI_RESOURCE_MEMORY_RESERVED 0x00000005
#define EFI_RESOURCE_IO_RESERVED 0x00000006
#define EFI_RESOURCE_MEMORY_UNACCEPTED 0x00000007
#define EFI_RESOURCE_MAX_MEMORY_TYPE 0x00000008
#define EFI_RESOURCE_ATTRIBUTE_PRESENT 0x00000001
#define EFI_RESOURCE_ATTRIBUTE_INITIALIZED 0x00000002
#define EFI_RESOURCE_ATTRIBUTE_TESTED 0x00000004
#define EFI_RESOURCE_ATTRIBUTE_SINGLE_BIT_ECC 0x00000008
#define EFI_RESOURCE_ATTRIBUTE_MULTIPLE_BIT_ECC 0x00000010
#define EFI_RESOURCE_ATTRIBUTE_ECC_RESERVED_1 0x00000020
#define EFI_RESOURCE_ATTRIBUTE_ECC_RESERVED_2 0x00000040
#define EFI_RESOURCE_ATTRIBUTE_READ_PROTECTED 0x00000080
#define EFI_RESOURCE_ATTRIBUTE_WRITE_PROTECTED 0x00000100
#define EFI_RESOURCE_ATTRIBUTE_EXECUTION_PROTECTED 0x00000200
#define EFI_RESOURCE_ATTRIBUTE_UNCACHEABLE 0x00000400
#define EFI_RESOURCE_ATTRIBUTE_WRITE_COMBINEABLE 0x00000800
#define EFI_RESOURCE_ATTRIBUTE_WRITE_THROUGH_CACHEABLE 0x00001000
#define EFI_RESOURCE_ATTRIBUTE_WRITE_BACK_CACHEABLE 0x00002000
#define EFI_RESOURCE_ATTRIBUTE_16_BIT_IO 0x00004000
#define EFI_RESOURCE_ATTRIBUTE_32_BIT_IO 0x00008000
#define EFI_RESOURCE_ATTRIBUTE_64_BIT_IO 0x00010000
#define EFI_RESOURCE_ATTRIBUTE_UNCACHED_EXPORTED 0x00020000
#define EFI_RESOURCE_ATTRIBUTE_READ_ONLY_PROTECTED 0x00040000
#define EFI_RESOURCE_ATTRIBUTE_READ_ONLY_PROTECTABLE 0x00080000
#define EFI_RESOURCE_ATTRIBUTE_READ_PROTECTABLE 0x00100000
#define EFI_RESOURCE_ATTRIBUTE_WRITE_PROTECTABLE 0x00200000
#define EFI_RESOURCE_ATTRIBUTE_EXECUTION_PROTECTABLE 0x00400000
#define EFI_RESOURCE_ATTRIBUTE_PERSISTENT 0x00800000
#define EFI_RESOURCE_ATTRIBUTE_PERSISTABLE 0x01000000
#define EFI_RESOURCE_ATTRIBUTE_MORE_RELIABLE 0x02000000
typedef uint32_t EFI_RESOURCE_TYPE;
typedef uint32_t EFI_RESOURCE_ATTRIBUTE_TYPE;
typedef struct {
EFI_HOB_GENERIC_HEADER Header;
EFI_GUID Owner;
EFI_RESOURCE_TYPE ResourceType;
EFI_RESOURCE_ATTRIBUTE_TYPE ResourceAttribute;
EFI_PHYSICAL_ADDRESS PhysicalStart;
uint64_t ResourceLength;
} EFI_HOB_RESOURCE_DESCRIPTOR;
typedef struct {
EFI_HOB_GENERIC_HEADER Header;
EFI_GUID Name;
/* guid specific data follows */
} EFI_HOB_GUID_TYPE;
typedef struct {
EFI_HOB_GENERIC_HEADER Header;
EFI_PHYSICAL_ADDRESS BaseAddress;
uint64_t Length;
} EFI_HOB_FIRMWARE_VOLUME;
typedef struct {
EFI_HOB_GENERIC_HEADER Header;
EFI_PHYSICAL_ADDRESS BaseAddress;
uint64_t Length;
EFI_GUID FvName;
EFI_GUID FileName;
} EFI_HOB_FIRMWARE_VOLUME2;
typedef struct {
EFI_HOB_GENERIC_HEADER Header;
EFI_PHYSICAL_ADDRESS BaseAddress;
uint64_t Length;
uint32_t AuthenticationStatus;
bool ExtractedFv;
EFI_GUID FvName;
EFI_GUID FileName;
} EFI_HOB_FIRMWARE_VOLUME3;
typedef struct {
EFI_HOB_GENERIC_HEADER Header;
uint8_t SizeOfMemorySpace;
uint8_t SizeOfIoSpace;
uint8_t Reserved[6];
} EFI_HOB_CPU;
typedef struct {
EFI_HOB_GENERIC_HEADER Header;
} EFI_HOB_MEMORY_POOL;
typedef struct {
EFI_HOB_GENERIC_HEADER Header;
EFI_PHYSICAL_ADDRESS BaseAddress;
uint64_t Length;
} EFI_HOB_UEFI_CAPSULE;
#define EFI_HOB_OWNER_ZERO \
((EFI_GUID){ 0x00000000, 0x0000, 0x0000, \
{ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 } })
#endif

View File

@@ -66,6 +66,7 @@ struct HostMemoryBackend {
uint64_t size;
bool merge, dump, use_canonical_path;
bool prealloc, is_mapped, share, reserve;
bool require_guest_memfd;
uint32_t prealloc_threads;
ThreadContext *prealloc_context;
DECLARE_BITMAP(host_nodes, MAX_NODES + 1);

View File

@@ -341,6 +341,7 @@ int kvm_arch_get_default_type(MachineState *ms);
int kvm_arch_init(MachineState *ms, KVMState *s);
int kvm_arch_pre_create_vcpu(CPUState *cpu, Error **errp);
int kvm_arch_init_vcpu(CPUState *cpu);
int kvm_arch_destroy_vcpu(CPUState *cpu);
@@ -538,4 +539,11 @@ bool kvm_arch_cpu_check_are_resettable(void);
bool kvm_dirty_ring_enabled(void);
uint32_t kvm_dirty_ring_size(void);
int kvm_create_guest_memfd(uint64_t size, uint64_t flags, Error **errp);
int kvm_set_memory_attributes_private(hwaddr start, hwaddr size);
int kvm_set_memory_attributes_shared(hwaddr start, hwaddr size);
int kvm_convert_memory(hwaddr start, hwaddr size, bool to_private);
#endif

View File

@@ -30,6 +30,8 @@ typedef struct KVMSlot
int as_id;
/* Cache of the offset in ram address space */
ram_addr_t ram_start_offset;
int guest_memfd;
hwaddr guest_memfd_offset;
} KVMSlot;
typedef struct KVMMemoryUpdate {

View File

@@ -560,4 +560,98 @@ struct kvm_pmu_event_filter {
/* x86-specific KVM_EXIT_HYPERCALL flags. */
#define KVM_EXIT_HYPERCALL_LONG_MODE BIT(0)
#define KVM_X86_DEFAULT_VM 0
#define KVM_X86_SW_PROTECTED_VM 1
#define KVM_X86_TDX_VM 2
#define KVM_X86_SNP_VM 3
/* Trust Domain eXtension sub-ioctl() commands. */
enum kvm_tdx_cmd_id {
KVM_TDX_CAPABILITIES = 0,
KVM_TDX_INIT_VM,
KVM_TDX_INIT_VCPU,
KVM_TDX_INIT_MEM_REGION,
KVM_TDX_FINALIZE_VM,
KVM_TDX_RELEASE_VM,
KVM_TDX_CMD_NR_MAX,
};
struct kvm_tdx_cmd {
/* enum kvm_tdx_cmd_id */
__u32 id;
/* flags for the sub-command. If the sub-command doesn't use this, set it to zero. */
__u32 flags;
/*
* data for each sub-command: an immediate value or a pointer to the actual
* data in process virtual address space. If the sub-command doesn't use it,
* set it to zero.
*/
__u64 data;
/*
* Auxiliary error code. The sub-command may return TDX SEAMCALL
* status code in addition to -Exxx.
* Defined for consistency with struct kvm_sev_cmd.
*/
__u64 error;
};
struct kvm_tdx_cpuid_config {
__u32 leaf;
__u32 sub_leaf;
__u32 eax;
__u32 ebx;
__u32 ecx;
__u32 edx;
};
struct kvm_tdx_capabilities {
__u64 attrs_fixed0;
__u64 attrs_fixed1;
__u64 xfam_fixed0;
__u64 xfam_fixed1;
#define TDX_CAP_GPAW_48 (1 << 0)
#define TDX_CAP_GPAW_52 (1 << 1)
__u32 supported_gpaw;
__u32 padding;
__u64 reserved[251];
__u32 nr_cpuid_configs;
struct kvm_tdx_cpuid_config cpuid_configs[];
};
struct kvm_tdx_init_vm {
__u64 attributes;
__u64 mrconfigid[6]; /* sha384 digest */
__u64 mrowner[6]; /* sha384 digest */
__u64 mrownerconfig[6]; /* sha384 digest */
/*
* For future extensibility to make sizeof(struct kvm_tdx_init_vm) = 8KB.
* This should be enough given sizeof(TD_PARAMS) = 1024.
* 8KB was chosen because
* sizeof(struct kvm_cpuid_entry2) * KVM_MAX_CPUID_ENTRIES(=256) = 8KB.
*/
__u64 reserved[1004];
/*
* Call KVM_TDX_INIT_VM before vcpu creation, thus before
* KVM_SET_CPUID2.
* This configuration supersedes KVM_SET_CPUID2s for VCPUs because the
* TDX module directly virtualizes those CPUIDs without VMM. The user
* space VMM, e.g. qemu, should make KVM_SET_CPUID2 consistent with
* those values. If it doesn't, KVM may have a wrong idea of the vCPUs'
* CPUIDs, and KVM may wrongly emulate CPUIDs or MSRs that the TDX
* module doesn't virtualize.
*/
struct kvm_cpuid2 cpuid;
};
#define KVM_TDX_MEASURE_MEMORY_REGION (1UL << 0)
struct kvm_tdx_init_mem_region {
__u64 source_addr;
__u64 gpa;
__u64 nr_pages;
};
#endif /* _ASM_X86_KVM_H */

View File

@@ -95,6 +95,19 @@ struct kvm_userspace_memory_region {
__u64 userspace_addr; /* start of the userspace allocated memory */
};
/* for KVM_SET_USER_MEMORY_REGION2 */
struct kvm_userspace_memory_region2 {
__u32 slot;
__u32 flags;
__u64 guest_phys_addr;
__u64 memory_size;
__u64 userspace_addr;
__u64 guest_memfd_offset;
__u32 guest_memfd;
__u32 pad1;
__u64 pad2[14];
};
/*
* The bit 0 ~ bit 15 of kvm_userspace_memory_region::flags are visible for
* userspace, other bits are reserved for kvm internal use which are defined
@@ -102,6 +115,7 @@ struct kvm_userspace_memory_region {
*/
#define KVM_MEM_LOG_DIRTY_PAGES (1UL << 0)
#define KVM_MEM_READONLY (1UL << 1)
#define KVM_MEM_PRIVATE (1UL << 2)
/* for KVM_IRQ_LINE */
struct kvm_irq_level {
@@ -223,6 +237,92 @@ struct kvm_xen_exit {
} u;
};
/* masks for reg_mask to indicate which registers are passed. */
#define TDX_VMCALL_REG_MASK_RBX BIT_ULL(2)
#define TDX_VMCALL_REG_MASK_RDX BIT_ULL(3)
#define TDX_VMCALL_REG_MASK_RSI BIT_ULL(6)
#define TDX_VMCALL_REG_MASK_RDI BIT_ULL(7)
#define TDX_VMCALL_REG_MASK_R8 BIT_ULL(8)
#define TDX_VMCALL_REG_MASK_R9 BIT_ULL(9)
#define TDX_VMCALL_REG_MASK_R10 BIT_ULL(10)
#define TDX_VMCALL_REG_MASK_R11 BIT_ULL(11)
#define TDX_VMCALL_REG_MASK_R12 BIT_ULL(12)
#define TDX_VMCALL_REG_MASK_R13 BIT_ULL(13)
#define TDX_VMCALL_REG_MASK_R14 BIT_ULL(14)
#define TDX_VMCALL_REG_MASK_R15 BIT_ULL(15)
struct kvm_tdx_exit {
#define KVM_EXIT_TDX_VMCALL 1
__u32 type;
__u32 pad;
union {
struct kvm_tdx_vmcall {
/*
* RAX(bit 0), RCX(bit 1) and RSP(bit 4) are reserved.
* RAX(bit 0): TDG.VP.VMCALL status code.
* RCX(bit 1): bitmap for used registers.
* RSP(bit 4): the caller stack.
*/
union {
__u64 in_rcx;
__u64 reg_mask;
};
/*
* Guest-Host-Communication Interface for TDX spec
* defines the ABI for TDG.VP.VMCALL.
*/
/* Input parameters: guest -> VMM */
union {
__u64 in_r10;
__u64 type;
};
union {
__u64 in_r11;
__u64 subfunction;
};
/*
* Subfunction specific.
* Registers are used in this order to pass input
* arguments. r12=arg0, r13=arg1, etc.
*/
__u64 in_r12;
__u64 in_r13;
__u64 in_r14;
__u64 in_r15;
__u64 in_rbx;
__u64 in_rdi;
__u64 in_rsi;
__u64 in_r8;
__u64 in_r9;
__u64 in_rdx;
/* Output parameters: VMM -> guest */
union {
__u64 out_r10;
__u64 status_code;
};
/*
* Subfunction specific.
* Registers are used in this order to output return
* values. r11=ret0, r12=ret1, etc.
*/
__u64 out_r11;
__u64 out_r12;
__u64 out_r13;
__u64 out_r14;
__u64 out_r15;
__u64 out_rbx;
__u64 out_rdi;
__u64 out_rsi;
__u64 out_r8;
__u64 out_r9;
__u64 out_rdx;
} vmcall;
} u;
};
#define KVM_S390_GET_SKEYS_NONE 1
#define KVM_S390_SKEYS_MAX 1048576
@@ -264,6 +364,8 @@ struct kvm_xen_exit {
#define KVM_EXIT_RISCV_SBI 35
#define KVM_EXIT_RISCV_CSR 36
#define KVM_EXIT_NOTIFY 37
#define KVM_EXIT_MEMORY_FAULT 39
#define KVM_EXIT_TDX 40
/* For KVM_EXIT_INTERNAL_ERROR */
/* Emulate instruction failed. */
@@ -506,6 +608,15 @@ struct kvm_run {
#define KVM_NOTIFY_CONTEXT_INVALID (1 << 0)
__u32 flags;
} notify;
/* KVM_EXIT_MEMORY_FAULT */
struct {
#define KVM_MEMORY_EXIT_FLAG_PRIVATE (1ULL << 3)
__u64 flags;
__u64 gpa;
__u64 size;
} memory_fault;
/* KVM_EXIT_TDX_VMCALL */
struct kvm_tdx_exit tdx;
/* Fix the size of the union. */
char padding[256];
};
@@ -1188,6 +1299,11 @@ struct kvm_ppc_resize_hpt {
#define KVM_CAP_COUNTER_OFFSET 227
#define KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE 228
#define KVM_CAP_ARM_SUPPORTED_BLOCK_SIZES 229
#define KVM_CAP_USER_MEMORY2 231
#define KVM_CAP_MEMORY_FAULT_INFO 232
#define KVM_CAP_MEMORY_ATTRIBUTES 233
#define KVM_CAP_GUEST_MEMFD 234
#define KVM_CAP_VM_TYPES 235
#ifdef KVM_CAP_IRQ_ROUTING
@@ -1469,6 +1585,8 @@ struct kvm_vfio_spapr_tce {
struct kvm_userspace_memory_region)
#define KVM_SET_TSS_ADDR _IO(KVMIO, 0x47)
#define KVM_SET_IDENTITY_MAP_ADDR _IOW(KVMIO, 0x48, __u64)
#define KVM_SET_USER_MEMORY_REGION2 _IOW(KVMIO, 0x49, \
struct kvm_userspace_memory_region2)
/* enable ucontrol for s390 */
struct kvm_s390_ucas_mapping {
@@ -2252,4 +2370,26 @@ struct kvm_s390_zpci_op {
/* flags for kvm_s390_zpci_op->u.reg_aen.flags */
#define KVM_S390_ZPCIOP_REGAEN_HOST (1 << 0)
/* Available with KVM_CAP_MEMORY_ATTRIBUTES */
#define KVM_SET_MEMORY_ATTRIBUTES _IOW(KVMIO, 0xd2, struct kvm_memory_attributes)
struct kvm_memory_attributes {
__u64 address;
__u64 size;
__u64 attributes;
__u64 flags;
};
#define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3)
#define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd)
struct kvm_create_guest_memfd {
__u64 size;
__u64 flags;
__u64 reserved[6];
};
#define KVM_GUEST_MEMFD_ALLOW_HUGEPAGE (1ULL << 0)
#endif /* __LINUX_KVM_H */
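For illustration, a minimal user-space sketch of converting a GPA range to
private with the attributes API above (``vm_fd`` and the helper name are
assumptions; clearing the attribute converts the range back to shared):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int gpa_set_private(int vm_fd, uint64_t gpa, uint64_t size)
{
    struct kvm_memory_attributes attrs = {
        .address = gpa,
        .size = size,
        .attributes = KVM_MEMORY_ATTRIBUTE_PRIVATE,
        .flags = 0,
    };

    return ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs);
}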

View File

@@ -878,6 +878,33 @@
'reduced-phys-bits': 'uint32',
'*kernel-hashes': 'bool' } }
##
# @TdxGuestProperties:
#
# Properties for tdx-guest objects.
#
# @sept-ve-disable: toggle bit 28 of TD attributes to control disabling
# of EPT violation conversion to #VE on guest TD access of PENDING
# pages. Some guest OSes (e.g., Linux TD guests) may require this to
# be set; otherwise they refuse to boot.
#
# @mrconfigid: base64 encoded MRCONFIGID SHA384 digest
#
# @mrowner: base64 encoded MROWNER SHA384 digest
#
# @mrownerconfig: base64 MROWNERCONFIG SHA384 digest
#
# @quote-generation-socket: socket address for Quote Generation Service (QGS)
#
# Since: 8.2
##
{ 'struct': 'TdxGuestProperties',
'data': { '*sept-ve-disable': 'bool',
'*mrconfigid': 'str',
'*mrowner': 'str',
'*mrownerconfig': 'str',
'*quote-generation-socket': 'SocketAddress' } }
##
# @ThreadContextProperties:
#
@@ -956,6 +983,7 @@
'sev-guest',
'thread-context',
's390-pv-guest',
'tdx-guest',
'throttle-group',
'tls-creds-anon',
'tls-creds-psk',
@@ -1022,6 +1050,7 @@
'secret_keyring': { 'type': 'SecretKeyringProperties',
'if': 'CONFIG_SECRET_KEYRING' },
'sev-guest': 'SevGuestProperties',
'tdx-guest': 'TdxGuestProperties',
'thread-context': 'ThreadContextProperties',
'throttle-group': 'ThrottleGroupProperties',
'tls-creds-anon': 'TlsCredsAnonProperties',

View File

@@ -496,10 +496,12 @@
#
# @s390: s390 guest panic information type (Since: 2.12)
#
# @tdx: tdx guest panic information type (Since: 8.2)
#
# Since: 2.9
##
{ 'enum': 'GuestPanicInformationType',
'data': [ 'hyper-v', 's390' ] }
'data': [ 'hyper-v', 's390', 'tdx' ] }
##
# @GuestPanicInformation:
@@ -514,7 +516,8 @@
'base': {'type': 'GuestPanicInformationType'},
'discriminator': 'type',
'data': {'hyper-v': 'GuestPanicInformationHyperV',
's390': 'GuestPanicInformationS390'}}
's390': 'GuestPanicInformationS390',
'tdx' : 'GuestPanicInformationTdx'}}
##
# @GuestPanicInformationHyperV:
@@ -577,6 +580,26 @@
'psw-addr': 'uint64',
'reason': 'S390CrashReason'}}
##
# @GuestPanicInformationTdx:
#
# TDX GHCI TDG.VP.VMCALL<ReportFatalError> specific guest panic information
#
# @error-code: TD-specific error code
#
# @gpa: 4KB-aligned guest physical address of the page containing
# additional error data
#
# @message: TD guest-provided message string. (It is not fully trustworthy
# and cannot be assumed to be well formed because it comes from the guest)
#
# Since: 8.2
##
{'struct': 'GuestPanicInformationTdx',
'data': {'error-code': 'uint64',
'gpa': 'uint64',
'message': 'str'}}
##
# @MEMORY_FAILURE:
#

View File

@@ -1862,6 +1862,24 @@ bool memory_region_is_protected(MemoryRegion *mr)
return mr->ram && (mr->ram_block->flags & RAM_PROTECTED);
}
bool memory_region_has_guest_memfd(MemoryRegion *mr)
{
return mr->ram_block && mr->ram_block->guest_memfd >= 0;
}
bool memory_region_is_default_private(MemoryRegion *mr)
{
return memory_region_has_guest_memfd(mr) &&
(mr->ram_block->flags & RAM_DEFAULT_PRIVATE);
}
void memory_region_set_default_private(MemoryRegion *mr)
{
if (memory_region_has_guest_memfd(mr)) {
mr->ram_block->flags |= RAM_DEFAULT_PRIVATE;
}
}
uint8_t memory_region_get_dirty_log_mask(MemoryRegion *mr)
{
uint8_t mask = mr->dirty_log_mask;
@@ -3614,6 +3632,33 @@ void memory_region_init_ram(MemoryRegion *mr,
vmstate_register_ram(mr, owner_dev);
}
void memory_region_init_ram_guest_memfd(MemoryRegion *mr,
Object *owner,
const char *name,
uint64_t size,
Error **errp)
{
DeviceState *owner_dev;
Error *err = NULL;
memory_region_init_ram_flags_nomigrate(mr, owner, name, size,
RAM_GUEST_MEMFD, &err);
if (err) {
error_propagate(errp, err);
return;
}
memory_region_set_default_private(mr);
/* This will assert if owner is neither NULL nor a DeviceState.
* We only want the owner here for the purposes of defining a
* unique name for migration. TODO: Ideally we should implement
* a naming scheme for Objects which are not DeviceStates, in
* which case we can relax this restriction.
*/
owner_dev = DEVICE(owner);
vmstate_register_ram(mr, owner_dev);
}
void memory_region_init_rom(MemoryRegion *mr,
Object *owner,
const char *name,


@@ -1803,6 +1803,40 @@ static void dirty_memory_extend(ram_addr_t old_ram_size,
}
}
#ifdef CONFIG_KVM
#define HPAGE_PMD_SIZE_PATH "/sys/kernel/mm/transparent_hugepage/hpage_pmd_size"
#define DEFAULT_PMD_SIZE (1ul << 21)
static uint32_t get_thp_size(void)
{
gchar *content = NULL;
const char *endptr;
static uint64_t thp_size = 0;
uint64_t tmp;
if (thp_size != 0) {
return thp_size;
}
if (g_file_get_contents(HPAGE_PMD_SIZE_PATH, &content, NULL, NULL) &&
!qemu_strtou64(content, &endptr, 0, &tmp) &&
(!endptr || *endptr == '\n')) {
/* Sanity-check the value and fall back to something reasonable. */
if (!tmp || !is_power_of_2(tmp)) {
warn_report("Read unsupported THP size: %" PRIx64, tmp);
} else {
thp_size = tmp;
}
}
if (!thp_size) {
thp_size = DEFAULT_PMD_SIZE;
}
return thp_size;
}
#endif
static void ram_block_add(RAMBlock *new_block, Error **errp)
{
const bool noreserve = qemu_ram_is_noreserve(new_block);
@@ -1841,6 +1875,20 @@ static void ram_block_add(RAMBlock *new_block, Error **errp)
}
}
#ifdef CONFIG_KVM
if (kvm_enabled() && new_block->flags & RAM_GUEST_MEMFD &&
new_block->guest_memfd < 0) {
uint64_t flags = QEMU_IS_ALIGNED(new_block->max_length, get_thp_size()) ?
KVM_GUEST_MEMFD_ALLOW_HUGEPAGE : 0;
new_block->guest_memfd = kvm_create_guest_memfd(new_block->max_length,
flags, errp);
if (new_block->guest_memfd < 0) {
qemu_mutex_unlock_ramlist();
return;
}
}
#endif
new_ram_size = MAX(old_ram_size,
(new_block->offset + new_block->max_length) >> TARGET_PAGE_BITS);
if (new_ram_size > old_ram_size) {
@@ -1903,7 +1951,7 @@ RAMBlock *qemu_ram_alloc_from_fd(ram_addr_t size, MemoryRegion *mr,
/* Just support these ram flags by now. */
assert((ram_flags & ~(RAM_SHARED | RAM_PMEM | RAM_NORESERVE |
RAM_PROTECTED | RAM_NAMED_FILE | RAM_READONLY |
RAM_READONLY_FD)) == 0);
RAM_READONLY_FD | RAM_GUEST_MEMFD)) == 0);
if (xen_enabled()) {
error_setg(errp, "-mem-path not supported with Xen");
@@ -1938,6 +1986,7 @@ RAMBlock *qemu_ram_alloc_from_fd(ram_addr_t size, MemoryRegion *mr,
new_block->used_length = size;
new_block->max_length = size;
new_block->flags = ram_flags;
new_block->guest_memfd = -1;
new_block->host = file_ram_alloc(new_block, size, fd, !file_size, offset,
errp);
if (!new_block->host) {
@@ -2016,7 +2065,7 @@ RAMBlock *qemu_ram_alloc_internal(ram_addr_t size, ram_addr_t max_size,
Error *local_err = NULL;
assert((ram_flags & ~(RAM_SHARED | RAM_RESIZEABLE | RAM_PREALLOC |
RAM_NORESERVE)) == 0);
RAM_NORESERVE | RAM_GUEST_MEMFD)) == 0);
assert(!host ^ (ram_flags & RAM_PREALLOC));
size = HOST_PAGE_ALIGN(size);
@@ -2028,6 +2077,7 @@ RAMBlock *qemu_ram_alloc_internal(ram_addr_t size, ram_addr_t max_size,
new_block->max_length = max_size;
assert(max_size >= size);
new_block->fd = -1;
new_block->guest_memfd = -1;
new_block->page_size = qemu_real_host_page_size();
new_block->host = host;
new_block->flags = ram_flags;
@@ -2050,7 +2100,7 @@ RAMBlock *qemu_ram_alloc_from_ptr(ram_addr_t size, void *host,
RAMBlock *qemu_ram_alloc(ram_addr_t size, uint32_t ram_flags,
MemoryRegion *mr, Error **errp)
{
assert((ram_flags & ~(RAM_SHARED | RAM_NORESERVE)) == 0);
assert((ram_flags & ~(RAM_SHARED | RAM_NORESERVE | RAM_GUEST_MEMFD)) == 0);
return qemu_ram_alloc_internal(size, size, NULL, NULL, ram_flags, mr, errp);
}
@@ -2078,6 +2128,11 @@ static void reclaim_ramblock(RAMBlock *block)
} else {
qemu_anon_ram_free(block->host, block->max_length);
}
if (block->guest_memfd >= 0) {
close(block->guest_memfd);
}
g_free(block);
}
@@ -3477,17 +3532,16 @@ int ram_block_discard_range(RAMBlock *rb, uint64_t start, size_t length)
uint8_t *host_startaddr = rb->host + start;
if (!QEMU_PTR_IS_ALIGNED(host_startaddr, rb->page_size)) {
error_report("ram_block_discard_range: Unaligned start address: %p",
host_startaddr);
if (!QEMU_PTR_IS_ALIGNED(host_startaddr, qemu_host_page_size)) {
error_report("%s: Unaligned start address: %p",
__func__, host_startaddr);
goto err;
}
if ((start + length) <= rb->max_length) {
bool need_madvise, need_fallocate;
if (!QEMU_IS_ALIGNED(length, rb->page_size)) {
error_report("ram_block_discard_range: Unaligned length: %zx",
length);
error_report("%s: Unaligned length: %zx", __func__, length);
goto err;
}
@@ -3511,8 +3565,8 @@ int ram_block_discard_range(RAMBlock *rb, uint64_t start, size_t length)
* proper error message.
*/
if (rb->flags & RAM_READONLY_FD) {
error_report("ram_block_discard_range: Discarding RAM"
" with readonly files is not supported");
error_report("%s: Discarding RAM with readonly files is not"
" supported", __func__);
goto err;
}
@@ -3527,27 +3581,26 @@ int ram_block_discard_range(RAMBlock *rb, uint64_t start, size_t length)
* file.
*/
if (!qemu_ram_is_shared(rb)) {
warn_report_once("ram_block_discard_range: Discarding RAM"
warn_report_once("%s: Discarding RAM"
" in private file mappings is possibly"
" dangerous, because it will modify the"
" underlying file and will affect other"
" users of the file");
" users of the file", __func__);
}
ret = fallocate(rb->fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
start, length);
if (ret) {
ret = -errno;
error_report("ram_block_discard_range: Failed to fallocate "
"%s:%" PRIx64 " +%zx (%d)",
rb->idstr, start, length, ret);
error_report("%s: Failed to fallocate %s:%" PRIx64 " +%zx (%d)",
__func__, rb->idstr, start, length, ret);
goto err;
}
#else
ret = -ENOSYS;
error_report("ram_block_discard_range: fallocate not available/file"
error_report("%s: fallocate not available/file"
"%s:%" PRIx64 " +%zx (%d)",
rb->idstr, start, length, ret);
__func__, rb->idstr, start, length, ret);
goto err;
#endif
}
@@ -3565,31 +3618,52 @@ int ram_block_discard_range(RAMBlock *rb, uint64_t start, size_t length)
}
if (ret) {
ret = -errno;
error_report("ram_block_discard_range: Failed to discard range "
error_report("%s: Failed to discard range "
"%s:%" PRIx64 " +%zx (%d)",
rb->idstr, start, length, ret);
__func__, rb->idstr, start, length, ret);
goto err;
}
#else
ret = -ENOSYS;
error_report("ram_block_discard_range: MADVISE not available"
"%s:%" PRIx64 " +%zx (%d)",
rb->idstr, start, length, ret);
error_report("%s: MADVISE not available %s:%" PRIx64 " +%zx (%d)",
__func__, rb->idstr, start, length, ret);
goto err;
#endif
}
trace_ram_block_discard_range(rb->idstr, host_startaddr, length,
need_madvise, need_fallocate, ret);
} else {
error_report("ram_block_discard_range: Overrun block '%s' (%" PRIu64
"/%zx/" RAM_ADDR_FMT")",
rb->idstr, start, length, rb->max_length);
error_report("%s: Overrun block '%s' (%" PRIu64 "/%zx/" RAM_ADDR_FMT")",
__func__, rb->idstr, start, length, rb->max_length);
}
err:
return ret;
}
static int ram_block_discard_guest_memfd_range(RAMBlock *rb, uint64_t start,
size_t length)
{
int ret = -1;
#ifdef CONFIG_FALLOCATE_PUNCH_HOLE
ret = fallocate(rb->guest_memfd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
start, length);
if (ret) {
ret = -errno;
error_report("%s: Failed to fallocate %s:%" PRIx64 " +%zx (%d)",
__func__, rb->idstr, start, length, ret);
}
#else
ret = -ENOSYS;
error_report("%s: fallocate not available %s:%" PRIx64 " +%zx (%d)",
__func__, rb->idstr, start, length, ret);
#endif
return ret;
}
bool ramblock_is_pmem(RAMBlock *rb)
{
return rb->flags & RAM_PMEM;
@@ -3777,3 +3851,30 @@ bool ram_block_discard_is_required(void)
return qatomic_read(&ram_block_discard_required_cnt) ||
qatomic_read(&ram_block_coordinated_discard_required_cnt);
}
int ram_block_convert_range(RAMBlock *rb, uint64_t start, size_t length,
bool shared_to_private)
{
if (!rb || rb->guest_memfd < 0) {
return -1;
}
if (!QEMU_PTR_IS_ALIGNED(start, qemu_host_page_size) ||
!QEMU_PTR_IS_ALIGNED(length, qemu_host_page_size)) {
return -1;
}
if (!length) {
return -1;
}
if (start + length > rb->max_length) {
return -1;
}
if (shared_to_private) {
return ram_block_discard_range(rb, start, length);
} else {
return ram_block_discard_guest_memfd_range(rb, start, length);
}
}
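
A hypothetical caller sketch (the real call site is not part of this hunk): react to a guest-initiated page conversion by punching a hole in whichever backing the range is leaving, so only one copy of the range stays allocated at a time.

static void convert_memory_sketch(RAMBlock *rb, uint64_t offset,
                                  size_t size, bool to_private)
{
    /* to_private discards the shared RAM copy; to shared discards the
     * guest_memfd copy (see ram_block_convert_range() above) */
    if (ram_block_convert_range(rb, offset, size, to_private) < 0) {
        warn_report("failed to convert %s +%" PRIx64 "/%zx to %s",
                    rb->idstr, offset, size,
                    to_private ? "private" : "shared");
    }
}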


@@ -518,6 +518,52 @@ static void qemu_system_wakeup(void)
}
}
static char *tdx_parse_panic_message(char *message)
{
bool printable = false;
char *buf = NULL;
int len = 0, i;
/*
* Although the message is defined as a JSON string, we shouldn't
* treat it as such unconditionally, because the guest generated it
* and it's not necessarily trustworthy.
*/
if (message) {
/* The caller guarantees a NUL-terminated string. */
len = strlen(message);
printable = len > 0;
for (i = 0; i < len; i++) {
if (!(0x20 <= message[i] && message[i] <= 0x7e)) {
printable = false;
break;
}
}
}
if (!printable && len) {
/* 3 = length of "%02x ", plus one byte for the final NUL */
buf = g_malloc(len * 3 + 1);
for (i = 0; i < len; i++) {
if (message[i] == '\0') {
break;
} else {
sprintf(buf + 3 * i, "%02x ", message[i]);
}
}
if (i > 0) {
    /* replace the trailing ' ' (space) with NUL */
    buf[i * 3 - 1] = '\0';
} else {
    buf[0] = '\0';
}
return buf;
}
return message;
}
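
The expected behaviour of the helper, written as a glib-assert sketch (illustrative only; the function is static, so this cannot literally link against runstate.c):

static void check_tdx_parse_panic_message(void)
{
    /* printable ASCII is passed through unchanged (same pointer) */
    char printable[] = "kernel panic";
    g_assert(tdx_parse_panic_message(printable) == printable);

    /* anything non-printable comes back hex-dumped in a new buffer */
    char binary[] = "\x01\x02\x03";
    char *hex = tdx_parse_panic_message(binary);
    g_assert_cmpstr(hex, ==, "01 02 03");
    g_free(hex);
}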
void qemu_system_guest_panicked(GuestPanicInformation *info)
{
qemu_log_mask(LOG_GUEST_ERROR, "Guest crashed");
@@ -559,7 +605,15 @@ void qemu_system_guest_panicked(GuestPanicInformation *info)
S390CrashReason_str(info->u.s390.reason),
info->u.s390.psw_mask,
info->u.s390.psw_addr);
} else if (info->type == GUEST_PANIC_INFORMATION_TYPE_TDX) {
    char *message = tdx_parse_panic_message(info->u.tdx.message);
    qemu_log_mask(LOG_GUEST_ERROR,
                  " TDX guest reports fatal error: \"%s\""
                  " error code: 0x%016" PRIx64 " gpa page: 0x%016" PRIx64 "\n",
                  message, info->u.tdx.error_code, info->u.tdx.gpa);
    /* tdx_parse_panic_message() may have allocated a hex-dump copy */
    if (message != info->u.tdx.message) {
        g_free(message);
    }
}
qapi_free_GuestPanicInformation(info);
}
}


@@ -20,6 +20,15 @@
#ifndef I386_CPU_INTERNAL_H
#define I386_CPU_INTERNAL_H
typedef struct FeatureMask {
FeatureWord index;
uint64_t mask;
} FeatureMask;
typedef struct FeatureDep {
FeatureMask from, to;
} FeatureDep;
typedef enum FeatureWordType {
CPUID_FEATURE_WORD,
MSR_FEATURE_WORD,


@@ -1442,15 +1442,6 @@ FeatureWordInfo feature_word_info[FEATURE_WORDS] = {
},
};
typedef struct FeatureMask {
FeatureWord index;
uint64_t mask;
} FeatureMask;
typedef struct FeatureDep {
FeatureMask from, to;
} FeatureDep;
static FeatureDep feature_dependencies[] = {
{
.from = { FEAT_7_0_EDX, CPUID_7_0_EDX_ARCH_CAPABILITIES },
@@ -1575,9 +1566,6 @@ static const X86RegisterInfo32 x86_reg_info_32[CPU_NB_REGS32] = {
};
#undef REGISTER
/* CPUID feature bits available in XSS */
#define CPUID_XSTATE_XSS_MASK (XSTATE_ARCH_LBR_MASK)
ExtSaveArea x86_ext_save_areas[XSAVE_STATE_AREA_COUNT] = {
[XSTATE_FP_BIT] = {
/* x87 FP state component is always enabled if XSAVE is supported */


@@ -588,6 +588,9 @@ typedef enum X86Seg {
XSTATE_Hi16_ZMM_MASK | XSTATE_PKRU_MASK | \
XSTATE_XTILE_CFG_MASK | XSTATE_XTILE_DATA_MASK)
/* CPUID feature bits available in XSS */
#define CPUID_XSTATE_XSS_MASK (XSTATE_ARCH_LBR_MASK)
/* CPUID feature words */
typedef enum FeatureWord {
FEAT_1_EDX, /* CPUID[1].EDX */
@@ -780,6 +783,8 @@ uint64_t x86_cpu_get_supported_feature_word(FeatureWord w,
/* Support RDFSBASE/RDGSBASE/WRFSBASE/WRGSBASE */
#define CPUID_7_0_EBX_FSGSBASE (1U << 0)
/* Support for TSC adjustment MSR 0x3B */
#define CPUID_7_0_EBX_TSC_ADJUST (1U << 1)
/* Support SGX */
#define CPUID_7_0_EBX_SGX (1U << 2)
/* 1st Group of Advanced Bit Manipulation Extensions */
@@ -798,8 +803,12 @@ uint64_t x86_cpu_get_supported_feature_word(FeatureWord w,
#define CPUID_7_0_EBX_INVPCID (1U << 10)
/* Restricted Transactional Memory */
#define CPUID_7_0_EBX_RTM (1U << 11)
/* Cache QoS Monitoring */
#define CPUID_7_0_EBX_PQM (1U << 12)
/* Memory Protection Extension */
#define CPUID_7_0_EBX_MPX (1U << 14)
/* Resource Director Technology Allocation */
#define CPUID_7_0_EBX_RDT_A (1U << 15)
/* AVX-512 Foundation */
#define CPUID_7_0_EBX_AVX512F (1U << 16)
/* AVX-512 Doubleword & Quadword Instruction */
@@ -855,12 +864,20 @@ uint64_t x86_cpu_get_supported_feature_word(FeatureWord w,
#define CPUID_7_0_ECX_AVX512VNNI (1U << 11)
/* Support for VPOPCNT[B,W] and VPSHUFBITQMB */
#define CPUID_7_0_ECX_AVX512BITALG (1U << 12)
/* Intel Total Memory Encryption */
#define CPUID_7_0_ECX_TME (1U << 13)
/* POPCNT for vectors of DW/QW */
#define CPUID_7_0_ECX_AVX512_VPOPCNTDQ (1U << 14)
/* Placeholder for bit 15 */
#define CPUID_7_0_ECX_FZM (1U << 15)
/* 5-level Page Tables */
#define CPUID_7_0_ECX_LA57 (1U << 16)
/* MAWAU for MPX */
#define CPUID_7_0_ECX_MAWAU (31U << 17)
/* Read Processor ID */
#define CPUID_7_0_ECX_RDPID (1U << 22)
/* KeyLocker */
#define CPUID_7_0_ECX_KeyLocker (1U << 23)
/* Bus Lock Debug Exception */
#define CPUID_7_0_ECX_BUS_LOCK_DETECT (1U << 24)
/* Cache Line Demote Instruction */
@@ -869,6 +886,8 @@ uint64_t x86_cpu_get_supported_feature_word(FeatureWord w,
#define CPUID_7_0_ECX_MOVDIRI (1U << 27)
/* Move 64 Bytes as Direct Store Instruction */
#define CPUID_7_0_ECX_MOVDIR64B (1U << 28)
/* ENQCMD and ENQCMDS instructions */
#define CPUID_7_0_ECX_ENQCMD (1U << 29)
/* Support SGX Launch Control */
#define CPUID_7_0_ECX_SGX_LC (1U << 30)
/* Protection Keys for Supervisor-mode Pages */
@@ -886,6 +905,8 @@ uint64_t x86_cpu_get_supported_feature_word(FeatureWord w,
#define CPUID_7_0_EDX_SERIALIZE (1U << 14)
/* TSX Suspend Load Address Tracking instruction */
#define CPUID_7_0_EDX_TSX_LDTRK (1U << 16)
/* PCONFIG instruction */
#define CPUID_7_0_EDX_PCONFIG (1U << 18)
/* Architectural LBRs */
#define CPUID_7_0_EDX_ARCH_LBR (1U << 19)
/* AMX_BF16 instruction */


@@ -15,6 +15,7 @@
#include "sysemu/sysemu.h"
#include "hw/boards.h"
#include "tdx.h"
#include "kvm_i386.h"
#include "hw/core/accel-cpu.h"
@@ -60,6 +61,10 @@ static bool lmce_supported(void)
if (kvm_ioctl(kvm_state, KVM_X86_GET_MCE_CAP_SUPPORTED, &mce_cap) < 0) {
return false;
}
if (is_tdx_vm()) {
    return false;
}
return !!(mce_cap & MCG_LMCE_P);
}


@@ -32,6 +32,7 @@
#include "sysemu/runstate.h"
#include "kvm_i386.h"
#include "sev.h"
#include "tdx.h"
#include "xen-emu.h"
#include "hyperv.h"
#include "hyperv-proto.h"
@@ -61,6 +62,7 @@
#include "migration/blocker.h"
#include "exec/memattrs.h"
#include "trace.h"
#include "tdx.h"
#include CONFIG_DEVICES
@@ -161,6 +163,36 @@ static KVMMSRHandlers msr_handlers[KVM_MSR_FILTER_MAX_RANGES];
static RateLimit bus_lock_ratelimit_ctrl;
static int kvm_get_one_msr(X86CPU *cpu, int index, uint64_t *value);
static const char *vm_type_name[] = {
[KVM_X86_DEFAULT_VM] = "default",
[KVM_X86_SW_PROTECTED_VM] = "sw-protected-vm",
[KVM_X86_TDX_VM] = "tdx",
};
int kvm_get_vm_type(MachineState *ms, const char *vm_type)
{
int kvm_type = KVM_X86_DEFAULT_VM;
if (ms->cgs && object_dynamic_cast(OBJECT(ms->cgs), TYPE_TDX_GUEST)) {
kvm_type = KVM_X86_TDX_VM;
}
/*
* Old KVM doesn't support KVM_CAP_VM_TYPES, but KVM_X86_DEFAULT_VM
* is always supported, so skip the capability check for it
*/
if (kvm_type == KVM_X86_DEFAULT_VM) {
return kvm_type;
}
if (!(kvm_check_extension(KVM_STATE(ms->accelerator), KVM_CAP_VM_TYPES) & BIT(kvm_type))) {
error_report("vm-type %s not supported by KVM", vm_type_name[kvm_type]);
exit(1);
}
return kvm_type;
}
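
A sketch of the (not shown) call site, under the assumption that the generic KVM init path feeds this value straight into VM creation: the selected type ends up as the 'type' argument of the KVM_CREATE_VM ioctl.

static int create_vm_sketch(KVMState *s, MachineState *ms)
{
    int type = kvm_get_vm_type(ms, NULL); /* KVM_X86_TDX_VM for TDs */

    return kvm_ioctl(s, KVM_CREATE_VM, type);
}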
bool kvm_has_smm(void)
{
return kvm_vm_check_extension(kvm_state, KVM_CAP_X86_SMM);
@@ -247,7 +279,7 @@ void kvm_synchronize_all_tsc(void)
{
CPUState *cpu;
if (kvm_enabled()) {
if (kvm_enabled() && !is_tdx_vm()) {
CPU_FOREACH(cpu) {
run_on_cpu(cpu, do_kvm_synchronize_tsc, RUN_ON_CPU_NULL);
}
@@ -490,6 +522,10 @@ uint32_t kvm_arch_get_supported_cpuid(KVMState *s, uint32_t function,
ret |= 1U << KVM_HINTS_REALTIME;
}
if (is_tdx_vm()) {
tdx_get_supported_cpuid(function, index, reg, &ret);
}
return ret;
}
@@ -759,6 +795,15 @@ static int kvm_arch_set_tsc_khz(CPUState *cs)
int r, cur_freq;
bool set_ioctl = false;
/*
* The TSC of a TD vcpu is immutable: it cannot be set/changed via the
* vcpu scope KVM_SET_TSC_KHZ ioctl, and can only be initialized via the
* VM scope KVM_SET_TSC_KHZ before ioctl KVM_TDX_INIT_VM in tdx_pre_create_vcpu()
*/
if (is_tdx_vm()) {
return 0;
}
if (!env->tsc_khz) {
return 0;
}
@@ -1655,8 +1700,6 @@ static int hyperv_init_vcpu(X86CPU *cpu)
static Error *invtsc_mig_blocker;
#define KVM_MAX_CPUID_ENTRIES 100
static void kvm_init_xsave(CPUX86State *env)
{
if (has_xsave2) {
@@ -1699,6 +1742,236 @@ static void kvm_init_nested_state(CPUX86State *env)
}
}
uint32_t kvm_x86_arch_cpuid(CPUX86State *env, struct kvm_cpuid_entry2 *entries,
uint32_t cpuid_i)
{
uint32_t limit, i, j;
uint32_t unused;
struct kvm_cpuid_entry2 *c;
cpu_x86_cpuid(env, 0, 0, &limit, &unused, &unused, &unused);
for (i = 0; i <= limit; i++) {
if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
fprintf(stderr, "unsupported level value: 0x%x\n", limit);
abort();
}
c = &entries[cpuid_i++];
switch (i) {
case 2: {
/* Keep reading function 2 till all the input is received */
int times;
c->function = i;
c->flags = KVM_CPUID_FLAG_STATEFUL_FUNC |
KVM_CPUID_FLAG_STATE_READ_NEXT;
cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);
times = c->eax & 0xff;
for (j = 1; j < times; ++j) {
if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
fprintf(stderr, "cpuid_data is full, no space for "
"cpuid(eax:2):eax & 0xf = 0x%x\n", times);
abort();
}
c = &entries[cpuid_i++];
c->function = i;
c->flags = KVM_CPUID_FLAG_STATEFUL_FUNC;
cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);
}
break;
}
case 0x1f:
if (env->nr_dies < 2) {
cpuid_i--;
break;
}
/* fallthrough */
case 4:
case 0xb:
case 0xd:
for (j = 0; ; j++) {
if (i == 0xd && j == 64) {
break;
}
c->function = i;
c->flags = KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
c->index = j;
cpu_x86_cpuid(env, i, j, &c->eax, &c->ebx, &c->ecx, &c->edx);
if (i == 4 && c->eax == 0) {
break;
}
if (i == 0xb && !(c->ecx & 0xff00)) {
break;
}
if (i == 0x1f && !(c->ecx & 0xff00)) {
break;
}
if (i == 0xd && c->eax == 0) {
continue;
}
if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
fprintf(stderr, "cpuid_data is full, no space for "
"cpuid(eax:0x%x,ecx:0x%x)\n", i, j);
abort();
}
c = &entries[cpuid_i++];
}
break;
case 0x7:
case 0x12:
for (j = 0; ; j++) {
c->function = i;
c->flags = KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
c->index = j;
cpu_x86_cpuid(env, i, j, &c->eax, &c->ebx, &c->ecx, &c->edx);
if (j > 1 && (c->eax & 0xf) != 1) {
break;
}
if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
fprintf(stderr, "cpuid_data is full, no space for "
"cpuid(eax:0x12,ecx:0x%x)\n", j);
abort();
}
c = &entries[cpuid_i++];
}
break;
case 0x14:
case 0x1d:
case 0x1e: {
uint32_t times;
c->function = i;
c->index = 0;
c->flags = KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);
times = c->eax;
for (j = 1; j <= times; ++j) {
if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
fprintf(stderr, "cpuid_data is full, no space for "
"cpuid(eax:0x%x,ecx:0x%x)\n", i, j);
abort();
}
c = &entries[cpuid_i++];
c->function = i;
c->index = j;
c->flags = KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
cpu_x86_cpuid(env, i, j, &c->eax, &c->ebx, &c->ecx, &c->edx);
}
break;
}
default:
c->function = i;
c->flags = 0;
cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);
if (!c->eax && !c->ebx && !c->ecx && !c->edx) {
/*
* KVM already returns all zeroes if a CPUID entry is missing,
* so we can omit it and avoid hitting KVM's 80-entry limit.
*/
cpuid_i--;
}
break;
}
}
if (limit >= 0x0a) {
uint32_t eax, edx;
cpu_x86_cpuid(env, 0x0a, 0, &eax, &unused, &unused, &edx);
has_architectural_pmu_version = eax & 0xff;
if (has_architectural_pmu_version > 0) {
num_architectural_pmu_gp_counters = (eax & 0xff00) >> 8;
/* Shouldn't be more than 32, since that's the number of bits
* available in EBX to tell us _which_ counters are available.
* Play it safe.
*/
if (num_architectural_pmu_gp_counters > MAX_GP_COUNTERS) {
num_architectural_pmu_gp_counters = MAX_GP_COUNTERS;
}
if (has_architectural_pmu_version > 1) {
num_architectural_pmu_fixed_counters = edx & 0x1f;
if (num_architectural_pmu_fixed_counters > MAX_FIXED_COUNTERS) {
num_architectural_pmu_fixed_counters = MAX_FIXED_COUNTERS;
}
}
}
}
cpu_x86_cpuid(env, 0x80000000, 0, &limit, &unused, &unused, &unused);
for (i = 0x80000000; i <= limit; i++) {
if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
fprintf(stderr, "unsupported xlevel value: 0x%x\n", limit);
abort();
}
c = &entries[cpuid_i++];
switch (i) {
case 0x8000001d:
/* Query for all AMD cache information leaves */
for (j = 0; ; j++) {
c->function = i;
c->flags = KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
c->index = j;
cpu_x86_cpuid(env, i, j, &c->eax, &c->ebx, &c->ecx, &c->edx);
if (c->eax == 0) {
break;
}
if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
fprintf(stderr, "cpuid_data is full, no space for "
"cpuid(eax:0x%x,ecx:0x%x)\n", i, j);
abort();
}
c = &entries[cpuid_i++];
}
break;
default:
c->function = i;
c->flags = 0;
cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);
if (!c->eax && !c->ebx && !c->ecx && !c->edx) {
/*
* KVM already returns all zeroes if a CPUID entry is missing,
* so we can omit it and avoid hitting KVM's 80-entry limit.
*/
cpuid_i--;
}
break;
}
}
/* Call Centaur's CPUID instructions if they are supported. */
if (env->cpuid_xlevel2 > 0) {
cpu_x86_cpuid(env, 0xC0000000, 0, &limit, &unused, &unused, &unused);
for (i = 0xC0000000; i <= limit; i++) {
if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
fprintf(stderr, "unsupported xlevel2 value: 0x%x\n", limit);
abort();
}
c = &entries[cpuid_i++];
c->function = i;
c->flags = 0;
cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);
}
}
return cpuid_i;
}
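
Factoring the table construction out of kvm_arch_init_vcpu() lets another caller build the same CPUID list into its own buffer; presumably the TDX init path in the suppressed tdx.c diff does exactly that. A sketch under that assumption (the function name is hypothetical):

static struct kvm_cpuid2 *build_cpuid_table_sketch(CPUX86State *env)
{
    struct kvm_cpuid2 *cpuid;

    /* allocate the flexible-array buffer at KVM's entry limit */
    cpuid = g_malloc0(sizeof(*cpuid) +
                      KVM_MAX_CPUID_ENTRIES * sizeof(struct kvm_cpuid_entry2));
    cpuid->nent = kvm_x86_arch_cpuid(env, cpuid->entries, 0);
    return cpuid;
}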
int kvm_arch_init_vcpu(CPUState *cs)
{
struct {
@@ -1715,8 +1988,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
X86CPU *cpu = X86_CPU(cs);
CPUX86State *env = &cpu->env;
uint32_t limit, i, j, cpuid_i;
uint32_t unused;
uint32_t cpuid_i;
struct kvm_cpuid_entry2 *c;
uint32_t signature[3];
int kvm_base = KVM_CPUID_SIGNATURE;
@@ -1869,8 +2141,6 @@ int kvm_arch_init_vcpu(CPUState *cs)
c->edx = env->features[FEAT_KVM_HINTS];
}
cpu_x86_cpuid(env, 0, 0, &limit, &unused, &unused, &unused);
if (cpu->kvm_pv_enforce_cpuid) {
r = kvm_vcpu_enable_cap(cs, KVM_CAP_ENFORCE_PV_FEATURE_CPUID, 0, 1);
if (r < 0) {
@@ -1881,227 +2151,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
}
}
for (i = 0; i <= limit; i++) {
if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
fprintf(stderr, "unsupported level value: 0x%x\n", limit);
abort();
}
c = &cpuid_data.entries[cpuid_i++];
switch (i) {
case 2: {
/* Keep reading function 2 till all the input is received */
int times;
c->function = i;
c->flags = KVM_CPUID_FLAG_STATEFUL_FUNC |
KVM_CPUID_FLAG_STATE_READ_NEXT;
cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);
times = c->eax & 0xff;
for (j = 1; j < times; ++j) {
if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
fprintf(stderr, "cpuid_data is full, no space for "
"cpuid(eax:2):eax & 0xf = 0x%x\n", times);
abort();
}
c = &cpuid_data.entries[cpuid_i++];
c->function = i;
c->flags = KVM_CPUID_FLAG_STATEFUL_FUNC;
cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);
}
break;
}
case 0x1f:
if (env->nr_dies < 2) {
break;
}
/* fallthrough */
case 4:
case 0xb:
case 0xd:
for (j = 0; ; j++) {
if (i == 0xd && j == 64) {
break;
}
if (i == 0x1f && j == 64) {
break;
}
c->function = i;
c->flags = KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
c->index = j;
cpu_x86_cpuid(env, i, j, &c->eax, &c->ebx, &c->ecx, &c->edx);
if (i == 4 && c->eax == 0) {
break;
}
if (i == 0xb && !(c->ecx & 0xff00)) {
break;
}
if (i == 0x1f && !(c->ecx & 0xff00)) {
break;
}
if (i == 0xd && c->eax == 0) {
continue;
}
if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
fprintf(stderr, "cpuid_data is full, no space for "
"cpuid(eax:0x%x,ecx:0x%x)\n", i, j);
abort();
}
c = &cpuid_data.entries[cpuid_i++];
}
break;
case 0x7:
case 0x12:
for (j = 0; ; j++) {
c->function = i;
c->flags = KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
c->index = j;
cpu_x86_cpuid(env, i, j, &c->eax, &c->ebx, &c->ecx, &c->edx);
if (j > 1 && (c->eax & 0xf) != 1) {
break;
}
if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
fprintf(stderr, "cpuid_data is full, no space for "
"cpuid(eax:0x12,ecx:0x%x)\n", j);
abort();
}
c = &cpuid_data.entries[cpuid_i++];
}
break;
case 0x14:
case 0x1d:
case 0x1e: {
uint32_t times;
c->function = i;
c->index = 0;
c->flags = KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);
times = c->eax;
for (j = 1; j <= times; ++j) {
if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
fprintf(stderr, "cpuid_data is full, no space for "
"cpuid(eax:0x%x,ecx:0x%x)\n", i, j);
abort();
}
c = &cpuid_data.entries[cpuid_i++];
c->function = i;
c->index = j;
c->flags = KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
cpu_x86_cpuid(env, i, j, &c->eax, &c->ebx, &c->ecx, &c->edx);
}
break;
}
default:
c->function = i;
c->flags = 0;
cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);
if (!c->eax && !c->ebx && !c->ecx && !c->edx) {
/*
* KVM already returns all zeroes if a CPUID entry is missing,
* so we can omit it and avoid hitting KVM's 80-entry limit.
*/
cpuid_i--;
}
break;
}
}
if (limit >= 0x0a) {
uint32_t eax, edx;
cpu_x86_cpuid(env, 0x0a, 0, &eax, &unused, &unused, &edx);
has_architectural_pmu_version = eax & 0xff;
if (has_architectural_pmu_version > 0) {
num_architectural_pmu_gp_counters = (eax & 0xff00) >> 8;
/* Shouldn't be more than 32, since that's the number of bits
* available in EBX to tell us _which_ counters are available.
* Play it safe.
*/
if (num_architectural_pmu_gp_counters > MAX_GP_COUNTERS) {
num_architectural_pmu_gp_counters = MAX_GP_COUNTERS;
}
if (has_architectural_pmu_version > 1) {
num_architectural_pmu_fixed_counters = edx & 0x1f;
if (num_architectural_pmu_fixed_counters > MAX_FIXED_COUNTERS) {
num_architectural_pmu_fixed_counters = MAX_FIXED_COUNTERS;
}
}
}
}
cpu_x86_cpuid(env, 0x80000000, 0, &limit, &unused, &unused, &unused);
for (i = 0x80000000; i <= limit; i++) {
if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
fprintf(stderr, "unsupported xlevel value: 0x%x\n", limit);
abort();
}
c = &cpuid_data.entries[cpuid_i++];
switch (i) {
case 0x8000001d:
/* Query for all AMD cache information leaves */
for (j = 0; ; j++) {
c->function = i;
c->flags = KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
c->index = j;
cpu_x86_cpuid(env, i, j, &c->eax, &c->ebx, &c->ecx, &c->edx);
if (c->eax == 0) {
break;
}
if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
fprintf(stderr, "cpuid_data is full, no space for "
"cpuid(eax:0x%x,ecx:0x%x)\n", i, j);
abort();
}
c = &cpuid_data.entries[cpuid_i++];
}
break;
default:
c->function = i;
c->flags = 0;
cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);
if (!c->eax && !c->ebx && !c->ecx && !c->edx) {
/*
* KVM already returns all zeroes if a CPUID entry is missing,
* so we can omit it and avoid hitting KVM's 80-entry limit.
*/
cpuid_i--;
}
break;
}
}
/* Call Centaur's CPUID instructions if they are supported. */
if (env->cpuid_xlevel2 > 0) {
cpu_x86_cpuid(env, 0xC0000000, 0, &limit, &unused, &unused, &unused);
for (i = 0xC0000000; i <= limit; i++) {
if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
fprintf(stderr, "unsupported xlevel2 value: 0x%x\n", limit);
abort();
}
c = &cpuid_data.entries[cpuid_i++];
c->function = i;
c->flags = 0;
cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);
}
}
cpuid_i = kvm_x86_arch_cpuid(env, cpuid_data.entries, cpuid_i);
cpuid_data.cpuid.nent = cpuid_i;
if (((env->cpuid_version >> 8)&0xF) >= 6
@@ -2227,6 +2277,15 @@ int kvm_arch_init_vcpu(CPUState *cs)
return r;
}
int kvm_arch_pre_create_vcpu(CPUState *cpu, Error **errp)
{
if (is_tdx_vm()) {
return tdx_pre_create_vcpu(cpu, errp);
}
return 0;
}
int kvm_arch_destroy_vcpu(CPUState *cs)
{
X86CPU *cpu = X86_CPU(cs);
@@ -2514,6 +2573,17 @@ int kvm_arch_get_default_type(MachineState *ms)
return 0;
}
static int kvm_confidential_guest_init(MachineState *ms, Error **errp)
{
if (object_dynamic_cast(OBJECT(ms->cgs), TYPE_SEV_GUEST)) {
return sev_kvm_init(ms->cgs, errp);
} else if (object_dynamic_cast(OBJECT(ms->cgs), TYPE_TDX_GUEST)) {
return tdx_kvm_init(ms, errp);
}
return 0;
}
int kvm_arch_init(MachineState *ms, KVMState *s)
{
uint64_t identity_base = 0xfffbc000;
@@ -2523,18 +2593,12 @@ int kvm_arch_init(MachineState *ms, KVMState *s)
Error *local_err = NULL;
/*
* Initialize SEV context, if required
* Initialize confidential guest (SEV/TDX) context, if required
*
* If no memory encryption is requested (ms->cgs == NULL) this is
* a no-op.
*
* It's also a no-op if a non-SEV confidential guest support
* mechanism is selected. SEV is the only mechanism available to
* select on x86 at present, so this doesn't arise, but if new
* mechanisms are supported in future (e.g. TDX), they'll need
* their own initialization either here or elsewhere.
* It's also a no-op if a confidential guest support mechanism
* other than SEV or TDX is selected.
*/
ret = sev_kvm_init(ms->cgs, &local_err);
ret = kvm_confidential_guest_init(ms, &local_err);
if (ret < 0) {
error_report_err(local_err);
return ret;
@@ -2997,6 +3061,11 @@ void kvm_put_apicbase(X86CPU *cpu, uint64_t value)
{
int ret;
/* TODO: Allow accessing guest state for debug TDs. */
if (is_tdx_vm()) {
return;
}
ret = kvm_put_one_msr(cpu, MSR_IA32_APICBASE, value);
assert(ret == 1);
}
@@ -3215,32 +3284,34 @@ static void kvm_init_msrs(X86CPU *cpu)
CPUX86State *env = &cpu->env;
kvm_msr_buf_reset(cpu);
if (has_msr_arch_capabs) {
kvm_msr_entry_add(cpu, MSR_IA32_ARCH_CAPABILITIES,
env->features[FEAT_ARCH_CAPABILITIES]);
}
if (has_msr_core_capabs) {
kvm_msr_entry_add(cpu, MSR_IA32_CORE_CAPABILITY,
env->features[FEAT_CORE_CAPABILITY]);
}
if (!is_tdx_vm()) {
if (has_msr_arch_capabs) {
kvm_msr_entry_add(cpu, MSR_IA32_ARCH_CAPABILITIES,
env->features[FEAT_ARCH_CAPABILITIES]);
}
if (has_msr_perf_capabs && cpu->enable_pmu) {
kvm_msr_entry_add_perf(cpu, env->features);
if (has_msr_core_capabs) {
kvm_msr_entry_add(cpu, MSR_IA32_CORE_CAPABILITY,
env->features[FEAT_CORE_CAPABILITY]);
}
if (has_msr_perf_capabs && cpu->enable_pmu) {
kvm_msr_entry_add_perf(cpu, env->features);
}
/*
* Older kernels do not include VMX MSRs in KVM_GET_MSR_INDEX_LIST, but
* all kernels with MSR features should have them.
*/
if (kvm_feature_msrs && cpu_has_vmx(env)) {
kvm_msr_entry_add_vmx(cpu, env->features);
}
}
if (has_msr_ucode_rev) {
kvm_msr_entry_add(cpu, MSR_IA32_UCODE_REV, cpu->ucode_rev);
}
/*
* Older kernels do not include VMX MSRs in KVM_GET_MSR_INDEX_LIST, but
* all kernels with MSR features should have them.
*/
if (kvm_feature_msrs && cpu_has_vmx(env)) {
kvm_msr_entry_add_vmx(cpu, env->features);
}
assert(kvm_buf_set_msrs(cpu) == 0);
}
@@ -4550,6 +4621,11 @@ int kvm_arch_put_registers(CPUState *cpu, int level)
assert(cpu_is_stopped(cpu) || qemu_cpu_is_self(cpu));
/* TODO: Allow accessing guest state for debug TDs. */
if (is_tdx_vm()) {
return 0;
}
/*
* Put MSR_IA32_FEATURE_CONTROL first, this ensures the VM gets out of VMX
* root operation upon vCPU reset. kvm_put_msr_feature_control() should also
@@ -4650,6 +4726,12 @@ int kvm_arch_get_registers(CPUState *cs)
if (ret < 0) {
goto out;
}
/* TODO: Allow accessing guest state for debug TDs. */
if (is_tdx_vm()) {
return 0;
}
ret = kvm_getput_regs(cpu, 0);
if (ret < 0) {
goto out;
@@ -5350,6 +5432,15 @@ int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
ret = kvm_xen_handle_exit(cpu, &run->xen);
break;
#endif
case KVM_EXIT_TDX:
if (!is_tdx_vm()) {
error_report("KVM: get KVM_EXIT_TDX for a non-TDX VM.");
ret = -1;
break;
}
tdx_handle_exit(cpu, &run->tdx);
ret = 0;
break;
default:
fprintf(stderr, "KVM: unknown exit reason %d\n", run->exit_reason);
ret = -1;
@@ -5602,7 +5693,7 @@ bool kvm_has_waitpkg(void)
bool kvm_arch_cpu_check_are_resettable(void)
{
return !sev_es_enabled();
return !sev_es_enabled() && !is_tdx_vm();
}
#define ARCH_REQ_XCOMP_GUEST_PERM 0x1025


@@ -13,6 +13,8 @@
#include "sysemu/kvm.h"
#define KVM_MAX_CPUID_ENTRIES 100
#ifdef CONFIG_KVM
#define kvm_pit_in_kernel() \
@@ -22,6 +24,9 @@
#define kvm_ioapic_in_kernel() \
(kvm_irqchip_in_kernel() && !kvm_irqchip_is_split())
uint32_t kvm_x86_arch_cpuid(CPUX86State *env, struct kvm_cpuid_entry2 *entries,
uint32_t cpuid_i);
#else
#define kvm_pit_in_kernel() 0
@@ -37,6 +42,7 @@ bool kvm_hv_vpindex_settable(void);
bool kvm_enable_sgx_provisioning(KVMState *s);
bool kvm_hyperv_expand_features(X86CPU *cpu, Error **errp);
int kvm_get_vm_type(MachineState *ms, const char *vm_type);
void kvm_arch_reset_vcpu(X86CPU *cs);
void kvm_arch_after_reset_vcpu(X86CPU *cpu);
void kvm_arch_do_init_vcpu(X86CPU *cs);


@@ -9,6 +9,8 @@ i386_kvm_ss.add(when: 'CONFIG_XEN_EMU', if_true: files('xen-emu.c'))
i386_kvm_ss.add(when: 'CONFIG_SEV', if_false: files('sev-stub.c'))
i386_kvm_ss.add(when: 'CONFIG_TDX', if_true: files('tdx.c'), if_false: files('tdx-stub.c'))
i386_system_ss.add(when: 'CONFIG_HYPERV', if_true: files('hyperv.c'), if_false: files('hyperv-stub.c'))
i386_system_ss.add_all(when: 'CONFIG_KVM', if_true: i386_kvm_ss)


@@ -0,0 +1,23 @@
#include "qemu/osdep.h"
#include "tdx.h"
int tdx_kvm_init(MachineState *ms, Error **errp)
{
return -EINVAL;
}
int tdx_pre_create_vcpu(CPUState *cpu, Error **errp)
{
return -EINVAL;
}
int tdx_parse_tdvf(void *flash_ptr, int size)
{
return -EINVAL;
}
void tdx_handle_exit(X86CPU *cpu, struct kvm_tdx_exit *tdx_exit)
{
abort();
}

target/i386/kvm/tdx.c (new file, 1612 lines): diff suppressed because it is too large.

target/i386/kvm/tdx.h (new file, 72 lines):

@@ -0,0 +1,72 @@
#ifndef QEMU_I386_TDX_H
#define QEMU_I386_TDX_H
#ifndef CONFIG_USER_ONLY
#include CONFIG_DEVICES /* CONFIG_TDX */
#endif
#include <linux/kvm.h>
#include "exec/confidential-guest-support.h"
#include "hw/i386/tdvf.h"
#include "io/channel-socket.h"
#include "sysemu/kvm.h"
#define TYPE_TDX_GUEST "tdx-guest"
#define TDX_GUEST(obj) OBJECT_CHECK(TdxGuest, (obj), TYPE_TDX_GUEST)
typedef struct TdxGuestClass {
ConfidentialGuestSupportClass parent_class;
} TdxGuestClass;
enum TdxRamType {
TDX_RAM_UNACCEPTED,
TDX_RAM_ADDED,
};
typedef struct TdxRamEntry {
uint64_t address;
uint64_t length;
enum TdxRamType type;
} TdxRamEntry;
typedef struct TdxGuest {
ConfidentialGuestSupport parent_obj;
QemuMutex lock;
bool initialized;
uint64_t attributes; /* TD attributes */
char *mrconfigid; /* base64 encoded sha384 digest */
char *mrowner; /* base64 encoded sha384 digest */
char *mrownerconfig; /* base64 encoded sha384 digest */
TdxFirmware tdvf;
MemoryRegion *tdvf_region;
uint32_t nr_ram_entries;
TdxRamEntry *ram_entries;
/* runtime state */
int event_notify_interrupt;
uint32_t event_notify_apic_id;
/* GetQuote */
int quote_generation_num;
SocketAddress *quote_generation;
} TdxGuest;
#ifdef CONFIG_TDX
bool is_tdx_vm(void);
#else
#define is_tdx_vm() 0
#endif /* CONFIG_TDX */
int tdx_kvm_init(MachineState *ms, Error **errp);
void tdx_get_supported_cpuid(uint32_t function, uint32_t index, int reg,
uint32_t *ret);
int tdx_pre_create_vcpu(CPUState *cpu, Error **errp);
void tdx_set_tdvf_region(MemoryRegion *tdvf_region);
int tdx_parse_tdvf(void *flash_ptr, int size);
void tdx_handle_exit(X86CPU *cpu, struct kvm_tdx_exit *tdx_exit);
#endif /* QEMU_I386_TDX_H */
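
Because the non-CONFIG_TDX build defines is_tdx_vm() as the constant 0, every TDX branch in common code compiles away when TDX support is disabled. The CONFIG_TDX implementation lives in the suppressed tdx.c diff; a plausible sketch of it, assuming a singleton set when the tdx-guest object is bound to the machine:

/* Sketch only; not the code from the suppressed tdx.c diff. */
static TdxGuest *tdx_guest;

bool is_tdx_vm(void)
{
    return !!tdx_guest;
}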


@@ -39,7 +39,6 @@
#include "hw/i386/pc.h"
#include "exec/address-spaces.h"
#define TYPE_SEV_GUEST "sev-guest"
OBJECT_DECLARE_SIMPLE_TYPE(SevGuestState, SEV_GUEST)


@@ -20,6 +20,8 @@
#include "exec/confidential-guest-support.h"
#define TYPE_SEV_GUEST "sev-guest"
#define SEV_POLICY_NODBG 0x1
#define SEV_POLICY_NOKS 0x2
#define SEV_POLICY_ES 0x4